diff --git "a/related_53K/test_related_long_2404.18065v1.json" "b/related_53K/test_related_long_2404.18065v1.json" new file mode 100644--- /dev/null +++ "b/related_53K/test_related_long_2404.18065v1.json" @@ -0,0 +1,8364 @@ +[ + { + "url": "http://arxiv.org/abs/2404.18065v1", + "title": "Grounded Compositional and Diverse Text-to-3D with Pretrained Multi-View Diffusion Model", + "abstract": "In this paper, we propose an effective two-stage approach named\nGrounded-Dreamer to generate 3D assets that can accurately follow complex,\ncompositional text prompts while achieving high fidelity by using a pre-trained\nmulti-view diffusion model. Multi-view diffusion models, such as MVDream, have\nshown to generate high-fidelity 3D assets using score distillation sampling\n(SDS). However, applied naively, these methods often fail to comprehend\ncompositional text prompts, and may often entirely omit certain subjects or\nparts. To address this issue, we first advocate leveraging text-guided 4-view\nimages as the bottleneck in the text-to-3D pipeline. We then introduce an\nattention refocusing mechanism to encourage text-aligned 4-view image\ngeneration, without the necessity to re-train the multi-view diffusion model or\ncraft a high-quality compositional 3D dataset. We further propose a hybrid\noptimization strategy to encourage synergy between the SDS loss and the sparse\nRGB reference images. Our method consistently outperforms previous\nstate-of-the-art (SOTA) methods in generating compositional 3D assets,\nexcelling in both quality and accuracy, and enabling diverse 3D from the same\ntext prompt.", + "authors": "Xiaolong Li, Jiawei Mo, Ying Wang, Chethan Parameshwara, Xiaohan Fei, Ashwin Swaminathan, CJ Taylor, Zhuowen Tu, Paolo Favaro, Stefano Soatto", + "published": "2024-04-28", + "updated": "2024-04-28", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "Distillation", + "gt": "2.1. Text-to-3D Early works like CLIP-Mesh [27] and DreamField first show the possibility of generating 3D assets with text prompts using 2D priors. Later Score Distillation Sampling (SDS) is proposed by Boole et al. in DreamFusion [29] and followed by many [6, 7, 17, 19, 25, 35, 36, 38, 39, 41, 44, 47]. The key idea is to supervise NeRF [3, 26] training using the supervision signals from a pre-trained and frozen large text-to-image model [31, 32], which can be considered as distilling deterministic generators (i.e. neural radius field) as student models for a pre-trained large-scale diffusion models conditioned on specific text [23]. Several techniques have been proposed to improve the SDS frame2 (a) Running into the Janus issue (b) Ignoring compositional priors Figure 3. Illustration of common failure patterns when naively combining sparse reference images to the SDS loss. Text prompt: \u201cTwo foxes fighting\u201d. When combining 4 reference images, (a) ends up generating a fox with two tails, while (b) misses the \u2018two\u2019 information. work including better viewpoint conditioning [1], better timestep scheduling [13], variational score distillation [44], accelerated NeRF representation [28], surface representation [33, 42, 46], improved efficiency using Gaussian Splatting [14], and improved fidelity [48]. However, these methods only demonstrate limited capability of generating compositional or diverse 3D assets. 2.2. 
2.2. Sparse Image-to-3D with Diffusion Models A number of image-to-3D methods in the existing literature attempt to synthesize 3D models from a single image, including RealFusion [24], Zero-1-to-3 [20], Magic123 [30], and Wonder3D [22]. Such methods are often used as part of a two-stage text-to-3D pipeline in which an image is first generated using a high-quality text-to-image model, and subsequently used as a reference for 3D object synthesis. Although such approaches bear the advantage of the user being able to \u201cselect\u201d the desired aesthetics before 3D synthesis commences, a single image is unable to fully capture multi-object compositions in 3D space in which objects may occlude each other, or when the prompt describes details that are impossible to capture from a single camera view, and it easily becomes a bottleneck in preserving 3D compositional accuracy in the full text-to-3D pipeline. 2.3. 3D Compositional Generation Previous works on compositional generation can be divided into two major tracks: layout- or depth-conditioned 3D scene generation, like [2, 4, 37, 43, 45], or iterative 3D editing to add new compositional attributes [8, 18, 49]. The first track usually relies on a domain-specific dataset to learn scene priors, and its focus differs from ours of generating compositional 3D assets. While the second track can only add compositional attributes to a single target object through iterative editing, our method can generate 3D assets with multiple compositional subjects or attributes in a single round of training.", "pre_questions": [], "main_content": "Introduction The quest to transform textual descriptions into vivid 3D models has seen remarkable advancements with methods like Score Jacobian Chaining [40], DreamFusion [29], and subsequent developments like [6, 19, 35]. However, the field still faces significant challenges in accurately rendering compositional prompts and ensuring diversity in the synthesized objects. Our research introduces a novel approach that not only addresses these challenges but also represents a paradigm shift in text-to-3D synthesis. We draw inspiration from the 2D domain, where the same pre-trained diffusion models can generate compositionally correct images under multiple attempts. These images can serve as robust references for 3D synthesis, leading us to the key question: can we leverage these text-conditioned, diverse, compositionally correct views to enhance 3D asset creation? A naive solution is to combine text-to-image (T2I) and single-image-to-3D pipelines, such as [21, 22, 30, 34]. However, they often result in inconsistent geometry or semantics, mainly due to inherent ambiguities and the domain gap when conditioning on a single image. This inconsistency is particularly evident in 3D assets where different views (front, side, rear) appear incongruent. In our approach, we advocate for establishing a more robust foundation for Text-to-3D synthesis by utilizing multi-view images. Instead of relying on a single view, which often leads to ambiguities and inconsistencies, we generate four spatially distinct views, each separated by 90 degrees. This multi-view approach effectively constrains and defines an object\u2019s shape and appearance, bridging the gap between 2D imagery and 3D modeling. By employing a pre-trained multi-view diffusion model [35], we can generate these four views from a text prompt in a multi-view consistent manner. 
This process of generating and utilizing multiple views provides a more reliable and \u201cgrounded\u201d basis for 3D reconstruction, as it reduces the uncertainty often associated with interpreting and extrapolating from a single image. However, akin to the limitations of Stable Diffusion [31] in generating compositional single-view images [12], advanced models like MVDream can also struggle to consistently produce four-view images that accurately capture the correct compositional subjects, attributes, and their spatial relationships. To counter this, our first stage employs an attention refocusing mechanism during the inference phase, inspired by [5]. This strategy ensures that each subject token from the text is precisely represented across all views, effectively addressing the ambiguities common in single-view reconstructions. By enhancing compositional accuracy without the need for re-training or fine-tuning the existing multi-view diffusion model, our method not only conserves resources but also leverages the rich knowledge embedded in the pre-trained text-guided diffusion model. This approach promotes greater adaptability and creativity in a wide range of scenarios. In the second stage of our method, we implement a nuanced, coarse-to-fine reconstruction process. This stage is characterized by an integration of sparse-view Neural Radiance Fields (NeRF) with text-guided diffusion priors. The process begins by establishing a coarse 3D structure using sparse-view NeRF, grounded in the compositional accuracy achieved in the first stage. We then refine the details of this structure by introducing text-guided diffusion priors. A critical component of this stage is the implementation of a delayed Score Distillation Sampling (SDS) loss, coupled with an aggressively annealed timestep schedule. This combination is designed to refine textures and geometries in a scene-agnostic manner, ensuring that the enhancements do not distort the compositional accuracy established earlier. It is important to note that a straightforward combination of sparse-view image supervision with existing Text-to-3D pipelines can lead to significant geometric distortions, such as the duplication of body parts (commonly referred to as the \u2018Janus\u2019 issue) or a complete disregard for compositional priors, resulting in a regression to the original MVDream Text-to-3D outputs. These common failure patterns, as illustrated in Fig. 3, underscore the need for a more sophisticated approach to integrating these elements. Our method\u2019s staged process, with its careful balance of NeRF and diffusion priors, is designed to avoid these pitfalls, ensuring a coherent and accurate 3D representation. By integrating our novel two-stage framework with a pre-trained multi-view diffusion model, we develop an effective pipeline for compositional Text-to-3D synthesis that accurately adheres to complex text prompts. Our method not only generates diverse 3D assets for the same text prompts by varying the sets of four-view images but also marks significant advancements in the field: \u2022 Innovative Two-Stage Framework: We introduce a new paradigm in Text-to-3D synthesis, where sparse-view images generated from a Text-to-Image (T2I) model serve as an intermediary, ensuring the preservation of compositional priors and facilitating diverse 3D generation. 
\u2022 Compositional Alignment via Test-Time Optimization: Our method includes a novel test-time optimization technique for multi-view generation, significantly improving text-image alignment, particularly in terms of compositional accuracy. \u2022 Hybrid Training Strategy for High-Fidelity 3D Assets: We propose a synergistic training approach that combines few-shot NeRF with Score Distillation Sampling (SDS)-based optimization. This strategy not only achieves high-fidelity, text-guided 3D asset generation but also maintains precise compositional relationships. 3.1. Attend-and-Excite Revisited Attend-and-Excite [5] was originally proposed to ease the catastrophic neglect issue in the Text-to-Image generation domain, in which the text-guided image diffusion model can fail to generate one or more subjects specified in the target text prompt. Attend-and-Excite [5] proposes a test-time optimization framework over the noisy latents, and encourages the cross-attention layers to attend to all subjects in the text during the iterative denoising process. The intuition lies in the cross-attention mechanism adopted to bring the text condition into image generation. At each timestep t, the text embeddings are fed into the cross-attention layers of the U-Net, and each latent feature over the feature grid performs an attention operation with all the text embeddings, resulting in an attention activation matrix per text token. The attention matrix can be reshaped to obtain a spatial map A_t^s \u2208 R^{H\u00d7W} per text token s. Intuitively, for a token to be manifested in the generated image, there should be at least one patch in its map with a high activation value. To guide such desired behavior, Attend-and-Excite introduces f(z_t) = L_att = max_{s\u2208S} L_s, with L_s = 1 - max(Gaussian(A_t^s)), where z_t is the noisy latent. Gaussian denotes applying Gaussian smoothing to the 2D activation map in order to cover a larger patch that can later emerge as the target object. Such a loss strengthens the activations of the most neglected subject token at the current timestep t. 3.2. Multi-View Diffusion Models MVDream is a recent effort that adapts the common Text-to-Image diffusion model to have multi-view consistency, and enables Janus-free and high-fidelity Text-to-3D. Given a set of noisy images x_t \u2208 R^{F\u00d7H\u00d7W\u00d7C}, a text prompt as condition y, and a set of extrinsic camera parameters c \u2208 R^{F\u00d716}, MVDream is trained to simultaneously denoise and generate multiple images x_0 \u2208 R^{F\u00d7H\u00d7W\u00d7C} that correspond to F different views of the same scene. At each step t, the predicted noise is \u03f5_\u03b8(x_t; y, c, t), where \u03b8 denotes the parameters of the latent U-Net. To inherit the generalizability of the 2D diffusion models, while also obtaining the capability of multi-view consistency, MVDream is fine-tuned from Stable Diffusion v2.1. However, MVDream also inherits the same issue as Stable Diffusion and can fail to generate compositionally correct 4-view images, and the lack of large-scale compositional scene-level 3D data for fine-tuning makes the issue more significant when applying it to Text-to-3D. 
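To make the Attend-and-Excite objective of Sec. 3.1 concrete, the following is a minimal PyTorch-style sketch of the loss and the test-time latent update (not the authors' code); the helper names (gaussian_blur, get_attn_maps) and the kernel size, sigma, and step size are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def gaussian_blur(attn_map: torch.Tensor, kernel_size: int = 3, sigma: float = 0.5) -> torch.Tensor:
    """Smooth a (H, W) attention map with a small Gaussian kernel."""
    half = kernel_size // 2
    coords = torch.arange(kernel_size, dtype=attn_map.dtype, device=attn_map.device) - half
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    kernel = torch.outer(g, g)
    kernel = (kernel / kernel.sum()).view(1, 1, kernel_size, kernel_size)
    return F.conv2d(attn_map[None, None], kernel, padding=half)[0, 0]

def attend_and_excite_loss(attn_maps: torch.Tensor, subject_tokens: list) -> torch.Tensor:
    """attn_maps: (H, W, num_tokens) cross-attention maps at timestep t.
    L_att = max_s (1 - max(Gaussian(A_t^s))): penalise the most neglected subject token."""
    losses = [1.0 - gaussian_blur(attn_maps[..., s]).max() for s in subject_tokens]
    return torch.stack(losses).max()

def update_latent(z_t: torch.Tensor, get_attn_maps, subject_tokens: list, step_size: float):
    """One test-time gradient step on the noisy latent z_t. `get_attn_maps(z_t)` must run the
    frozen denoiser on z_t and return its cross-attention maps, so the loss is differentiable
    w.r.t. z_t; the model weights are never updated."""
    z_t = z_t.detach().requires_grad_(True)
    loss = attend_and_excite_loss(get_attn_maps(z_t), subject_tokens)
    grad = torch.autograd.grad(loss, z_t)[0]
    return (z_t - step_size * grad).detach(), float(loss)
```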
4. Method To tackle the specific challenges of generating compositionally correct 3D assets while achieving diversity, we propose a novel two-stage approach that incorporates an attention refocusing mechanism and sparse-view guidance in a unified framework, as illustrated in Fig. 4. In Sec. 4.1, we detail our method for generating compositionally accurate four-view images. This stage focuses on ensuring that the subjects within these images are not only compositionally correct but also maintain the correct spatial relationships. Following this, in Sec. 4.2, we explore the integration of these consistent reference images with a pre-trained multi-view diffusion model. Here, we examine the optimal combination of sparse-view reference images and the Score Distillation Sampling (SDS) loss to achieve high-fidelity 3D asset generation. 4.1. Attention Refocusing for Accurate Compositional 4-View Generation Despite the success of Attend-and-Excite in Text-to-Image generation with multiple subjects, it is non-trivial to extend attention refocusing to Text-to-3D generation to optimize the target NeRF. Instead of optimizing a single-view latent, we now need to jointly optimize the 4-view latents without breaking the multi-view consistency, and we want to avoid latent updates that push the latents out of distribution. While it looks appealing to directly train a NeRF with a combined attention refocusing loss and SDS loss, this poses a more challenging optimization problem, since the attention refocusing loss does not optimize the NeRF directly but operates on the noisy latents rendered from the NeRF. The asynchronous NeRF updates can easily violate the in-distribution assumption that the attention refocusing loss makes about the noisy latents. As we show in the ablation study in Sec. 6, such an attempt can easily lead to sub-optimal solutions and significantly increase the convergence time. To design a more effective paradigm for composition control in Text-to-3D, we thus first adapt the attention refocusing mechanism to compositionally correct 4-view generation, and then use the sparse-view images as additional compositional constraints in the second-stage SDS-based NeRF training. When using the pre-trained multi-view diffusion model to generate 4-view images following a text prompt, we obtain an attention activation map A_t^s \u2208 R^{F\u00d7H\u00d7W} per text token s, where F is the number of frames. Instead of naively updating each per-view latent using a per-view L_att, we found that first aggregating the attention maps across the 4 views with an average operation tends to yield more reasonable 4-view images. The final loss is L_att = max_{s\u2208S} (1 - max(mean_v(Gaussian(A_t^s[v, :, :])))). (1) Our final algorithm for applying attention refocusing in multi-view generation is given in Algorithm 1 below. Compared to Attend-and-Excite, we also perform such optimization at more timesteps, especially in the early stage, instead of only a few selected steps, which can still be done in minutes. 
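Before the full per-step procedure in Algorithm 1, here is a compact sketch of how Eq. (1) can be computed on the joint 4-view latents and used for one latent update; `denoise_fn`, the threshold handling, and the step size are placeholders for illustration, and `gaussian_blur` refers to the helper from the earlier Attend-and-Excite sketch.

```python
import torch

def multiview_refocus_loss(attn_maps: torch.Tensor, subject_tokens: list) -> torch.Tensor:
    """attn_maps: (V, H, W, num_tokens) cross-attention maps of the V=4 views at timestep t.
    Mirrors Eq. (1)/Algorithm 1: average each subject token's map over the views, smooth it,
    then penalise the most neglected token."""
    losses = []
    for s in subject_tokens:
        fused = attn_maps[..., s].mean(dim=0)          # (H, W) map averaged over the 4 views
        losses.append(1.0 - gaussian_blur(fused).max())
    return torch.stack(losses).max()

def refine_multiview_latents(z_t: torch.Tensor, denoise_fn, subject_tokens: list,
                             step_size: float, threshold: float = None):
    """z_t: (V, C, h, w) joint noisy latents of the 4 views. `denoise_fn(z_t)` is assumed to run
    the frozen multi-view diffusion model and return (noise_pred, attn_maps); only the latents
    are updated, never the model, so multi-view consistency is handled by the model itself."""
    z_t = z_t.detach().requires_grad_(True)
    _, attn_maps = denoise_fn(z_t)
    loss = multiview_refocus_loss(attn_maps, subject_tokens)
    grad = torch.autograd.grad(loss, z_t)[0]
    z_t = (z_t - step_size * grad).detach()
    # At selected timesteps, Algorithm 1 repeats this update until the loss drops below 1 - T_t.
    needs_refinement = threshold is not None and float(loss) > 1.0 - threshold
    return z_t, needs_refinement
```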
Algorithm 1 A Single Denoising Step on Compositional 4-View Generation. Input: a text prompt y, 4-view camera poses c, a set of subject token indices S, a timestep t, a set of iterations for refinement {t_1, ..., t_k}, a set of thresholds {T_1, ..., T_k}, and a trained multi-view diffusion model SD_mv. Output: a noised latent z_{t-1} for the next timestep. 1: _, A_t \u2190 SD_mv(z_t, y, c, t) 2: A_t \u2190 Softmax(A_t - \u27e8sot\u27e9) 3: for s \u2208 S do 4: A_t^s \u2190 Mean_v(A_t[v, :, :, s]) 5: A_t^s \u2190 Gaussian(A_t^s) 6: L_s \u2190 1 - max(A_t^s) 7: end for 8: L \u2190 max_s(L_s) 9: z'_t \u2190 z_t - \u03b1_t \u00b7 \u2207_{z_t}L 10: if t \u2208 {t_1, ..., t_k} then 11: if L > 1 - T_t then 12: z_t \u2190 z'_t 13: go to Step 1 14: end if 15: end if 16: z_{t-1}, _ \u2190 SD_mv(z'_t, y, c, t) 17: return z_{t-1} 4.2. Coarse-to-Fine Synergistic Reconstruction With Diffusion Priors We adopt an optimization-based reconstruction framework that leverages both the 4-view reference images and a pre-trained multi-view diffusion model for priors-augmented reconstruction. To avoid running into the failure patterns mentioned above, we hypothesize that the key lies in designing an effective training strategy that can create synergy between the two different supervision signals. Our key insight is that the rough 4 reference views give coarse but nearly complete information about the geometry of the target scene, especially about the compositional subjects, their interaction poses, and their spatial arrangement. These can provide a coarse initialization to the later diffusion-based 3D distillation process. Figure 4. Illustration of the two-stage pipeline with our Grounded-Dreamer. Given a text prompt, we first generate compositionally correct 4-view images using iterative latent optimization at selected DDIM sampling steps. The 4-view reference images together with the masks are combined with the score distillation sampling (SDS) loss in our hybrid training strategy, which will create high-fidelity 3D assets while preserving the compositional priors accurately. Early few-shot NeRF training At early stages, we hypothesize that the ground-truth reference images can be more informative and stable than the SDS loss with the pre-trained MVDream diffusion model at large timesteps. Thus we introduce sparse-view NeRF training to establish the coarse geometry and texture. We adopt the hierarchical hash-grid MLP introduced in Instant-NGP [28] as our learnable NeRF representation, and we can simply rely on RGB and mask reconstruction losses, which are sufficient to obtain a coarse NeRF representation. The reconstruction loss is defined as L_img = L_RGB + L_mask. 3D distillation with warm-start SDS loss When the rough compositional geometry and the associated texture emerge from the early few-shot NeRF training, we marry sparse-view NeRF with SDS-based 3D distillation to enable high-fidelity 3D generation while preserving the compositional priors. The key idea is to bring in additional supervision from multiple unobserved viewpoints. In addition to the 4 fixed observed viewpoints that already have roughly correct ground-truth images, we randomly sample unobserved views and render multi-view images, whose gradients come from the SDS loss with a pre-trained multi-view diffusion model under text guidance. The total loss for NeRF training is defined as: L_total = \u03bb_img(i) \u00b7 L_img^{fixed views} + \u03bb_SDS(i) \u00b7 L_SDS^{random views} (2), with L_SDS^{random views} = E_{t,c,\u03f5}[\u03c9(t) ||\u03f5_\u03b8(x_t; y, c, t(i)) - \u03f5||_2^2] (3). When the NeRF representation of the target scene is optimized to a certain level, further relying on the poor-quality 4-view reference images may hinder the creation of high-fidelity 3D assets. Hence, we choose to gradually reduce the weight of the image reconstruction loss to 0, while increasing the weight of the SDS loss. We have t(i) ~ U(T_min(i), T_max(i)). In prior works [13, 35, 48] a time-annealing approach is implemented, where T_max(i_start) is set close to the total number of timesteps. We found that such an implementation can lead to large variations in the SDS loss and drastic content changes in the NeRF output towards entirely different directions. It can cause alignment issues with the compositional priors drawn from the reference images to resurface. To solve this and preserve composition, the key modification we make is to set the initial T_max(i_start) as small as 680, so that the SDS loss can be leveraged to add more details and refine the NeRF to high fidelity. Specifically, T_max(i) = c_1 + (c_2 - c_1) \u00b7 (i - a)/(b - a). (4) The above hierarchical training design and modification to timestep annealing are simple but critical. They work effectively to let the SDS loss and the sparse image supervision create synergy with each other; an example is shown in Fig. 5. We will next demonstrate this through experiments and ablations. 
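The following is a small Python sketch of the warm-start schedules above (Eqs. 2 and 4); the interpolation end points mirror the implementation details reported later in Sec. 5.1 and are hyper-parameters rather than fixed constants.

```python
import random

def linear_anneal(step: int, total_steps: int, start: float, end: float) -> float:
    """T(i) = c1 + (c2 - c1) * (i - a) / (b - a), here with a = 0 and b = total_steps, as in Eq. (4)."""
    frac = min(max(step / max(total_steps, 1), 0.0), 1.0)
    return start + (end - start) * frac

def sample_sds_timestep(step: int, total_steps: int) -> float:
    """Warm-started timestep sampling t(i) ~ U(T_min(i), T_max(i)). The end points
    (T_max: 680 -> 500, T_min: 380 -> 20) follow the reported setting and are tunable."""
    t_max = linear_anneal(step, total_steps, 680.0, 500.0)
    t_min = linear_anneal(step, total_steps, 380.0, 20.0)
    return random.uniform(t_min, t_max)

def loss_weights(step: int, total_steps: int):
    """Weights of Eq. (2): fade out the 4-view reconstruction term while ramping up the
    SDS term (reported values: 1000 -> 100 and 0.025 -> 0.25), applied only after the
    initial steps of pure few-shot NeRF fitting."""
    lambda_img = linear_anneal(step, total_steps, 1000.0, 100.0)
    lambda_sds = linear_anneal(step, total_steps, 0.025, 0.25)
    return lambda_img, lambda_sds

# total_loss = lambda_img * loss_img_fixed_views + lambda_sds * loss_sds_random_views
```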
5. Experiments 5.1. Experimental Setup Implementation details We use the same settings for our method on all the text prompts. The early few-shot NeRF is trained for 200 steps, and then we add the SDS loss by setting T_max(i) to linearly reduce from 680 to 500, while T_min(i) is linearly reduced from 380 to 20. During the first 5000 steps we train the NeRF at 64x64 resolution, and switch to 256x256 for the latter 5000 steps for refinement. We gradually reduce the weight of the image reconstruction loss from 1000 to 100, while the weight of the SDS loss is increased from 0.025 to 0.25. For most of the baselines, we adopt the default implementations within [9]; we run Wonder3D [22] using their released code, and use the scripts provided by Magic123 [30] for background removal. All the experiments are conducted on Nvidia A100 GPUs. Prompts set To cover various scenarios of compositional Text-to-3D, we select and categorize our text prompts into (1) compositional-objects, which involve multiple subjects arranged in specific spatial relationships, e.g., \u201ca green spoon on a red cake in a yellow tray\u201d; and (2) compositional-animals, which contain scenarios of animal-object interaction or specific activities, e.g., \u201cAn artist is painting on a blank canvas\u201d or \u201ctwo foxes fighting\u201d. We select 50 for each subgroup based on existing text prompts from [29] and [8], with 100 text prompts in total. Baselines We consider MVDream [35] as our baseline for multi-view generation. For text-to-3D generation, we adopt recent SOTA Text-to-3D methods such as Magic3D [19], ProlificDreamer [44], and MVDream-ThreeStudio [35] as the baselines. To further illustrate the unique advantages of 4-view input, we compare with recent single-view-to-3D methods like Magic-123 [30] by first using the text to generate a proper single-view image. 
To demonstrate the unique benefits of our hierarchical NeRF optimization with text guided diffusion priors, we also compare the results with a recent SOTA image-conditioned multi-view diffusion model named Wonder3D [22], which is essentially a single-view-to-4-view-to-3D method. Wonder3D learns 3D priors that are capable of generating normal maps and RGB images on 3 additional viewpoints, however, they adopt a normal-assisted few-shot NeRF to get the final 3D assets. Metrics CLIP-based metrics might fail to measure the fine-grained correspondences between described objects and binding attributes, thus we only use CLIP R-Precision following [29], which measures the relative closeness between all generated images and their corresponding text prompts. Following a recent effort T 3Bench [10], we adopt a VQA-like pipeline by first using image-to-text models like BLIP-2 [16] to generate captions for rendered multi-view images, and then evaluate the alignment score between the compressed predicted captions and the given text prompt, named as T3 Score II. The purpose is to evaluate the capability of handling general text prompts with complex semantics in a fine-grained manner. We also conduct user study and add human preferences score on text-image alignment as additional metric. For measuring images realism, we compute the FID score following [11]. 5.2. Enhanced Compositional 4-view Synthesis We show both quantitative and qualitative results in Fig. 6 and Tab. 1. Our inference-stage editing method improves on the text-image alignment, while achieving comparable image realism. Method FID Score \u2193 T3 Score II (%) \u2191 MVDream 71.43 2.72 Ours 74.16 2.85 Table 1. Quantitative evaluation on 4-view generation. We run each method on 100 prompts with different random seeds, and compare the CLIP score of the generated images. Our inferencestage editing can generate more accurate images regarding the composition of different target subjects, while not breaking the multi-view consistency. 5.3. Compositional and Diverse 3D Generation We show quantitative and qualitative comparisons in Tab. 2 and Fig. 7 for text-guided 3D generation with multiple subjects present. As detailed in Tab. 2, our method consistently outperforms existing SOTA baselines considering the overall performance of text-image alignment, view quality and consistency. Specifically, our approach excels in the CLIP-RPrecision and T3 Score II metrics, indicative of superior performance in consistent text-guided 3D generation. In terms of view consistency, our method performs on par with MVDream while significantly exceeding all other models. Compared with MVDream, our method largely outperforms it for compositional generation. As illustrated in Fig. 7, MVDream frequently overlooks certain compositional elements, leading to incomplete or imprecise representations. In contrast, our method generates more compositionally complete views. Also thanks to the hierarchical optimization strategy, our method achieves high-fidelity 3D generation as evidenced by the FID score. ProlificDreamer, on the other hand, also achieves remarkable visual quality, producing sharp and highly detailed results, however it suffers from slow training speeds (refer to our supplementary material for an efficiency report) and is prone to issues such as 6 Figure 5. Illustration on the 2nd-stage training progress with Grounded-Dreamer . Here we are showing a fixed front-view rendering of the target NeRF at different optimization steps. 
Our method can gradually create high fidelity 3D assets while preserving the compositional priors accurately. Figure 6. 4-view generation, each pair uses the same random seed. Our inference-stage optimization encourages compositionally correct 4-view generation compared to the original MVDream. corrupted flat geometries or \u2018Janus\u2019 problems, as shown in the case of row 4. From Tab. 2 we can also see its inferior view consistency performance. Our method also largely outperforms single-image-to3D approaches like Magic-123 [30], Wonder3D [22] on the benchmark text prompts. While Magic-123 can effectively reconstruct the front view of objects, it struggles with side or back perspectives, leading to incorrect anatomy, like in the case \u201ca zoomed out DSLR photo of a chimpanzee holding a cup of hot coffee\u201d. Other recent works, such as Wonder3D, typically face challenges in reconstruction quality, particularly when relying solely on sparse-view images. In contrast, our method innovatively integrates text-guided natural image priors with sparse-view supervision. Our generated 3D assets can harvest the natural images priors from a pre-trained diffusion model, which not only enhances the quality of reconstruction but also ensures generalizability across various text prompts describing a wide range of 3D compositional subjects. The results demonstrate the capability of our method in achieving high-quality, composition7 (a) (b) (c) (d) (e) (f) Figure 7. Qualitative results comparison for compositional Text-to-3D. From top to down, the methods are: our Grounded-Dreamer, MVDream [35], Magic123 [30], Wonder3D [22], Magic3D [19], ProlificDreamer [44]. Our method generates more compositionally complete views with high quality. Text prompts: (a) \u201ca zoomed out DSLR photo of an adorable kitten lying next to a flower\u201d, (b) \u201ca zoomed out DSLR photo of a beagle eating a donut\u201d, (c) \u201ca zoomed out DSLR photo of a chimpanzee holding a cup of hot coffee\u201d, (d) \u201ca blue candle on a red cake in a yellow tray\u201d, (e) \u201ca lego tank with a golden gun and a red flying flag\u201d, (f) \u201ca model of a silver house with a golden roof beside an origami coconut tree\u201d. Method T3 Score II \u2191CLIP R-P.(%) \u2191Good alignment \u2191Freq. of Janus\u2193FID Score \u2193 Magic3D 2.29/5.0 27.10 27.10 58.88 137.45 ProlificDreamer 2.68/5.0 48.91 60.87 77.17 129.11 Magic-123 2.32/5.0 24.74 28.87 64.95 121.16 Wonder3D 1.96/5.0 20.22 35.60 21.25 129.93 MVDream 2.33/5.0 44.95 44.44 5.88 109.78 Ours 2.53/5.0 62.73 56.71 17.15 115.94 Table 2. Quantitative evaluation on 3D composition. We run each method on 100 prompts with same random seeds. Under \u201cGood alignment\u201d, for each method, we show the percentage of the generated 3D outputs that human reviewers annotate as aligning well with the text prompts. Under \u201cFreq. of Janus\u201d, we show the preference ratio of generated outputs for each model in terms of view consistency. ally accurate 3D reconstructions without compromising details and structural integrity. Compared to ProlificDreamer, our method can generate diverse 3D assets in a well time-bounded manner, as shown in Fig. 1. The diversity can be easily controlled with different pairs of edited 4-view images as additional guidance. 6. Ablation Study Method FID Score \u2193CLIP R-P. (%) \u2191GPU-hours \u2193 Ours-2-stage 115.94 62.73 1.91 Ours-1-stage a 114.68 48.18 3.90 Ours-1-stage b 112.45 40.91 5.25 Table 3. Ablation with 1-stage designs. 
1-stage design with attention refocusing loss We also implement and train two 1-stage variants without first generating 4-view images, each with balanced losses: a) we directly add the attention score loss to the SDS loss; b) we first update latents using the attention loss, then we use the updated noisy latents for the SDS loss. Tab. 3 show the ablation results. Compared to our 2-stage approach: 1) the 1-stage ones significantly increase the training time by two to three times; 2) stark performance drop on compositional accuracy; 3) corrupted shapes, like the tree mixes with the house in the 2nd row of Fig. 8. Different strategies for multi-view latent updates For example, if naively updating each view sequentially using Attend-and-Excite, we can end up with corrupted views as shown in Fig. 9. Our later mean operation across the 4 views 8 1-stage a 1-stage b Ours 2-stage Figure 8. Results visualization with different pipeline designs. Ours-naive Ours-final Figure 9. Examples on \u201ca pig wearing a backpack\u201d. latents is simple and already works sufficiently well on creating compositionally correct 4-view images. Effects of warm-start timestep in SDS loss One of the keys to our method\u2019s success is to set the initial timestep of SDS loss to be relatively smaller, so the SDS loss can guide NeRF towards adding more geometric and texture details, while preserving the compositional priors. As shown in Fig. 10, if following the default setting like [Tmin(t0), Tmax(t0)] = [0.98, 0.98] out of 1000, or[Tmin(t0), Tmax(t0)] = [0.74, 0.86], it can miss the compositional information like \u201ctwo\u201d in the first example, or \u201cbeside an origami coconut tree\u201d in the 5th example. [Tmin(t0), Tmax(t0)] (1) (2) (3) (4) (5) Reference views [0.98, 0.98] [0.74, 0.86] [0.62, 0.80] Figure 10. 3D generation under different initial timestep sampling range. We pick 5 text prompts, and visualize the reference 4-view images, and two side-view of the generated 3D assets with associated normal images attached to the top-left corner. Effects of backbone used in SDS loss To validate the effects of using pre-trained multi-view diffusion model, we can replace our Score Distillation Sampling (SDS) loss backbone with SD v2.1. The results, as depicted in the accompanying Tab. 4, demonstrate that using pre-trained multi-view diffusion model for SDS loss help consistently yield better textural quality. Method Freq. of Janus (%) \u2193FID Score \u2193 ours with SD 40.90 135.60 Ours 17.15 115.94 Table 4. Text-to-3D with different pretrained T2I models. 7. Conclusion In summary, our work introduces a novel two-stage framework for Text-to-3D synthesis, effectively overcoming challenges in compositional accuracy and diversity. The first stage leverages a multi-view diffusion model for generating spatially coherent views from text, while the second stage synergizes sparse-view NeRF with text-guided diffusion priors for refined 3D reconstruction. This approach not only enhances the fidelity and compositional integrity of 3D models from complex text prompts, but also paves the way for future explorations in seamless 2D-to-3D transitions and model versatility. Our method demonstrates a significant leap in Text-to-3D synthesis, offering a robust foundation for further advancements in this evolving field. 
9", + "additional_info": [ + [ + { + "url": "http://arxiv.org/abs/2404.14027v1", + "title": "OccFeat: Self-supervised Occupancy Feature Prediction for Pretraining BEV Segmentation Networks", + "abstract": "We introduce a self-supervised pretraining method, called OcFeat, for\ncamera-only Bird's-Eye-View (BEV) segmentation networks. With OccFeat, we\npretrain a BEV network via occupancy prediction and feature distillation tasks.\nOccupancy prediction provides a 3D geometric understanding of the scene to the\nmodel. However, the geometry learned is class-agnostic. Hence, we add semantic\ninformation to the model in the 3D space through distillation from a\nself-supervised pretrained image foundation model. Models pretrained with our\nmethod exhibit improved BEV semantic segmentation performance, particularly in\nlow-data scenarios. Moreover, empirical results affirm the efficacy of\nintegrating feature distillation with 3D occupancy prediction in our\npretraining approach.", + "authors": "Sophia Sirko-Galouchenko, Alexandre Boulch, Spyros Gidaris, Andrei Bursuc, Antonin Vobecky, Patrick P\u00e9rez, Renaud Marlet", + "published": "2024-04-22", + "updated": "2024-04-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Distillation", + "gt": "Camera-only BEV perception. BEV perception aims for a unified representation of the surrounding environment of a vehicle. BEV has recently arised as a prevailing paradigm for multi-camera perception systems for autonomous driving. Camera-based BEV models are typically composed of three parts: (i) an image encoder shared across cameras for extracting 2D features, (ii) a view-projection module for \u201clifting\u201d features in the 3D space to produce BEV features, and (iii) one or more task decoder modules that process BEV features towards addressing a task of interest, e.g., semantic segmentation, map prediction, 3D detection, etc. Among them, the view-projection has been in the spotlight of numerous works in this area in the past few years. The diversity of the approaches for view projection is impressive, ranging from purely geometric ones, e.g., using inverse perspective mapping with strong assumptions about the world [63] to projections completely learned from data, typically leveraging a cross-attention mechanism between image features (often imbued with 3D knowledge or priors) and learnable-queries from the projected space [6, 48, 49, 75, 83]. Currently, the most commonly used ones belong to one of the so-called \u201cpush\u201d or \u201cpull\u201d BEV projection approaches [24, 46]. The former are derived from the seminal Lift-Splat-Shoot (LSS) [58] that leverages depth uncertainty estimations from each view to project features in a shared BEV space. The predictive performance of LSS can be further improved with other supervision signals, e.g., depth from Lidar points [42, 61], stereo from time [41, 44, 71] or different depth parameterization [80]. Runtime speed can be significantly reduced thanks to custom pooling strategies [33, 34, 43, 51, 73, 84]. Pull approaches forego depth estimation and, instead, map 3D locations from the BEV space to the image space of the cameras. Then, they collect image features with deformable attention [45] or bilinear interpolation [24] and spread them in the 3D space filling it. Pull approaches have a simpler projection and have also been largely adopted and improved [14, 38, 78]. 
While predictive performance of BEV methods has been improved significantly in the last two years, very few methods deal with annotation-efficient learning beyond data augmentation strategies in the image or BEV space [34, 43]. Obtaining precise 3D annotations for BEV perception is a costly multi-stage labour-intensive process involving annotation of both point clouds and im2 ages from multiple cameras. In this work we propose a strategy to improve the per-annotation efficiency of different BEV models and showcase it on two types of viewprojections: SimpleBEV [24] and BEVFormer [45]. Self-supervised representation learning is a prominent paradigm that leverages unlabelled data towards producing useful representations (generalizable, robust, ready-touse) for different tasks of interest. Self-supervised learning (SSL) defines an annotation-free pretext task that is determined solely by raw data, with the aim of providing supervision signal to extract useful patterns from the data and to learn representations. SSL pretrained models are subsequently used off-the-shelf, probed or finetuned on different tasks of interest with limited amounts of labels, displaying better performance per-annotation compared to fully supervised counterparts [28]. In the image domain, a myriad of SSL pretext tasks have been proposed such as predicting perturbations incurred into an image [19, 20, 82], contrastive learning by separating similar from dissimilar views [15, 16, 26, 55], learning by clustering [10, 11], selfdistillation [2, 5, 12, 21, 23], or most recently maskedimage modeling objectives, particularly suitable for Vision Transformer architectures [4, 22, 27, 56, 85]. In spite of the success on curated internet images, e.g., ImageNet, SSL pretraining of image backbones on driving data is nontrivial, due to the high redundancy and class imbalance specific to driving data [13, 76]. However, the availability of multiple sensor types on the vehicle, such as Lidar, and the emergence of vision foundation models, opens the door to different strategies to acquire 3D and/or semantic knowledge into image encoders for driving perception, e.g., unsupervised semantic segmentation [68] or detection [65]. The synchronization of surround-view cameras and Lidar is also leveraged for pretraining Lidar networks with label-free knowledge distillation from pretrained image models leading to substantial performance gains in lowlabel regimes [50, 52, 59, 62] and generalization [59]. BEV pretraining and distillation. Pretraining is commonly used for perception in autonomous driving (2D/3D object detection, semantic segmentation) to boost performance and compensate for the scarcity of labeled data and task difficulty, requiring semantic and 3D geometry awareness. ImageNet or depth estimation pretraining are widely used [57] for monocular perception. For BEV perception most works start from a backbone pretrained for monocular 3D object detection [70]. The shared BEV space between cameras and Lidar enables new forms of pretraining by distilling 3D reasoning skills from Lidar networks into camera-based ones in order to compensate for potential loss of geometric information in the view-projection module [18, 30, 35, 36, 47, 72]. They take the form a teacher-student architecture, with the Lidar teacher network trained in a supervised manner on 3D annotations which are however costly to acquire. 
First approaches to pretrain camera-based BEV networks following the SSL paradigm with an annotation-free pretext task, such as occupancy estimation [54], forecasting Lidar point clouds [81], reconstructing 3D surfaces and RGB images [79], have emerged only very recently with promising results. However they focus mostly on purely geometric cues or rendering RGB pixels which can be sub-optimal for downstream perception performance [3] and potentially hinder the existing semantic knowledge in the image encoder. We propose to distill pretrained image features from DINOv2 [56] in the voxel space such that the produced BEV features are not only geometry aware, but also semantic aware. Closest to our work, is the recent POP-3D [69] that distills CLIP features [60] into a 3D occupancy prediction model [37] towards openvocabulary perception and not for pretraining.", + "pre_questions": [], + "main_content": "Introduction Camera-only bird\u2019s-eye-view (BEV) networks have gained significant interest in recent years within the field of autonomous driving perception [6, 24, 31, 34, 45, 58, 83]. The appeal of the BEV, or else top-view, is that it offers a unified space for various sensors, including surround-view cameras, Lidar, and radar [29, 51, 53], for both annotation and runtime perception purposes, and can serve as input for subsequent tasks in the driving pipeline, such as forecasting and planning [1, 8, 29, 31, 32]. Common tasks in the BEV space are semantic segmentation of objects [6, 24, 58, 83] and layouts [39, 40], as well as object detection [34, 45]. Our work specifically targets the camera-only BEV semantic segmentation task. *Work done at valeo.ai. \u2020Czech Institute of Informatics, Robotics and Cybernetics at the Czech Technical University in Prague. Figure 1. Performance comparison in low data regime (1% annotated data of nuScenes) Until now, training networks for camera-only semantic segmentation in BEV space has relied on full supervision, necessitating annotations for each scene. This process is time-consuming due to the transition from the input image to the \u201csynthetic\u201d BEV space. For instance, annotations are typically generated for Lidar data, checked for visibility and classes in images, and then projected onto BEV segmentation images. To reduce annotation costs, we explore the potential of self-supervised pretraining camera-only BEV segmentation networks. In self-supervised pretraining, networks are typically trained on annotation-free pretext tasks before the primary (downstream) task, like semantic segmentation. The aim is to guide the network in learning useful data representations during this pretraining phase. This process is intended to enhance the network\u2019s performance on the downstream task, enabling it to achieve higher accuracy while utilizing a reduced amount of annotated data. Pretraining has been proven efficient for several modalities from images [15, 27] to Lidar [7] and with different strategies, such as contrastive learning [15, 26], teacher-student architectures [12, 21\u201323] or reconstruction tasks [4, 27]. 1 arXiv:2404.14027v1 [cs.CV] 22 Apr 2024 In the field of autonomous driving, self-supervised pretraining for camera-only BEV networks has received limited attention despite its crucial role. Recently, a few methods have emerged that delve into this subject [54, 79, 81], but they predominantly focus on pretraining with 3D geometry prediction tasks. 
For instance, ViDAR [81] employs Lidar point cloud forecasting for pre-training, UniPAD [79] 3D surface and RGB pixel reconstruction, and UniScene [54] 3D occupancy prediction. While these methods equip the BEV networks with 3D geometry understanding, they often fall short in making the network capture semantic-aware information of the 3D scene, essential for tasks like BEV-based semantic segmentation. Our approach, called Occupancy Feature Prediction (OccFeat), addresses this gap by presenting a pretraining objective that promotes a more comprehensive understanding of the 3D scene, encompassing both geometric and semantic aspects. In our approach, the camera-only BEV network is tasked with predicting a 3D voxel-grid representation that includes (a) features indicating voxel occupancy and (b) high-level self-supervised image features characterizing occupied voxels. To create this target voxel grid representation, we leverage aligned Lidar and image data in autonomous driving setups, along with a self-supervised image foundation model like DINOv2 [56], which has been pretrained to extract high-level 2D image features. Specifically, the occupancy of each voxel is determined using Lidar data, considering a voxel occupied if it contains at least one Lidar point. Simultaneously, the self-supervised image foundation model fills the occupied voxels with high-level image features. This process involves projecting the center coordinates of each occupied voxel cell into the 2D space of the image features extracted from the foundation model. Unlike approaches solely focused on 3D geometry prediction, our method goes beyond by training the BEV network to predict a richer, more semantic representation of the 3D scene, all without requiring manual annotation, leveraging the pre-trained image foundation model. We empirically demonstrate that this enhancement leads to significantly better downstream BEV semantic segmentation results, especially in low-data regimes (e.g., see Fig. 1). Our contributions are the following: 1. We present OccFeat, a self-supervised pretraining approach for camera-only BEV segmentation networks that enforces both geometric and semantic understanding of 3D scenes. 2. OccFeat exploits three modalities for pretraining: image, Lidar, and DINOv2 features. To the best of our knowledge, we are the first to leverage foundation image models (DINOv2) for pretraining camera-only BEV networks. We note that after pretraining, the Lidar and DINOv2 data are not used anymore. 3. We evaluate OccFeat on nuScenes [9] for BEV semantic segmentation of both vehicles and map layout. The results show the benefit of our pretraining method, especially in low-shot regimes, e.g., when using annotations only for 1% or 10% of nuScenes\u2019 training data. Additionally, our OccFeat pretraining improves robustness, as evaluated on the nuScenes-C benchmark [74]. Our goal is to pretrain a camera-only BEV segmentation network in a self-supervised way. To this end, we intend to equip the learned BEV representations with the ability to encode both the 3D geometry of the scene and semantic-aware information, crucial for downstream tasks such as semantic segmentation within the BEV space. To achieve this goal, we leverage the availability of (i) aligned Lidar and image data in autonomous driving setups and (ii) a self-supervised pretrained image encoder able to extract high-level 2D features from images (e.g., DINOv2 [56]). 
The proposed self-supervised BEV pretraining method OccFeat, illustrated in Figure 2, encompasses two training objectives: Occupancy reconstruction (L_occ): This objective enforces the BEV network to capture the 3D geometry of the scene through an occupancy reconstruction task, defined using the available Lidar data. Occupancy-guided feature distillation (L_feat): This objective enforces the BEV network to reconstruct high-level semantic features. The network is trained to predict, at occupied voxel locations, the features of an off-the-shelf self-supervised pretrained image encoder. The total objective that our self-supervised pre-training approach minimizes is: $\\mathcal{L}_{total} = \\mathcal{L}_{occ} + \\lambda \\cdot \\mathcal{L}_{feat}$, (1) where \u03bb is the weight coefficient for balancing the two loss terms. Unless otherwise stated, we use \u03bb = 0.01. In the following, we begin with a brief overview of camera-only BEV networks in Sec. 3.1. Then, we describe our occupancy reconstruction objective in Sec. 3.2 and our occupancy-guided feature distillation objective in Sec. 3.3. Figure 2. Overview of OccFeat\u2019s self-supervised BEV pretraining approach. OccFeat attaches an auxiliary pretraining head on top of the BEV network. This head \u201cunsplats\u201d the BEV features to a 3D feature volume and predicts with it (a) the 3D occupancy of the scene (occupancy reconstruction loss) and (b) high-level self-supervised image features characterizing the occupied voxels (occupancy-guided distillation loss). The occupancy targets are produced by \u201cvoxelizing\u201d Lidar points (see Fig. 3), while the self-supervised image foundation model DINOv2 provides the feature targets for the occupied voxels. The pretraining head is removed after the pretraining. 3.1. BEV networks The BEV networks aim to build a BEV feature map from registered image data. In our case, this feature map is used for semantic segmentation. These networks share a common architecture composed of 1) an image encoder, 2) a projection module, and 3) a decoder. The image encoder E_I produces a feature map F_c for each image I_c, with c ranging from 1 to C, where C is the total number of surround-view cameras on the vehicle. Typically, these encoders come from either the ResNet [25] or EfficientNet [64] family of models. The projection module P_B is responsible for changing the representation space, from the sensor coordinate system (image features F_c) to the BEV space. P_B takes as input the image features {F_c}_{c=1}^{C} and the camera calibration, and projects the image features into the BEV space. Architectures differ in the way they operate this projection, from a full image feature volume aggregated over the vertical axis (SimpleBEV [24]) or a sparser volume filled according to an estimated depth distribution (LSS [58]) to an attention-based projection as in CVT [83] and BEVFormer [45]. The decoder D_B takes as input the image features in the BEV space generated by P_B and further processes them with 2D convolutional layers, optionally upsampling them to the desired segmentation map resolution [24, 58]. This produces the output BEV features F_B \u2208 R^{N_B\u00d7H_B\u00d7W_B}, where H_B \u00d7 W_B is the spatial resolution of the BEV features and N_B is the number of feature channels. Architecture-agnostic BEV representation pretraining. The self-supervised pretraining approach that we propose is applied on these BEV features F_B that D_B produces. Hence, it is possible to apply this pretraining approach to any BEV model by plugging in the pretraining head, which we describe next, at the end of the BEV network, before the downstream task-specific head. We note that the auxiliary pretraining head is removed after the end of pretraining. 
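As a rough illustration of this architecture-agnostic design (not code from the paper), a BEV network can be wrapped with the auxiliary pretraining head as follows; the class name, interfaces and tensor shapes are assumptions for the sketch.

```python
import torch.nn as nn

class OccFeatPretrainer(nn.Module):
    """Wrap any camera-only BEV network with the auxiliary OccFeat pretraining head.
    `bev_net(images, cameras)` is assumed to return BEV features of shape (B, N_B, H_B, W_B);
    the head is discarded once pretraining is finished."""

    def __init__(self, bev_net: nn.Module, pretrain_head: nn.Module):
        super().__init__()
        self.bev_net = bev_net              # image encoder + view projection + BEV decoder
        self.pretrain_head = pretrain_head  # unsplatting decoder + occupancy / feature heads

    def forward(self, images, cameras):
        bev_feats = self.bev_net(images, cameras)   # (B, N_B, H_B, W_B)
        return self.pretrain_head(bev_feats)        # 3D occupancy logits and voxel features

    def export_for_finetuning(self) -> nn.Module:
        # After pretraining, keep only the BEV network and attach the downstream task head.
        return self.bev_net
```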
3.2. Occupancy reconstruction Building on the insights from previous studies that have highlighted the effectiveness of reconstruction as a valuable prior for diverse modalities, such as images [4, 27] and Lidar point clouds [7], we employ a simple occupancy reconstruction pretraining task for BEV networks. The goal is to lead the BEV network to learn BEV features that encode information about the 3D geometry of the scene. Let V represent a voxel grid with shape Z_B \u00d7 H_B \u00d7 W_B, where Z_B is the height dimension. To create the occupancy targets O \u2208 {0, 1}^{Z_B\u00d7H_B\u00d7W_B}, we adhere to the common convention [37, 66] wherein a voxel v \u2208 V is considered occupied if at least one Lidar point falls inside it (see Fig. 3). Figure 3. Occupancy grid. A voxel is considered occupied if there is at least one point inside it. To estimate the occupancy grid \u02c6O, we employ an \u201cunsplatting\u201d decoder network D_V that takes the 2D BEV feature maps F_B \u2208 R^{N_B\u00d7H_B\u00d7W_B} from D_B as input and generates a 3D feature volume F_V \u2208 R^{N_B\u00d7Z_B\u00d7H_B\u00d7W_B}. The unsplatting decoder D_V starts with two 2D convolutional layers. The first layer, with 3\u00d73 kernels and N_B output channels, is followed by Instance Normalization [67] and ReLU units. The second layer has 1\u00d71 kernels with N_B Z_B output channels. These layers produce 2D BEV feature maps of shape (N_B Z_B)\u00d7H_B\u00d7W_B, which are reshaped into a 3D feature volume of shape N_B \u00d7 Z_B \u00d7 H_B \u00d7 W_B. This reshaping is done by dividing the (N_B Z_B)-dimensional feature channels into Z_B groups, each with N_B channels. Next, the decoder D_V processes these 3D features with two 3D convolutional layers. The first layer, with 1\u00d71\u00d71 kernels and 2N_B output channels, is followed by a Softplus non-linearity. The second layer has 1\u00d71\u00d71 kernels with N_B output channels, producing the final 3D feature volume F_V \u2208 R^{N_B\u00d7Z_B\u00d7H_B\u00d7W_B}. Finally, to generate the occupancy prediction \u02c6O, a single 3D convolution layer with a 1\u00d71\u00d71 kernel is applied on F_V, followed by a sigmoid activation. The loss function to minimize is a binary cross-entropy loss on the voxel occupancy: $\\mathcal{L}_{occ} = \\frac{1}{|V|} \\sum_{v \\in V} \\text{BCE}(\\hat{O}_v, O_v)$. (2) 3.3. Occupancy-guided feature distillation We introduce a self-supervised objective that complements occupancy reconstruction by guiding our BEV network to encode high-level semantic information. To achieve this, we leverage the availability of a self-supervised pretrained image network, denoted as E_I^Y, which takes an image as input and produces high-level 2D feature maps with N_y feature channels. Our OccFeat approach involves a feature distillation objective, where we fill the occupied voxels in V with features extracted from E_I^Y and then train the BEV network to predict these voxel features. Let V_Occ represent the set of occupied voxels, defined as V_Occ = {v \u2208 V | O_v = 1}. Our feature distillation objective operates specifically on these occupied voxels. To create the target feature Y_v \u2208 R^{N_y} for each occupied voxel v \u2208 V_Occ, we project the voxel\u2019s center 3D coordinates onto the image feature maps extracted by the target image encoder E_I^Y from the surround-view input images {I_c}_{c=1}^{C}. Given these projections of 3D points into 2D images, we obtain an N_y-dimensional feature vector by bilinearly sampling the feature map of each image I_c with a valid projection (i.e., if a point is projected inside an image I_c). The target feature Y_v is then computed across images with valid projections as the average of the bilinearly-sampled feature vectors. To predict the target features of the occupied voxels, we use the 3D feature volume F_V \u2208 R^{N_B\u00d7Z_B\u00d7H_B\u00d7W_B} produced by the D_V decoder. A single 3D convolution layer with 1\u00d71\u00d71 kernels and N_y output channels is applied on F_V, resulting in the 3D feature volume \u02c6Y \u2208 R^{N_y\u00d7Z_B\u00d7H_B\u00d7W_B}. Then, for each occupied voxel v \u2208 V_Occ, we extract its corresponding feature \u02c6Y_v \u2208 R^{N_y} from \u02c6Y. The feature distillation loss that we aim to minimize is the average negative cosine similarity between the predicted and target features over the occupied voxels v \u2208 V_Occ: $\\mathcal{L}_{feat} = -\\frac{1}{|V_{Occ}|} \\sum_{v \\in V_{Occ}} \\cos(\\hat{Y}_v, Y_v)$. (3) We note that there exists a small number of occupied voxels that lack valid projections into any of the images {I_c}_{c=1}^{C}. Although not explicitly shown in Eq. (3) for the sake of notation simplicity, these voxels are, in fact, excluded from the computation of the feature distillation loss. 
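To make the two pretraining objectives concrete, here is a compact PyTorch-style sketch (not the authors' code) of the occupancy targets and the combined loss of Eqs. (1)-(3); the tensor layouts, function names, and the assumption that the DINOv2 target features have already been splatted into the voxel grid are ours.

```python
import torch
import torch.nn.functional as F

def occupancy_targets(points: torch.Tensor, grid_min: torch.Tensor,
                      voxel_size: torch.Tensor, grid_shape: tuple) -> torch.Tensor:
    """points: (N, 3) Lidar points (x, y, z) in the ego frame; grid_shape: voxels per axis.
    A voxel is marked occupied if at least one point falls inside it (Fig. 3)."""
    idx = torch.floor((points - grid_min) / voxel_size).long()          # (N, 3) voxel indices
    inside = ((idx >= 0) & (idx < torch.tensor(grid_shape))).all(dim=1)
    idx = idx[inside]
    occ = torch.zeros(grid_shape)
    occ[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return occ  # (X, Y, Z); permute to the (Z_B, H_B, W_B) layout expected by the head

def occfeat_losses(occ_logits: torch.Tensor, voxel_feats: torch.Tensor,
                   occ_target: torch.Tensor, target_feats: torch.Tensor, lam: float = 0.01):
    """occ_logits: (Z, H, W) occupancy logits (BCE-with-logits fuses the paper's sigmoid + BCE);
    voxel_feats / target_feats: (N_y, Z, H, W) predicted and DINOv2 target voxel features.
    Returns L_total = L_occ + lambda * L_feat."""
    l_occ = F.binary_cross_entropy_with_logits(occ_logits, occ_target)
    occupied = occ_target.bool()                            # distillation only on occupied voxels
    pred = voxel_feats.permute(1, 2, 3, 0)[occupied]        # (M, N_y)
    tgt = target_feats.permute(1, 2, 3, 0)[occupied]        # (M, N_y)
    l_feat = -F.cosine_similarity(pred, tgt, dim=-1).mean()
    return l_occ + lam * l_feat, l_occ, l_feat
```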
4. Experiments 4.1. Experimental setup Datasets. To evaluate our approach we use the nuScenes [9] dataset, both for conducting the pretraining of the camera-only BEV networks and for finetuning them on the downstream tasks of BEV-based semantic segmentation. The dataset is composed of 1,000 sequences recorded in Boston and Singapore. The data is divided into training (700 sequences), validation (100 sequences) and test (200 sequences) splits. Each frame contains a point cloud acquired with a 32-layer Lidar and 6 images covering the surroundings of the ego-vehicle. BEV segmentation tasks. We consider two different tasks related to BEV segmentation. First, as in [24], we evaluate our method on vehicle segmentation. We build the target BEV segmentation ground truth by projecting the boxes of vehicles onto the BEV plane. We use the same setting as [24], i.e., a range of 50 meters around the ego-vehicle, and BEV ground truths of size 200x200 pixels. Second, we focus on the layout and evaluate on map segmentation. Here, we evaluate the ability of our finetuned BEV network to segment background classes: \u201croad\u201d, \u201csidewalk\u201d, \u201ccrosswalk\u201d, \u201cparking area\u201d and \u201croad dividers\u201d. Architectures. BEV networks. We experiment with two BEV architectures: SimpleBEV [24] and BEVFormer [45] (implementation from [24]). Image encoders. 
As image backbones in the BEV segmentation networks, we employ either EfficientNet-B0 [64] (EN-B0) or ResNet-50 [25] (RN-50). Following the common practice in camera-only BEV segmentation [24], these image backbones have undergone pretraining on ImageNet, either in a supervised way (for EN-B0) or self-supervised using MoCov2 [17] (for RN-50). Teacher model. The self-supervised image foundation model E^Y_I, used as the teacher for the occupancy-guided feature distillation, is the ViT-S/14 variant of DINOv2 [56]. OccFeat pretraining. We pretrain BEV segmentation networks with batch size 16 on 4\u00d7V100 GPUs using the Adam optimizer with weight decay 1e\u22127 and a constant learning rate of 1e\u22123 during training. By default we pretrain for 50 epochs. We also pretrain some models for 100 epochs, which we denote with OccFeat\u2020 in the tables. We use augmentations from [24]. Unless otherwise stated, we use 224\u00d7400 image resolution for the input camera frames.
Table 1. Segmentation IoU results for Vehicle and Map classes, comparing our OccFeat against no BEV pretraining (None) and the pretraining baseline Img-ALSO. Results with 224\u00d7400 image resolution. \u2020: pretrained for 100 epochs; all other models for 50 epochs.
Architecture | Image Backb. | BEV Pretraining | Vehicles 1% | 10% | 100% | Map 1% | 10% | 100%
SimpleBEV | EN-B0 | None | 13.7 | 26.0 | 37.4 | 20.1 | 28.1 | 45.3
SimpleBEV | EN-B0 | Img-ALSO | 18.5 (+4.8) | 26.6 (-0.6) | 32.1 (-5.3) | 23.2 (+3.1) | 29.0 (+0.9) | 41.1 (-4.2)
SimpleBEV | EN-B0 | OccFeat | 24.3 (+10.6) | 30.9 (+4.9) | 37.7 (+0.3) | 25.4 (+5.3) | 33.7 (+5.6) | 47.3 (+2.0)
SimpleBEV | EN-B0 | OccFeat\u2020 | 24.5 (+10.8) | 32.0 (+6.0) | 38.1 (+0.7) | 26.2 (+6.1) | 34.3 (+6.2) | 47.6 (+2.3)
SimpleBEV | RN-50 | None | 13.9 | 28.6 | 42.2 | 20.1 | 31.2 | 44.2
SimpleBEV | RN-50 | Img-ALSO | 18.4 (+4.5) | 25.6 (-3.0) | 37.2 (-5.0) | 22.3 (+2.2) | 28.7 (-2.5) | 41.2 (-3.0)
SimpleBEV | RN-50 | OccFeat | 23.5 (+9.6) | 31.4 (+2.8) | 41.0 (-1.2) | 25.2 (+5.1) | 33.6 (+2.4) | 50.0 (+5.8)
SimpleBEV | RN-50 | OccFeat\u2020 | 24.8 (+10.9) | 31.3 (+2.7) | 41.7 (-0.5) | 25.9 (+5.8) | 34.4 (+3.2) | 51.3 (+7.1)
BEVFormer | EN-B0 | None | 11.3 | 22.8 | 37.2 | 20.4 | 29.4 | 49.6
BEVFormer | EN-B0 | OccFeat | 24.9 (+13.6) | 30.1 (+7.3) | 38.8 (+1.6) | 26.6 (+6.2) | 36.2 (+6.8) | 50.5 (+0.9)
Finetuning. We experiment with finetuning at 1%, 10% and 100% of the annotated data. We use the AdamW optimizer and a One Cycle scheduler with 1e\u22127 as weight decay. The complete hyperparameters (number of epochs / starting learning rate / batch size / batch gradient accumulation) are the following: 1%: 100 / 1e\u22124 / 6 / 1; 10%: 50 / 1e\u22124 / 6 / 1; 100%: 30 / 3e\u22124 / 8 / 5. The training is conducted with 2\u00d7V100 GPUs. If the BEV segmentation network undergoing finetuning has not been pretrained with OccFeat, we initialize its image backbone with weights pretrained on ImageNet, following the same initialization procedure used before the OccFeat pretraining. This is common practice in camera-only BEV segmentation [24]. 4.2. Pretraining baselines To validate the effectiveness of our OccFeat approach, which integrates feature distillation with 3D geometry prediction, we implement a pretraining baseline focusing solely on 3D geometry reconstruction. To implement this baseline, called Img-ALSO, we modify the Lidar pretraining method ALSO [7] for the purpose of pretraining camera-only BEV networks. ALSO is a top-performing reconstruction-based approach for self-supervised pretraining of Lidar networks. Its pretext task is to learn an implicit function partitioning the space into empty and occupied (inside objects) regions, where the supervision signal is directly generated from the input Lidar data, using the sensor lines of sight.
This entails a geometry reconstruction task, where each pair of 3D point and output feature must reconstruct a local neighborhood. To implement our Img-ALSO baseline, we begin with the ALSO variant tailored for pretraining Lidar detection networks, such as the SECOND network [77], where the features from the input 3D point cloud are projected on a top-view plane. In this setup, we replace the 3D backbone with the image BEV network intended for pretraining. As the supervision signal in Img-ALSO still comes from the Lidar points (as in ALSO), we use the same hyper-parameters as ALSO, i.e., a decimation grid of 10 cm, a maximal distance-to-Lidar-point for queries of 10 cm and a reconstruction radius of 1 m. We compare against Img-ALSO in Sec. 4.3. In addition to comparing OccFeat with the above baseline, in Sec. 4.4 we include comparisons against ablations that use only the occupancy reconstruction objective (L_occ), which is another 3D geometry-based approach, or only the occupancy-guided feature distillation objective (L_feat). 4.3. Results Comparison with baselines. In the SimpleBEV results presented in Tab. 1, our OccFeat self-supervised BEV pretraining approach is compared against the proposed baseline Img-ALSO, along with the case of not conducting BEV pretraining. We provide results for both Vehicle and Map segmentation, utilizing 1%, 10%, and 100% of the annotated training data. For the SimpleBEV network we use either the EN-B0 or the RN-50 image backbones and 224\u00d7400 image resolution for the input camera frames. We observe that, compared to the absence of BEV pretraining, our OccFeat enhances segmentation results in almost all settings. The only exception is that of Vehicle segmentation with 100% annotations using SimpleBEV with RN-50. The improvement is particularly prominent with only 1% or 10% of annotated data, showcasing the effectiveness of our approach in low-shot settings. When comparing with the examined baseline (Img-ALSO), our OccFeat outperforms it in almost all cases. An interesting observation is that Img-ALSO improves segmentation performance only in the 1%-annotation setting, while yielding worse results in the 10%- and 100%-annotation settings. Scaling with longer BEV pretraining. In Tab. 1, apart from presenting results for 50 pretraining epochs, we also include results for our OccFeat approach and SimpleBEV (with EN-B0 or RN-50) with an extended pretraining duration of 100 epochs (OccFeat\u2020 model). We observe consistent improvements in segmentation results with this longer pretraining duration. This observation underscores the scalability of our approach, demonstrating its ability to gain further benefits from longer pretraining periods.
Figure 4. Study on robustness. Segmentation results on the nuScenes-C dataset for Vehicle classes using the BEVFormer network with EN-B0 image backbone on 100% annotated data. Comparison of our OccFeat against no BEV pretraining.
Table 2. Impact of image resolution. SimpleBEV segmentation IoU results for the Vehicle class. Results with the EN-B0 image backbone. \u2020: pretrained for 100 epochs.
BEV Pretraining | Pretraining resolution | Finetuning resolution | Vehicles 1% | 10% | 100%
None | N/A | 224\u00d7400 | 13.7 | 26.0 | 37.4
OccFeat | 224\u00d7400 | 224\u00d7400 | 24.3 | 30.9 | 37.7
OccFeat\u2020 | 224\u00d7400 | 224\u00d7400 | 24.5 | 32.0 | 38.1
None | N/A | 448\u00d7800 | 12.9 | 28.4 | 41.6
OccFeat | 448\u00d7800 | 448\u00d7800 | 26.5 | 34.5 | 41.5
OccFeat\u2020 | 224\u00d7400 | 448\u00d7800 | 26.1 | 33.3 | 41.5
Adaptable to various BEV network architectures. As mentioned in Sec. 3, our BEV pretraining method can be used with any BEV network architecture. In Tab.
1, apart from results with SimpleBEV networks, we also present results using the BEVFormer [45] segmentation network for the 1%, 10%, and 100% annotation settings. We compare the performance of our OccFeat method against the scenario where BEV pretraining is not conducted. We see that even when employing the BEVFormer network architecture, our OccFeat pretraining approach enhances segmentation performance across all evaluation settings. Exploiting higher resolution images. In Tab. 2, we present segmentation results using either 224\u00d7400 or 448\u00d7800 resolutions for the camera frames fed into the SimpleBEV network. Our OccFeat pretraining approach consistently enhances segmentation results across all cases. What is noteworthy is that pretraining with the lower 224\u00d7400 resolution and then fine-tuning with the higher 448\u00d7800 resolution also improves the results, almost as much as pretraining directly with the higher resolution.
Table 3. Ablation of OccFeat\u2019s losses. BEV segmentation results (IoU) for the Vehicle and Map classes using 1% annotated data. Results with 224\u00d7400 resolution and the EN-B0 image backbone.
BEV Network | BEV Pretraining | L_occ | L_feat | Vehicles 1% | Maps 1%
SimpleBEV | \u2717 |  |  | 13.7 | 20.1
SimpleBEV | \u2713 | \u2713 |  | 22.8 | 25.2
SimpleBEV | \u2713 |  | \u2713 | 17.4 | 23.4
SimpleBEV | \u2713 | \u2713 | \u2713 | 24.3 | 25.4
BEVFormer | \u2717 |  |  | 11.3 | 20.4
BEVFormer | \u2713 | \u2713 |  | 21.9 | 25.1
BEVFormer | \u2713 | \u2713 | \u2713 | 24.9 | 26.6
Table 4. Impact of the loss weight \u03bb (L = L_occ + \u03bb \u00b7 L_feat). SimpleBEV vehicle segmentation results using 1% annotated data, 224\u00d7400 image resolution and the EN-B0 image backbone.
Loss weight \u03bb | 0 | 0.0001 | 0.001 | 0.01 | 0.1 | 1.0
Vehicle (1%) | 22.8 | 22.9 | 22.7 | 24.3 | 23.9 | 19.4
Robustness study. We study the robustness of OccFeat by evaluating it on the nuScenes-C benchmark [74]. This benchmark consists of eight distinct data corruptions, each with three intensity levels, applied to the validation set of nuScenes. In Fig. 4 we present vehicle segmentation results on nuScenes-C using BEVFormer with the EN-B0 backbone finetuned on 100% annotated data. For each corruption type we report the average across the three severity levels. The comparison of our OccFeat against no BEV pretraining illustrates that the OccFeat pretraining improves the robustness of the final BEV model. 4.4. Ablation study Ablation of OccFeat\u2019s losses. In Tab. 3 we conduct an ablation study on the two pretraining objectives, namely L_occ and L_feat, of our OccFeat approach. The evaluation focuses on Vehicle and Map segmentation results with 1% annotated training data, using both the SimpleBEV and BEVFormer networks. For SimpleBEV, both L_occ (a 3D geometry prediction objective) and L_feat (an occupancy-guided feature distillation objective) showcase improvements compared to the scenario without BEV pretraining. The combination of both pretraining objectives, forming our OccFeat approach, yields the most favorable segmentation results. This underscores the efficacy of our OccFeat pretraining method, distinguishing it from prior self-supervised BEV pretraining works that rely solely on 3D geometry prediction. The advantage of enhancing 3D geometry prediction (L_occ) with feature distillation (L_feat) is further affirmed by the ablation results obtained with BEVFormer. Figure 5. Visualisation of predicted 3D features, using a 3-dimensional PCA mapped on RGB channels. The features contain semantic information, e.g., cars in cyan color.
Using the same PCA mapping on a different scene (right), we show that the semantic features are consistent across scenes. Objects within colored circles in the feature space correspond to those in similarly colored circles in the image. Figure 6. Correlation maps of the student\u2019s predicted 3D features and features selected on a car (a) and on the road (b). Impact of the loss weight \u03bb. In Tab. 4, we investigate the influence of the loss weight \u03bb, responsible for balancing the two loss terms of OccFeat (refer to Eq. (1)). The most favorable segmentation results are achieved for \u03bb values ranging between 0.001 and 0.01, with 0.01 being the optimal choice. 4.5. Qualitative results In Fig. 5 and Fig. 6, we assess the semantic quality of the reconstructed features, using a colored mapping from PCA (Fig. 5) and correlation maps (Fig. 6). We can observe that the semantic information from the DINOv2 teacher has been preserved and that semantic classes such as cars are easily separable. Additionally, the representations are consistent across scenes: e.g., in Fig. 5 (right), we apply the PCA mapping computed on the left scene to a new scene, and the cyan points on the right correspond to a car. 5. Conclusion We introduced OccFeat, a self-supervised pretraining method for camera-only BEV segmentation networks. Our approach combines two pretraining objectives: a 3D occupancy prediction task using raw Lidar data and an occupancy-guided feature distillation task based on the self-supervised pre-trained image foundation model DINOv2. The former enhances the learning of 3D geometry-aware BEV features, while the latter focuses on semantic-aware BEV features. Our empirical results demonstrate that both pretraining objectives enhance segmentation performance compared to not conducting BEV pretraining. Combining both objectives yields the most favorable results, emphasizing the effectiveness of our OccFeat approach. This sets our method apart from prior self-supervised BEV pretraining methods that rely solely on 3D geometry prediction. Limitations and Perspectives. While OccFeat has proven highly effective in low-data scenarios (i.e., 1% and 10% finetuning), it yields slight or no improvements in the 100% finetuning setting. This could be because we pretrain in a self-supervised way and then finetune with supervision on the same data, leaving all information available at finetuning time. Additionally, the nuScenes dataset used for pretraining is relatively small; using a larger pretraining dataset could further enhance performance in the 100% finetuning setting. One aspect not sufficiently explored in our work is \u201cscaling the pretraining\u201d. As demonstrated in ScaLR [59], when pretraining Lidar networks via self-supervised distillation of image-based models, scaling the teacher and the student can significantly boost performance. For example, using a larger teacher model, such as going from the ViT-S to the ViT-L or ViT-G variants of DINOv2, could yield superior features for OccFeat\u2019s distillation loss. Similarly, scaling the student components, specifically the image encoder and the BEV decoder, might enhance OccFeat\u2019s distillation process. Furthermore, another avenue for improvement could involve incorporating time into our self-supervised pretraining task. For instance, accumulating Lidar points over several frames to generate denser occupancy maps could enhance the effectiveness of OccFeat\u2019s pretraining.
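As an illustration of this last point, the sketch below shows one way binary occupancy targets (Sec. 3.2) could be built from one or several accumulated Lidar sweeps. It is only a sketch: the grid layout, the pose format and all variable names are assumptions made for this example, not part of the paper.

```python
# Illustrative construction of binary occupancy targets from Lidar points,
# optionally accumulated over several frames as suggested above.
import numpy as np


def occupancy_targets(point_clouds, poses, grid_min, grid_max, shape_zhw):
    """point_clouds: list of (N_i, 3) arrays in each frame's Lidar coordinates.
    poses: list of (4, 4) transforms mapping each frame to the reference (ego) frame.
    grid_min / grid_max: (x, y, z) bounds of the voxel grid in the reference frame.
    Returns a binary grid of shape (Z_B, H_B, W_B): a voxel is occupied if at
    least one (accumulated) point falls inside it (the convention of Sec. 3.2)."""
    pts = []
    for pc, T in zip(point_clouds, poses):
        homo = np.concatenate([pc, np.ones((pc.shape[0], 1))], axis=1)  # (N, 4)
        pts.append((homo @ T.T)[:, :3])          # move points to the reference frame
    pts = np.concatenate(pts, axis=0)

    grid_min, grid_max = np.asarray(grid_min, float), np.asarray(grid_max, float)
    # Assumed axis mapping: x -> W_B, y -> H_B, z -> Z_B.
    shape_xyz = np.array([shape_zhw[2], shape_zhw[1], shape_zhw[0]])
    voxel_size = (grid_max - grid_min) / shape_xyz

    # Keep points inside the grid, then convert them to integer voxel indices.
    inside = np.all((pts >= grid_min) & (pts < grid_max), axis=1)
    idx = ((pts[inside] - grid_min) / voxel_size).astype(np.int64)      # (M, 3) as (x, y, z)

    occ = np.zeros(shape_zhw, dtype=np.uint8)                           # (Z_B, H_B, W_B)
    occ[idx[:, 2], idx[:, 1], idx[:, 0]] = 1                            # >= 1 point -> occupied
    return occ
```

With a single sweep and an identity pose this reduces to per-frame occupancy targets; accumulating additional sweeps only densifies the set of occupied voxels, which is the extension suggested above.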
Acknowledgements This work was performed using HPC resources from GENCI\u2013IDRIS (Grants 2022-AD011012883R2, 2022AD011012884R2 and 2024-AD011015037). It was supported in part by the French Agence Nationale de la Recherche (ANR) grant MultiTrans (ANR21-CE23-0032). It also received the support of CTU Student Grant SGS21184OHK33T37 and by the Ministry of Education, Youth and Sports of the Czech Republic through the e-INFRA CZ (ID:90254), and the support of EXA4MIND, a European Union\u2019s Horizon Europe Research and Innovation programme under grant agreement N\u00b0101092944. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the granting authority can be held responsible for them. The authors have no competing interests to declare that are relevant to the content of this article. 8" + }, + { + "url": "http://arxiv.org/abs/2002.05709v3", + "title": "A Simple Framework for Contrastive Learning of Visual Representations", + "abstract": "This paper presents SimCLR: a simple framework for contrastive learning of\nvisual representations. We simplify recently proposed contrastive\nself-supervised learning algorithms without requiring specialized architectures\nor a memory bank. In order to understand what enables the contrastive\nprediction tasks to learn useful representations, we systematically study the\nmajor components of our framework. We show that (1) composition of data\naugmentations plays a critical role in defining effective predictive tasks, (2)\nintroducing a learnable nonlinear transformation between the representation and\nthe contrastive loss substantially improves the quality of the learned\nrepresentations, and (3) contrastive learning benefits from larger batch sizes\nand more training steps compared to supervised learning. By combining these\nfindings, we are able to considerably outperform previous methods for\nself-supervised and semi-supervised learning on ImageNet. A linear classifier\ntrained on self-supervised representations learned by SimCLR achieves 76.5%\ntop-1 accuracy, which is a 7% relative improvement over previous\nstate-of-the-art, matching the performance of a supervised ResNet-50. When\nfine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy,\noutperforming AlexNet with 100X fewer labels.", + "authors": "Ting Chen, Simon Kornblith, Mohammad Norouzi, Geoffrey Hinton", + "published": "2020-02-13", + "updated": "2020-07-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2401.09413v1", + "title": "POP-3D: Open-Vocabulary 3D Occupancy Prediction from Images", + "abstract": "We describe an approach to predict open-vocabulary 3D semantic voxel\noccupancy map from input 2D images with the objective of enabling 3D grounding,\nsegmentation and retrieval of free-form language queries. This is a challenging\nproblem because of the 2D-3D ambiguity and the open-vocabulary nature of the\ntarget tasks, where obtaining annotated training data in 3D is difficult. The\ncontributions of this work are three-fold. First, we design a new model\narchitecture for open-vocabulary 3D semantic occupancy prediction. The\narchitecture consists of a 2D-3D encoder together with occupancy prediction and\n3D-language heads. The output is a dense voxel map of 3D grounded language\nembeddings enabling a range of open-vocabulary tasks. 
Second, we develop a\ntri-modal self-supervised learning algorithm that leverages three modalities:\n(i) images, (ii) language and (iii) LiDAR point clouds, and enables training\nthe proposed architecture using a strong pre-trained vision-language model\nwithout the need for any 3D manual language annotations. Finally, we\ndemonstrate quantitatively the strengths of the proposed model on several\nopen-vocabulary tasks: Zero-shot 3D semantic segmentation using existing\ndatasets; 3D grounding and retrieval of free-form language queries, using a\nsmall dataset that we propose as an extension of nuScenes. You can find the\nproject page here https://vobecant.github.io/POP3D.", + "authors": "Antonin Vobecky, Oriane Sim\u00e9oni, David Hurych, Spyros Gidaris, Andrei Bursuc, Patrick P\u00e9rez, Josef Sivic", + "published": "2024-01-17", + "updated": "2024-01-17", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2206.10965v1", + "title": "Polar Parametrization for Vision-based Surround-View 3D Detection", + "abstract": "3D detection based on surround-view camera system is a critical technique in\nautopilot. In this work, we present Polar Parametrization for 3D detection,\nwhich reformulates position parametrization, velocity decomposition, perception\nrange, label assignment and loss function in polar coordinate system. Polar\nParametrization establishes explicit associations between image patterns and\nprediction targets, exploiting the view symmetry of surround-view cameras as\ninductive bias to ease optimization and boost performance. Based on Polar\nParametrization, we propose a surround-view 3D DEtection TRansformer, named\nPolarDETR. PolarDETR achieves promising performance-speed trade-off on\ndifferent backbone configurations. Besides, PolarDETR ranks 1st on the\nleaderboard of nuScenes benchmark in terms of both 3D detection and 3D tracking\nat the submission time (Mar. 4th, 2022). Code will be released at\n\\url{https://github.com/hustvl/PolarDETR}.", + "authors": "Shaoyu Chen, Xinggang Wang, Tianheng Cheng, Qian Zhang, Chang Huang, Wenyu Liu", + "published": "2022-06-22", + "updated": "2022-06-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1905.09272v3", + "title": "Data-Efficient Image Recognition with Contrastive Predictive Coding", + "abstract": "Human observers can learn to recognize new categories of images from a\nhandful of examples, yet doing so with artificial ones remains an open\nchallenge. We hypothesize that data-efficient recognition is enabled by\nrepresentations which make the variability in natural signals more predictable.\nWe therefore revisit and improve Contrastive Predictive Coding, an unsupervised\nobjective for learning such representations. This new implementation produces\nfeatures which support state-of-the-art linear classification accuracy on the\nImageNet dataset. When used as input for non-linear classification with deep\nneural networks, this representation allows us to use 2-5x less labels than\nclassifiers trained directly on image pixels. Finally, this unsupervised\nrepresentation substantially improves transfer learning to object detection on\nthe PASCAL VOC dataset, surpassing fully supervised pre-trained ImageNet\nclassifiers.", + "authors": "Olivier J. H\u00e9naff, Aravind Srinivas, Jeffrey De Fauw, Ali Razavi, Carl Doersch, S. M. 
Ali Eslami, Aaron van den Oord", + "published": "2019-05-22", + "updated": "2020-07-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.10593v1", + "title": "MatrixVT: Efficient Multi-Camera to BEV Transformation for 3D Perception", + "abstract": "This paper proposes an efficient multi-camera to Bird's-Eye-View (BEV) view\ntransformation method for 3D perception, dubbed MatrixVT. Existing view\ntransformers either suffer from poor transformation efficiency or rely on\ndevice-specific operators, hindering the broad application of BEV models. In\ncontrast, our method generates BEV features efficiently with only convolutions\nand matrix multiplications (MatMul). Specifically, we propose describing the\nBEV feature as the MatMul of image feature and a sparse Feature Transporting\nMatrix (FTM). A Prime Extraction module is then introduced to compress the\ndimension of image features and reduce FTM's sparsity. Moreover, we propose the\nRing \\& Ray Decomposition to replace the FTM with two matrices and reformulate\nour pipeline to reduce calculation further. Compared to existing methods,\nMatrixVT enjoys a faster speed and less memory footprint while remaining\ndeploy-friendly. Extensive experiments on the nuScenes benchmark demonstrate\nthat our method is highly efficient but obtains results on par with the SOTA\nmethod in object detection and map segmentation tasks", + "authors": "Hongyu Zhou, Zheng Ge, Zeming Li, Xiangyu Zhang", + "published": "2022-11-19", + "updated": "2022-11-19", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2306.09347v2", + "title": "Segment Any Point Cloud Sequences by Distilling Vision Foundation Models", + "abstract": "Recent advancements in vision foundation models (VFMs) have opened up new\npossibilities for versatile and efficient visual perception. In this work, we\nintroduce Seal, a novel framework that harnesses VFMs for segmenting diverse\nautomotive point cloud sequences. Seal exhibits three appealing properties: i)\nScalability: VFMs are directly distilled into point clouds, obviating the need\nfor annotations in either 2D or 3D during pretraining. ii) Consistency: Spatial\nand temporal relationships are enforced at both the camera-to-LiDAR and\npoint-to-segment regularization stages, facilitating cross-modal representation\nlearning. iii) Generalizability: Seal enables knowledge transfer in an\noff-the-shelf manner to downstream tasks involving diverse point clouds,\nincluding those from real/synthetic, low/high-resolution, large/small-scale,\nand clean/corrupted datasets. Extensive experiments conducted on eleven\ndifferent point cloud datasets showcase the effectiveness and superiority of\nSeal. Notably, Seal achieves a remarkable 45.0% mIoU on nuScenes after linear\nprobing, surpassing random initialization by 36.9% mIoU and outperforming prior\narts by 6.1% mIoU. 
Moreover, Seal demonstrates significant performance gains\nover existing methods across 20 different few-shot fine-tuning tasks on all\neleven tested point cloud datasets.", + "authors": "Youquan Liu, Lingdong Kong, Jun Cen, Runnan Chen, Wenwei Zhang, Liang Pan, Kai Chen, Ziwei Liu", + "published": "2023-06-15", + "updated": "2023-10-24", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG", + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.07171v1", + "title": "Cross-Modality Knowledge Distillation Network for Monocular 3D Object Detection", + "abstract": "Leveraging LiDAR-based detectors or real LiDAR point data to guide monocular\n3D detection has brought significant improvement, e.g., Pseudo-LiDAR methods.\nHowever, the existing methods usually apply non-end-to-end training strategies\nand insufficiently leverage the LiDAR information, where the rich potential of\nthe LiDAR data has not been well exploited. In this paper, we propose the\nCross-Modality Knowledge Distillation (CMKD) network for monocular 3D detection\nto efficiently and directly transfer the knowledge from LiDAR modality to image\nmodality on both features and responses. Moreover, we further extend CMKD as a\nsemi-supervised training framework by distilling knowledge from large-scale\nunlabeled data and significantly boost the performance. Until submission, CMKD\nranks $1^{st}$ among the monocular 3D detectors with publications on both KITTI\n$test$ set and Waymo $val$ set with significant performance gains compared to\nprevious state-of-the-art methods.", + "authors": "Yu Hong, Hang Dai, Yong Ding", + "published": "2022-11-14", + "updated": "2022-11-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.17111v1", + "title": "BEVPoolv2: A Cutting-edge Implementation of BEVDet Toward Deployment", + "abstract": "We release a new codebase version of the BEVDet, dubbed branch dev2.0. With\ndev2.0, we propose BEVPoolv2 upgrade the view transformation process from the\nperspective of engineering optimization, making it free from a huge burden in\nboth calculation and storage aspects. It achieves this by omitting the\ncalculation and preprocessing of the large frustum feature. As a result, it can\nbe processed within 0.82 ms even with a large input resolution of 640x1600,\nwhich is 15.1 times the previous fastest implementation. Besides, it is also\nless cache consumptive when compared with the previous implementation,\nnaturally as it no longer needs to store the large frustum feature. Last but\nnot least, this also makes the deployment to the other backend handy. We offer\nan example of deployment to the TensorRT backend in branch dev2.0 and show how\nfast the BEVDet paradigm can be processed on it. Other than BEVPoolv2, we also\nselect and integrate some substantial progress that was proposed in the past\nyear. As an example configuration, BEVDet4D-R50-Depth-CBGS scores 52.3 NDS on\nthe NuScenes validation set and can be processed at a speed of 16.4 FPS with\nthe PyTorch backend. 
The code has been released to facilitate the study on\nhttps://github.com/HuangJunJie2017/BEVDet/tree/dev2.0.", + "authors": "Junjie Huang, Guan Huang", + "published": "2022-11-30", + "updated": "2022-11-30", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2208.10145v1", + "title": "STS: Surround-view Temporal Stereo for Multi-view 3D Detection", + "abstract": "Learning accurate depth is essential to multi-view 3D object detection.\nRecent approaches mainly learn depth from monocular images, which confront\ninherent difficulties due to the ill-posed nature of monocular depth learning.\nInstead of using a sole monocular depth method, in this work, we propose a\nnovel Surround-view Temporal Stereo (STS) technique that leverages the geometry\ncorrespondence between frames across time to facilitate accurate depth\nlearning. Specifically, we regard the field of views from all cameras around\nthe ego vehicle as a unified view, namely surroundview, and conduct temporal\nstereo matching on it. The resulting geometrical correspondence between\ndifferent frames from STS is utilized and combined with the monocular depth to\nyield final depth prediction. Comprehensive experiments on nuScenes show that\nSTS greatly boosts 3D detection ability, notably for medium and long distance\nobjects. On BEVDepth with ResNet-50 backbone, STS improves mAP and NDS by 2.6%\nand 1.4%, respectively. Consistent improvements are observed when using a\nlarger backbone and a larger image resolution, demonstrating its effectiveness", + "authors": "Zengran Wang, Chen Min, Zheng Ge, Yinhao Li, Zeming Li, Hongyu Yang, Di Huang", + "published": "2022-08-22", + "updated": "2022-08-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2105.04906v3", + "title": "VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning", + "abstract": "Recent self-supervised methods for image representation learning are based on\nmaximizing the agreement between embedding vectors from different views of the\nsame image. A trivial solution is obtained when the encoder outputs constant\nvectors. This collapse problem is often avoided through implicit biases in the\nlearning architecture, that often lack a clear justification or interpretation.\nIn this paper, we introduce VICReg (Variance-Invariance-Covariance\nRegularization), a method that explicitly avoids the collapse problem with a\nsimple regularization term on the variance of the embeddings along each\ndimension individually. VICReg combines the variance term with a decorrelation\nmechanism based on redundancy reduction and covariance regularization, and\nachieves results on par with the state of the art on several downstream tasks.\nIn addition, we show that incorporating our new variance term into other\nmethods helps stabilize the training and leads to performance improvements.", + "authors": "Adrien Bardes, Jean Ponce, Yann LeCun", + "published": "2021-05-11", + "updated": "2022-01-28", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2111.07832v3", + "title": "iBOT: Image BERT Pre-Training with Online Tokenizer", + "abstract": "The success of language Transformers is primarily attributed to the pretext\ntask of masked language modeling (MLM), where texts are first tokenized into\nsemantically meaningful pieces. 
In this work, we study masked image modeling\n(MIM) and indicate the advantages and challenges of using a semantically\nmeaningful visual tokenizer. We present a self-supervised framework iBOT that\ncan perform masked prediction with an online tokenizer. Specifically, we\nperform self-distillation on masked patch tokens and take the teacher network\nas the online tokenizer, along with self-distillation on the class token to\nacquire visual semantics. The online tokenizer is jointly learnable with the\nMIM objective and dispenses with a multi-stage training pipeline where the\ntokenizer needs to be pre-trained beforehand. We show the prominence of iBOT by\nachieving an 82.3% linear probing accuracy and an 87.8% fine-tuning accuracy\nevaluated on ImageNet-1K. Beyond the state-of-the-art image classification\nresults, we underline emerging local semantic patterns, which helps the models\nto obtain strong robustness against common corruptions and achieve leading\nresults on dense downstream tasks, eg., object detection, instance\nsegmentation, and semantic segmentation.", + "authors": "Jinghao Zhou, Chen Wei, Huiyu Wang, Wei Shen, Cihang Xie, Alan Yuille, Tao Kong", + "published": "2021-11-15", + "updated": "2022-01-27", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2301.05709v2", + "title": "Self-Supervised Image-to-Point Distillation via Semantically Tolerant Contrastive Loss", + "abstract": "An effective framework for learning 3D representations for perception tasks\nis distilling rich self-supervised image features via contrastive learning.\nHowever, image-to point representation learning for autonomous driving datasets\nfaces two main challenges: 1) the abundance of self-similarity, which results\nin the contrastive losses pushing away semantically similar point and image\nregions and thus disturbing the local semantic structure of the learned\nrepresentations, and 2) severe class imbalance as pretraining gets dominated by\nover-represented classes. We propose to alleviate the self-similarity problem\nthrough a novel semantically tolerant image-to-point contrastive loss that\ntakes into consideration the semantic distance between positive and negative\nimage regions to minimize contrasting semantically similar point and image\nregions. Additionally, we address class imbalance by designing a class-agnostic\nbalanced loss that approximates the degree of class imbalance through an\naggregate sample-to-samples semantic similarity measure. We demonstrate that\nour semantically-tolerant contrastive loss with class balancing improves\nstate-of-the art 2D-to-3D representation learning in all evaluation settings on\n3D semantic segmentation. Our method consistently outperforms state-of-the-art\n2D-to-3D representation learning frameworks across a wide range of 2D\nself-supervised pretrained models.", + "authors": "Anas Mahmoud, Jordan S. K. Hu, Tianshu Kuai, Ali Harakeh, Liam Paull, Steven L. Waslander", + "published": "2023-01-12", + "updated": "2023-03-24", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2206.13294v2", + "title": "LaRa: Latents and Rays for Multi-Camera Bird's-Eye-View Semantic Segmentation", + "abstract": "Recent works in autonomous driving have widely adopted the bird's-eye-view\n(BEV) semantic map as an intermediate representation of the world. 
Online\nprediction of these BEV maps involves non-trivial operations such as\nmulti-camera data extraction as well as fusion and projection into a common\ntopview grid. This is usually done with error-prone geometric operations (e.g.,\nhomography or back-projection from monocular depth estimation) or expensive\ndirect dense mapping between image pixels and pixels in BEV (e.g., with MLP or\nattention). In this work, we present 'LaRa', an efficient encoder-decoder,\ntransformer-based model for vehicle semantic segmentation from multiple\ncameras. Our approach uses a system of cross-attention to aggregate information\nover multiple sensors into a compact, yet rich, collection of latent\nrepresentations. These latent representations, after being processed by a\nseries of self-attention blocks, are then reprojected with a second\ncross-attention in the BEV space. We demonstrate that our model outperforms the\nbest previous works using transformers on nuScenes. The code and trained models\nare available at https://github.com/valeoai/LaRa", + "authors": "Florent Bartoccioni, \u00c9loi Zablocki, Andrei Bursuc, Patrick P\u00e9rez, Matthieu Cord, Karteek Alahari", + "published": "2022-06-27", + "updated": "2022-11-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.RO", + "68T45" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2108.12178v1", + "title": "MultiSiam: Self-supervised Multi-instance Siamese Representation Learning for Autonomous Driving", + "abstract": "Autonomous driving has attracted much attention over the years but turns out\nto be harder than expected, probably due to the difficulty of labeled data\ncollection for model training. Self-supervised learning (SSL), which leverages\nunlabeled data only for representation learning, might be a promising way to\nimprove model performance. Existing SSL methods, however, usually rely on the\nsingle-centric-object guarantee, which may not be applicable for multi-instance\ndatasets such as street scenes. To alleviate this limitation, we raise two\nissues to solve: (1) how to define positive samples for cross-view consistency\nand (2) how to measure similarity in multi-instance circumstances. We first\nadopt an IoU threshold during random cropping to transfer global-inconsistency\nto local-consistency. Then, we propose two feature alignment methods to enable\n2D feature maps for multi-instance similarity measurement. Additionally, we\nadopt intra-image clustering with self-attention for further mining intra-image\nsimilarity and translation-invariance. Experiments show that, when pre-trained\non Waymo dataset, our method called Multi-instance Siamese Network (MultiSiam)\nremarkably improves generalization ability and achieves state-of-the-art\ntransfer performance on autonomous driving benchmarks, including Cityscapes and\nBDD100K, while existing SSL counterparts like MoCo, MoCo-v2, and BYOL show\nsignificant performance drop. By pre-training on SODA10M, a large-scale\nautonomous driving dataset, MultiSiam exceeds the ImageNet pre-trained MoCo-v2,\ndemonstrating the potential of domain-specific pre-training. 
Code will be\navailable at https://github.com/KaiChen1998/MultiSiam.", + "authors": "Kai Chen, Lanqing Hong, Hang Xu, Zhenguo Li, Dit-Yan Yeung", + "published": "2021-08-27", + "updated": "2021-08-27", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2307.04106v2", + "title": "Parametric Depth Based Feature Representation Learning for Object Detection and Segmentation in Bird's Eye View", + "abstract": "Recent vision-only perception models for autonomous driving achieved\npromising results by encoding multi-view image features into Bird's-Eye-View\n(BEV) space. A critical step and the main bottleneck of these methods is\ntransforming image features into the BEV coordinate frame. This paper focuses\non leveraging geometry information, such as depth, to model such feature\ntransformation. Existing works rely on non-parametric depth distribution\nmodeling leading to significant memory consumption, or ignore the geometry\ninformation to address this problem. In contrast, we propose to use parametric\ndepth distribution modeling for feature transformation. We first lift the 2D\nimage features to the 3D space defined for the ego vehicle via a predicted\nparametric depth distribution for each pixel in each view. Then, we aggregate\nthe 3D feature volume based on the 3D space occupancy derived from depth to the\nBEV frame. Finally, we use the transformed features for downstream tasks such\nas object detection and semantic segmentation. Existing semantic segmentation\nmethods do also suffer from an hallucination problem as they do not take\nvisibility information into account. This hallucination can be particularly\nproblematic for subsequent modules such as control and planning. To mitigate\nthe issue, our method provides depth uncertainty and reliable visibility-aware\nestimations. We further leverage our parametric depth modeling to present a\nnovel visibility-aware evaluation metric that, when taken into account, can\nmitigate the hallucination problem. Extensive experiments on object detection\nand semantic segmentation on the nuScenes datasets demonstrate that our method\noutperforms existing methods on both tasks.", + "authors": "Jiayu Yang, Enze Xie, Miaomiao Liu, Jose M. Alvarez", + "published": "2023-07-09", + "updated": "2023-07-11", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2203.17270v2", + "title": "BEVFormer: Learning Bird's-Eye-View Representation from Multi-Camera Images via Spatiotemporal Transformers", + "abstract": "3D visual perception tasks, including 3D detection and map segmentation based\non multi-camera images, are essential for autonomous driving systems. In this\nwork, we present a new framework termed BEVFormer, which learns unified BEV\nrepresentations with spatiotemporal transformers to support multiple autonomous\ndriving perception tasks. In a nutshell, BEVFormer exploits both spatial and\ntemporal information by interacting with spatial and temporal space through\npredefined grid-shaped BEV queries. To aggregate spatial information, we design\nspatial cross-attention that each BEV query extracts the spatial features from\nthe regions of interest across camera views. For temporal information, we\npropose temporal self-attention to recurrently fuse the history BEV\ninformation. 
Our approach achieves the new state-of-the-art 56.9\\% in terms of\nNDS metric on the nuScenes \\texttt{test} set, which is 9.0 points higher than\nprevious best arts and on par with the performance of LiDAR-based baselines. We\nfurther show that BEVFormer remarkably improves the accuracy of velocity\nestimation and recall of objects under low visibility conditions. The code is\navailable at \\url{https://github.com/zhiqi-li/BEVFormer}.", + "authors": "Zhiqi Li, Wenhai Wang, Hongyang Li, Enze Xie, Chonghao Sima, Tong Lu, Qiao Yu, Jifeng Dai", + "published": "2022-03-31", + "updated": "2022-07-13", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1912.01991v1", + "title": "Self-Supervised Learning of Pretext-Invariant Representations", + "abstract": "The goal of self-supervised learning from images is to construct image\nrepresentations that are semantically meaningful via pretext tasks that do not\nrequire semantic annotations for a large training set of images. Many pretext\ntasks lead to representations that are covariant with image transformations. We\nargue that, instead, semantic representations ought to be invariant under such\ntransformations. Specifically, we develop Pretext-Invariant Representation\nLearning (PIRL, pronounced as \"pearl\") that learns invariant representations\nbased on pretext tasks. We use PIRL with a commonly used pretext task that\ninvolves solving jigsaw puzzles. We find that PIRL substantially improves the\nsemantic quality of the learned image representations. Our approach sets a new\nstate-of-the-art in self-supervised learning from images on several popular\nbenchmarks for self-supervised learning. Despite being unsupervised, PIRL\noutperforms supervised pre-training in learning image representations for\nobject detection. Altogether, our results demonstrate the potential of\nself-supervised learning of image representations with good invariance\nproperties.", + "authors": "Ishan Misra, Laurens van der Maaten", + "published": "2019-12-04", + "updated": "2019-12-04", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2106.08254v2", + "title": "BEiT: BERT Pre-Training of Image Transformers", + "abstract": "We introduce a self-supervised vision representation model BEiT, which stands\nfor Bidirectional Encoder representation from Image Transformers. Following\nBERT developed in the natural language processing area, we propose a masked\nimage modeling task to pretrain vision Transformers. Specifically, each image\nhas two views in our pre-training, i.e, image patches (such as 16x16 pixels),\nand visual tokens (i.e., discrete tokens). We first \"tokenize\" the original\nimage into visual tokens. Then we randomly mask some image patches and fed them\ninto the backbone Transformer. The pre-training objective is to recover the\noriginal visual tokens based on the corrupted image patches. After pre-training\nBEiT, we directly fine-tune the model parameters on downstream tasks by\nappending task layers upon the pretrained encoder. Experimental results on\nimage classification and semantic segmentation show that our model achieves\ncompetitive results with previous pre-training methods. For example, base-size\nBEiT achieves 83.2% top-1 accuracy on ImageNet-1K, significantly outperforming\nfrom-scratch DeiT training (81.8%) with the same setup. 
Moreover, large-size\nBEiT obtains 86.3% only using ImageNet-1K, even outperforming ViT-L with\nsupervised pre-training on ImageNet-22K (85.2%). The code and pretrained models\nare available at https://aka.ms/beit.", + "authors": "Hangbo Bao, Li Dong, Songhao Piao, Furu Wei", + "published": "2021-06-15", + "updated": "2022-09-03", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2203.05625v3", + "title": "PETR: Position Embedding Transformation for Multi-View 3D Object Detection", + "abstract": "In this paper, we develop position embedding transformation (PETR) for\nmulti-view 3D object detection. PETR encodes the position information of 3D\ncoordinates into image features, producing the 3D position-aware features.\nObject query can perceive the 3D position-aware features and perform end-to-end\nobject detection. PETR achieves state-of-the-art performance (50.4% NDS and\n44.1% mAP) on standard nuScenes dataset and ranks 1st place on the benchmark.\nIt can serve as a simple yet strong baseline for future research. Code is\navailable at \\url{https://github.com/megvii-research/PETR}.", + "authors": "Yingfei Liu, Tiancai Wang, Xiangyu Zhang, Jian Sun", + "published": "2022-03-10", + "updated": "2022-07-19", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1603.08511v5", + "title": "Colorful Image Colorization", + "abstract": "Given a grayscale photograph as input, this paper attacks the problem of\nhallucinating a plausible color version of the photograph. This problem is\nclearly underconstrained, so previous approaches have either relied on\nsignificant user interaction or resulted in desaturated colorizations. We\npropose a fully automatic approach that produces vibrant and realistic\ncolorizations. We embrace the underlying uncertainty of the problem by posing\nit as a classification task and use class-rebalancing at training time to\nincrease the diversity of colors in the result. The system is implemented as a\nfeed-forward pass in a CNN at test time and is trained on over a million color\nimages. We evaluate our algorithm using a \"colorization Turing test,\" asking\nhuman participants to choose between a generated and ground truth color image.\nOur method successfully fools humans on 32% of the trials, significantly higher\nthan previous methods. Moreover, we show that colorization can be a powerful\npretext task for self-supervised feature learning, acting as a cross-channel\nencoder. This approach results in state-of-the-art performance on several\nfeature learning benchmarks.", + "authors": "Richard Zhang, Phillip Isola, Alexei A. Efros", + "published": "2016-03-28", + "updated": "2016-10-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2203.11160v2", + "title": "Drive&Segment: Unsupervised Semantic Segmentation of Urban Scenes via Cross-modal Distillation", + "abstract": "This work investigates learning pixel-wise semantic image segmentation in\nurban scenes without any manual annotation, just from the raw non-curated data\ncollected by cars which, equipped with cameras and LiDAR sensors, drive around\na city. Our contributions are threefold. First, we propose a novel method for\ncross-modal unsupervised learning of semantic image segmentation by leveraging\nsynchronized LiDAR and image data. 
The key ingredient of our method is the use\nof an object proposal module that analyzes the LiDAR point cloud to obtain\nproposals for spatially consistent objects. Second, we show that these 3D\nobject proposals can be aligned with the input images and reliably clustered\ninto semantically meaningful pseudo-classes. Finally, we develop a cross-modal\ndistillation approach that leverages image data partially annotated with the\nresulting pseudo-classes to train a transformer-based model for image semantic\nsegmentation. We show the generalization capabilities of our method by testing\non four different testing datasets (Cityscapes, Dark Zurich, Nighttime Driving\nand ACDC) without any finetuning, and demonstrate significant improvements\ncompared to the current state of the art on this problem. See project webpage\nhttps://vobecant.github.io/DriveAndSegment/ for the code and more.", + "authors": "Antonin Vobecky, David Hurych, Oriane Sim\u00e9oni, Spyros Gidaris, Andrei Bursuc, Patrick P\u00e9rez, Josef Sivic", + "published": "2022-03-21", + "updated": "2024-02-21", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2212.13979v1", + "title": "TiG-BEV: Multi-view BEV 3D Object Detection via Target Inner-Geometry Learning", + "abstract": "To achieve accurate and low-cost 3D object detection, existing methods\npropose to benefit camera-based multi-view detectors with spatial cues provided\nby the LiDAR modality, e.g., dense depth supervision and bird-eye-view (BEV)\nfeature distillation. However, they directly conduct point-to-point mimicking\nfrom LiDAR to camera, which neglects the inner-geometry of foreground targets\nand suffers from the modal gap between 2D-3D features. In this paper, we\npropose the learning scheme of Target Inner-Geometry from the LiDAR modality\ninto camera-based BEV detectors for both dense depth and BEV features, termed\nas TiG-BEV. First, we introduce an inner-depth supervision module to learn the\nlow-level relative depth relations between different foreground pixels. This\nenables the camera-based detector to better understand the object-wise spatial\nstructures. Second, we design an inner-feature BEV distillation module to\nimitate the high-level semantics of different keypoints within foreground\ntargets. To further alleviate the BEV feature gap between two modalities, we\nadopt both inter-channel and inter-keypoint distillation for feature-similarity\nmodeling. With our target inner-geometry distillation, TiG-BEV can effectively\nboost BEVDepth by +2.3% NDS and +2.4% mAP, along with BEVDet by +9.1% NDS and\n+10.3% mAP on nuScenes val set. Code will be available at\nhttps://github.com/ADLab3Ds/TiG-BEV.", + "authors": "Peixiang Huang, Li Liu, Renrui Zhang, Song Zhang, Xinli Xu, Baichao Wang, Guoyi Liu", + "published": "2022-12-28", + "updated": "2022-12-28", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1505.05192v3", + "title": "Unsupervised Visual Representation Learning by Context Prediction", + "abstract": "This work explores the use of spatial context as a source of free and\nplentiful supervisory signal for training a rich visual representation. Given\nonly a large, unlabeled image collection, we extract random pairs of patches\nfrom each image and train a convolutional neural net to predict the position of\nthe second patch relative to the first. 
We argue that doing well on this task\nrequires the model to learn to recognize objects and their parts. We\ndemonstrate that the feature representation learned using this within-image\ncontext indeed captures visual similarity across images. For example, this\nrepresentation allows us to perform unsupervised visual discovery of objects\nlike cats, people, and even birds from the Pascal VOC 2011 detection dataset.\nFurthermore, we show that the learned ConvNet can be used in the R-CNN\nframework and provides a significant boost over a randomly-initialized ConvNet,\nresulting in state-of-the-art performance among algorithms which use only\nPascal-provided training set annotations.", + "authors": "Carl Doersch, Abhinav Gupta, Alexei A. Efros", + "published": "2015-05-19", + "updated": "2016-01-16", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2304.03105v2", + "title": "Geometric-aware Pretraining for Vision-centric 3D Object Detection", + "abstract": "Multi-camera 3D object detection for autonomous driving is a challenging\nproblem that has garnered notable attention from both academia and industry. An\nobstacle encountered in vision-based techniques involves the precise extraction\nof geometry-conscious features from RGB images. Recent approaches have utilized\ngeometric-aware image backbones pretrained on depth-relevant tasks to acquire\nspatial information. However, these approaches overlook the critical aspect of\nview transformation, resulting in inadequate performance due to the\nmisalignment of spatial knowledge between the image backbone and view\ntransformation. To address this issue, we propose a novel geometric-aware\npretraining framework called GAPretrain. Our approach incorporates spatial and\nstructural cues to camera networks by employing the geometric-rich modality as\nguidance during the pretraining phase. The transference of modal-specific\nattributes across different modalities is non-trivial, but we bridge this gap\nby using a unified bird's-eye-view (BEV) representation and structural hints\nderived from LiDAR point clouds to facilitate the pretraining process.\nGAPretrain serves as a plug-and-play solution that can be flexibly applied to\nmultiple state-of-the-art detectors. Our experiments demonstrate the\neffectiveness and generalization ability of the proposed method. We achieve\n46.2 mAP and 55.5 NDS on the nuScenes val set using the BEVFormer method, with\na gain of 2.7 and 2.1 points, respectively. We also conduct experiments on\nvarious image backbones and view transformations to validate the efficacy of\nour approach. Code will be released at\nhttps://github.com/OpenDriveLab/BEVPerception-Survey-Recipe.", + "authors": "Linyan Huang, Huijie Wang, Jia Zeng, Shengchuan Zhang, Liujuan Cao, Junchi Yan, Hongyang Li", + "published": "2023-04-06", + "updated": "2023-04-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2103.00020v1", + "title": "Learning Transferable Visual Models From Natural Language Supervision", + "abstract": "State-of-the-art computer vision systems are trained to predict a fixed set\nof predetermined object categories. This restricted form of supervision limits\ntheir generality and usability since additional labeled data is needed to\nspecify any other visual concept. Learning directly from raw text about images\nis a promising alternative which leverages a much broader source of\nsupervision. 
We demonstrate that the simple pre-training task of predicting\nwhich caption goes with which image is an efficient and scalable way to learn\nSOTA image representations from scratch on a dataset of 400 million (image,\ntext) pairs collected from the internet. After pre-training, natural language\nis used to reference learned visual concepts (or describe new ones) enabling\nzero-shot transfer of the model to downstream tasks. We study the performance\nof this approach by benchmarking on over 30 different existing computer vision\ndatasets, spanning tasks such as OCR, action recognition in videos,\ngeo-localization, and many types of fine-grained object classification. The\nmodel transfers non-trivially to most tasks and is often competitive with a\nfully supervised baseline without the need for any dataset specific training.\nFor instance, we match the accuracy of the original ResNet-50 on ImageNet\nzero-shot without needing to use any of the 1.28 million training examples it\nwas trained on. We release our code and pre-trained model weights at\nhttps://github.com/OpenAI/CLIP.", + "authors": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever", + "published": "2021-02-26", + "updated": "2021-02-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1803.07728v1", + "title": "Unsupervised Representation Learning by Predicting Image Rotations", + "abstract": "Over the last years, deep convolutional neural networks (ConvNets) have\ntransformed the field of computer vision thanks to their unparalleled capacity\nto learn high level semantic image features. However, in order to successfully\nlearn those features, they usually require massive amounts of manually labeled\ndata, which is both expensive and impractical to scale. Therefore, unsupervised\nsemantic feature learning, i.e., learning without requiring manual annotation\neffort, is of crucial importance in order to successfully harvest the vast\namount of visual data that are available today. In our work we propose to learn\nimage features by training ConvNets to recognize the 2d rotation that is\napplied to the image that it gets as input. We demonstrate both qualitatively\nand quantitatively that this apparently simple task actually provides a very\npowerful supervisory signal for semantic feature learning. We exhaustively\nevaluate our method in various unsupervised feature learning benchmarks and we\nexhibit in all of them state-of-the-art performance. Specifically, our results\non those benchmarks demonstrate dramatic improvements w.r.t. prior\nstate-of-the-art approaches in unsupervised representation learning and thus\nsignificantly close the gap with supervised feature learning. For instance, in\nPASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model\nachieves the state-of-the-art (among unsupervised methods) mAP of 54.4% that is\nonly 2.4 points lower from the supervised case. We get similarly striking\nresults when we transfer our unsupervised learned features on various other\ntasks, such as ImageNet classification, PASCAL classification, PASCAL\nsegmentation, and CIFAR-10 classification. 
The code and models of our paper\nwill be published on: https://github.com/gidariss/FeatureLearningRotNet .", + "authors": "Spyros Gidaris, Praveer Singh, Nikos Komodakis", + "published": "2018-03-21", + "updated": "2018-03-21", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2206.10092v2", + "title": "BEVDepth: Acquisition of Reliable Depth for Multi-view 3D Object Detection", + "abstract": "In this research, we propose a new 3D object detector with a trustworthy\ndepth estimation, dubbed BEVDepth, for camera-based Bird's-Eye-View (BEV) 3D\nobject detection. Our work is based on a key observation -- depth estimation in\nrecent approaches is surprisingly inadequate given the fact that depth is\nessential to camera 3D detection. Our BEVDepth resolves this by leveraging\nexplicit depth supervision. A camera-awareness depth estimation module is also\nintroduced to facilitate the depth predicting capability. Besides, we design a\nnovel Depth Refinement Module to counter the side effects carried by imprecise\nfeature unprojection. Aided by customized Efficient Voxel Pooling and\nmulti-frame mechanism, BEVDepth achieves the new state-of-the-art 60.9% NDS on\nthe challenging nuScenes test set while maintaining high efficiency. For the\nfirst time, the NDS score of a camera model reaches 60%.", + "authors": "Yinhao Li, Zheng Ge, Guanyi Yu, Jinrong Yang, Zengran Wang, Yukang Shi, Jianjian Sun, Zeming Li", + "published": "2022-06-21", + "updated": "2022-11-30", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2006.07733v3", + "title": "Bootstrap your own latent: A new approach to self-supervised Learning", + "abstract": "We introduce Bootstrap Your Own Latent (BYOL), a new approach to\nself-supervised image representation learning. BYOL relies on two neural\nnetworks, referred to as online and target networks, that interact and learn\nfrom each other. From an augmented view of an image, we train the online\nnetwork to predict the target network representation of the same image under a\ndifferent augmented view. At the same time, we update the target network with a\nslow-moving average of the online network. While state-of-the art methods rely\non negative pairs, BYOL achieves a new state of the art without them. BYOL\nreaches $74.3\\%$ top-1 classification accuracy on ImageNet using a linear\nevaluation with a ResNet-50 architecture and $79.6\\%$ with a larger ResNet. We\nshow that BYOL performs on par or better than the current state of the art on\nboth transfer and semi-supervised benchmarks. Our implementation and pretrained\nmodels are given on GitHub.", + "authors": "Jean-Bastien Grill, Florian Strub, Florent Altch\u00e9, Corentin Tallec, Pierre H. Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, R\u00e9mi Munos, Michal Valko", + "published": "2020-06-13", + "updated": "2020-09-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2112.11790v3", + "title": "BEVDet: High-performance Multi-camera 3D Object Detection in Bird-Eye-View", + "abstract": "Autonomous driving perceives its surroundings for decision making, which is\none of the most complex scenarios in visual perception. 
The success of paradigm\ninnovation in solving the 2D object detection task inspires us to seek an\nelegant, feasible, and scalable paradigm for fundamentally pushing the\nperformance boundary in this area. To this end, we contribute the BEVDet\nparadigm in this paper. BEVDet performs 3D object detection in Bird-Eye-View\n(BEV), where most target values are defined and route planning can be handily\nperformed. We merely reuse existing modules to build its framework but\nsubstantially develop its performance by constructing an exclusive data\naugmentation strategy and upgrading the Non-Maximum Suppression strategy. In\nthe experiment, BEVDet offers an excellent trade-off between accuracy and\ntime-efficiency. As a fast version, BEVDet-Tiny scores 31.2% mAP and 39.2% NDS\non the nuScenes val set. It is comparable with FCOS3D, but requires just 11%\ncomputational budget of 215.3 GFLOPs and runs 9.2 times faster at 15.6 FPS.\nAnother high-precision version dubbed BEVDet-Base scores 39.3% mAP and 47.2%\nNDS, significantly exceeding all published results. With a comparable inference\nspeed, it surpasses FCOS3D by a large margin of +9.8% mAP and +10.0% NDS. The\nsource code is publicly available for further research at\nhttps://github.com/HuangJunJie2017/BEVDet .", + "authors": "Junjie Huang, Guan Huang, Zheng Zhu, Yun Ye, Dalong Du", + "published": "2021-12-22", + "updated": "2022-06-16", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.10439v1", + "title": "BEVFormer v2: Adapting Modern Image Backbones to Bird's-Eye-View Recognition via Perspective Supervision", + "abstract": "We present a novel bird's-eye-view (BEV) detector with perspective\nsupervision, which converges faster and better suits modern image backbones.\nExisting state-of-the-art BEV detectors are often tied to certain depth\npre-trained backbones like VoVNet, hindering the synergy between booming image\nbackbones and BEV detectors. To address this limitation, we prioritize easing\nthe optimization of BEV detectors by introducing perspective space supervision.\nTo this end, we propose a two-stage BEV detector, where proposals from the\nperspective head are fed into the bird's-eye-view head for final predictions.\nTo evaluate the effectiveness of our model, we conduct extensive ablation\nstudies focusing on the form of supervision and the generality of the proposed\ndetector. The proposed method is verified with a wide spectrum of traditional\nand modern image backbones and achieves new SoTA results on the large-scale\nnuScenes dataset. The code shall be released soon.", + "authors": "Chenyu Yang, Yuntao Chen, Hao Tian, Chenxin Tao, Xizhou Zhu, Zhaoxiang Zhang, Gao Huang, Hongyang Li, Yu Qiao, Lewei Lu, Jie Zhou, Jifeng Dai", + "published": "2022-11-18", + "updated": "2022-11-18", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2204.05088v2", + "title": "M$^2$BEV: Multi-Camera Joint 3D Detection and Segmentation with Unified Birds-Eye View Representation", + "abstract": "In this paper, we propose M$^2$BEV, a unified framework that jointly performs\n3D object detection and map segmentation in the Birds Eye View~(BEV) space with\nmulti-camera image inputs. Unlike the majority of previous works which\nseparately process detection and segmentation, M$^2$BEV infers both tasks with\na unified model and improves efficiency. 
M$^2$BEV efficiently transforms\nmulti-view 2D image features into the 3D BEV feature in ego-car coordinates.\nSuch BEV representation is important as it enables different tasks to share a\nsingle encoder. Our framework further contains four important designs that\nbenefit both accuracy and efficiency: (1) An efficient BEV encoder design that\nreduces the spatial dimension of a voxel feature map. (2) A dynamic box\nassignment strategy that uses learning-to-match to assign ground-truth 3D boxes\nwith anchors. (3) A BEV centerness re-weighting that reinforces with larger\nweights for more distant predictions, and (4) Large-scale 2D detection\npre-training and auxiliary supervision. We show that these designs\nsignificantly benefit the ill-posed camera-based 3D perception tasks where\ndepth information is missing. M$^2$BEV is memory efficient, allowing\nsignificantly higher resolution images as input, with faster inference speed.\nExperiments on nuScenes show that M$^2$BEV achieves state-of-the-art results in\nboth 3D object detection and BEV segmentation, with the best single model\nachieving 42.5 mAP and 57.0 mIoU in these two tasks, respectively.", + "authors": "Enze Xie, Zhiding Yu, Daquan Zhou, Jonah Philion, Anima Anandkumar, Sanja Fidler, Ping Luo, Jose M. Alvarez", + "published": "2022-04-11", + "updated": "2022-04-19", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2304.07193v2", + "title": "DINOv2: Learning Robust Visual Features without Supervision", + "abstract": "The recent breakthroughs in natural language processing for model pretraining\non large quantities of data have opened the way for similar foundation models\nin computer vision. These models could greatly simplify the use of images in\nany system by producing all-purpose visual features, i.e., features that work\nacross image distributions and tasks without finetuning. This work shows that\nexisting pretraining methods, especially self-supervised methods, can produce\nsuch features if trained on enough curated data from diverse sources. We\nrevisit existing approaches and combine different techniques to scale our\npretraining in terms of data and model size. Most of the technical\ncontributions aim at accelerating and stabilizing the training at scale. In\nterms of data, we propose an automatic pipeline to build a dedicated, diverse,\nand curated image dataset instead of uncurated data, as typically done in the\nself-supervised literature. 
In terms of models, we train a ViT model\n(Dosovitskiy et al., 2020) with 1B parameters and distill it into a series of\nsmaller models that surpass the best available all-purpose features, OpenCLIP\n(Ilharco et al., 2021) on most of the benchmarks at image and pixel levels.", + "authors": "Maxime Oquab, Timoth\u00e9e Darcet, Th\u00e9o Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Mahmoud Assran, Nicolas Ballas, Wojciech Galuba, Russell Howes, Po-Yao Huang, Shang-Wen Li, Ishan Misra, Michael Rabbat, Vasu Sharma, Gabriel Synnaeve, Hu Xu, Herv\u00e9 Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, Piotr Bojanowski", + "published": "2023-04-14", + "updated": "2024-02-02", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2205.02833v1", + "title": "Cross-view Transformers for real-time Map-view Semantic Segmentation", + "abstract": "We present cross-view transformers, an efficient attention-based model for\nmap-view semantic segmentation from multiple cameras. Our architecture\nimplicitly learns a mapping from individual camera views into a canonical\nmap-view representation using a camera-aware cross-view attention mechanism.\nEach camera uses positional embeddings that depend on its intrinsic and\nextrinsic calibration. These embeddings allow a transformer to learn the\nmapping across different views without ever explicitly modeling it\ngeometrically. The architecture consists of a convolutional image encoder for\neach view and cross-view transformer layers to infer a map-view semantic\nsegmentation. Our model is simple, easily parallelizable, and runs in\nreal-time. The presented architecture performs at state-of-the-art on the\nnuScenes dataset, with 4x faster inference speeds. Code is available at\nhttps://github.com/bradyz/cross_view_transformers.", + "authors": "Brady Zhou, Philipp Kr\u00e4henb\u00fchl", + "published": "2022-05-05", + "updated": "2022-05-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2206.15398v6", + "title": "PolarFormer: Multi-camera 3D Object Detection with Polar Transformer", + "abstract": "3D object detection in autonomous driving aims to reason \"what\" and \"where\"\nthe objects of interest present in a 3D world. Following the conventional\nwisdom of previous 2D object detection, existing methods often adopt the\ncanonical Cartesian coordinate system with perpendicular axis. However, we\nconjugate that this does not fit the nature of the ego car's perspective, as\neach onboard camera perceives the world in shape of wedge intrinsic to the\nimaging geometry with radical (non-perpendicular) axis. Hence, in this paper we\nadvocate the exploitation of the Polar coordinate system and propose a new\nPolar Transformer (PolarFormer) for more accurate 3D object detection in the\nbird's-eye-view (BEV) taking as input only multi-camera 2D images.\nSpecifically, we design a cross attention based Polar detection head without\nrestriction to the shape of input structure to deal with irregular Polar grids.\nFor tackling the unconstrained object scale variations along Polar's distance\ndimension, we further introduce a multi-scalePolar representation learning\nstrategy. 
As a result, our model can make best use of the Polar representation\nrasterized via attending to the corresponding image observation in a\nsequence-to-sequence fashion subject to the geometric constraints. Thorough\nexperiments on the nuScenes dataset demonstrate that our PolarFormer\noutperforms significantly state-of-the-art 3D object detection alternatives.", + "authors": "Yanqin Jiang, Li Zhang, Zhenwei Miao, Xiatian Zhu, Jin Gao, Weiming Hu, Yu-Gang Jiang", + "published": "2022-06-30", + "updated": "2023-01-16", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2006.09882v5", + "title": "Unsupervised Learning of Visual Features by Contrasting Cluster Assignments", + "abstract": "Unsupervised image representations have significantly reduced the gap with\nsupervised pretraining, notably with the recent achievements of contrastive\nlearning methods. These contrastive methods typically work online and rely on a\nlarge number of explicit pairwise feature comparisons, which is computationally\nchallenging. In this paper, we propose an online algorithm, SwAV, that takes\nadvantage of contrastive methods without requiring to compute pairwise\ncomparisons. Specifically, our method simultaneously clusters the data while\nenforcing consistency between cluster assignments produced for different\naugmentations (or views) of the same image, instead of comparing features\ndirectly as in contrastive learning. Simply put, we use a swapped prediction\nmechanism where we predict the cluster assignment of a view from the\nrepresentation of another view. Our method can be trained with large and small\nbatches and can scale to unlimited amounts of data. Compared to previous\ncontrastive methods, our method is more memory efficient since it does not\nrequire a large memory bank or a special momentum network. In addition, we also\npropose a new data augmentation strategy, multi-crop, that uses a mix of views\nwith different resolutions in place of two full-resolution views, without\nincreasing the memory or compute requirements much. We validate our findings by\nachieving 75.3% top-1 accuracy on ImageNet with ResNet-50, as well as\nsurpassing supervised pretraining on all the considered transfer tasks.", + "authors": "Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, Armand Joulin", + "published": "2020-06-17", + "updated": "2021-01-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2301.08243v3", + "title": "Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture", + "abstract": "This paper demonstrates an approach for learning highly semantic image\nrepresentations without relying on hand-crafted data-augmentations. We\nintroduce the Image-based Joint-Embedding Predictive Architecture (I-JEPA), a\nnon-generative approach for self-supervised learning from images. The idea\nbehind I-JEPA is simple: from a single context block, predict the\nrepresentations of various target blocks in the same image. A core design\nchoice to guide I-JEPA towards producing semantic representations is the\nmasking strategy; specifically, it is crucial to (a) sample target blocks with\nsufficiently large scale (semantic), and to (b) use a sufficiently informative\n(spatially distributed) context block. Empirically, when combined with Vision\nTransformers, we find I-JEPA to be highly scalable. 
For instance, we train a\nViT-Huge/14 on ImageNet using 16 A100 GPUs in under 72 hours to achieve strong\ndownstream performance across a wide range of tasks, from linear classification\nto object counting and depth prediction.", + "authors": "Mahmoud Assran, Quentin Duval, Ishan Misra, Piotr Bojanowski, Pascal Vincent, Michael Rabbat, Yann LeCun, Nicolas Ballas", + "published": "2023-01-19", + "updated": "2023-04-13", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG", + "eess.IV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1911.05722v3", + "title": "Momentum Contrast for Unsupervised Visual Representation Learning", + "abstract": "We present Momentum Contrast (MoCo) for unsupervised visual representation\nlearning. From a perspective on contrastive learning as dictionary look-up, we\nbuild a dynamic dictionary with a queue and a moving-averaged encoder. This\nenables building a large and consistent dictionary on-the-fly that facilitates\ncontrastive unsupervised learning. MoCo provides competitive results under the\ncommon linear protocol on ImageNet classification. More importantly, the\nrepresentations learned by MoCo transfer well to downstream tasks. MoCo can\noutperform its supervised pre-training counterpart in 7 detection/segmentation\ntasks on PASCAL VOC, COCO, and other datasets, sometimes surpassing it by large\nmargins. This suggests that the gap between unsupervised and supervised\nrepresentation learning has been largely closed in many vision tasks.", + "authors": "Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, Ross Girshick", + "published": "2019-11-13", + "updated": "2020-03-23", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2303.10209v1", + "title": "CAPE: Camera View Position Embedding for Multi-View 3D Object Detection", + "abstract": "In this paper, we address the problem of detecting 3D objects from multi-view\nimages. Current query-based methods rely on global 3D position embeddings (PE)\nto learn the geometric correspondence between images and 3D space. We claim\nthat directly interacting 2D image features with global 3D PE could increase\nthe difficulty of learning view transformation due to the variation of camera\nextrinsics. Thus we propose a novel method based on CAmera view Position\nEmbedding, called CAPE. We form the 3D position embeddings under the local\ncamera-view coordinate system instead of the global coordinate system, such\nthat 3D position embedding is free of encoding camera extrinsic parameters.\nFurthermore, we extend our CAPE to temporal modeling by exploiting the object\nqueries of previous frames and encoding the ego-motion for boosting 3D object\ndetection. CAPE achieves state-of-the-art performance (61.0% NDS and 52.5% mAP)\namong all LiDAR-free methods on nuScenes dataset. 
Codes and models are\navailable on \\href{https://github.com/PaddlePaddle/Paddle3D}{Paddle3D} and\n\\href{https://github.com/kaixinbear/CAPE}{PyTorch Implementation}.", + "authors": "Kaixin Xiong, Shi Gong, Xiaoqing Ye, Xiao Tan, Ji Wan, Errui Ding, Jingdong Wang, Xiang Bai", + "published": "2023-03-17", + "updated": "2023-03-17", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2209.10248v1", + "title": "BEVStereo: Enhancing Depth Estimation in Multi-view 3D Object Detection with Dynamic Temporal Stereo", + "abstract": "Bounded by the inherent ambiguity of depth perception, contemporary\ncamera-based 3D object detection methods fall into the performance bottleneck.\nIntuitively, leveraging temporal multi-view stereo (MVS) technology is the\nnatural knowledge for tackling this ambiguity. However, traditional attempts of\nMVS are flawed in two aspects when applying to 3D object detection scenes: 1)\nThe affinity measurement among all views suffers expensive computation cost; 2)\nIt is difficult to deal with outdoor scenarios where objects are often mobile.\nTo this end, we introduce an effective temporal stereo method to dynamically\nselect the scale of matching candidates, enable to significantly reduce\ncomputation overhead. Going one step further, we design an iterative algorithm\nto update more valuable candidates, making it adaptive to moving candidates. We\ninstantiate our proposed method to multi-view 3D detector, namely BEVStereo.\nBEVStereo achieves the new state-of-the-art performance (i.e., 52.5% mAP and\n61.0% NDS) on the camera-only track of nuScenes dataset. Meanwhile, extensive\nexperiments reflect our method can deal with complex outdoor scenarios better\nthan contemporary MVS approaches. Codes have been released at\nhttps://github.com/Megvii-BaseDetection/BEVStereo.", + "authors": "Yinhao Li, Han Bao, Zheng Ge, Jinrong Yang, Jianjian Sun, Zeming Li", + "published": "2022-09-21", + "updated": "2022-09-21", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2103.01100v2", + "title": "Categorical Depth Distribution Network for Monocular 3D Object Detection", + "abstract": "Monocular 3D object detection is a key problem for autonomous vehicles, as it\nprovides a solution with simple configuration compared to typical multi-sensor\nsystems. The main challenge in monocular 3D detection lies in accurately\npredicting object depth, which must be inferred from object and scene cues due\nto the lack of direct range measurement. Many methods attempt to directly\nestimate depth to assist in 3D detection, but show limited performance as a\nresult of depth inaccuracy. Our proposed solution, Categorical Depth\nDistribution Network (CaDDN), uses a predicted categorical depth distribution\nfor each pixel to project rich contextual feature information to the\nappropriate depth interval in 3D space. We then use the computationally\nefficient bird's-eye-view projection and single-stage detector to produce the\nfinal output bounding boxes. We design CaDDN as a fully differentiable\nend-to-end approach for joint depth estimation and object detection. We\nvalidate our approach on the KITTI 3D object detection benchmark, where we rank\n1st among published monocular methods. We also provide the first monocular 3D\ndetection results on the newly released Waymo Open Dataset. 
We provide a code\nrelease for CaDDN which is made available.", + "authors": "Cody Reading, Ali Harakeh, Julia Chae, Steven L. Waslander", + "published": "2021-03-01", + "updated": "2021-03-23", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2307.09361v1", + "title": "MOCA: Self-supervised Representation Learning by Predicting Masked Online Codebook Assignments", + "abstract": "Self-supervised learning can be used for mitigating the greedy needs of\nVision Transformer networks for very large fully-annotated datasets. Different\nclasses of self-supervised learning offer representations with either good\ncontextual reasoning properties, e.g., using masked image modeling strategies,\nor invariance to image perturbations, e.g., with contrastive methods. In this\nwork, we propose a single-stage and standalone method, MOCA, which unifies both\ndesired properties using novel mask-and-predict objectives defined with\nhigh-level features (instead of pixel-level details). Moreover, we show how to\neffectively employ both learning paradigms in a synergistic and\ncomputation-efficient way. Doing so, we achieve new state-of-the-art results on\nlow-shot settings and strong experimental results in various evaluation\nprotocols with a training that is at least 3 times faster than prior methods.", + "authors": "Spyros Gidaris, Andrei Bursuc, Oriane Simeoni, Antonin Vobecky, Nikos Komodakis, Matthieu Cord, Patrick P\u00e9rez", + "published": "2023-07-18", + "updated": "2023-07-18", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2111.06377v3", + "title": "Masked Autoencoders Are Scalable Vision Learners", + "abstract": "This paper shows that masked autoencoders (MAE) are scalable self-supervised\nlearners for computer vision. Our MAE approach is simple: we mask random\npatches of the input image and reconstruct the missing pixels. It is based on\ntwo core designs. First, we develop an asymmetric encoder-decoder architecture,\nwith an encoder that operates only on the visible subset of patches (without\nmask tokens), along with a lightweight decoder that reconstructs the original\nimage from the latent representation and mask tokens. Second, we find that\nmasking a high proportion of the input image, e.g., 75%, yields a nontrivial\nand meaningful self-supervisory task. Coupling these two designs enables us to\ntrain large models efficiently and effectively: we accelerate training (by 3x\nor more) and improve accuracy. Our scalable approach allows for learning\nhigh-capacity models that generalize well: e.g., a vanilla ViT-Huge model\nachieves the best accuracy (87.8%) among methods that use only ImageNet-1K\ndata. Transfer performance in downstream tasks outperforms supervised\npre-training and shows promising scaling behavior.", + "authors": "Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Doll\u00e1r, Ross Girshick", + "published": "2021-11-11", + "updated": "2021-12-19", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2205.13542v2", + "title": "BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird's-Eye View Representation", + "abstract": "Multi-sensor fusion is essential for an accurate and reliable autonomous\ndriving system. Recent approaches are based on point-level fusion: augmenting\nthe LiDAR point cloud with camera features. 
However, the camera-to-LiDAR\nprojection throws away the semantic density of camera features, hindering the\neffectiveness of such methods, especially for semantic-oriented tasks (such as\n3D scene segmentation). In this paper, we break this deeply-rooted convention\nwith BEVFusion, an efficient and generic multi-task multi-sensor fusion\nframework. It unifies multi-modal features in the shared bird's-eye view (BEV)\nrepresentation space, which nicely preserves both geometric and semantic\ninformation. To achieve this, we diagnose and lift key efficiency bottlenecks\nin the view transformation with optimized BEV pooling, reducing latency by more\nthan 40x. BEVFusion is fundamentally task-agnostic and seamlessly supports\ndifferent 3D perception tasks with almost no architectural changes. It\nestablishes the new state of the art on nuScenes, achieving 1.3% higher mAP and\nNDS on 3D object detection and 13.6% higher mIoU on BEV map segmentation, with\n1.9x lower computation cost. Code to reproduce our results is available at\nhttps://github.com/mit-han-lab/bevfusion.", + "authors": "Zhijian Liu, Haotian Tang, Alexander Amini, Xinyu Yang, Huizi Mao, Daniela Rus, Song Han", + "published": "2022-05-26", + "updated": "2022-06-16", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2104.14294v2", + "title": "Emerging Properties in Self-Supervised Vision Transformers", + "abstract": "In this paper, we question if self-supervised learning provides new\nproperties to Vision Transformer (ViT) that stand out compared to convolutional\nnetworks (convnets). Beyond the fact that adapting self-supervised methods to\nthis architecture works particularly well, we make the following observations:\nfirst, self-supervised ViT features contain explicit information about the\nsemantic segmentation of an image, which does not emerge as clearly with\nsupervised ViTs, nor with convnets. Second, these features are also excellent\nk-NN classifiers, reaching 78.3% top-1 on ImageNet with a small ViT. Our study\nalso underlines the importance of momentum encoder, multi-crop training, and\nthe use of small patches with ViTs. We implement our findings into a simple\nself-supervised method, called DINO, which we interpret as a form of\nself-distillation with no labels. We show the synergy between DINO and ViTs by\nachieving 80.1% top-1 on ImageNet in linear evaluation with ViT-Base.", + "authors": "Mathilde Caron, Hugo Touvron, Ishan Misra, Herv\u00e9 J\u00e9gou, Julien Mairal, Piotr Bojanowski, Armand Joulin", + "published": "2021-04-29", + "updated": "2021-05-24", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2101.06553v2", + "title": "Self-Supervised Representation Learning from Flow Equivariance", + "abstract": "Self-supervised representation learning is able to learn semantically\nmeaningful features; however, much of its recent success relies on multiple\ncrops of an image with very few objects. Instead of learning view-invariant\nrepresentation from simple images, humans learn representations in a complex\nworld with changing scenes by observing object movement, deformation, pose\nvariation, and ego motion. Motivated by this ability, we present a new\nself-supervised learning representation framework that can be directly deployed\non a video stream of complex scenes with many moving objects. 
Our framework\nfeatures a simple flow equivariance objective that encourages the network to\npredict the features of another frame by applying a flow transformation to the\nfeatures of the current frame. Our representations, learned from\nhigh-resolution raw video, can be readily used for downstream tasks on static\nimages. Readout experiments on challenging semantic segmentation, instance\nsegmentation, and object detection benchmarks show that we are able to\noutperform representations obtained from previous state-of-the-art methods\nincluding SimCLR and BYOL.", + "authors": "Yuwen Xiong, Mengye Ren, Wenyuan Zeng, Raquel Urtasun", + "published": "2021-01-16", + "updated": "2021-10-12", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2008.05711v1", + "title": "Lift, Splat, Shoot: Encoding Images From Arbitrary Camera Rigs by Implicitly Unprojecting to 3D", + "abstract": "The goal of perception for autonomous vehicles is to extract semantic\nrepresentations from multiple sensors and fuse these representations into a\nsingle \"bird's-eye-view\" coordinate frame for consumption by motion planning.\nWe propose a new end-to-end architecture that directly extracts a\nbird's-eye-view representation of a scene given image data from an arbitrary\nnumber of cameras. The core idea behind our approach is to \"lift\" each image\nindividually into a frustum of features for each camera, then \"splat\" all\nfrustums into a rasterized bird's-eye-view grid. By training on the entire\ncamera rig, we provide evidence that our model is able to learn not only how to\nrepresent images but how to fuse predictions from all cameras into a single\ncohesive representation of the scene while being robust to calibration error.\nOn standard bird's-eye-view tasks such as object segmentation and map\nsegmentation, our model outperforms all baselines and prior work. In pursuit of\nthe goal of learning dense representations for motion planning, we show that\nthe representations inferred by our model enable interpretable end-to-end\nmotion planning by \"shooting\" template trajectories into a bird's-eye-view cost\nmap output by our network. We benchmark our approach against models that use\noracle depth from lidar. Project page with code:\nhttps://nv-tlabs.github.io/lift-splat-shoot .", + "authors": "Jonah Philion, Sanja Fidler", + "published": "2020-08-13", + "updated": "2020-08-13", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.08370v2", + "title": "UniPAD: A Universal Pre-training Paradigm for Autonomous Driving", + "abstract": "In the context of autonomous driving, the significance of effective feature\nlearning is widely acknowledged. While conventional 3D self-supervised\npre-training methods have shown widespread success, most methods follow the\nideas originally designed for 2D images. In this paper, we present UniPAD, a\nnovel self-supervised learning paradigm applying 3D volumetric differentiable\nrendering. UniPAD implicitly encodes 3D space, facilitating the reconstruction\nof continuous 3D shape structures and the intricate appearance characteristics\nof their 2D projections. The flexibility of our method enables seamless\nintegration into both 2D and 3D frameworks, enabling a more holistic\ncomprehension of the scenes. We manifest the feasibility and effectiveness of\nUniPAD by conducting extensive experiments on various downstream 3D tasks. 
Our\nmethod significantly improves lidar-, camera-, and lidar-camera-based baseline\nby 9.1, 7.7, and 6.9 NDS, respectively. Notably, our pre-training pipeline\nachieves 73.2 NDS for 3D object detection and 79.4 mIoU for 3D semantic\nsegmentation on the nuScenes validation set, achieving state-of-the-art results\nin comparison with previous methods. The code will be available at\nhttps://github.com/Nightmare-n/UniPAD.", + "authors": "Honghui Yang, Sha Zhang, Di Huang, Xiaoyang Wu, Haoyi Zhu, Tong He, Shixiang Tang, Hengshuang Zhao, Qibo Qiu, Binbin Lin, Xiaofei He, Wanli Ouyang", + "published": "2023-10-12", + "updated": "2024-04-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.17504v2", + "title": "Three Pillars improving Vision Foundation Model Distillation for Lidar", + "abstract": "Self-supervised image backbones can be used to address complex 2D tasks\n(e.g., semantic segmentation, object discovery) very efficiently and with\nlittle or no downstream supervision. Ideally, 3D backbones for lidar should be\nable to inherit these properties after distillation of these powerful 2D\nfeatures. The most recent methods for image-to-lidar distillation on autonomous\ndriving data show promising results, obtained thanks to distillation methods\nthat keep improving. Yet, we still notice a large performance gap when\nmeasuring the quality of distilled and fully supervised features by linear\nprobing. In this work, instead of focusing only on the distillation method, we\nstudy the effect of three pillars for distillation: the 3D backbone, the\npretrained 2D backbones, and the pretraining dataset. In particular, thanks to\nour scalable distillation method named ScaLR, we show that scaling the 2D and\n3D backbones and pretraining on diverse datasets leads to a substantial\nimprovement of the feature quality. This allows us to significantly reduce the\ngap between the quality of distilled and fully-supervised 3D features, and to\nimprove the robustness of the pretrained backbones to domain gaps and\nperturbations.", + "authors": "Gilles Puy, Spyros Gidaris, Alexandre Boulch, Oriane Sim\u00e9oni, Corentin Sautier, Patrick P\u00e9rez, Andrei Bursuc, Renaud Marlet", + "published": "2023-10-26", + "updated": "2024-02-19", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2303.11325v2", + "title": "GeoMIM: Towards Better 3D Knowledge Transfer via Masked Image Modeling for Multi-view 3D Understanding", + "abstract": "Multi-view camera-based 3D detection is a challenging problem in computer\nvision. Recent works leverage a pretrained LiDAR detection model to transfer\nknowledge to a camera-based student network. However, we argue that there is a\nmajor domain gap between the LiDAR BEV features and the camera-based BEV\nfeatures, as they have different characteristics and are derived from different\nsources. In this paper, we propose Geometry Enhanced Masked Image Modeling\n(GeoMIM) to transfer the knowledge of the LiDAR model in a pretrain-finetune\nparadigm for improving the multi-view camera-based 3D detection. GeoMIM is a\nmulti-camera vision transformer with Cross-View Attention (CVA) blocks that\nuses LiDAR BEV features encoded by the pretrained BEV model as learning\ntargets. During pretraining, GeoMIM's decoder has a semantic branch completing\ndense perspective-view features and the other geometry branch reconstructing\ndense perspective-view depth maps. 
The depth branch is designed to be\ncamera-aware by inputting the camera's parameters for better transfer\ncapability. Extensive results demonstrate that GeoMIM outperforms existing\nmethods on nuScenes benchmark, achieving state-of-the-art performance for\ncamera-based 3D object detection and 3D segmentation. Code and pretrained\nmodels are available at https://github.com/Sense-X/GeoMIM.", + "authors": "Jihao Liu, Tai Wang, Boxiao Liu, Qihang Zhang, Yu Liu, Hongsheng Li", + "published": "2023-03-20", + "updated": "2023-08-28", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2309.15109v1", + "title": "DistillBEV: Boosting Multi-Camera 3D Object Detection with Cross-Modal Knowledge Distillation", + "abstract": "3D perception based on the representations learned from multi-camera\nbird's-eye-view (BEV) is trending as cameras are cost-effective for mass\nproduction in autonomous driving industry. However, there exists a distinct\nperformance gap between multi-camera BEV and LiDAR based 3D object detection.\nOne key reason is that LiDAR captures accurate depth and other geometry\nmeasurements, while it is notoriously challenging to infer such 3D information\nfrom merely image input. In this work, we propose to boost the representation\nlearning of a multi-camera BEV based student detector by training it to imitate\nthe features of a well-trained LiDAR based teacher detector. We propose\neffective balancing strategy to enforce the student to focus on learning the\ncrucial features from the teacher, and generalize knowledge transfer to\nmulti-scale layers with temporal fusion. We conduct extensive evaluations on\nmultiple representative models of multi-camera BEV. Experiments reveal that our\napproach renders significant improvement over the student models, leading to\nthe state-of-the-art performance on the popular benchmark nuScenes.", + "authors": "Zeyu Wang, Dingwen Li, Chenxu Luo, Cihang Xie, Xiaodong Yang", + "published": "2023-09-26", + "updated": "2023-09-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2301.12511v1", + "title": "Fast-BEV: A Fast and Strong Bird's-Eye View Perception Baseline", + "abstract": "Recently, perception task based on Bird's-Eye View (BEV) representation has\ndrawn more and more attention, and BEV representation is promising as the\nfoundation for next-generation Autonomous Vehicle (AV) perception. However,\nmost existing BEV solutions either require considerable resources to execute\non-vehicle inference or suffer from modest performance. This paper proposes a\nsimple yet effective framework, termed Fast-BEV , which is capable of\nperforming faster BEV perception on the on-vehicle chips. Towards this goal, we\nfirst empirically find that the BEV representation can be sufficiently powerful\nwithout expensive transformer based transformation nor depth representation.\nOur Fast-BEV consists of five parts, We novelly propose (1) a lightweight\ndeployment-friendly view transformation which fast transfers 2D image feature\nto 3D voxel space, (2) an multi-scale image encoder which leverages multi-scale\ninformation for better performance, (3) an efficient BEV encoder which is\nparticularly designed to speed up on-vehicle inference. 
We further introduce\n(4) a strong data augmentation strategy for both image and BEV space to avoid\nover-fitting, (5) a multi-frame feature fusion mechanism to leverage the\ntemporal information. Through experiments, on 2080Ti platform, our R50 model\ncan run 52.6 FPS with 47.3% NDS on the nuScenes validation set, exceeding the\n41.3 FPS and 47.5% NDS of the BEVDepth-R50 model and 30.2 FPS and 45.7% NDS of\nthe BEVDet4D-R50 model. Our largest model (R101@900x1600) establishes a\ncompetitive 53.5% NDS on the nuScenes validation set. We further develop a\nbenchmark with considerable accuracy and efficiency on current popular\non-vehicle chips. The code is released at:\nhttps://github.com/Sense-GVT/Fast-BEV.", + "authors": "Yangguang Li, Bin Huang, Zeren Chen, Yufeng Cui, Feng Liang, Mingzhu Shen, Fenggang Liu, Enze Xie, Lu Sheng, Wanli Ouyang, Jing Shao", + "published": "2023-01-29", + "updated": "2023-01-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2011.10566v1", + "title": "Exploring Simple Siamese Representation Learning", + "abstract": "Siamese networks have become a common structure in various recent models for\nunsupervised visual representation learning. These models maximize the\nsimilarity between two augmentations of one image, subject to certain\nconditions for avoiding collapsing solutions. In this paper, we report\nsurprising empirical results that simple Siamese networks can learn meaningful\nrepresentations even using none of the following: (i) negative sample pairs,\n(ii) large batches, (iii) momentum encoders. Our experiments show that\ncollapsing solutions do exist for the loss and structure, but a stop-gradient\noperation plays an essential role in preventing collapsing. We provide a\nhypothesis on the implication of stop-gradient, and further show\nproof-of-concept experiments verifying it. Our \"SimSiam\" method achieves\ncompetitive results on ImageNet and downstream tasks. We hope this simple\nbaseline will motivate people to rethink the roles of Siamese architectures for\nunsupervised representation learning. Code will be made available.", + "authors": "Xinlei Chen, Kaiming He", + "published": "2020-11-20", + "updated": "2020-11-20", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1807.05520v2", + "title": "Deep Clustering for Unsupervised Learning of Visual Features", + "abstract": "Clustering is a class of unsupervised learning methods that has been\nextensively applied and studied in computer vision. Little work has been done\nto adapt it to the end-to-end training of visual features on large scale\ndatasets. In this work, we present DeepCluster, a clustering method that\njointly learns the parameters of a neural network and the cluster assignments\nof the resulting features. DeepCluster iteratively groups the features with a\nstandard clustering algorithm, k-means, and uses the subsequent assignments as\nsupervision to update the weights of the network. We apply DeepCluster to the\nunsupervised training of convolutional neural networks on large datasets like\nImageNet and YFCC100M. 
The resulting model outperforms the current state of the\nart by a significant margin on all the standard benchmarks.", + "authors": "Mathilde Caron, Piotr Bojanowski, Armand Joulin, Matthijs Douze", + "published": "2018-07-15", + "updated": "2019-03-18", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2308.02236v2", + "title": "FB-BEV: BEV Representation from Forward-Backward View Transformations", + "abstract": "View Transformation Module (VTM), where transformations happen between\nmulti-view image features and Bird-Eye-View (BEV) representation, is a crucial\nstep in camera-based BEV perception systems. Currently, the two most prominent\nVTM paradigms are forward projection and backward projection. Forward\nprojection, represented by Lift-Splat-Shoot, leads to sparsely projected BEV\nfeatures without post-processing. Backward projection, with BEVFormer being an\nexample, tends to generate false-positive BEV features from incorrect\nprojections due to the lack of utilization on depth. To address the above\nlimitations, we propose a novel forward-backward view transformation module.\nOur approach compensates for the deficiencies in both existing methods,\nallowing them to enhance each other to obtain higher quality BEV\nrepresentations mutually. We instantiate the proposed module with FB-BEV, which\nachieves a new state-of-the-art result of 62.4% NDS on the nuScenes test set.\nCode and models are available at https://github.com/NVlabs/FB-BEV.", + "authors": "Zhiqi Li, Zhiding Yu, Wenhai Wang, Anima Anandkumar, Tong Lu, Jose M. Alvarez", + "published": "2023-08-04", + "updated": "2023-08-17", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2203.16258v1", + "title": "Image-to-Lidar Self-Supervised Distillation for Autonomous Driving Data", + "abstract": "Segmenting or detecting objects in sparse Lidar point clouds are two\nimportant tasks in autonomous driving to allow a vehicle to act safely in its\n3D environment. The best performing methods in 3D semantic segmentation or\nobject detection rely on a large amount of annotated data. Yet annotating 3D\nLidar data for these tasks is tedious and costly. In this context, we propose a\nself-supervised pre-training method for 3D perception models that is tailored\nto autonomous driving data. Specifically, we leverage the availability of\nsynchronized and calibrated image and Lidar sensors in autonomous driving\nsetups for distilling self-supervised pre-trained image representations into 3D\nmodels. Hence, our method does not require any point cloud nor image\nannotations. The key ingredient of our method is the use of superpixels which\nare used to pool 3D point features and 2D pixel features in visually similar\nregions. We then train a 3D network on the self-supervised task of matching\nthese pooled point features with the corresponding pooled image pixel features.\nThe advantages of contrasting regions obtained by superpixels are that: (1)\ngrouping together pixels and points of visually coherent regions leads to a\nmore meaningful contrastive task that produces features well adapted to 3D\nsemantic segmentation and 3D object detection; (2) all the different regions\nhave the same weight in the contrastive loss regardless of the number of 3D\npoints sampled in these regions; (3) it mitigates the noise produced by\nincorrect matching of points and pixels due to occlusions between the different\nsensors. 
Extensive experiments on autonomous driving datasets demonstrate the\nability of our image-to-Lidar distillation strategy to produce 3D\nrepresentations that transfer well on semantic segmentation and object\ndetection tasks.", + "authors": "Corentin Sautier, Gilles Puy, Spyros Gidaris, Alexandre Boulch, Andrei Bursuc, Renaud Marlet", + "published": "2022-03-30", + "updated": "2022-03-30", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2011.12953v3", + "title": "Unsupervised Object Detection with LiDAR Clues", + "abstract": "Despite the importance of unsupervised object detection, to the best of our\nknowledge, there is no previous work addressing this problem. One main issue,\nwidely known to the community, is that object boundaries derived only from 2D\nimage appearance are ambiguous and unreliable. To address this, we exploit\nLiDAR clues to aid unsupervised object detection. By exploiting the 3D scene\nstructure, the issue of localization can be considerably mitigated. We further\nidentify another major issue, seldom noticed by the community, that the\nlong-tailed and open-ended (sub-)category distribution should be accommodated.\nIn this paper, we present the first practical method for unsupervised object\ndetection with the aid of LiDAR clues. In our approach, candidate object\nsegments based on 3D point clouds are firstly generated. Then, an iterative\nsegment labeling process is conducted to assign segment labels and to train a\nsegment labeling network, which is based on features from both 2D images and 3D\npoint clouds. The labeling process is carefully designed so as to mitigate the\nissue of long-tailed and open-ended distribution. The final segment labels are\nset as pseudo annotations for object detection network training. Extensive\nexperiments on the large-scale Waymo Open dataset suggest that the derived\nunsupervised object detection method achieves reasonable accuracy compared with\nthat of strong supervision within the LiDAR visible range. Code shall be\nreleased.", + "authors": "Hao Tian, Yuntao Chen, Jifeng Dai, Zhaoxiang Zhang, Xizhou Zhu", + "published": "2020-11-25", + "updated": "2021-04-19", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2206.01256v3", + "title": "PETRv2: A Unified Framework for 3D Perception from Multi-Camera Images", + "abstract": "In this paper, we propose PETRv2, a unified framework for 3D perception from\nmulti-view images. Based on PETR, PETRv2 explores the effectiveness of temporal\nmodeling, which utilizes the temporal information of previous frames to boost\n3D object detection. More specifically, we extend the 3D position embedding (3D\nPE) in PETR for temporal modeling. The 3D PE achieves the temporal alignment on\nobject position of different frames. A feature-guided position encoder is\nfurther introduced to improve the data adaptability of 3D PE. To support for\nmulti-task learning (e.g., BEV segmentation and 3D lane detection), PETRv2\nprovides a simple yet effective solution by introducing task-specific queries,\nwhich are initialized under different spaces. PETRv2 achieves state-of-the-art\nperformance on 3D object detection, BEV segmentation and 3D lane detection.\nDetailed robustness analysis is also conducted on PETR framework. We hope\nPETRv2 can serve as a strong baseline for 3D perception. 
Code is available at\n\\url{https://github.com/megvii-research/PETR}.", + "authors": "Yingfei Liu, Junjie Yan, Fan Jia, Shuailin Li, Aqi Gao, Tiancai Wang, Xiangyu Zhang, Jian Sun", + "published": "2022-06-02", + "updated": "2022-11-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.09386v1", + "title": "BEVDistill: Cross-Modal BEV Distillation for Multi-View 3D Object Detection", + "abstract": "3D object detection from multiple image views is a fundamental and\nchallenging task for visual scene understanding. Owing to its low cost and high\nefficiency, multi-view 3D object detection has demonstrated promising\napplication prospects. However, accurately detecting objects through\nperspective views is extremely difficult due to the lack of depth information.\nCurrent approaches tend to adopt heavy backbones for image encoders, making\nthem inapplicable for real-world deployment. Different from the images, LiDAR\npoints are superior in providing spatial cues, resulting in highly precise\nlocalization. In this paper, we explore the incorporation of LiDAR-based\ndetectors for multi-view 3D object detection. Instead of directly training a\ndepth prediction network, we unify the image and LiDAR features in the\nBird-Eye-View (BEV) space and adaptively transfer knowledge across\nnon-homogenous representations in a teacher-student paradigm. To this end, we\npropose \\textbf{BEVDistill}, a cross-modal BEV knowledge distillation (KD)\nframework for multi-view 3D object detection. Extensive experiments demonstrate\nthat the proposed method outperforms current KD approaches on a\nhighly-competitive baseline, BEVFormer, without introducing any extra cost in\nthe inference phase. Notably, our best model achieves 59.4 NDS on the nuScenes\ntest leaderboard, achieving new state-of-the-art in comparison with various\nimage-based detectors. Code will be available at\nhttps://github.com/zehuichen123/BEVDistill.", + "authors": "Zehui Chen, Zhenyu Li, Shiquan Zhang, Liangji Fang, Qinhong Jiang, Feng Zhao", + "published": "2022-11-17", + "updated": "2022-11-17", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2012.11552v2", + "title": "OBoW: Online Bag-of-Visual-Words Generation for Self-Supervised Learning", + "abstract": "Learning image representations without human supervision is an important and\nactive research field. Several recent approaches have successfully leveraged\nthe idea of making such a representation invariant under different types of\nperturbations, especially via contrastive-based instance discrimination\ntraining. Although effective visual representations should indeed exhibit such\ninvariances, there are other important characteristics, such as encoding\ncontextual reasoning skills, for which alternative reconstruction-based\napproaches might be better suited.\n With this in mind, we propose a teacher-student scheme to learn\nrepresentations by training a convolutional net to reconstruct a\nbag-of-visual-words (BoW) representation of an image, given as input a\nperturbed version of that same image. Our strategy performs an online training\nof both the teacher network (whose role is to generate the BoW targets) and the\nstudent network (whose role is to learn representations), along with an online\nupdate of the visual-words vocabulary (used for the BoW targets). This idea\neffectively enables fully online BoW-guided unsupervised learning. 
Extensive\nexperiments demonstrate the interest of our BoW-based strategy which surpasses\nprevious state-of-the-art methods (including contrastive-based ones) in several\napplications. For instance, in downstream tasks such Pascal object detection,\nPascal classification and Places205 classification, our method improves over\nall prior unsupervised approaches, thus establishing new state-of-the-art\nresults that are also significantly better even than those of supervised\npre-training. We provide the implementation code at\nhttps://github.com/valeoai/obow.", + "authors": "Spyros Gidaris, Andrei Bursuc, Gilles Puy, Nikos Komodakis, Matthieu Cord, Patrick P\u00e9rez", + "published": "2020-12-21", + "updated": "2021-10-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2402.11337v1", + "title": "Learning by Reconstruction Produces Uninformative Features For Perception", + "abstract": "Input space reconstruction is an attractive representation learning paradigm.\nDespite interpretability of the reconstruction and generation, we identify a\nmisalignment between learning by reconstruction, and learning for perception.\nWe show that the former allocates a model's capacity towards a subspace of the\ndata explaining the observed variance--a subspace with uninformative features\nfor the latter. For example, the supervised TinyImagenet task with images\nprojected onto the top subspace explaining 90\\% of the pixel variance can be\nsolved with 45\\% test accuracy. Using the bottom subspace instead, accounting\nfor only 20\\% of the pixel variance, reaches 55\\% test accuracy. The features\nfor perception being learned last explains the need for long training time,\ne.g., with Masked Autoencoders. Learning by denoising is a popular strategy to\nalleviate that misalignment. We prove that while some noise strategies such as\nmasking are indeed beneficial, others such as additive Gaussian noise are not.\nYet, even in the case of masking, we find that the benefits vary as a function\nof the mask's shape, ratio, and the considered dataset. While tuning the noise\nstrategy without knowledge of the perception task seems challenging, we provide\nfirst clues on how to detect if a noise strategy is never beneficial regardless\nof the perception task.", + "authors": "Randall Balestriero, Yann LeCun", + "published": "2024-02-17", + "updated": "2024-02-17", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2312.17655v1", + "title": "Visual Point Cloud Forecasting enables Scalable Autonomous Driving", + "abstract": "In contrast to extensive studies on general vision, pre-training for scalable\nvisual autonomous driving remains seldom explored. Visual autonomous driving\napplications require features encompassing semantics, 3D geometry, and temporal\ninformation simultaneously for joint perception, prediction, and planning,\nposing dramatic challenges for pre-training. To resolve this, we bring up a new\npre-training task termed as visual point cloud forecasting - predicting future\npoint clouds from historical visual input. The key merit of this task captures\nthe synergic learning of semantics, 3D structures, and temporal dynamics. Hence\nit shows superiority in various downstream tasks. To cope with this new\nproblem, we present ViDAR, a general model to pre-train downstream visual\nencoders. 
It first extracts historical embeddings by the encoder. These\nrepresentations are then transformed to 3D geometric space via a novel Latent\nRendering operator for future point cloud prediction. Experiments show\nsignificant gain in downstream tasks, e.g., 3.1% NDS on 3D detection, ~10%\nerror reduction on motion forecasting, and ~15% less collision rate on\nplanning.", + "authors": "Zetong Yang, Li Chen, Yanan Sun, Hongyang Li", + "published": "2023-12-29", + "updated": "2023-12-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2304.04185v1", + "title": "BEVStereo++: Accurate Depth Estimation in Multi-view 3D Object Detection via Dynamic Temporal Stereo", + "abstract": "Bounded by the inherent ambiguity of depth perception, contemporary\nmulti-view 3D object detection methods fall into the performance bottleneck.\nIntuitively, leveraging temporal multi-view stereo (MVS) technology is the\nnatural knowledge for tackling this ambiguity. However, traditional attempts of\nMVS has two limitations when applying to 3D object detection scenes: 1) The\naffinity measurement among all views suffers expensive computational cost; 2)\nIt is difficult to deal with outdoor scenarios where objects are often mobile.\nTo this end, we propose BEVStereo++: by introducing a dynamic temporal stereo\nstrategy, BEVStereo++ is able to cut down the harm that is brought by\nintroducing temporal stereo when dealing with those two scenarios. Going one\nstep further, we apply Motion Compensation Module and long sequence Frame\nFusion to BEVStereo++, which shows further performance boosting and error\nreduction. Without bells and whistles, BEVStereo++ achieves\nstate-of-the-art(SOTA) on both Waymo and nuScenes dataset.", + "authors": "Yinhao Li, Jinrong Yang, Jianjian Sun, Han Bao, Zheng Ge, Li Xiao", + "published": "2023-04-09", + "updated": "2023-04-09", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2104.10956v3", + "title": "FCOS3D: Fully Convolutional One-Stage Monocular 3D Object Detection", + "abstract": "Monocular 3D object detection is an important task for autonomous driving\nconsidering its advantage of low cost. It is much more challenging than\nconventional 2D cases due to its inherent ill-posed property, which is mainly\nreflected in the lack of depth information. Recent progress on 2D detection\noffers opportunities to better solving this problem. However, it is non-trivial\nto make a general adapted 2D detector work in this 3D task. In this paper, we\nstudy this problem with a practice built on a fully convolutional single-stage\ndetector and propose a general framework FCOS3D. Specifically, we first\ntransform the commonly defined 7-DoF 3D targets to the image domain and\ndecouple them as 2D and 3D attributes. Then the objects are distributed to\ndifferent feature levels with consideration of their 2D scales and assigned\nonly according to the projected 3D-center for the training procedure.\nFurthermore, the center-ness is redefined with a 2D Gaussian distribution based\non the 3D-center to fit the 3D target formulation. All of these make this\nframework simple yet effective, getting rid of any 2D detection or 2D-3D\ncorrespondence priors. 
Our solution achieves 1st place out of all the\nvision-only methods in the nuScenes 3D detection challenge of NeurIPS 2020.\nCode and models are released at https://github.com/open-mmlab/mmdetection3d.", + "authors": "Tai Wang, Xinge Zhu, Jiangmiao Pang, Dahua Lin", + "published": "2021-04-22", + "updated": "2021-09-24", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2302.07817v2", + "title": "Tri-Perspective View for Vision-Based 3D Semantic Occupancy Prediction", + "abstract": "Modern methods for vision-centric autonomous driving perception widely adopt\nthe bird's-eye-view (BEV) representation to describe a 3D scene. Despite its\nbetter efficiency than voxel representation, it has difficulty describing the\nfine-grained 3D structure of a scene with a single plane. To address this, we\npropose a tri-perspective view (TPV) representation which accompanies BEV with\ntwo additional perpendicular planes. We model each point in the 3D space by\nsumming its projected features on the three planes. To lift image features to\nthe 3D TPV space, we further propose a transformer-based TPV encoder\n(TPVFormer) to obtain the TPV features effectively. We employ the attention\nmechanism to aggregate the image features corresponding to each query in each\nTPV plane. Experiments show that our model trained with sparse supervision\neffectively predicts the semantic occupancy for all voxels. We demonstrate for\nthe first time that using only camera inputs can achieve comparable performance\nwith LiDAR-based methods on the LiDAR segmentation task on nuScenes. Code:\nhttps://github.com/wzzheng/TPVFormer.", + "authors": "Yuanhui Huang, Wenzhao Zheng, Yunpeng Zhang, Jie Zhou, Jiwen Lu", + "published": "2023-02-15", + "updated": "2023-03-02", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2401.15863v1", + "title": "Importance-Aware Adaptive Dataset Distillation", + "abstract": "Herein, we propose a novel dataset distillation method for constructing small\ninformative datasets that preserve the information of the large original\ndatasets. The development of deep learning models is enabled by the\navailability of large-scale datasets. Despite unprecedented success,\nlarge-scale datasets considerably increase the storage and transmission costs,\nresulting in a cumbersome model training process. Moreover, using raw data for\ntraining raises privacy and copyright concerns. To address these issues, a new\ntask named dataset distillation has been introduced, aiming to synthesize a\ncompact dataset that retains the essential information from the large original\ndataset. State-of-the-art (SOTA) dataset distillation methods have been\nproposed by matching gradients or network parameters obtained during training\non real and synthetic datasets. The contribution of different network\nparameters to the distillation process varies, and uniformly treating them\nleads to degraded distillation performance. Based on this observation, we\npropose an importance-aware adaptive dataset distillation (IADD) method that\ncan improve distillation performance by automatically assigning importance\nweights to different network parameters during distillation, thereby\nsynthesizing more robust distilled datasets. 
IADD demonstrates superior\nperformance over other SOTA dataset distillation methods based on parameter\nmatching on multiple benchmark datasets and outperforms them in terms of\ncross-architecture generalization. In addition, the analysis of self-adaptive\nweights demonstrates the effectiveness of IADD. Furthermore, the effectiveness\nof IADD is validated in a real-world medical application such as COVID-19\ndetection.", + "authors": "Guang Li, Ren Togo, Takahiro Ogawa, Miki Haseyama", + "published": "2024-01-29", + "updated": "2024-01-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2310.18628v2", + "title": "Personalised Distillation: Empowering Open-Sourced LLMs with Adaptive Learning for Code Generation", + "abstract": "With the rise of powerful closed-sourced LLMs (ChatGPT, GPT-4), there are\nincreasing interests in distilling the capabilies of close-sourced LLMs to\nsmaller open-sourced LLMs. Previous distillation methods usually prompt ChatGPT\nto generate a set of instructions and answers, for the student model to learn.\nHowever, such standard distillation approach neglects the merits and conditions\nof the student model. Inspired by modern teaching principles, we design a\npersonalised distillation process, in which the student attempts to solve a\ntask first, then the teacher provides an adaptive refinement for the student to\nimprove. Instead of feeding the student with teacher's prior, personalised\ndistillation enables personalised learning for the student model, as it only\nlearns on examples it makes mistakes upon and learns to improve its own\nsolution. On code generation, personalised distillation consistently\noutperforms standard distillation with only one third of the data. With only\n2.5-3K personalised examples that incur a data-collection cost of 4-6$, we\nboost CodeGen-mono-16B by 7% to achieve 36.4% pass@1 and StarCoder by 12.2% to\nachieve 45.8% pass@1 on HumanEval.", + "authors": "Hailin Chen, Amrita Saha, Steven Hoi, Shafiq Joty", + "published": "2023-10-28", + "updated": "2024-01-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2206.08491v1", + "title": "Revisiting Self-Distillation", + "abstract": "Knowledge distillation is the procedure of transferring \"knowledge\" from a\nlarge model (the teacher) to a more compact one (the student), often being used\nin the context of model compression. When both models have the same\narchitecture, this procedure is called self-distillation. Several works have\nanecdotally shown that a self-distilled student can outperform the teacher on\nheld-out data. In this work, we systematically study self-distillation in a\nnumber of settings. We first show that even with a highly accurate teacher,\nself-distillation allows a student to surpass the teacher in all cases.\nSecondly, we revisit existing theoretical explanations of (self) distillation\nand identify contradicting examples, revealing possible drawbacks of these\nexplanations. Finally, we provide an alternative explanation for the dynamics\nof self-distillation through the lens of loss landscape geometry. 
We conduct\nextensive experiments to show that self-distillation leads to flatter minima,\nthereby resulting in better generalization.", + "authors": "Minh Pham, Minsu Cho, Ameya Joshi, Chinmay Hegde", + "published": "2022-06-17", + "updated": "2022-06-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/0803.0345v2", + "title": "Secret key distillation from shielded two-qubit states", + "abstract": "The quantum states corresponding to a secret key are characterized using the\nso-called private states, where the key part consisting of a secret key is\nshielded by the additional systems. Based on the construction, it was shown\nthat a secret key can be distilled from bound entangled states. In this work, I\nconsider the shielded two-qubit states in a key-distillation scenario and\nderive the conditions under which a secret key can be distilled using the\nrecurrence protocol or the two-way classical distillation, advantage\ndistillation together with one-way postprocessing. From the security\nconditions, it is shown that a secret key can be distilled from bound entangled\nstates in a much wider range. In addition, I consider the case that in which\nwhite noise is added to quantum states and show that the classical distillation\nprotocol still works despite a certain amount of noise although the recurrence\nprotocol does not.", + "authors": "Joonwoo Bae", + "published": "2008-03-03", + "updated": "2010-09-22", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/0908.2142v1", + "title": "Distillation of Bell states in open systems", + "abstract": "In this work we review the entire classification of 2x2 distillable states\nfor protocols with a finite numbers of copies. We show a distillation protocol\nthat allows to distill Bell states with non zero probability at any time for an\ninitial singlet in vacuum. It is shown that the same protocol used in non zero\nthermal baths yields a considerable recovering of entanglement.", + "authors": "E. Isasi, D. Mundarain", + "published": "2009-08-14", + "updated": "2009-08-14", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2402.02781v1", + "title": "Dual Knowledge Distillation for Efficient Sound Event Detection", + "abstract": "Sound event detection (SED) is essential for recognizing specific sounds and\ntheir temporal locations within acoustic signals. This becomes challenging\nparticularly for on-device applications, where computational resources are\nlimited. To address this issue, we introduce a novel framework referred to as\ndual knowledge distillation for developing efficient SED systems in this work.\nOur proposed dual knowledge distillation commences with temporal-averaging\nknowledge distillation (TAKD), utilizing a mean student model derived from the\ntemporal averaging of the student model's parameters. This allows the student\nmodel to indirectly learn from a pre-trained teacher model, ensuring a stable\nknowledge distillation. Subsequently, we introduce embedding-enhanced feature\ndistillation (EEFD), which involves incorporating an embedding distillation\nlayer within the student model to bolster contextual learning. 
On DCASE 2023\nTask 4A public evaluation dataset, our proposed SED system with dual knowledge\ndistillation having merely one-third of the baseline model's parameters,\ndemonstrates superior performance in terms of PSDS1 and PSDS2. This highlights\nthe importance of proposed dual knowledge distillation for compact SED systems,\nwhich can be ideal for edge devices.", + "authors": "Yang Xiao, Rohan Kumar Das", + "published": "2024-02-05", + "updated": "2024-02-05", + "primary_cat": "cs.SD", + "cats": [ + "cs.SD", + "cs.AI", + "cs.CL", + "cs.LG", + "eess.AS" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2303.05958v1", + "title": "Robust Knowledge Distillation from RNN-T Models With Noisy Training Labels Using Full-Sum Loss", + "abstract": "This work studies knowledge distillation (KD) and addresses its constraints\nfor recurrent neural network transducer (RNN-T) models. In hard distillation, a\nteacher model transcribes large amounts of unlabelled speech to train a student\nmodel. Soft distillation is another popular KD method that distills the output\nlogits of the teacher model. Due to the nature of RNN-T alignments, applying\nsoft distillation between RNN-T architectures having different posterior\ndistributions is challenging. In addition, bad teachers having high\nword-error-rate (WER) reduce the efficacy of KD. We investigate how to\neffectively distill knowledge from variable quality ASR teachers, which has not\nbeen studied before to the best of our knowledge. We show that a sequence-level\nKD, full-sum distillation, outperforms other distillation methods for RNN-T\nmodels, especially for bad teachers. We also propose a variant of full-sum\ndistillation that distills the sequence discriminative knowledge of the teacher\nleading to further improvement in WER. We conduct experiments on public\ndatasets namely SpeechStew and LibriSpeech, and on in-house production data.", + "authors": "Mohammad Zeineldeen, Kartik Audhkhasi, Murali Karthick Baskar, Bhuvana Ramabhadran", + "published": "2023-03-10", + "updated": "2023-03-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.SD", + "eess.AS", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2112.05638v2", + "title": "DistilCSE: Effective Knowledge Distillation For Contrastive Sentence Embeddings", + "abstract": "Large-scale contrastive learning models can learn very informative sentence\nembeddings, but are hard to serve online due to the huge model size. Therefore,\nthey often play the role of \"teacher\", transferring abilities to small\n\"student\" models through knowledge distillation. However, knowledge\ndistillation inevitably brings some drop in embedding effect. To tackle that,\nwe propose an effective knowledge distillation framework for contrastive\nsentence embeddings, termed DistilCSE. It first applies knowledge distillation\non a large amount of unlabeled data, and then fine-tunes student models through\ncontrastive learning on limited labeled data. To achieve better distillation\nresults, we further propose Contrastive Knowledge Distillation (CKD). CKD uses\nInfoNCE as the loss function in knowledge distillation, enhancing the objective\nconsistency among teacher model training, knowledge distillation, and student\nmodel fine-tuning. 
Extensive experiments show that student models trained with\nthe proposed DistilCSE and CKD suffer from little or even no performance\ndecrease and consistently outperform the corresponding counterparts of the same\nparameter size. Impressively, our 110M student model outperforms the latest\nstate-of-the-art model, i.e., Sentence-T5 (11B), with only 1% parameters and\n0.25% unlabeled data.", + "authors": "Chaochen Gao, Xing Wu, Peng Wang, Jue Wang, Liangjun Zang, Zhongyuan Wang, Songlin Hu", + "published": "2021-12-10", + "updated": "2023-01-30", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2109.14960v3", + "title": "Prune Your Model Before Distill It", + "abstract": "Knowledge distillation transfers the knowledge from a cumbersome teacher to a\nsmall student. Recent results suggest that the student-friendly teacher is more\nappropriate to distill since it provides more transferable knowledge. In this\nwork, we propose the novel framework, \"prune, then distill,\" that prunes the\nmodel first to make it more transferrable and then distill it to the student.\nWe provide several exploratory examples where the pruned teacher teaches better\nthan the original unpruned networks. We further show theoretically that the\npruned teacher plays the role of regularizer in distillation, which reduces the\ngeneralization error. Based on this result, we propose a novel neural network\ncompression scheme where the student network is formed based on the pruned\nteacher and then apply the \"prune, then distill\" strategy. The code is\navailable at https://github.com/ososos888/prune-then-distill", + "authors": "Jinhyuk Park, Albert No", + "published": "2021-09-30", + "updated": "2022-07-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2401.06370v1", + "title": "Graph Relation Distillation for Efficient Biomedical Instance Segmentation", + "abstract": "Instance-aware embeddings predicted by deep neural networks have\nrevolutionized biomedical instance segmentation, but its resource requirements\nare substantial. Knowledge distillation offers a solution by transferring\ndistilled knowledge from heavy teacher networks to lightweight yet\nhigh-performance student networks. However, existing knowledge distillation\nmethods struggle to extract knowledge for distinguishing instances and overlook\nglobal relation information. To address these challenges, we propose a graph\nrelation distillation approach for efficient biomedical instance segmentation,\nwhich considers three essential types of knowledge: instance-level features,\ninstance relations, and pixel-level boundaries. We introduce two graph\ndistillation schemes deployed at both the intra-image level and the inter-image\nlevel: instance graph distillation (IGD) and affinity graph distillation (AGD).\nIGD constructs a graph representing instance features and relations,\ntransferring these two types of knowledge by enforcing instance graph\nconsistency. AGD constructs an affinity graph representing pixel relations to\ncapture structured knowledge of instance boundaries, transferring\nboundary-related knowledge by ensuring pixel affinity consistency. 
Experimental\nresults on a number of biomedical datasets validate the effectiveness of our\napproach, enabling student models with less than $ 1\\%$ parameters and less\nthan $10\\%$ inference time while achieving promising performance compared to\nteacher models.", + "authors": "Xiaoyu Liu, Yueyi Zhang, Zhiwei Xiong, Wei Huang, Bo Hu, Xiaoyan Sun, Feng Wu", + "published": "2024-01-12", + "updated": "2024-01-12", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2006.08572v3", + "title": "Flexible Dataset Distillation: Learn Labels Instead of Images", + "abstract": "We study the problem of dataset distillation - creating a small set of\nsynthetic examples capable of training a good model. In particular, we study\nthe problem of label distillation - creating synthetic labels for a small set\nof real images, and show it to be more effective than the prior image-based\napproach to dataset distillation. Methodologically, we introduce a more robust\nand flexible meta-learning algorithm for distillation, as well as an effective\nfirst-order strategy based on convex optimization layers. Distilling labels\nwith our new algorithm leads to improved results over prior image-based\ndistillation. More importantly, it leads to clear improvements in flexibility\nof the distilled dataset in terms of compatibility with off-the-shelf\noptimizers and diverse neural architectures. Interestingly, label distillation\ncan also be applied across datasets, for example enabling learning Japanese\ncharacter recognition by training only on synthetically labeled English\nletters.", + "authors": "Ondrej Bohdal, Yongxin Yang, Timothy Hospedales", + "published": "2020-06-15", + "updated": "2020-12-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2205.16004v3", + "title": "What Knowledge Gets Distilled in Knowledge Distillation?", + "abstract": "Knowledge distillation aims to transfer useful information from a teacher\nnetwork to a student network, with the primary goal of improving the student's\nperformance for the task at hand. Over the years, there has a been a deluge of\nnovel techniques and use cases of knowledge distillation. Yet, despite the\nvarious improvements, there seems to be a glaring gap in the community's\nfundamental understanding of the process. Specifically, what is the knowledge\nthat gets distilled in knowledge distillation? In other words, in what ways\ndoes the student become similar to the teacher? Does it start to localize\nobjects in the same way? Does it get fooled by the same adversarial samples?\nDoes its data invariance properties become similar? Our work presents a\ncomprehensive study to try to answer these questions. We show that existing\nmethods can indeed indirectly distill these properties beyond improving task\nperformance. 
We further study why knowledge distillation might work this way,\nand show that our findings have practical implications as well.", + "authors": "Utkarsh Ojha, Yuheng Li, Anirudh Sundara Rajan, Yingyu Liang, Yong Jae Lee", + "published": "2022-05-31", + "updated": "2023-11-06", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2403.10045v1", + "title": "Towards Adversarially Robust Dataset Distillation by Curvature Regularization", + "abstract": "Dataset distillation (DD) allows datasets to be distilled to fractions of\ntheir original size while preserving the rich distributional information so\nthat models trained on the distilled datasets can achieve a comparable accuracy\nwhile saving significant computational loads. Recent research in this area has\nbeen focusing on improving the accuracy of models trained on distilled\ndatasets. In this paper, we aim to explore a new perspective of DD. We study\nhow to embed adversarial robustness in distilled datasets, so that models\ntrained on these datasets maintain the high accuracy and meanwhile acquire\nbetter adversarial robustness. We propose a new method that achieves this goal\nby incorporating curvature regularization into the distillation process with\nmuch less computational overhead than standard adversarial training. Extensive\nempirical experiments suggest that our method not only outperforms standard\nadversarial training on both accuracy and robustness with less computation\noverhead but is also capable of generating robust distilled datasets that can\nwithstand various adversarial attacks.", + "authors": "Eric Xue, Yijiang Li, Haoyang Liu, Yifan Shen, Haohan Wang", + "published": "2024-03-15", + "updated": "2024-03-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1901.09135v1", + "title": "Progressive Label Distillation: Learning Input-Efficient Deep Neural Networks", + "abstract": "Much of the focus in the area of knowledge distillation has been on\ndistilling knowledge from a larger teacher network to a smaller student\nnetwork. However, there has been little research on how the concept of\ndistillation can be leveraged to distill the knowledge encapsulated in the\ntraining data itself into a reduced form. In this study, we explore the concept\nof progressive label distillation, where we leverage a series of\nteacher-student network pairs to progressively generate distilled training data\nfor learning deep neural networks with greatly reduced input dimensions. To\ninvestigate the efficacy of the proposed progressive label distillation\napproach, we experimented with learning a deep limited vocabulary speech\nrecognition network based on generated 500ms input utterances distilled\nprogressively from 1000ms source training data, and demonstrated a significant\nincrease in test accuracy of almost 78% compared to direct learning.", + "authors": "Zhong Qiu Lin, Alexander Wong", + "published": "2019-01-26", + "updated": "2019-01-26", + "primary_cat": "cs.NE", + "cats": [ + "cs.NE", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2106.07137v1", + "title": "Why Can You Lay Off Heads? Investigating How BERT Heads Transfer", + "abstract": "The huge size of the widely used BERT family models has led to recent efforts\nabout model distillation. 
The main goal of distillation is to create a\ntask-agnostic pre-trained model that can be fine-tuned on downstream tasks\nwithout fine-tuning its full-sized version. Despite the progress of\ndistillation, to what degree and for what reason a task-agnostic model can be\ncreated from distillation has not been well studied. Also, the mechanisms\nbehind transfer learning of those BERT models are not well investigated either.\nTherefore, this work focuses on analyzing the acceptable deduction when\ndistillation for guiding the future distillation procedure. Specifically, we\nfirst inspect the prunability of the Transformer heads in RoBERTa and ALBERT\nusing their head importance estimation proposed by Michel et al. (2019), and\nthen check the coherence of the important heads between the pre-trained task\nand downstream tasks. Hence, the acceptable deduction of performance on the\npre-trained task when distilling a model can be derived from the results, and\nwe further compare the behavior of the pruned model before and after\nfine-tuning. Our studies provide guidance for future directions about BERT\nfamily model distillation.", + "authors": "Ting-Rui Chiang, Yun-Nung Chen", + "published": "2021-06-14", + "updated": "2021-06-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0108029v1", + "title": "Distillability, Bell inequalities and multiparticle bound entanglement", + "abstract": "We study the relation between violation of Bell inequalities and\ndistillability properties of quantum states. Recently, D\\\"ur has shown that\nthere are some multiparticle bound entangled states, non-separable and\nnon-distillable, that violate a Bell inequality. We prove that for all the\nstates violating this inequality there exist at least one splitting of the\nparties into two groups such that some pure-state entanglement can be\ndistilled, obtaining a connection between Bell inequalities and bipartite\ndistillable entanglement.", + "authors": "A. Acin", + "published": "2001-08-07", + "updated": "2001-08-07", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2308.07719v1", + "title": "The coherent measurement cost of coherence distillation", + "abstract": "Quantum coherence is an indispensable resource for quantum technological\napplications. It is known to be distillable from a noisy form using operations\nthat cannot create coherence. However, distillation exacts a hidden coherent\nmeasurement cost, whose extent has not previously been estimated. Here we show\nthat this cost (quantified by an equivalent number of Hadamard measurements) is\nrelated to what we call the irretrievable coherence: the difference between the\ncoherence of formation and the distillable coherence. We conjecture (and make\npartial progress towards proving) that when distilling from many copies of a\ngiven noisy coherent state, the coherent measurement cost scales extensively in\nthe number of copies, at an asymptotic rate exactly equalling the input's\nirretrievable coherence. This cost applies to any application whereof coherence\ndistillation is an incidental outcome (e.g. 
incoherent randomness extraction),\nbut the implications are more dramatic if pure coherence is the only desired\noutcome: the measurement cost may often be higher than the distilled yield, in\nwhich case coherence should rather be prepared afresh than distilled from a\nnoisy input.", + "authors": "Varun Narasimhachar", + "published": "2023-08-15", + "updated": "2023-08-15", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1910.02551v3", + "title": "Soft-Label Dataset Distillation and Text Dataset Distillation", + "abstract": "Dataset distillation is a method for reducing dataset sizes by learning a\nsmall number of synthetic samples containing all the information of a large\ndataset. This has several benefits like speeding up model training, reducing\nenergy consumption, and reducing required storage space. Currently, each\nsynthetic sample is assigned a single `hard' label, and also, dataset\ndistillation can currently only be used with image data.\n We propose to simultaneously distill both images and their labels, thus\nassigning each synthetic sample a `soft' label (a distribution of labels). Our\nalgorithm increases accuracy by 2-4% over the original algorithm for several\nimage classification tasks. Using `soft' labels also enables distilled datasets\nto consist of fewer samples than there are classes as each sample can encode\ninformation for multiple classes. For example, training a LeNet model with 10\ndistilled images (one per class) results in over 96% accuracy on MNIST, and\nalmost 92% accuracy when trained on just 5 distilled images.\n We also extend the dataset distillation algorithm to distill sequential\ndatasets including texts. We demonstrate that text distillation outperforms\nother methods across multiple datasets. For example, models attain almost their\noriginal accuracy on the IMDB sentiment analysis task using just 20 distilled\nsentences.\n Our code can be found at\n$\\href{https://github.com/ilia10000/dataset-distillation}{\\text{https://github.com/ilia10000/dataset-distillation}}$.", + "authors": "Ilia Sucholutsky, Matthias Schonlau", + "published": "2019-10-06", + "updated": "2020-05-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2205.02399v1", + "title": "Spot-adaptive Knowledge Distillation", + "abstract": "Knowledge distillation (KD) has become a well established paradigm for\ncompressing deep neural networks. The typical way of conducting knowledge\ndistillation is to train the student network under the supervision of the\nteacher network to harness the knowledge at one or multiple spots (i.e.,\nlayers) in the teacher network. The distillation spots, once specified, will\nnot change for all the training samples, throughout the whole distillation\nprocess. In this work, we argue that distillation spots should be adaptive to\ntraining samples and distillation epochs. We thus propose a new distillation\nstrategy, termed spot-adaptive KD (SAKD), to adaptively determine the\ndistillation spots in the teacher network per sample, at every training\niteration during the whole distillation period. As SAKD actually focuses on\n\"where to distill\" instead of \"what to distill\" that is widely investigated by\nmost existing works, it can be seamlessly integrated into existing distillation\nmethods to further improve their performance. 
Extensive experiments with 10\nstate-of-the-art distillers are conducted to demonstrate the effectiveness of\nSAKD for improving their distillation performance, under both homogeneous and\nheterogeneous distillation settings. Code is available at\nhttps://github.com/zju-vipa/spot-adaptive-pytorch", + "authors": "Jie Song, Ying Chen, Jingwen Ye, Mingli Song", + "published": "2022-05-05", + "updated": "2022-05-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2006.01683v1", + "title": "Channel Distillation: Channel-Wise Attention for Knowledge Distillation", + "abstract": "Knowledge distillation is to transfer the knowledge from the data learned by\nthe teacher network to the student network, so that the student has the\nadvantage of less parameters and less calculations, and the accuracy is close\nto the teacher. In this paper, we propose a new distillation method, which\ncontains two transfer distillation strategies and a loss decay strategy. The\nfirst transfer strategy is based on channel-wise attention, called Channel\nDistillation (CD). CD transfers the channel information from the teacher to the\nstudent. The second is Guided Knowledge Distillation (GKD). Unlike Knowledge\nDistillation (KD), which allows the student to mimic each sample's prediction\ndistribution of the teacher, GKD only enables the student to mimic the correct\noutput of the teacher. The last part is Early Decay Teacher (EDT). During the\ntraining process, we gradually decay the weight of the distillation loss. The\npurpose is to enable the student to gradually control the optimization rather\nthan the teacher. Our proposed method is evaluated on ImageNet and CIFAR100. On\nImageNet, we achieve 27.68% of top-1 error with ResNet18, which outperforms\nstate-of-the-art methods. On CIFAR100, we achieve surprising result that the\nstudent outperforms the teacher. Code is available at\nhttps://github.com/zhouzaida/channel-distillation.", + "authors": "Zaida Zhou, Chaoran Zhuge, Xinwei Guan, Wen Liu", + "published": "2020-06-02", + "updated": "2020-06-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1905.09747v2", + "title": "Adversarially Robust Distillation", + "abstract": "Knowledge distillation is effective for producing small, high-performance\nneural networks for classification, but these small networks are vulnerable to\nadversarial attacks. This paper studies how adversarial robustness transfers\nfrom teacher to student during knowledge distillation. We find that a large\namount of robustness may be inherited by the student even when distilled on\nonly clean images. Second, we introduce Adversarially Robust Distillation (ARD)\nfor distilling robustness onto student networks. In addition to producing small\nmodels with high test accuracy like conventional distillation, ARD also passes\nthe superior robustness of large networks onto the student. In our experiments,\nwe find that ARD student models decisively outperform adversarially trained\nnetworks of identical architecture in terms of robust accuracy, surpassing\nstate-of-the-art methods on standard robustness benchmarks. 
Finally, we adapt\nrecent fast adversarial training methods to ARD for accelerated robust\ndistillation.", + "authors": "Micah Goldblum, Liam Fowl, Soheil Feizi, Tom Goldstein", + "published": "2019-05-23", + "updated": "2019-12-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2302.09632v1", + "title": "HomoDistil: Homotopic Task-Agnostic Distillation of Pre-trained Transformers", + "abstract": "Knowledge distillation has been shown to be a powerful model compression\napproach to facilitate the deployment of pre-trained language models in\npractice. This paper focuses on task-agnostic distillation. It produces a\ncompact pre-trained model that can be easily fine-tuned on various tasks with\nsmall computational costs and memory footprints. Despite the practical\nbenefits, task-agnostic distillation is challenging. Since the teacher model\nhas a significantly larger capacity and stronger representation power than the\nstudent model, it is very difficult for the student to produce predictions that\nmatch the teacher's over a massive amount of open-domain training data. Such a\nlarge prediction discrepancy often diminishes the benefits of knowledge\ndistillation. To address this challenge, we propose Homotopic Distillation\n(HomoDistil), a novel task-agnostic distillation approach equipped with\niterative pruning. Specifically, we initialize the student model from the\nteacher model, and iteratively prune the student's neurons until the target\nwidth is reached. Such an approach maintains a small discrepancy between the\nteacher's and student's predictions throughout the distillation process, which\nensures the effectiveness of knowledge transfer. Extensive experiments\ndemonstrate that HomoDistil achieves significant improvements on existing\nbaselines.", + "authors": "Chen Liang, Haoming Jiang, Zheng Li, Xianfeng Tang, Bin Yin, Tuo Zhao", + "published": "2023-02-19", + "updated": "2023-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/9809078v2", + "title": "A rigorous treatment of distillable entanglement", + "abstract": "The notion of distillable entanglement is one of the fundamental concepts of\nquantum information theory. Unfortunately, there is an apparent mismatch\nbetween the intuitive and rigorous definitions of distillable entanglement. To\nbe precise, the existing rigorous definitions impose the constraint that the\ndistilation protocol produce an output of constant dimension. It is therefore\nconceivable that this unnecessary constraint might have led to underestimation\nof the true distillable entanglement. We give a new definition of distillable\nentanglement which removes this constraint, but could conceivably overestimate\nthe true value. Since the definitions turn out to be equivalent, neither\nunderestimation nor overestimation is possible, and both definitions are\narguably correct", + "authors": "Eric M. Rains", + "published": "1998-09-24", + "updated": "1998-10-12", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2304.04615v1", + "title": "A Survey on Recent Teacher-student Learning Studies", + "abstract": "Knowledge distillation is a method of transferring the knowledge from a\ncomplex deep neural network (DNN) to a smaller and faster DNN, while preserving\nits accuracy. 
Recent variants of knowledge distillation include teaching\nassistant distillation, curriculum distillation, mask distillation, and\ndecoupling distillation, which aim to improve the performance of knowledge\ndistillation by introducing additional components or by changing the learning\nprocess. Teaching assistant distillation involves an intermediate model called\nthe teaching assistant, while curriculum distillation follows a curriculum\nsimilar to human education. Mask distillation focuses on transferring the\nattention mechanism learned by the teacher, and decoupling distillation\ndecouples the distillation loss from the task loss. Overall, these variants of\nknowledge distillation have shown promising results in improving the\nperformance of knowledge distillation.", + "authors": "Minghong Gao", + "published": "2023-04-10", + "updated": "2023-04-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/0704.3661v1", + "title": "Complementarity, distillable secret key, and distillable entanglement", + "abstract": "We consider controllability of two conjugate observables Z and X by two\nparties with classical communication. The ability is specified by two\nalternative tasks, (i) agreement on Z and (ii) preparation of an eigenstate of\nX with use of an extra communication channel. We prove that their feasibility\nis equivalent to that of key distillation if the extra channel is quantum, and\nto that of entanglement distillation if it is classical. This clarifies the\ndistinction between two entanglement measures, distillable key and distillable\nentanglement.", + "authors": "Masato Koashi", + "published": "2007-04-27", + "updated": "2007-04-27", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0303009v2", + "title": "Security bounds in Quantum Cryptography using d-level systems", + "abstract": "We analyze the security of quantum cryptography schemes for $d$-level systems\nusing 2 or $d+1$ maximally conjugated bases, under individual eavesdropping\nattacks based on cloning machines and measurement after the basis\nreconciliation. We consider classical advantage distillation protocols, that\nallow to extract a key even in situations where the mutual information between\nthe honest parties is smaller than the eavesdropper's information. 
In this\nscenario, advantage distillation protocols are shown to be as powerful as\nquantum distillation: key distillation is possible using classical techniques\nif and only if the corresponding state in the entanglement based protocol is\ndistillable.", + "authors": "Antonio Acin, Nicolas Gisin, Valerio Scarani", + "published": "2003-03-03", + "updated": "2003-11-03", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1607.04311v1", + "title": "Defensive Distillation is Not Robust to Adversarial Examples", + "abstract": "We show that defensive distillation is not secure: it is no more resistant to\ntargeted misclassification attacks than unprotected neural networks.", + "authors": "Nicholas Carlini, David Wagner", + "published": "2016-07-14", + "updated": "2016-07-14", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2211.08071v2", + "title": "Knowledge Distillation for Detection Transformer with Consistent Distillation Points Sampling", + "abstract": "DETR is a novel end-to-end transformer architecture object detector, which\nsignificantly outperforms classic detectors when scaling up the model size. In\nthis paper, we focus on the compression of DETR with knowledge distillation.\nWhile knowledge distillation has been well-studied in classic detectors, there\nis a lack of researches on how to make it work effectively on DETR. We first\nprovide experimental and theoretical analysis to point out that the main\nchallenge in DETR distillation is the lack of consistent distillation points.\nDistillation points refer to the corresponding inputs of the predictions for\nstudent to mimic, and reliable distillation requires sufficient distillation\npoints which are consistent between teacher and student. Based on this\nobservation, we propose a general knowledge distillation paradigm for\nDETR(KD-DETR) with consistent distillation points sampling. Specifically, we\ndecouple detection and distillation tasks by introducing a set of specialized\nobject queries to construct distillation points. In this paradigm, we further\npropose a general-to-specific distillation points sampling strategy to explore\nthe extensibility of KD-DETR. Extensive experiments on different DETR\narchitectures with various scales of backbones and transformer layers validate\nthe effectiveness and generalization of KD-DETR. KD-DETR boosts the performance\nof DAB-DETR with ResNet-18 and ResNet-50 backbone to 41.4$\\%$, 45.7$\\%$ mAP,\nrespectively, which are 5.2$\\%$, 3.5$\\%$ higher than the baseline, and\nResNet-50 even surpasses the teacher model by $2.2\\%$.", + "authors": "Yu Wang, Xin Li, Shengzhao Wen, Fukui Yang, Wanping Zhang, Gang Zhang, Haocheng Feng, Junyu Han, Errui Ding", + "published": "2022-11-15", + "updated": "2022-11-16", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1912.12630v1", + "title": "Real-time Policy Distillation in Deep Reinforcement Learning", + "abstract": "Policy distillation in deep reinforcement learning provides an effective way\nto transfer control policies from a larger network to a smaller untrained\nnetwork without a significant degradation in performance. However, policy\ndistillation is underexplored in deep reinforcement learning, and existing\napproaches are computationally inefficient, resulting in a long distillation\ntime. 
In addition, the effectiveness of the distillation process is still\nlimited to the model capacity. We propose a new distillation mechanism, called\nreal-time policy distillation, in which training the teacher model and\ndistilling the policy to the student model occur simultaneously. Accordingly,\nthe teacher's latest policy is transferred to the student model in real time.\nThis reduces the distillation time to half the original time or even less and\nalso makes it possible for extremely small student models to learn skills at\nthe expert level. We evaluated the proposed algorithm in the Atari 2600 domain.\nThe results show that our approach can achieve full distillation in most games,\neven with compression ratios up to 1.7%.", + "authors": "Yuxiang Sun, Pooyan Fazli", + "published": "2019-12-29", + "updated": "2019-12-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2304.06461v2", + "title": "Multi-Mode Online Knowledge Distillation for Self-Supervised Visual Representation Learning", + "abstract": "Self-supervised learning (SSL) has made remarkable progress in visual\nrepresentation learning. Some studies combine SSL with knowledge distillation\n(SSL-KD) to boost the representation learning performance of small models. In\nthis study, we propose a Multi-mode Online Knowledge Distillation method (MOKD)\nto boost self-supervised visual representation learning. Different from\nexisting SSL-KD methods that transfer knowledge from a static pre-trained\nteacher to a student, in MOKD, two different models learn collaboratively in a\nself-supervised manner. Specifically, MOKD consists of two distillation modes:\nself-distillation and cross-distillation modes. Among them, self-distillation\nperforms self-supervised learning for each model independently, while\ncross-distillation realizes knowledge interaction between different models. In\ncross-distillation, a cross-attention feature search strategy is proposed to\nenhance the semantic feature alignment between different models. As a result,\nthe two models can absorb knowledge from each other to boost their\nrepresentation learning performance. Extensive experimental results on\ndifferent backbones and datasets demonstrate that two heterogeneous models can\nbenefit from MOKD and outperform their independently trained baseline. In\naddition, MOKD also outperforms existing SSL-KD methods for both the student\nand teacher models.", + "authors": "Kaiyou Song, Jin Xie, Shan Zhang, Zimeng Luo", + "published": "2023-04-13", + "updated": "2023-06-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.14827v1", + "title": "Sentence-Level or Token-Level? A Comprehensive Study on Knowledge Distillation", + "abstract": "Knowledge distillation, transferring knowledge from a teacher model to a\nstudent model, has emerged as a powerful technique in neural machine\ntranslation for compressing models or simplifying training targets. Knowledge\ndistillation encompasses two primary methods: sentence-level distillation and\ntoken-level distillation. In sentence-level distillation, the student model is\ntrained to align with the output of the teacher model, which can alleviate the\ntraining difficulty and give student model a comprehensive understanding of\nglobal structure. 
Differently, token-level distillation requires the student\nmodel to learn the output distribution of the teacher model, facilitating a\nmore fine-grained transfer of knowledge. Studies have revealed divergent\nperformances between sentence-level and token-level distillation across\ndifferent scenarios, leading to the confusion on the empirical selection of\nknowledge distillation methods. In this study, we argue that token-level\ndistillation, with its more complex objective (i.e., distribution), is better\nsuited for ``simple'' scenarios, while sentence-level distillation excels in\n``complex'' scenarios. To substantiate our hypothesis, we systematically\nanalyze the performance of distillation methods by varying the model size of\nstudent models, the complexity of text, and the difficulty of decoding\nprocedure. While our experimental results validate our hypothesis, defining the\ncomplexity level of a given scenario remains a challenging task. So we further\nintroduce a novel hybrid method that combines token-level and sentence-level\ndistillation through a gating mechanism, aiming to leverage the advantages of\nboth individual methods. Experiments demonstrate that the hybrid method\nsurpasses the performance of token-level or sentence-level distillation methods\nand the previous works by a margin, demonstrating the effectiveness of the\nproposed hybrid method.", + "authors": "Jingxuan Wei, Linzhuang Sun, Yichong Leng, Xu Tan, Bihui Yu, Ruifeng Guo", + "published": "2024-04-23", + "updated": "2024-04-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1504.05965v2", + "title": "Qutrit Magic State Distillation Tight in Some Directions", + "abstract": "Magic state distillation is a crucial component in the leading approaches to\nimplementing universal fault tolerant quantum computation, with existing\nprotocols for both qubit and higher dimensional systems. Early work focused on\ndetermining the region of distillable states for qubit protocols, yet\ncomparatively little is known about which states can be distilled and with what\ndistillable region for d>2. Here we focus on d=3 and present new four-qutrit\ndistillation schemes that improve upon the known distillable region, and\nachieve distillation tight to the boundary of undistillable states for some\nclasses of state. As a consequence of recent results, this implies that there\nis a family of quantum states that enable universality if and only if they\nexhibit contextuality with respect to stabilizer measurements. We also identify\na new routine whose fixed point is a magic state with maximal sum-negativity\ni.e., it is maximally non-stabilizer in a specific sense.", + "authors": "Hillary Dawkins, Mark Howard", + "published": "2015-04-22", + "updated": "2015-09-21", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2011.09969v1", + "title": "Neural network algorithm and its application in reactive distillation", + "abstract": "Reactive distillation is a special distillation technology based on the\ncoupling of chemical reaction and distillation. It has the characteristics of\nlow energy consumption and high separation efficiency. 
However, because the\ncombination of reaction and separation produces highly nonlinear robust\nbehavior, the control and optimization of the reactive distillation process\ncannot use conventional methods, but must rely on neural network algorithms.\nThis paper briefly describes the characteristics and research progress of\nreactive distillation technology and neural network algorithms, and summarizes\nthe application of neural network algorithms in reactive distillation, aiming\nto provide reference for the development and innovation of industry technology.", + "authors": "Huihui Wang, Ruyang Mo", + "published": "2020-11-16", + "updated": "2020-11-16", + "primary_cat": "cs.NE", + "cats": [ + "cs.NE", + "cs.LG", + "I.2.8" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1903.04197v7", + "title": "Structured Knowledge Distillation for Dense Prediction", + "abstract": "In this work, we consider transferring the structure information from large\nnetworks to compact ones for dense prediction tasks in computer vision.\nPrevious knowledge distillation strategies used for dense prediction tasks\noften directly borrow the distillation scheme for image classification and\nperform knowledge distillation for each pixel separately, leading to\nsub-optimal performance. Here we propose to distill structured knowledge from\nlarge networks to compact networks, taking into account the fact that dense\nprediction is a structured prediction problem. Specifically, we study two\nstructured distillation schemes: i) pair-wise distillation that distills the\npair-wise similarities by building a static graph; and ii) holistic\ndistillation that uses adversarial training to distill holistic knowledge. The\neffectiveness of our knowledge distillation approaches is demonstrated by\nexperiments on three dense prediction tasks: semantic segmentation, depth\nestimation and object detection. Code is available at: https://git.io/StructKD", + "authors": "Yifan Liu, Changyong Shun, Jingdong Wang, Chunhua Shen", + "published": "2019-03-11", + "updated": "2020-06-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2301.01615v2", + "title": "StereoDistill: Pick the Cream from LiDAR for Distilling Stereo-based 3D Object Detection", + "abstract": "In this paper, we propose a cross-modal distillation method named\nStereoDistill to narrow the gap between the stereo and LiDAR-based approaches\nvia distilling the stereo detectors from the superior LiDAR model at the\nresponse level, which is usually overlooked in 3D object detection\ndistillation. The key designs of StereoDistill are: the X-component Guided\nDistillation~(XGD) for regression and the Cross-anchor Logit Distillation~(CLD)\nfor classification. In XGD, instead of empirically adopting a threshold to\nselect the high-quality teacher predictions as soft targets, we decompose the\npredicted 3D box into sub-components and retain the corresponding part for\ndistillation if the teacher component pilot is consistent with ground truth to\nlargely boost the number of positive predictions and alleviate the mimicking\ndifficulty of the student model. For CLD, we aggregate the probability\ndistribution of all anchors at the same position to encourage the highest\nprobability anchor rather than individually distill the distribution at the\nanchor level. 
Finally, our StereoDistill achieves state-of-the-art results for\nstereo-based 3D detection on the KITTI test benchmark and extensive experiments\non KITTI and Argoverse Dataset validate the effectiveness.", + "authors": "Zhe Liu, Xiaoqing Ye, Xiao Tan, Errui Ding, Xiang Bai", + "published": "2023-01-04", + "updated": "2023-01-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2205.09153v1", + "title": "ERNIE-Search: Bridging Cross-Encoder with Dual-Encoder via Self On-the-fly Distillation for Dense Passage Retrieval", + "abstract": "Neural retrievers based on pre-trained language models (PLMs), such as\ndual-encoders, have achieved promising performance on the task of open-domain\nquestion answering (QA). Their effectiveness can further reach new\nstate-of-the-arts by incorporating cross-architecture knowledge distillation.\nHowever, most of the existing studies just directly apply conventional\ndistillation methods. They fail to consider the particular situation where the\nteacher and student have different structures. In this paper, we propose a\nnovel distillation method that significantly advances cross-architecture\ndistillation for dual-encoders. Our method 1) introduces a self on-the-fly\ndistillation method that can effectively distill late interaction (i.e.,\nColBERT) to vanilla dual-encoder, and 2) incorporates a cascade distillation\nprocess to further improve the performance with a cross-encoder teacher.\nExtensive experiments are conducted to validate that our proposed solution\noutperforms strong baselines and establish a new state-of-the-art on\nopen-domain QA benchmarks.", + "authors": "Yuxiang Lu, Yiding Liu, Jiaxiang Liu, Yunsheng Shi, Zhengjie Huang, Shikun Feng Yu Sun, Hao Tian, Hua Wu, Shuaiqiang Wang, Dawei Yin, Haifeng Wang", + "published": "2022-05-18", + "updated": "2022-05-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + } + ], + [ + { + "url": "http://arxiv.org/abs/2403.19221v1", + "title": "Towards Multimodal Video Paragraph Captioning Models Robust to Missing Modality", + "abstract": "Video paragraph captioning (VPC) involves generating detailed narratives for\nlong videos, utilizing supportive modalities such as speech and event\nboundaries. However, the existing models are constrained by the assumption of\nconstant availability of a single auxiliary modality, which is impractical\ngiven the diversity and unpredictable nature of real-world scenarios. To this\nend, we propose a Missing-Resistant framework MR-VPC that effectively harnesses\nall available auxiliary inputs and maintains resilience even in the absence of\ncertain modalities. Under this framework, we propose the Multimodal VPC (MVPC)\narchitecture integrating video, speech, and event boundary inputs in a unified\nmanner to process various auxiliary inputs. Moreover, to fortify the model\nagainst incomplete data, we introduce DropAM, a data augmentation strategy that\nrandomly omits auxiliary inputs, paired with DistillAM, a regularization target\nthat distills knowledge from teacher models trained on modality-complete data,\nenabling efficient learning in modality-deficient environments. Through\nexhaustive experimentation on YouCook2 and ActivityNet Captions, MR-VPC has\nproven to deliver superior performance on modality-complete and\nmodality-missing test data. 
This work highlights the significance of developing\nresilient VPC models and paves the way for more adaptive, robust multimodal\nvideo understanding.", + "authors": "Sishuo Chen, Lei Li, Shuhuai Ren, Rundong Gao, Yuanxin Liu, Xiaohan Bi, Xu Sun, Lu Hou", + "published": "2024-03-28", + "updated": "2024-03-28", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "Distillation", + "gt": "Video Paragraph Captioning (VPC) VPC is a widely studied video-language understanding task involving producing paragraph-level captions for long videos lasting for minutes (Park et al., 2019). Existing VPC models commonly incorporate additional auxiliary information alongside video frames as inputs, such as transcribed speech (Yang et al., 2023b) and event boundaries (Zhou et al., 2018b; Yamazaki et al., 2022a,b, etc). Liu and Wan (2021) and Song et al. (2021) build VPC models for raw videos without event boundaries, but their models still underperform those utilizing auxiliary modalities. To the best of our knowledge, our work takes the first step to utilize both transcribed speech and event boundaries for VPC in an end-to-end manner, and we are the first to study the robustness of VPC models to noisy inputs with missing modalities. Robustness to Missing Modality As multimodal neural networks are vulnerable to missing modality (Ma et al., 2022), recent years have seen a surge of studies on enhancing model robustness on modality-incomplete data across various multimodal tasks (Woo et al., 2022; Lee et al., 2023; Wei et al., 2023; Yuan et al., 2023, etc). In terms of methodology, researchers have explored approaches such as modality fusion strategy search (Ma et al., 2022), data augmentation in the form of modality dropout (McKinzie et al., 2023), and regularization objectives (Woo et al., 2022; McKinzie et al., 2023). However, existing efforts are limited to relatively simple classification tasks, and model robustness to missing modality in more complex language generation tasks like VPC is yet to be explored. We have found that simply applying the existing approaches in other tasks does not achieve satisfactory results in VPC and bridge this research gap by developing training strategies customized for VPC in our MR-VPC framework, which will be discussed in \u00a7 3 and \u00a7 4.", + "pre_questions": [], + "main_content": "Introduction Video Paragraph Captioning (VPC) (Park et al., 2019) is a fundamental video-language understanding task that requires the model to generate paragraph-level captions for minutes-long videos. Besides raw video frames, there exist several auxil1Our code is available at https://github.com/ lancopku/MR-VPC. 0% 10% 30% 50% 70% 90% 100% Percentage of missing ASR text 0 10 20 30 40 50 60 70 Validation CIDEr 36.33 68.25 60.28 47.13 33.46 17.38 7.05 3.39 69.5165.33 60.12 48.22 42.62 39.81 39.03 Robustness to ASR Modality Missing on YouCook2 MR-VPC (Ours) Vid2Seq (SOTA) Video-Only Baseline Figure 1: The performance of the previous SOTA model Vid2Seq drastically declines as the percentage of ASR text missing grows. In contrast, our MR-VPC consistently achieves superior performance in both modalitycomplete and modality-missing environments. 
iary modalities that can potentially serve as supplementary inputs, such as speech inputs utilized in Vid2Seq (Yang et al., 2023b), flow features used in MART (Lei et al., 2020), and event boundaries (the start and end timestamps of the events) leveraged in various models (Zhou et al., 2018b; Yamazaki et al., 2022a,b, etc). Despite the growing performance of these models, we notice that they assume to have access to the same auxiliary modality during both training and testing, which contradicts reality. In real-world scenarios, the availability of modalities undergoes dynamic changes, which leads to the following two issues for the models developed under the unrealistic assumption. Issue-1: Under-utilization of available modalities. Since a specific auxiliary modality is solely considered during training, the models fail to leverage unseen modalities that may emerge at test time. For example, VLCap and VLTinT (Yamazaki et al., 2022a,b) cannot employ transcribed speech, which is proven extremely beneficial in Vid2Seq (Yang arXiv:2403.19221v1 [cs.CV] 28 Mar 2024 et al., 2023b); conversely, Vid2Seq cannot make use of event boundaries, which contain rich information about the temporal structure of videos. Issue-2: Vulnerability to missing modality in noisy environments. The performance of these models may degrade drastically when the required auxiliary modality is absent or of low quality, which is common in real-world situations. For instance, Liu and Wan (2021) find that the VPC models relying on event boundaries yield significantly lower performance when the ground-truth event boundaries are replaced with learned ones. Besides, we observe that the state-of-the-art model Vid2Seq is vulnerable to the missing of automatically transcribed speech (ASR texts) as depicted in Figure 1. In response to issue-1, we design a multimodal VPC (MVPC) architecture to integrate the inputs from multiple modalities. Concretely, MVPC first encodes the two auxiliary modalities (i.e., tokenized event boundaries and transcribed speech) into a unified textual feature space using a shared text encoder. Then, the textual features are fused with the video features before entering the language decoder to generate paragraph captions. Further, to alleviate issue-2, we devise two training strategies to enhance the robustness of our model to missing modalities. Firstly, we simulate the absence of auxiliary modalities by randomly dropping the inputs (named DropAM) during training. This approach reduces the model\u2019s reliance on auxiliary inputs and improves generalization in noisy situations. Second, to take full advantage of the auxiliary modalities, we propose to perform multimodal knowledge distillation (Hinton et al., 2015) (referred to as DistillAM) where the model trained on modality-complete data acting as the teacher and the model operating in modality-missing situations learning as the student. By combining MVPC, DropAM and DistillAM, we present a Multimodal noise-Resistant Video Paragraph Captioning framework (MR-VPC). Experimental results on two benchmarks demonstrate the superiority of MR-VPC in handling both modality-complete and modality-incomplete data. Notably, MR-VPC is tailored for the challenging VPC task and substantially outperforms prior robustness-oriented methods studied for classification tasks. 
To our knowledge, this work pioneers formulating VPC as a multimodal learning problem with noisy inputs and presents practical solutions that enable VPC systems to utilize inputs from diverse modalities while remaining robust even when parts of them are missing. 3.1 Problem Formulation An instance in a VPC dataset can be formulated as $(V_i, A_i, E_i, C_i)$, where V, A, E, C stand for video frames, ASR texts, event boundaries, and the caption, respectively. An example from the YouCook2 (Zhou et al., 2018a) dataset is illustrated in Figure 2. [Figure 2: The composition of an instance in the multimodal VPC task from the validation set of YouCook2, showing its video frames, event boundaries, transcription (ASR), and paragraph captions.] We assume that the video modality V is always available at test time and the auxiliary modalities A and E are likely to be affected by noise in the wild. Given $N_A$ and $N_E$ as the noise functions for A and E (e.g., random missing in the context of our study on missing modality), respectively, for a model F(V, A, E) trained on the clean training set $D_{tr} = \{(V_i, A_i, E_i, C_i), 1 \le i \le n_{tr}\}$, where $n_{tr}$ is the size of the training data, our target is to maximize the performance on the noisy test set $D_{te} = \{(V_i, N_A(A_i), N_E(E_i), C_i), 1 \le i \le n_{te}\}$, where $n_{te}$ is the size of the test data. 3.2 MVPC Model Framework Overview Overall, as illustrated in Figure 3, our multimodal video paragraph captioning (MVPC) model consists of four modules: the video encoder $E_v$ to encode V, the text encoder $E_t$ to encode the concatenation of A and E, a fusion module $E_f$ that merges visual and textual features, and a text decoder $D_t$ that generates the caption C. Video Encoder The video encoder $E_v$ encodes the video sequence of F frames $x_v \in \mathbb{R}^{F \times H \times W \times C}$, where H, W and C are the height, width, and the number of channels, respectively, and outputs the video embedding sequence $E_v(x_v) \in \mathbb{R}^{F \times d}$, where d is the embedding size. Concretely, we use a CLIP ViT-L/14 (Radford et al., 2021) image encoder to encode each frame and then feed the frame features into a 12-layer Transformer (Vaswani et al., 2017) for temporal interaction. Text Encoder To resolve issue-1, we expect the model to be capable of modeling both A and E inputs end to end. Thus, before feeding A and E into the text encoder $E_t$, we adopt the relative time tokenization (Yang et al., 2023b) to map continuous timestamps into discrete time tokens denoting the percentage progress. Then $E_t$ transforms the concatenation of the ASR sequence and event boundary sequence $x_t$, consisting of n tokens in total, into the text embedding sequence $E_t(x_t) \in \mathbb{R}^{n \times d}$.
Table 1: The performance of the vanilla MVPC model on YouCook2 and ActivityNet Captions in different modality-missing settings.
Test Modalities | YouCook2 METEOR / CIDEr | ActivityNet METEOR / CIDEr
V+E+A | 23.11 / 74.13 | 14.09 / 42.29
V+A | 21.05 (-2.06) / 59.55 (-14.58) | 12.24 (-1.85) / 29.71 (-12.58)
V+E | 12.46 (-10.65) / 8.77 (-65.36) | 12.91 (-1.18) / 43.14 (+0.85)
V | 6.79 (-16.32) / 3.42 (-70.71) | 11.64 (-2.45) / 26.08 (-16.21)
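As an illustration of the relative time tokenization and the unified text-side input described above, the following minimal Python sketch (not the authors' code; the 100-bin quantization and the "<t>" / "[SEP]" serialization format are assumptions for illustration) shows how event boundaries could be mapped to discrete percentage-progress tokens and concatenated with the ASR text before entering the shared text encoder:

```python
def to_time_token(t_sec: float, duration_sec: float, n_bins: int = 100) -> str:
    """Quantize an absolute timestamp into a discrete relative-time token (percentage progress)."""
    frac = min(max(t_sec / max(duration_sec, 1e-6), 0.0), 1.0)
    return f"<{int(frac * (n_bins - 1))}>"

def build_text_input(asr_text: str, events: list[tuple[float, float]], duration_sec: float) -> str:
    """Serialize ASR text and tokenized event boundaries into one sequence for the text encoder."""
    boundaries = " ; ".join(
        f"{to_time_token(s, duration_sec)} {to_time_token(e, duration_sec)}" for s, e in events
    )
    return f"{asr_text} [SEP] {boundaries}"

# Example: a 120-second video with two annotated events.
print(build_text_input("combine lemon juice sumac garlic salt and oil in a bowl.",
                       [(23.0, 34.0), (45.0, 79.0)], duration_sec=120.0))
```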
Fusion Module and Text Decoder At the end of the workflow, the text decoder $D_t$ generates the target caption sequence in an auto-regressive manner, conditioned on the encoder embeddings produced by the fusion module $E_f$ merging $E_v(x_v)$ and $E_t(x_t)$. Specifically, for $E_f$, we adopt a parameter-free concatenation operation; for $E_t$ and $D_t$, we employ the T5v1.1-base encoder-decoder model (Raffel et al., 2020). Weight Initialization To benefit from large-scale pretraining, we initialize the model with the Vid2Seq weights pretrained on YT-Temporal-1B (Zellers et al., 2022) [Footnote 2: Available at this link.]. Note that our work differs from Vid2Seq in terms of the task context and research goal. We aim at the VPC task that generates textual paragraph-level captions C from the input modalities V, A and E, where A and E are likely to be missing, while Vid2Seq is originally designed for the dense video captioning task where the inputs are V and A (without considering missing modality) and the outputs are C and E. To establish a baseline for comparison, we re-implement Vid2Seq and fine-tune its pretrained weights for the VPC task (details in Appendix B). This allows us to evaluate the performance improvement achieved by our proposed framework. Note that MVPC is not a simple extension of Vid2Seq, as our general framework to incorporate A and E unitedly is agnostic to the underlying structure and applies to other vision-language foundation models. [Figure 3: The overview diagram of our MVPC (multimodal video paragraph captioning) framework: the video encoder $E_v$ over sampled frames, the text encoder $E_t$ over the ASR text and tokenized event boundaries, the fusion module $E_f$, and the text decoder $D_t$.] 3.3 Training Strategies of MR-VPC As the vanilla training of MVPC does not consider potential noise in the inference stage, it suffers from severe performance drops facing missing modality (issue-2), as shown in Table 1. For instance, the absence of A results in a 65.36 (88.17% relative) CIDEr drop on YouCook2; the missing of E causes a 12.58 (29.75% relative) CIDEr decline on ActivityNet [Footnote 3: We find that the ASR data of ActivityNet contains little useful cues and shows small negative effects, so we nullify the ASR input of ActivityNet at test time later.]. In light of this weakness, we explore the following training strategies to enhance the model\u2019s resilience to missing modality (the model trained with them is referred to as MR-VPC later). 3.3.1 DropAM: Drop Auxiliary Modalities Since the missing modality can be viewed as a distribution shift from the training data, a fundamental idea to enhance model robustness is simulating the noise during training. To this end, we randomly drop the auxiliary modalities A and E to reduce the dependence of the model on them. Specifically, we transform the original training set $D_{tr}$ to $\hat{D}_{tr} = \{(V_i, \hat{N}_A(A_i), \hat{N}_E(E_i), C_i), 1 \le i \le n_{tr}\}$, in which $\hat{N}_A$ and $\hat{N}_E$ are proxy noise functions that randomly replace $A_i$ and $E_i$ with a default null character at probabilities $p_A$ and $p_E$, respectively: $\hat{N}_A(A_i) = \begin{cases} '' & p \le p_A \\ A_i & p > p_A \end{cases}, \quad \hat{N}_E(E_i) = \begin{cases} '' & p \le p_E \\ E_i & p > p_E \end{cases}$ (1), where p is a random variable uniformly drawn from the range [0, 1].
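A minimal sketch of the DropAM noise functions in Eq. (1) (illustrative only, not the authors' implementation; whether the two modalities share a single random draw is not stated, so this sketch draws independently for A and E):

```python
import random

def drop_am(asr_text: str, event_str: str, p_a: float = 0.5, p_e: float = 0.5,
            rng: random.Random | None = None) -> tuple[str, str]:
    """Proxy noise functions of Eq. (1): each auxiliary modality is replaced
    by an empty string with its configured probability during training."""
    rng = rng or random
    asr_out = "" if rng.random() <= p_a else asr_text
    evt_out = "" if rng.random() <= p_e else event_str
    return asr_out, evt_out

# During training, each sample (V_i, A_i, E_i, C_i) passes its A_i and E_i through drop_am
# before tokenization, which turns D_tr into the perturbed training set of Section 3.3.1.
```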
We use $p_A = p_E = 0.5$ as this value works generally well in practice. Please see the discussion about their effects in Appendix D. 3.3.2 DistillAM: Learning from the Teacher with Modality-Complete Data Solely applying DropAM turns the model training into a multitask learning process involving subtasks with different input conditions, which possibly adds to the learning difficulty and compromises the performance on modality-complete data. Therefore, we resort to knowledge distillation (Hinton et al., 2015), a learning paradigm that transfers the knowledge from teacher models with better conditions, such as more training data and a larger number of parameters, to student models without these advantages. In our problem, we consider the vanilla MVPC model trained on the modality-complete training set $D_{tr}$ as the teacher model $F_t$, and our goal is to transfer the knowledge learned by $F_t$ to the MR-VPC model that likely faces missing modality as the student model $F_s$. In early trials, we have found that distilling from word-level logits (WordKD) achieves limited performance gains in our task. Therefore, inspired by the sequence-level knowledge distillation (SeqKD) (Kim and Rush, 2016) studied in machine translation, we create a new training set $D_{kd}$ by replacing the ground-truth caption C with the predictions given by $F_t$ based on the modality-complete data: $D_{kd} = \{(V_i, A_i, E_i, F_t(V_i, A_i, E_i)), 1 \le i \le n_{tr}\}$ (2), and then construct the augmented training set $D_{aug} = D_{tr} \cup D_{kd}$ by merging $D_{kd}$ and the original training data $D_{tr}$. It is notable that this procedure, named DistillAM, is orthogonal to the noise simulation process DropAM in \u00a7 3.3.1, so they can be applied together, i.e., the random noise can be injected into the augmented training data $D_{aug}$ in the training phase in the way stated in \u00a7 3.3.1. 3.3.3 Connection to Prior Strategies for Multimodal Classification Tasks Although MASD (McKinzie et al., 2023), the state-of-the-art approach to enhance model robustness to missing modality in classification problems, also takes the form of modality dropout and knowledge distillation, it differs from our solutions in essence. Concretely, MASD performs self-distillation, namely aligning the predicted probabilities on modality-complete and modality-incomplete data output by the same model under training. In contrast, we use a fixed teacher model trained on modality-complete data, which facilitates the efficient learning of the student model in the challenging VPC task. We will show the advantage of our MR-VPC over MASD and its variant MASD+WiSE-FT (McKinzie et al., 2023) in \u00a7 4.2.2. 4 Experiments 4.1 Experimental Setup Evaluation Protocol Following Yang et al. (2023b), we use CIDEr (C) (Vedantam et al., 2015) and METEOR (M) (Banerjee and Lavie, 2005) metrics to evaluate the accuracy of generated captions. For measuring diversity, we use 4-gram repetition (R@4) (Xiong et al., 2018) following Liu and Wan (2021) and Yamazaki et al. (2022a,b). Besides these metrics based on n-gram matching commonly used in previous works, we also report advanced model-based metrics in \u00a7 5.1. Benchmarks We conduct main experiments on YouCook2 (Zhou et al., 2018a) and ActivityNet Captions (Krishna et al., 2017), two widely studied VPC benchmarks containing paragraph-level captions and annotated event boundaries. We report the evaluation metrics on the validation set of YouCook2 and the as-test split of ActivityNet Captions (see Appendix A for details).
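The DistillAM construction of Eq. (2) can be summarized by the short sketch below (a sketch only, not the released code); `teacher_generate` is a hypothetical placeholder standing for beam-search decoding with the frozen modality-complete teacher:

```python
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class Sample:
    video: Any     # V_i: sampled video frames
    asr: str       # A_i: transcribed speech
    events: str    # E_i: serialized event boundaries
    caption: str   # C_i: paragraph caption (ground truth or teacher prediction)

def build_distillam_set(train_set: List[Sample],
                        teacher_generate: Callable[[Any, str, str], str]) -> List[Sample]:
    """Sequence-level KD (Eq. 2): replace ground-truth captions with the teacher's
    predictions on modality-complete inputs, then merge with the original data."""
    d_kd = [Sample(s.video, s.asr, s.events, teacher_generate(s.video, s.asr, s.events))
            for s in train_set]
    # D_aug is the union of D_tr and D_kd; DropAM is still applied on the fly during training.
    return train_set + d_kd
```

Training the student on this merged set, with DropAM applied on the fly, corresponds to the combined MR-VPC recipe described above.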
Acquisition of ASR Data For ActivityNet Captions, we adopt the ASR data provided by Iashin and Rahtu (2020) from the YouTube ASR system. For YouCook2, we obtain the ASR data using the whisper-timestamped tool (Louradour, 2023) based on Whisper (Radford et al., 2022) (the small.en model with 244M parameters) and dynamic time warping (Giorgino, 2009). Model Training and Inference We train the model for 40 epochs on YouCook2 and 20 epochs on ActivityNet Captions using a batch size of 32. The model is trained with the Adam (Kingma and Ba, 2015) optimizer to minimize cross-entropy loss with an initial learning rate of 2e-4 with cosine annealing. For training efficiency, we freeze the image encoder in our experiments unless otherwise mentioned, so the number of trainable parameters is 314M. The weight decay is 5e-2 and we clip the maximum norm of the gradient to 1.0. We uniformly sample 100 frames at resolution 224\u00d7224 pixels for the video input and the ASR text sequence is truncated at the max length of 1000. Temporally consistent random spatial augmentation (Qian et al., 2021) is applied. The inference beam search size is 4 and the repetition penalty is 1.2. See more details in Appendix B. Model Training Strategies Test Modalities DropAM DistillAM V+E+A V+E V+A V MVPC % % 74.13/23.11 8.77/12.46 59.55/21.05 3.42/6.79 ! % 60.40/22.67 35.17/16.94 64.87/22.54 36.73/16.53 MR-VPC ! ! 69.51/22.83 39.03/16.97 69.37/22.59 38.37/16.86 Table 2: The effect of our training strategies with different available modalities at test time on the YouCook2 dataset. CIDEr / METEOR metrics are reported. Evaluation Settings We mainly report results in three representative test settings: (1) the modalitycomplete setting where the auxiliary modalities A and E are not affected by any noise; (2) the video-only setting where both A and E are missing, which is a harsh but realistic setting (in the real world, most users do not enter the video\u2019s event boundaries E; A is also possibly missing, e.g., when the ASR system does not support the conversation language); (3) the random-missing setting where A and E are both randomly missing at the probability of 50% independently. Baselines We compare our models with a wide array of baselines and categorize them according to the input modalities in their original settings: \u25cfV: The Vid2Seq model finetuned on only the video modality, named Vid2Seq (V); SoftNMS (Bodla et al., 2017), ESGN (Mun et al., 2019), Memory Transformer (Song et al., 2021), and VPCSum (Liu and Wan, 2021); MART, MARTCOOT, Vanilla Transformer, and Transformer-XL. The last four models use event boundaries generated by ESGN at test time as done in Liu and Wan (2021). \u25cfV+E: VLTinT (Yamazaki et al., 2022b), VLCap (Yamazaki et al., 2022a), MART (Lei et al., 2020), MARTCOOT (Ging et al., 2020), Vanilla Transformer (Zhou et al., 2018b), and TransformerXL (Dai et al., 2019). \u25cfV+A: Vid2Seq (Yang et al., 2023b). 4.2 Results and Analysis 4.2.1 Comparing MVPC and MR-VPC Our training strategies remarkably boost the model\u2019s robustness to missing modality while maintaining the performance in the modalitycomplete setting. Before comparing our model with baselines, we first examine the effectiveness of our training strategies described in \u00a7 3.3. 
As the results displayed in Table 2, the vanilla MVPC model without these training strategies is extremely susceptible to missing modality at test time, but the MR-VPC model equipped with these techniques shows substantially improved robustness to missing modality with only minimal BERTScore\u2191 YouCook2 ActivityNet MVPC 82.37 91.72 MR-VPC 91.30 94.16 Table 3: The average BERTScore similarities between captions generated in modality-complete and video-only test scenarios. 40 30 20 10 0 10 20 30 40 TSNE Dim 1 40 30 20 10 0 10 20 30 TSNE Dim 2 Captions Generated by the MVPC Model Modality-Complete Modality-Missing (a) Captions generated by the vanilla MVPC model. 30 20 10 0 10 20 30 40 TSNE Dim 1 40 20 0 20 40 TSNE Dim 2 Captions Generated by the MR-VPC Model Modality-Complete Modality-Missing (b) Captions generated by the MR-VPC model. Figure 4: Visualization of the SimCSE embeddings of the captions generated under modality-complete and modality-missing (video-only) scenarios. performance sacrifice on the modality-complete test data. For instance, MVPC disastrously fails in the video-only setting (the CIDEr falls to 3.42), while MR-VPC yields a CIDEr value of 38.37. We also affirm the validity of each strategy by comparing MR-VPC with the model trained with only the DropAM strategy (the last two rows of Table 2). As shown, although DropAM boosts the model robustness on modality-incomplete data, it significantly hurts the performance on modalitycomplete data (the CIDEr declines from 74.13 to 60.40); DistillAM not only further advances the robustness to missing modality, but also help preserve the performance in the modality-complete setting, as it raises the CIDEr metric to 69.51. Model YouCook2 ActivityNet C \u2191 M \u2191 R@4 \u2193 C \u2191 M \u2191 R@4 \u2193 MVPC (Ours) 74.13 23.11 0.82 43.14 13.91 0.67 MR-VPC (Ours) 69.51 22.83 0.57 41.01 13.84 0.51 Baselines Vid2Seq 68.25 23.01 0.75 30.77 12.51 0.82 Vid2Seq (V) 36.33 16.79 0.79 28.87 12.38 0.57 VLTinT 48.70 17.94 4.29 31.13 17.97 4.75 VLCap 49.41 17.95 5.16 30.29 17.48 4.18 MART 35.74 15.90 4.39 22.16 15.57 5.44 MARTCOOT 46.06 18.17 6.30 28.19 15.99 6.64 Vanilla Trans. 38.00 11.55 21.33 15.54 7.45 Memory Trans. 26.55 15.64 2.75 Trans.-XL 26.40 14.80 21.71 14.91 8.79 VPCSum 23.92 15.11 0.65 24.33 15.84 1.54 Table 4: Evaluation results under the modality-complete setting. \u2191indicates larger is better and \u2193indicates lower is better. The best result is highlighted in bold. MR-VPC shows higher prediction consistency between modality-complete and modalitymissing scenarios. To intuitively understand the impact of our training strategies, we compare the BERTScore (Zhang et al., 2019) similarities between the captions generated on modality-complete and video-only data by the vanilla MVPC and MR-VPC models. As listed in Table 3, MR-VPC exhibits substantially higher similarity scores, which indicates that it is capable of generating more consistent predictions, regardless of the availability of auxiliary modalities. Furthermore, we visualize the SimCSE embeddings (Gao et al., 2021) 4 of the generated captions on YouCook2 using t-SNE (Van der Maaten and Hinton, 2008) in Figure 4, where we observe that the captions generated by MVPC form two distinct clusters depending on whether modality-missing occurs, but those produced by MR-VPC appear in pairs and seem hard to distinguish based on the test scenario. The visualization further proves that DropAM and DistillAM contribute to the consistency of the predictions. 
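As a concrete note on how the prediction-consistency numbers of Table 3 can be computed, the following sketch uses the open-source bert-score package (an assumption; the exact scoring configuration used by the authors is not specified here) to average BERTScore F1 between captions generated with complete auxiliary inputs and captions generated for the same videos in the video-only setting:

```python
from bert_score import score

def prediction_consistency(caps_modality_complete: list[str], caps_video_only: list[str]) -> float:
    """Average BERTScore F1 between the two sets of generated captions,
    paired by video; higher means more consistent predictions (cf. Table 3)."""
    _, _, f1 = score(caps_video_only, caps_modality_complete, lang="en")
    return 100.0 * f1.mean().item()  # scaled to a percentage, matching the scale of Table 3
```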
4.2.2 Comparison with Advanced Systems Our MVPC and MR-VPC obtain superior performance in the modality-complete setting. We present the evaluation results in the modalitycomplete setting in Table 4 and observe that our models markedly advance the state-of-the-art on most metrics. In terms of captioning accuracy, we elevate the CIDEr metric from 68.25 (Vid2Seq) to 74.13 on YouCook2 and from 31.13 (VLTinT) to 43.14 on ActivityNet; regarding diversity, we achieve the lowest R@4 repetition scores below 1.0. These results support the necessity to fully leverage 4We use the unsup-simcse-roberta-large model. Model YouCook2 ActivityNet C \u2191 M \u2191 R@4 \u2193 C \u2191 M \u2191 R@4 \u2193 MVPC (Ours) 3.42 6.79 2.31 26.08 11.64 0.60 MR-VPC (Ours) 38.37 16.86 0.57 31.37 12.06 0.58 Baselines Vid2Seq 3.39 6.81 2.80 30.01 12.18 0.73 Vid2Seq (V) 36.33 16.79 0.79 28.87 12.38 0.58 Memory Trans. 26.55 15.64 2.75 VPCSum 23.92 15.11 0.65 24.33 15.84 1.54 SoftNMS 18.18 13.67 4.94 22.58 14.93 10.17 ESGN 21.85 15.74 6.51 17.01 13.37 4.94 Vanilla Trans. 20.95 15.11 7.04 16.88 13.37 2.85 Trans.XL 14.24 12.67 3.20 20.73 14.89 7.45 MART 16.56 13.44 4.63 20.16 14.94 6.09 COOT 19.67 14.21 5.99 21.83 14.67 1.54 Table 5: Evaluation results under the video-only setting. Model YouCook2 ActivityNet C \u2191 M \u2191 R@4 \u2193 C \u2191 M \u2191 R@4 \u2193 MVPC (Ours) 33.31 15.70 1.55 33.55 12.86 0.59 MR-VPC (Ours) 51.13 20.15 0.74 37.05 13.01 0.56 Baselines Vid2Seq 33.46 14.19 1.46 29.93 12.48 0.75 Vid2Seq (V) 36.33 16.79 0.79 28.87 12.38 0.58 VPCSum 23.92 15.11 0.65 24.33 15.84 1.54 Table 6: Results under the random-missing setting. the auxiliary modalities A and E (issue-1) and the effectiveness of our MVPC model frameowork. We notice that VLTinT and some earlier baselines do better in terms of METEOR on AcitivyNet than Vid2Seq and our models, but we contend that ours and Vid2Seq are better models for two reasons: (1) CIDEr is a more reasonable metric because it accounts for the importance of different n-grams and has shown higher consistency with human evaluation (Shi et al., 2022); (2) model-based metrics in \u00a7 5.1 and human study results in \u00a7 5.3 further corroborate the advantages of our models. Our MR-VPC model performs significantly better in modality-missing settings than previous SOTA models. Given the figures displayed in Table 5 and Table 6, MR-VPC yields the best performance in the video-only and random-missing setting with substantial margins over baselines including those specially trained for the video-only setting such as Vid2Seq (V) (Yang et al., 2023b), VPCSum (Liu and Wan, 2021), and Memory Transformer (Song et al., 2021). This suggests that MRVPC fulfills our objective of developing a robust VPC model capable of leveraging available auxiliary modalities while maintaining robustness even when they are missing in real-world scenarios. Our MR-VPC shows the best cross-dataset generalization performance on the video-only Charades dataset. To further examine the Model CIDEr BERTScore BARTScore MVPC 6.79 87.08 -4.56 MR-VPC 8.74 87.22 -4.47 Vid2Seq 4.74 86.83 -4.62 Vid2Seq (V) 6.01 87.00 -4.48 Table 7: Zero-shot evaluation results on Charades (the model weights are trained on ActivityNet Captions). Method Test Modalities Avg. 
V+E+A V+E V+A V WordKD 64.50 30.62 65.33 27.21 46.92 MASD 67.95 32.98 68.72 33.47 50.78 MASD+WiSE-FT 68.90 34.96 69.54 32.54 51.49 MR-VPC (Ours) 69.51 39.03 69.37 38.37 54.07 Table 8: Comparison with other robustness-oriented methods with different available modalities at test time on YouCook2. CIDEr metrics are reported. cross-dataset generalization capability, we assess the models trained on ActivityNet Captions on the test set of the Charades (Sigurdsson et al., 2016), where only the video modality is available. As the results listed in Table 7, MR-VPC outperforms baselines in the zero-shot scenario where domain shift and missing modality occur simultaneously, further validating the strength of our approach. Our MR-VPC beats the SOTA robustnessoriented training methods in classification problems. As shown in Table 8, MR-VPC remarkably outperforms the state-of-the-art solutions towards robustness to missing modality in classification problems, i.e., MASD and MASD+Wise-FT (McKinzie et al., 2023). This illustrates that our customized approaches for the VPC task make significant strides compared to simply incorporating existing techniques studied for other tasks previously. Besides, we observe that replacing the SeqKD with Word-KD leads to significant performance drops in all scenarios, which supports the rationality of using SeqKD in our DistillAM component. 4.3 Qualitative Results Besides the above quantitative results, we provide qualitative evidence to support the superiority of our models. First, we find that MVPC and Vid2Seq tend to produce degenerated captions in the modality-missing setting, whereas the prediction of MR-VPC remains almost unchanged, as exemplified by the instance given in Table 14 in Appendix H. Moreover, even in the modalitycomplete setting, the Vid2Seq and VLTinT baselines often predict concepts that are not Model YouCook2 ActivityNet Captions PPL \u2193 BERT \u2191 BART \u2191 PPL \u2193 BERT \u2191 BART \u2191 EMS \u2191 EMSref \u2191 VLTinT (Yamazaki et al., 2022b) 21.99 89.01 -3.91 30.97 88.03 -3.94 28.94 36.88 Vid2Seq (Yang et al., 2023b) 15.89 90.58 -3.08 24.68 88.71 -3.78 29.54 36.99 MVPC (Ours) 15.50 90.56 -3.08 18.77 88.98 -3.56 29.37 37.21 MR-VPC (Ours) 15.11 89.51 -3.49 17.17 88.85 -3.58 29.10 36.90 Table 9: The model-based metrics evaluated under the modality-complete setting. \u2191indicates higher is better and \u2193indicates lower is better. We highlight the best model in bold. We do not report EMScore on YouCook2 as the captions of YouCook2 are longer than the max length limit of CLIP, the backbone of the EMScore metric. Noise Type Low-Quality ASR ASR Sentence Deletion Event Deletion Boundary Perturbation Generated Boundary Metric CIDEr BERT BART CIDEr BERT BART CIDEr BERT BART CIDEr BERT BART CIDEr BERT BART Vid2Seq 60.39 90.35 -3.12 48.01 89.62 -3.31 68.25 90.58 -3.08 68.25 90.58 -3.08 68.25 90.58 -3.08 MVPC (Ours) 59.58 90.36 -3.13 48.95 89.66 -3.29 63.43 90.54 -3.11 72.60 90.57 -3.07 61.71 90.58 -3.07 MR-VPC (Ours) 63.69 90.63 -3.08 53.59 90.04 -3.24 70.72 90.85 -3.02 69.11 90.86 -3.03 67.02 90.84 -3.03 Table 10: The evaluation results under five forms of noise in auxiliary modalities. Group1 MVPC VLTinT Equal 56.0% 20.7% 23.3% Group2 MR-VPC VLTinT Equal 56.0% 18.7% 25.3% Table 11: The average percentage of human preferences. present in the video; in contrast, our MVPC and MR-VPC model produces fewer such hallucinations, as illustrated in Figure 5 in Appendix H. 
5 Further Evaluation 5.1 Evaluation with Model-Based Metrics Besides the n-gram-based metrics reported in \u00a7 4.2, we further compare our models with competitive baselines (Vid2Seq and VLTinT) using the following model-based metrics (details in Appendix C), as they align better with human preference (Shi et al., 2022): (1) Perplexity (PPL) for fluency; (2) BERTScore (Zhang et al., 2019) and BARTScore (Yuan et al., 2021) measuring prediction-reference similarity; (3) EMScore (Shi et al., 2022) for the matching extent of the prediction and the video frames and its extension EMSref. We present the results in Table 9 and find that our MVPC and MR-VPC obtain the best performance across most of these metrics. Notably, although VLTinT reaches the highest METEOR on ActivityNet, it falls behind our models and Vid2Seq on these metrics. We will further show the advantage of our models through human evaluation in \u00a7 5.3. 5.2 Generalization on Other Forms of Noise Besides completely missing, the auxiliary modalities in the real world may also be affected by other weaker forms of noise, such as variations in ASR quality between the training and test phases. We further test our models and VidSeq under five types of noise: lower ASR quality and sentence deletion for A; event deletion, boundary perturbation, and generated boundaries for E (details in Appendix F). We present the results in Table 10 and see that although these forms of noise are not seen during training, our MR-VPC shows the best robustness in most cases, which again substantiates the generalizability of our training strategies. We believe that we will achieve even better robustness to these types of noises if we consider them in the choice of the proxy noise functions \u02c6 NA and \u02c6 NE in DropAM. 5.3 Human Evaluation We conduct two groups of human evaluation, in which three annotators compare the captions generated by VLTinT and MVPC (or MR-VPC) in the modality-complete setting for 50 randomly sampled videos from the AcitivityNet Captions test set. They need to choose a caption showing higher consistency with the video content or mark that two captions are equally good (details in Appendix I). As shown in Table 11, our MVPC and MRVPC significantly surpass VLTinT in pair-wise comparison, which again proves their superiority. 6 Conclusion We present MR-VPC, a multimodal video paragraph captioning model capable of utilizing three input modalities (video, transcribed speech, and event boundaries) and keeping robust in the presence of missing modality. The MR-VPC framework comprises two key contributions: (1) the MVPC architecture, which seamlessly processes inputs from all three modalities in an end-to-end manner; (2) the incorporation of two training techniques, DropAM and DistillAM, which enhance the model\u2019s robustness when faced with missing modality. Through exhaustive experimental evaluation on YouCook2 and ActivityNet Captions datasets, we demonstrate the superiority of MR-VPC in various test scenarios, highlighting its practicality and efficacy in addressing the challenges of video paragraph captioning in real-world settings. Limitations We discuss the limitations of our work as follows. (1) Despite the outstanding performance of MRVPC in modality-missing settings, it slightly lags behind our vanilla MVPC in the modality-complete setting. 
This is comprehensible because the optimization of the regularization targets introduced in DropAM and DistillAM may conflict with the learning on modality-complete data to some extent. We will conduct more explorations to reduce this gap. (2) We primarily study the absence (discussed in most of the main text) and other forms of noise (studied in \u00a7 5.2) in two main auxiliary modalities, namely transcribed speech and event boundaries, which do not cover all possible harsh test conditions in the wild. For future work, we intend to investigate the robustness of VPC models to other forms of data noise, such as video frame blurring, for a more comprehensive evaluation. Ethics Statement We believe that our proposal would contribute to the robustness and security of video captioning systems deployed in the open-world environment, as the absence and quality reduction of auxiliary modalities are common in practice. Our proposal also applies to other multimodal natural language generation tasks, e.g., multimodal machine translation, on which we plan to conduct more studies in the future. Moreover, all pretrained models used in this work are publicly available, ensuring transparency and accessibility. Although we do not expect any direct negative consequences resulting from this paper, we hope to continue to build on our MR-VPC framework and develop stronger and safer multimodal VPC models in our future work." + }, + { + "url": "http://arxiv.org/abs/1804.00819v1", + "title": "End-to-End Dense Video Captioning with Masked Transformer", + "abstract": "Dense video captioning aims to generate text descriptions for all events in\nan untrimmed video. This involves both detecting and describing events.\nTherefore, all previous methods on dense video captioning tackle this problem\nby building two models, i.e. an event proposal and a captioning model, for\nthese two sub-problems. The models are either trained separately or in\nalternation. This prevents direct influence of the language description to the\nevent proposal, which is important for generating accurate descriptions. To\naddress this problem, we propose an end-to-end transformer model for dense\nvideo captioning. The encoder encodes the video into appropriate\nrepresentations. The proposal decoder decodes from the encoding with different\nanchors to form video event proposals. The captioning decoder employs a masking\nnetwork to restrict its attention to the proposal event over the encoding\nfeature. This masking network converts the event proposal to a differentiable\nmask, which ensures the consistency between the proposal and captioning during\ntraining. In addition, our model employs a self-attention mechanism, which\nenables the use of efficient non-recurrent structure during encoding and leads\nto performance improvements. We demonstrate the effectiveness of this\nend-to-end model on ActivityNet Captions and YouCookII datasets, where we\nachieved 10.12 and 6.58 METEOR score, respectively.", + "authors": "Luowei Zhou, Yingbo Zhou, Jason J. 
Corso, Richard Socher, Caiming Xiong", + "published": "2018-04-03", + "updated": "2018-04-03", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2206.12972v2", + "title": "VLCap: Vision-Language with Contrastive Learning for Coherent Video Paragraph Captioning", + "abstract": "In this paper, we leverage the human perceiving process, that involves vision\nand language interaction, to generate a coherent paragraph description of\nuntrimmed videos. We propose vision-language (VL) features consisting of two\nmodalities, i.e., (i) vision modality to capture global visual content of the\nentire scene and (ii) language modality to extract scene elements description\nof both human and non-human objects (e.g. animals, vehicles, etc), visual and\nnon-visual elements (e.g. relations, activities, etc). Furthermore, we propose\nto train our proposed VLCap under a contrastive learning VL loss. The\nexperiments and ablation studies on ActivityNet Captions and YouCookII datasets\nshow that our VLCap outperforms existing SOTA methods on both accuracy and\ndiversity metrics.", + "authors": "Kashu Yamazaki, Sang Truong, Khoa Vo, Michael Kidd, Chase Rainwater, Khoa Luu, Ngan Le", + "published": "2022-06-26", + "updated": "2022-08-06", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.13916v2", + "title": "Towards Good Practices for Missing Modality Robust Action Recognition", + "abstract": "Standard multi-modal models assume the use of the same modalities in training\nand inference stages. However, in practice, the environment in which\nmulti-modal models operate may not satisfy such assumption. As such, their\nperformances degrade drastically if any modality is missing in the inference\nstage. We ask: how can we train a model that is robust to missing modalities?\nThis paper seeks a set of good practices for multi-modal action recognition,\nwith a particular interest in circumstances where some modalities are not\navailable at an inference time. First, we study how to effectively regularize\nthe model during training (e.g., data augmentation). Second, we investigate on\nfusion methods for robustness to missing modalities: we find that\ntransformer-based fusion shows better robustness for missing modality than\nsummation or concatenation. Third, we propose a simple modular network,\nActionMAE, which learns missing modality predictive coding by randomly dropping\nmodality features and tries to reconstruct them with the remaining modality\nfeatures. Coupling these good practices, we build a model that is not only\neffective in multi-modal action recognition but also robust to modality\nmissing. Our model achieves the state-of-the-arts on multiple benchmarks and\nmaintains competitive performances even in missing modality scenarios. Codes\nare available at https://github.com/sangminwoo/ActionMAE.", + "authors": "Sangmin Woo, Sumin Lee, Yeonju Park, Muhammad Adi Nugroho, Changick Kim", + "published": "2022-11-25", + "updated": "2023-03-30", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2302.14115v2", + "title": "Vid2Seq: Large-Scale Pretraining of a Visual Language Model for Dense Video Captioning", + "abstract": "In this work, we introduce Vid2Seq, a multi-modal single-stage dense event\ncaptioning model pretrained on narrated videos which are readily-available at\nscale. 
The Vid2Seq architecture augments a language model with special time\ntokens, allowing it to seamlessly predict event boundaries and textual\ndescriptions in the same output sequence. Such a unified model requires\nlarge-scale training data, which is not available in current annotated\ndatasets. We show that it is possible to leverage unlabeled narrated videos for\ndense video captioning, by reformulating sentence boundaries of transcribed\nspeech as pseudo event boundaries, and using the transcribed speech sentences\nas pseudo event captions. The resulting Vid2Seq model pretrained on the\nYT-Temporal-1B dataset improves the state of the art on a variety of dense\nvideo captioning benchmarks including YouCook2, ViTT and ActivityNet Captions.\nVid2Seq also generalizes well to the tasks of video paragraph captioning and\nvideo clip captioning, and to few-shot settings. Our code is publicly available\nat https://antoyang.github.io/vid2seq.html.", + "authors": "Antoine Yang, Arsha Nagrani, Paul Hongsuck Seo, Antoine Miech, Jordi Pont-Tuset, Ivan Laptev, Josef Sivic, Cordelia Schmid", + "published": "2023-02-27", + "updated": "2023-03-21", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2304.08028v1", + "title": "MMANet: Margin-aware Distillation and Modality-aware Regularization for Incomplete Multimodal Learning", + "abstract": "Multimodal learning has shown great potentials in numerous scenes and\nattracts increasing interest recently. However, it often encounters the problem\nof missing modality data and thus suffers severe performance degradation in\npractice. To this end, we propose a general framework called MMANet to assist\nincomplete multimodal learning. It consists of three components: the deployment\nnetwork used for inference, the teacher network transferring comprehensive\nmultimodal information to the deployment network, and the regularization\nnetwork guiding the deployment network to balance weak modality combinations.\nSpecifically, we propose a novel margin-aware distillation (MAD) to assist the\ninformation transfer by weighing the sample contribution with the\nclassification uncertainty. This encourages the deployment network to focus on\nthe samples near decision boundaries and acquire the refined inter-class\nmargin. Besides, we design a modality-aware regularization (MAR) algorithm to\nmine the weak modality combinations and guide the regularization network to\ncalculate prediction loss for them. This forces the deployment network to\nimprove its representation ability for the weak modality combinations\nadaptively. Finally, extensive experiments on multimodal classification and\nsegmentation tasks demonstrate that our MMANet outperforms the state-of-the-art\nsignificantly. Code is available at: https://github.com/shicaiwei123/MMANet", + "authors": "Shicai Wei, Yang Luo, Chunbo Luo", + "published": "2023-04-17", + "updated": "2023-04-17", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.MM" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1812.05634v2", + "title": "Adversarial Inference for Multi-Sentence Video Description", + "abstract": "While significant progress has been made in the image captioning task, video\ndescription is still in its infancy due to the complex nature of video data.\nGenerating multi-sentence descriptions for long videos is even more\nchallenging. 
Among the main issues are the fluency and coherence of the\ngenerated descriptions, and their relevance to the video. Recently,\nreinforcement and adversarial learning based methods have been explored to\nimprove the image captioning models; however, both types of methods suffer from\na number of issues, e.g. poor readability and high redundancy for RL and\nstability issues for GANs. In this work, we instead propose to apply\nadversarial techniques during inference, designing a discriminator which\nencourages better multi-sentence video description. In addition, we find that a\nmulti-discriminator \"hybrid\" design, where each discriminator targets one\naspect of a description, leads to the best results. Specifically, we decouple\nthe discriminator to evaluate on three criteria: 1) visual relevance to the\nvideo, 2) language diversity and fluency, and 3) coherence across sentences.\nOur approach results in more accurate, diverse, and coherent multi-sentence\nvideo descriptions, as shown by automatic as well as human evaluation on the\npopular ActivityNet Captions dataset.", + "authors": "Jae Sung Park, Marcus Rohrbach, Trevor Darrell, Anna Rohrbach", + "published": "2018-12-13", + "updated": "2019-04-16", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2405.00348v1", + "title": "Practical Dataset Distillation Based on Deep Support Vectors", + "abstract": "Conventional dataset distillation requires significant computational\nresources and assumes access to the entire dataset, an assumption impractical\nas it presumes all data resides on a central server. In this paper, we focus on\ndataset distillation in practical scenarios with access to only a fraction of\nthe entire dataset. We introduce a novel distillation method that augments the\nconventional process by incorporating general model knowledge via the addition\nof Deep KKT (DKKT) loss. In practical settings, our approach showed improved\nperformance compared to the baseline distribution matching distillation method\non the CIFAR-10 dataset. Additionally, we present experimental evidence that\nDeep Support Vectors (DSVs) offer unique information to the original\ndistillation, and their integration results in enhanced performance.", + "authors": "Hyunho Lee, Junhoo Lee, Nojun Kwak", + "published": "2024-05-01", + "updated": "2024-05-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2401.15863v1", + "title": "Importance-Aware Adaptive Dataset Distillation", + "abstract": "Herein, we propose a novel dataset distillation method for constructing small\ninformative datasets that preserve the information of the large original\ndatasets. The development of deep learning models is enabled by the\navailability of large-scale datasets. Despite unprecedented success,\nlarge-scale datasets considerably increase the storage and transmission costs,\nresulting in a cumbersome model training process. Moreover, using raw data for\ntraining raises privacy and copyright concerns. To address these issues, a new\ntask named dataset distillation has been introduced, aiming to synthesize a\ncompact dataset that retains the essential information from the large original\ndataset. State-of-the-art (SOTA) dataset distillation methods have been\nproposed by matching gradients or network parameters obtained during training\non real and synthetic datasets. 
The contribution of different network\nparameters to the distillation process varies, and uniformly treating them\nleads to degraded distillation performance. Based on this observation, we\npropose an importance-aware adaptive dataset distillation (IADD) method that\ncan improve distillation performance by automatically assigning importance\nweights to different network parameters during distillation, thereby\nsynthesizing more robust distilled datasets. IADD demonstrates superior\nperformance over other SOTA dataset distillation methods based on parameter\nmatching on multiple benchmark datasets and outperforms them in terms of\ncross-architecture generalization. In addition, the analysis of self-adaptive\nweights demonstrates the effectiveness of IADD. Furthermore, the effectiveness\nof IADD is validated in a real-world medical application such as COVID-19\ndetection.", + "authors": "Guang Li, Ren Togo, Takahiro Ogawa, Miki Haseyama", + "published": "2024-01-29", + "updated": "2024-01-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2108.12905v1", + "title": "Lipschitz Continuity Guided Knowledge Distillation", + "abstract": "Knowledge distillation has become one of the most important model compression\ntechniques by distilling knowledge from larger teacher networks to smaller\nstudent ones. Although great success has been achieved by prior distillation\nmethods via delicately designing various types of knowledge, they overlook the\nfunctional properties of neural networks, which makes the process of applying\nthose techniques to new tasks unreliable and non-trivial. To alleviate such\nproblem, in this paper, we initially leverage Lipschitz continuity to better\nrepresent the functional characteristic of neural networks and guide the\nknowledge distillation process. In particular, we propose a novel Lipschitz\nContinuity Guided Knowledge Distillation framework to faithfully distill\nknowledge by minimizing the distance between two neural networks' Lipschitz\nconstants, which enables teacher networks to better regularize student networks\nand improve the corresponding performance. We derive an explainable\napproximation algorithm with an explicit theoretical derivation to address the\nNP-hard problem of calculating the Lipschitz constant. Experimental results\nhave shown that our method outperforms other benchmarks over several knowledge\ndistillation tasks (e.g., classification, segmentation and object detection) on\nCIFAR-100, ImageNet, and PASCAL VOC datasets.", + "authors": "Yuzhang Shang, Bin Duan, Ziliang Zong, Liqiang Nie, Yan Yan", + "published": "2021-08-29", + "updated": "2021-08-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1807.04705v2", + "title": "Non-asymptotic assisted distillation of quantum coherence", + "abstract": "We characterize the operational task of environment-assisted distillation of\nquantum coherence under different sets of free operations when only a finite\nsupply of copies of a given state is available. We first evaluate the one-shot\nassisted distillable coherence exactly, and introduce a semidefinite\nprogramming bound on it in terms of a smooth entropic quantity. 
We prove the\nbound to be tight for all systems in dimensions 2 and 3, which allows us to\nobtain computable expressions for the one-shot rate of distillation, establish\nan analytical expression for the best achievable fidelity of assisted\ndistillation for any finite number of copies, and fully solve the problem of\nasymptotic zero-error assisted distillation for qubit and qutrit systems. Our\ncharacterization shows that all relevant sets of free operations in the\nresource theory of coherence have exactly the same power in the task of\none-shot assisted coherence distillation, and furthermore resolves a conjecture\nregarding the additivity of coherence of assistance in dimension 3.", + "authors": "Bartosz Regula, Ludovico Lami, Alexander Streltsov", + "published": "2018-07-12", + "updated": "2018-10-16", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2305.12330v1", + "title": "Task-agnostic Distillation of Encoder-Decoder Language Models", + "abstract": "Finetuning pretrained language models (LMs) have enabled appealing\nperformance on a diverse array of tasks. The intriguing task-agnostic property\nhas driven a shifted focus from task-specific to task-agnostic distillation of\nLMs. While task-agnostic, compute-efficient, performance-preserved LMs can be\nyielded by task-agnostic distillation, previous studies mainly sit in\ndistillation of either encoder-only LMs (e.g., BERT) or decoder-only ones\n(e.g., GPT) yet largely neglect that distillation of encoder-decoder LMs (e.g.,\nT5) can posit very distinguished behaviors. Frustratingly, we discover that\nexisting task-agnostic distillation methods can fail to handle the distillation\nof encoder-decoder LMs. To the demand, we explore a few paths and uncover a\npath named as MiniEnD that successfully tackles the distillation of\nencoder-decoder LMs in a task-agnostic fashion. We examine MiniEnD on language\nunderstanding and abstractive summarization. The results showcase that MiniEnD\nis generally effective and is competitive compared to other alternatives. We\nfurther scale MiniEnD up to distillation of 3B encoder-decoder language models\nwith interpolated distillation. The results imply the opportunities and\nchallenges in distilling large language models (e.g., LLaMA).", + "authors": "Chen Zhang, Yang Yang, Jingang Wang, Dawei Song", + "published": "2023-05-21", + "updated": "2023-05-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2302.11472v1", + "title": "Distilling Calibrated Student from an Uncalibrated Teacher", + "abstract": "Knowledge distillation is a common technique for improving the performance of\na shallow student network by transferring information from a teacher network,\nwhich in general, is comparatively large and deep. These teacher networks are\npre-trained and often uncalibrated, as no calibration technique is applied to\nthe teacher model while training. Calibration of a network measures the\nprobability of correctness for any of its predictions, which is critical in\nhigh-risk domains. In this paper, we study how to obtain a calibrated student\nfrom an uncalibrated teacher. Our approach relies on the fusion of the\ndata-augmentation techniques, including but not limited to cutout, mixup, and\nCutMix, with knowledge distillation. 
We extend our approach beyond traditional\nknowledge distillation and find it suitable for Relational Knowledge\nDistillation and Contrastive Representation Distillation as well. The novelty\nof the work is that it provides a framework to distill a calibrated student\nfrom an uncalibrated teacher model without compromising the accuracy of the\ndistilled student. We perform extensive experiments to validate our approach on\nvarious datasets, including CIFAR-10, CIFAR-100, CINIC-10 and TinyImageNet, and\nobtained calibrated student models. We also observe robust performance of our\napproach while evaluating it on corrupted CIFAR-100C data.", + "authors": "Ishan Mishra, Sethu Vamsi Krishna, Deepak Mishra", + "published": "2023-02-22", + "updated": "2023-02-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2302.14643v1", + "title": "Graph-based Knowledge Distillation: A survey and experimental evaluation", + "abstract": "Graph, such as citation networks, social networks, and transportation\nnetworks, are prevalent in the real world. Graph Neural Networks (GNNs) have\ngained widespread attention for their robust expressiveness and exceptional\nperformance in various graph applications. However, the efficacy of GNNs is\nheavily reliant on sufficient data labels and complex network models, with the\nformer obtaining hardly and the latter computing costly. To address the labeled\ndata scarcity and high complexity of GNNs, Knowledge Distillation (KD) has been\nintroduced to enhance existing GNNs. This technique involves transferring the\nsoft-label supervision of the large teacher model to the small student model\nwhile maintaining prediction performance. This survey offers a comprehensive\noverview of Graph-based Knowledge Distillation methods, systematically\ncategorizing and summarizing them while discussing their limitations and future\ndirections. This paper first introduces the background of graph and KD. It then\nprovides a comprehensive summary of three types of Graph-based Knowledge\nDistillation methods, namely Graph-based Knowledge Distillation for deep neural\nnetworks (DKD), Graph-based Knowledge Distillation for GNNs (GKD), and\nSelf-Knowledge Distillation based Graph-based Knowledge Distillation (SKD).\nEach type is further divided into knowledge distillation methods based on the\noutput layer, middle layer, and constructed graph. Subsequently, various\nalgorithms' ideas are analyzed and compared, concluding with the advantages and\ndisadvantages of each algorithm supported by experimental results. In addition,\nthe applications of graph-based knowledge distillation in CV, NLP, RS, and\nother fields are listed. Finally, the graph-based knowledge distillation is\nsummarized and prospectively discussed. We have also released related resources\nat https://github.com/liujing1023/Graph-based-Knowledge-Distillation.", + "authors": "Jing Liu, Tongya Zheng, Guanzheng Zhang, Qinfen Hao", + "published": "2023-02-27", + "updated": "2023-02-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2304.05563v2", + "title": "Entanglement distillation in terms of Schmidt rank and matrix rank", + "abstract": "Entanglement distillation is a key task in quantum-information processing. In\nthis paper, we distill non-positive-partial-transpose (NPT) bipartite states of\nsome given Schmidt rank and matrix rank. 
We show that all bipartite states of\nSchmidt rank two are locally equivalent to classical-classical states, and all\nbipartite states of Schmidt rank three are 1-undistillable. Subsequently, we\nshow that low-rank B-irreducible NPT states are distillable for large-rank\nreduced density operators by proving low-rank B-irreducible NPT state whose\nrange contains a product vector is distillable. Eventually, we present an\nequivalent condition to distill $M\\times N$ bipartite states of rank\n$\\max\\{M,N\\}+1$.", + "authors": "Tianyi Ding, Lin Chen", + "published": "2023-04-12", + "updated": "2023-07-06", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.14827v1", + "title": "Sentence-Level or Token-Level? A Comprehensive Study on Knowledge Distillation", + "abstract": "Knowledge distillation, transferring knowledge from a teacher model to a\nstudent model, has emerged as a powerful technique in neural machine\ntranslation for compressing models or simplifying training targets. Knowledge\ndistillation encompasses two primary methods: sentence-level distillation and\ntoken-level distillation. In sentence-level distillation, the student model is\ntrained to align with the output of the teacher model, which can alleviate the\ntraining difficulty and give student model a comprehensive understanding of\nglobal structure. Differently, token-level distillation requires the student\nmodel to learn the output distribution of the teacher model, facilitating a\nmore fine-grained transfer of knowledge. Studies have revealed divergent\nperformances between sentence-level and token-level distillation across\ndifferent scenarios, leading to the confusion on the empirical selection of\nknowledge distillation methods. In this study, we argue that token-level\ndistillation, with its more complex objective (i.e., distribution), is better\nsuited for ``simple'' scenarios, while sentence-level distillation excels in\n``complex'' scenarios. To substantiate our hypothesis, we systematically\nanalyze the performance of distillation methods by varying the model size of\nstudent models, the complexity of text, and the difficulty of decoding\nprocedure. While our experimental results validate our hypothesis, defining the\ncomplexity level of a given scenario remains a challenging task. So we further\nintroduce a novel hybrid method that combines token-level and sentence-level\ndistillation through a gating mechanism, aiming to leverage the advantages of\nboth individual methods. Experiments demonstrate that the hybrid method\nsurpasses the performance of token-level or sentence-level distillation methods\nand the previous works by a margin, demonstrating the effectiveness of the\nproposed hybrid method.", + "authors": "Jingxuan Wei, Linzhuang Sun, Yichong Leng, Xu Tan, Bihui Yu, Ruifeng Guo", + "published": "2024-04-23", + "updated": "2024-04-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2302.09632v1", + "title": "HomoDistil: Homotopic Task-Agnostic Distillation of Pre-trained Transformers", + "abstract": "Knowledge distillation has been shown to be a powerful model compression\napproach to facilitate the deployment of pre-trained language models in\npractice. This paper focuses on task-agnostic distillation. It produces a\ncompact pre-trained model that can be easily fine-tuned on various tasks with\nsmall computational costs and memory footprints. 
Despite the practical\nbenefits, task-agnostic distillation is challenging. Since the teacher model\nhas a significantly larger capacity and stronger representation power than the\nstudent model, it is very difficult for the student to produce predictions that\nmatch the teacher's over a massive amount of open-domain training data. Such a\nlarge prediction discrepancy often diminishes the benefits of knowledge\ndistillation. To address this challenge, we propose Homotopic Distillation\n(HomoDistil), a novel task-agnostic distillation approach equipped with\niterative pruning. Specifically, we initialize the student model from the\nteacher model, and iteratively prune the student's neurons until the target\nwidth is reached. Such an approach maintains a small discrepancy between the\nteacher's and student's predictions throughout the distillation process, which\nensures the effectiveness of knowledge transfer. Extensive experiments\ndemonstrate that HomoDistil achieves significant improvements on existing\nbaselines.", + "authors": "Chen Liang, Haoming Jiang, Zheng Li, Xianfeng Tang, Bin Yin, Tuo Zhao", + "published": "2023-02-19", + "updated": "2023-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2109.14960v3", + "title": "Prune Your Model Before Distill It", + "abstract": "Knowledge distillation transfers the knowledge from a cumbersome teacher to a\nsmall student. Recent results suggest that the student-friendly teacher is more\nappropriate to distill since it provides more transferable knowledge. In this\nwork, we propose the novel framework, \"prune, then distill,\" that prunes the\nmodel first to make it more transferrable and then distill it to the student.\nWe provide several exploratory examples where the pruned teacher teaches better\nthan the original unpruned networks. We further show theoretically that the\npruned teacher plays the role of regularizer in distillation, which reduces the\ngeneralization error. Based on this result, we propose a novel neural network\ncompression scheme where the student network is formed based on the pruned\nteacher and then apply the \"prune, then distill\" strategy. The code is\navailable at https://github.com/ososos888/prune-then-distill", + "authors": "Jinhyuk Park, Albert No", + "published": "2021-09-30", + "updated": "2022-07-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2205.02399v1", + "title": "Spot-adaptive Knowledge Distillation", + "abstract": "Knowledge distillation (KD) has become a well established paradigm for\ncompressing deep neural networks. The typical way of conducting knowledge\ndistillation is to train the student network under the supervision of the\nteacher network to harness the knowledge at one or multiple spots (i.e.,\nlayers) in the teacher network. The distillation spots, once specified, will\nnot change for all the training samples, throughout the whole distillation\nprocess. In this work, we argue that distillation spots should be adaptive to\ntraining samples and distillation epochs. We thus propose a new distillation\nstrategy, termed spot-adaptive KD (SAKD), to adaptively determine the\ndistillation spots in the teacher network per sample, at every training\niteration during the whole distillation period. 
As SAKD actually focuses on\n\"where to distill\" instead of \"what to distill\" that is widely investigated by\nmost existing works, it can be seamlessly integrated into existing distillation\nmethods to further improve their performance. Extensive experiments with 10\nstate-of-the-art distillers are conducted to demonstrate the effectiveness of\nSAKD for improving their distillation performance, under both homogeneous and\nheterogeneous distillation settings. Code is available at\nhttps://github.com/zju-vipa/spot-adaptive-pytorch", + "authors": "Jie Song, Ying Chen, Jingwen Ye, Mingli Song", + "published": "2022-05-05", + "updated": "2022-05-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0202165v1", + "title": "Distinguishing locally of quantum states and the distillation of entanglement", + "abstract": "This paper try to probe the relation of distinguishing locally and\ndistillation of entanglement. The distinguishing information (DI) and the\nmaximal distinguishing information (MDI) of a set of pure states are defined.\nThe interpretation of distillation of entanglement in term of information is\ngiven. The relation between the maximal distinguishing information and\ndistillable entanglement is gained. As a application of this relation the\ndistillable entanglement of Bell-diagonal states is present.", + "authors": "ping-xing. chen, Cheng-zu Li", + "published": "2002-02-27", + "updated": "2002-02-27", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0303009v2", + "title": "Security bounds in Quantum Cryptography using d-level systems", + "abstract": "We analyze the security of quantum cryptography schemes for $d$-level systems\nusing 2 or $d+1$ maximally conjugated bases, under individual eavesdropping\nattacks based on cloning machines and measurement after the basis\nreconciliation. We consider classical advantage distillation protocols, that\nallow to extract a key even in situations where the mutual information between\nthe honest parties is smaller than the eavesdropper's information. In this\nscenario, advantage distillation protocols are shown to be as powerful as\nquantum distillation: key distillation is possible using classical techniques\nif and only if the corresponding state in the entanglement based protocol is\ndistillable.", + "authors": "Antonio Acin, Nicolas Gisin, Valerio Scarani", + "published": "2003-03-03", + "updated": "2003-11-03", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1504.05965v2", + "title": "Qutrit Magic State Distillation Tight in Some Directions", + "abstract": "Magic state distillation is a crucial component in the leading approaches to\nimplementing universal fault tolerant quantum computation, with existing\nprotocols for both qubit and higher dimensional systems. Early work focused on\ndetermining the region of distillable states for qubit protocols, yet\ncomparatively little is known about which states can be distilled and with what\ndistillable region for d>2. Here we focus on d=3 and present new four-qutrit\ndistillation schemes that improve upon the known distillable region, and\nachieve distillation tight to the boundary of undistillable states for some\nclasses of state. 
As a consequence of recent results, this implies that there\nis a family of quantum states that enable universality if and only if they\nexhibit contextuality with respect to stabilizer measurements. We also identify\na new routine whose fixed point is a magic state with maximal sum-negativity\ni.e., it is maximally non-stabilizer in a specific sense.", + "authors": "Hillary Dawkins, Mark Howard", + "published": "2015-04-22", + "updated": "2015-09-21", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1910.02551v3", + "title": "Soft-Label Dataset Distillation and Text Dataset Distillation", + "abstract": "Dataset distillation is a method for reducing dataset sizes by learning a\nsmall number of synthetic samples containing all the information of a large\ndataset. This has several benefits like speeding up model training, reducing\nenergy consumption, and reducing required storage space. Currently, each\nsynthetic sample is assigned a single `hard' label, and also, dataset\ndistillation can currently only be used with image data.\n We propose to simultaneously distill both images and their labels, thus\nassigning each synthetic sample a `soft' label (a distribution of labels). Our\nalgorithm increases accuracy by 2-4% over the original algorithm for several\nimage classification tasks. Using `soft' labels also enables distilled datasets\nto consist of fewer samples than there are classes as each sample can encode\ninformation for multiple classes. For example, training a LeNet model with 10\ndistilled images (one per class) results in over 96% accuracy on MNIST, and\nalmost 92% accuracy when trained on just 5 distilled images.\n We also extend the dataset distillation algorithm to distill sequential\ndatasets including texts. We demonstrate that text distillation outperforms\nother methods across multiple datasets. For example, models attain almost their\noriginal accuracy on the IMDB sentiment analysis task using just 20 distilled\nsentences.\n Our code can be found at\n$\\href{https://github.com/ilia10000/dataset-distillation}{\\text{https://github.com/ilia10000/dataset-distillation}}$.", + "authors": "Ilia Sucholutsky, Matthias Schonlau", + "published": "2019-10-06", + "updated": "2020-05-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2304.04615v1", + "title": "A Survey on Recent Teacher-student Learning Studies", + "abstract": "Knowledge distillation is a method of transferring the knowledge from a\ncomplex deep neural network (DNN) to a smaller and faster DNN, while preserving\nits accuracy. Recent variants of knowledge distillation include teaching\nassistant distillation, curriculum distillation, mask distillation, and\ndecoupling distillation, which aim to improve the performance of knowledge\ndistillation by introducing additional components or by changing the learning\nprocess. Teaching assistant distillation involves an intermediate model called\nthe teaching assistant, while curriculum distillation follows a curriculum\nsimilar to human education. Mask distillation focuses on transferring the\nattention mechanism learned by the teacher, and decoupling distillation\ndecouples the distillation loss from the task loss. 
Overall, these variants of\nknowledge distillation have shown promising results in improving the\nperformance of knowledge distillation.", + "authors": "Minghong Gao", + "published": "2023-04-10", + "updated": "2023-04-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2106.12591v1", + "title": "Magic State Distillation from Entangled States", + "abstract": "Magic can be distributed non-locally in many-body entangled states, such as\nthe low energy states of condensed matter systems. Using the Bravyi-Kitaev\nmagic state distillation protocol, we find that non-local magic is distillable\nand can improve the distillation outcome. We analyze a few explicit examples\nand show that spin squeezing can be used to convert non-distillable states into\ndistillable ones.\n Our analysis also suggests that the conventional product input states assumed\nby magic distillation protocols are extremely atypical among general states\nwith distillable magic. It further justifies the need for studying a diverse\nrange of entangled inputs that yield magic states with high probability.", + "authors": "Ning Bao, ChunJun Cao, Vincent Paul Su", + "published": "2021-06-23", + "updated": "2021-06-23", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/9809078v2", + "title": "A rigorous treatment of distillable entanglement", + "abstract": "The notion of distillable entanglement is one of the fundamental concepts of\nquantum information theory. Unfortunately, there is an apparent mismatch\nbetween the intuitive and rigorous definitions of distillable entanglement. To\nbe precise, the existing rigorous definitions impose the constraint that the\ndistilation protocol produce an output of constant dimension. It is therefore\nconceivable that this unnecessary constraint might have led to underestimation\nof the true distillable entanglement. We give a new definition of distillable\nentanglement which removes this constraint, but could conceivably overestimate\nthe true value. Since the definitions turn out to be equivalent, neither\nunderestimation nor overestimation is possible, and both definitions are\narguably correct", + "authors": "Eric M. Rains", + "published": "1998-09-24", + "updated": "1998-10-12", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2112.05638v2", + "title": "DistilCSE: Effective Knowledge Distillation For Contrastive Sentence Embeddings", + "abstract": "Large-scale contrastive learning models can learn very informative sentence\nembeddings, but are hard to serve online due to the huge model size. Therefore,\nthey often play the role of \"teacher\", transferring abilities to small\n\"student\" models through knowledge distillation. However, knowledge\ndistillation inevitably brings some drop in embedding effect. To tackle that,\nwe propose an effective knowledge distillation framework for contrastive\nsentence embeddings, termed DistilCSE. It first applies knowledge distillation\non a large amount of unlabeled data, and then fine-tunes student models through\ncontrastive learning on limited labeled data. To achieve better distillation\nresults, we further propose Contrastive Knowledge Distillation (CKD). 
CKD uses\nInfoNCE as the loss function in knowledge distillation, enhancing the objective\nconsistency among teacher model training, knowledge distillation, and student\nmodel fine-tuning. Extensive experiments show that student models trained with\nthe proposed DistilCSE and CKD suffer from little or even no performance\ndecrease and consistently outperform the corresponding counterparts of the same\nparameter size. Impressively, our 110M student model outperforms the latest\nstate-of-the-art model, i.e., Sentence-T5 (11B), with only 1% parameters and\n0.25% unlabeled data.", + "authors": "Chaochen Gao, Xing Wu, Peng Wang, Jue Wang, Liangjun Zang, Zhongyuan Wang, Songlin Hu", + "published": "2021-12-10", + "updated": "2023-01-30", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2307.12732v1", + "title": "CLIP-KD: An Empirical Study of Distilling CLIP Models", + "abstract": "CLIP has become a promising language-supervised visual pre-training framework\nand achieves excellent performance over a wide range of tasks. This paper aims\nto distill small CLIP models supervised by a large teacher CLIP model. We\npropose several distillation strategies, including relation, feature, gradient\nand contrastive paradigm, to examine the impact on CLIP distillation. We show\nthat the simplest feature mimicry with MSE loss performs best. Moreover,\ninteractive contrastive learning and relation-based distillation are also\ncritical in performance improvement. We apply the unified method to distill\nseveral student networks trained on 15 million (image, text) pairs.\nDistillation improves the student CLIP models consistently over zero-shot\nImageNet classification and cross-modal retrieval benchmarks. We hope our\nempirical study will become an important baseline for future CLIP distillation\nresearch. The code is available at \\url{https://github.com/winycg/CLIP-KD}.", + "authors": "Chuanguang Yang, Zhulin An, Libo Huang, Junyu Bi, Xinqiang Yu, Han Yang, Yongjun Xu", + "published": "2023-07-24", + "updated": "2023-07-24", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2011.14554v1", + "title": "A Selective Survey on Versatile Knowledge Distillation Paradigm for Neural Network Models", + "abstract": "This paper aims to provide a selective survey about knowledge\ndistillation(KD) framework for researchers and practitioners to take advantage\nof it for developing new optimized models in the deep neural network field. To\nthis end, we give a brief overview of knowledge distillation and some related\nworks including learning using privileged information(LUPI) and generalized\ndistillation(GD). Even though knowledge distillation based on the\nteacher-student architecture was initially devised as a model compression\ntechnique, it has found versatile applications over various frameworks.\n In this paper, we review the characteristics of knowledge distillation from\nthe hypothesis that the three important ingredients of knowledge distillation\nare distilled knowledge and loss,teacher-student paradigm, and the distillation\nprocess. In addition, we survey the versatility of the knowledge distillation\nby studying its direct applications and its usage in combination with other\ndeep learning paradigms. 
Finally we present some future works in knowledge\ndistillation including explainable knowledge distillation where the analytical\nanalysis of the performance gain is studied and the self-supervised learning\nwhich is a hot research topic in deep learning community.", + "authors": "Jeong-Hoe Ku, JiHun Oh, YoungYoon Lee, Gaurav Pooniwala, SangJeong Lee", + "published": "2020-11-30", + "updated": "2020-11-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2312.00739v1", + "title": "Adversarial Score Distillation: When score distillation meets GAN", + "abstract": "Existing score distillation methods are sensitive to classifier-free guidance\n(CFG) scale: manifested as over-smoothness or instability at small CFG scales,\nwhile over-saturation at large ones. To explain and analyze these issues, we\nrevisit the derivation of Score Distillation Sampling (SDS) and decipher\nexisting score distillation with the Wasserstein Generative Adversarial Network\n(WGAN) paradigm. With the WGAN paradigm, we find that existing score\ndistillation either employs a fixed sub-optimal discriminator or conducts\nincomplete discriminator optimization, resulting in the scale-sensitive issue.\nWe propose the Adversarial Score Distillation (ASD), which maintains an\noptimizable discriminator and updates it using the complete optimization\nobjective. Experiments show that the proposed ASD performs favorably in 2D\ndistillation and text-to-3D tasks against existing methods. Furthermore, to\nexplore the generalization ability of our WGAN paradigm, we extend ASD to the\nimage editing task, which achieves competitive results. The project page and\ncode are at https://github.com/2y7c3/ASD.", + "authors": "Min Wei, Jingkai Zhou, Junyao Sun, Xuesong Zhang", + "published": "2023-12-01", + "updated": "2023-12-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.04057v1", + "title": "Score identity Distillation: Exponentially Fast Distillation of Pretrained Diffusion Models for One-Step Generation", + "abstract": "We introduce Score identity Distillation (SiD), an innovative data-free\nmethod that distills the generative capabilities of pretrained diffusion models\ninto a single-step generator. SiD not only facilitates an exponentially fast\nreduction in Fr\\'echet inception distance (FID) during distillation but also\napproaches or even exceeds the FID performance of the original teacher\ndiffusion models. By reformulating forward diffusion processes as semi-implicit\ndistributions, we leverage three score-related identities to create an\ninnovative loss mechanism. This mechanism achieves rapid FID reduction by\ntraining the generator using its own synthesized images, eliminating the need\nfor real data or reverse-diffusion-based generation, all accomplished within\nsignificantly shortened generation time. Upon evaluation across four benchmark\ndatasets, the SiD algorithm demonstrates high iteration efficiency during\ndistillation and surpasses competing distillation approaches, whether they are\none-step or few-step, data-free, or dependent on training data, in terms of\ngeneration quality. This achievement not only redefines the benchmarks for\nefficiency and effectiveness in diffusion distillation but also in the broader\nfield of diffusion-based generation. 
Our PyTorch implementation will be\npublicly accessible on GitHub.", + "authors": "Mingyuan Zhou, Huangjie Zheng, Zhendong Wang, Mingzhang Yin, Hai Huang", + "published": "2024-04-05", + "updated": "2024-04-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2211.08071v2", + "title": "Knowledge Distillation for Detection Transformer with Consistent Distillation Points Sampling", + "abstract": "DETR is a novel end-to-end transformer architecture object detector, which\nsignificantly outperforms classic detectors when scaling up the model size. In\nthis paper, we focus on the compression of DETR with knowledge distillation.\nWhile knowledge distillation has been well-studied in classic detectors, there\nis a lack of researches on how to make it work effectively on DETR. We first\nprovide experimental and theoretical analysis to point out that the main\nchallenge in DETR distillation is the lack of consistent distillation points.\nDistillation points refer to the corresponding inputs of the predictions for\nstudent to mimic, and reliable distillation requires sufficient distillation\npoints which are consistent between teacher and student. Based on this\nobservation, we propose a general knowledge distillation paradigm for\nDETR(KD-DETR) with consistent distillation points sampling. Specifically, we\ndecouple detection and distillation tasks by introducing a set of specialized\nobject queries to construct distillation points. In this paradigm, we further\npropose a general-to-specific distillation points sampling strategy to explore\nthe extensibility of KD-DETR. Extensive experiments on different DETR\narchitectures with various scales of backbones and transformer layers validate\nthe effectiveness and generalization of KD-DETR. KD-DETR boosts the performance\nof DAB-DETR with ResNet-18 and ResNet-50 backbone to 41.4$\\%$, 45.7$\\%$ mAP,\nrespectively, which are 5.2$\\%$, 3.5$\\%$ higher than the baseline, and\nResNet-50 even surpasses the teacher model by $2.2\\%$.", + "authors": "Yu Wang, Xin Li, Shengzhao Wen, Fukui Yang, Wanping Zhang, Gang Zhang, Haocheng Feng, Junyu Han, Errui Ding", + "published": "2022-11-15", + "updated": "2022-11-16", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2206.08491v1", + "title": "Revisiting Self-Distillation", + "abstract": "Knowledge distillation is the procedure of transferring \"knowledge\" from a\nlarge model (the teacher) to a more compact one (the student), often being used\nin the context of model compression. When both models have the same\narchitecture, this procedure is called self-distillation. Several works have\nanecdotally shown that a self-distilled student can outperform the teacher on\nheld-out data. In this work, we systematically study self-distillation in a\nnumber of settings. We first show that even with a highly accurate teacher,\nself-distillation allows a student to surpass the teacher in all cases.\nSecondly, we revisit existing theoretical explanations of (self) distillation\nand identify contradicting examples, revealing possible drawbacks of these\nexplanations. Finally, we provide an alternative explanation for the dynamics\nof self-distillation through the lens of loss landscape geometry. 
We conduct\nextensive experiments to show that self-distillation leads to flatter minima,\nthereby resulting in better generalization.", + "authors": "Minh Pham, Minsu Cho, Ameya Joshi, Chinmay Hegde", + "published": "2022-06-17", + "updated": "2022-06-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0008047v2", + "title": "A semidefinite program for distillable entanglement", + "abstract": "We show that the maximum fidelity obtained by a p.p.t. distillation protocol\nis given by the solution to a certain semidefinite program. This gives a number\nof new lower and upper bounds on p.p.t. distillable entanglement (and thus new\nupper bounds on 2-locally distillable entanglement). In the presence of\nsymmetry, the semidefinite program simplifies considerably, becoming a linear\nprogram in the case of isotropic and Werner states. Using these techniques, we\ndetermine the p.p.t. distillable entanglement of asymmetric Werner states and\n``maximally correlated'' states. We conclude with a discussion of possible\napplications of semidefinite programming to quantum codes and 1-local\ndistillation.", + "authors": "Eric M. Rains", + "published": "2000-08-10", + "updated": "2001-04-12", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2310.18628v2", + "title": "Personalised Distillation: Empowering Open-Sourced LLMs with Adaptive Learning for Code Generation", + "abstract": "With the rise of powerful closed-sourced LLMs (ChatGPT, GPT-4), there are\nincreasing interests in distilling the capabilies of close-sourced LLMs to\nsmaller open-sourced LLMs. Previous distillation methods usually prompt ChatGPT\nto generate a set of instructions and answers, for the student model to learn.\nHowever, such standard distillation approach neglects the merits and conditions\nof the student model. Inspired by modern teaching principles, we design a\npersonalised distillation process, in which the student attempts to solve a\ntask first, then the teacher provides an adaptive refinement for the student to\nimprove. Instead of feeding the student with teacher's prior, personalised\ndistillation enables personalised learning for the student model, as it only\nlearns on examples it makes mistakes upon and learns to improve its own\nsolution. On code generation, personalised distillation consistently\noutperforms standard distillation with only one third of the data. With only\n2.5-3K personalised examples that incur a data-collection cost of 4-6$, we\nboost CodeGen-mono-16B by 7% to achieve 36.4% pass@1 and StarCoder by 12.2% to\nachieve 45.8% pass@1 on HumanEval.", + "authors": "Hailin Chen, Amrita Saha, Steven Hoi, Shafiq Joty", + "published": "2023-10-28", + "updated": "2024-01-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2311.09874v1", + "title": "Experimental virtual distillation of entanglement and coherence", + "abstract": "Noise is in general inevitable and detrimental to practical and useful\nquantum communication and computation. Under the resource theory framework,\nresource distillation serves as a generic tool to overcome the effect of noise.\nYet, conventional resource distillation protocols generally require operations\non multi-copies of resource states, and strong limitations exist that restrict\ntheir practical utilities. 
Recently, by relaxing the setting of resource\ndistillation to only approximating the measurement statistics instead of the\nquantum state, a resource-frugal protocol, virtual resource distillation, is\nproposed, which allows more effective distillation of noisy resources. Here, we\nreport its experimental implementation on a four-qubit photonic quantum system\nfor the distillation of quantum coherence (up to dimension 4) and bipartite\nentanglement. We show the virtual distillation of the maximal superposed state\nof dimension four from the state of dimension two, an impossible task in\nconventional coherence distillation. Furthermore, we demonstrate the virtual\ndistillation of entanglement with operations acting only on a single copy of\nthe noisy EPR pair and showcase the quantum teleportation task using the\nvirtually distilled EPR pair with a significantly improved fidelity of the\nteleported state. These results illustrate the feasibility of the virtual\nresource distillation method and pave the way for accurate manipulation of\nquantum resources with noisy quantum hardware.", + "authors": "Ting Zhang, Yukun Zhang, Lu Liu, Xiao-Xu Fang, Qian-Xi Zhang, Xiao Yuan, He Lu", + "published": "2023-11-16", + "updated": "2023-11-16", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.00264v1", + "title": "DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation", + "abstract": "Dataset distillation aims to compress a training dataset by creating a small\nnumber of informative synthetic samples such that neural networks trained on\nthem perform as well as those trained on the original training dataset. Current\ntext dataset distillation methods create each synthetic sample as a sequence of\nword embeddings instead of a text to apply gradient-based optimization;\nhowever, such embedding-level distilled datasets cannot be used for training\nother models whose word embedding weights are different from the model used for\ndistillation. To address this issue, we propose a novel text dataset\ndistillation approach, called Distilling dataset into Language Model (DiLM),\nwhich trains a language model to generate informative synthetic training\nsamples as text data, instead of directly optimizing synthetic samples. We\nevaluated DiLM on various text classification datasets and showed that\ndistilled synthetic datasets from DiLM outperform those from current coreset\nselection methods. DiLM achieved remarkable generalization performance in\ntraining different types of models and in-context learning of large language\nmodels. Our code will be available at https://github.com/arumaekawa/DiLM.", + "authors": "Aru Maekawa, Satoshi Kosugi, Kotaro Funakoshi, Manabu Okumura", + "published": "2024-03-30", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2208.08840v1", + "title": "Mind the Gap in Distilling StyleGANs", + "abstract": "StyleGAN family is one of the most popular Generative Adversarial Networks\n(GANs) for unconditional generation. Despite its impressive performance, its\nhigh demand on storage and computation impedes their deployment on\nresource-constrained devices. This paper provides a comprehensive study of\ndistilling from the popular StyleGAN-like architecture. 
Our key insight is that\nthe main challenge of StyleGAN distillation lies in the output discrepancy\nissue, where the teacher and student model yield different outputs given the\nsame input latent code. Standard knowledge distillation losses typically fail\nunder this heterogeneous distillation scenario. We conduct thorough analysis\nabout the reasons and effects of this discrepancy issue, and identify that the\nmapping network plays a vital role in determining semantic information of\ngenerated images. Based on this finding, we propose a novel initialization\nstrategy for the student model, which can ensure the output consistency to the\nmaximum extent. To further enhance the semantic consistency between the teacher\nand student model, we present a latent-direction-based distillation loss that\npreserves the semantic relations in latent space. Extensive experiments\ndemonstrate the effectiveness of our approach in distilling StyleGAN2 and\nStyleGAN3, outperforming existing GAN distillation methods by a large margin.", + "authors": "Guodong Xu, Yuenan Hou, Ziwei Liu, Chen Change Loy", + "published": "2022-08-18", + "updated": "2022-08-18", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.06170v1", + "title": "CLIP-Embed-KD: Computationally Efficient Knowledge Distillation Using Embeddings as Teachers", + "abstract": "Contrastive Language-Image Pre-training (CLIP) has been shown to improve\nzero-shot generalization capabilities of language and vision models. In this\npaper, we extend CLIP for efficient knowledge distillation, by utilizing\nembeddings as teachers. Typical knowledge distillation frameworks require\nrunning forward passes through a teacher model, which is often prohibitive in\nthe case of billion or trillion parameter teachers. In these cases, using only\nthe embeddings of the teacher models to guide the distillation can yield\nsignificant computational savings. Our preliminary findings show that\nCLIP-based knowledge distillation with embeddings can outperform full scale\nknowledge distillation using $9\\times$ less memory and $8\\times$ less training\ntime. Code available at: https://github.com/lnairGT/CLIP-Distillation/", + "authors": "Lakshmi Nair", + "published": "2024-04-09", + "updated": "2024-04-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2304.14800v1", + "title": "Multi-to-Single Knowledge Distillation for Point Cloud Semantic Segmentation", + "abstract": "3D point cloud semantic segmentation is one of the fundamental tasks for\nenvironmental understanding. Although significant progress has been made in\nrecent years, the performance of classes with few examples or few points is\nstill far from satisfactory. In this paper, we propose a novel multi-to-single\nknowledge distillation framework for the 3D point cloud semantic segmentation\ntask to boost the performance of those hard classes. Instead of fusing all the\npoints of multi-scans directly, only the instances that belong to the\npreviously defined hard classes are fused. To effectively and sufficiently\ndistill valuable knowledge from multi-scans, we leverage a multilevel\ndistillation framework, i.e., feature representation distillation, logit\ndistillation, and affinity distillation. 
We further develop a novel\ninstance-aware affinity distillation algorithm for capturing high-level\nstructural knowledge to enhance the distillation efficacy for hard classes.\nFinally, we conduct experiments on the SemanticKITTI dataset, and the results\non both the validation and test sets demonstrate that our method yields\nsubstantial improvements compared with the baseline method. The code is\navailable at \\Url{https://github.com/skyshoumeng/M2SKD}.", + "authors": "Shoumeng Qiu, Feng Jiang, Haiqiang Zhang, Xiangyang Xue, Jian Pu", + "published": "2023-04-28", + "updated": "2023-04-28", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2301.01615v2", + "title": "StereoDistill: Pick the Cream from LiDAR for Distilling Stereo-based 3D Object Detection", + "abstract": "In this paper, we propose a cross-modal distillation method named\nStereoDistill to narrow the gap between the stereo and LiDAR-based approaches\nvia distilling the stereo detectors from the superior LiDAR model at the\nresponse level, which is usually overlooked in 3D object detection\ndistillation. The key designs of StereoDistill are: the X-component Guided\nDistillation~(XGD) for regression and the Cross-anchor Logit Distillation~(CLD)\nfor classification. In XGD, instead of empirically adopting a threshold to\nselect the high-quality teacher predictions as soft targets, we decompose the\npredicted 3D box into sub-components and retain the corresponding part for\ndistillation if the teacher component pilot is consistent with ground truth to\nlargely boost the number of positive predictions and alleviate the mimicking\ndifficulty of the student model. For CLD, we aggregate the probability\ndistribution of all anchors at the same position to encourage the highest\nprobability anchor rather than individually distill the distribution at the\nanchor level. Finally, our StereoDistill achieves state-of-the-art results for\nstereo-based 3D detection on the KITTI test benchmark and extensive experiments\non KITTI and Argoverse Dataset validate the effectiveness.", + "authors": "Zhe Liu, Xiaoqing Ye, Xiao Tan, Errui Ding, Xiang Bai", + "published": "2023-01-04", + "updated": "2023-01-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2103.16367v1", + "title": "Complementary Relation Contrastive Distillation", + "abstract": "Knowledge distillation aims to transfer representation ability from a teacher\nmodel to a student model. Previous approaches focus on either individual\nrepresentation distillation or inter-sample similarity preservation. While we\nargue that the inter-sample relation conveys abundant information and needs to\nbe distilled in a more effective way. In this paper, we propose a novel\nknowledge distillation method, namely Complementary Relation Contrastive\nDistillation (CRCD), to transfer the structural knowledge from the teacher to\nthe student. Specifically, we estimate the mutual relation in an anchor-based\nway and distill the anchor-student relation under the supervision of its\ncorresponding anchor-teacher relation. 
To make it more robust, mutual relations\nare modeled by two complementary elements: the feature and its gradient.\nFurthermore, the low bound of mutual information between the anchor-teacher\nrelation distribution and the anchor-student relation distribution is maximized\nvia relation contrastive loss, which can distill both the sample representation\nand the inter-sample relations. Experiments on different benchmarks demonstrate\nthe effectiveness of our proposed CRCD.", + "authors": "Jinguo Zhu, Shixiang Tang, Dapeng Chen, Shijie Yu, Yakun Liu, Aijun Yang, Mingzhe Rong, Xiaohua Wang", + "published": "2021-03-29", + "updated": "2021-03-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1607.04311v1", + "title": "Defensive Distillation is Not Robust to Adversarial Examples", + "abstract": "We show that defensive distillation is not secure: it is no more resistant to\ntargeted misclassification attacks than unprotected neural networks.", + "authors": "Nicholas Carlini, David Wagner", + "published": "2016-07-14", + "updated": "2016-07-14", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2306.06629v1", + "title": "GKD: A General Knowledge Distillation Framework for Large-scale Pre-trained Language Model", + "abstract": "Currently, the reduction in the parameter scale of large-scale pre-trained\nlanguage models (PLMs) through knowledge distillation has greatly facilitated\ntheir widespread deployment on various devices. However, the deployment of\nknowledge distillation systems faces great challenges in real-world\nindustrial-strength applications, which require the use of complex distillation\nmethods on even larger-scale PLMs (over 10B), limited by memory on GPUs and the\nswitching of methods. To overcome these challenges, we propose GKD, a general\nknowledge distillation framework that supports distillation on larger-scale\nPLMs using various distillation methods. With GKD, developers can build larger\ndistillation models on memory-limited GPUs and easily switch and combine\ndifferent distillation methods within a single framework. Experimental results\nshow that GKD can support the distillation of at least 100B-scale PLMs and 25\nmainstream methods on 8 NVIDIA A100 (40GB) GPUs.", + "authors": "Shicheng Tan, Weng Lam Tam, Yuanchun Wang, Wenwen Gong, Yang Yang, Hongyin Tang, Keqing He, Jiahao Liu, Jingang Wang, Shu Zhao, Peng Zhang, Jie Tang", + "published": "2023-06-11", + "updated": "2023-06-11", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/0908.2142v1", + "title": "Distillation of Bell states in open systems", + "abstract": "In this work we review the entire classification of 2x2 distillable states\nfor protocols with a finite numbers of copies. We show a distillation protocol\nthat allows to distill Bell states with non zero probability at any time for an\ninitial singlet in vacuum. It is shown that the same protocol used in non zero\nthermal baths yields a considerable recovering of entanglement.", + "authors": "E. Isasi, D. 
Mundarain", + "published": "2009-08-14", + "updated": "2009-08-14", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2205.16004v3", + "title": "What Knowledge Gets Distilled in Knowledge Distillation?", + "abstract": "Knowledge distillation aims to transfer useful information from a teacher\nnetwork to a student network, with the primary goal of improving the student's\nperformance for the task at hand. Over the years, there has a been a deluge of\nnovel techniques and use cases of knowledge distillation. Yet, despite the\nvarious improvements, there seems to be a glaring gap in the community's\nfundamental understanding of the process. Specifically, what is the knowledge\nthat gets distilled in knowledge distillation? In other words, in what ways\ndoes the student become similar to the teacher? Does it start to localize\nobjects in the same way? Does it get fooled by the same adversarial samples?\nDoes its data invariance properties become similar? Our work presents a\ncomprehensive study to try to answer these questions. We show that existing\nmethods can indeed indirectly distill these properties beyond improving task\nperformance. We further study why knowledge distillation might work this way,\nand show that our findings have practical implications as well.", + "authors": "Utkarsh Ojha, Yuheng Li, Anirudh Sundara Rajan, Yingyu Liang, Yong Jae Lee", + "published": "2022-05-31", + "updated": "2023-11-06", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2206.12370v2", + "title": "Mixed Sample Augmentation for Online Distillation", + "abstract": "Mixed Sample Regularization (MSR), such as MixUp or CutMix, is a powerful\ndata augmentation strategy to generalize convolutional neural networks.\nPrevious empirical analysis has illustrated an orthogonal performance gain\nbetween MSR and conventional offline Knowledge Distillation (KD). To be more\nspecific, student networks can be enhanced with the involvement of MSR in the\ntraining stage of sequential distillation. Yet, the interplay between MSR and\nonline knowledge distillation, where an ensemble of peer students learn\nmutually from each other, remains unexplored. To bridge the gap, we make the\nfirst attempt at incorporating CutMix into online distillation, where we\nempirically observe a significant improvement. Encouraged by this fact, we\npropose an even stronger MSR specifically for online distillation, named as\nCut\\textsuperscript{n}Mix. Furthermore, a novel online distillation framework\nis designed upon Cut\\textsuperscript{n}Mix, to enhance the distillation with\nfeature level mutual learning and a self-ensemble teacher. 
Comprehensive\nevaluations on CIFAR10 and CIFAR100 with six network architectures show that\nour approach can consistently outperform state-of-the-art distillation methods.", + "authors": "Yiqing Shen, Liwu Xu, Yuzhe Yang, Yaqian Li, Yandong Guo", + "published": "2022-06-24", + "updated": "2023-03-02", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0312123v2", + "title": "Many copies may be required for entanglement distillation", + "abstract": "A mixed quantum state shared between two parties is said to be distillable\nif, by means of a protocol involving only local quantum operations and\nclassical communication, the two parties can transform some number of copies of\nthat state into a single shared pair of qubits having high fidelity with a\nmaximally entangled state state. In this paper it is proved that there exist\nstates that are distillable, but for which an arbitrarily large number of\ncopies is required before any distillation procedure can produce a shared pair\nof qubits with even a small amount of entanglement. Specifically, for every\npositive integer n there exists a state that is distillable, but given n or\nfewer copies of that state every distillation procedure outputting a single\nshared pair of qubits will output those qubits in a separable state.\nEssentially all previous examples of states proved to be distillable were such\nthat some distillation procedure could output an entangled pair of qubits given\na single copy of the state in question.", + "authors": "John Watrous", + "published": "2003-12-15", + "updated": "2004-05-31", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/0908.0836v3", + "title": "Bound States for Magic State Distillation in Fault-Tolerant Quantum Computation", + "abstract": "Magic state distillation is an important primitive in fault-tolerant quantum\ncomputation. The magic states are pure non-stabilizer states which can be\ndistilled from certain mixed non-stabilizer states via Clifford group\noperations alone. Because of the Gottesman-Knill theorem, mixtures of Pauli\neigenstates are not expected to be magic state distillable, but it has been an\nopen question whether all mixed states outside this set may be distilled. In\nthis Letter we show that, when resources are finitely limited, non-distillable\nstates exist outside the stabilizer octahedron. In analogy with the bound\nentangled states, which arise in entanglement theory, we call such states bound\nstates for magic state distillation.", + "authors": "Earl T. Campbell, Dan E. Browne", + "published": "2009-08-06", + "updated": "2010-02-01", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0305188v1", + "title": "Dynamics of Distillability", + "abstract": "The time evolution of a maximally entangled bipartite systems is presented in\nthis paper. The distillability criterion is given in terms of Kraus operators.\nUsing the criterion, we discuss the distillability of $2\\times 2$ and $n\\times\nn (n>2)$ systems in their evolution process. There are two distinguished\nprocesses, dissipation and decoherence, which may destroy the distillability.\nWe discuss the effects of those processes on distillability in details.", + "authors": "W. Wu, W. Wang, X. X. 
Yi", + "published": "2003-05-30", + "updated": "2003-05-30", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2004.03097v1", + "title": "Towards Non-task-specific Distillation of BERT via Sentence Representation Approximation", + "abstract": "Recently, BERT has become an essential ingredient of various NLP deep models\ndue to its effectiveness and universal-usability. However, the online\ndeployment of BERT is often blocked by its large-scale parameters and high\ncomputational cost. There are plenty of studies showing that the knowledge\ndistillation is efficient in transferring the knowledge from BERT into the\nmodel with a smaller size of parameters. Nevertheless, current BERT\ndistillation approaches mainly focus on task-specified distillation, such\nmethodologies lead to the loss of the general semantic knowledge of BERT for\nuniversal-usability. In this paper, we propose a sentence representation\napproximating oriented distillation framework that can distill the pre-trained\nBERT into a simple LSTM based model without specifying tasks. Consistent with\nBERT, our distilled model is able to perform transfer learning via fine-tuning\nto adapt to any sentence-level downstream task. Besides, our model can further\ncooperate with task-specific distillation procedures. The experimental results\non multiple NLP tasks from the GLUE benchmark show that our approach\noutperforms other task-specific distillation methods or even much larger\nmodels, i.e., ELMO, with efficiency well-improved.", + "authors": "Bowen Wu, Huan Zhang, Mengyuan Li, Zongsheng Wang, Qihang Feng, Junhong Huang, Baoxun Wang", + "published": "2020-04-07", + "updated": "2020-04-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2303.05958v1", + "title": "Robust Knowledge Distillation from RNN-T Models With Noisy Training Labels Using Full-Sum Loss", + "abstract": "This work studies knowledge distillation (KD) and addresses its constraints\nfor recurrent neural network transducer (RNN-T) models. In hard distillation, a\nteacher model transcribes large amounts of unlabelled speech to train a student\nmodel. Soft distillation is another popular KD method that distills the output\nlogits of the teacher model. Due to the nature of RNN-T alignments, applying\nsoft distillation between RNN-T architectures having different posterior\ndistributions is challenging. In addition, bad teachers having high\nword-error-rate (WER) reduce the efficacy of KD. We investigate how to\neffectively distill knowledge from variable quality ASR teachers, which has not\nbeen studied before to the best of our knowledge. We show that a sequence-level\nKD, full-sum distillation, outperforms other distillation methods for RNN-T\nmodels, especially for bad teachers. We also propose a variant of full-sum\ndistillation that distills the sequence discriminative knowledge of the teacher\nleading to further improvement in WER. 
We conduct experiments on public\ndatasets namely SpeechStew and LibriSpeech, and on in-house production data.", + "authors": "Mohammad Zeineldeen, Kartik Audhkhasi, Murali Karthick Baskar, Bhuvana Ramabhadran", + "published": "2023-03-10", + "updated": "2023-03-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.SD", + "eess.AS", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1905.09747v2", + "title": "Adversarially Robust Distillation", + "abstract": "Knowledge distillation is effective for producing small, high-performance\nneural networks for classification, but these small networks are vulnerable to\nadversarial attacks. This paper studies how adversarial robustness transfers\nfrom teacher to student during knowledge distillation. We find that a large\namount of robustness may be inherited by the student even when distilled on\nonly clean images. Second, we introduce Adversarially Robust Distillation (ARD)\nfor distilling robustness onto student networks. In addition to producing small\nmodels with high test accuracy like conventional distillation, ARD also passes\nthe superior robustness of large networks onto the student. In our experiments,\nwe find that ARD student models decisively outperform adversarially trained\nnetworks of identical architecture in terms of robust accuracy, surpassing\nstate-of-the-art methods on standard robustness benchmarks. Finally, we adapt\nrecent fast adversarial training methods to ARD for accelerated robust\ndistillation.", + "authors": "Micah Goldblum, Liam Fowl, Soheil Feizi, Tom Goldstein", + "published": "2019-05-23", + "updated": "2019-12-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2203.11932v1", + "title": "Dataset Distillation by Matching Training Trajectories", + "abstract": "Dataset distillation is the task of synthesizing a small dataset such that a\nmodel trained on the synthetic set will match the test accuracy of the model\ntrained on the full dataset. In this paper, we propose a new formulation that\noptimizes our distilled data to guide networks to a similar state as those\ntrained on real data across many training steps. Given a network, we train it\nfor several iterations on our distilled data and optimize the distilled data\nwith respect to the distance between the synthetically trained parameters and\nthe parameters trained on real data. To efficiently obtain the initial and\ntarget network parameters for large-scale datasets, we pre-compute and store\ntraining trajectories of expert networks trained on the real dataset. Our\nmethod handily outperforms existing methods and also allows us to distill\nhigher-resolution visual data.", + "authors": "George Cazenavette, Tongzhou Wang, Antonio Torralba, Alexei A. Efros, Jun-Yan Zhu", + "published": "2022-03-22", + "updated": "2022-03-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1912.12630v1", + "title": "Real-time Policy Distillation in Deep Reinforcement Learning", + "abstract": "Policy distillation in deep reinforcement learning provides an effective way\nto transfer control policies from a larger network to a smaller untrained\nnetwork without a significant degradation in performance. 
However, policy\ndistillation is underexplored in deep reinforcement learning, and existing\napproaches are computationally inefficient, resulting in a long distillation\ntime. In addition, the effectiveness of the distillation process is still\nlimited to the model capacity. We propose a new distillation mechanism, called\nreal-time policy distillation, in which training the teacher model and\ndistilling the policy to the student model occur simultaneously. Accordingly,\nthe teacher's latest policy is transferred to the student model in real time.\nThis reduces the distillation time to half the original time or even less and\nalso makes it possible for extremely small student models to learn skills at\nthe expert level. We evaluated the proposed algorithm in the Atari 2600 domain.\nThe results show that our approach can achieve full distillation in most games,\neven with compression ratios up to 1.7%.", + "authors": "Yuxiang Sun, Pooyan Fazli", + "published": "2019-12-29", + "updated": "2019-12-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2006.01683v1", + "title": "Channel Distillation: Channel-Wise Attention for Knowledge Distillation", + "abstract": "Knowledge distillation is to transfer the knowledge from the data learned by\nthe teacher network to the student network, so that the student has the\nadvantage of less parameters and less calculations, and the accuracy is close\nto the teacher. In this paper, we propose a new distillation method, which\ncontains two transfer distillation strategies and a loss decay strategy. The\nfirst transfer strategy is based on channel-wise attention, called Channel\nDistillation (CD). CD transfers the channel information from the teacher to the\nstudent. The second is Guided Knowledge Distillation (GKD). Unlike Knowledge\nDistillation (KD), which allows the student to mimic each sample's prediction\ndistribution of the teacher, GKD only enables the student to mimic the correct\noutput of the teacher. The last part is Early Decay Teacher (EDT). During the\ntraining process, we gradually decay the weight of the distillation loss. The\npurpose is to enable the student to gradually control the optimization rather\nthan the teacher. Our proposed method is evaluated on ImageNet and CIFAR100. On\nImageNet, we achieve 27.68% of top-1 error with ResNet18, which outperforms\nstate-of-the-art methods. On CIFAR100, we achieve surprising result that the\nstudent outperforms the teacher. Code is available at\nhttps://github.com/zhouzaida/channel-distillation.", + "authors": "Zaida Zhou, Chaoran Zhuge, Xinwei Guan, Wen Liu", + "published": "2020-06-02", + "updated": "2020-06-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2011.09969v1", + "title": "Neural network algorithm and its application in reactive distillation", + "abstract": "Reactive distillation is a special distillation technology based on the\ncoupling of chemical reaction and distillation. It has the characteristics of\nlow energy consumption and high separation efficiency. 
However, because the\ncombination of reaction and separation produces highly nonlinear robust\nbehavior, the control and optimization of the reactive distillation process\ncannot use conventional methods, but must rely on neural network algorithms.\nThis paper briefly describes the characteristics and research progress of\nreactive distillation technology and neural network algorithms, and summarizes\nthe application of neural network algorithms in reactive distillation, aiming\nto provide reference for the development and innovation of industry technology.", + "authors": "Huihui Wang, Ruyang Mo", + "published": "2020-11-16", + "updated": "2020-11-16", + "primary_cat": "cs.NE", + "cats": [ + "cs.NE", + "cs.LG", + "I.2.8" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1812.00249v1", + "title": "On Compressing U-net Using Knowledge Distillation", + "abstract": "We study the use of knowledge distillation to compress the U-net\narchitecture. We show that, while standard distillation is not sufficient to\nreliably train a compressed U-net, introducing other regularization methods,\nsuch as batch normalization and class re-weighting, in knowledge distillation\nsignificantly improves the training process. This allows us to compress a U-net\nby over 1000x, i.e., to 0.1% of its original number of parameters, at a\nnegligible decrease in performance.", + "authors": "Karttikeya Mangalam, Mathieu Salzamann", + "published": "2018-12-01", + "updated": "2018-12-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2307.08436v1", + "title": "DOT: A Distillation-Oriented Trainer", + "abstract": "Knowledge distillation transfers knowledge from a large model to a small one\nvia task and distillation losses. In this paper, we observe a trade-off between\ntask and distillation losses, i.e., introducing distillation loss limits the\nconvergence of task loss. We believe that the trade-off results from the\ninsufficient optimization of distillation loss. The reason is: The teacher has\na lower task loss than the student, and a lower distillation loss drives the\nstudent more similar to the teacher, then a better-converged task loss could be\nobtained. To break the trade-off, we propose the Distillation-Oriented Trainer\n(DOT). DOT separately considers gradients of task and distillation losses, then\napplies a larger momentum to distillation loss to accelerate its optimization.\nWe empirically prove that DOT breaks the trade-off, i.e., both losses are\nsufficiently optimized. Extensive experiments validate the superiority of DOT.\nNotably, DOT achieves a +2.59% accuracy improvement on ImageNet-1k for the\nResNet50-MobileNetV1 pair. Conclusively, DOT greatly benefits the student's\noptimization properties in terms of loss convergence and model generalization.\nCode will be made publicly available.", + "authors": "Borui Zhao, Quan Cui, Renjie Song, Jiajun Liang", + "published": "2023-07-17", + "updated": "2023-07-17", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.01392v1", + "title": "No-go theorem for probabilistic one-way secret-key distillation", + "abstract": "The probabilistic one-way distillable secret key is equal to the largest\nexpected rate at which perfect secret key bits can be probabilistically\ndistilled from a bipartite state by means of local operations and one-way\nclassical communication. 
Here we define the set of super two-extendible states\nand prove that an arbitrary state in this set cannot be used for probabilistic\none-way secret-key distillation. This broad class of states includes both\nerased states and all full-rank states. Comparing the probabilistic one-way\ndistillable secret key with the more commonly studied approximate one-way\ndistillable secret key, our results demonstrate an extreme gap between them for\nmany states of interest, with the approximate one-way distillable secret key\nbeing much larger. Our findings naturally extend to probabilistic one-way\nentanglement distillation, with similar conclusions.", + "authors": "Vishal Singh, Mark M. Wilde", + "published": "2024-04-01", + "updated": "2024-04-01", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph", + "cs.IT", + "math.IT" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2308.07719v1", + "title": "The coherent measurement cost of coherence distillation", + "abstract": "Quantum coherence is an indispensable resource for quantum technological\napplications. It is known to be distillable from a noisy form using operations\nthat cannot create coherence. However, distillation exacts a hidden coherent\nmeasurement cost, whose extent has not previously been estimated. Here we show\nthat this cost (quantified by an equivalent number of Hadamard measurements) is\nrelated to what we call the irretrievable coherence: the difference between the\ncoherence of formation and the distillable coherence. We conjecture (and make\npartial progress towards proving) that when distilling from many copies of a\ngiven noisy coherent state, the coherent measurement cost scales extensively in\nthe number of copies, at an asymptotic rate exactly equalling the input's\nirretrievable coherence. This cost applies to any application whereof coherence\ndistillation is an incidental outcome (e.g. incoherent randomness extraction),\nbut the implications are more dramatic if pure coherence is the only desired\noutcome: the measurement cost may often be higher than the distilled yield, in\nwhich case coherence should rather be prepared afresh than distilled from a\nnoisy input.", + "authors": "Varun Narasimhachar", + "published": "2023-08-15", + "updated": "2023-08-15", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/0704.3661v1", + "title": "Complementarity, distillable secret key, and distillable entanglement", + "abstract": "We consider controllability of two conjugate observables Z and X by two\nparties with classical communication. The ability is specified by two\nalternative tasks, (i) agreement on Z and (ii) preparation of an eigenstate of\nX with use of an extra communication channel. We prove that their feasibility\nis equivalent to that of key distillation if the extra channel is quantum, and\nto that of entanglement distillation if it is classical. 
This clarifies the\ndistinction between two entanglement measures, distillable key and distillable\nentanglement.", + "authors": "Masato Koashi", + "published": "2007-04-27", + "updated": "2007-04-27", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1901.09135v1", + "title": "Progressive Label Distillation: Learning Input-Efficient Deep Neural Networks", + "abstract": "Much of the focus in the area of knowledge distillation has been on\ndistilling knowledge from a larger teacher network to a smaller student\nnetwork. However, there has been little research on how the concept of\ndistillation can be leveraged to distill the knowledge encapsulated in the\ntraining data itself into a reduced form. In this study, we explore the concept\nof progressive label distillation, where we leverage a series of\nteacher-student network pairs to progressively generate distilled training data\nfor learning deep neural networks with greatly reduced input dimensions. To\ninvestigate the efficacy of the proposed progressive label distillation\napproach, we experimented with learning a deep limited vocabulary speech\nrecognition network based on generated 500ms input utterances distilled\nprogressively from 1000ms source training data, and demonstrated a significant\nincrease in test accuracy of almost 78% compared to direct learning.", + "authors": "Zhong Qiu Lin, Alexander Wong", + "published": "2019-01-26", + "updated": "2019-01-26", + "primary_cat": "cs.NE", + "cats": [ + "cs.NE", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2403.10045v1", + "title": "Towards Adversarially Robust Dataset Distillation by Curvature Regularization", + "abstract": "Dataset distillation (DD) allows datasets to be distilled to fractions of\ntheir original size while preserving the rich distributional information so\nthat models trained on the distilled datasets can achieve a comparable accuracy\nwhile saving significant computational loads. Recent research in this area has\nbeen focusing on improving the accuracy of models trained on distilled\ndatasets. In this paper, we aim to explore a new perspective of DD. We study\nhow to embed adversarial robustness in distilled datasets, so that models\ntrained on these datasets maintain the high accuracy and meanwhile acquire\nbetter adversarial robustness. We propose a new method that achieves this goal\nby incorporating curvature regularization into the distillation process with\nmuch less computational overhead than standard adversarial training. Extensive\nempirical experiments suggest that our method not only outperforms standard\nadversarial training on both accuracy and robustness with less computation\noverhead but is also capable of generating robust distilled datasets that can\nwithstand various adversarial attacks.", + "authors": "Eric Xue, Yijiang Li, Haoyang Liu, Yifan Shen, Haohan Wang", + "published": "2024-03-15", + "updated": "2024-03-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2311.13811v2", + "title": "Education distillation:getting student models to learn in shcools", + "abstract": "Knowledge distillation is one of the methods for model compression, and\nexisting knowledge distillation techniques focus on how to improve the\ndistillation algorithm so as to enhance the distillation efficiency. 
This paper\nintroduces dynamic incremental learning into knowledge distillation and\nproposes a distillation strategy for education distillation. Specifically, it\nis proposed to take fragmented student models divided from the complete student\nmodel as lower-grade models. As the grade level rises, fragmented student\nmodels deepen in conjunction with designed teaching reference layers, while\nlearning and distilling from more teacher models. By moving from lower to\nhigher grades, fragmented student models were gradually integrated into a\ncomplete target student model, and the performance of the student models\ngradually improved from lower to higher grades of the stage. Education\ndistillation strategies combined with distillation algorithms outperform the\nresults of single distillation algorithms on the public dataset\nCIFAR100,Caltech256, Food-101 dataset.", + "authors": "Ling Feng, Danyang Li, Tianhao Wu, Xuliang Duan", + "published": "2023-11-23", + "updated": "2023-11-27", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2102.02973v1", + "title": "Show, Attend and Distill:Knowledge Distillation via Attention-based Feature Matching", + "abstract": "Knowledge distillation extracts general knowledge from a pre-trained teacher\nnetwork and provides guidance to a target student network. Most studies\nmanually tie intermediate features of the teacher and student, and transfer\nknowledge through pre-defined links. However, manual selection often constructs\nineffective links that limit the improvement from the distillation. There has\nbeen an attempt to address the problem, but it is still challenging to identify\neffective links under practical scenarios. In this paper, we introduce an\neffective and efficient feature distillation method utilizing all the feature\nlevels of the teacher without manually selecting the links. Specifically, our\nmethod utilizes an attention-based meta-network that learns relative\nsimilarities between features, and applies identified similarities to control\ndistillation intensities of all possible pairs. As a result, our method\ndetermines competent links more efficiently than the previous approach and\nprovides better performance on model compression and transfer learning tasks.\nFurther qualitative analyses and ablative studies describe how our method\ncontributes to better distillation. The implementation code is available at\ngithub.com/clovaai/attention-feature-distillation.", + "authors": "Mingi Ji, Byeongho Heo, Sungrae Park", + "published": "2021-02-05", + "updated": "2021-02-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1108.0537v2", + "title": "Isotropic non-locality cannot be distilled", + "abstract": "We investigate non-locality distillation protocols for isotropic\ncorrelations. These correlations are the hardest instances which respect to\ndistillability and only partial results are known about their behaviour under\nnon-locality distillation protocols. We completely resolve this issue by\nproving that non-locality distillation is impossible for all non-local\nisotropic correlations.", + "authors": "Dejan D. 
Dukaric", + "published": "2011-08-02", + "updated": "2011-09-20", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2007.09029v1", + "title": "Knowledge Distillation in Deep Learning and its Applications", + "abstract": "Deep learning based models are relatively large, and it is hard to deploy\nsuch models on resource-limited devices such as mobile phones and embedded\ndevices. One possible solution is knowledge distillation whereby a smaller\nmodel (student model) is trained by utilizing the information from a larger\nmodel (teacher model). In this paper, we present a survey of knowledge\ndistillation techniques applied to deep learning models. To compare the\nperformances of different techniques, we propose a new metric called\ndistillation metric. Distillation metric compares different knowledge\ndistillation algorithms based on sizes and accuracy scores. Based on the\nsurvey, some interesting conclusions are drawn and presented in this paper.", + "authors": "Abdolmaged Alkhulaifi, Fahad Alsahli, Irfan Ahmad", + "published": "2020-07-17", + "updated": "2020-07-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2205.09153v1", + "title": "ERNIE-Search: Bridging Cross-Encoder with Dual-Encoder via Self On-the-fly Distillation for Dense Passage Retrieval", + "abstract": "Neural retrievers based on pre-trained language models (PLMs), such as\ndual-encoders, have achieved promising performance on the task of open-domain\nquestion answering (QA). Their effectiveness can further reach new\nstate-of-the-arts by incorporating cross-architecture knowledge distillation.\nHowever, most of the existing studies just directly apply conventional\ndistillation methods. They fail to consider the particular situation where the\nteacher and student have different structures. In this paper, we propose a\nnovel distillation method that significantly advances cross-architecture\ndistillation for dual-encoders. Our method 1) introduces a self on-the-fly\ndistillation method that can effectively distill late interaction (i.e.,\nColBERT) to vanilla dual-encoder, and 2) incorporates a cascade distillation\nprocess to further improve the performance with a cross-encoder teacher.\nExtensive experiments are conducted to validate that our proposed solution\noutperforms strong baselines and establish a new state-of-the-art on\nopen-domain QA benchmarks.", + "authors": "Yuxiang Lu, Yiding Liu, Jiaxiang Liu, Yunsheng Shi, Zhengjie Huang, Shikun Feng Yu Sun, Hao Tian, Hua Wu, Shuaiqiang Wang, Dawei Yin, Haifeng Wang", + "published": "2022-05-18", + "updated": "2022-05-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2010.13002v2", + "title": "Pre-trained Summarization Distillation", + "abstract": "Recent state-of-the-art approaches to summarization utilize large pre-trained\nTransformer models. Distilling these models to smaller student models has\nbecome critically important for practical use; however there are many different\ndistillation methods proposed by the NLP literature. Recent work on distilling\nBERT for classification and regression tasks shows strong performance using\ndirect knowledge distillation. Alternatively, machine translation practitioners\ndistill using pseudo-labeling, where a small model is trained on the\ntranslations of a larger model. 
A third, simpler approach is to 'shrink and\nfine-tune' (SFT), which avoids any explicit distillation by copying parameters\nto a smaller student model and then fine-tuning. We compare these three\napproaches for distillation of Pegasus and BART, the current and former state\nof the art, pre-trained summarization models, and find that SFT outperforms\nknowledge distillation and pseudo-labeling on the CNN/DailyMail dataset, but\nunder-performs pseudo-labeling on the more abstractive XSUM dataset. PyTorch\nCode and checkpoints of different sizes are available through Hugging Face\ntransformers here http://tiny.cc/4iy0tz.", + "authors": "Sam Shleifer, Alexander M. Rush", + "published": "2020-10-24", + "updated": "2020-10-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/0708.3699v2", + "title": "Convolutional Entanglement Distillation", + "abstract": "We develop a theory of entanglement distillation that exploits a\nconvolutional coding structure. We provide a method for converting an arbitrary\nclassical binary or quaternary convolutional code into a convolutional\nentanglement distillation protocol. The imported classical convolutional code\ndoes not have to be dual-containing or self-orthogonal. The yield and\nerror-correcting properties of such a protocol depend respectively on the rate\nand error-correcting properties of the imported classical convolutional code. A\nconvolutional entanglement distillation protocol has several other benefits.\nTwo parties sharing noisy ebits can distill noiseless ebits ``online'' as they\nacquire more noisy ebits. Distillation yield is high and decoding complexity is\nsimple for a convolutional entanglement distillation protocol. Our theory of\nconvolutional entanglement distillation reduces the problem of finding a good\nconvolutional entanglement distillation protocol to the well-established\nproblem of finding a good classical convolutional code.", + "authors": "Mark M. Wilde, Hari Krovi, Todd A. Brun", + "published": "2007-08-28", + "updated": "2007-09-19", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph", + "cs.IT", + "math.IT" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0108029v1", + "title": "Distillability, Bell inequalities and multiparticle bound entanglement", + "abstract": "We study the relation between violation of Bell inequalities and\ndistillability properties of quantum states. Recently, D\\\"ur has shown that\nthere are some multiparticle bound entangled states, non-separable and\nnon-distillable, that violate a Bell inequality. We prove that for all the\nstates violating this inequality there exist at least one splitting of the\nparties into two groups such that some pure-state entanglement can be\ndistilled, obtaining a connection between Bell inequalities and bipartite\ndistillable entanglement.", + "authors": "A. Acin", + "published": "2001-08-07", + "updated": "2001-08-07", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2403.09053v1", + "title": "Towards a theory of model distillation", + "abstract": "Distillation is the task of replacing a complicated machine learning model\nwith a simpler model that approximates the original [BCNM06,HVD15]. 
Despite\nmany practical applications, basic questions about the extent to which models\ncan be distilled, and the runtime and amount of data needed to distill, remain\nlargely open.\n To study these questions, we initiate a general theory of distillation,\ndefining PAC-distillation in an analogous way to PAC-learning [Val84]. As\napplications of this theory: (1) we propose new algorithms to extract the\nknowledge stored in the trained weights of neural networks -- we show how to\nefficiently distill neural networks into succinct, explicit decision tree\nrepresentations when possible by using the ``linear representation\nhypothesis''; and (2) we prove that distillation can be much cheaper than\nlearning from scratch, and make progress on characterizing its complexity.", + "authors": "Enric Boix-Adsera", + "published": "2024-03-14", + "updated": "2024-03-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.NE" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2401.06370v1", + "title": "Graph Relation Distillation for Efficient Biomedical Instance Segmentation", + "abstract": "Instance-aware embeddings predicted by deep neural networks have\nrevolutionized biomedical instance segmentation, but its resource requirements\nare substantial. Knowledge distillation offers a solution by transferring\ndistilled knowledge from heavy teacher networks to lightweight yet\nhigh-performance student networks. However, existing knowledge distillation\nmethods struggle to extract knowledge for distinguishing instances and overlook\nglobal relation information. To address these challenges, we propose a graph\nrelation distillation approach for efficient biomedical instance segmentation,\nwhich considers three essential types of knowledge: instance-level features,\ninstance relations, and pixel-level boundaries. We introduce two graph\ndistillation schemes deployed at both the intra-image level and the inter-image\nlevel: instance graph distillation (IGD) and affinity graph distillation (AGD).\nIGD constructs a graph representing instance features and relations,\ntransferring these two types of knowledge by enforcing instance graph\nconsistency. AGD constructs an affinity graph representing pixel relations to\ncapture structured knowledge of instance boundaries, transferring\nboundary-related knowledge by ensuring pixel affinity consistency. Experimental\nresults on a number of biomedical datasets validate the effectiveness of our\napproach, enabling student models with less than $ 1\\%$ parameters and less\nthan $10\\%$ inference time while achieving promising performance compared to\nteacher models.", + "authors": "Xiaoyu Liu, Yueyi Zhang, Zhiwei Xiong, Wei Huang, Bo Hu, Xiaoyan Sun, Feng Wu", + "published": "2024-01-12", + "updated": "2024-01-12", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2305.08076v1", + "title": "Improving Defensive Distillation using Teacher Assistant", + "abstract": "Adversarial attacks pose a significant threat to the security and safety of\ndeep neural networks being applied to modern applications. More specifically,\nin computer vision-based tasks, experts can use the knowledge of model\narchitecture to create adversarial samples imperceptible to the human eye.\nThese attacks can lead to security problems in popular applications such as\nself-driving cars, face recognition, etc. 
Hence, building networks which are\nrobust to such attacks is highly desirable and essential. Among the various\nmethods present in literature, defensive distillation has shown promise in\nrecent years. Using knowledge distillation, researchers have been able to\ncreate models robust against some of those attacks. However, more attacks have\nbeen developed exposing weakness in defensive distillation. In this project, we\nderive inspiration from teacher assistant knowledge distillation and propose\nthat introducing an assistant network can improve the robustness of the\ndistilled model. Through a series of experiments, we evaluate the distilled\nmodels for different distillation temperatures in terms of accuracy,\nsensitivity, and robustness. Our experiments demonstrate that the proposed\nhypothesis can improve robustness in most cases. Additionally, we show that\nmulti-step distillation can further improve robustness with very little impact\non model accuracy.", + "authors": "Maniratnam Mandal, Suna Gao", + "published": "2023-05-14", + "updated": "2023-05-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CR", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1707.02573v1", + "title": "Distilling Entanglement with Noisy Operations", + "abstract": "Entanglement distillation is a fundamental task in quantum information\nprocessing. It not only extracts entanglement out of corrupted systems but also\nleads to protecting systems of interest against intervention with environment.\nIn this work, we consider a realistic scenario of entanglement distillation\nwhere noisy quantum operations are applied. In particular, the two-way\ndistillation protocol that tolerates the highest error rate is considered. We\nshow that among all types of noise there are only four equivalence classes\naccording to the distillability condition. Since the four classes are connected\nby local unitary transformations, our results can be used to improve\nentanglement distillability in practice when entanglement distillation is\nperformed in a realistic setting.", + "authors": "Jinho Chang, Joonwoo Bae, Younghun Kwon", + "published": "2017-07-09", + "updated": "2017-07-09", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2403.03846v1", + "title": "On the Effectiveness of Distillation in Mitigating Backdoors in Pre-trained Encoder", + "abstract": "In this paper, we study a defense against poisoned encoders in SSL called\ndistillation, which is a defense used in supervised learning originally.\nDistillation aims to distill knowledge from a given model (a.k.a the teacher\nnet) and transfer it to another (a.k.a the student net). Now, we use it to\ndistill benign knowledge from poisoned pre-trained encoders and transfer it to\na new encoder, resulting in a clean pre-trained encoder. In particular, we\nconduct an empirical study on the effectiveness and performance of distillation\nagainst poisoned encoders. Using two state-of-the-art backdoor attacks against\npre-trained image encoders and four commonly used image classification\ndatasets, our experimental results show that distillation can reduce attack\nsuccess rate from 80.87% to 27.51% while suffering a 6.35% loss in accuracy.\nMoreover, we investigate the impact of three core components of distillation on\nperformance: teacher net, student net, and distillation loss. 
By comparing 4\ndifferent teacher nets, 3 student nets, and 6 distillation losses, we find that\nfine-tuned teacher nets, warm-up-training-based student nets, and\nattention-based distillation loss perform best, respectively.", + "authors": "Tingxu Han, Shenghan Huang, Ziqi Ding, Weisong Sun, Yebo Feng, Chunrong Fang, Jun Li, Hanwei Qian, Cong Wu, Quanjun Zhang, Yang Liu, Zhenyu Chen", + "published": "2024-03-06", + "updated": "2024-03-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2011.06110v1", + "title": "Efficient Knowledge Distillation for RNN-Transducer Models", + "abstract": "Knowledge Distillation is an effective method of transferring knowledge from\na large model to a smaller model. Distillation can be viewed as a type of model\ncompression, and has played an important role for on-device ASR applications.\nIn this paper, we develop a distillation method for RNN-Transducer (RNN-T)\nmodels, a popular end-to-end neural network architecture for streaming speech\nrecognition. Our proposed distillation loss is simple and efficient, and uses\nonly the \"y\" and \"blank\" posterior probabilities from the RNN-T output\nprobability lattice. We study the effectiveness of the proposed approach in\nimproving the accuracy of sparse RNN-T models obtained by gradually pruning a\nlarger uncompressed model, which also serves as the teacher during\ndistillation. With distillation of 60% and 90% sparse multi-domain RNN-T\nmodels, we obtain WER reductions of 4.3% and 12.1% respectively, on a noisy\nFarField eval set. We also present results of experiments on LibriSpeech, where\nthe introduction of the distillation loss yields a 4.8% relative WER reduction\non the test-other dataset for a small Conformer model.", + "authors": "Sankaran Panchapagesan, Daniel S. Park, Chung-Cheng Chiu, Yuan Shangguan, Qiao Liang, Alexander Gruenstein", + "published": "2020-11-11", + "updated": "2020-11-11", + "primary_cat": "eess.AS", + "cats": [ + "eess.AS", + "cs.SD" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/0803.0345v2", + "title": "Secret key distillation from shielded two-qubit states", + "abstract": "The quantum states corresponding to a secret key are characterized using the\nso-called private states, where the key part consisting of a secret key is\nshielded by the additional systems. Based on the construction, it was shown\nthat a secret key can be distilled from bound entangled states. In this work, I\nconsider the shielded two-qubit states in a key-distillation scenario and\nderive the conditions under which a secret key can be distilled using the\nrecurrence protocol or the two-way classical distillation, advantage\ndistillation together with one-way postprocessing. From the security\nconditions, it is shown that a secret key can be distilled from bound entangled\nstates in a much wider range. 
In addition, I consider the case that in which\nwhite noise is added to quantum states and show that the classical distillation\nprotocol still works despite a certain amount of noise although the recurrence\nprotocol does not.", + "authors": "Joonwoo Bae", + "published": "2008-03-03", + "updated": "2010-09-22", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2309.09920v1", + "title": "Distilling HuBERT with LSTMs via Decoupled Knowledge Distillation", + "abstract": "Much research effort is being applied to the task of compressing the\nknowledge of self-supervised models, which are powerful, yet large and memory\nconsuming. In this work, we show that the original method of knowledge\ndistillation (and its more recently proposed extension, decoupled knowledge\ndistillation) can be applied to the task of distilling HuBERT. In contrast to\nmethods that focus on distilling internal features, this allows for more\nfreedom in the network architecture of the compressed model. We thus propose to\ndistill HuBERT's Transformer layers into an LSTM-based distilled model that\nreduces the number of parameters even below DistilHuBERT and at the same time\nshows improved performance in automatic speech recognition.", + "authors": "Danilo de Oliveira, Timo Gerkmann", + "published": "2023-09-18", + "updated": "2023-09-18", + "primary_cat": "eess.AS", + "cats": [ + "eess.AS", + "cs.LG", + "cs.SD", + "eess.SP" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2109.15014v1", + "title": "Deep Neural Compression Via Concurrent Pruning and Self-Distillation", + "abstract": "Pruning aims to reduce the number of parameters while maintaining performance\nclose to the original network. This work proposes a novel\n\\emph{self-distillation} based pruning strategy, whereby the representational\nsimilarity between the pruned and unpruned versions of the same network is\nmaximized. Unlike previous approaches that treat distillation and pruning\nseparately, we use distillation to inform the pruning criteria, without\nrequiring a separate student network as in knowledge distillation. We show that\nthe proposed {\\em cross-correlation objective for self-distilled pruning}\nimplicitly encourages sparse solutions, naturally complementing magnitude-based\npruning criteria. Experiments on the GLUE and XGLUE benchmarks show that\nself-distilled pruning increases mono- and cross-lingual language model\nperformance. Self-distilled pruned models also outperform smaller Transformers\nwith an equal number of parameters and are competitive against (6 times) larger\ndistilled networks. We also observe that self-distillation (1) maximizes class\nseparability, (2) increases the signal-to-noise ratio, and (3) converges faster\nafter pruning steps, providing further insights into why self-distilled pruning\nimproves generalization.", + "authors": "James O' Neill, Sourav Dutta, Haytham Assem", + "published": "2021-09-30", + "updated": "2021-09-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2106.07137v1", + "title": "Why Can You Lay Off Heads? Investigating How BERT Heads Transfer", + "abstract": "The huge size of the widely used BERT family models has led to recent efforts\nabout model distillation. 
The main goal of distillation is to create a\ntask-agnostic pre-trained model that can be fine-tuned on downstream tasks\nwithout fine-tuning its full-sized version. Despite the progress of\ndistillation, to what degree and for what reason a task-agnostic model can be\ncreated from distillation has not been well studied. Also, the mechanisms\nbehind transfer learning of those BERT models are not well investigated either.\nTherefore, this work focuses on analyzing the acceptable deduction when\ndistillation for guiding the future distillation procedure. Specifically, we\nfirst inspect the prunability of the Transformer heads in RoBERTa and ALBERT\nusing their head importance estimation proposed by Michel et al. (2019), and\nthen check the coherence of the important heads between the pre-trained task\nand downstream tasks. Hence, the acceptable deduction of performance on the\npre-trained task when distilling a model can be derived from the results, and\nwe further compare the behavior of the pruned model before and after\nfine-tuning. Our studies provide guidance for future directions about BERT\nfamily model distillation.", + "authors": "Ting-Rui Chiang, Yun-Nung Chen", + "published": "2021-06-14", + "updated": "2021-06-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.09740v1", + "title": "Leveraging Zero-Level Distillation to Generate High-Fidelity Magic States", + "abstract": "Magic state distillation plays an important role in universal fault-tolerant\nquantum computing, and its overhead is one of the major obstacles to realizing\nfault-tolerant quantum computers. Hence, many studies have been conducted to\nreduce this overhead. Among these, Litinski has provided a concrete assessment\nof resource-efficient distillation protocol implementations on the rotated\nsurface code. On the other hand, recently, Itogawa et al. have proposed\nzero-level distillation, a distillation protocol offering very small spatial\nand temporal overhead to generate relatively low-fidelity magic states. While\nzero-level distillation offers preferable spatial and temporal overhead, it\ncannot directly generate high-fidelity magic states since it only reduces the\nlogical error rate of the magic state quadratically. In this study, we evaluate\nthe spatial and temporal overhead of two-level distillation implementations\ngenerating relatively high-fidelity magic states, including ones incorporating\nzero-level distillation. To this end, we introduce (0+1)-level distillation, a\ntwo-level distillation protocol which combines zero-level distillation and the\n15-to-1 distillation protocol. We refine the second-level 15-to-1\nimplementation in it to capitalize on the small footprint of zero-level\ndistillation. 
Under conditions of a physical error probability of\n$p_{\\mathrm{phys}} = 10^{-4}$ ($10^{-3}$) and targeting an error rate for the\nmagic state within $[5 \\times 10^{-17}, 10^{-11}]$ ($[5 \\times 10^{-11},\n10^{-8}]$), (0+1)-level distillation reduces the spatiotemporal overhead by\nmore than 63% (61%) compared to the (15-to-1)$\\times$(15-to-1) protocol and\nmore than 43% (44%) compared to the (15-to-1)$\\times$(20-to-4) protocol,\noffering a substantial efficiency gain over the traditional protocols.", + "authors": "Yutaka Hirano, Tomohiro Itogawa, Keisuke Fujii", + "published": "2024-04-15", + "updated": "2024-04-15", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2104.11928v1", + "title": "Extract then Distill: Efficient and Effective Task-Agnostic BERT Distillation", + "abstract": "Task-agnostic knowledge distillation, a teacher-student framework, has been\nproved effective for BERT compression. Although achieving promising results on\nNLP tasks, it requires enormous computational resources. In this paper, we\npropose Extract Then Distill (ETD), a generic and flexible strategy to reuse\nthe teacher's parameters for efficient and effective task-agnostic\ndistillation, which can be applied to students of any size. Specifically, we\nintroduce two variants of ETD, ETD-Rand and ETD-Impt, which extract the\nteacher's parameters in a random manner and by following an importance metric\nrespectively. In this way, the student has already acquired some knowledge at\nthe beginning of the distillation process, which makes the distillation process\nconverge faster. We demonstrate the effectiveness of ETD on the GLUE benchmark\nand SQuAD. The experimental results show that: (1) compared with the baseline\nwithout an ETD strategy, ETD can save 70\\% of computation cost. Moreover, it\nachieves better results than the baseline when using the same computing\nresource. (2) ETD is generic and has been proven effective for different\ndistillation methods (e.g., TinyBERT and MiniLM) and students of different\nsizes. The source code will be publicly available upon publication.", + "authors": "Cheng Chen, Yichun Yin, Lifeng Shang, Zhi Wang, Xin Jiang, Xiao Chen, Qun Liu", + "published": "2021-04-24", + "updated": "2021-04-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2006.08572v3", + "title": "Flexible Dataset Distillation: Learn Labels Instead of Images", + "abstract": "We study the problem of dataset distillation - creating a small set of\nsynthetic examples capable of training a good model. In particular, we study\nthe problem of label distillation - creating synthetic labels for a small set\nof real images, and show it to be more effective than the prior image-based\napproach to dataset distillation. Methodologically, we introduce a more robust\nand flexible meta-learning algorithm for distillation, as well as an effective\nfirst-order strategy based on convex optimization layers. Distilling labels\nwith our new algorithm leads to improved results over prior image-based\ndistillation. More importantly, it leads to clear improvements in flexibility\nof the distilled dataset in terms of compatibility with off-the-shelf\noptimizers and diverse neural architectures. 
Interestingly, label distillation\ncan also be applied across datasets, for example enabling learning Japanese\ncharacter recognition by training only on synthetically labeled English\nletters.", + "authors": "Ondrej Bohdal, Yongxin Yang, Timothy Hospedales", + "published": "2020-06-15", + "updated": "2020-12-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0607126v3", + "title": "Random bipartite entanglement from W and W-like states", + "abstract": "We describe a protocol for distilling maximally entangled bipartite states\nbetween random pairs of parties from those sharing a tripartite W state, and\nshow that, rather surprisingly, the total distillation rate (the total number\nof EPR pairs distilled per W, irrespective of who shares them) may be done at a\nhigher rate than distillation of bipartite entanglement between specified pairs\nof parties. Specifically, the optimal distillation rate for specified\nentanglement for the W has been previously shown to be the asymptotic\nentanglement of assistance of 0.92 EPR pairs per W, while our protocol can\nasymptotically distill 1 EPR pair per W between random pairs of parties, which\nwe conjecture to be optimal. We thus demonstrate a tradeoff between the overall\nasymptotic rate of EPR distillation and the distribution of final EPR pairs\nbetween parties. We further show that by increasing the number of parties in\nthe protocol that there exist states with fixed lower-bounded distillable\nentanglement for random parties but arbitrarily small distillable entanglement\nfor specified parties.", + "authors": "Ben Fortescue, Hoi-Kwong Lo", + "published": "2006-07-18", + "updated": "2007-02-23", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.17732v1", + "title": "Generative Dataset Distillation: Balancing Global Structure and Local Details", + "abstract": "In this paper, we propose a new dataset distillation method that considers\nbalancing global structure and local details when distilling the information\nfrom a large dataset into a generative model. Dataset distillation has been\nproposed to reduce the size of the required dataset when training models. The\nconventional dataset distillation methods face the problem of long redeployment\ntime and poor cross-architecture performance. Moreover, previous methods\nfocused too much on the high-level semantic attributes between the synthetic\ndataset and the original dataset while ignoring the local features such as\ntexture and shape. Based on the above understanding, we propose a new method\nfor distilling the original image dataset into a generative model. Our method\ninvolves using a conditional generative adversarial network to generate the\ndistilled dataset. 
Subsequently, we ensure balancing global structure and local\ndetails in the distillation process, continuously optimizing the generator for\nmore information-dense dataset generation.", + "authors": "Longzhen Li, Guang Li, Ren Togo, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama", + "published": "2024-04-26", + "updated": "2024-04-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1907.09682v2", + "title": "Similarity-Preserving Knowledge Distillation", + "abstract": "Knowledge distillation is a widely applicable technique for training a\nstudent neural network under the guidance of a trained teacher network. For\nexample, in neural network compression, a high-capacity teacher is distilled to\ntrain a compact student; in privileged learning, a teacher trained with\nprivileged data is distilled to train a student without access to that data.\nThe distillation loss determines how a teacher's knowledge is captured and\ntransferred to the student. In this paper, we propose a new form of knowledge\ndistillation loss that is inspired by the observation that semantically similar\ninputs tend to elicit similar activation patterns in a trained network.\nSimilarity-preserving knowledge distillation guides the training of a student\nnetwork such that input pairs that produce similar (dissimilar) activations in\nthe teacher network produce similar (dissimilar) activations in the student\nnetwork. In contrast to previous distillation methods, the student is not\nrequired to mimic the representation space of the teacher, but rather to\npreserve the pairwise similarities in its own representation space. Experiments\non three public datasets demonstrate the potential of our approach.", + "authors": "Frederick Tung, Greg Mori", + "published": "2019-07-23", + "updated": "2019-08-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2303.05015v2", + "title": "Smooth and Stepwise Self-Distillation for Object Detection", + "abstract": "Distilling the structured information captured in feature maps has\ncontributed to improved results for object detection tasks, but requires\ncareful selection of baseline architectures and substantial pre-training.\nSelf-distillation addresses these limitations and has recently achieved\nstate-of-the-art performance for object detection despite making several\nsimplifying architectural assumptions. Building on this work, we propose Smooth\nand Stepwise Self-Distillation (SSSD) for object detection. Our SSSD\narchitecture forms an implicit teacher from object labels and a feature pyramid\nnetwork backbone to distill label-annotated feature maps using Jensen-Shannon\ndistance, which is smoother than distillation losses used in prior work. We\nadditionally add a distillation coefficient that is adaptively configured based\non the learning rate. We extensively benchmark SSSD against a baseline and two\nstate-of-the-art object detector architectures on the COCO dataset by varying\nthe coefficients and backbone and detector networks. 
We demonstrate that SSSD\nachieves higher average precision in most experimental settings, is robust to a\nwide range of coefficients, and benefits from our stepwise distillation\nprocedure.", + "authors": "Jieren Deng, Xin Zhou, Hao Tian, Zhihong Pan, Derek Aguiar", + "published": "2023-03-09", + "updated": "2024-01-19", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2208.10068v1", + "title": "Tree-structured Auxiliary Online Knowledge Distillation", + "abstract": "Traditional knowledge distillation adopts a two-stage training process in\nwhich a teacher model is pre-trained and then transfers the knowledge to a\ncompact student model. To overcome the limitation, online knowledge\ndistillation is proposed to perform one-stage distillation when the teacher is\nunavailable. Recent researches on online knowledge distillation mainly focus on\nthe design of the distillation objective, including attention or gate\nmechanism. Instead, in this work, we focus on the design of the global\narchitecture and propose Tree-Structured Auxiliary online knowledge\ndistillation (TSA), which adds more parallel peers for layers close to the\noutput hierarchically to strengthen the effect of knowledge distillation.\nDifferent branches construct different views of the inputs, which can be the\nsource of the knowledge. The hierarchical structure implies that the knowledge\ntransfers from general to task-specific with the growth of the layers.\nExtensive experiments on 3 computer vision and 4 natural language processing\ndatasets show that our method achieves state-of-the-art performance without\nbells and whistles. To the best of our knowledge, we are the first to\ndemonstrate the effectiveness of online knowledge distillation for machine\ntranslation tasks.", + "authors": "Wenye Lin, Yangning Li, Yifeng Ding, Hai-Tao Zheng", + "published": "2022-08-22", + "updated": "2022-08-22", + "primary_cat": "cs.NI", + "cats": [ + "cs.NI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2305.05233v1", + "title": "DynamicKD: An Effective Knowledge Distillation via Dynamic Entropy Correction-Based Distillation for Gap Optimizing", + "abstract": "The knowledge distillation uses a high-performance teacher network to guide\nthe student network. However, the performance gap between the teacher and\nstudent networks can affect the student's training. This paper proposes a novel\nknowledge distillation algorithm based on dynamic entropy correction to reduce\nthe gap by adjusting the student instead of the teacher. Firstly, the effect of\nchanging the output entropy (short for output information entropy) in the\nstudent on the distillation loss is analyzed in theory. This paper shows that\ncorrecting the output entropy can reduce the gap. Then, a knowledge\ndistillation algorithm based on dynamic entropy correction is created, which\ncan correct the output entropy in real-time with an entropy controller updated\ndynamically by the distillation loss. The proposed algorithm is validated on\nthe CIFAR100 and ImageNet. The comparison with various state-of-the-art\ndistillation algorithms shows impressive results, especially in the experiment\non the CIFAR100 regarding teacher-student pair resnet32x4-resnet8x4. 
The\nproposed algorithm raises 2.64 points over the traditional distillation\nalgorithm and 0.87 points over the state-of-the-art algorithm CRD in\nclassification accuracy, demonstrating its effectiveness and efficiency.", + "authors": "Songling Zhu, Ronghua Shang, Bo Yuan, Weitong Zhang, Yangyang Li, Licheng Jiao", + "published": "2023-05-09", + "updated": "2023-05-09", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2312.06899v1", + "title": "LoRA-Enhanced Distillation on Guided Diffusion Models", + "abstract": "Diffusion models, such as Stable Diffusion (SD), offer the ability to\ngenerate high-resolution images with diverse features, but they come at a\nsignificant computational and memory cost. In classifier-free guided diffusion\nmodels, prolonged inference times are attributed to the necessity of computing\ntwo separate diffusion models at each denoising step. Recent work has shown\npromise in improving inference time through distillation techniques, teaching\nthe model to perform similar denoising steps with reduced computations.\nHowever, the application of distillation introduces additional memory overhead\nto these already resource-intensive diffusion models, making it less practical.\n To address these challenges, our research explores a novel approach that\ncombines Low-Rank Adaptation (LoRA) with model distillation to efficiently\ncompress diffusion models. This approach not only reduces inference time but\nalso mitigates memory overhead, and notably decreases memory consumption even\nbefore applying distillation. The results are remarkable, featuring a\nsignificant reduction in inference time due to the distillation process and a\nsubstantial 50% reduction in memory consumption. Our examination of the\ngenerated images underscores that the incorporation of LoRA-enhanced\ndistillation maintains image quality and alignment with the provided prompts.\nIn summary, while conventional distillation tends to increase memory\nconsumption, LoRA-enhanced distillation offers optimization without any\ntrade-offs or compromises in quality.", + "authors": "Pareesa Ameneh Golnari", + "published": "2023-12-12", + "updated": "2023-12-12", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0001084v2", + "title": "Distillation of GHZ states by selective information manipulation", + "abstract": "Methods for distilling maximally entangled tripartite (GHZ) states from\narbitrary entangled tripartite pure states are described. These techniques work\nfor virtually any input state. Each technique has two stages which we call\nprimary and secondary distillation. Primary distillation produces a GHZ state\nwith some probability, so that when applied to an ensemble of systems, a\ncertain percentage is discarded. Secondary distillation produces further GHZs\nfrom the discarded systems. These protocols are developed with the help of an\napproach to quantum information theory based on absolutely selective\ninformation, which has other potential applications.", + "authors": "Oliver Cohen, Todd A. 
Brun", + "published": "2000-01-23", + "updated": "2000-02-02", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2402.02781v1", + "title": "Dual Knowledge Distillation for Efficient Sound Event Detection", + "abstract": "Sound event detection (SED) is essential for recognizing specific sounds and\ntheir temporal locations within acoustic signals. This becomes challenging\nparticularly for on-device applications, where computational resources are\nlimited. To address this issue, we introduce a novel framework referred to as\ndual knowledge distillation for developing efficient SED systems in this work.\nOur proposed dual knowledge distillation commences with temporal-averaging\nknowledge distillation (TAKD), utilizing a mean student model derived from the\ntemporal averaging of the student model's parameters. This allows the student\nmodel to indirectly learn from a pre-trained teacher model, ensuring a stable\nknowledge distillation. Subsequently, we introduce embedding-enhanced feature\ndistillation (EEFD), which involves incorporating an embedding distillation\nlayer within the student model to bolster contextual learning. On DCASE 2023\nTask 4A public evaluation dataset, our proposed SED system with dual knowledge\ndistillation having merely one-third of the baseline model's parameters,\ndemonstrates superior performance in terms of PSDS1 and PSDS2. This highlights\nthe importance of proposed dual knowledge distillation for compact SED systems,\nwhich can be ideal for edge devices.", + "authors": "Yang Xiao, Rohan Kumar Das", + "published": "2024-02-05", + "updated": "2024-02-05", + "primary_cat": "cs.SD", + "cats": [ + "cs.SD", + "cs.AI", + "cs.CL", + "cs.LG", + "eess.AS" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1903.04197v7", + "title": "Structured Knowledge Distillation for Dense Prediction", + "abstract": "In this work, we consider transferring the structure information from large\nnetworks to compact ones for dense prediction tasks in computer vision.\nPrevious knowledge distillation strategies used for dense prediction tasks\noften directly borrow the distillation scheme for image classification and\nperform knowledge distillation for each pixel separately, leading to\nsub-optimal performance. Here we propose to distill structured knowledge from\nlarge networks to compact networks, taking into account the fact that dense\nprediction is a structured prediction problem. Specifically, we study two\nstructured distillation schemes: i) pair-wise distillation that distills the\npair-wise similarities by building a static graph; and ii) holistic\ndistillation that uses adversarial training to distill holistic knowledge. The\neffectiveness of our knowledge distillation approaches is demonstrated by\nexperiments on three dense prediction tasks: semantic segmentation, depth\nestimation and object detection. Code is available at: https://git.io/StructKD", + "authors": "Yifan Liu, Changyong Shun, Jingdong Wang, Chunhua Shen", + "published": "2019-03-11", + "updated": "2020-06-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2304.06461v2", + "title": "Multi-Mode Online Knowledge Distillation for Self-Supervised Visual Representation Learning", + "abstract": "Self-supervised learning (SSL) has made remarkable progress in visual\nrepresentation learning. 
Some studies combine SSL with knowledge distillation\n(SSL-KD) to boost the representation learning performance of small models. In\nthis study, we propose a Multi-mode Online Knowledge Distillation method (MOKD)\nto boost self-supervised visual representation learning. Different from\nexisting SSL-KD methods that transfer knowledge from a static pre-trained\nteacher to a student, in MOKD, two different models learn collaboratively in a\nself-supervised manner. Specifically, MOKD consists of two distillation modes:\nself-distillation and cross-distillation modes. Among them, self-distillation\nperforms self-supervised learning for each model independently, while\ncross-distillation realizes knowledge interaction between different models. In\ncross-distillation, a cross-attention feature search strategy is proposed to\nenhance the semantic feature alignment between different models. As a result,\nthe two models can absorb knowledge from each other to boost their\nrepresentation learning performance. Extensive experimental results on\ndifferent backbones and datasets demonstrate that two heterogeneous models can\nbenefit from MOKD and outperform their independently trained baseline. In\naddition, MOKD also outperforms existing SSL-KD methods for both the student\nand teacher models.", + "authors": "Kaiyou Song, Jin Xie, Shan Zhang, Zimeng Luo", + "published": "2023-04-13", + "updated": "2023-06-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2204.00548v1", + "title": "Unified and Effective Ensemble Knowledge Distillation", + "abstract": "Ensemble knowledge distillation can extract knowledge from multiple teacher\nmodels and encode it into a single student model. Many existing methods learn\nand distill the student model on labeled data only. However, the teacher models\nare usually learned on the same labeled data, and their predictions have high\ncorrelations with groudtruth labels. Thus, they cannot provide sufficient\nknowledge complementary to task labels for student teaching. Distilling on\nunseen unlabeled data has the potential to enhance the knowledge transfer from\nthe teachers to the student. In this paper, we propose a unified and effective\nensemble knowledge distillation method that distills a single student model\nfrom an ensemble of teacher models on both labeled and unlabeled data. Since\ndifferent teachers may have diverse prediction correctness on the same sample,\non labeled data we weight the predictions of different teachers according to\ntheir correctness. In addition, we weight the distillation loss based on the\noverall prediction correctness of the teacher ensemble to distill high-quality\nknowledge. On unlabeled data, there is no groundtruth to evaluate prediction\ncorrectness. 
Fortunately, the disagreement among teachers is an indication of\nsample hardness, and thereby we weight the distillation loss based on teachers'\ndisagreement to emphasize knowledge distillation on important samples.\nExtensive experiments on four datasets show the effectiveness of our proposed\nensemble distillation method.", + "authors": "Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang", + "published": "2022-04-01", + "updated": "2022-04-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2302.05637v2", + "title": "Dual Relation Knowledge Distillation for Object Detection", + "abstract": "Knowledge distillation is an effective method for model compression. However,\nit is still a challenging topic to apply knowledge distillation to detection\ntasks. There are two key points resulting in poor distillation performance for\ndetection tasks. One is the serious imbalance between foreground and background\nfeatures, another one is that small object lacks enough feature representation.\nTo solve the above issues, we propose a new distillation method named dual\nrelation knowledge distillation (DRKD), including pixel-wise relation\ndistillation and instance-wise relation distillation. The pixel-wise relation\ndistillation embeds pixel-wise features in the graph space and applies graph\nconvolution to capture the global pixel relation. By distilling the global\npixel relation, the student detector can learn the relation between foreground\nand background features, and avoid the difficulty of distilling features\ndirectly for the feature imbalance issue. Besides, we find that instance-wise\nrelation supplements valuable knowledge beyond independent features for small\nobjects. Thus, the instance-wise relation distillation is designed, which\ncalculates the similarity of different instances to obtain a relation matrix.\nMore importantly, a relation filter module is designed to highlight valuable\ninstance relations. The proposed dual relation knowledge distillation is\ngeneral and can be easily applied for both one-stage and two-stage detectors.\nOur method achieves state-of-the-art performance, which improves Faster R-CNN\nbased on ResNet50 from 38.4% to 41.6% mAP and improves RetinaNet based on\nResNet50 from 37.4% to 40.3% mAP on COCO 2017.", + "authors": "Zhenliang Ni, Fukui Yang, Shengzhao Wen, Gang Zhang", + "published": "2023-02-11", + "updated": "2023-06-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2104.02857v2", + "title": "Soft-Label Anonymous Gastric X-ray Image Distillation", + "abstract": "This paper presents a soft-label anonymous gastric X-ray image distillation\nmethod based on a gradient descent approach. The sharing of medical data is\ndemanded to construct high-accuracy computer-aided diagnosis (CAD) systems.\nHowever, the large size of the medical dataset and privacy protection are\nremaining problems in medical data sharing, which hindered the research of CAD\nsystems. The idea of our distillation method is to extract the valid\ninformation of the medical dataset and generate a tiny distilled dataset that\nhas a different data distribution. Different from model distillation, our\nmethod aims to find the optimal distilled images, distilled labels and the\noptimized learning rate. 
Experimental results show that the proposed method can\nnot only effectively compress the medical dataset but also anonymize medical\nimages to protect the patient's private information. The proposed approach can\nimprove the efficiency and security of medical data sharing.", + "authors": "Guang Li, Ren Togo, Takahiro Ogawa, Miki Haseyama", + "published": "2021-04-07", + "updated": "2024-03-21", + "primary_cat": "eess.IV", + "cats": [ + "eess.IV", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2112.10047v1", + "title": "Controlling the Quality of Distillation in Response-Based Network Compression", + "abstract": "The performance of a distillation-based compressed network is governed by the\nquality of distillation. The reason for the suboptimal distillation of a large\nnetwork (teacher) to a smaller network (student) is largely attributed to the\ngap in the learning capacities of given teacher-student pair. While it is hard\nto distill all the knowledge of a teacher, the quality of distillation can be\ncontrolled to a large extent to achieve better performance. Our experiments\nshow that the quality of distillation is largely governed by the quality of\nteacher's response, which in turn is heavily affected by the presence of\nsimilarity information in its response. A well-trained large capacity teacher\nloses similarity information between classes in the process of learning\nfine-grained discriminative properties for classification. The absence of\nsimilarity information causes the distillation process to be reduced from one\nexample-many class learning to one example-one class learning, thereby\nthrottling the flow of diverse knowledge from the teacher. With the implicit\nassumption that only the instilled knowledge can be distilled, instead of\nfocusing only on the knowledge distilling process, we scrutinize the knowledge\ninculcation process. We argue that for a given teacher-student pair, the\nquality of distillation can be improved by finding the sweet spot between batch\nsize and number of epochs while training the teacher. We discuss the steps to\nfind this sweet spot for better distillation. We also propose the distillation\nhypothesis to differentiate the behavior of the distillation process between\nknowledge distillation and regularization effect. We conduct all our\nexperiments on three different datasets.", + "authors": "Vibhas Vats, David Crandall", + "published": "2021-12-19", + "updated": "2021-12-19", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0012022v1", + "title": "Distilling a Greenberger-Horne-Zeilinger State From an Arbitrary Pure State of Three Qubits", + "abstract": "We present a general algorithm to achieve local operators which can produce\nthe GHZ state for an arbitrary given three-qubit state. Thus the distillation\nprocess of the state can be realized optimally. The algorithm is shown to be\nsufficient for the three-qubit state on account of the fact that any state for\nwhich this distillation algorithm is invalid cannot be distilled to the GHZ\nstate by any local actions. 
Moreover, an analytical result of distillation\noperations is achieved for the general state of three qubits.", + "authors": "Li-Xiang Cen, Shun-Jin Wang", + "published": "2000-12-05", + "updated": "2000-12-05", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + } + ], + [ + { + "url": "http://arxiv.org/abs/2402.17188v3", + "title": "PromptMM: Multi-Modal Knowledge Distillation for Recommendation with Prompt-Tuning", + "abstract": "Multimedia online platforms (e.g., Amazon, TikTok) have greatly benefited\nfrom the incorporation of multimedia (e.g., visual, textual, and acoustic)\ncontent into their personal recommender systems. These modalities provide\nintuitive semantics that facilitate modality-aware user preference modeling.\nHowever, two key challenges in multi-modal recommenders remain unresolved: i)\nThe introduction of multi-modal encoders with a large number of additional\nparameters causes overfitting, given high-dimensional multi-modal features\nprovided by extractors (e.g., ViT, BERT). ii) Side information inevitably\nintroduces inaccuracies and redundancies, which skew the modality-interaction\ndependency from reflecting true user preference. To tackle these problems, we\npropose to simplify and empower recommenders through Multi-modal Knowledge\nDistillation (PromptMM) with the prompt-tuning that enables adaptive quality\ndistillation. Specifically, PromptMM conducts model compression through\ndistilling u-i edge relationship and multi-modal node content from cumbersome\nteachers to relieve students from the additional feature reduction parameters.\nTo bridge the semantic gap between multi-modal context and collaborative\nsignals for empowering the overfitting teacher, soft prompt-tuning is\nintroduced to perform student task-adaptive. Additionally, to adjust the impact\nof inaccuracies in multimedia data, a disentangled multi-modal list-wise\ndistillation is developed with modality-aware re-weighting mechanism.\nExperiments on real-world data demonstrate PromptMM's superiority over existing\ntechniques. Ablation tests confirm the effectiveness of key components.\nAdditional tests show the efficiency and effectiveness.", + "authors": "Wei Wei, Jiabin Tang, Yangqin Jiang, Lianghao Xia, Chao Huang", + "published": "2024-02-27", + "updated": "2024-03-10", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "label": "Original Paper", + "paper_cat": "Distillation", + "gt": "Multi-Modal Recommender Systems. Researchers have explored using multi-modal content [30] to enhance recommenders. To improve vanilla CF, multi-modal attention mechanisms (e.g., ACF [3]) have been introduced to model multi-level item relationships. After that, GNN-enhanced multi-modal recommenders capture highorder connectivity by incorporating modality signals. Inspired by the success of self-supervised learning [27, 39, 56], recent multimedia recommenders, such as MMGCL [66], SLMRec [44], use data augmentation strategies to enhance the representation learning. Recent methods are trying to improve multi-modal method using LLMs (e.g., TALLRec [1], LLMRec [57]). Despite their effectiveness, most of them are built upon cumbersome multi-modal feature encoding which limits their scalability in practice. Knowledge Distillation for Recommendation. KD in recommendation has sparked various research directions [21, 48, 50, 63]. HetComp[21] transfers the ensemble knowledge of heterogeneous teachers to a lightweight student. NOSMOG [45] distills knowledge from GNNs to MLPs. 
TinyLLM [47] distills knowledge from multiple large language models (LLMs). GNP [49] transfers knowledge from KG [28, 29] to LLMs. We introduce KD in multi-modal scenario for two purposes: i) Solve the problem of large coding parameter models caused by high-dimensional output features in multi-modal scenes; ii) reduce the impact of noise in modal content and emphasize the knowledge that is relevant for downstream tasks. In-context Learning and Prompt-tuning. Prompt learning has become a emerging research direction in the context of large pretrained models [2, 32]. For in-context learning, for example, RLMRec [38] enhance the representation of recommenders using LLMs; some work [12, 42, 43] utilizes contextual learning for structured relationship modeling. GraphPrompt [33] defines the paradigm of prompts on graphs. To transfer knowledge graph semantics into task data, KGTransformer [69] regards task data as a triple prompt for tuning. Additionally, prompt-based learning has also been introduced to to enhance model fairness [62], sequence learning [64]. Motivated by these research lines, we propose a novel multi-modal prompt learning approach that can adaptively guide knowledge distillation for simple yet effective multimedia recommendation.", + "pre_questions": [], + "main_content": "INTRODUCTION Multimedia platforms have grown in importance as tools for sharing and shopping online. Utilizing modalities (e.g., soundtracks of videos, pictures of products) to identify customized user preferences for ranking is the target of multi-modal recommenders [6, 35]. Early works started with introducing visual content[9, 15] and later works employed attention mechanism [31, 65]. GNNs [4] (e.g., MMGCN[61]) then became mainstream due to significant improvement by modeling high-order[53] relations. Some multi-modal works focus on alleviating sparsity by constructing homogeneous graphs(e.g., u-u[51], i-i[58, 67]) or introducing self-supervised [55] tasks through joint training(e.g., MMSSL [56], MICRO [68]). Despite the progress made in previous works, some key issues still remain explored for multimedia recommendation scenarios: \u2022 I1: Overfitting & Sparsity. Current multimedia recommenders excel by employing advanced encoders to handle high-dimensional features from pre-trained extractors (CLIP-ViT[36], BERT[5]). The auxiliary modalities alleviate data sparsity, but inevitably lead to increased consumption [52]. For example, regarding feature extractors of Electronics (Sec. 4.1.1) dataset, the output dimension of SBERT[37] and CNNs[20] are 768 and 4,096, respectively. They are much larger than embedding dimensions of current methods[56, 57], i.e., \ud835\udc51\ud835\udc5a\u226b\ud835\udc51. Retraining pre-trained models can change output dimensions, but will significantly impact performance due to different latent representations and hyperparameters. Besides, training pre-trained models demands significant computational resources and can take days to weeks on multiple arXiv:2402.17188v3 [cs.IR] 10 Mar 2024 WWW \u201924, May 13\u201317, 2024, Singapore, Singapore Wei Wei, Jiabin Tang, Yangqin Jiang, Lianghao Xia, and Chao Huang GPUs. Therefore, current multi-modal works[6, 56] carry additional high-dimensional feature reduction layers. These additional parameters aggravated overfitting that already exists due to data sparsity, further increasing the difficulty of convergence [54]. \u2022 I2: Noise & Semantic Gap. 
As side information, multimedia content has inherent inaccuracies and redundancies when modeling user preference with collaborative relations. For example, a user may be attracted by a textual title, but the image content is unrelated; and the music in micro-videos might be for trends, not user preferences. Blindly relying on noisy modality data may mislead the u-i relation modeling. Besides, the multi-modal context and u-i collaborative relations are originally derived from two different distributions with a large semantic gap [56], which poses challenges in mining modality-aware user preference and even disrupts the existing sparse supervisory signals. To cope with the above issues, we propose the following solutions: I1: Developing a multi-modal KD (PromptMM) recommendation framework to free the inference recommender from the additional feature reduction parameters, by using KD for model compression. This paradigm prevents overfitting while maintaining accuracy, which also boosts the critical online inference phase with fewer resources. Specifically, PromptMM conducts model compression through distilling edge relationship (ranking KD, denoised modalityaware ranking KD), and node content (modality-aware embedding KD). The three types of KD respectively convey i) Pure knowledge through a modified KL divergence[24] based on BPR loss[40]; ii) Fine-grained modality-aware list-wise ranking knowledge; iii) Modality-aware embedding KD through SCE loss [18], an enhanced version of MSE. I2: Developing two modules to tackle issues \u2019Noise & Semantic Gap\u2019 based on the KD framework: i) Semantic bridging soft prompt-tuning is meant to reduce the impact of redundancy by prompting teacher to deliver student-task adaptive knowledge. In other words, prompt-tuning module can bridge the semantic gap in two aspects: multi-modal content & collaborative signals, and student & frozen teacher. Technically, the module is incorporated into the teacher\u2019s reduction layer and constructs prompts based on multi-modal features. For optimization, the soft prompts train with both teacher and student, to adaptively guide students during the distillation when teacher is frozen. ii) Modality-aware disentangled denoising list-wise ranking KD is to adjust the influence of inaccuracies in modality-aware user preference. The decoupled KD process first separates the results of list-wise ranking based on modality-specific presentation. A re-weighting mechanism is then applied to adjust the influence of unreliable portions. To summarize, the main contributions of this work are as follows: \u2022 In this work, we propose a novel multi-modal KD framework PromptMM for multimedia recommendation, which can produce a lightweight yet effective student inference recommender with minimal online inference time and resource consumption. \u2022 We integrate prompt-tuning with multi-modal KD to bridge the semantic gap between modality content and collaborative signals. Additionally, by disentangling the modality-aware ranking logits, the impact of noise in multimedia data is adjusted. \u2022 We conduct experiments to evaluate our model performance on real-world datasets. The results demonstrate our PromptMM outperforms state-of-the-art baselines. The ablation studies and further analysis show the effectiveness of sub-modules. 2 PRELIMINARIES Interaction Graph with Multi-Modal Context. Motivated by the effectiveness of graph-based recommenders, we represent useritem relationships as a bipartite graph G = ({U, I}, E, X). 
Here, U, I are users\u2019 set and items\u2019 set, respectively. The edges E in G can be represented by adjacency matrix A \u2208R|U|\u00d7|I| with A[\ud835\udc62,\ud835\udc56] = 1 if the implicit feedback exists, otherwise A[\ud835\udc62,\ud835\udc56] = 0. Furthermore, each item \ud835\udc56\u2208I is associated with multi-modal features X\ud835\udc56= {x1 \ud835\udc56, ..., x\ud835\udc5a \ud835\udc56, ..., x|M| \ud835\udc56 }, where |M| is the number of modalities, indexed by \ud835\udc5a\u2208M. The feature x\ud835\udc5a \ud835\udc56is a high-dimensional vector in R\ud835\udc51\ud835\udc5athat captures the characteristics of modality \ud835\udc5a. Notably, the dimensions \ud835\udc51\ud835\udc5aof multimodal features are often much larger than those \ud835\udc51of recommender representations, i.e., \ud835\udc51\ud835\udc5a\u226b\ud835\udc51. Task Formulation. The goal of multi-modal recommender systems is to learn a function that predicts the likelihood of a user adopting an item, given an interaction graph G with multi-modal context X. The output of the predictive function is the learned preference score of a target user \ud835\udc62over a non-interacted item \ud835\udc56. 3 METHODOLOGY PromptMM conducts model compression to build a lightweight yet effective multi-modal recommender for resource-friendly online collaborative filtering. The overall model flow is shown in Fig. 1. Key components will be elaborated in following subsections. 3.1 Modality-aware Task-adaptive Modeling 3.1.1 Teacher-Student in CF. Knowledge distillation aims to compress a complex large model into a lightweight and effective small model. Inspired by this, our developed PromptMM is to transfer modality-aware collaborative signals from cumbersome teacher to lightweight student. For optimization, we employ offline distillation [11] which is a two-stage process, for flexibility concerns. In the first stage, only the teacher is trained, and in the second stage, the teacher remains fixed while only the student is trained. Teacher T follows pattern of current graph-based multi-modal encoders [56, 67], which encodes id-corresponding embeddings ET \ud835\udc62, ET \ud835\udc56and modality-specific features F\ud835\udc5a \ud835\udc62, F\ud835\udc5a \ud835\udc56through GNNs. The two types of encoded representations will be further distilled to student by our modality KD in Sec. 3.2.2, Sec. 3.2.3 and collaborative KD in Sec. 3.2.1. Teacher T encoding process can be as follows: {ET \ud835\udc62, ET \ud835\udc56}, {F1 \ud835\udc62, ..., F\ud835\udc5a \ud835\udc62, ..., F1 \ud835\udc56, ..., F\ud835\udc5a \ud835\udc56...} = T (A, X) (1) The two types of outputs respectively convey reliable collaborative signals and modality-aware user preferences to student. F\ud835\udc5a \ud835\udc62\u2208R|U|\u00d7\ud835\udc51, F\ud835\udc5a \ud835\udc56 \u2208R|I|\u00d7\ud835\udc51are compressed (i.e., \ud835\udc51\ud835\udc5a\u2192\ud835\udc51) from high-dimensional X \u2208R|I|\u00d7\ud835\udc51\ud835\udc5afrom extractors (e.g., BERT [22]). Student S utilizes lightweight LightGCN [16] to capture user-item collaborative relationship. The embedding process is conducted without computationally intensive encoding of multi-modal features. The encoding of student S can be summarized as: ES \ud835\udc62, ES \ud835\udc56= S(A) (2) E\ud835\udc46 \ud835\udc62, E\ud835\udc46 \ud835\udc56are the final user and item presentation used for online recommendation inference and for receiving teacher knowledge. 
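To make the two-tier setup above concrete, the following is a minimal PyTorch-style sketch of the lightweight student S(A) in Eq. (2): LightGCN-style propagation over the normalized user-item adjacency, with no multi-modal feature-reduction parameters. The class name, tensor shapes, and layer-averaging readout are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class LightGCNStudent(nn.Module):
    """Assumed sketch of the student S(A): id embeddings + parameter-free propagation."""
    def __init__(self, num_users, num_items, dim=64, num_layers=2):
        super().__init__()
        self.num_users, self.num_items = num_users, num_items
        self.user_emb = nn.Embedding(num_users, dim)
        self.item_emb = nn.Embedding(num_items, dim)
        nn.init.xavier_uniform_(self.user_emb.weight)
        nn.init.xavier_uniform_(self.item_emb.weight)
        self.num_layers = num_layers

    def forward(self, norm_adj):
        # norm_adj: sparse (|U|+|I|) x (|U|+|I|) symmetrically normalized adjacency
        e = torch.cat([self.user_emb.weight, self.item_emb.weight], dim=0)
        layer_outputs = [e]
        for _ in range(self.num_layers):
            e = torch.sparse.mm(norm_adj, e)          # parameter-free message passing
            layer_outputs.append(e)
        final = torch.stack(layer_outputs, dim=0).mean(dim=0)  # average over layers
        return final[: self.num_users], final[self.num_users:]  # E^S_u, E^S_i
```

Recommendation scores for the student are then plain inner products e_u · e_i over these outputs, which is also where the distilled logits described next are computed.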
PromptMM: Multi-Modal Knowledge Distillation for Recommendation with Prompt-Tuning WWW \u201924, May 13\u201317, 2024, Singapore, Singapore Multi-modal High Dimensional Feature Semantic Gap Feature Extraction Multimedia Content Visual Textual Acoustic U-I Sparse Interactions (i) Collaborative KD Task-Irrelevant Prompt Prompt Module Cumbersome Teacher Prompt-enhanced Feature Reduction Lightweight Student = ) ( = ( ) Collaborative Knowledge Multimedia Knowledge (iii) Modality-aware Embedding KD ) ( SCE . . . List: With Redundancy&Noise Model Training Process Stage 1: Offline Teacher Training Stage 2: Rec. Joint KD Training Inference Model (ii) Modality-aware List-wise Knowledge Distillation Soft Prompt-Tuning to Bridge Gap :Disentangle Re-weight: Figure 1: PromptMM is to learn a lightweight recommender with minimal online consumption, including three types of KD: i) ranking KD; ii) denoised modality-aware ranking KD; iii) modality-aware embedding KD. Besides, prompt-tuning is for adaptive task-relevant KD; disentangling and re-weighting are introduced to adjust the impact of noise in modalities. 3.1.2 Soft Prompt-Tuning as Semantic Bridge. Modality content X inevitably includes ranking task-irrelevant redundancies, which not only confuse the target CF task but also exacerbate overfitting. Besides, the large semantic gap between general-purpose modality modeling and u-i interaction modeling also hinders true user preferences. Drawing inspiration from parameter efficient finetuning (PEFT) [25, 26], we employ soft prompt-tuning[25] as the solution. Specifically, we incorporate prompt p into teacher T (\u00b7)\u2019s multi-modal feature reduction layer R(\u00b7), to facilitate the extraction of collaborative signals from modalities. p is constructed by multi-modal features X and finetuned with student S(\u00b7) to provide the frozen teacher T (\u00b7) a student-task related signals as a hint. The specific process can be divided into three steps: i) Construct the prompt; ii) Incorporate in teacher T (\u00b7); iii) Conduct prompt-tuning. Prompt Construction. To better incorporate semantics to prompt module P(\u00b7), we initialize p using semantic content[41], instead of vanilla initialization (e.g., Xavier [10], uniform). Refer to Prefixtuning[26], P(\u00b7) is a feedforward layer that takes soft prompt p as input which aggregates information from multi-modal item features x\ud835\udc5a. The process of obtaining prompt vectors is as follows: p = P(x\ud835\udc5a|\ud835\udf03P) = P \u00a9 \u00ad \u00ab 1 |M| |M| \u2211\ufe01 \ud835\udc5a\u2208M \ud835\udf02(x\ud835\udc5a)\u00aa \u00ae \u00ac (3) \ud835\udf02(\u00b7) denotes the dimensionality reduction function (e.g., PCA) for multi-modal features. The learned prompt p will be incorporated into the teacher\u2019s inference process. The soft prompt module P(\u00b7) will offer adaptive cues to the teacher once the student S(\u00b7) is trained and the teacher T (\u00b7) is frozen. Prompt-guided Teacher. Having obtained prompt p, we apply it to the feature reduction layer R(\u00b7) in teacher T (\u00b7) for enhancing the overfitting teacher, while simultaneously conducting studenttask adaptive knowledge distillation through the frozen teacher. To be specific, we transform our prompt p into the modality-specific module, i.e., p \u2192p\ud835\udc5a, which allows the prompt to capture modalityspecific information. 
Next, our method leverages a simple yet effective add operator, inspired by [19], to integrate the modality-specific Table 1: Summary of Key Notations. Notations Explanations G, V, E Interaction graph, Node set, Edge set x\ud835\udc5a\u2208R\ud835\udc51\ud835\udc5a, f\ud835\udc5a\u2208R\ud835\udc51 High dimensional/ Densified feature of T ET \ud835\udc62\u2208RU\u00d7\ud835\udc51, E\ud835\udc46 \ud835\udc62\u2208RU\u00d7\ud835\udc51 Final user embedding of teacher/student T(\u00b7), S(\u00b7), P(\u00b7), R(\u00b7) Teacher, Student, Prompt Module, Reduction b, q Binarized/Re-weighted knowledge \ud835\udc4f+/\ud835\udc4f\u2212,\ud835\udc5e\ud835\udc58 Binarized/Re-weighted single score * We use uppercase bold letters (e.g., X) to denote matrices, lowercase bold letters (e.g., x) to denote vectors, and light letters to denote scalar values. prompt p\ud835\udc5ainto the teacher\u2019s multi-modal feature encoding layer. Formally, this prompt integration process can be given as follows: f\ud835\udc5a= R(x\ud835\udc5a, p\ud835\udc5a|\ud835\udf03R) = \ud835\udf1a((x\ud835\udc5a+ \ud835\udf061 \u2217p\ud835\udc5a)W\ud835\udc5a R + b\ud835\udc5a R) (4) R(\u00b7) takes high-dimensional multi-modal features x\ud835\udc5aand modalityspecific prompts p\ud835\udc5aas inputs, and output modality-specific embeddings f\ud835\udc5a\u2208R\ud835\udc51. The modality-specific prompt p\ud835\udc5a\u2208R\ud835\udc51\ud835\udc5ais obtained by reshaping (i.e., \ud835\udc51\u2192\ud835\udc51\ud835\udc5a) from p through p\ud835\udc5a= p \u00b7 p\ud835\udc47x\ud835\udc5a, and adjusted by factor \ud835\udf061. To prevent overfitting caused by numerous parameters high-dimensional features x\ud835\udc5a\u2019s reduction, dropout \ud835\udf1a(\u00b7) is applied here. The filter parameters W\ud835\udc5a R and b\ud835\udc5a R are used to map modality-specific features to their respective embedding space. In this way, the feature reduction R will be strengthened due to: i) bridging the gap between modality content and collaborative signals, extracting modality-aware user preferences; ii) facilitating knowledge distillation process by making modality-aware studentconstrained prompt p participate in teacher\u2019s inference. Soft Prompt-tuning Paradigm. In KD\u2019s soft prompt tuning, we consider the cumbersome teacher as the pre-trained model, and the process is split into two stages. During teacher training, the prompt module P(\u00b7) undergoes gradient descent with teacher T (\u00b7), affecting the teacher\u2019s inference process. During student training, we employ offline knowledge distillation[11], freezing the teacher\u2019s parameters \ud835\udf03T and updating the prompt module P(\u00b7) again according to the student\u2019s recommended loss, which allows the prompt p to provide additional guidance to the feature reduction process and distill task-relevant knowledge from teacher T (\u00b7). WWW \u201924, May 13\u201317, 2024, Singapore, Singapore Wei Wei, Jiabin Tang, Yangqin Jiang, Lianghao Xia, and Chao Huang 3.2 Modality & Ranking Knowledge Distillation To comprehensively obtain the quality collaborative signal and modality-aware user preference from teacher T (\u00b7), we have designed three types of KD paradigms to convey knowledge from different perspectives: i) Ranking KD; ii) Denoised Modality-aware Ranking KD; and iii) Modality-aware Embedding KD. 3.2.1 Pure Ranking KD. As a ranking task, teacher T (\u00b7) ought to convey task-relevant collaborative relations. 
To this end, we propose to utilize prediction logits in ranking objectives such as BPR[40] for KD optimization. Specifically, we distill valid ranking knowledge from the ultimate representation ET \ud835\udc62, TT \ud835\udc56constrained by the classical pair-wise ranking BPR loss. Pair-wise score \ud835\udc66T pair and \ud835\udc66S pair are taken as logits of KD loss for teacher and student, respectively. The classic KL loss logit represents multi-class scores, while \ud835\udc66pair represents a binary classification logit for determining whether \ud835\udc56+ is better than \ud835\udc56\u2212for user \ud835\udc62. Our KD paradigm with the pairwise ranking loss can be formally presented as follows: LPairKD(\u0398S; \u0398P) = \u2212 | Ebpr| \u2211\ufe01 (\ud835\udc62,\ud835\udc56+,\ud835\udc56\u2212) \ud835\udc66T pair(log\ud835\udc66T pair \u2212log\ud835\udc66S pair) (5) \ud835\udc66pair = log(sigmoid(e\ud835\udc62\u00b7 e\ud835\udc56+ \u2212e\ud835\udc62\u00b7 e\ud835\udc56\u2212)) where LPairKD represents the pair-wise ranking KD objective.\ud835\udf03\ud835\udc46;\ud835\udf03\ud835\udc43 means that both student S(\u00b7) and prompt module P(\u00b7) parameters are updated with the loss LPairKD(\u00b7). In each step, PromptMM samples a batch of triplets Ebpr = {(\ud835\udc62,\ud835\udc56+,\ud835\udc56\u2212)|A[\ud835\udc62,\ud835\udc56+] = 1, A[\ud835\udc62,\ud835\udc56\u2212] = 0}, where \ud835\udc62denotes the target user. Here, \ud835\udc56+ and \ud835\udc56\u2212denote the positive item and negative item of BPR loss, respectively. In this way, teacher model T (\u00b7) imparts collaborative expertise to student model S(\u00b7), offering rich implicit knowledge in a different solution space [23] to help the student escape from local optima [8, 13]. 3.2.2 Denoised Modality-aware Ranking Disentangled KD. Previously encoded multi-modal content f\ud835\udc5a \ud835\udc62, f\ud835\udc5a \ud835\udc56 in teacher T (\u00b7) contains noise and can affect the modality-aware user preferences modeling. To conduct accuracy and fine-grained distillation while reducing the impact of task-irrelevant parts, we design a denoised modality-aware KD. Specifically, we calculate the list-wise score using f\ud835\udc5a \ud835\udc62, f\ud835\udc5a \ud835\udc56to perform modality-aware ranking KD. In addition, to further reduce the impact of noise, we reformulate KD loss into a weighted sum of the disentangled parts. Disentangling Modality-aware List-wise Score. For a \ud835\udc3esamples ranking list, the predicted logits can be denoted as ylist = [ \ud835\udc66+ 1 ;\ud835\udc66\u2212 2 ,\ud835\udc66\u2212 3 , ...,\ud835\udc66\u2212 \ud835\udc58, ...,\ud835\udc66\u2212 \ud835\udc3e], where \ud835\udc66+ and \ud835\udc66\u2212are the scores of the observed edge A+ and unobserved edge A\u2212, respectively. PromptMM take each score in ylist as logit in KL divergence for distilling informative tacit knowledge[11]. The modality-aware list-wise logits then can be reformulated into two parts \ud835\udc3e\ud835\udc3f(bT \u2225bS) and \ud835\udc3e\ud835\udc3f(qT \u2225qS). bT deliver overall user preference to bS; qT deliver fine-grained list-wise ranking prefer to qS. 
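Before detailing the disentangled terms, a hedged sketch of the pair-wise ranking KD of Eq. (5) may help. Because the printed equation nests one logarithm inside another, the snippet below adopts one plausible reading that matches the teacher's and student's pairwise preference probabilities sigmoid(s_{u,i+} − s_{u,i−}) with a binary KL; the function name and tensor shapes are assumptions.

```python
import torch

def pairwise_ranking_kd(eu_t, ei_pos_t, ei_neg_t, eu_s, ei_pos_s, ei_neg_s, eps=1e-8):
    """Binary KL between teacher and student pairwise-preference probabilities (one reading of Eq. 5)."""
    # probability that i+ is ranked above i- for user u, for teacher and student
    p_t = torch.sigmoid((eu_t * ei_pos_t).sum(-1) - (eu_t * ei_neg_t).sum(-1)).detach()
    p_s = torch.sigmoid((eu_s * ei_pos_s).sum(-1) - (eu_s * ei_neg_s).sum(-1))
    kl = p_t * (torch.log(p_t + eps) - torch.log(p_s + eps)) \
       + (1 - p_t) * (torch.log(1 - p_t + eps) - torch.log(1 - p_s + eps))
    return kl.sum()   # summed over the sampled (u, i+, i-) triplets, as in Eq. (5)
```

Under this reading the loss vanishes exactly when the student reproduces the teacher's pairwise ordering confidence, which is the behaviour the pair-wise KD term is after.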
More specifically, b = [\ud835\udc4f+,\ud835\udc4f\u2212] \u2208R1\u00d72 represents the binary logits of observed set {\ud835\udc66+ 1 } and unobserved set { \ud835\udc66\u2212 2 ,\ud835\udc66\u2212 3 , ...,\ud835\udc66\u2212 \ud835\udc58, ...,\ud835\udc66\u2212 \ud835\udc3e}, that softened by softmax: b : \ud835\udc4f+ = \ud835\udc52\ud835\udc65\ud835\udc5d(\ud835\udc66+ 1 ) \u00cd\ud835\udc3e \ud835\udc58=1 \ud835\udc52\ud835\udc65\ud835\udc5d(\ud835\udc66\ud835\udc58) ; \ud835\udc4f\u2212= \u00cd\ud835\udc3e \ud835\udc58=2 \ud835\udc52\ud835\udc65\ud835\udc5d(\ud835\udc66\u2212 \ud835\udc58) \u00cd\ud835\udc3e \ud835\udc58=1 \ud835\udc52\ud835\udc65\ud835\udc5d(\ud835\udc66\ud835\udc58) (6) Disentangle Re-weight 0.57 0.24 0.13 0.10 0.12 0.08 0.08 0.16 0.43 0.31 0.24 0.13 0.10 0.12 0.15 0.14 0.10 0.14 T logits 0.4651 0.5349 0.4061 0.5939 0.1989 0.1930 0.1969 0.1892 0.2047 0.2006 0.1852 0.2006 0.2088 0.5349 0.4651 0.2220 0.1989 0.1930 0.1969 0.1892 0.2220 0.43 0.28 0.20 0.28 0.32 0.30 S logits Figure 2: Calculation Example of Disentangled KD Note that, \u02c6 \ud835\udc66is the sum of unobserved sets. Meanwhile, we declare p = [ \u02c6 \ud835\udc662, \u02c6 \ud835\udc663, ..., \u02c6 \ud835\udc66\ud835\udc58, ..., \u02c6 \ud835\udc66\ud835\udc3e] \u2208R1\u00d7\ud835\udc3eto independently model logits among unobserved set (i.e., without considering \ud835\udc66+). Each element is calculated by: q : \ud835\udc5e\ud835\udc58= \ud835\udc52\ud835\udc65\ud835\udc5d(\ud835\udc66\u2212 \ud835\udc58) \u00cd\ud835\udc3e \ud835\udc58=2 \ud835\udc52\ud835\udc65\ud835\udc5d(\ud835\udc66\ud835\udc58) (7) Re-weighting Modality-aware List-wise Score. Afterward, lower scores are assigned to those uncertain user-item relationships to down-weight their influence in the KD process. This allows PromptMM to focus on the most reliable signals from the teacher model for denoised knowledge transfer. The vanilla KL-Divergence can be disentangled and re-weighted through the following derivation 1 : \ud835\udc3e\ud835\udc3f(yT list\u2225yS list) = \ud835\udc4f+Tlog(\ud835\udc4f+T \ud835\udc4f+S ) + \ud835\udc3e \u2211\ufe01 \ud835\udc58=2,\ud835\udc56\u22601 \ud835\udc66\u2212 \ud835\udc58 Tlog( \ud835\udc66\u2212 \ud835\udc58 T \ud835\udc66\u2212 \ud835\udc58 \ud835\udc46) (8) According to Eq. 6 and Eq. 7, we can derive \ud835\udc5e\ud835\udc58= \ud835\udc66\ud835\udc58/\ud835\udc4f\u2212. Thus, Eq. 
8 can be rewritten as follows (detailed derivations in Appendix ): = \ud835\udc4f+Tlog(\ud835\udc4f+T \ud835\udc4f+S ) + \ud835\udc4f\u2212T \ud835\udc3e \u2211\ufe01 \ud835\udc58=2,\ud835\udc56\u22601 \ud835\udc5e\ud835\udc58T (log(\ud835\udc5e\ud835\udc58T \ud835\udc5e\ud835\udc58S ) + log(\ud835\udc4f\u2212T \ud835\udc4f\u2212S )) (9) = \ud835\udc4f+Tlog(\ud835\udc4f+T \ud835\udc4f+S ) + \ud835\udc4f\u2212Tlog(\ud835\udc4f\u2212T \ud835\udc4f\u2212S ) | {z } \ud835\udc3e\ud835\udc3f(bT \u2225bS) + (1 \u2212\ud835\udc4f+T) \ud835\udc3e \u2211\ufe01 \ud835\udc58=2,\ud835\udc56\u22601 \ud835\udc5e\ud835\udc58T \ud835\udc58log(\ud835\udc5e\ud835\udc58T \ud835\udc5e\ud835\udc58S ) | {z } \ud835\udc3e\ud835\udc3f(qT \u2225qS) Then, we can reformulate our disentangled knowledge distillation paradigm with the awareness of multi-modalities as follows: LListKD = \u2212 |M| \u2211\ufe01 \ud835\udc5a\u2208M \ud835\udc3e\ud835\udc3f(bT \u2225bS) + (\ud835\udc4f+T \u22121)\ud835\udc3e\ud835\udc3f(qT \u2225qS) (10) List-wise ranking KD loss LListKD is reformulated as a weighted sum of two terms for adjustablely transferring reliable knowledge and enhancing the accuracy of modality-relevant user preference. 3.2.3 Modality-aware Embedding Distillation. In addition to the logit-based KD, we propose to enhance our PromptMM framework with embedding-level distillation. To achieve embedding alignment in our PromptMM, we employ the Scale Cosine Error (SCE) [18] loss function with auto-encoder [46] for robust training instead of Mean Square Error (MSE). This is because MSE is sensitive and unstable, which can lead to training collapse [18] because of varied feature vector norms and the curse of dimensionality [7]. The utilization of the SCE-based loss LEmbKD for embedding-level 1We omit the temperature \ud835\udf0fof softmax [17] without loss of generality PromptMM: Multi-Modal Knowledge Distillation for Recommendation with Prompt-Tuning WWW \u201924, May 13\u201317, 2024, Singapore, Singapore Table 2: Model compression analysis. Time complexity comparison among SOTA GNN-enhanced multi-modal recommenders. i) the R(\u00b7): time complexity of multi-modal feature reduction layer, by mapping high-dimensional features into dense embeddings, i.e., \ud835\udc51\ud835\udc5a\u2192\ud835\udc51. ii) the \ud835\udc3a\ud835\udc41\ud835\udc41\ud835\udc60: time complexity of various GNN architectures in different models for message propagation. 
Component MMGCN [61] GRCN [60] LATTICE [67] SLMRec [44] PromptMM R(\u00b7) O( \u00cd \ud835\udc5a\u2208M |I|(\ud835\udc51\ud835\udc5a+ \ud835\udc51)\ud835\udc51\u210e) O( \u00cd \ud835\udc5a\u2208M |I|\ud835\udc51\ud835\udc5a\ud835\udc51) O( \u00cd \ud835\udc5a\u2208M |I|\ud835\udc51\ud835\udc5a\ud835\udc51) O( \u00cd \ud835\udc5a\u2208M |I|\ud835\udc51\ud835\udc5a\ud835\udc51) 0 GNNs O( \u00cd \ud835\udc5a\u2208M \ud835\udc3f|E|\ud835\udc513) O( \u00cd \ud835\udc5a\u2208M (|I|2\ud835\udc51+ \ud835\udc3f| E|\ud835\udc51)) O( \u00cd \ud835\udc5a\u2208M |I|2\ud835\udc51\ud835\udc5a+ \ud835\udc58|I|\ud835\udc59\ud835\udc5c\ud835\udc54(|I|) + \ud835\udc3f| E|\ud835\udc51) \ud835\udc42( \u00cd \ud835\udc5a\u2208M \ud835\udc3f| E|\ud835\udc51) \ud835\udc42(\ud835\udc3f|E|\ud835\udc51) knowledge distillation can take the following forms: LEmbKD = |M| \u2211\ufe01 \ud835\udc5a\u2208M 1 |I| \u2211\ufe01 \ud835\udc56\u2208I (1 \u2212 e\ud835\udc46 \ud835\udc56\u00b7 f\ud835\udc5a \ud835\udc56 \u2225e\ud835\udc46 \ud835\udc56\u2225\u00d7 \u2225f\ud835\udc5a \ud835\udc56\u2225 )\ud835\udefe,\ud835\udefe\u22651 (11) LEmbKD is averaged over all user and item nodes in the interaction graph G. The final representation outputted by the student model S(\u00b7) is denoted as eS \ud835\udc56\u2208ES \ud835\udc56, while the encoded multi-modal feature from the teacher function T (\u00b7) are denoted as f\ud835\udc5a \ud835\udc56 \u2208F\ud835\udc5a \ud835\udc56. The scaling factor \ud835\udefeis an adjustable hyper-parameter. 3.2.4 Model Joint Training of PromptMM. We train our recommender using a multi-task learning scheme to jointly optimize PromptMM with the following tasks: i) the main user-item interaction prediction task, represented by LBPR; ii) the pair-wise robust ranking KD L\ud835\udc43\ud835\udc4e\ud835\udc56\ud835\udc5f\ud835\udc3e\ud835\udc37; iii) the modality-aware list-wise disentangled KD L\ud835\udc3f\ud835\udc56\ud835\udc60\ud835\udc61\ud835\udc3e\ud835\udc37; iv) modality-aware embedding KD L\ud835\udc38\ud835\udc5a\ud835\udc4f\ud835\udc3e\ud835\udc37. The overall loss function L is given as follows: L = LBPR + \ud835\udf062 \u00b7 LPairKD + \ud835\udf063 \u00b7 LListKD + \ud835\udf064 \u00b7 LEmbKD (12) LBPR = |Ebpr| \u2211\ufe01 \ud835\udc62,\ud835\udc56+,\ud835\udc56\u2212 \u2212log (sigmoid(e\ud835\udc62\u00b7 e\ud835\udc56+ \u2212e\ud835\udc62\u00b7 e\ud835\udc56\u2212)) + \u2225\u0398\u22252 (13) where \ud835\udf062, \ud835\udf063, and \ud835\udf064 are parameters for loss term weighting. The last term \u2225\u0398\u22252 is the weight-decay regularization against over-fitting. 3.3 Model Complexity Analysis The time complexity of the current state-of-the-art graph-based multi-modal recommender mainly consists of two parts: i) Modality feature reduction layer R(\u00b7): The multimodal recommendation models inevitably need to incorporate feature reduction layers, as shown in Tab. 2. Most models employ a linear layer \ud835\udc42(\u00cd \ud835\udc5a\u2208M \u00d7|I| \u00d7\ud835\udc51\ud835\udc5a\u00d7 \ud835\udc51) or MLP transformation O(\u00cd \ud835\udc5a\u2208M \u00d7|I|\u00d7(\ud835\udc51\ud835\udc5a+\ud835\udc51)\u00d7\ud835\udc51\u210e). |I| is the number of items. However, our inference model avoids the densification layer, due to the developed multi-modal knowledge distillation recommendation framework. 
ii) GNNs operations: Our inference model utilizes the LightGCN architecture solely in the graph convolutional component, resulting in the lowest consumption level \ud835\udc42(\ud835\udc3f\u00d7|E|\u00d7\ud835\udc51) among current graph-based recommendation models, where \ud835\udc3fis the number of GNNs layers and |E| denotes the number of observed interactions. Other models (e.g., LATTICE, SLMRec, MICRO) also use lightweight architectures. However, GRCN and LATTICE require reconstruction operations that consume |I|2 \u00d7 \ud835\udc51 and |I|2 \u00d7 \ud835\udc51\ud835\udc5a, respectively. The difference between them is that the weights of the reconstructed edges are based on densification \ud835\udc51and the original high dimension \ud835\udc51\ud835\udc5a, respectively. LATTICE also takes \ud835\udc42(\ud835\udc58\u00d7 |I| \u00d7 \ud835\udc59\ud835\udc5c\ud835\udc54(|I|)) to retrieve top-\ud835\udc58most similar items for each item. We summarize the computational complexity of the graph-based multimodal methods in Tab. 2 Table 3: Statistics of experimented datasets with multi-modal item Visual (V), Acoustic (A), Textual (T) contents. Dataset Netflix Tiktok Electronics Modality V T V A T V T Feat. Dim. 512 768 128 128 768 4096 1024 User 43,739 14,343 41,691 Item 17,239 8,690 21,479 Interaction 609,341 276,637 359,165 Sparsity 99.919% 99.778% 99.960% * Tiktok: https://www.biendata.xyz/competition/icmechallenge2019/ * Electronics: http://jmcauley.ucsd.edu/data/amazon/links.html 4 EVALUATION 4.1 Experimental Settings 4.1.1 Dataset. We conduct experiments on three multi-model recommendation datasets and summarize their statistics in Tab. 3. \u2022 Netflix: This dataset contains user-item interactions from the Netflix platform. To construct the multi-model content, we crawled the movie posters based on the provided movie titles. The CLIPViT model [36] was used as the image feature extractor and BERT [22] is pre-trained for text feature encoding. We have released our pre-processed Netflix dataset, which includes the posters, to facilitate further research. \u2022 Tiktok: This micro-video dataset [56] contains interactions with three types of modality features: visual, acoustic, and textual. The 128-dimensional visual and acoustic features were extracted from micro-video desensitization, while the textual features were extracted from the captions using the Sentence-BERT model [37]. \u2022 Electronics: This dataset is based on the Electronics review data from Amazon. The visual modality includes 4,096-dimensional features that were extracted using pre-trained convolutional neural networks [14]. For the textual modality, we utilized SentenceBERT [37] to combine various item attributes, such as title, descriptions, categories, and brands, into a compact 1024-d vector. 4.1.2 Evaluation Protocols. We use two widely adopted metrics for top-K item recommendation task: Recall@K (R@K) and Normalized Discounted Cumulative Gain (N@K). We set K to 20 and 50 to evaluate the performance of our approach and several state-of-the-art baselines. We adopted the all-ranking strategy for evaluation, following the settings used in previous works [58, 60]. To conduct significance analysis, \ud835\udc5d-values were calculated using the results of our proposed approach and the best-performing baseline. 4.1.3 Hyperparameter Settings. We implemented our model framework using PyTorch and initialized model parameters using the Xavier initializer. 
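For reference alongside these settings, the sketch below gathers the remaining KD objectives of Sec. 3.2 in the same PyTorch style: the disentangled list-wise KD (Eqs. 6-10), the SCE embedding KD (Eq. 11), and the joint objective (Eq. 12). It assumes the observed item sits at index 0 of each K-item score list and follows the decomposition of Eq. (9) for the sign and re-weighting conventions, since Eq. (10) as printed is ambiguous on both; it is an illustrative reading, not the official code.

```python
import torch
import torch.nn.functional as F

def disentangled_listwise_kd(scores_t, scores_s, eps=1e-8):
    # scores_*: [B, K] modality-aware ranking scores; column 0 = observed item (assumption)
    p_t = F.softmax(scores_t, dim=-1).detach()
    p_s = F.softmax(scores_s, dim=-1)
    b_t = torch.stack([p_t[:, 0], p_t[:, 1:].sum(-1)], dim=-1)   # binary logits b (Eq. 6)
    b_s = torch.stack([p_s[:, 0], p_s[:, 1:].sum(-1)], dim=-1)
    q_t = F.softmax(scores_t[:, 1:], dim=-1).detach()            # negatives-only logits q (Eq. 7)
    q_s = F.softmax(scores_s[:, 1:], dim=-1)
    kl_b = (b_t * (torch.log(b_t + eps) - torch.log(b_s + eps))).sum(-1)
    kl_q = (q_t * (torch.log(q_t + eps) - torch.log(q_s + eps))).sum(-1)
    # decomposition of Eq. (9): the unobserved part is re-weighted by (1 - b+_T)
    return (kl_b + (1.0 - b_t[:, 0]) * kl_q).mean()

def sce_embedding_kd(e_s, f_m, gamma=2.0):
    # Eq. (11): scaled cosine error between student item embeddings and the
    # teacher's modality-specific features (both [N, d]); gamma >= 1
    return ((1.0 - F.cosine_similarity(e_s, f_m.detach(), dim=-1)) ** gamma).mean()

def joint_loss(l_bpr, l_pair, l_list, l_emb, lam2=0.1, lam3=0.1, lam4=0.1):
    # Eq. (12): multi-task objective weighting the three KD terms (weights are placeholders)
    return l_bpr + lam2 * l_pair + lam3 * l_list + lam4 * l_emb
```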
We employ the AdamW optimizer [34] for both teacher T (\u00b7) and student S(\u00b7). The optimizer of the student will simultaneously optimize the parameters of both the student S(\u00b7) and the prompt module P(\u00b7), which is similar to that of the teacher\u2019s optimization. We search for the learning rates of T (\u00b7) WWW \u201924, May 13\u201317, 2024, Singapore, Singapore Wei Wei, Jiabin Tang, Yangqin Jiang, Lianghao Xia, and Chao Huang and S(\u00b7) within the ranges of [3.5\ud835\udc52\u22124, 9.8\ud835\udc52\u22123] and [2.5\ud835\udc52\u22124, 8.5\ud835\udc52\u22124] respectively. The decay of the \ud835\udc3f2 regularization term is tuned from {2.5\ud835\udc52\u22123, 7.4\ud835\udc52\u22123, 2.1\ud835\udc52\u22122} for three datasets. All baselines are evaluated based on their source code and original papers, and the corresponding parameter tuning is conducted under a unified process. 4.1.4 Baselines. To comprehensively evaluate the performance of our proposed approach, we compared it against several state-ofthe-art baselines from different research lines. i) Collaborative Filtering Models \u2022 BPR-MF [40]: It presents a generic optimization criterion, BPROpt, for personalized ranking that outperforms standard learning techniques for matrix factorization and adaptive kNN. \u2022 NGCF [53]: It introduces GNNs to the CF framework to model high-order information. The newly proposed embedding propagation layer allows the embeddings of users and items to interact with long-range information to harvest the collaborative signal. \u2022 LightGCN [16]: It simplifies the graph convolution to remove the transformation and activation modules for model simplification. ii) Multi-Modal Recommender Systems \u2022 VBPR [15]: It proposes a Matrix Factorization approach to incorporate visual signals into a prediction of user\u2019s preference for personalized ranking with implicit feedback. \u2022 MMGCN [61]: It is built upon the graph-based information propagation framework with a multi-modal GNN, so as to guide representation learning of user preference in each modality. \u2022 GRCN [60]: It designs adaptive refinement module to identify and prune potential false positive edges in the interaction structure, by considering multi-modal item characteristics. \u2022 LATTICE [67]: This method discovers latent relationships between modalities using modality-aware structure learning layers to supplement collaborative signals for recommendation. \u2022 CLCRec [59]: It studies the cold-start recommendation task and maximizes the mutual dependencies between item content and collaborative signals using contrastive learning. \u2022 SLMRec [44]: This work captures multi-modal patterns in data by generating multiple views of individual items and using contrastive learning to distill additional supervised signals. 4.2 Performance Comparison Tab. 4 presents the results of all methods on three datasets, with the results of PromptMM and the best baseline highlighted in bold and underlined, respectively. Based on the results and our analysis, we make the following key observations and conclusions: \u2022 The proposed PromptMM consistently outperforms both general collaborative filtering (CF) models and state-of-the-art multimodal recommendation methods on all three datasets, demonstrating its effectiveness in multimedia recommendation. 
The improved outcomes are attributed to our designed multi-modal knowledge distillation enhanced by prompt-tuning, which not only bridges the semantic gap during the multi-modal knowledge transfer but also eliminates the impact of noise and redundancy of modality data. Furthermore, our results support the idea that multi-modal recommender systems perform better than general CF models, due to the incorporation of multi-modal context for assisting user preference learning under sparse data. (a) raw feature (b) w-prompt (c) w/o-prompt Figure 3: t-SNE Visualization on Tiktok for raw high dimensional multi-modal features X\ud835\udc5a, modality-specific representations F\ud835\udc5aof PromptMM and F\ud835\udc5aof variant w/o-Prompt. \u2022 Our PromptMM achieves competitive results with a lightweight architecture and tailored transferred knowledge, suggesting that there may be noise in the multi-modal data. This finding confirms our motivation that directly incorporating multi-modal information into the user representations may introduce noise, which can misguide the encoding of modality-aware user preferences. To address this issue, our proposed approach disentangles the soft labels of collaborative relations during the knowledge distillation, which effectively alleviates the noise of multi-modal content by transferring more informative signals into the student model. \u2022 Multi-modal recommendation methods often exhibit significant performance fluctuations on different datasets, due to overfitting. These models are highly influenced by the quality of model features as well as the number of interactions. For instance, LATTICE performs worse on Netflix with many interactions, which we attribute to the introduction of noise by the homogeneous cocurrent graph. In contrast, GRCN achieves superior performance on Netflix by identifying and removing false-positive edges in user-item graphs. CLCRec do not use classical negative sampling of BPR and perform better on datasets with more implicit feedback than on Tiktok and Electronics. We speculate that this is because negative samples do not necessarily indicate users\u2019 dislikes. It may simply be due to the item not being presented. 4.3 Ablation and Effectiveness Analyses To justify the effectiveness of the proposed key components, we designed four variants of our PromptMM and compared their performance against the original approach. The results in terms of Recall@20 and NDCG@20 are shown in Table 5. Further convergence analysis is provided in Supplementary. In order to gain deeper insights into the efficacy of the key components, we also conducted a further visualization analysis. Variant details are presented below: \u2022 w/o-Prompt: This variant disables the prompt-tuning module to evaluate its impact on bridging the semantic gap during the teacher-student knowledge distillation process. \u2022 w/o-PairKD: This variant examines the effect of ranking-based distillation for collaborative knowledge by removing the pairwise knowledge distillation loss term LPairKD from the joint loss. \u2022 w/o-ListKD: The modality-aware disentangled knowledge distillation is not included to re-weight the soft-labels for alignment with the fine-grained knowledge decoupling. \u2022 w/o-disentangle: This variant preserves the list-wise distillation in Sec. 3.2.2, while removing the disentangled part. 
Aiming to validate the utility of extracting more informative signals from modality features f\ud835\udc5awith the list-wise objective, as well as the necessity of decoupling the transferred knowledge. PromptMM: Multi-Modal Knowledge Distillation for Recommendation with Prompt-Tuning WWW \u201924, May 13\u201317, 2024, Singapore, Singapore Table 4: Performance comparison of baselines on different datasets in terms of Recall@20/50, and NDCG@20/50. Baseline Netflix Tiktok Electronics R@20 N@20 R@50 N@50 R@20 N@20 R@50 N@50 R@20 N@20 R@50 N@50 MF-BPR 0.1583 0.0578 0.2396 0.0740 0.0488 0.0177 0.1038 0.0285 0.0211 0.0081 0.0399 0.0117 NGCF 0.1617 0.0612 0.2455 0.0767 0.0604 0.0206 0.1099 0.0296 0.0241 0.0095 0.0417 0.0128 LightGCN 0.1605 0.0609 0.2449 0.0768 0.0612 0.0211 0.1119 0.0301 0.0259 0.0101 0.0428 0.0132 VBPR 0.1661 0.0621 0.2402 0.0729 0.0525 0.0186 0.1061 0.0289 0.0234 0.0095 0.0409 0.0125 MMGCN 0.1685 0.0620 0.2486 0.0772 0.0629 0.0208 0.1221 0.0305 0.0273 0.0114 0.0445 0.0138 GRCN 0.1762 0.0661 0.2669 0.0868 0.0642 0.0211 0.1285 0.0311 0.0281 0.0117 0.0518 0.0158 CLCRec 0.1801 0.0719 0.2789 0.0892 0.0657 0.0214 0.1329 0.0329 0.0300 0.0118 0.0559 0.0169 SLMRec 0.1743 0.0682 0.2878 0.0869 0.0669 0.0221 0.1363 0.0342 0.0331 0.0132 0.0624 0.0180 LATTICE 0.1654 0.0623 0.2531 0.0770 0.0675 0.0232 0.1401 0.0362 0.0340 0.0135 0.0641 0.0184 PromptMM 0.1864 0.0743 0.3054 0.1013 0.0737 0.0258 0.1517 0.0410 0.0369 0.0155 0.0691 0.0218 \ud835\udc5d-value 1.60\ud835\udc52\u22126 5.90\ud835\udc52\u22125 2.99\ud835\udc52\u22127 1.11\ud835\udc52\u22126 1.41\ud835\udc52\u22124 5.59\ud835\udc52\u22124 5.00\ud835\udc52\u22126 1.29\ud835\udc52\u22125 3.24\ud835\udc52\u22125 2.96\ud835\udc52\u22126 7.51\ud835\udc52\u22127 4.63\ud835\udc52\u22126 Table 5: Ablation study on key components of PromptMM Data Netflix Tiktok Electronics Metrics R@20 N@20 R@20 N@20 R@20 N@20 w/o-Prompt 0.1665 0.0662 0.0681 0.0240 0.0280 0.0117 w/o-PairKD 0.1774 0.0689 0.0692 0.0242 0.0277 0.0112 w/o-ListKD 0.1690 0.0487 0.0673 0.0234 0.0331 0.0136 w/o-disentangle 0.1712 0.0693 0.0706 0.0249 0.0353 0.0141 PromptMM 0.1864 0.0743 0.0737 0.0258 0.0369 0.0155 4.3.1 Numerical Results. As can be seen in the Tab. 5: (1) For variant w/o-Prompt, its performance on all three datasets has decreased compared to PromptMM. This suggests that the removal of prompt-tuning may lead to the semantic gap for knowledge distillation. The modality-aware projection may also be overfitting and can be limited to encode recommendation task-relevant multi-modal context without prompt-tuning enhancement. (2) The variant w/o-PairKD shows a decrease in performance compared to PromptMM when pair-wise KD is disabled, demonstrating the strength of LPKD in distilling ranking-based signals for model alignment. (3) Modality-aware list-wise distillation can finely extract quality modality-aware collaborative relationships, which helps in multi-modal recommendation. Therefore, the variant w/o-ListKD is inferior to the PromptMM results. (4) The item-centric modality features are heavily biased against the preferences of the user. As a result, the variant w/o-disentangle performs poorly without disentangling and re-weighing distilled soft labels. 4.3.2 Visualization Analysis. As shown in Fig. 3, We conducted a visual analysis of modality-specific features on the TikTok dataset to intuitively understand the influence of introducing prompttuning for bridging the teacher model and the student model. 
Specifically, we applied t-SNE with PCA initialization to reduce the dimensionality of both the modality-specific densified features f\ud835\udc5a \ud835\udc56 \u2208R|I|\u00d7\ud835\udc51(\ud835\udc64\u2212, \ud835\udc64/\ud835\udc5c\u2212prompt-tuning) obtained from the feature densification layer, and the original multi-modal high-dimensional features x\ud835\udc5a\u2208R|I|\u00d7\ud835\udc51\ud835\udc5ainto a 2-dimensional space. The results show that the original features x\ud835\udc5aof diverse modalities exhibit significant differences in their vector space representation, with clear distinctions among different modalities, highlighting their association with distinct distributions. For modality-specific features f\ud835\udc5a \ud835\udc56, there are more overlaps in the prompt-tuning version, while the non-prompt-tuning version \ud835\udc64/\ud835\udc5c\u2212Prompt remains more confined Table 6: Model compactness and inference efficiency. \"Time\" indicates the average recommendation time for each epoch. \"Memory\" represents GPU memory usage. \"Params\" denotes the number of parameters. \"Ratio\" indicates the relative parameter size compared to the teacher. We use PyTorch with CUDA from RTX 3090 GPU and Intel Xeon W-2133 CPU. Dataset T-Model Time Memory # Params Ratio Netflix Teacher 42.6s 2.95GB 24.91M LATTICE 61.0s 18.24GB 24.06M 96.59% PromptMM 23.3s 2.03GB 1.95M 7.83% Electronics Teacher 30.8s 5.02GB 99.04M LATTICE 45.1s 37.69GB 98.39M 99.34% PromptMM 13.9s 3.81GB 2.67M 2.70% * LATTICE out of memory on Electronics dataset, and we completed its experiment on A100. to a modality-specific space. This suggests that prompt-tuning effectively strengthens the encoding of modality-specific features by extracting common user preferences pertaining to multiple ranking tasks while reducing the task-irrelevant features characteristic. 4.4 Study on Resource Consumption In this section, we investigate the resource utilization of the teacher, student, and several baselines (LATTICE) in terms of training time, storage, parameter count, and student-to-teacher parameter ratio for model compression. The specific numerical results on Netflix and Electronics are reported in Tab. 6. Results show that Our student model exhibits significantly lower inference and recommendation time consumption than other models, likely due to their larger size, which requires more time during gradient descent parameter updates. Additionally, LATTICE has to dynamically learn homogeneous graphs, which increases computational time consumption. We find that the calculation of KL-Divergence in our model does not significantly increase time consumption, resulting in lower latency. Moreover, the results show that our model has low storage consumption, with a much lower parameter quantity compared to other models, such as LATTICE which needs to dynamically calculate and store item-time relationships, incurring significant overhead. The numerical value of \u2019ratio=11.24% or 2.70%\u2019 indicates the effectiveness of our model as a compression algorithm. Supplementary provides model evaluation results with online incremental learning. WWW \u201924, May 13\u201317, 2024, Singapore, Singapore Wei Wei, Jiabin Tang, Yangqin Jiang, Lianghao Xia, and Chao Huang (a) # of GNN layers (b) latent dimensionality \ufffd (c) dropout rate (d) rate (f) prompt rate (e) rate Figure 4: Impact study of hyperparameters in PromptMM. 
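For completeness, the 2-D projection used in the visualization analysis above (t-SNE with PCA initialization over the raw features x_m and the densified modality-specific features f_m) can be reproduced with a few lines of scikit-learn; the function below is an assumed sketch of that step, not the authors' plotting code.

```python
import numpy as np
from sklearn.manifold import TSNE

def project_2d(features: np.ndarray, seed: int = 0) -> np.ndarray:
    # features: [num_items, feat_dim] array of modality features; returns [num_items, 2]
    return TSNE(n_components=2, init="pca", perplexity=30,
                random_state=seed).fit_transform(features)
```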
4.5 Impact Study of Hyperparameters. This section investigates the influence of several important hyperparameters in our proposed PromptMM. We report the evaluation results in Fig. 4 and examine one hyperparameter at a time while keeping the others at their default settings.\n\u2022 Representation dimensionality d: We investigated the influence of the representation dimensionality d of both the student S(\u00b7) and the teacher T(\u00b7) on recommendation performance. We selected d from [16, 32, 64, 128] and found that the student\u2019s performance saturates when the number of hidden units reaches approximately 64. Notably, the student performs better when the teacher and student share the same dimensionality: the KD score is obtained from the inner product of the representations, so the dimension size determines the scale of the score, and matching scales leads to more accurate KD.\n\u2022 Depth of GNNs L: We examine the influence of the GNN depth in the range [1, 2, 3, 4]. The results show that the teacher\u2019s performance improves as the layer count increases, while the student\u2019s performance remains moderate. We speculate that this is because the teacher needs to encode useful knowledge from high-order relationships, and our modality-aware ranking-based KD effectively transfers this quality knowledge to the student.\n\u2022 Dropout ratio of the teacher\u2019s modality encoding layer: We investigate the dropout ratio of the teacher\u2019s modality encoding layer in the range from 0 to 1. Without dropout, the teacher\u2019s performance drops sharply, indicating overfitting in the multi-modal feature encoder. Datasets with higher original feature dimensions require a higher dropout rate, confirming the risk of overfitting the multi-modal features with existing modality encoders.\n\u2022 Pair/list-wise KD loss weights \u03bb_2, \u03bb_3: The pair-wise and list-wise KD loss weights (\u03bb_2, \u03bb_3) indicate the strength of collaborative knowledge distillation and of disentangled modality-aware knowledge distillation, respectively (a minimal sketch of such a weighted objective is given after this list). We vary the weights over [0, 1e-2, 1e-1, 1e0, 1e1, 1e2]; evaluation results show that absent or small weights significantly decrease the model\u2019s performance.\n\u2022 Prompt rate \u03bb_1: It controls the soft-token rate. Without the soft tokens, the model\u2019s performance significantly decreases, indicating that prompt-tuning enables the teacher to generate more helpful knowledge for the student. We speculate that this is because the prompt module is optimized alongside student learning, leading to better recommendation performance.
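To make the role of these loss weights concrete, the snippet below sketches a weighted distillation objective of the kind varied above. The pair-wise term is written as a BPR-style surrogate whose ordering is dictated by the teacher, and the list-wise term as a temperature-softened KL divergence over item-score lists; the function name, the exact loss forms, and the default weights are illustrative assumptions rather than PromptMM\u2019s released implementation.

```python
# Minimal sketch (not the authors' code): a weighted KD objective combining a
# teacher-guided pair-wise term and a KL-divergence list-wise term, weighted by
# lambda_pair / lambda_list (playing the role of the weights discussed above).
import torch
import torch.nn.functional as F

def kd_objective(student_scores, teacher_scores, item_i, item_j,
                 lambda_pair=0.1, lambda_list=0.1, temperature=1.0):
    # Pair-wise term: the teacher's score margin decides which item of the pair
    # should rank higher, and the student is pushed to reproduce that ordering.
    t_margin = teacher_scores[:, item_i] - teacher_scores[:, item_j]
    s_margin = student_scores[:, item_i] - student_scores[:, item_j]
    pair_kd = -F.logsigmoid(torch.sign(t_margin) * s_margin).mean()

    # List-wise term: KL divergence between softened teacher and student score lists.
    t_prob = F.softmax(teacher_scores / temperature, dim=-1)
    s_logprob = F.log_softmax(student_scores / temperature, dim=-1)
    list_kd = F.kl_div(s_logprob, t_prob, reduction="batchmean") * temperature ** 2

    return lambda_pair * pair_kd + lambda_list * list_kd

# Toy usage: scores over 100 candidate items for a batch of 8 users.
student, teacher = torch.randn(8, 100), torch.randn(8, 100)
loss = kd_objective(student, teacher, item_i=0, item_j=1)
```

Setting lambda_pair or lambda_list to zero corresponds to the w/o-PairKD and w/o-ListKD ablations reported in Tab. 5.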
In summary, the objective of this work is to simplify and enhance multi-modal recommenders using a novel modality-aware KD framework empowered by prompt-tuning. To effectively transfer task-relevant knowledge from the teacher to the student model, we introduce a learnable prompt module that dynamically bridges the semantic gap between the multi-modal context encoding in the teacher model and the collaborative relation modeling in the student model. Additionally, our proposed framework, called PromptMM, aims to disentangle the informative collaborative relationships, thereby enabling augmented knowledge distillation. Through extensive experiments, we demonstrate that PromptMM significantly improves model efficiency while maintaining superior accuracy compared to state-of-the-art solutions. Our future work plans to integrate LLMs with multi-modal context encoding for performance enhancement." }, { "url": "http://arxiv.org/abs/2303.03922v1", "title": "Structure Pretraining and Prompt Tuning for Knowledge Graph Transfer", "abstract": "Knowledge graphs (KG) are essential background knowledge providers in many\ntasks. When designing models for KG-related tasks, one of the key tasks is to\ndevise the Knowledge Representation and Fusion (KRF) module that learns the\nrepresentation of elements from KGs and fuses them with task representations.\nWhile due to the difference of KGs and perspectives to be considered during\nfusion across tasks, duplicate and ad hoc KRF modules design are conducted\namong tasks. In this paper, we propose a novel knowledge graph pretraining\nmodel KGTransformer that could serve as a uniform KRF module in diverse\nKG-related tasks. We pretrain KGTransformer with three self-supervised tasks\nwith sampled sub-graphs as input. For utilization, we propose a general\nprompt-tuning mechanism regarding task data as a triple prompt to allow\nflexible interactions between task KGs and task data. We evaluate pretrained\nKGTransformer on three tasks, triple classification, zero-shot image\nclassification, and question answering. KGTransformer consistently achieves\nbetter results than specifically designed task models. Through experiments, we\njustify that the pretrained KGTransformer could be used off the shelf as a\ngeneral and effective KRF module across KG-related tasks. The code and datasets\nare available at https://github.com/zjukg/KGTransformer.", "authors": "Wen Zhang, Yushan Zhu, Mingyang Chen, Yuxia Geng, Yufeng Huang, Yajing Xu, Wenting Song, Huajun Chen", "published": "2023-03-03", "updated": "2023-03-03", "primary_cat": "cs.LG", "cats": [ "cs.LG", "cs.AI", "cs.CL" ], "label": "Related Work" }, { "url": "http://arxiv.org/abs/2307.03591v1", "title": "Structure Guided Multi-modal Pre-trained Transformer for Knowledge Graph Reasoning", "abstract": "Multimodal knowledge graphs (MKGs), which intuitively organize information in\nvarious modalities, can benefit multiple practical downstream tasks, such as\nrecommendation systems, and visual question answering. However, most MKGs are\nstill far from complete, which motivates the flourishing of MKG reasoning\nmodels. Recently, with the development of general artificial architectures, the\npretrained transformer models have drawn increasing attention, especially for\nmultimodal scenarios. However, the research of multimodal pretrained\ntransformer (MPT) for knowledge graph reasoning (KGR) is still at an early\nstage. As the biggest difference between MKG and other multimodal data, the\nrich structural information underlying the MKG still cannot be fully leveraged\nin existing MPT models. Most of them only utilize the graph structure as a\nretrieval map for matching images and texts connected with the same entity.\nThis manner hinders their reasoning performances. To this end, we propose the\ngraph Structure Guided Multimodal Pretrained Transformer for knowledge graph\nreasoning, termed SGMPT.
Specifically, the graph structure encoder is adopted\nfor structural feature encoding. Then, a structure-guided fusion module with\ntwo different strategies, i.e., weighted summation and alignment constraint, is\nfirst designed to inject the structural information into both the textual and\nvisual features. To the best of our knowledge, SGMPT is the first MPT model for\nmultimodal KGR, which mines the structural information underlying the knowledge\ngraph. Extensive experiments on FB15k-237-IMG and WN18-IMG, demonstrate that\nour SGMPT outperforms existing state-of-the-art models, and prove the\neffectiveness of the designed strategies.", + "authors": "Ke Liang, Sihang Zhou, Yue Liu, Lingyuan Meng, Meng Liu, Xinwang Liu", + "published": "2023-07-06", + "updated": "2023-07-06", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.IR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2302.08043v3", + "title": "GraphPrompt: Unifying Pre-Training and Downstream Tasks for Graph Neural Networks", + "abstract": "Graphs can model complex relationships between objects, enabling a myriad of\nWeb applications such as online page/article classification and social\nrecommendation. While graph neural networks(GNNs) have emerged as a powerful\ntool for graph representation learning, in an end-to-end supervised setting,\ntheir performance heavily rely on a large amount of task-specific supervision.\nTo reduce labeling requirement, the \"pre-train, fine-tune\" and \"pre-train,\nprompt\" paradigms have become increasingly common. In particular, prompting is\na popular alternative to fine-tuning in natural language processing, which is\ndesigned to narrow the gap between pre-training and downstream objectives in a\ntask-specific manner. However, existing study of prompting on graphs is still\nlimited, lacking a universal treatment to appeal to different downstream tasks.\nIn this paper, we propose GraphPrompt, a novel pre-training and prompting\nframework on graphs. GraphPrompt not only unifies pre-training and downstream\ntasks into a common task template, but also employs a learnable prompt to\nassist a downstream task in locating the most relevant knowledge from the\npre-train model in a task-specific manner. Finally, we conduct extensive\nexperiments on five public datasets to evaluate and analyze GraphPrompt.", + "authors": "Zemin Liu, Xingtong Yu, Yuan Fang, Xinming Zhang", + "published": "2023-02-16", + "updated": "2023-02-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2204.11091v1", + "title": "On-Device Next-Item Recommendation with Self-Supervised Knowledge Distillation", + "abstract": "Modern recommender systems operate in a fully server-based fashion. To cater\nto millions of users, the frequent model maintaining and the high-speed\nprocessing for concurrent user requests are required, which comes at the cost\nof a huge carbon footprint. Meanwhile, users need to upload their behavior data\neven including the immediate environmental context to the server, raising the\npublic concern about privacy. On-device recommender systems circumvent these\ntwo issues with cost-conscious settings and local inference. However, due to\nthe limited memory and computing resources, on-device recommender systems are\nconfronted with two fundamental challenges: (1) how to reduce the size of\nregular models to fit edge devices? 
(2) how to retain the original capacity?\nPrevious research mostly adopts tensor decomposition techniques to compress the\nregular recommendation model with limited compression ratio so as to avoid\ndrastic performance degradation. In this paper, we explore ultra-compact models\nfor next-item recommendation, by loosing the constraint of dimensionality\nconsistency in tensor decomposition. Meanwhile, to compensate for the capacity\nloss caused by compression, we develop a self-supervised knowledge distillation\nframework which enables the compressed model (student) to distill the essential\ninformation lying in the raw data, and improves the long-tail item\nrecommendation through an embedding-recombination strategy with the original\nmodel (teacher). The extensive experiments on two benchmarks demonstrate that,\nwith 30x model size reduction, the compressed model almost comes with no\naccuracy loss, and even outperforms its uncompressed counterpart in most cases.", + "authors": "Xin Xia, Hongzhi Yin, Junliang Yu, Qinyong Wang, Guandong Xu, Nguyen Quoc Viet Hung", + "published": "2022-04-23", + "updated": "2022-04-23", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2308.05697v3", + "title": "SSLRec: A Self-Supervised Learning Framework for Recommendation", + "abstract": "Self-supervised learning (SSL) has gained significant interest in recent\nyears as a solution to address the challenges posed by sparse and noisy data in\nrecommender systems. Despite the growing number of SSL algorithms designed to\nprovide state-of-the-art performance in various recommendation scenarios (e.g.,\ngraph collaborative filtering, sequential recommendation, social\nrecommendation, KG-enhanced recommendation), there is still a lack of unified\nframeworks that integrate recommendation algorithms across different domains.\nSuch a framework could serve as the cornerstone for self-supervised\nrecommendation algorithms, unifying the validation of existing methods and\ndriving the design of new ones. To address this gap, we introduce SSLRec, a\nnovel benchmark platform that provides a standardized, flexible, and\ncomprehensive framework for evaluating various SSL-enhanced recommenders. The\nSSLRec framework features a modular architecture that allows users to easily\nevaluate state-of-the-art models and a complete set of data augmentation and\nself-supervised toolkits to help create SSL recommendation models with specific\nneeds. Furthermore, SSLRec simplifies the process of training and evaluating\ndifferent recommendation models with consistent and fair settings. Our SSLRec\nplatform covers a comprehensive set of state-of-the-art SSL-enhanced\nrecommendation models across different scenarios, enabling researchers to\nevaluate these cutting-edge models and drive further innovation in the field.\nOur implemented SSLRec framework is available at the source code repository\nhttps://github.com/HKUDS/SSLRec.", + "authors": "Xubin Ren, Lianghao Xia, Yuhao Yang, Wei Wei, Tianle Wang, Xuheng Cai, Chao Huang", + "published": "2023-08-10", + "updated": "2024-01-30", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2302.00219v1", + "title": "Knowledge Distillation on Graphs: A Survey", + "abstract": "Graph Neural Networks (GNNs) have attracted tremendous attention by\ndemonstrating their capability to handle graph data. 
However, they are\ndifficult to be deployed in resource-limited devices due to model sizes and\nscalability constraints imposed by the multi-hop data dependency. In addition,\nreal-world graphs usually possess complex structural information and features.\nTherefore, to improve the applicability of GNNs and fully encode the\ncomplicated topological information, knowledge distillation on graphs (KDG) has\nbeen introduced to build a smaller yet effective model and exploit more\nknowledge from data, leading to model compression and performance improvement.\nRecently, KDG has achieved considerable progress with many studies proposed. In\nthis survey, we systematically review these works. Specifically, we first\nintroduce KDG challenges and bases, then categorize and summarize existing\nworks of KDG by answering the following three questions: 1) what to distillate,\n2) who to whom, and 3) how to distillate. Finally, we share our thoughts on\nfuture research directions.", + "authors": "Yijun Tian, Shichao Pei, Xiangliang Zhang, Chuxu Zhang, Nitesh V. Chawla", + "published": "2023-02-01", + "updated": "2023-02-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2311.00423v6", + "title": "LLMRec: Large Language Models with Graph Augmentation for Recommendation", + "abstract": "The problem of data sparsity has long been a challenge in recommendation\nsystems, and previous studies have attempted to address this issue by\nincorporating side information. However, this approach often introduces side\neffects such as noise, availability issues, and low data quality, which in turn\nhinder the accurate modeling of user preferences and adversely impact\nrecommendation performance. In light of the recent advancements in large\nlanguage models (LLMs), which possess extensive knowledge bases and strong\nreasoning capabilities, we propose a novel framework called LLMRec that\nenhances recommender systems by employing three simple yet effective LLM-based\ngraph augmentation strategies. Our approach leverages the rich content\navailable within online platforms (e.g., Netflix, MovieLens) to augment the\ninteraction graph in three ways: (i) reinforcing user-item interaction egde,\n(ii) enhancing the understanding of item node attributes, and (iii) conducting\nuser node profiling, intuitively from the natural language perspective. By\nemploying these strategies, we address the challenges posed by sparse implicit\nfeedback and low-quality side information in recommenders. Besides, to ensure\nthe quality of the augmentation, we develop a denoised data robustification\nmechanism that includes techniques of noisy implicit feedback pruning and\nMAE-based feature enhancement that help refine the augmented data and improve\nits reliability. Furthermore, we provide theoretical analysis to support the\neffectiveness of LLMRec and clarify the benefits of our method in facilitating\nmodel optimization. Experimental results on benchmark datasets demonstrate the\nsuperiority of our LLM-based augmentation approach over state-of-the-art\ntechniques. 
To ensure reproducibility, we have made our code and augmented data\npublicly available at: https://github.com/HKUDS/LLMRec.git", + "authors": "Wei Wei, Xubin Ren, Jiabin Tang, Qinyong Wang, Lixin Su, Suqi Cheng, Junfeng Wang, Dawei Yin, Chao Huang", + "published": "2023-11-01", + "updated": "2024-01-06", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.13023v3", + "title": "GraphGPT: Graph Instruction Tuning for Large Language Models", + "abstract": "Graph Neural Networks (GNNs) have evolved to understand graph structures\nthrough recursive exchanges and aggregations among nodes. To enhance\nrobustness, self-supervised learning (SSL) has become a vital tool for data\naugmentation. Traditional methods often depend on fine-tuning with\ntask-specific labels, limiting their effectiveness when labeled data is scarce.\nOur research tackles this by advancing graph model generalization in zero-shot\nlearning environments. Inspired by the success of large language models (LLMs),\nwe aim to create a graph-oriented LLM capable of exceptional generalization\nacross various datasets and tasks without relying on downstream graph data. We\nintroduce the GraphGPT framework, which integrates LLMs with graph structural\nknowledge through graph instruction tuning. This framework includes a\ntext-graph grounding component to link textual and graph structures and a\ndual-stage instruction tuning approach with a lightweight graph-text alignment\nprojector. These innovations allow LLMs to comprehend complex graph structures\nand enhance adaptability across diverse datasets and tasks. Our framework\ndemonstrates superior generalization in both supervised and zero-shot graph\nlearning tasks, surpassing existing benchmarks. The open-sourced model\nimplementation of our GraphGPT is available at\nhttps://github.com/HKUDS/GraphGPT.", + "authors": "Jiabin Tang, Yuhao Yang, Wei Wei, Lei Shi, Lixin Su, Suqi Cheng, Dawei Yin, Chao Huang", + "published": "2023-10-19", + "updated": "2024-05-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.00447v3", + "title": "TALLRec: An Effective and Efficient Tuning Framework to Align Large Language Model with Recommendation", + "abstract": "Large Language Models (LLMs) have demonstrated remarkable performance across\ndiverse domains, thereby prompting researchers to explore their potential for\nuse in recommendation systems. Initial attempts have leveraged the exceptional\ncapabilities of LLMs, such as rich knowledge and strong generalization through\nIn-context Learning, which involves phrasing the recommendation task as\nprompts. Nevertheless, the performance of LLMs in recommendation tasks remains\nsuboptimal due to a substantial disparity between the training tasks for LLMs\nand recommendation tasks, as well as inadequate recommendation data during\npre-training. To bridge the gap, we consider building a Large Recommendation\nLanguage Model by tunning LLMs with recommendation data. To this end, we\npropose an efficient and effective Tuning framework for Aligning LLMs with\nRecommendation, namely TALLRec. We have demonstrated that the proposed TALLRec\nframework can significantly enhance the recommendation capabilities of LLMs in\nthe movie and book domains, even with a limited dataset of fewer than 100\nsamples. 
Additionally, the proposed framework is highly efficient and can be\nexecuted on a single RTX 3090 with LLaMA-7B. Furthermore, the fine-tuned LLM\nexhibits robust cross-domain generalization. Our code and data are available at\nhttps://github.com/SAI990323/TALLRec.", + "authors": "Keqin Bao, Jizhi Zhang, Yang Zhang, Wenjie Wang, Fuli Feng, Xiangnan He", + "published": "2023-04-30", + "updated": "2023-10-17", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2302.10632v5", + "title": "Multi-Modal Self-Supervised Learning for Recommendation", + "abstract": "The online emergence of multi-modal sharing platforms (eg, TikTok, Youtube)\nis powering personalized recommender systems to incorporate various modalities\n(eg, visual, textual and acoustic) into the latent user representations. While\nexisting works on multi-modal recommendation exploit multimedia content\nfeatures in enhancing item embeddings, their model representation capability is\nlimited by heavy label reliance and weak robustness on sparse user behavior\ndata. Inspired by the recent progress of self-supervised learning in\nalleviating label scarcity issue, we explore deriving self-supervision signals\nwith effectively learning of modality-aware user preference and cross-modal\ndependencies. To this end, we propose a new Multi-Modal Self-Supervised\nLearning (MMSSL) method which tackles two key challenges. Specifically, to\ncharacterize the inter-dependency between the user-item collaborative view and\nitem multi-modal semantic view, we design a modality-aware interactive\nstructure learning paradigm via adversarial perturbations for data\naugmentation. In addition, to capture the effects that user's modality-aware\ninteraction pattern would interweave with each other, a cross-modal contrastive\nlearning approach is introduced to jointly preserve the inter-modal semantic\ncommonality and user preference diversity. Experiments on real-world datasets\nverify the superiority of our method in offering great potential for multimedia\nrecommendation over various state-of-the-art baselines. The implementation is\nreleased at: https://github.com/HKUDS/MMSSL.", + "authors": "Wei Wei, Chao Huang, Lianghao Xia, Chuxu Zhang", + "published": "2023-02-21", + "updated": "2023-07-18", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.15950v4", + "title": "Representation Learning with Large Language Models for Recommendation", + "abstract": "Recommender systems have seen significant advancements with the influence of\ndeep learning and graph neural networks, particularly in capturing complex\nuser-item relationships. However, these graph-based recommenders heavily depend\non ID-based data, potentially disregarding valuable textual information\nassociated with users and items, resulting in less informative learned\nrepresentations. Moreover, the utilization of implicit feedback data introduces\npotential noise and bias, posing challenges for the effectiveness of user\npreference learning. While the integration of large language models (LLMs) into\ntraditional ID-based recommenders has gained attention, challenges such as\nscalability issues, limitations in text-only reliance, and prompt input\nconstraints need to be addressed for effective implementation in practical\nrecommender systems. 
To address these challenges, we propose a model-agnostic\nframework RLMRec that aims to enhance existing recommenders with LLM-empowered\nrepresentation learning. It proposes a recommendation paradigm that integrates\nrepresentation learning with LLMs to capture intricate semantic aspects of user\nbehaviors and preferences. RLMRec incorporates auxiliary textual signals,\ndevelops a user/item profiling paradigm empowered by LLMs, and aligns the\nsemantic space of LLMs with the representation space of collaborative\nrelational signals through a cross-view alignment framework. This work further\nestablish a theoretical foundation demonstrating that incorporating textual\nsignals through mutual information maximization enhances the quality of\nrepresentations. In our evaluation, we integrate RLMRec with state-of-the-art\nrecommender models, while also analyzing its efficiency and robustness to noise\ndata. Our implementation codes are available at\nhttps://github.com/HKUDS/RLMRec.", + "authors": "Xubin Ren, Wei Wei, Lianghao Xia, Lixin Su, Suqi Cheng, Junfeng Wang, Dawei Yin, Chao Huang", + "published": "2023-10-24", + "updated": "2024-02-25", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2206.07353v1", + "title": "Rethinking Reinforcement Learning for Recommendation: A Prompt Perspective", + "abstract": "Modern recommender systems aim to improve user experience. As reinforcement\nlearning (RL) naturally fits this objective -- maximizing an user's reward per\nsession -- it has become an emerging topic in recommender systems. Developing\nRL-based recommendation methods, however, is not trivial due to the\n\\emph{offline training challenge}. Specifically, the keystone of traditional RL\nis to train an agent with large amounts of online exploration making lots of\n`errors' in the process. In the recommendation setting, though, we cannot\nafford the price of making `errors' online. As a result, the agent needs to be\ntrained through offline historical implicit feedback, collected under different\nrecommendation policies; traditional RL algorithms may lead to sub-optimal\npolicies under these offline training settings.\n Here we propose a new learning paradigm -- namely Prompt-Based Reinforcement\nLearning (PRL) -- for the offline training of RL-based recommendation agents.\nWhile traditional RL algorithms attempt to map state-action input pairs to\ntheir expected rewards (e.g., Q-values), PRL directly infers actions (i.e.,\nrecommended items) from state-reward inputs. In short, the agents are trained\nto predict a recommended item given the prior interactions and an observed\nreward value -- with simple supervised learning. At deployment time, this\nhistorical (training) data acts as a knowledge base, while the state-reward\npairs are used as a prompt. The agents are thus used to answer the question:\n\\emph{ Which item should be recommended given the prior interactions \\& the\nprompted reward value}? 
We implement PRL with four notable recommendation\nmodels and conduct experiments on two real-world e-commerce datasets.\nExperimental results demonstrate the superior performance of our proposed\nmethods.", + "authors": "Xin Xin, Tiago Pimentel, Alexandros Karatzoglou, Pengjie Ren, Konstantina Christakopoulou, Zhaochun Ren", + "published": "2022-06-15", + "updated": "2022-06-15", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.10738v4", + "title": "Knowledge Graph Contrastive Learning Based on Relation-Symmetrical Structure", + "abstract": "Knowledge graph embedding (KGE) aims at learning powerful representations to\nbenefit various artificial intelligence applications. Meanwhile, contrastive\nlearning has been widely leveraged in graph learning as an effective mechanism\nto enhance the discriminative capacity of the learned representations. However,\nthe complex structures of KG make it hard to construct appropriate contrastive\npairs. Only a few attempts have integrated contrastive learning strategies with\nKGE. But, most of them rely on language models ( e.g., Bert) for contrastive\npair construction instead of fully mining information underlying the graph\nstructure, hindering expressive ability. Surprisingly, we find that the\nentities within a relational symmetrical structure are usually similar and\ncorrelated. To this end, we propose a knowledge graph contrastive learning\nframework based on relation-symmetrical structure, KGE-SymCL, which mines\nsymmetrical structure information in KGs to enhance the discriminative ability\nof KGE models. Concretely, a plug-and-play approach is proposed by taking\nentities in the relation-symmetrical positions as positive pairs. Besides, a\nself-supervised alignment loss is designed to pull together positive pairs.\nExperimental results on link prediction and entity classification datasets\ndemonstrate that our KGE-SymCL can be easily adopted to various KGE models for\nperformance improvements. Moreover, extensive experiments show that our model\ncould outperform other state-of-the-art baselines.", + "authors": "Ke Liang, Yue Liu, Sihang Zhou, Wenxuan Tu, Yi Wen, Xihong Yang, Xiangjun Dong, Xinwang Liu", + "published": "2022-11-19", + "updated": "2023-06-13", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.IR", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2402.15183v4", + "title": "GraphEdit: Large Language Models for Graph Structure Learning", + "abstract": "Graph Structure Learning (GSL) focuses on capturing intrinsic dependencies\nand interactions among nodes in graph-structured data by generating novel graph\nstructures. Graph Neural Networks (GNNs) have emerged as promising GSL\nsolutions, utilizing recursive message passing to encode node-wise\ninter-dependencies. However, many existing GSL methods heavily depend on\nexplicit graph structural information as supervision signals, leaving them\nsusceptible to challenges such as data noise and sparsity. In this work, we\npropose GraphEdit, an approach that leverages large language models (LLMs) to\nlearn complex node relationships in graph-structured data. 
By enhancing the\nreasoning capabilities of LLMs through instruction-tuning over graph\nstructures, we aim to overcome the limitations associated with explicit graph\nstructural information and enhance the reliability of graph structure learning.\nOur approach not only effectively denoises noisy connections but also\nidentifies node-wise dependencies from a global perspective, providing a\ncomprehensive understanding of the graph structure. We conduct extensive\nexperiments on multiple benchmark datasets to demonstrate the effectiveness and\nrobustness of GraphEdit across various settings. We have made our model\nimplementation available at: https://github.com/HKUDS/GraphEdit.", + "authors": "Zirui Guo, Lianghao Xia, Yanhua Yu, Yuling Wang, Zixuan Yang, Wei Wei, Liang Pang, Tat-Seng Chua, Chao Huang", + "published": "2024-02-23", + "updated": "2024-03-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2402.04616v2", + "title": "TinyLLM: Learning a Small Student from Multiple Large Language Models", + "abstract": "Transferring the reasoning capability from stronger large language models\n(LLMs) to smaller ones has been quite appealing, as smaller LLMs are more\nflexible to deploy with less expense. Among the existing solutions, knowledge\ndistillation stands out due to its outstanding efficiency and generalization.\nHowever, existing methods suffer from several drawbacks, including limited\nknowledge diversity and the lack of rich contextual information. To solve the\nproblems and facilitate the learning of compact language models, we propose\nTinyLLM, a new knowledge distillation paradigm to learn a small student LLM\nfrom multiple large teacher LLMs. In particular, we encourage the student LLM\nto not only generate the correct answers but also understand the rationales\nbehind these answers. Given that different LLMs possess diverse reasoning\nskills, we guide the student model to assimilate knowledge from various teacher\nLLMs. We further introduce an in-context example generator and a\nteacher-forcing Chain-of-Thought strategy to ensure that the rationales are\naccurate and grounded in contextually appropriate scenarios. Extensive\nexperiments on six datasets across two reasoning tasks demonstrate the\nsuperiority of our method. Results show that TinyLLM can outperform large\nteacher LLMs significantly, despite a considerably smaller model size.", + "authors": "Yijun Tian, Yikun Han, Xiusi Chen, Wei Wang, Nitesh V. Chawla", + "published": "2024-02-07", + "updated": "2024-04-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2402.16024v1", + "title": "HiGPT: Heterogeneous Graph Language Model", + "abstract": "Heterogeneous graph learning aims to capture complex relationships and\ndiverse relational semantics among entities in a heterogeneous graph to obtain\nmeaningful representations for nodes and edges. Recent advancements in\nheterogeneous graph neural networks (HGNNs) have achieved state-of-the-art\nperformance by considering relation heterogeneity and using specialized message\nfunctions and aggregation rules. However, existing frameworks for heterogeneous\ngraph learning have limitations in generalizing across diverse heterogeneous\ngraph datasets. 
Most of these frameworks follow the \"pre-train\" and \"fine-tune\"\nparadigm on the same dataset, which restricts their capacity to adapt to new\nand unseen data. This raises the question: \"Can we generalize heterogeneous\ngraph models to be well-adapted to diverse downstream learning tasks with\ndistribution shifts in both node token sets and relation type heterogeneity?''\nTo tackle those challenges, we propose HiGPT, a general large graph model with\nHeterogeneous graph instruction-tuning paradigm. Our framework enables learning\nfrom arbitrary heterogeneous graphs without the need for any fine-tuning\nprocess from downstream datasets. To handle distribution shifts in\nheterogeneity, we introduce an in-context heterogeneous graph tokenizer that\ncaptures semantic relationships in different heterogeneous graphs, facilitating\nmodel adaptation. We incorporate a large corpus of heterogeneity-aware graph\ninstructions into our HiGPT, enabling the model to effectively comprehend\ncomplex relation heterogeneity and distinguish between various types of graph\ntokens. Furthermore, we introduce the Mixture-of-Thought (MoT) instruction\naugmentation paradigm to mitigate data scarcity by generating diverse and\ninformative instructions. Through comprehensive evaluations, our proposed\nframework demonstrates exceptional performance in terms of generalization\nperformance.", + "authors": "Jiabin Tang, Yuhao Yang, Wei Wei, Lei Shi, Long Xia, Dawei Yin, Chao Huang", + "published": "2024-02-25", + "updated": "2024-02-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2205.04682v2", + "title": "Selective Fairness in Recommendation via Prompts", + "abstract": "Recommendation fairness has attracted great attention recently. In real-world\nsystems, users usually have multiple sensitive attributes (e.g. age, gender,\nand occupation), and users may not want their recommendation results influenced\nby those attributes. Moreover, which of and when these user attributes should\nbe considered in fairness-aware modeling should depend on users' specific\ndemands. In this work, we define the selective fairness task, where users can\nflexibly choose which sensitive attributes should the recommendation model be\nbias-free. We propose a novel parameter-efficient prompt-based fairness-aware\nrecommendation (PFRec) framework, which relies on attribute-specific\nprompt-based bias eliminators with adversarial training, enabling selective\nfairness with different attribute combinations on sequential recommendation.\nBoth task-specific and user-specific prompts are considered. We conduct\nextensive evaluations to verify PFRec's superiority in selective fairness. The\nsource codes are released in \\url{https://github.com/wyqing20/PFRec}.", + "authors": "Yiqing Wu, Ruobing Xie, Yongchun Zhu, Fuzhen Zhuang, Xiang Ao, Xu Zhang, Leyu Lin, Qing He", + "published": "2022-05-10", + "updated": "2022-07-05", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2302.03735v3", + "title": "Pre-train, Prompt and Recommendation: A Comprehensive Survey of Language Modelling Paradigm Adaptations in Recommender Systems", + "abstract": "The emergence of Pre-trained Language Models (PLMs) has achieved tremendous\nsuccess in the field of Natural Language Processing (NLP) by learning universal\nrepresentations on large corpora in a self-supervised manner. 
The pre-trained\nmodels and the learned representations can be beneficial to a series of\ndownstream NLP tasks. This training paradigm has recently been adapted to the\nrecommendation domain and is considered a promising approach by both academia\nand industry. In this paper, we systematically investigate how to extract and\ntransfer knowledge from pre-trained models learned by different PLM-related\ntraining paradigms to improve recommendation performance from various\nperspectives, such as generality, sparsity, efficiency and effectiveness.\nSpecifically, we propose a comprehensive taxonomy to divide existing PLM-based\nrecommender systems w.r.t. their training strategies and objectives. Then, we\nanalyze and summarize the connection between PLM-based training paradigms and\ndifferent input data types for recommender systems. Finally, we elaborate on\nopen issues and future research directions in this vibrant field.", + "authors": "Peng Liu, Lemei Zhang, Jon Atle Gulla", + "published": "2023-02-07", + "updated": "2023-09-12", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2309.15427v2", + "title": "Graph Neural Prompting with Large Language Models", + "abstract": "Large language models (LLMs) have shown remarkable generalization capability\nwith exceptional performance in various language modeling tasks. However, they\nstill exhibit inherent limitations in precisely capturing and returning\ngrounded knowledge. While existing work has explored utilizing knowledge graphs\n(KGs) to enhance language modeling via joint training and customized model\narchitectures, applying this to LLMs is problematic owing to their large number\nof parameters and high computational cost. Therefore, how to enhance\npre-trained LLMs using grounded knowledge, e.g., retrieval-augmented\ngeneration, remains an open question. In this work, we propose Graph Neural\nPrompting (GNP), a novel plug-and-play method to assist pre-trained LLMs in\nlearning beneficial knowledge from KGs. GNP encompasses various designs,\nincluding a standard graph neural network encoder, a cross-modality pooling\nmodule, a domain projector, and a self-supervised link prediction objective.\nExtensive experiments on multiple datasets demonstrate the superiority of GNP\non both commonsense and biomedical reasoning tasks across different LLM sizes\nand settings. Code is available at https://github.com/meettyj/GNP.", + "authors": "Yijun Tian, Huan Song, Zichen Wang, Haozhu Wang, Ziqing Hu, Fang Wang, Nitesh V. Chawla, Panpan Xu", + "published": "2023-09-27", + "updated": "2023-12-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2303.01130v1", + "title": "Distillation from Heterogeneous Models for Top-K Recommendation", + "abstract": "Recent recommender systems have shown remarkable performance by using an\nensemble of heterogeneous models. However, it is exceedingly costly because it\nrequires resources and inference latency proportional to the number of models,\nwhich remains the bottleneck for production. Our work aims to transfer the\nensemble knowledge of heterogeneous teachers to a lightweight student model\nusing knowledge distillation (KD), to reduce the huge inference costs while\nretaining high accuracy. Through an empirical study, we find that the efficacy\nof distillation severely drops when transferring knowledge from heterogeneous\nteachers. 
Nevertheless, we show that an important signal to ease the difficulty\ncan be obtained from the teacher's training trajectory. This paper proposes a\nnew KD framework, named HetComp, that guides the student model by transferring\neasy-to-hard sequences of knowledge generated from the teachers' trajectories.\nTo provide guidance according to the student's learning state, HetComp uses\ndynamic knowledge construction to provide progressively difficult ranking\nknowledge and adaptive knowledge transfer to gradually transfer finer-grained\nranking information. Our comprehensive experiments show that HetComp\nsignificantly improves the distillation quality and the generalization of the\nstudent model.", + "authors": "SeongKu Kang, Wonbin Kweon, Dongha Lee, Jianxun Lian, Xing Xie, Hwanjo Yu", + "published": "2023-03-02", + "updated": "2023-03-02", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2205.09153v1", + "title": "ERNIE-Search: Bridging Cross-Encoder with Dual-Encoder via Self On-the-fly Distillation for Dense Passage Retrieval", + "abstract": "Neural retrievers based on pre-trained language models (PLMs), such as\ndual-encoders, have achieved promising performance on the task of open-domain\nquestion answering (QA). Their effectiveness can further reach new\nstate-of-the-arts by incorporating cross-architecture knowledge distillation.\nHowever, most of the existing studies just directly apply conventional\ndistillation methods. They fail to consider the particular situation where the\nteacher and student have different structures. In this paper, we propose a\nnovel distillation method that significantly advances cross-architecture\ndistillation for dual-encoders. Our method 1) introduces a self on-the-fly\ndistillation method that can effectively distill late interaction (i.e.,\nColBERT) to vanilla dual-encoder, and 2) incorporates a cascade distillation\nprocess to further improve the performance with a cross-encoder teacher.\nExtensive experiments are conducted to validate that our proposed solution\noutperforms strong baselines and establish a new state-of-the-art on\nopen-domain QA benchmarks.", + "authors": "Yuxiang Lu, Yiding Liu, Jiaxiang Liu, Yunsheng Shi, Zhengjie Huang, Shikun Feng Yu Sun, Hao Tian, Hua Wu, Shuaiqiang Wang, Dawei Yin, Haifeng Wang", + "published": "2022-05-18", + "updated": "2022-05-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2309.09920v1", + "title": "Distilling HuBERT with LSTMs via Decoupled Knowledge Distillation", + "abstract": "Much research effort is being applied to the task of compressing the\nknowledge of self-supervised models, which are powerful, yet large and memory\nconsuming. In this work, we show that the original method of knowledge\ndistillation (and its more recently proposed extension, decoupled knowledge\ndistillation) can be applied to the task of distilling HuBERT. In contrast to\nmethods that focus on distilling internal features, this allows for more\nfreedom in the network architecture of the compressed model. 
We thus propose to\ndistill HuBERT's Transformer layers into an LSTM-based distilled model that\nreduces the number of parameters even below DistilHuBERT and at the same time\nshows improved performance in automatic speech recognition.", + "authors": "Danilo de Oliveira, Timo Gerkmann", + "published": "2023-09-18", + "updated": "2023-09-18", + "primary_cat": "eess.AS", + "cats": [ + "eess.AS", + "cs.LG", + "cs.SD", + "eess.SP" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2102.02973v1", + "title": "Show, Attend and Distill:Knowledge Distillation via Attention-based Feature Matching", + "abstract": "Knowledge distillation extracts general knowledge from a pre-trained teacher\nnetwork and provides guidance to a target student network. Most studies\nmanually tie intermediate features of the teacher and student, and transfer\nknowledge through pre-defined links. However, manual selection often constructs\nineffective links that limit the improvement from the distillation. There has\nbeen an attempt to address the problem, but it is still challenging to identify\neffective links under practical scenarios. In this paper, we introduce an\neffective and efficient feature distillation method utilizing all the feature\nlevels of the teacher without manually selecting the links. Specifically, our\nmethod utilizes an attention-based meta-network that learns relative\nsimilarities between features, and applies identified similarities to control\ndistillation intensities of all possible pairs. As a result, our method\ndetermines competent links more efficiently than the previous approach and\nprovides better performance on model compression and transfer learning tasks.\nFurther qualitative analyses and ablative studies describe how our method\ncontributes to better distillation. The implementation code is available at\ngithub.com/clovaai/attention-feature-distillation.", + "authors": "Mingi Ji, Byeongho Heo, Sungrae Park", + "published": "2021-02-05", + "updated": "2021-02-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1707.02573v1", + "title": "Distilling Entanglement with Noisy Operations", + "abstract": "Entanglement distillation is a fundamental task in quantum information\nprocessing. It not only extracts entanglement out of corrupted systems but also\nleads to protecting systems of interest against intervention with environment.\nIn this work, we consider a realistic scenario of entanglement distillation\nwhere noisy quantum operations are applied. In particular, the two-way\ndistillation protocol that tolerates the highest error rate is considered. We\nshow that among all types of noise there are only four equivalence classes\naccording to the distillability condition. 
Since the four classes are connected\nby local unitary transformations, our results can be used to improve\nentanglement distillability in practice when entanglement distillation is\nperformed in a realistic setting.", + "authors": "Jinho Chang, Joonwoo Bae, Younghun Kwon", + "published": "2017-07-09", + "updated": "2017-07-09", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1607.04311v1", + "title": "Defensive Distillation is Not Robust to Adversarial Examples", + "abstract": "We show that defensive distillation is not secure: it is no more resistant to\ntargeted misclassification attacks than unprotected neural networks.", + "authors": "Nicholas Carlini, David Wagner", + "published": "2016-07-14", + "updated": "2016-07-14", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2307.08436v1", + "title": "DOT: A Distillation-Oriented Trainer", + "abstract": "Knowledge distillation transfers knowledge from a large model to a small one\nvia task and distillation losses. In this paper, we observe a trade-off between\ntask and distillation losses, i.e., introducing distillation loss limits the\nconvergence of task loss. We believe that the trade-off results from the\ninsufficient optimization of distillation loss. The reason is: The teacher has\na lower task loss than the student, and a lower distillation loss drives the\nstudent more similar to the teacher, then a better-converged task loss could be\nobtained. To break the trade-off, we propose the Distillation-Oriented Trainer\n(DOT). DOT separately considers gradients of task and distillation losses, then\napplies a larger momentum to distillation loss to accelerate its optimization.\nWe empirically prove that DOT breaks the trade-off, i.e., both losses are\nsufficiently optimized. Extensive experiments validate the superiority of DOT.\nNotably, DOT achieves a +2.59% accuracy improvement on ImageNet-1k for the\nResNet50-MobileNetV1 pair. Conclusively, DOT greatly benefits the student's\noptimization properties in terms of loss convergence and model generalization.\nCode will be made publicly available.", + "authors": "Borui Zhao, Quan Cui, Renjie Song, Jiajun Liang", + "published": "2023-07-17", + "updated": "2023-07-17", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2303.05958v1", + "title": "Robust Knowledge Distillation from RNN-T Models With Noisy Training Labels Using Full-Sum Loss", + "abstract": "This work studies knowledge distillation (KD) and addresses its constraints\nfor recurrent neural network transducer (RNN-T) models. In hard distillation, a\nteacher model transcribes large amounts of unlabelled speech to train a student\nmodel. Soft distillation is another popular KD method that distills the output\nlogits of the teacher model. Due to the nature of RNN-T alignments, applying\nsoft distillation between RNN-T architectures having different posterior\ndistributions is challenging. In addition, bad teachers having high\nword-error-rate (WER) reduce the efficacy of KD. We investigate how to\neffectively distill knowledge from variable quality ASR teachers, which has not\nbeen studied before to the best of our knowledge. We show that a sequence-level\nKD, full-sum distillation, outperforms other distillation methods for RNN-T\nmodels, especially for bad teachers. 
We also propose a variant of full-sum\ndistillation that distills the sequence discriminative knowledge of the teacher\nleading to further improvement in WER. We conduct experiments on public\ndatasets namely SpeechStew and LibriSpeech, and on in-house production data.", + "authors": "Mohammad Zeineldeen, Kartik Audhkhasi, Murali Karthick Baskar, Bhuvana Ramabhadran", + "published": "2023-03-10", + "updated": "2023-03-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.SD", + "eess.AS", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2208.08840v1", + "title": "Mind the Gap in Distilling StyleGANs", + "abstract": "StyleGAN family is one of the most popular Generative Adversarial Networks\n(GANs) for unconditional generation. Despite its impressive performance, its\nhigh demand on storage and computation impedes their deployment on\nresource-constrained devices. This paper provides a comprehensive study of\ndistilling from the popular StyleGAN-like architecture. Our key insight is that\nthe main challenge of StyleGAN distillation lies in the output discrepancy\nissue, where the teacher and student model yield different outputs given the\nsame input latent code. Standard knowledge distillation losses typically fail\nunder this heterogeneous distillation scenario. We conduct thorough analysis\nabout the reasons and effects of this discrepancy issue, and identify that the\nmapping network plays a vital role in determining semantic information of\ngenerated images. Based on this finding, we propose a novel initialization\nstrategy for the student model, which can ensure the output consistency to the\nmaximum extent. To further enhance the semantic consistency between the teacher\nand student model, we present a latent-direction-based distillation loss that\npreserves the semantic relations in latent space. Extensive experiments\ndemonstrate the effectiveness of our approach in distilling StyleGAN2 and\nStyleGAN3, outperforming existing GAN distillation methods by a large margin.", + "authors": "Guodong Xu, Yuenan Hou, Ziwei Liu, Chen Change Loy", + "published": "2022-08-18", + "updated": "2022-08-18", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1901.09135v1", + "title": "Progressive Label Distillation: Learning Input-Efficient Deep Neural Networks", + "abstract": "Much of the focus in the area of knowledge distillation has been on\ndistilling knowledge from a larger teacher network to a smaller student\nnetwork. However, there has been little research on how the concept of\ndistillation can be leveraged to distill the knowledge encapsulated in the\ntraining data itself into a reduced form. In this study, we explore the concept\nof progressive label distillation, where we leverage a series of\nteacher-student network pairs to progressively generate distilled training data\nfor learning deep neural networks with greatly reduced input dimensions. 
To\ninvestigate the efficacy of the proposed progressive label distillation\napproach, we experimented with learning a deep limited vocabulary speech\nrecognition network based on generated 500ms input utterances distilled\nprogressively from 1000ms source training data, and demonstrated a significant\nincrease in test accuracy of almost 78% compared to direct learning.", + "authors": "Zhong Qiu Lin, Alexander Wong", + "published": "2019-01-26", + "updated": "2019-01-26", + "primary_cat": "cs.NE", + "cats": [ + "cs.NE", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/0908.0836v3", + "title": "Bound States for Magic State Distillation in Fault-Tolerant Quantum Computation", + "abstract": "Magic state distillation is an important primitive in fault-tolerant quantum\ncomputation. The magic states are pure non-stabilizer states which can be\ndistilled from certain mixed non-stabilizer states via Clifford group\noperations alone. Because of the Gottesman-Knill theorem, mixtures of Pauli\neigenstates are not expected to be magic state distillable, but it has been an\nopen question whether all mixed states outside this set may be distilled. In\nthis Letter we show that, when resources are finitely limited, non-distillable\nstates exist outside the stabilizer octahedron. In analogy with the bound\nentangled states, which arise in entanglement theory, we call such states bound\nstates for magic state distillation.", + "authors": "Earl T. Campbell, Dan E. Browne", + "published": "2009-08-06", + "updated": "2010-02-01", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2208.10068v1", + "title": "Tree-structured Auxiliary Online Knowledge Distillation", + "abstract": "Traditional knowledge distillation adopts a two-stage training process in\nwhich a teacher model is pre-trained and then transfers the knowledge to a\ncompact student model. To overcome the limitation, online knowledge\ndistillation is proposed to perform one-stage distillation when the teacher is\nunavailable. Recent researches on online knowledge distillation mainly focus on\nthe design of the distillation objective, including attention or gate\nmechanism. Instead, in this work, we focus on the design of the global\narchitecture and propose Tree-Structured Auxiliary online knowledge\ndistillation (TSA), which adds more parallel peers for layers close to the\noutput hierarchically to strengthen the effect of knowledge distillation.\nDifferent branches construct different views of the inputs, which can be the\nsource of the knowledge. The hierarchical structure implies that the knowledge\ntransfers from general to task-specific with the growth of the layers.\nExtensive experiments on 3 computer vision and 4 natural language processing\ndatasets show that our method achieves state-of-the-art performance without\nbells and whistles. 
To the best of our knowledge, we are the first to\ndemonstrate the effectiveness of online knowledge distillation for machine\ntranslation tasks.", + "authors": "Wenye Lin, Yangning Li, Yifeng Ding, Hai-Tao Zheng", + "published": "2022-08-22", + "updated": "2022-08-22", + "primary_cat": "cs.NI", + "cats": [ + "cs.NI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2004.03097v1", + "title": "Towards Non-task-specific Distillation of BERT via Sentence Representation Approximation", + "abstract": "Recently, BERT has become an essential ingredient of various NLP deep models\ndue to its effectiveness and universal-usability. However, the online\ndeployment of BERT is often blocked by its large-scale parameters and high\ncomputational cost. There are plenty of studies showing that the knowledge\ndistillation is efficient in transferring the knowledge from BERT into the\nmodel with a smaller size of parameters. Nevertheless, current BERT\ndistillation approaches mainly focus on task-specified distillation, such\nmethodologies lead to the loss of the general semantic knowledge of BERT for\nuniversal-usability. In this paper, we propose a sentence representation\napproximating oriented distillation framework that can distill the pre-trained\nBERT into a simple LSTM based model without specifying tasks. Consistent with\nBERT, our distilled model is able to perform transfer learning via fine-tuning\nto adapt to any sentence-level downstream task. Besides, our model can further\ncooperate with task-specific distillation procedures. The experimental results\non multiple NLP tasks from the GLUE benchmark show that our approach\noutperforms other task-specific distillation methods or even much larger\nmodels, i.e., ELMO, with efficiency well-improved.", + "authors": "Bowen Wu, Huan Zhang, Mengyuan Li, Zongsheng Wang, Qihang Feng, Junhong Huang, Baoxun Wang", + "published": "2020-04-07", + "updated": "2020-04-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1910.02551v3", + "title": "Soft-Label Dataset Distillation and Text Dataset Distillation", + "abstract": "Dataset distillation is a method for reducing dataset sizes by learning a\nsmall number of synthetic samples containing all the information of a large\ndataset. This has several benefits like speeding up model training, reducing\nenergy consumption, and reducing required storage space. Currently, each\nsynthetic sample is assigned a single `hard' label, and also, dataset\ndistillation can currently only be used with image data.\n We propose to simultaneously distill both images and their labels, thus\nassigning each synthetic sample a `soft' label (a distribution of labels). Our\nalgorithm increases accuracy by 2-4% over the original algorithm for several\nimage classification tasks. Using `soft' labels also enables distilled datasets\nto consist of fewer samples than there are classes as each sample can encode\ninformation for multiple classes. For example, training a LeNet model with 10\ndistilled images (one per class) results in over 96% accuracy on MNIST, and\nalmost 92% accuracy when trained on just 5 distilled images.\n We also extend the dataset distillation algorithm to distill sequential\ndatasets including texts. We demonstrate that text distillation outperforms\nother methods across multiple datasets. 
For example, models attain almost their\noriginal accuracy on the IMDB sentiment analysis task using just 20 distilled\nsentences.\n Our code can be found at\n$\\href{https://github.com/ilia10000/dataset-distillation}{\\text{https://github.com/ilia10000/dataset-distillation}}$.", + "authors": "Ilia Sucholutsky, Matthias Schonlau", + "published": "2019-10-06", + "updated": "2020-05-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1903.04197v7", + "title": "Structured Knowledge Distillation for Dense Prediction", + "abstract": "In this work, we consider transferring the structure information from large\nnetworks to compact ones for dense prediction tasks in computer vision.\nPrevious knowledge distillation strategies used for dense prediction tasks\noften directly borrow the distillation scheme for image classification and\nperform knowledge distillation for each pixel separately, leading to\nsub-optimal performance. Here we propose to distill structured knowledge from\nlarge networks to compact networks, taking into account the fact that dense\nprediction is a structured prediction problem. Specifically, we study two\nstructured distillation schemes: i) pair-wise distillation that distills the\npair-wise similarities by building a static graph; and ii) holistic\ndistillation that uses adversarial training to distill holistic knowledge. The\neffectiveness of our knowledge distillation approaches is demonstrated by\nexperiments on three dense prediction tasks: semantic segmentation, depth\nestimation and object detection. Code is available at: https://git.io/StructKD", + "authors": "Yifan Liu, Changyong Shun, Jingdong Wang, Chunhua Shen", + "published": "2019-03-11", + "updated": "2020-06-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2308.07719v1", + "title": "The coherent measurement cost of coherence distillation", + "abstract": "Quantum coherence is an indispensable resource for quantum technological\napplications. It is known to be distillable from a noisy form using operations\nthat cannot create coherence. However, distillation exacts a hidden coherent\nmeasurement cost, whose extent has not previously been estimated. Here we show\nthat this cost (quantified by an equivalent number of Hadamard measurements) is\nrelated to what we call the irretrievable coherence: the difference between the\ncoherence of formation and the distillable coherence. We conjecture (and make\npartial progress towards proving) that when distilling from many copies of a\ngiven noisy coherent state, the coherent measurement cost scales extensively in\nthe number of copies, at an asymptotic rate exactly equalling the input's\nirretrievable coherence. This cost applies to any application whereof coherence\ndistillation is an incidental outcome (e.g. 
incoherent randomness extraction),\nbut the implications are more dramatic if pure coherence is the only desired\noutcome: the measurement cost may often be higher than the distilled yield, in\nwhich case coherence should rather be prepared afresh than distilled from a\nnoisy input.", + "authors": "Varun Narasimhachar", + "published": "2023-08-15", + "updated": "2023-08-15", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2311.13811v2", + "title": "Education distillation:getting student models to learn in shcools", + "abstract": "Knowledge distillation is one of the methods for model compression, and\nexisting knowledge distillation techniques focus on how to improve the\ndistillation algorithm so as to enhance the distillation efficiency. This paper\nintroduces dynamic incremental learning into knowledge distillation and\nproposes a distillation strategy for education distillation. Specifically, it\nis proposed to take fragmented student models divided from the complete student\nmodel as lower-grade models. As the grade level rises, fragmented student\nmodels deepen in conjunction with designed teaching reference layers, while\nlearning and distilling from more teacher models. By moving from lower to\nhigher grades, fragmented student models were gradually integrated into a\ncomplete target student model, and the performance of the student models\ngradually improved from lower to higher grades of the stage. Education\ndistillation strategies combined with distillation algorithms outperform the\nresults of single distillation algorithms on the public dataset\nCIFAR100,Caltech256, Food-101 dataset.", + "authors": "Ling Feng, Danyang Li, Tianhao Wu, Xuliang Duan", + "published": "2023-11-23", + "updated": "2023-11-27", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2402.02781v1", + "title": "Dual Knowledge Distillation for Efficient Sound Event Detection", + "abstract": "Sound event detection (SED) is essential for recognizing specific sounds and\ntheir temporal locations within acoustic signals. This becomes challenging\nparticularly for on-device applications, where computational resources are\nlimited. To address this issue, we introduce a novel framework referred to as\ndual knowledge distillation for developing efficient SED systems in this work.\nOur proposed dual knowledge distillation commences with temporal-averaging\nknowledge distillation (TAKD), utilizing a mean student model derived from the\ntemporal averaging of the student model's parameters. This allows the student\nmodel to indirectly learn from a pre-trained teacher model, ensuring a stable\nknowledge distillation. Subsequently, we introduce embedding-enhanced feature\ndistillation (EEFD), which involves incorporating an embedding distillation\nlayer within the student model to bolster contextual learning. On DCASE 2023\nTask 4A public evaluation dataset, our proposed SED system with dual knowledge\ndistillation having merely one-third of the baseline model's parameters,\ndemonstrates superior performance in terms of PSDS1 and PSDS2. 
This highlights\nthe importance of proposed dual knowledge distillation for compact SED systems,\nwhich can be ideal for edge devices.", + "authors": "Yang Xiao, Rohan Kumar Das", + "published": "2024-02-05", + "updated": "2024-02-05", + "primary_cat": "cs.SD", + "cats": [ + "cs.SD", + "cs.AI", + "cs.CL", + "cs.LG", + "eess.AS" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2203.11932v1", + "title": "Dataset Distillation by Matching Training Trajectories", + "abstract": "Dataset distillation is the task of synthesizing a small dataset such that a\nmodel trained on the synthetic set will match the test accuracy of the model\ntrained on the full dataset. In this paper, we propose a new formulation that\noptimizes our distilled data to guide networks to a similar state as those\ntrained on real data across many training steps. Given a network, we train it\nfor several iterations on our distilled data and optimize the distilled data\nwith respect to the distance between the synthetically trained parameters and\nthe parameters trained on real data. To efficiently obtain the initial and\ntarget network parameters for large-scale datasets, we pre-compute and store\ntraining trajectories of expert networks trained on the real dataset. Our\nmethod handily outperforms existing methods and also allows us to distill\nhigher-resolution visual data.", + "authors": "George Cazenavette, Tongzhou Wang, Antonio Torralba, Alexei A. Efros, Jun-Yan Zhu", + "published": "2022-03-22", + "updated": "2022-03-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2211.08071v2", + "title": "Knowledge Distillation for Detection Transformer with Consistent Distillation Points Sampling", + "abstract": "DETR is a novel end-to-end transformer architecture object detector, which\nsignificantly outperforms classic detectors when scaling up the model size. In\nthis paper, we focus on the compression of DETR with knowledge distillation.\nWhile knowledge distillation has been well-studied in classic detectors, there\nis a lack of researches on how to make it work effectively on DETR. We first\nprovide experimental and theoretical analysis to point out that the main\nchallenge in DETR distillation is the lack of consistent distillation points.\nDistillation points refer to the corresponding inputs of the predictions for\nstudent to mimic, and reliable distillation requires sufficient distillation\npoints which are consistent between teacher and student. Based on this\nobservation, we propose a general knowledge distillation paradigm for\nDETR(KD-DETR) with consistent distillation points sampling. Specifically, we\ndecouple detection and distillation tasks by introducing a set of specialized\nobject queries to construct distillation points. In this paradigm, we further\npropose a general-to-specific distillation points sampling strategy to explore\nthe extensibility of KD-DETR. Extensive experiments on different DETR\narchitectures with various scales of backbones and transformer layers validate\nthe effectiveness and generalization of KD-DETR. 
KD-DETR boosts the performance\nof DAB-DETR with ResNet-18 and ResNet-50 backbone to 41.4$\\%$, 45.7$\\%$ mAP,\nrespectively, which are 5.2$\\%$, 3.5$\\%$ higher than the baseline, and\nResNet-50 even surpasses the teacher model by $2.2\\%$.", + "authors": "Yu Wang, Xin Li, Shengzhao Wen, Fukui Yang, Wanping Zhang, Gang Zhang, Haocheng Feng, Junyu Han, Errui Ding", + "published": "2022-11-15", + "updated": "2022-11-16", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2011.02255v2", + "title": "On Self-Distilling Graph Neural Network", + "abstract": "Recently, the teacher-student knowledge distillation framework has\ndemonstrated its potential in training Graph Neural Networks (GNNs). However,\ndue to the difficulty of training over-parameterized GNN models, one may not\neasily obtain a satisfactory teacher model for distillation. Furthermore, the\ninefficient training process of teacher-student knowledge distillation also\nimpedes its applications in GNN models. In this paper, we propose the first\nteacher-free knowledge distillation method for GNNs, termed GNN\nSelf-Distillation (GNN-SD), that serves as a drop-in replacement of the\nstandard training process. The method is built upon the proposed neighborhood\ndiscrepancy rate (NDR), which quantifies the non-smoothness of the embedded\ngraph in an efficient way. Based on this metric, we propose the adaptive\ndiscrepancy retaining (ADR) regularizer to empower the transferability of\nknowledge that maintains high neighborhood discrepancy across GNN layers. We\nalso summarize a generic GNN-SD framework that could be exploited to induce\nother distillation strategies. Experiments further prove the effectiveness and\ngeneralization of our approach, as it brings: 1) state-of-the-art GNN\ndistillation performance with less training cost, 2) consistent and\nconsiderable performance enhancement for various popular backbones.", + "authors": "Yuzhao Chen, Yatao Bian, Xi Xiao, Yu Rong, Tingyang Xu, Junzhou Huang", + "published": "2020-11-04", + "updated": "2021-04-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.04057v1", + "title": "Score identity Distillation: Exponentially Fast Distillation of Pretrained Diffusion Models for One-Step Generation", + "abstract": "We introduce Score identity Distillation (SiD), an innovative data-free\nmethod that distills the generative capabilities of pretrained diffusion models\ninto a single-step generator. SiD not only facilitates an exponentially fast\nreduction in Fr\\'echet inception distance (FID) during distillation but also\napproaches or even exceeds the FID performance of the original teacher\ndiffusion models. By reformulating forward diffusion processes as semi-implicit\ndistributions, we leverage three score-related identities to create an\ninnovative loss mechanism. This mechanism achieves rapid FID reduction by\ntraining the generator using its own synthesized images, eliminating the need\nfor real data or reverse-diffusion-based generation, all accomplished within\nsignificantly shortened generation time. Upon evaluation across four benchmark\ndatasets, the SiD algorithm demonstrates high iteration efficiency during\ndistillation and surpasses competing distillation approaches, whether they are\none-step or few-step, data-free, or dependent on training data, in terms of\ngeneration quality. 
This achievement not only redefines the benchmarks for\nefficiency and effectiveness in diffusion distillation but also in the broader\nfield of diffusion-based generation. Our PyTorch implementation will be\npublicly accessible on GitHub.", + "authors": "Mingyuan Zhou, Huangjie Zheng, Zhendong Wang, Mingzhang Yin, Hai Huang", + "published": "2024-04-05", + "updated": "2024-04-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2304.05563v2", + "title": "Entanglement distillation in terms of Schmidt rank and matrix rank", + "abstract": "Entanglement distillation is a key task in quantum-information processing. In\nthis paper, we distill non-positive-partial-transpose (NPT) bipartite states of\nsome given Schmidt rank and matrix rank. We show that all bipartite states of\nSchmidt rank two are locally equivalent to classical-classical states, and all\nbipartite states of Schmidt rank three are 1-undistillable. Subsequently, we\nshow that low-rank B-irreducible NPT states are distillable for large-rank\nreduced density operators by proving low-rank B-irreducible NPT state whose\nrange contains a product vector is distillable. Eventually, we present an\nequivalent condition to distill $M\\times N$ bipartite states of rank\n$\\max\\{M,N\\}+1$.", + "authors": "Tianyi Ding, Lin Chen", + "published": "2023-04-12", + "updated": "2023-07-06", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2304.14800v1", + "title": "Multi-to-Single Knowledge Distillation for Point Cloud Semantic Segmentation", + "abstract": "3D point cloud semantic segmentation is one of the fundamental tasks for\nenvironmental understanding. Although significant progress has been made in\nrecent years, the performance of classes with few examples or few points is\nstill far from satisfactory. In this paper, we propose a novel multi-to-single\nknowledge distillation framework for the 3D point cloud semantic segmentation\ntask to boost the performance of those hard classes. Instead of fusing all the\npoints of multi-scans directly, only the instances that belong to the\npreviously defined hard classes are fused. To effectively and sufficiently\ndistill valuable knowledge from multi-scans, we leverage a multilevel\ndistillation framework, i.e., feature representation distillation, logit\ndistillation, and affinity distillation. We further develop a novel\ninstance-aware affinity distillation algorithm for capturing high-level\nstructural knowledge to enhance the distillation efficacy for hard classes.\nFinally, we conduct experiments on the SemanticKITTI dataset, and the results\non both the validation and test sets demonstrate that our method yields\nsubstantial improvements compared with the baseline method. The code is\navailable at \\Url{https://github.com/skyshoumeng/M2SKD}.", + "authors": "Shoumeng Qiu, Feng Jiang, Haiqiang Zhang, Xiangyang Xue, Jian Pu", + "published": "2023-04-28", + "updated": "2023-04-28", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2312.06899v1", + "title": "LoRA-Enhanced Distillation on Guided Diffusion Models", + "abstract": "Diffusion models, such as Stable Diffusion (SD), offer the ability to\ngenerate high-resolution images with diverse features, but they come at a\nsignificant computational and memory cost. 
In classifier-free guided diffusion\nmodels, prolonged inference times are attributed to the necessity of computing\ntwo separate diffusion models at each denoising step. Recent work has shown\npromise in improving inference time through distillation techniques, teaching\nthe model to perform similar denoising steps with reduced computations.\nHowever, the application of distillation introduces additional memory overhead\nto these already resource-intensive diffusion models, making it less practical.\n To address these challenges, our research explores a novel approach that\ncombines Low-Rank Adaptation (LoRA) with model distillation to efficiently\ncompress diffusion models. This approach not only reduces inference time but\nalso mitigates memory overhead, and notably decreases memory consumption even\nbefore applying distillation. The results are remarkable, featuring a\nsignificant reduction in inference time due to the distillation process and a\nsubstantial 50% reduction in memory consumption. Our examination of the\ngenerated images underscores that the incorporation of LoRA-enhanced\ndistillation maintains image quality and alignment with the provided prompts.\nIn summary, while conventional distillation tends to increase memory\nconsumption, LoRA-enhanced distillation offers optimization without any\ntrade-offs or compromises in quality.", + "authors": "Pareesa Ameneh Golnari", + "published": "2023-12-12", + "updated": "2023-12-12", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2305.08076v1", + "title": "Improving Defensive Distillation using Teacher Assistant", + "abstract": "Adversarial attacks pose a significant threat to the security and safety of\ndeep neural networks being applied to modern applications. More specifically,\nin computer vision-based tasks, experts can use the knowledge of model\narchitecture to create adversarial samples imperceptible to the human eye.\nThese attacks can lead to security problems in popular applications such as\nself-driving cars, face recognition, etc. Hence, building networks which are\nrobust to such attacks is highly desirable and essential. Among the various\nmethods present in literature, defensive distillation has shown promise in\nrecent years. Using knowledge distillation, researchers have been able to\ncreate models robust against some of those attacks. However, more attacks have\nbeen developed exposing weakness in defensive distillation. In this project, we\nderive inspiration from teacher assistant knowledge distillation and propose\nthat introducing an assistant network can improve the robustness of the\ndistilled model. Through a series of experiments, we evaluate the distilled\nmodels for different distillation temperatures in terms of accuracy,\nsensitivity, and robustness. Our experiments demonstrate that the proposed\nhypothesis can improve robustness in most cases. 
Additionally, we show that\nmulti-step distillation can further improve robustness with very little impact\non model accuracy.", + "authors": "Maniratnam Mandal, Suna Gao", + "published": "2023-05-14", + "updated": "2023-05-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CR", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0312123v2", + "title": "Many copies may be required for entanglement distillation", + "abstract": "A mixed quantum state shared between two parties is said to be distillable\nif, by means of a protocol involving only local quantum operations and\nclassical communication, the two parties can transform some number of copies of\nthat state into a single shared pair of qubits having high fidelity with a\nmaximally entangled state state. In this paper it is proved that there exist\nstates that are distillable, but for which an arbitrarily large number of\ncopies is required before any distillation procedure can produce a shared pair\nof qubits with even a small amount of entanglement. Specifically, for every\npositive integer n there exists a state that is distillable, but given n or\nfewer copies of that state every distillation procedure outputting a single\nshared pair of qubits will output those qubits in a separable state.\nEssentially all previous examples of states proved to be distillable were such\nthat some distillation procedure could output an entangled pair of qubits given\na single copy of the state in question.", + "authors": "John Watrous", + "published": "2003-12-15", + "updated": "2004-05-31", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.06170v1", + "title": "CLIP-Embed-KD: Computationally Efficient Knowledge Distillation Using Embeddings as Teachers", + "abstract": "Contrastive Language-Image Pre-training (CLIP) has been shown to improve\nzero-shot generalization capabilities of language and vision models. In this\npaper, we extend CLIP for efficient knowledge distillation, by utilizing\nembeddings as teachers. Typical knowledge distillation frameworks require\nrunning forward passes through a teacher model, which is often prohibitive in\nthe case of billion or trillion parameter teachers. In these cases, using only\nthe embeddings of the teacher models to guide the distillation can yield\nsignificant computational savings. Our preliminary findings show that\nCLIP-based knowledge distillation with embeddings can outperform full scale\nknowledge distillation using $9\\times$ less memory and $8\\times$ less training\ntime. Code available at: https://github.com/lnairGT/CLIP-Distillation/", + "authors": "Lakshmi Nair", + "published": "2024-04-09", + "updated": "2024-04-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2006.01683v1", + "title": "Channel Distillation: Channel-Wise Attention for Knowledge Distillation", + "abstract": "Knowledge distillation is to transfer the knowledge from the data learned by\nthe teacher network to the student network, so that the student has the\nadvantage of less parameters and less calculations, and the accuracy is close\nto the teacher. In this paper, we propose a new distillation method, which\ncontains two transfer distillation strategies and a loss decay strategy. The\nfirst transfer strategy is based on channel-wise attention, called Channel\nDistillation (CD). 
CD transfers the channel information from the teacher to the\nstudent. The second is Guided Knowledge Distillation (GKD). Unlike Knowledge\nDistillation (KD), which allows the student to mimic each sample's prediction\ndistribution of the teacher, GKD only enables the student to mimic the correct\noutput of the teacher. The last part is Early Decay Teacher (EDT). During the\ntraining process, we gradually decay the weight of the distillation loss. The\npurpose is to enable the student to gradually control the optimization rather\nthan the teacher. Our proposed method is evaluated on ImageNet and CIFAR100. On\nImageNet, we achieve 27.68% of top-1 error with ResNet18, which outperforms\nstate-of-the-art methods. On CIFAR100, we achieve surprising result that the\nstudent outperforms the teacher. Code is available at\nhttps://github.com/zhouzaida/channel-distillation.", + "authors": "Zaida Zhou, Chaoran Zhuge, Xinwei Guan, Wen Liu", + "published": "2020-06-02", + "updated": "2020-06-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2205.16004v3", + "title": "What Knowledge Gets Distilled in Knowledge Distillation?", + "abstract": "Knowledge distillation aims to transfer useful information from a teacher\nnetwork to a student network, with the primary goal of improving the student's\nperformance for the task at hand. Over the years, there has a been a deluge of\nnovel techniques and use cases of knowledge distillation. Yet, despite the\nvarious improvements, there seems to be a glaring gap in the community's\nfundamental understanding of the process. Specifically, what is the knowledge\nthat gets distilled in knowledge distillation? In other words, in what ways\ndoes the student become similar to the teacher? Does it start to localize\nobjects in the same way? Does it get fooled by the same adversarial samples?\nDoes its data invariance properties become similar? Our work presents a\ncomprehensive study to try to answer these questions. We show that existing\nmethods can indeed indirectly distill these properties beyond improving task\nperformance. We further study why knowledge distillation might work this way,\nand show that our findings have practical implications as well.", + "authors": "Utkarsh Ojha, Yuheng Li, Anirudh Sundara Rajan, Yingyu Liang, Yong Jae Lee", + "published": "2022-05-31", + "updated": "2023-11-06", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1812.00249v1", + "title": "On Compressing U-net Using Knowledge Distillation", + "abstract": "We study the use of knowledge distillation to compress the U-net\narchitecture. We show that, while standard distillation is not sufficient to\nreliably train a compressed U-net, introducing other regularization methods,\nsuch as batch normalization and class re-weighting, in knowledge distillation\nsignificantly improves the training process. 
This allows us to compress a U-net\nby over 1000x, i.e., to 0.1% of its original number of parameters, at a\nnegligible decrease in performance.", + "authors": "Karttikeya Mangalam, Mathieu Salzamann", + "published": "2018-12-01", + "updated": "2018-12-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0607126v3", + "title": "Random bipartite entanglement from W and W-like states", + "abstract": "We describe a protocol for distilling maximally entangled bipartite states\nbetween random pairs of parties from those sharing a tripartite W state, and\nshow that, rather surprisingly, the total distillation rate (the total number\nof EPR pairs distilled per W, irrespective of who shares them) may be done at a\nhigher rate than distillation of bipartite entanglement between specified pairs\nof parties. Specifically, the optimal distillation rate for specified\nentanglement for the W has been previously shown to be the asymptotic\nentanglement of assistance of 0.92 EPR pairs per W, while our protocol can\nasymptotically distill 1 EPR pair per W between random pairs of parties, which\nwe conjecture to be optimal. We thus demonstrate a tradeoff between the overall\nasymptotic rate of EPR distillation and the distribution of final EPR pairs\nbetween parties. We further show that by increasing the number of parties in\nthe protocol that there exist states with fixed lower-bounded distillable\nentanglement for random parties but arbitrarily small distillable entanglement\nfor specified parties.", + "authors": "Ben Fortescue, Hoi-Kwong Lo", + "published": "2006-07-18", + "updated": "2007-02-23", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2403.09053v1", + "title": "Towards a theory of model distillation", + "abstract": "Distillation is the task of replacing a complicated machine learning model\nwith a simpler model that approximates the original [BCNM06,HVD15]. Despite\nmany practical applications, basic questions about the extent to which models\ncan be distilled, and the runtime and amount of data needed to distill, remain\nlargely open.\n To study these questions, we initiate a general theory of distillation,\ndefining PAC-distillation in an analogous way to PAC-learning [Val84]. As\napplications of this theory: (1) we propose new algorithms to extract the\nknowledge stored in the trained weights of neural networks -- we show how to\nefficiently distill neural networks into succinct, explicit decision tree\nrepresentations when possible by using the ``linear representation\nhypothesis''; and (2) we prove that distillation can be much cheaper than\nlearning from scratch, and make progress on characterizing its complexity.", + "authors": "Enric Boix-Adsera", + "published": "2024-03-14", + "updated": "2024-03-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.NE" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1912.12630v1", + "title": "Real-time Policy Distillation in Deep Reinforcement Learning", + "abstract": "Policy distillation in deep reinforcement learning provides an effective way\nto transfer control policies from a larger network to a smaller untrained\nnetwork without a significant degradation in performance. 
However, policy\ndistillation is underexplored in deep reinforcement learning, and existing\napproaches are computationally inefficient, resulting in a long distillation\ntime. In addition, the effectiveness of the distillation process is still\nlimited to the model capacity. We propose a new distillation mechanism, called\nreal-time policy distillation, in which training the teacher model and\ndistilling the policy to the student model occur simultaneously. Accordingly,\nthe teacher's latest policy is transferred to the student model in real time.\nThis reduces the distillation time to half the original time or even less and\nalso makes it possible for extremely small student models to learn skills at\nthe expert level. We evaluated the proposed algorithm in the Atari 2600 domain.\nThe results show that our approach can achieve full distillation in most games,\neven with compression ratios up to 1.7%.", + "authors": "Yuxiang Sun, Pooyan Fazli", + "published": "2019-12-29", + "updated": "2019-12-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2405.00348v1", + "title": "Practical Dataset Distillation Based on Deep Support Vectors", + "abstract": "Conventional dataset distillation requires significant computational\nresources and assumes access to the entire dataset, an assumption impractical\nas it presumes all data resides on a central server. In this paper, we focus on\ndataset distillation in practical scenarios with access to only a fraction of\nthe entire dataset. We introduce a novel distillation method that augments the\nconventional process by incorporating general model knowledge via the addition\nof Deep KKT (DKKT) loss. In practical settings, our approach showed improved\nperformance compared to the baseline distribution matching distillation method\non the CIFAR-10 dataset. Additionally, we present experimental evidence that\nDeep Support Vectors (DSVs) offer unique information to the original\ndistillation, and their integration results in enhanced performance.", + "authors": "Hyunho Lee, Junhoo Lee, Nojun Kwak", + "published": "2024-05-01", + "updated": "2024-05-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.00264v1", + "title": "DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation", + "abstract": "Dataset distillation aims to compress a training dataset by creating a small\nnumber of informative synthetic samples such that neural networks trained on\nthem perform as well as those trained on the original training dataset. Current\ntext dataset distillation methods create each synthetic sample as a sequence of\nword embeddings instead of a text to apply gradient-based optimization;\nhowever, such embedding-level distilled datasets cannot be used for training\nother models whose word embedding weights are different from the model used for\ndistillation. To address this issue, we propose a novel text dataset\ndistillation approach, called Distilling dataset into Language Model (DiLM),\nwhich trains a language model to generate informative synthetic training\nsamples as text data, instead of directly optimizing synthetic samples. We\nevaluated DiLM on various text classification datasets and showed that\ndistilled synthetic datasets from DiLM outperform those from current coreset\nselection methods. 
DiLM achieved remarkable generalization performance in\ntraining different types of models and in-context learning of large language\nmodels. Our code will be available at https://github.com/arumaekawa/DiLM.", + "authors": "Aru Maekawa, Satoshi Kosugi, Kotaro Funakoshi, Manabu Okumura", + "published": "2024-03-30", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2312.00739v1", + "title": "Adversarial Score Distillation: When score distillation meets GAN", + "abstract": "Existing score distillation methods are sensitive to classifier-free guidance\n(CFG) scale: manifested as over-smoothness or instability at small CFG scales,\nwhile over-saturation at large ones. To explain and analyze these issues, we\nrevisit the derivation of Score Distillation Sampling (SDS) and decipher\nexisting score distillation with the Wasserstein Generative Adversarial Network\n(WGAN) paradigm. With the WGAN paradigm, we find that existing score\ndistillation either employs a fixed sub-optimal discriminator or conducts\nincomplete discriminator optimization, resulting in the scale-sensitive issue.\nWe propose the Adversarial Score Distillation (ASD), which maintains an\noptimizable discriminator and updates it using the complete optimization\nobjective. Experiments show that the proposed ASD performs favorably in 2D\ndistillation and text-to-3D tasks against existing methods. Furthermore, to\nexplore the generalization ability of our WGAN paradigm, we extend ASD to the\nimage editing task, which achieves competitive results. The project page and\ncode are at https://github.com/2y7c3/ASD.", + "authors": "Min Wei, Jingkai Zhou, Junyao Sun, Xuesong Zhang", + "published": "2023-12-01", + "updated": "2023-12-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0202165v1", + "title": "Distinguishing locally of quantum states and the distillation of entanglement", + "abstract": "This paper try to probe the relation of distinguishing locally and\ndistillation of entanglement. The distinguishing information (DI) and the\nmaximal distinguishing information (MDI) of a set of pure states are defined.\nThe interpretation of distillation of entanglement in term of information is\ngiven. The relation between the maximal distinguishing information and\ndistillable entanglement is gained. As a application of this relation the\ndistillable entanglement of Bell-diagonal states is present.", + "authors": "ping-xing. chen, Cheng-zu Li", + "published": "2002-02-27", + "updated": "2002-02-27", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2401.15863v1", + "title": "Importance-Aware Adaptive Dataset Distillation", + "abstract": "Herein, we propose a novel dataset distillation method for constructing small\ninformative datasets that preserve the information of the large original\ndatasets. The development of deep learning models is enabled by the\navailability of large-scale datasets. Despite unprecedented success,\nlarge-scale datasets considerably increase the storage and transmission costs,\nresulting in a cumbersome model training process. Moreover, using raw data for\ntraining raises privacy and copyright concerns. 
To address these issues, a new\ntask named dataset distillation has been introduced, aiming to synthesize a\ncompact dataset that retains the essential information from the large original\ndataset. State-of-the-art (SOTA) dataset distillation methods have been\nproposed by matching gradients or network parameters obtained during training\non real and synthetic datasets. The contribution of different network\nparameters to the distillation process varies, and uniformly treating them\nleads to degraded distillation performance. Based on this observation, we\npropose an importance-aware adaptive dataset distillation (IADD) method that\ncan improve distillation performance by automatically assigning importance\nweights to different network parameters during distillation, thereby\nsynthesizing more robust distilled datasets. IADD demonstrates superior\nperformance over other SOTA dataset distillation methods based on parameter\nmatching on multiple benchmark datasets and outperforms them in terms of\ncross-architecture generalization. In addition, the analysis of self-adaptive\nweights demonstrates the effectiveness of IADD. Furthermore, the effectiveness\nof IADD is validated in a real-world medical application such as COVID-19\ndetection.", + "authors": "Guang Li, Ren Togo, Takahiro Ogawa, Miki Haseyama", + "published": "2024-01-29", + "updated": "2024-01-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2308.14286v2", + "title": "Bridging Cross-task Protocol Inconsistency for Distillation in Dense Object Detection", + "abstract": "Knowledge distillation (KD) has shown potential for learning compact models\nin dense object detection. However, the commonly used softmax-based\ndistillation ignores the absolute classification scores for individual\ncategories. Thus, the optimum of the distillation loss does not necessarily\nlead to the optimal student classification scores for dense object detectors.\nThis cross-task protocol inconsistency is critical, especially for dense object\ndetectors, since the foreground categories are extremely imbalanced. To address\nthe issue of protocol differences between distillation and classification, we\npropose a novel distillation method with cross-task consistent protocols,\ntailored for the dense object detection. For classification distillation, we\naddress the cross-task protocol inconsistency problem by formulating the\nclassification logit maps in both teacher and student models as multiple\nbinary-classification maps and applying a binary-classification distillation\nloss to each map. For localization distillation, we design an IoU-based\nLocalization Distillation Loss that is free from specific network structures\nand can be compared with existing localization distillation losses. Our\nproposed method is simple but effective, and experimental results demonstrate\nits superiority over existing methods. 
Code is available at\nhttps://github.com/TinyTigerPan/BCKD.", + "authors": "Longrong Yang, Xianpan Zhou, Xuewei Li, Liang Qiao, Zheyang Li, Ziwei Yang, Gaoang Wang, Xi Li", + "published": "2023-08-28", + "updated": "2024-03-12", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2307.12732v1", + "title": "CLIP-KD: An Empirical Study of Distilling CLIP Models", + "abstract": "CLIP has become a promising language-supervised visual pre-training framework\nand achieves excellent performance over a wide range of tasks. This paper aims\nto distill small CLIP models supervised by a large teacher CLIP model. We\npropose several distillation strategies, including relation, feature, gradient\nand contrastive paradigm, to examine the impact on CLIP distillation. We show\nthat the simplest feature mimicry with MSE loss performs best. Moreover,\ninteractive contrastive learning and relation-based distillation are also\ncritical in performance improvement. We apply the unified method to distill\nseveral student networks trained on 15 million (image, text) pairs.\nDistillation improves the student CLIP models consistently over zero-shot\nImageNet classification and cross-modal retrieval benchmarks. We hope our\nempirical study will become an important baseline for future CLIP distillation\nresearch. The code is available at \\url{https://github.com/winycg/CLIP-KD}.", + "authors": "Chuanguang Yang, Zhulin An, Libo Huang, Junyu Bi, Xinqiang Yu, Han Yang, Yongjun Xu", + "published": "2023-07-24", + "updated": "2023-07-24", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2303.05015v2", + "title": "Smooth and Stepwise Self-Distillation for Object Detection", + "abstract": "Distilling the structured information captured in feature maps has\ncontributed to improved results for object detection tasks, but requires\ncareful selection of baseline architectures and substantial pre-training.\nSelf-distillation addresses these limitations and has recently achieved\nstate-of-the-art performance for object detection despite making several\nsimplifying architectural assumptions. Building on this work, we propose Smooth\nand Stepwise Self-Distillation (SSSD) for object detection. Our SSSD\narchitecture forms an implicit teacher from object labels and a feature pyramid\nnetwork backbone to distill label-annotated feature maps using Jensen-Shannon\ndistance, which is smoother than distillation losses used in prior work. We\nadditionally add a distillation coefficient that is adaptively configured based\non the learning rate. We extensively benchmark SSSD against a baseline and two\nstate-of-the-art object detector architectures on the COCO dataset by varying\nthe coefficients and backbone and detector networks. 
We demonstrate that SSSD\nachieves higher average precision in most experimental settings, is robust to a\nwide range of coefficients, and benefits from our stepwise distillation\nprocedure.", + "authors": "Jieren Deng, Xin Zhou, Hao Tian, Zhihong Pan, Derek Aguiar", + "published": "2023-03-09", + "updated": "2024-01-19", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/0803.0345v2", + "title": "Secret key distillation from shielded two-qubit states", + "abstract": "The quantum states corresponding to a secret key are characterized using the\nso-called private states, where the key part consisting of a secret key is\nshielded by the additional systems. Based on the construction, it was shown\nthat a secret key can be distilled from bound entangled states. In this work, I\nconsider the shielded two-qubit states in a key-distillation scenario and\nderive the conditions under which a secret key can be distilled using the\nrecurrence protocol or the two-way classical distillation, advantage\ndistillation together with one-way postprocessing. From the security\nconditions, it is shown that a secret key can be distilled from bound entangled\nstates in a much wider range. In addition, I consider the case that in which\nwhite noise is added to quantum states and show that the classical distillation\nprotocol still works despite a certain amount of noise although the recurrence\nprotocol does not.", + "authors": "Joonwoo Bae", + "published": "2008-03-03", + "updated": "2010-09-22", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2310.18628v2", + "title": "Personalised Distillation: Empowering Open-Sourced LLMs with Adaptive Learning for Code Generation", + "abstract": "With the rise of powerful closed-sourced LLMs (ChatGPT, GPT-4), there are\nincreasing interests in distilling the capabilies of close-sourced LLMs to\nsmaller open-sourced LLMs. Previous distillation methods usually prompt ChatGPT\nto generate a set of instructions and answers, for the student model to learn.\nHowever, such standard distillation approach neglects the merits and conditions\nof the student model. Inspired by modern teaching principles, we design a\npersonalised distillation process, in which the student attempts to solve a\ntask first, then the teacher provides an adaptive refinement for the student to\nimprove. Instead of feeding the student with teacher's prior, personalised\ndistillation enables personalised learning for the student model, as it only\nlearns on examples it makes mistakes upon and learns to improve its own\nsolution. On code generation, personalised distillation consistently\noutperforms standard distillation with only one third of the data. With only\n2.5-3K personalised examples that incur a data-collection cost of 4-6$, we\nboost CodeGen-mono-16B by 7% to achieve 36.4% pass@1 and StarCoder by 12.2% to\nachieve 45.8% pass@1 on HumanEval.", + "authors": "Hailin Chen, Amrita Saha, Steven Hoi, Shafiq Joty", + "published": "2023-10-28", + "updated": "2024-01-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0108029v1", + "title": "Distillability, Bell inequalities and multiparticle bound entanglement", + "abstract": "We study the relation between violation of Bell inequalities and\ndistillability properties of quantum states. 
Recently, D\\\"ur has shown that\nthere are some multiparticle bound entangled states, non-separable and\nnon-distillable, that violate a Bell inequality. We prove that for all the\nstates violating this inequality there exist at least one splitting of the\nparties into two groups such that some pure-state entanglement can be\ndistilled, obtaining a connection between Bell inequalities and bipartite\ndistillable entanglement.", + "authors": "A. Acin", + "published": "2001-08-07", + "updated": "2001-08-07", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0001084v2", + "title": "Distillation of GHZ states by selective information manipulation", + "abstract": "Methods for distilling maximally entangled tripartite (GHZ) states from\narbitrary entangled tripartite pure states are described. These techniques work\nfor virtually any input state. Each technique has two stages which we call\nprimary and secondary distillation. Primary distillation produces a GHZ state\nwith some probability, so that when applied to an ensemble of systems, a\ncertain percentage is discarded. Secondary distillation produces further GHZs\nfrom the discarded systems. These protocols are developed with the help of an\napproach to quantum information theory based on absolutely selective\ninformation, which has other potential applications.", + "authors": "Oliver Cohen, Todd A. Brun", + "published": "2000-01-23", + "updated": "2000-02-02", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2403.10045v1", + "title": "Towards Adversarially Robust Dataset Distillation by Curvature Regularization", + "abstract": "Dataset distillation (DD) allows datasets to be distilled to fractions of\ntheir original size while preserving the rich distributional information so\nthat models trained on the distilled datasets can achieve a comparable accuracy\nwhile saving significant computational loads. Recent research in this area has\nbeen focusing on improving the accuracy of models trained on distilled\ndatasets. In this paper, we aim to explore a new perspective of DD. We study\nhow to embed adversarial robustness in distilled datasets, so that models\ntrained on these datasets maintain the high accuracy and meanwhile acquire\nbetter adversarial robustness. We propose a new method that achieves this goal\nby incorporating curvature regularization into the distillation process with\nmuch less computational overhead than standard adversarial training. Extensive\nempirical experiments suggest that our method not only outperforms standard\nadversarial training on both accuracy and robustness with less computation\noverhead but is also capable of generating robust distilled datasets that can\nwithstand various adversarial attacks.", + "authors": "Eric Xue, Yijiang Li, Haoyang Liu, Yifan Shen, Haohan Wang", + "published": "2024-03-15", + "updated": "2024-03-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2104.11928v1", + "title": "Extract then Distill: Efficient and Effective Task-Agnostic BERT Distillation", + "abstract": "Task-agnostic knowledge distillation, a teacher-student framework, has been\nproved effective for BERT compression. Although achieving promising results on\nNLP tasks, it requires enormous computational resources. 
In this paper, we\npropose Extract Then Distill (ETD), a generic and flexible strategy to reuse\nthe teacher's parameters for efficient and effective task-agnostic\ndistillation, which can be applied to students of any size. Specifically, we\nintroduce two variants of ETD, ETD-Rand and ETD-Impt, which extract the\nteacher's parameters in a random manner and by following an importance metric\nrespectively. In this way, the student has already acquired some knowledge at\nthe beginning of the distillation process, which makes the distillation process\nconverge faster. We demonstrate the effectiveness of ETD on the GLUE benchmark\nand SQuAD. The experimental results show that: (1) compared with the baseline\nwithout an ETD strategy, ETD can save 70\\% of computation cost. Moreover, it\nachieves better results than the baseline when using the same computing\nresource. (2) ETD is generic and has been proven effective for different\ndistillation methods (e.g., TinyBERT and MiniLM) and students of different\nsizes. The source code will be publicly available upon publication.", + "authors": "Cheng Chen, Yichun Yin, Lifeng Shang, Zhi Wang, Xin Jiang, Xiao Chen, Qun Liu", + "published": "2021-04-24", + "updated": "2021-04-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2106.12591v1", + "title": "Magic State Distillation from Entangled States", + "abstract": "Magic can be distributed non-locally in many-body entangled states, such as\nthe low energy states of condensed matter systems. Using the Bravyi-Kitaev\nmagic state distillation protocol, we find that non-local magic is distillable\nand can improve the distillation outcome. We analyze a few explicit examples\nand show that spin squeezing can be used to convert non-distillable states into\ndistillable ones.\n Our analysis also suggests that the conventional product input states assumed\nby magic distillation protocols are extremely atypical among general states\nwith distillable magic. It further justifies the need for studying a diverse\nrange of entangled inputs that yield magic states with high probability.", + "authors": "Ning Bao, ChunJun Cao, Vincent Paul Su", + "published": "2021-06-23", + "updated": "2021-06-23", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2302.09632v1", + "title": "HomoDistil: Homotopic Task-Agnostic Distillation of Pre-trained Transformers", + "abstract": "Knowledge distillation has been shown to be a powerful model compression\napproach to facilitate the deployment of pre-trained language models in\npractice. This paper focuses on task-agnostic distillation. It produces a\ncompact pre-trained model that can be easily fine-tuned on various tasks with\nsmall computational costs and memory footprints. Despite the practical\nbenefits, task-agnostic distillation is challenging. Since the teacher model\nhas a significantly larger capacity and stronger representation power than the\nstudent model, it is very difficult for the student to produce predictions that\nmatch the teacher's over a massive amount of open-domain training data. Such a\nlarge prediction discrepancy often diminishes the benefits of knowledge\ndistillation. To address this challenge, we propose Homotopic Distillation\n(HomoDistil), a novel task-agnostic distillation approach equipped with\niterative pruning. 
Specifically, we initialize the student model from the\nteacher model, and iteratively prune the student's neurons until the target\nwidth is reached. Such an approach maintains a small discrepancy between the\nteacher's and student's predictions throughout the distillation process, which\nensures the effectiveness of knowledge transfer. Extensive experiments\ndemonstrate that HomoDistil achieves significant improvements on existing\nbaselines.", + "authors": "Chen Liang, Haoming Jiang, Zheng Li, Xianfeng Tang, Bin Yin, Tuo Zhao", + "published": "2023-02-19", + "updated": "2023-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0008047v2", + "title": "A semidefinite program for distillable entanglement", + "abstract": "We show that the maximum fidelity obtained by a p.p.t. distillation protocol\nis given by the solution to a certain semidefinite program. This gives a number\nof new lower and upper bounds on p.p.t. distillable entanglement (and thus new\nupper bounds on 2-locally distillable entanglement). In the presence of\nsymmetry, the semidefinite program simplifies considerably, becoming a linear\nprogram in the case of isotropic and Werner states. Using these techniques, we\ndetermine the p.p.t. distillable entanglement of asymmetric Werner states and\n``maximally correlated'' states. We conclude with a discussion of possible\napplications of semidefinite programming to quantum codes and 1-local\ndistillation.", + "authors": "Eric M. Rains", + "published": "2000-08-10", + "updated": "2001-04-12", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2305.18381v3", + "title": "Distill Gold from Massive Ores: Efficient Dataset Distillation via Critical Samples Selection", + "abstract": "Data-efficient learning has garnered significant attention, especially given\nthe current trend of large multi-modal models. Recently, dataset distillation\nbecomes an effective approach for data-efficiency; however, the distillation\nprocess itself can still be inefficient. In this work, we model the dataset\ndistillation task within the context of information transport. By observing the\nsubstantial data redundancy inherent in the distillation, we argue to put more\nemphasis on the samples' utility for the distillation task. We introduce and\nvalidate a family of data utility estimators and optimal data selection methods\nto exploit the most valuable samples. This strategy significantly reduces the\ntraining costs and extends various existing distillation algorithms to larger\nand more diversified datasets, e.g., in some cases only 0.04% training data is\nsufficient for comparable distillation performance. Our method consistently\nenhances the distillation algorithms, even on much larger-scale and more\nheterogeneous datasets, e.g. ImageNet-1K and Kinetics-400. This paradigm opens\nup new avenues in the dynamics of distillation and paves the way for efficient\ndataset distillation. 
Our code is available on\nhttps://github.com/silicx/GoldFromOres .", + "authors": "Yue Xu, Yong-Lu Li, Kaitong Cui, Ziyu Wang, Cewu Lu, Yu-Wing Tai, Chi-Keung Tang", + "published": "2023-05-28", + "updated": "2023-11-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2006.08572v3", + "title": "Flexible Dataset Distillation: Learn Labels Instead of Images", + "abstract": "We study the problem of dataset distillation - creating a small set of\nsynthetic examples capable of training a good model. In particular, we study\nthe problem of label distillation - creating synthetic labels for a small set\nof real images, and show it to be more effective than the prior image-based\napproach to dataset distillation. Methodologically, we introduce a more robust\nand flexible meta-learning algorithm for distillation, as well as an effective\nfirst-order strategy based on convex optimization layers. Distilling labels\nwith our new algorithm leads to improved results over prior image-based\ndistillation. More importantly, it leads to clear improvements in flexibility\nof the distilled dataset in terms of compatibility with off-the-shelf\noptimizers and diverse neural architectures. Interestingly, label distillation\ncan also be applied across datasets, for example enabling learning Japanese\ncharacter recognition by training only on synthetically labeled English\nletters.", + "authors": "Ondrej Bohdal, Yongxin Yang, Timothy Hospedales", + "published": "2020-06-15", + "updated": "2020-12-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2103.16367v1", + "title": "Complementary Relation Contrastive Distillation", + "abstract": "Knowledge distillation aims to transfer representation ability from a teacher\nmodel to a student model. Previous approaches focus on either individual\nrepresentation distillation or inter-sample similarity preservation. While we\nargue that the inter-sample relation conveys abundant information and needs to\nbe distilled in a more effective way. In this paper, we propose a novel\nknowledge distillation method, namely Complementary Relation Contrastive\nDistillation (CRCD), to transfer the structural knowledge from the teacher to\nthe student. Specifically, we estimate the mutual relation in an anchor-based\nway and distill the anchor-student relation under the supervision of its\ncorresponding anchor-teacher relation. To make it more robust, mutual relations\nare modeled by two complementary elements: the feature and its gradient.\nFurthermore, the low bound of mutual information between the anchor-teacher\nrelation distribution and the anchor-student relation distribution is maximized\nvia relation contrastive loss, which can distill both the sample representation\nand the inter-sample relations. Experiments on different benchmarks demonstrate\nthe effectiveness of our proposed CRCD.", + "authors": "Jinguo Zhu, Shixiang Tang, Dapeng Chen, Shijie Yu, Yakun Liu, Aijun Yang, Mingzhe Rong, Xiaohua Wang", + "published": "2021-03-29", + "updated": "2021-03-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/0704.3661v1", + "title": "Complementarity, distillable secret key, and distillable entanglement", + "abstract": "We consider controllability of two conjugate observables Z and X by two\nparties with classical communication. 
The ability is specified by two\nalternative tasks, (i) agreement on Z and (ii) preparation of an eigenstate of\nX with use of an extra communication channel. We prove that their feasibility\nis equivalent to that of key distillation if the extra channel is quantum, and\nto that of entanglement distillation if it is classical. This clarifies the\ndistinction between two entanglement measures, distillable key and distillable\nentanglement.", + "authors": "Masato Koashi", + "published": "2007-04-27", + "updated": "2007-04-27", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0305188v1", + "title": "Dynamics of Distillability", + "abstract": "The time evolution of a maximally entangled bipartite systems is presented in\nthis paper. The distillability criterion is given in terms of Kraus operators.\nUsing the criterion, we discuss the distillability of $2\\times 2$ and $n\\times\nn (n>2)$ systems in their evolution process. There are two distinguished\nprocesses, dissipation and decoherence, which may destroy the distillability.\nWe discuss the effects of those processes on distillability in details.", + "authors": "W. Wu, W. Wang, X. X. Yi", + "published": "2003-05-30", + "updated": "2003-05-30", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0012022v1", + "title": "Distilling a Greenberger-Horne-Zeilinger State From an Arbitrary Pure State of Three Qubits", + "abstract": "We present a general algorithm to achieve local operators which can produce\nthe GHZ state for an arbitrary given three-qubit state. Thus the distillation\nprocess of the state can be realized optimally. The algorithm is shown to be\nsufficient for the three-qubit state on account of the fact that any state for\nwhich this distillation algorithm is invalid cannot be distilled to the GHZ\nstate by any local actions. Moreover, an analytical result of distillation\noperations is achieved for the general state of three qubits.", + "authors": "Li-Xiang Cen, Shun-Jin Wang", + "published": "2000-12-05", + "updated": "2000-12-05", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2106.07137v1", + "title": "Why Can You Lay Off Heads? Investigating How BERT Heads Transfer", + "abstract": "The huge size of the widely used BERT family models has led to recent efforts\nabout model distillation. The main goal of distillation is to create a\ntask-agnostic pre-trained model that can be fine-tuned on downstream tasks\nwithout fine-tuning its full-sized version. Despite the progress of\ndistillation, to what degree and for what reason a task-agnostic model can be\ncreated from distillation has not been well studied. Also, the mechanisms\nbehind transfer learning of those BERT models are not well investigated either.\nTherefore, this work focuses on analyzing the acceptable deduction when\ndistillation for guiding the future distillation procedure. Specifically, we\nfirst inspect the prunability of the Transformer heads in RoBERTa and ALBERT\nusing their head importance estimation proposed by Michel et al. (2019), and\nthen check the coherence of the important heads between the pre-trained task\nand downstream tasks. 
Hence, the acceptable deduction of performance on the\npre-trained task when distilling a model can be derived from the results, and\nwe further compare the behavior of the pruned model before and after\nfine-tuning. Our studies provide guidance for future directions about BERT\nfamily model distillation.", + "authors": "Ting-Rui Chiang, Yun-Nung Chen", + "published": "2021-06-14", + "updated": "2021-06-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2112.10047v1", + "title": "Controlling the Quality of Distillation in Response-Based Network Compression", + "abstract": "The performance of a distillation-based compressed network is governed by the\nquality of distillation. The reason for the suboptimal distillation of a large\nnetwork (teacher) to a smaller network (student) is largely attributed to the\ngap in the learning capacities of given teacher-student pair. While it is hard\nto distill all the knowledge of a teacher, the quality of distillation can be\ncontrolled to a large extent to achieve better performance. Our experiments\nshow that the quality of distillation is largely governed by the quality of\nteacher's response, which in turn is heavily affected by the presence of\nsimilarity information in its response. A well-trained large capacity teacher\nloses similarity information between classes in the process of learning\nfine-grained discriminative properties for classification. The absence of\nsimilarity information causes the distillation process to be reduced from one\nexample-many class learning to one example-one class learning, thereby\nthrottling the flow of diverse knowledge from the teacher. With the implicit\nassumption that only the instilled knowledge can be distilled, instead of\nfocusing only on the knowledge distilling process, we scrutinize the knowledge\ninculcation process. We argue that for a given teacher-student pair, the\nquality of distillation can be improved by finding the sweet spot between batch\nsize and number of epochs while training the teacher. We discuss the steps to\nfind this sweet spot for better distillation. We also propose the distillation\nhypothesis to differentiate the behavior of the distillation process between\nknowledge distillation and regularization effect. We conduct all our\nexperiments on three different datasets.", + "authors": "Vibhas Vats, David Crandall", + "published": "2021-12-19", + "updated": "2021-12-19", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2403.03846v1", + "title": "On the Effectiveness of Distillation in Mitigating Backdoors in Pre-trained Encoder", + "abstract": "In this paper, we study a defense against poisoned encoders in SSL called\ndistillation, which is a defense used in supervised learning originally.\nDistillation aims to distill knowledge from a given model (a.k.a the teacher\nnet) and transfer it to another (a.k.a the student net). Now, we use it to\ndistill benign knowledge from poisoned pre-trained encoders and transfer it to\na new encoder, resulting in a clean pre-trained encoder. In particular, we\nconduct an empirical study on the effectiveness and performance of distillation\nagainst poisoned encoders. 
Using two state-of-the-art backdoor attacks against\npre-trained image encoders and four commonly used image classification\ndatasets, our experimental results show that distillation can reduce attack\nsuccess rate from 80.87% to 27.51% while suffering a 6.35% loss in accuracy.\nMoreover, we investigate the impact of three core components of distillation on\nperformance: teacher net, student net, and distillation loss. By comparing 4\ndifferent teacher nets, 3 student nets, and 6 distillation losses, we find that\nfine-tuned teacher nets, warm-up-training-based student nets, and\nattention-based distillation loss perform best, respectively.", + "authors": "Tingxu Han, Shenghan Huang, Ziqi Ding, Weisong Sun, Yebo Feng, Chunrong Fang, Jun Li, Hanwei Qian, Cong Wu, Quanjun Zhang, Yang Liu, Zhenyu Chen", + "published": "2024-03-06", + "updated": "2024-03-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.01392v1", + "title": "No-go theorem for probabilistic one-way secret-key distillation", + "abstract": "The probabilistic one-way distillable secret key is equal to the largest\nexpected rate at which perfect secret key bits can be probabilistically\ndistilled from a bipartite state by means of local operations and one-way\nclassical communication. Here we define the set of super two-extendible states\nand prove that an arbitrary state in this set cannot be used for probabilistic\none-way secret-key distillation. This broad class of states includes both\nerased states and all full-rank states. Comparing the probabilistic one-way\ndistillable secret key with the more commonly studied approximate one-way\ndistillable secret key, our results demonstrate an extreme gap between them for\nmany states of interest, with the approximate one-way distillable secret key\nbeing much larger. Our findings naturally extend to probabilistic one-way\nentanglement distillation, with similar conclusions.", + "authors": "Vishal Singh, Mark M. Wilde", + "published": "2024-04-01", + "updated": "2024-04-01", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph", + "cs.IT", + "math.IT" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2302.05637v2", + "title": "Dual Relation Knowledge Distillation for Object Detection", + "abstract": "Knowledge distillation is an effective method for model compression. However,\nit is still a challenging topic to apply knowledge distillation to detection\ntasks. There are two key points resulting in poor distillation performance for\ndetection tasks. One is the serious imbalance between foreground and background\nfeatures, another one is that small object lacks enough feature representation.\nTo solve the above issues, we propose a new distillation method named dual\nrelation knowledge distillation (DRKD), including pixel-wise relation\ndistillation and instance-wise relation distillation. The pixel-wise relation\ndistillation embeds pixel-wise features in the graph space and applies graph\nconvolution to capture the global pixel relation. By distilling the global\npixel relation, the student detector can learn the relation between foreground\nand background features, and avoid the difficulty of distilling features\ndirectly for the feature imbalance issue. Besides, we find that instance-wise\nrelation supplements valuable knowledge beyond independent features for small\nobjects. 
Thus, the instance-wise relation distillation is designed, which\ncalculates the similarity of different instances to obtain a relation matrix.\nMore importantly, a relation filter module is designed to highlight valuable\ninstance relations. The proposed dual relation knowledge distillation is\ngeneral and can be easily applied for both one-stage and two-stage detectors.\nOur method achieves state-of-the-art performance, which improves Faster R-CNN\nbased on ResNet50 from 38.4% to 41.6% mAP and improves RetinaNet based on\nResNet50 from 37.4% to 40.3% mAP on COCO 2017.", + "authors": "Zhenliang Ni, Fukui Yang, Shengzhao Wen, Gang Zhang", + "published": "2023-02-11", + "updated": "2023-06-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.09740v1", + "title": "Leveraging Zero-Level Distillation to Generate High-Fidelity Magic States", + "abstract": "Magic state distillation plays an important role in universal fault-tolerant\nquantum computing, and its overhead is one of the major obstacles to realizing\nfault-tolerant quantum computers. Hence, many studies have been conducted to\nreduce this overhead. Among these, Litinski has provided a concrete assessment\nof resource-efficient distillation protocol implementations on the rotated\nsurface code. On the other hand, recently, Itogawa et al. have proposed\nzero-level distillation, a distillation protocol offering very small spatial\nand temporal overhead to generate relatively low-fidelity magic states. While\nzero-level distillation offers preferable spatial and temporal overhead, it\ncannot directly generate high-fidelity magic states since it only reduces the\nlogical error rate of the magic state quadratically. In this study, we evaluate\nthe spatial and temporal overhead of two-level distillation implementations\ngenerating relatively high-fidelity magic states, including ones incorporating\nzero-level distillation. To this end, we introduce (0+1)-level distillation, a\ntwo-level distillation protocol which combines zero-level distillation and the\n15-to-1 distillation protocol. We refine the second-level 15-to-1\nimplementation in it to capitalize on the small footprint of zero-level\ndistillation. Under conditions of a physical error probability of\n$p_{\\mathrm{phys}} = 10^{-4}$ ($10^{-3}$) and targeting an error rate for the\nmagic state within $[5 \\times 10^{-17}, 10^{-11}]$ ($[5 \\times 10^{-11},\n10^{-8}]$), (0+1)-level distillation reduces the spatiotemporal overhead by\nmore than 63% (61%) compared to the (15-to-1)$\\times$(15-to-1) protocol and\nmore than 43% (44%) compared to the (15-to-1)$\\times$(20-to-4) protocol,\noffering a substantial efficiency gain over the traditional protocols.", + "authors": "Yutaka Hirano, Tomohiro Itogawa, Keisuke Fujii", + "published": "2024-04-15", + "updated": "2024-04-15", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2302.14643v1", + "title": "Graph-based Knowledge Distillation: A survey and experimental evaluation", + "abstract": "Graph, such as citation networks, social networks, and transportation\nnetworks, are prevalent in the real world. Graph Neural Networks (GNNs) have\ngained widespread attention for their robust expressiveness and exceptional\nperformance in various graph applications. 
However, the efficacy of GNNs is\nheavily reliant on sufficient data labels and complex network models, with the\nformer obtaining hardly and the latter computing costly. To address the labeled\ndata scarcity and high complexity of GNNs, Knowledge Distillation (KD) has been\nintroduced to enhance existing GNNs. This technique involves transferring the\nsoft-label supervision of the large teacher model to the small student model\nwhile maintaining prediction performance. This survey offers a comprehensive\noverview of Graph-based Knowledge Distillation methods, systematically\ncategorizing and summarizing them while discussing their limitations and future\ndirections. This paper first introduces the background of graph and KD. It then\nprovides a comprehensive summary of three types of Graph-based Knowledge\nDistillation methods, namely Graph-based Knowledge Distillation for deep neural\nnetworks (DKD), Graph-based Knowledge Distillation for GNNs (GKD), and\nSelf-Knowledge Distillation based Graph-based Knowledge Distillation (SKD).\nEach type is further divided into knowledge distillation methods based on the\noutput layer, middle layer, and constructed graph. Subsequently, various\nalgorithms' ideas are analyzed and compared, concluding with the advantages and\ndisadvantages of each algorithm supported by experimental results. In addition,\nthe applications of graph-based knowledge distillation in CV, NLP, RS, and\nother fields are listed. Finally, the graph-based knowledge distillation is\nsummarized and prospectively discussed. We have also released related resources\nat https://github.com/liujing1023/Graph-based-Knowledge-Distillation.", + "authors": "Jing Liu, Tongya Zheng, Guanzheng Zhang, Qinfen Hao", + "published": "2023-02-27", + "updated": "2023-02-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/0708.3699v2", + "title": "Convolutional Entanglement Distillation", + "abstract": "We develop a theory of entanglement distillation that exploits a\nconvolutional coding structure. We provide a method for converting an arbitrary\nclassical binary or quaternary convolutional code into a convolutional\nentanglement distillation protocol. The imported classical convolutional code\ndoes not have to be dual-containing or self-orthogonal. The yield and\nerror-correcting properties of such a protocol depend respectively on the rate\nand error-correcting properties of the imported classical convolutional code. A\nconvolutional entanglement distillation protocol has several other benefits.\nTwo parties sharing noisy ebits can distill noiseless ebits ``online'' as they\nacquire more noisy ebits. Distillation yield is high and decoding complexity is\nsimple for a convolutional entanglement distillation protocol. Our theory of\nconvolutional entanglement distillation reduces the problem of finding a good\nconvolutional entanglement distillation protocol to the well-established\nproblem of finding a good classical convolutional code.", + "authors": "Mark M. Wilde, Hari Krovi, Todd A. Brun", + "published": "2007-08-28", + "updated": "2007-09-19", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph", + "cs.IT", + "math.IT" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/9809078v2", + "title": "A rigorous treatment of distillable entanglement", + "abstract": "The notion of distillable entanglement is one of the fundamental concepts of\nquantum information theory. 
Unfortunately, there is an apparent mismatch\nbetween the intuitive and rigorous definitions of distillable entanglement. To\nbe precise, the existing rigorous definitions impose the constraint that the\ndistilation protocol produce an output of constant dimension. It is therefore\nconceivable that this unnecessary constraint might have led to underestimation\nof the true distillable entanglement. We give a new definition of distillable\nentanglement which removes this constraint, but could conceivably overestimate\nthe true value. Since the definitions turn out to be equivalent, neither\nunderestimation nor overestimation is possible, and both definitions are\narguably correct", + "authors": "Eric M. Rains", + "published": "1998-09-24", + "updated": "1998-10-12", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2306.06629v1", + "title": "GKD: A General Knowledge Distillation Framework for Large-scale Pre-trained Language Model", + "abstract": "Currently, the reduction in the parameter scale of large-scale pre-trained\nlanguage models (PLMs) through knowledge distillation has greatly facilitated\ntheir widespread deployment on various devices. However, the deployment of\nknowledge distillation systems faces great challenges in real-world\nindustrial-strength applications, which require the use of complex distillation\nmethods on even larger-scale PLMs (over 10B), limited by memory on GPUs and the\nswitching of methods. To overcome these challenges, we propose GKD, a general\nknowledge distillation framework that supports distillation on larger-scale\nPLMs using various distillation methods. With GKD, developers can build larger\ndistillation models on memory-limited GPUs and easily switch and combine\ndifferent distillation methods within a single framework. Experimental results\nshow that GKD can support the distillation of at least 100B-scale PLMs and 25\nmainstream methods on 8 NVIDIA A100 (40GB) GPUs.", + "authors": "Shicheng Tan, Weng Lam Tam, Yuanchun Wang, Wenwen Gong, Yang Yang, Hongyin Tang, Keqing He, Jiahao Liu, Jingang Wang, Shu Zhao, Peng Zhang, Jie Tang", + "published": "2023-06-11", + "updated": "2023-06-11", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2109.15014v1", + "title": "Deep Neural Compression Via Concurrent Pruning and Self-Distillation", + "abstract": "Pruning aims to reduce the number of parameters while maintaining performance\nclose to the original network. This work proposes a novel\n\\emph{self-distillation} based pruning strategy, whereby the representational\nsimilarity between the pruned and unpruned versions of the same network is\nmaximized. Unlike previous approaches that treat distillation and pruning\nseparately, we use distillation to inform the pruning criteria, without\nrequiring a separate student network as in knowledge distillation. We show that\nthe proposed {\\em cross-correlation objective for self-distilled pruning}\nimplicitly encourages sparse solutions, naturally complementing magnitude-based\npruning criteria. Experiments on the GLUE and XGLUE benchmarks show that\nself-distilled pruning increases mono- and cross-lingual language model\nperformance. Self-distilled pruned models also outperform smaller Transformers\nwith an equal number of parameters and are competitive against (6 times) larger\ndistilled networks. 
We also observe that self-distillation (1) maximizes class\nseparability, (2) increases the signal-to-noise ratio, and (3) converges faster\nafter pruning steps, providing further insights into why self-distilled pruning\nimproves generalization.", + "authors": "James O' Neill, Sourav Dutta, Haytham Assem", + "published": "2021-09-30", + "updated": "2021-09-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2302.11472v1", + "title": "Distilling Calibrated Student from an Uncalibrated Teacher", + "abstract": "Knowledge distillation is a common technique for improving the performance of\na shallow student network by transferring information from a teacher network,\nwhich in general, is comparatively large and deep. These teacher networks are\npre-trained and often uncalibrated, as no calibration technique is applied to\nthe teacher model while training. Calibration of a network measures the\nprobability of correctness for any of its predictions, which is critical in\nhigh-risk domains. In this paper, we study how to obtain a calibrated student\nfrom an uncalibrated teacher. Our approach relies on the fusion of the\ndata-augmentation techniques, including but not limited to cutout, mixup, and\nCutMix, with knowledge distillation. We extend our approach beyond traditional\nknowledge distillation and find it suitable for Relational Knowledge\nDistillation and Contrastive Representation Distillation as well. The novelty\nof the work is that it provides a framework to distill a calibrated student\nfrom an uncalibrated teacher model without compromising the accuracy of the\ndistilled student. We perform extensive experiments to validate our approach on\nvarious datasets, including CIFAR-10, CIFAR-100, CINIC-10 and TinyImageNet, and\nobtained calibrated student models. We also observe robust performance of our\napproach while evaluating it on corrupted CIFAR-100C data.", + "authors": "Ishan Mishra, Sethu Vamsi Krishna, Deepak Mishra", + "published": "2023-02-22", + "updated": "2023-02-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1504.05965v2", + "title": "Qutrit Magic State Distillation Tight in Some Directions", + "abstract": "Magic state distillation is a crucial component in the leading approaches to\nimplementing universal fault tolerant quantum computation, with existing\nprotocols for both qubit and higher dimensional systems. Early work focused on\ndetermining the region of distillable states for qubit protocols, yet\ncomparatively little is known about which states can be distilled and with what\ndistillable region for d>2. Here we focus on d=3 and present new four-qutrit\ndistillation schemes that improve upon the known distillable region, and\nachieve distillation tight to the boundary of undistillable states for some\nclasses of state. As a consequence of recent results, this implies that there\nis a family of quantum states that enable universality if and only if they\nexhibit contextuality with respect to stabilizer measurements. 
We also identify\na new routine whose fixed point is a magic state with maximal sum-negativity\ni.e., it is maximally non-stabilizer in a specific sense.", + "authors": "Hillary Dawkins, Mark Howard", + "published": "2015-04-22", + "updated": "2015-09-21", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2007.09029v1", + "title": "Knowledge Distillation in Deep Learning and its Applications", + "abstract": "Deep learning based models are relatively large, and it is hard to deploy\nsuch models on resource-limited devices such as mobile phones and embedded\ndevices. One possible solution is knowledge distillation whereby a smaller\nmodel (student model) is trained by utilizing the information from a larger\nmodel (teacher model). In this paper, we present a survey of knowledge\ndistillation techniques applied to deep learning models. To compare the\nperformances of different techniques, we propose a new metric called\ndistillation metric. Distillation metric compares different knowledge\ndistillation algorithms based on sizes and accuracy scores. Based on the\nsurvey, some interesting conclusions are drawn and presented in this paper.", + "authors": "Abdolmaged Alkhulaifi, Fahad Alsahli, Irfan Ahmad", + "published": "2020-07-17", + "updated": "2020-07-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2108.12905v1", + "title": "Lipschitz Continuity Guided Knowledge Distillation", + "abstract": "Knowledge distillation has become one of the most important model compression\ntechniques by distilling knowledge from larger teacher networks to smaller\nstudent ones. Although great success has been achieved by prior distillation\nmethods via delicately designing various types of knowledge, they overlook the\nfunctional properties of neural networks, which makes the process of applying\nthose techniques to new tasks unreliable and non-trivial. To alleviate such\nproblem, in this paper, we initially leverage Lipschitz continuity to better\nrepresent the functional characteristic of neural networks and guide the\nknowledge distillation process. In particular, we propose a novel Lipschitz\nContinuity Guided Knowledge Distillation framework to faithfully distill\nknowledge by minimizing the distance between two neural networks' Lipschitz\nconstants, which enables teacher networks to better regularize student networks\nand improve the corresponding performance. We derive an explainable\napproximation algorithm with an explicit theoretical derivation to address the\nNP-hard problem of calculating the Lipschitz constant. Experimental results\nhave shown that our method outperforms other benchmarks over several knowledge\ndistillation tasks (e.g., classification, segmentation and object detection) on\nCIFAR-100, ImageNet, and PASCAL VOC datasets.", + "authors": "Yuzhang Shang, Bin Duan, Ziliang Zong, Liqiang Nie, Yan Yan", + "published": "2021-08-29", + "updated": "2021-08-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2204.00548v1", + "title": "Unified and Effective Ensemble Knowledge Distillation", + "abstract": "Ensemble knowledge distillation can extract knowledge from multiple teacher\nmodels and encode it into a single student model. Many existing methods learn\nand distill the student model on labeled data only. 
However, the teacher models\nare usually learned on the same labeled data, and their predictions have high\ncorrelations with groudtruth labels. Thus, they cannot provide sufficient\nknowledge complementary to task labels for student teaching. Distilling on\nunseen unlabeled data has the potential to enhance the knowledge transfer from\nthe teachers to the student. In this paper, we propose a unified and effective\nensemble knowledge distillation method that distills a single student model\nfrom an ensemble of teacher models on both labeled and unlabeled data. Since\ndifferent teachers may have diverse prediction correctness on the same sample,\non labeled data we weight the predictions of different teachers according to\ntheir correctness. In addition, we weight the distillation loss based on the\noverall prediction correctness of the teacher ensemble to distill high-quality\nknowledge. On unlabeled data, there is no groundtruth to evaluate prediction\ncorrectness. Fortunately, the disagreement among teachers is an indication of\nsample hardness, and thereby we weight the distillation loss based on teachers'\ndisagreement to emphasize knowledge distillation on important samples.\nExtensive experiments on four datasets show the effectiveness of our proposed\nensemble distillation method.", + "authors": "Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang", + "published": "2022-04-01", + "updated": "2022-04-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2011.09969v1", + "title": "Neural network algorithm and its application in reactive distillation", + "abstract": "Reactive distillation is a special distillation technology based on the\ncoupling of chemical reaction and distillation. It has the characteristics of\nlow energy consumption and high separation efficiency. However, because the\ncombination of reaction and separation produces highly nonlinear robust\nbehavior, the control and optimization of the reactive distillation process\ncannot use conventional methods, but must rely on neural network algorithms.\nThis paper briefly describes the characteristics and research progress of\nreactive distillation technology and neural network algorithms, and summarizes\nthe application of neural network algorithms in reactive distillation, aiming\nto provide reference for the development and innovation of industry technology.", + "authors": "Huihui Wang, Ruyang Mo", + "published": "2020-11-16", + "updated": "2020-11-16", + "primary_cat": "cs.NE", + "cats": [ + "cs.NE", + "cs.LG", + "I.2.8" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2401.06370v1", + "title": "Graph Relation Distillation for Efficient Biomedical Instance Segmentation", + "abstract": "Instance-aware embeddings predicted by deep neural networks have\nrevolutionized biomedical instance segmentation, but its resource requirements\nare substantial. Knowledge distillation offers a solution by transferring\ndistilled knowledge from heavy teacher networks to lightweight yet\nhigh-performance student networks. However, existing knowledge distillation\nmethods struggle to extract knowledge for distinguishing instances and overlook\nglobal relation information. To address these challenges, we propose a graph\nrelation distillation approach for efficient biomedical instance segmentation,\nwhich considers three essential types of knowledge: instance-level features,\ninstance relations, and pixel-level boundaries. 
We introduce two graph\ndistillation schemes deployed at both the intra-image level and the inter-image\nlevel: instance graph distillation (IGD) and affinity graph distillation (AGD).\nIGD constructs a graph representing instance features and relations,\ntransferring these two types of knowledge by enforcing instance graph\nconsistency. AGD constructs an affinity graph representing pixel relations to\ncapture structured knowledge of instance boundaries, transferring\nboundary-related knowledge by ensuring pixel affinity consistency. Experimental\nresults on a number of biomedical datasets validate the effectiveness of our\napproach, enabling student models with less than $ 1\\%$ parameters and less\nthan $10\\%$ inference time while achieving promising performance compared to\nteacher models.", + "authors": "Xiaoyu Liu, Yueyi Zhang, Zhiwei Xiong, Wei Huang, Bo Hu, Xiaoyan Sun, Feng Wu", + "published": "2024-01-12", + "updated": "2024-01-12", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2011.14554v1", + "title": "A Selective Survey on Versatile Knowledge Distillation Paradigm for Neural Network Models", + "abstract": "This paper aims to provide a selective survey about knowledge\ndistillation(KD) framework for researchers and practitioners to take advantage\nof it for developing new optimized models in the deep neural network field. To\nthis end, we give a brief overview of knowledge distillation and some related\nworks including learning using privileged information(LUPI) and generalized\ndistillation(GD). Even though knowledge distillation based on the\nteacher-student architecture was initially devised as a model compression\ntechnique, it has found versatile applications over various frameworks.\n In this paper, we review the characteristics of knowledge distillation from\nthe hypothesis that the three important ingredients of knowledge distillation\nare distilled knowledge and loss,teacher-student paradigm, and the distillation\nprocess. In addition, we survey the versatility of the knowledge distillation\nby studying its direct applications and its usage in combination with other\ndeep learning paradigms. Finally we present some future works in knowledge\ndistillation including explainable knowledge distillation where the analytical\nanalysis of the performance gain is studied and the self-supervised learning\nwhich is a hot research topic in deep learning community.", + "authors": "Jeong-Hoe Ku, JiHun Oh, YoungYoon Lee, Gaurav Pooniwala, SangJeong Lee", + "published": "2020-11-30", + "updated": "2020-11-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1108.0537v2", + "title": "Isotropic non-locality cannot be distilled", + "abstract": "We investigate non-locality distillation protocols for isotropic\ncorrelations. These correlations are the hardest instances which respect to\ndistillability and only partial results are known about their behaviour under\nnon-locality distillation protocols. We completely resolve this issue by\nproving that non-locality distillation is impossible for all non-local\nisotropic correlations.", + "authors": "Dejan D. 
Dukaric", + "published": "2011-08-02", + "updated": "2011-09-20", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1905.09747v2", + "title": "Adversarially Robust Distillation", + "abstract": "Knowledge distillation is effective for producing small, high-performance\nneural networks for classification, but these small networks are vulnerable to\nadversarial attacks. This paper studies how adversarial robustness transfers\nfrom teacher to student during knowledge distillation. We find that a large\namount of robustness may be inherited by the student even when distilled on\nonly clean images. Second, we introduce Adversarially Robust Distillation (ARD)\nfor distilling robustness onto student networks. In addition to producing small\nmodels with high test accuracy like conventional distillation, ARD also passes\nthe superior robustness of large networks onto the student. In our experiments,\nwe find that ARD student models decisively outperform adversarially trained\nnetworks of identical architecture in terms of robust accuracy, surpassing\nstate-of-the-art methods on standard robustness benchmarks. Finally, we adapt\nrecent fast adversarial training methods to ARD for accelerated robust\ndistillation.", + "authors": "Micah Goldblum, Liam Fowl, Soheil Feizi, Tom Goldstein", + "published": "2019-05-23", + "updated": "2019-12-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2011.06110v1", + "title": "Efficient Knowledge Distillation for RNN-Transducer Models", + "abstract": "Knowledge Distillation is an effective method of transferring knowledge from\na large model to a smaller model. Distillation can be viewed as a type of model\ncompression, and has played an important role for on-device ASR applications.\nIn this paper, we develop a distillation method for RNN-Transducer (RNN-T)\nmodels, a popular end-to-end neural network architecture for streaming speech\nrecognition. Our proposed distillation loss is simple and efficient, and uses\nonly the \"y\" and \"blank\" posterior probabilities from the RNN-T output\nprobability lattice. We study the effectiveness of the proposed approach in\nimproving the accuracy of sparse RNN-T models obtained by gradually pruning a\nlarger uncompressed model, which also serves as the teacher during\ndistillation. With distillation of 60% and 90% sparse multi-domain RNN-T\nmodels, we obtain WER reductions of 4.3% and 12.1% respectively, on a noisy\nFarField eval set. We also present results of experiments on LibriSpeech, where\nthe introduction of the distillation loss yields a 4.8% relative WER reduction\non the test-other dataset for a small Conformer model.", + "authors": "Sankaran Panchapagesan, Daniel S. Park, Chung-Cheng Chiu, Yuan Shangguan, Qiao Liang, Alexander Gruenstein", + "published": "2020-11-11", + "updated": "2020-11-11", + "primary_cat": "eess.AS", + "cats": [ + "eess.AS", + "cs.SD" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2109.14960v3", + "title": "Prune Your Model Before Distill It", + "abstract": "Knowledge distillation transfers the knowledge from a cumbersome teacher to a\nsmall student. Recent results suggest that the student-friendly teacher is more\nappropriate to distill since it provides more transferable knowledge. 
In this\nwork, we propose the novel framework, \"prune, then distill,\" that prunes the\nmodel first to make it more transferrable and then distill it to the student.\nWe provide several exploratory examples where the pruned teacher teaches better\nthan the original unpruned networks. We further show theoretically that the\npruned teacher plays the role of regularizer in distillation, which reduces the\ngeneralization error. Based on this result, we propose a novel neural network\ncompression scheme where the student network is formed based on the pruned\nteacher and then apply the \"prune, then distill\" strategy. The code is\navailable at https://github.com/ososos888/prune-then-distill", + "authors": "Jinhyuk Park, Albert No", + "published": "2021-09-30", + "updated": "2022-07-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2112.05638v2", + "title": "DistilCSE: Effective Knowledge Distillation For Contrastive Sentence Embeddings", + "abstract": "Large-scale contrastive learning models can learn very informative sentence\nembeddings, but are hard to serve online due to the huge model size. Therefore,\nthey often play the role of \"teacher\", transferring abilities to small\n\"student\" models through knowledge distillation. However, knowledge\ndistillation inevitably brings some drop in embedding effect. To tackle that,\nwe propose an effective knowledge distillation framework for contrastive\nsentence embeddings, termed DistilCSE. It first applies knowledge distillation\non a large amount of unlabeled data, and then fine-tunes student models through\ncontrastive learning on limited labeled data. To achieve better distillation\nresults, we further propose Contrastive Knowledge Distillation (CKD). CKD uses\nInfoNCE as the loss function in knowledge distillation, enhancing the objective\nconsistency among teacher model training, knowledge distillation, and student\nmodel fine-tuning. Extensive experiments show that student models trained with\nthe proposed DistilCSE and CKD suffer from little or even no performance\ndecrease and consistently outperform the corresponding counterparts of the same\nparameter size. Impressively, our 110M student model outperforms the latest\nstate-of-the-art model, i.e., Sentence-T5 (11B), with only 1% parameters and\n0.25% unlabeled data.", + "authors": "Chaochen Gao, Xing Wu, Peng Wang, Jue Wang, Liangjun Zang, Zhongyuan Wang, Songlin Hu", + "published": "2021-12-10", + "updated": "2023-01-30", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Distillation" + } + ], + [ + { + "url": "http://arxiv.org/abs/2403.17001v1", + "title": "VP3D: Unleashing 2D Visual Prompt for Text-to-3D Generation", + "abstract": "Recent innovations on text-to-3D generation have featured Score Distillation\nSampling (SDS), which enables the zero-shot learning of implicit 3D models\n(NeRF) by directly distilling prior knowledge from 2D diffusion models.\nHowever, current SDS-based models still struggle with intricate text prompts\nand commonly result in distorted 3D models with unrealistic textures or\ncross-view inconsistency issues. 
In this work, we introduce a novel Visual\nPrompt-guided text-to-3D diffusion model (VP3D) that explicitly unleashes the\nvisual appearance knowledge in 2D visual prompt to boost text-to-3D generation.\nInstead of solely supervising SDS with text prompt, VP3D first capitalizes on\n2D diffusion model to generate a high-quality image from input text, which\nsubsequently acts as visual prompt to strengthen SDS optimization with explicit\nvisual appearance. Meanwhile, we couple the SDS optimization with additional\ndifferentiable reward function that encourages rendering images of 3D models to\nbetter visually align with 2D visual prompt and semantically match with text\nprompt. Through extensive experiments, we show that the 2D Visual Prompt in our\nVP3D significantly eases the learning of visual appearance of 3D models and\nthus leads to higher visual fidelity with more detailed textures. It is also\nappealing in view that when replacing the self-generating visual prompt with a\ngiven reference image, VP3D is able to trigger a new task of stylized\ntext-to-3D generation. Our project page is available at\nhttps://vp3d-cvpr24.github.io.", + "authors": "Yang Chen, Yingwei Pan, Haibo Yang, Ting Yao, Tao Mei", + "published": "2024-03-25", + "updated": "2024-03-25", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.MM" + ], + "label": "Original Paper", + "paper_cat": "Distillation", + "gt": "Text-to-3D generation. Significant advancements have been witnessed in text-to-image generation with 2D diffusion models in recent years [3, 12, 13, 25, 30\u201332, 35]. However, extending these capabilities to 3D content generation poses a substantial challenge, primarily due to the absence of large-scale paired text-3D datasets. To mitigate the reliance on extensive training data, recent works try to accomplish zero-shot text-to-3D generation [4, 7, 8, 17, 22, 27, 38, 39, 42]. Specifically, the pioneering work DreamFusion [27] showcased remarkable achievements in text-to-3D generation through pre-trained text-to-image diffusion models. SJC [38] concurrently addressed the out-ofdistribution problem in lifting 2D diffusion models to perform text-to-3D generation. Following these, several subsequent works have strived to enhance text-to-3D generation further. For instance, Latent-NeRF [22] proposed to incorporate a sketch shape to guide the 3D generation directly in the latent space of a latent diffusion model. Magic3D [17] presented a coarse-to-fine strategy that leverages both lowand high-resolution diffusion priors to learn the underlying 3D representation. Control3D [8] proposed to enhance user controllability in text-to-3D generation by incorporating additional hand-drawn sketch conditions. ProlificDreamer [39] presented a principled particle-based variational framework to improve the generation quality. Unlike previous works, we formulate the text-to-3D generation process from a new perspective. We first leverage the off-the-shelf text-to-image diffusion models to generate a high-quality image that faithfully matches the input text prompt. This synthetic reference image then serves as a complementary input alongside the text, synergistically guiding the 3D learning process. Moreover, we showcase the remarkable versatility of this novel architecture by effortlessly extending its capabilities to the realm of stylized text-to-3D generation. The resulting 3D asset not only exhibits semantic alignment with the provided text prompt but also masterfully captures the visual style of the reference image. 
This capability marks another pivotal distinction between our VP3D and previous text-to-3D approaches. Image-to-3D generation. Recently, prior works RealFusion [21], NeuralLift-360 [40] and NeRDi [9] leverage 2D diffusion models to achieve image-to-3D generation. The following work Make-IT-3D [37] proposed a two-stage optimization framework to further improve the generation quality. Zero-1-to-3 [19] finetuned the Stable Diffusion model to enable generating novel views of the input image. It can then be used as a 3D prior model to achieve high-quality image-to-3D generation. Inspired by this, Magic123 [28] proposed to use 2D and 3D priors simultaneously to generate faithful 3D content from the given image. One-2-3-45 [18] integrated Zero-1-to-3 and a multi-view reconstruction model to accelerate the 3D generation process. It is worth noting that our work is not targeting image-to-3D generation. We utilize a reference image to guide the text-to-3D learning process, instead of directly turning the reference image into 3D content.",
    "pre_questions": [],
    "main_content": "Introduction Generative Artificial Intelligence (especially for vision content generation) has aroused great attention in the computer vision field [5, 6, 20, 26], leading to impressive advancements in text-to-image [30\u201332] and text-to-video generation [10, 14, 34]. These accomplishments can be attributed to the availability of large-scale image-text and video-text pair data [1, 33] and the emergence of robust diffusion-based generative models [12, 13, 25, 35]. [Figure 1. Existing text-to-3D generation techniques (e.g., Magic3D [17] and ProlificDreamer [39]) often suffer from degenerated results (e.g., over-saturated appearances or inaccurate geometries), whereas VP3D integrates a visual prompt to strengthen score distillation sampling, leading to better 3D results. Text prompt: \u201cA florist is making a bouquet with fresh flowers\u201d; panels: (a) Magic3D, (b) ProlificDreamer, (c) VP3D (Ours), plus the visual prompt.] Recently, researchers have gone beyond text-driven image/video generation, and begun exploring diffusion models for text-driven content creation of 3D assets (e.g., text-to-3D generation). This direction paves a new way for practical 3D content creation and has a great potential impact for numerous applications like virtual reality, gaming and the Metaverse. Compared to image generation, text-to-3D generation, however, is more challenging, due to the complexities associated with intricate 3D geometry and appearance (i.e., object shapes and textures). Moreover, the collection and annotation of 3D data are resource-expensive and thus cannot easily be scaled up to the billion level of image data. To tackle this issue, a pioneering text-to-3D work (DreamFusion [27]) presents the first attempt at exploiting an off-the-shelf text-to-image diffusion model to generate promising 3D assets in a zero-shot fashion. The key design behind such success is Score Distillation Sampling (SDS), which directly optimizes the implicit 3D model of a Neural Radiance Field (NeRF) with prior knowledge distilled from a 2D diffusion model. Nevertheless, such distilled prior knowledge is merely driven by the input text prompt, and it is non-trivial to learn a high-quality NeRF with distilled SDS supervision.
Although several subsequent works [4, 17, 22, 38, 39] further upgrade SDS, this kind of SDS-based solution still results in degenerated 3D models with unrealistic/less-detailed textures, especially when feeding intricate text prompts (as seen in Figure 1 (a-b)). In this work, we propose to mitigate this limitation through a unique design of a visual prompt-guided text-to-3D diffusion model, namely VP3D. Intuitively, \u201ca picture is worth a thousand words.\u201d That is, a single image can convey human intentions of visual content creation (e.g., the visual appearance or semantic structure) more effectively than textual sentences. This motivates us to introduce additional guidance of a visual prompt, and thus decompose the typical single-shot text-to-3D process into two cascaded processes: first text-to-image generation, and then (text plus image)-to-3D generation. In particular, VP3D first leverages off-the-shelf text-to-image diffusion models to produce a high-fidelity image that reflects an extremely realistic appearance with rich details. In the latter process, this synthetic image further acts as a 2D visual prompt to supervise SDS optimization of NeRF, coupled with the input text prompt. At the same time, a differentiable reward function is additionally utilized to encourage the rendered images of NeRF to be better aligned with the 2D visual prompt (visual appearance consistency) and the text prompt (semantic consistency). As illustrated in Figure 1 (c), we show that the novel visual prompt-guided diffusion process in VP3D significantly enhances the visual fidelity of 3D assets with realistic and rich texture details. Meanwhile, since the visual prompt guidance eases the learning of the visual appearance of 3D assets, the optimization of NeRF can focus more on the modeling of geometry, leading to better 3D shapes with cross-view consistency. We believe that the ability to unleash high-quality visual knowledge in a 2D visual prompt is potentially a new paradigm of text-to-3D generation. As a by-product, we also demonstrate that our VP3D can be readily adapted for a new task of stylized text-to-3D generation. Intriguingly, we simply replace the self-generated image in VP3D with a user-given reference image, and treat it as a new visual prompt to trigger (text plus image)-to-3D generation. In this way, our VP3D is able to produce a stylized 3D asset, which not only semantically aligns with the text prompt but also shares a similar geometric and visual style with the reference image. In this section, we elaborate the architecture of our VP3D, which introduces a novel visual prompt-guided text-to-3D diffusion model. An overview of our VP3D architecture is depicted in Figure 2. 3.1. Background Text-to-Image Diffusion Models. Diffusion models are a family of generative models that are trained to gradually transform Gaussian noise into samples from a target distribution [13]. Given a target data distribution q(x), a forward diffusion process is defined to progressively add a small amount of Gaussian noise to the data x0 sampled from q(x). This process follows a Markov chain q(x1:T) = \u220f_{t=1}^{T} q(xt|xt\u22121) and produces a sequence of latent variables x1, . . . , xT after T time steps. The marginal distribution of latent variables at time step t is given by q(xt|x) = N(xt; \u03b1t x, \u03c3t^2 I). Thus the noisy sample xt can be directly generated in closed form as xt = \u03b1t x + \u03c3t \u03f5, where \u03f5 \u223c N(0, I), and \u03b1t and \u03c3t are chosen parameters such that \u03b1t^2 + \u03c3t^2 = 1.
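As a quick, self-contained illustration of this closed-form forward step (a sketch, not code from the paper), the snippet below noises a clean sample at an arbitrary timestep; alphas and sigmas are placeholder tensors holding whatever noise schedule the pre-trained diffusion model defines.

```python
import torch

def forward_noise(x0, t, alphas, sigmas):
    # Closed-form forward diffusion: x_t = alpha_t * x_0 + sigma_t * eps, with eps ~ N(0, I).
    eps = torch.randn_like(x0)
    xt = alphas[t] * x0 + sigmas[t] * eps
    return xt, eps  # the sampled noise is returned too, since distillation losses compare against it
```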
After T noise-adding steps, xT is equivalent to an isotropic Gaussian distribution. Then, a reverse generative process is defined to gradually \u201cdenoise\u201d xT to reconstruct the original sample. This can be described by a Markov process p\u03d5(x0:T) = p(xT) \u220f_{t=1}^{T} p\u03d5(xt\u22121|xt), with the conditional probability p\u03d5(xt\u22121|xt) = N(xt\u22121; \u00b5\u03d5(xt, t), \u03a3\u03d5(xt, t)). Commonly, a UNet neural network \u03f5\u03d5(xt; t) with parameters \u03d5 is used to predict the noise that was used to produce xt at time step t. Text-to-image diffusion models build upon the above theory to condition the diffusion process on a given text prompt y using classifier-free guidance (CFG) [12]. The corresponding noise predictor is remodeled by: \\label{eq:cfg} \\hat{\\boldsymbol{\\epsilon}}_\\phi(\\mathbf{x}_t; y, t) = \\boldsymbol{\\epsilon}_\\phi(\\mathbf{x}_t; t, \\emptyset) + s * (\\boldsymbol{\\epsilon}_\\phi(\\mathbf{x}_t; t, \\mathbf{z}_y) - \\boldsymbol{\\epsilon}_\\phi(\\mathbf{x}_t; t, \\emptyset)), (1) where s is a scale that denotes the classifier-free guidance weight, zy is the corresponding text embedding of the text prompt y and \u2205 indicates the noise prediction without conditioning. The diffusion model \u03f5\u03d5 is typically optimized by a simplified variant of the variational lower bound of the log data likelihood, which is a Mean Squared Error criterion: \\mathcal{L}_\\mathrm{diff}(\\phi) = \\mathbb{E}_{\\mathbf{x},t,\\epsilon}\\Bigl[w(t)\\|\\hat{\\boldsymbol{\\epsilon}}_\\phi(\\mathbf{x}_t; y, t) - \\epsilon\\|^2_2\\Bigr], \\label{equation:diffloss} (2) where w(t) is a weighting function that depends on the timestep t \u223c U(0, 1) and \u03f5 \u223c N(0, I). Score Distillation Sampling. A recent pioneering work called DreamFusion [27] introduces Score Distillation Sampling (SDS) that enables leveraging the priors of pre-trained text-to-image diffusion models to facilitate text-to-3D generation. Specifically, let \u03b8 be the learnable parameters of a 3D model (e.g., NeRF) and g be a differentiable rendering function that can render an image x = g(\u03b8; c) from the 3D model \u03b8 at a camera viewpoint c. SDS introduces a loss function LSDS to optimize the parameters \u03b8. Its gradient is defined as follows: \\label{eq:sds} \\nabla_{\\theta}\\mathcal{L}_{SDS} = \\mathbb{E}_{t,\\epsilon}[w(t)(\\hat{\\boldsymbol{\\epsilon}}_\\phi(\\mathbf{x}_t; t, \\mathbf{z}_y) - \\epsilon)\\frac{\\partial \\mathbf{x}}{\\partial \\theta}], (3) where xt is obtained by perturbing the rendered image x with a Gaussian noise \u03f5 corresponding to the t-th timestep of the forward diffusion process, and zy is the text embedding of the given text prompt y. Intuitively, the SDS loss estimates an update direction in which the noised version of the rendered image x should be moved towards a denser region in the distribution of real images (aligned with the conditional text prompt y). By randomly sampling views and backpropagating the gradient in Eq. 3 to the parameters \u03b8 through the differentiable parametric function g, this approach eventually results in a 3D model that resembles the text.
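To make Eqs. (1)-(3) concrete, here is a minimal PyTorch-style sketch of one SDS update. It assumes a frozen noise predictor unet(xt, t, cond), a differentiable renderer render(theta, camera), and precomputed schedule tensors alphas/sigmas; these names, the timestep range, and the choice w(t) = sigma_t^2 are illustrative assumptions rather than the paper's implementation.

```python
import torch

@torch.no_grad()
def cfg_noise(unet, xt, t, z_text, z_null, s=100.0):
    # Classifier-free guidance, Eq. (1): eps_hat = eps(null) + s * (eps(text) - eps(null)).
    eps_uncond = unet(xt, t, z_null)
    eps_text = unet(xt, t, z_text)
    return eps_uncond + s * (eps_text - eps_uncond)

def sds_loss(unet, render, theta, camera, z_text, z_null, alphas, sigmas):
    x = render(theta, camera)              # differentiable rendering x = g(theta; c)
    t = int(torch.randint(20, 980, (1,)))  # randomly sampled timestep (range is an assumption)
    eps = torch.randn_like(x)
    xt = alphas[t] * x + sigmas[t] * eps   # forward-noise the rendering
    eps_hat = cfg_noise(unet, xt, t, z_text, z_null)
    grad = (sigmas[t] ** 2) * (eps_hat - eps)   # w(t) * (eps_hat - eps)
    # Surrogate loss whose gradient w.r.t. theta equals Eq. (3): w(t) (eps_hat - eps) * dx/dtheta.
    return (grad.detach() * x).sum()
```

Backpropagating this surrogate loss injects the detached residual directly into the renderer gradient, which is the usual way the SDS gradient is realized without differentiating through the diffusion UNet.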
3.2. Visual-prompted Score Distillation Sampling Visual Prompt Generation. As aforementioned, score distillation sampling plays a key role in text-to-3D generation. Nevertheless, empirical observations [11, 27, 39] reveal that SDS still results in degenerated 3D models, especially when feeding intricate text prompts. First, SDS-generated results often suffer from over-saturation issues. These issues are, in part, attributed to the necessity of employing a large CFG value (i.e., 100) within the SDS framework [27, 39]. A large CFG value narrows down the score distillation space to more text-relevant areas. This can mitigate the divergence of diffusion priors in the optimization process, thereby fostering enhanced stability in 3D representation learning. However, this comes at the cost of less realistic and less diverse generation results, as large CFG values are known to yield over-saturated results [39]. Second, results generated by SDS still face the risk of text-3D misalignment, such as missing key elements in the scene, especially when text prompts contain multiple objects with specific attributes. A fundamental reason behind the aforementioned issues may lie in the substantial distribution gap between the text and 3D modalities. Thus it is non-trivial to directly learn a meaningful 3D scene solely based on a single text prompt. This insight motivates us to introduce an additional visual prompt as a bridge to explicitly establish a connection between the text input and the desired 3D output. Particularly, we leverage off-the-shelf text-to-image diffusion models (e.g., Stable Diffusion) to produce a high-fidelity image that faithfully matches the input text prompt and has an extremely realistic appearance. This image is then used as a visual prompt in conjunction with the input text prompt to jointly supervise the 3D generation process.
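One plausible way to realize this first stage is with an off-the-shelf Stable Diffusion pipeline from the diffusers library, as in the sketch below; the checkpoint id and sampling settings are illustrative assumptions, not the exact configuration used by VP3D.

```python
import torch
from diffusers import StableDiffusionPipeline

# Stage 1 (sketch): turn the input text prompt into a candidate visual prompt image.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "A florist is making a bouquet with fresh flowers"
visual_prompt = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
visual_prompt.save("visual_prompt.png")  # later consumed by the visual-prompted SDS stage
```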
Score Distillation Sampling with Visual Prompt. We now present visual-prompted score distillation sampling, which distills knowledge from a pre-trained diffusion model to optimize a 3D model by considering inputs not only from a text prompt y but also from a visual prompt v. To be clear, we restructure the standard SDS-based text-to-3D pipeline by utilizing an image-conditioned diffusion model [43] to trigger visual prompt-guided text-to-3D generation. Technically, the visual prompt is first converted to a global image embedding zv by the CLIP image encoder [29] and a following projection network. This image embedding represents the rich content and style of the visual prompt and has the same dimension as the text embedding zy used in the pre-trained text-to-image diffusion model (Stable Diffusion). Following SDS, we first add noise \u03f5 to the rendered image of the underlying 3D model according to the randomly sampled time step t to get a noised image xt. Then xt is input to the diffusion model along with the conditional visual prompt embedding zv and text prompt embedding zy to estimate the added noise as follows: \\begin {aligned} \\tilde {\\boldsymbol {\\epsilon }}_\\phi (\\mathbf {x}_t; t, \\mathbf {z}_y, \\mathbf {z}_v) &= \\boldsymbol {\\epsilon }_\\phi (\\mathbf {x}_t; t, \\emptyset , \\emptyset )\\\\ &+ s * (\\boldsymbol {\\epsilon }_\\phi (\\mathbf {x}_t; t, \\mathbf {z}_y, \\lambda * \\mathbf {z}_v) - \\boldsymbol {\\epsilon }_\\phi (\\mathbf {x}_t; t, \\emptyset , \\emptyset )), \\end {aligned} (4) where s is the classifier-free guidance weight, \u03bb \u2208[0, 1] is the visual prompt condition weight, \u03d5 is the parameter of the pre-trained noise predictor \u03f5\u03d5 and \u03f5\u03d5(xt; t, \u2205, \u2205) denotes the noise prediction without conditioning. In this way, our proposed method explicitly incorporates the visual prompt and text prompt in a unified fashion for text-to-3D generation. Consequently, the final gradient of our introduced visual-prompted score distillation sampling (VP-SDS) loss with respect to \u03b8 is expressed as: \\label {eq:vp-sds} \\nabla _{\\theta }\\mathcal {L}_{VP-SDS} = \\mathbb {E}_{t,\\epsilon }[ w(t)(\\tilde {\\boldsymbol {\\epsilon }}_\\phi (\\mathbf {x}_t;t,\\mathbf {z}_y,\\mathbf {z}_v) - \\boldsymbol {\\epsilon })\\frac {\\partial \\mathbf {x}}{\\partial \\theta }], (5) where w(t) is a scheduling coefficient. Comparison with SDS. Comparing the update gradients of SDS (Eq. 3) and VP-SDS (Eq. 5), SDS is a special case of our VP-SDS obtained by setting \u03bb = 0, where the visual prompt condition is neglected. In accordance with the theoretical analysis presented in [27, 39], the mode-seeking nature of SDS necessitates a large CFG to ensure that the pre-trained diffusion model \u03f5\u03d5 delivers a \u201csharp\u201d updating direction for the underlying 3D model. Nevertheless, a large CFG, in turn, results in poor-quality samples and thus a \u201cdegraded\u201d update direction. In contrast, VP-SDS leverages the additional visual prompt to narrow down the distillation space of \u03f5\u03d5 into a more compact region that aligns tightly with the visual prompt. Meanwhile, the distillation space is also refined by the visual prompt as it reflects realistic appearances with rich details. Therefore, the updating direction derived from our VP-SDS is not only \u201csharp\u201d but also \u201cfine\u201d, which can obtain much better 3D generation results than SDS. Notably, a recent work ProlificDreamer [39] presents variational score distillation (VSD) to address the aforementioned issues in SDS. However, VSD needs to train an additional diffusion model using LoRA [15] during the optimization process, which incurs a considerable computational overhead compared to SDS. Instead, the additional computational cost of our VP-SDS is nearly negligible, making it computationally more efficient than VSD.
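A minimal sketch of the VP-SDS update in Eqs. 4 and 5, mirroring the SDS sketch above; here eps_phi stands in for the frozen image-conditioned noise predictor, and passing None denotes the unconditional branch (an assumption of this sketch, not the paper's exact interface).

```python
import torch

def vp_cfg_noise(eps_phi, x_t, t, z_y, z_v, s=7.5, lam=0.5):
    """Noise prediction guided by both text and visual prompts, Eq. 4 (sketch).

    eps_phi(x_t, t, z_y, z_v) is a frozen image-conditioned diffusion model;
    passing None for z_y / z_v denotes the unconditional branch (an assumption
    of this sketch). lam is the visual prompt condition weight in [0, 1].
    """
    eps_uncond = eps_phi(x_t, t, None, None)
    eps_cond = eps_phi(x_t, t, z_y, lam * z_v)
    return eps_uncond + s * (eps_cond - eps_uncond)

def vp_sds_loss(x, x_t, t, eps, eps_phi, z_y, z_v, w):
    """Surrogate loss whose gradient w.r.t. the 3D parameters matches Eq. 5 (sketch)."""
    with torch.no_grad():
        eps_tilde = vp_cfg_noise(eps_phi, x_t, t, z_y, z_v)
    grad = (w(t) * (eps_tilde - eps)).detach()
    return (grad * x).sum()
```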
View-dependent Visual Prompting. Apart from the over-saturation problem discussed above, existing text-to-3D methods are known to also suffer from the multi-view inconsistency problem (e.g., the multi-face Janus problem). This arises from the fact that the underlying prior diffusion model is exclusively trained on individual 2D images and therefore lacks 3D awareness. To alleviate this issue, existing text-to-3D methods [17, 27, 38, 39] always employ diffusion loss with view-dependent text conditioning, which is to append \u201cfront view\u201d, \u201cside view\u201d, or \u201cback view\u201d to the input text based on the location of the randomly sampled camera. Inspired by this, we devise a view-dependent visual prompting strategy to further mitigate the view inconsistency problem in collaboration with our introduced VP-SDS. Technically, given the input visual prompt (assuming it is shot from the front view), we use a view-conditioned 2D diffusion model, Zero-1-to-3 [19], to transform it into left-side, right-side and backward views. Then we feed different visual prompts into VP-SDS (Eq. 5) depending on the corresponding sampled camera viewpoint. For instance, when the azimuth angle \u03b3cam \u2208[0\u25e6, 360\u25e6] of the camera position falls in the range near 180\u25e6 (0\u25e6 denotes the front view), we feed the generated back-view counterpart of the input visual prompt into Eq. 5. In this way, the inherent 3D geometry information contained in the multi-view visual prompts is encoded into the 3D representation learning through view-dependent VP-SDS, leading to better view consistency in the 3D generation. 3.3. Learning with Reward Feedback To further encourage the rendered images of the underlying 3D model to be high-fidelity and well aligned with the input visual prompt and text prompt, we devise two types of differentiable reward functions that complement the aforementioned VP-SDS objective. Human Feedback Reward. Recent practice has shown the capability of improving text-to-image models with human feedback [41]. Particularly, it first trains a reward model on a large dataset comprised of human assessments of text-image pairs. Such a reward model thus has the ability to measure the quality of the generated samples in terms of both image fidelity and image-text alignment. Consequently, it can then be used to fine-tune diffusion models to maximize the predicted scores of the reward model through differentiable reward functions, leading to better generation results. Motivated by this, we go one step further to utilize the open-sourced reward model r in ImageReward [41] for text-to-3D generation. Specifically, we introduce a human feedback reward loss as follows: \\label {eq:refl} \\mathcal {L}_{hf-reward} = \\mathbb {E}_{\\mathbf {c}}[ \\psi (\\mathbf {r}(\\mathbf {x}, y))], (6) where x = g(\u03b8; c) is a rendered image by the underlying 3D model \u03b8 from an arbitrary viewpoint c, y is the conditional text prompt and \u03c8 is a differentiable reward-to-loss map function as in [41]. Intuitively, minimizing the loss in Eq. 6 encourages the rendered image x to get a higher reward score from the reward model r, which means the underlying 3D model should update towards the refined direction where the renderings have high appearance fidelity and faithfully match the input text prompt.
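The two components above can be sketched as follows; the azimuth bins, the stand-in reward_model, and the softplus reward-to-loss map are illustrative assumptions rather than the exact choices of the paper (which uses ImageReward and its reward-to-loss map psi).

```python
import torch
import torch.nn.functional as F

def select_visual_prompt(azimuth_deg, view_prompts):
    """Pick the view-dependent visual prompt embedding for a sampled camera (sketch).

    view_prompts holds the embeddings of the input visual prompt ('front') and of
    its Zero-1-to-3 novel views ('left', 'back', 'right'); 0 degrees denotes the
    front view and the angle bins below are illustrative.
    """
    a = azimuth_deg % 360.0
    if a < 45.0 or a >= 315.0:
        return view_prompts["front"]
    if a < 135.0:
        return view_prompts["left"]
    if a < 225.0:
        return view_prompts["back"]
    return view_prompts["right"]

def hf_reward_loss(x, text_prompt, reward_model):
    """Human feedback reward loss of Eq. 6 (sketch).

    reward_model(image, text) is a differentiable scalar reward, standing in for
    ImageReward; the reward-to-loss map is illustrated with a softplus so that a
    higher reward yields a lower loss.
    """
    r = reward_model(x, text_prompt)
    return F.softplus(-r).mean()
```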
Visual Consistency Reward. Given that the above human feedback reward only takes into account the input text prompt, we further devise a visual consistency reward to fully leverage the visual prompt as well, since text prompts cannot capture all appearance details. Technically, we adopt a pre-trained self-supervised vision transformer DINO-ViT [2] to extract the visual features Fdino(v) and Fdino(x) of the input visual prompt v and rendered image x, respectively. Then we penalize the feature-wise difference between them at the visual prompt viewpoint: \\label {eq:visual_consistency} \\mathcal {L}_{vc-reward} = ||F_{dino}(\\mathbf {x})-F_{dino}(\\mathbf {v})||^2. (7) By imposing such a visual consistency loss, we encourage the underlying 3D model to adhere to the plausible shape and appearance properties conveyed by the visual prompt. 3.4. 3D Representation and Training Inspired by [17], we adopt a two-stage coarse-to-fine framework for text-to-3D generation with two different 3D scene representations. At the coarse stage, we leverage Instant-NGP [24] as the 3D representation, which is much faster to optimize compared to the vanilla NeRF [23] and can recover complex geometry. In the fine stage, we leverage DMTet as the 3D representation to further optimize a high-fidelity mesh and texture. Specifically, the 3D shape and texture represented in DMTet are first initialized from the density field and color field of the coarse stage, respectively [17]. During the optimization process in each stage, we first render images from the underlying 3D model through differentiable rasterizers at arbitrary camera poses and optimize the 3D model with a combination of losses: \\label {eq:loss_fine} \\mathcal {L}_{fine} = \\mathcal {L}_{VP-SDS} + \\lambda _1 \\mathcal {L}_{vc-reward} + \\lambda _2 \\mathcal {L}_{hf-reward}, (8) where \u03bb1, \u03bb2 are the trade-off parameters. 4. Experiments In this section, we evaluate the effectiveness of our VP3D for text-to-3D generation via extensive empirical evaluations. We first show both quantitative and qualitative results of VP3D in comparison to existing techniques on the newly released text-to-3D benchmark (T3Bench [11]). Next, we conduct ablation studies to validate each design in VP3D. Finally, we demonstrate the extended capability of VP3D for stylized text-to-3D generation. 4.1. Experimental Settings Implementation Details. In the coarse and fine stages, the underlying 3D models are both optimized for 5000 iterations using the Adam optimizer with a 0.001 learning rate. The rendering resolutions are set to 128\u00d7128 and 512\u00d7512 for the coarse and fine stage, respectively. We implement the underlying Instant-NGP and DMTet 3D representations mainly based on the Stable-DreamFusion codebase [36]. \u03bb1 is set to 0.1 in the coarse stage and 0.01 in the fine stage. \u03bb2 is linearly increased from 0.001 to 0.01 during the optimization process. The visual prompt condition weight is set to 0.5 in all experiments.
Table 1. The quantitative results of our method and baselines on T3Bench [11]. Each cell reports Quality / Alignment / Average.
Method | Single Object | Single Object with Surroundings | Multiple Objects
DreamFusion [27] | 24.9 / 24.0 / 24.4 | 19.3 / 29.8 / 24.6 | 17.3 / 14.8 / 16.1
SJC [38] | 26.3 / 23.0 / 24.7 | 17.3 / 22.3 / 19.8 | 17.7 / 5.8 / 11.7
LatentNeRF [22] | 34.2 / 32.0 / 33.1 | 23.7 / 37.5 / 30.6 | 21.7 / 19.5 / 20.6
Fantasia3D [4] | 29.2 / 23.5 / 26.4 | 21.9 / 32.0 / 27.0 | 22.7 / 14.3 / 18.5
ProlificDreamer [39] | 51.1 / 47.8 / 49.4 | 42.5 / 47.0 / 44.8 | 45.7 / 25.8 / 35.8
Magic3D [17] | 38.7 / 35.3 / 37.0 | 29.8 / 41.0 / 35.4 | 26.6 / 24.8 / 25.7
VP3D (Ours) | 54.8 / 52.2 / 53.5 | 45.4 / 50.8 / 48.1 | 49.1 / 31.5 / 40.3
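Putting the pieces together, the overall objective in Eq. 8 can be sketched as below, reusing the stand-in functions from the earlier sketches; dino denotes a frozen DINO-ViT feature extractor (also a stand-in), and the trade-off weights follow the schedule reported above.

```python
import torch

def vc_reward_loss(x_ref_view, v, dino):
    """Visual consistency reward of Eq. 7 (sketch): squared DINO feature distance
    between the rendering at the visual prompt viewpoint and the visual prompt."""
    return (dino(x_ref_view) - dino(v)).pow(2).sum()

def total_loss(x, x_t, t, eps, eps_phi, z_y, z_v, w,
               x_ref_view, v, dino, text_prompt, reward_model,
               lambda_1=0.1, lambda_2=0.01):
    """Overall objective of Eq. 8 (sketch): VP-SDS plus weighted reward terms.

    lambda_1 is 0.1 (coarse stage) / 0.01 (fine stage); lambda_2 is annealed from
    0.001 to 0.01. The visual consistency term is evaluated only at the visual
    prompt viewpoint, while VP-SDS uses arbitrary sampled cameras.
    """
    loss = vp_sds_loss(x, x_t, t, eps, eps_phi, z_y, z_v, w)      # from the VP-SDS sketch
    loss = loss + lambda_1 * vc_reward_loss(x_ref_view, v, dino)
    loss = loss + lambda_2 * hf_reward_loss(x, text_prompt, reward_model)
    return loss
```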
Evaluation Protocol. Existing text-to-3D generation works commonly examine their methods over the CLIP R-Precision score [16], which is an automated metric for the consistency of rendered images with respect to the input text. However, this text-image alignment-based metric cannot faithfully represent the overall 3D quality. For example, CLIP-based text-to-3D methods can also achieve high CLIP R-Precision scores even if the resulting 3D scenes are unrealistic and severely distorted [27]. Taking this into account, we instead conduct experiments on a newly open-sourced benchmark: T3Bench [11], which is the first comprehensive text-to-3D benchmark containing 300 diverse text prompts of three categories (single object, single object with surroundings, and multiple objects). T3Bench provides two automatic metrics (quality and alignment) based on the rendered multi-view images to assess the subjective quality and text alignment. The quality metric utilizes a combination of multi-view text-image scores and regional convolution to effectively identify quality and view inconsistency. The alignment metric employs a 3D captioning model and a Large Language Model (i.e., GPT-4) to assess text-3D consistency. Following this, we also leverage the quality and alignment metrics to quantitatively compare our VP3D against baseline methods. Baselines. To evaluate our method, we compare our VP3D with six state-of-the-art text-to-3D generation methods: DreamFusion [27], SJC [38], LatentNeRF [22], Fantasia3D [4], Magic3D [17] and ProlificDreamer [39]. Specifically, DreamFusion [27] first introduces score distillation sampling (SDS) that enables leveraging a 2D diffusion model (Imagen [14]) to optimize a NeRF [23]. SJC [38] concurrently addresses the out-of-distribution problem in SDS and utilizes an open-sourced diffusion model (Stable Diffusion) to optimize a voxel NeRF. Latent-NeRF [22] first brings NeRF to the latent space to harmonize with latent diffusion models, then refines it in pixel space. Magic3D [17] extends DreamFusion with a coarse-to-fine framework that first optimizes a low-resolution NeRF model and then a high-resolution DMTet model via SDS. Fantasia3D [4] disentangles the SDS-based 3D learning into geometry and appearance learning. ProlificDreamer [39] upgrades DreamFusion by a variational score distillation (VSD) loss that treats the underlying 3D scene as a random variable instead of a single point as in SDS. 4.2. Quantitative Results The quantitative performance comparisons of different methods for text-to-3D generation are summarized in Table 1. Overall, our VP3D consistently achieves better performance than existing techniques across all evaluation metrics and prompt categories. Remarkably, VP3D achieves an absolute quality-alignment average score improvement of 4.1%, 3.3%, and 4.5% against the best competitor ProlificDreamer across the three text prompt categories, respectively, which validates the effectiveness of our overall proposals. More importantly, while VP3D employs the same NeRF & DMTet 3D representation and coarse-to-fine training scheme as the baseline method Magic3D, it significantly outperforms Magic3D by achieving 53.5%, 48.1%, and 40.3% average scores, representing a substantial improvement over Magic3D\u2019s average scores of 37.0%, 35.4%, and 25.7%. The results generally highlight the key advantage of introducing visual prompts in lifting 2D diffusion models to perform text-to-3D generation. Specifically, DreamFusion and SJC enable the zero-shot learning of implicit 3D models by distilling prior knowledge from 2D diffusion models. However, the generated 3D scenes have relatively low quality and alignment scores, especially in complex scenarios where the text prompt contains multiple objects or surroundings.
Latent-NeRF employs score distillation in the latent space and then moves back to pixel space to further refine the 3D model, leading to better results. The aforementioned three methods only utilize implicit 3D representations (NeRFs). In contrast, Magic3D adopts the textured mesh DMTet as its 3D representation to enable high-resolution optimization and exhibits better performance across all three prompt categories. Fantasia3D also capitalizes on DMTet for geometry learning and then leverages BRDF for appearance learning in a disentangled manner. While Fantasia3D achieves better average scores than DreamFusion and SJC, it fails to create high-fidelity results in complex scenes (e.g., \u201cmultiple objects\u201d). ProlificDreamer further boosts the performance by training an additional diffusion model during the optimization process to realize a principled particle-based variational score distillation loss. However, our VP3D still outperforms ProlificDreamer across all evaluation metrics and prompt sets, which confirms the effectiveness of our VP3D. Figure 3. Comparisons on qualitative results of our VP3D with other text-to-3D techniques on T3Bench [11]. The prompts are (a) \u201cA fuzzy pink flamingo lawn ornament\u201d, (b) \u201cA blooming potted orchid with purple flowers\u201d, (c) \u201cA blue butterfly on a pink flower\u201d, (d) \u201cA lighthouse on a rocky shore\u201d, (e) \u201cHot popcorn jump out from the red striped popcorn maker\u201d, (f) \u201cA chef is making pizza dough in the kitchen\u201d. (a-b), (c-d), (e-f) belong to the Single Object, Single Object with Surr and Multi Objects categories in T3Bench, respectively. 4.3. Qualitative Results The qualitative comparisons for text-to-3D generation are presented in Figure 3. As can be seen, our VP3D generally produces superior 3D scenes with plausible geometry and realistic textures when compared with baseline methods. Specifically, DreamFusion suffers from a severe over-saturation problem and has difficulty generating complex geometry. Magic3D and Latent-NeRF slightly alleviate these issues through their higher-resolution DMTet and pixel-space refinement, respectively. While Fantasia3D and SJC can generate richer textures than DreamFusion, the geometric quality of the generated 3D scenes falls short of expectations. Notably, ProlificDreamer trains an additional diffusion model during the optimization process to perform variational score distillation (VSD) instead of SDS, achieving satisfactory single-object results. However, the use of VSD at times introduces excessive irrelevant information or geometry noise in more complex scenarios. In contrast, we can clearly observe that the generated 3D scenes by VP3D faithfully match the input text prompt with plausible geometry and realistic appearance, which demonstrates the superiority of VP3D over state-of-the-art methods and its ability to generate high-quality 3D content. Figure 4. Stylized text-to-3D generation results of our VP3D (text prompt: \u201ca rabbit, high detail 3d model\u201d). 4.4. Ablation Study Here we investigate how each design in our VP3D influences the overall generation performance. We depict the qualitative results of each ablated run in Figure 5. LSDS is our baseline model that employs the vanilla score distillation sampling loss. As can be seen, the generated 3D scene is over-saturated and geometrically unreasonable.
Instead, when LVP-SDS is employed, the generation quality is clearly enhanced in terms of both geometry and appearance. This highlights the critical effectiveness of our proposed visual-prompted score distillation sampling. Nevertheless, the resulting 3D scenes produced by LVP-SDS are still not satisfying enough. By utilizing the additional visual consistency and human feedback reward functions Lvc-reward (Eq. 7) and Lhf-reward (Eq. 6), the generation quality is gradually improved. The results basically validate the effectiveness of these two complementary factors. Figure 5. Comparisons on qualitative results by using different ablated runs of our VP3D. The text prompts are (a) \u201cA broken tablespoon lies next to an empty sugar bowl\u201d and (b) \u201cA chameleon perched on a tree branch\u201d. 4.5. Extension to Stylized Text-to-3D Generation In this section, we demonstrate that another advantage of our VP3D is its remarkable versatility in 3D generation, as it can be readily adapted to a new task of stylized text-to-3D generation. The main difference is that the visual prompt is no longer generated from the text prompt but from a user-specified reference image. We also empirically discard the loss in Eq. 6 to eliminate the strict text-image alignment constraint. In this way, our VP3D can integrate the visual cues contained in the reference image into text-to-3D generation and produce a stylized 3D asset. This asset not only semantically aligns with the text prompt but also reflects the visual and geometry properties of the reference image. Figure 4 shows our stylized text-to-3D generation results. Our VP3D can generate diverse and stylized 3D assets by giving different visual prompts to the same text prompt. As shown in Figure 4 (a-b), the generated result is semantically a rabbit that adheres to the text prompt but also inherits some visual cues of the visual prompt. To be clear, the generated 3D rabbits have a somewhat consistent geometric pose and appearance texture with the object in the visual prompt. For example, in Figure 4 (b), the generated rabbit mirrors the \u201chugging pose\u201d of the reference image and also has the same style of \u201ccrescent-shaped eyebrows\u201d and \u201cyellow plaid jacket\u201d as in the reference image. In Figure 4 (c-d), we showcase the versatility of our VP3D by seamlessly blending styles from different visual prompts. Taking Figure 4 (d) as an instance, we use the leopard image as the visual prompt in the coarse stage and then replace it with an oil painting image in the fine stage. Our VP3D finally results in a 3D rabbit that not only has a consistent pose with the leopard but also a colorful oil-painting-style texture. The stylized 3D generation ability distinguishes our VP3D from previous text-to-3D approaches and can lead to more creative and diverse 3D content creation. 5. Conclusion In this work, we propose VP3D, a new paradigm for text-to-3D generation by leveraging 2D visual prompts. We first capitalize on 2D diffusion models to generate a high-quality image from the input text. This image then acts as a visual prompt to strengthen the 3D model learning with our devised visual-prompted score distillation sampling. Meanwhile, we introduce additional human feedback and visual consistency reward functions to encourage the semantic and appearance consistency between the 3D model and the input visual & text prompt.
Both qualitative and quantitative comparisons on the T3Bench benchmark demonstrate the superiority of our VP3D over existing SOTA techniques." + }, + { + "url": "http://arxiv.org/abs/2303.13873v3", + "title": "Fantasia3D: Disentangling Geometry and Appearance for High-quality Text-to-3D Content Creation", + "abstract": "Automatic 3D content creation has achieved rapid progress recently due to the\navailability of pre-trained, large language models and image diffusion models,\nforming the emerging topic of text-to-3D content creation. Existing text-to-3D\nmethods commonly use implicit scene representations, which couple the geometry\nand appearance via volume rendering and are suboptimal in terms of recovering\nfiner geometries and achieving photorealistic rendering; consequently, they are\nless effective for generating high-quality 3D assets. In this work, we propose\na new method of Fantasia3D for high-quality text-to-3D content creation. Key to\nFantasia3D is the disentangled modeling and learning of geometry and\nappearance. For geometry learning, we rely on a hybrid scene representation,\nand propose to encode surface normal extracted from the representation as the\ninput of the image diffusion model. For appearance modeling, we introduce the\nspatially varying bidirectional reflectance distribution function (BRDF) into\nthe text-to-3D task, and learn the surface material for photorealistic\nrendering of the generated surface. Our disentangled framework is more\ncompatible with popular graphics engines, supporting relighting, editing, and\nphysical simulation of the generated 3D assets. We conduct thorough experiments\nthat show the advantages of our method over existing ones under different\ntext-to-3D task settings. Project page and source codes:\nhttps://fantasia3d.github.io/.", + "authors": "Rui Chen, Yongwei Chen, Ningxin Jiao, Kui Jia", + "published": "2023-03-24", + "updated": "2023-09-27", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2212.00774v1", + "title": "Score Jacobian Chaining: Lifting Pretrained 2D Diffusion Models for 3D Generation", + "abstract": "A diffusion model learns to predict a vector field of gradients. We propose\nto apply chain rule on the learned gradients, and back-propagate the score of a\ndiffusion model through the Jacobian of a differentiable renderer, which we\ninstantiate to be a voxel radiance field. This setup aggregates 2D scores at\nmultiple camera viewpoints into a 3D score, and repurposes a pretrained 2D\nmodel for 3D data generation. We identify a technical challenge of distribution\nmismatch that arises in this application, and propose a novel estimation\nmechanism to resolve it. We run our algorithm on several off-the-shelf\ndiffusion image generative models, including the recently released Stable\nDiffusion trained on the large-scale LAION dataset.", + "authors": "Haochen Wang, Xiaodan Du, Jiahao Li, Raymond A. Yeh, Greg Shakhnarovich", + "published": "2022-12-01", + "updated": "2022-12-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.10440v2", + "title": "Magic3D: High-Resolution Text-to-3D Content Creation", + "abstract": "DreamFusion has recently demonstrated the utility of a pre-trained\ntext-to-image diffusion model to optimize Neural Radiance Fields (NeRF),\nachieving remarkable text-to-3D synthesis results. 
However, the method has two\ninherent limitations: (a) extremely slow optimization of NeRF and (b)\nlow-resolution image space supervision on NeRF, leading to low-quality 3D\nmodels with a long processing time. In this paper, we address these limitations\nby utilizing a two-stage optimization framework. First, we obtain a coarse\nmodel using a low-resolution diffusion prior and accelerate with a sparse 3D\nhash grid structure. Using the coarse representation as the initialization, we\nfurther optimize a textured 3D mesh model with an efficient differentiable\nrenderer interacting with a high-resolution latent diffusion model. Our method,\ndubbed Magic3D, can create high quality 3D mesh models in 40 minutes, which is\n2x faster than DreamFusion (reportedly taking 1.5 hours on average), while also\nachieving higher resolution. User studies show 61.7% raters to prefer our\napproach over DreamFusion. Together with the image-conditioned generation\ncapabilities, we provide users with new ways to control 3D synthesis, opening\nup new avenues to various creative applications.", + "authors": "Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu, Tsung-Yi Lin", + "published": "2022-11-18", + "updated": "2023-03-25", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.GR", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.16431v2", + "title": "NeuralLift-360: Lifting An In-the-wild 2D Photo to A 3D Object with 360\u00b0 Views", + "abstract": "Virtual reality and augmented reality (XR) bring increasing demand for 3D\ncontent. However, creating high-quality 3D content requires tedious work that a\nhuman expert must do. In this work, we study the challenging task of lifting a\nsingle image to a 3D object and, for the first time, demonstrate the ability to\ngenerate a plausible 3D object with 360{\\deg} views that correspond well with\nthe given reference image. By conditioning on the reference image, our model\ncan fulfill the everlasting curiosity for synthesizing novel views of objects\nfrom images. Our technique sheds light on a promising direction of easing the\nworkflows for 3D artists and XR designers. We propose a novel framework, dubbed\nNeuralLift-360, that utilizes a depth-aware neural radiance representation\n(NeRF) and learns to craft the scene guided by denoising diffusion models. By\nintroducing a ranking loss, our NeuralLift-360 can be guided with rough depth\nestimation in the wild. We also adopt a CLIP-guided sampling strategy for the\ndiffusion prior to provide coherent guidance. Extensive experiments demonstrate\nthat our NeuralLift-360 significantly outperforms existing state-of-the-art\nbaselines. Project page: https://vita-group.github.io/NeuralLift-360/", + "authors": "Dejia Xu, Yifan Jiang, Peihao Wang, Zhiwen Fan, Yi Wang, Zhangyang Wang", + "published": "2022-11-29", + "updated": "2023-04-03", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2306.17843v2", + "title": "Magic123: One Image to High-Quality 3D Object Generation Using Both 2D and 3D Diffusion Priors", + "abstract": "We present Magic123, a two-stage coarse-to-fine approach for high-quality,\ntextured 3D meshes generation from a single unposed image in the wild using\nboth2D and 3D priors. In the first stage, we optimize a neural radiance field\nto produce a coarse geometry. 
In the second stage, we adopt a memory-efficient\ndifferentiable mesh representation to yield a high-resolution mesh with a\nvisually appealing texture. In both stages, the 3D content is learned through\nreference view supervision and novel views guided by a combination of 2D and 3D\ndiffusion priors. We introduce a single trade-off parameter between the 2D and\n3D priors to control exploration (more imaginative) and exploitation (more\nprecise) of the generated geometry. Additionally, we employ textual inversion\nand monocular depth regularization to encourage consistent appearances across\nviews and to prevent degenerate solutions, respectively. Magic123 demonstrates\na significant improvement over previous image-to-3D techniques, as validated\nthrough extensive experiments on synthetic benchmarks and diverse real-world\nimages. Our code, models, and generated 3D assets are available at\nhttps://github.com/guochengqian/Magic123.", + "authors": "Guocheng Qian, Jinjie Mai, Abdullah Hamdi, Jian Ren, Aliaksandr Siarohin, Bing Li, Hsin-Ying Lee, Ivan Skorokhodov, Peter Wonka, Sergey Tulyakov, Bernard Ghanem", + "published": "2023-06-30", + "updated": "2023-07-23", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2306.16928v1", + "title": "One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization", + "abstract": "Single image 3D reconstruction is an important but challenging task that\nrequires extensive knowledge of our natural world. Many existing methods solve\nthis problem by optimizing a neural radiance field under the guidance of 2D\ndiffusion models but suffer from lengthy optimization time, 3D inconsistency\nresults, and poor geometry. In this work, we propose a novel method that takes\na single image of any object as input and generates a full 360-degree 3D\ntextured mesh in a single feed-forward pass. Given a single image, we first use\na view-conditioned 2D diffusion model, Zero123, to generate multi-view images\nfor the input view, and then aim to lift them up to 3D space. Since traditional\nreconstruction methods struggle with inconsistent multi-view predictions, we\nbuild our 3D reconstruction module upon an SDF-based generalizable neural\nsurface reconstruction method and propose several critical training strategies\nto enable the reconstruction of 360-degree meshes. Without costly\noptimizations, our method reconstructs 3D shapes in significantly less time\nthan existing methods. Moreover, our method favors better geometry, generates\nmore 3D consistent results, and adheres more closely to the input image. We\nevaluate our approach on both synthetic data and in-the-wild images and\ndemonstrate its superiority in terms of both mesh quality and runtime. In\naddition, our approach can seamlessly support the text-to-3D task by\nintegrating with off-the-shelf text-to-image diffusion models.", + "authors": "Minghua Liu, Chao Xu, Haian Jin, Linghao Chen, Mukund Varma T, Zexiang Xu, Hao Su", + "published": "2023-06-29", + "updated": "2023-06-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2311.05461v1", + "title": "Control3D: Towards Controllable Text-to-3D Generation", + "abstract": "Recent remarkable advances in large-scale text-to-image diffusion models have\ninspired a significant breakthrough in text-to-3D generation, pursuing 3D\ncontent creation solely from a given text prompt. 
However, existing text-to-3D\ntechniques lack a crucial ability in the creative process: interactively\ncontrol and shape the synthetic 3D contents according to users' desired\nspecifications (e.g., sketch). To alleviate this issue, we present the first\nattempt for text-to-3D generation conditioning on the additional hand-drawn\nsketch, namely Control3D, which enhances controllability for users. In\nparticular, a 2D conditioned diffusion model (ControlNet) is remoulded to guide\nthe learning of 3D scene parameterized as NeRF, encouraging each view of 3D\nscene aligned with the given text prompt and hand-drawn sketch. Moreover, we\nexploit a pre-trained differentiable photo-to-sketch model to directly estimate\nthe sketch of the rendered image over synthetic 3D scene. Such estimated sketch\nalong with each sampled view is further enforced to be geometrically consistent\nwith the given sketch, pursuing better controllable text-to-3D generation.\nThrough extensive experiments, we demonstrate that our proposal can generate\naccurate and faithful 3D scenes that align closely with the input text prompts\nand sketches.", + "authors": "Yang Chen, Yingwei Pan, Yehao Li, Ting Yao, Tao Mei", + "published": "2023-11-09", + "updated": "2023-11-09", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.MM" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.07600v1", + "title": "Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures", + "abstract": "Text-guided image generation has progressed rapidly in recent years,\ninspiring major breakthroughs in text-guided shape generation. Recently, it has\nbeen shown that using score distillation, one can successfully text-guide a\nNeRF model to generate a 3D object. We adapt the score distillation to the\npublicly available, and computationally efficient, Latent Diffusion Models,\nwhich apply the entire diffusion process in a compact latent space of a\npretrained autoencoder. As NeRFs operate in image space, a naive solution for\nguiding them with latent score distillation would require encoding to the\nlatent space at each guidance step. Instead, we propose to bring the NeRF to\nthe latent space, resulting in a Latent-NeRF. Analyzing our Latent-NeRF, we\nshow that while Text-to-3D models can generate impressive results, they are\ninherently unconstrained and may lack the ability to guide or enforce a\nspecific 3D structure. To assist and direct the 3D generation, we propose to\nguide our Latent-NeRF using a Sketch-Shape: an abstract geometry that defines\nthe coarse structure of the desired object. Then, we present means to integrate\nsuch a constraint directly into a Latent-NeRF. This unique combination of text\nand shape guidance allows for increased control over the generation process. We\nalso show that latent score distillation can be successfully applied directly\non 3D meshes. This allows for generating high-quality textures on a given\ngeometry. Our experiments validate the power of our different forms of guidance\nand the efficiency of using latent rendering. 
Implementation is available at\nhttps://github.com/eladrich/latent-nerf", + "authors": "Gal Metzer, Elad Richardson, Or Patashnik, Raja Giryes, Daniel Cohen-Or", + "published": "2022-11-14", + "updated": "2022-11-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.GR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2209.14988v1", + "title": "DreamFusion: Text-to-3D using 2D Diffusion", + "abstract": "Recent breakthroughs in text-to-image synthesis have been driven by diffusion\nmodels trained on billions of image-text pairs. Adapting this approach to 3D\nsynthesis would require large-scale datasets of labeled 3D data and efficient\narchitectures for denoising 3D data, neither of which currently exist. In this\nwork, we circumvent these limitations by using a pretrained 2D text-to-image\ndiffusion model to perform text-to-3D synthesis. We introduce a loss based on\nprobability density distillation that enables the use of a 2D diffusion model\nas a prior for optimization of a parametric image generator. Using this loss in\na DeepDream-like procedure, we optimize a randomly-initialized 3D model (a\nNeural Radiance Field, or NeRF) via gradient descent such that its 2D\nrenderings from random angles achieve a low loss. The resulting 3D model of the\ngiven text can be viewed from any angle, relit by arbitrary illumination, or\ncomposited into any 3D environment. Our approach requires no 3D training data\nand no modifications to the image diffusion model, demonstrating the\neffectiveness of pretrained image diffusion models as priors.", + "authors": "Ben Poole, Ajay Jain, Jonathan T. Barron, Ben Mildenhall", + "published": "2022-09-29", + "updated": "2022-09-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2302.10663v2", + "title": "RealFusion: 360\u00b0 Reconstruction of Any Object from a Single Image", + "abstract": "We consider the problem of reconstructing a full 360{\\deg} photographic model\nof an object from a single image of it. We do so by fitting a neural radiance\nfield to the image, but find this problem to be severely ill-posed. We thus\ntake an off-the-self conditional image generator based on diffusion and\nengineer a prompt that encourages it to \"dream up\" novel views of the object.\nUsing an approach inspired by DreamFields and DreamFusion, we fuse the given\ninput view, the conditional prior, and other regularizers in a final,\nconsistent reconstruction. We demonstrate state-of-the-art reconstruction\nresults on benchmark images when compared to prior methods for monocular 3D\nreconstruction of objects. Qualitatively, our reconstructions provide a\nfaithful match of the input view and a plausible extrapolation of its\nappearance and 3D shape, including to the side of the object not visible in the\nimage.", + "authors": "Luke Melas-Kyriazi, Christian Rupprecht, Iro Laina, Andrea Vedaldi", + "published": "2023-02-21", + "updated": "2023-02-23", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2212.03267v1", + "title": "NeRDi: Single-View NeRF Synthesis with Language-Guided Diffusion as General Image Priors", + "abstract": "2D-to-3D reconstruction is an ill-posed problem, yet humans are good at\nsolving this problem due to their prior knowledge of the 3D world developed\nover years. 
Driven by this observation, we propose NeRDi, a single-view NeRF\nsynthesis framework with general image priors from 2D diffusion models.\nFormulating single-view reconstruction as an image-conditioned 3D generation\nproblem, we optimize the NeRF representations by minimizing a diffusion loss on\nits arbitrary view renderings with a pretrained image diffusion model under the\ninput-view constraint. We leverage off-the-shelf vision-language models and\nintroduce a two-section language guidance as conditioning inputs to the\ndiffusion model. This is essentially helpful for improving multiview content\ncoherence as it narrows down the general image prior conditioned on the\nsemantic and visual features of the single-view input image. Additionally, we\nintroduce a geometric loss based on estimated depth maps to regularize the\nunderlying 3D geometry of the NeRF. Experimental results on the DTU MVS dataset\nshow that our method can synthesize novel views with higher quality even\ncompared to existing methods trained on this dataset. We also demonstrate our\ngeneralizability in zero-shot NeRF synthesis for in-the-wild images.", + "authors": "Congyue Deng, Chiyu \"Max'' Jiang, Charles R. Qi, Xinchen Yan, Yin Zhou, Leonidas Guibas, Dragomir Anguelov", + "published": "2022-12-06", + "updated": "2022-12-06", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2311.05464v1", + "title": "3DStyle-Diffusion: Pursuing Fine-grained Text-driven 3D Stylization with 2D Diffusion Models", + "abstract": "3D content creation via text-driven stylization has played a fundamental\nchallenge to multimedia and graphics community. Recent advances of cross-modal\nfoundation models (e.g., CLIP) have made this problem feasible. Those\napproaches commonly leverage CLIP to align the holistic semantics of stylized\nmesh with the given text prompt. Nevertheless, it is not trivial to enable more\ncontrollable stylization of fine-grained details in 3D meshes solely based on\nsuch semantic-level cross-modal supervision. In this work, we propose a new\n3DStyle-Diffusion model that triggers fine-grained stylization of 3D meshes\nwith additional controllable appearance and geometric guidance from 2D\nDiffusion models. Technically, 3DStyle-Diffusion first parameterizes the\ntexture of 3D mesh into reflectance properties and scene lighting using\nimplicit MLP networks. Meanwhile, an accurate depth map of each sampled view is\nachieved conditioned on 3D mesh. Then, 3DStyle-Diffusion leverages a\npre-trained controllable 2D Diffusion model to guide the learning of rendered\nimages, encouraging the synthesized image of each view semantically aligned\nwith text prompt and geometrically consistent with depth map. This way\nelegantly integrates both image rendering via implicit MLP networks and\ndiffusion process of image synthesis in an end-to-end fashion, enabling a\nhigh-quality fine-grained stylization of 3D meshes. We also build a new dataset\nderived from Objaverse and the evaluation protocol for this task. Through both\nqualitative and quantitative experiments, we validate the capability of our\n3DStyle-Diffusion. 
Source code and data are available at\n\\url{https://github.com/yanghb22-fdu/3DStyle-Diffusion-Official}.", + "authors": "Haibo Yang, Yang Chen, Yingwei Pan, Ting Yao, Zhineng Chen, Tao Mei", + "published": "2023-11-09", + "updated": "2023-11-09", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.MM" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.16213v2", + "title": "ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation", + "abstract": "Score distillation sampling (SDS) has shown great promise in text-to-3D\ngeneration by distilling pretrained large-scale text-to-image diffusion models,\nbut suffers from over-saturation, over-smoothing, and low-diversity problems.\nIn this work, we propose to model the 3D parameter as a random variable instead\nof a constant as in SDS and present variational score distillation (VSD), a\nprincipled particle-based variational framework to explain and address the\naforementioned issues in text-to-3D generation. We show that SDS is a special\ncase of VSD and leads to poor samples with both small and large CFG weights. In\ncomparison, VSD works well with various CFG weights as ancestral sampling from\ndiffusion models and simultaneously improves the diversity and sample quality\nwith a common CFG weight (i.e., $7.5$). We further present various improvements\nin the design space for text-to-3D such as distillation time schedule and\ndensity initialization, which are orthogonal to the distillation algorithm yet\nnot well explored. Our overall approach, dubbed ProlificDreamer, can generate\nhigh rendering resolution (i.e., $512\\times512$) and high-fidelity NeRF with\nrich structure and complex effects (e.g., smoke and drops). Further,\ninitialized from NeRF, meshes fine-tuned by VSD are meticulously detailed and\nphoto-realistic. Project page and codes:\nhttps://ml.cs.tsinghua.edu.cn/prolificdreamer/", + "authors": "Zhengyi Wang, Cheng Lu, Yikai Wang, Fan Bao, Chongxuan Li, Hang Su, Jun Zhu", + "published": "2023-05-25", + "updated": "2023-11-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2303.14184v2", + "title": "Make-It-3D: High-Fidelity 3D Creation from A Single Image with Diffusion Prior", + "abstract": "In this work, we investigate the problem of creating high-fidelity 3D content\nfrom only a single image. This is inherently challenging: it essentially\ninvolves estimating the underlying 3D geometry while simultaneously\nhallucinating unseen textures. To address this challenge, we leverage prior\nknowledge from a well-trained 2D diffusion model to act as 3D-aware supervision\nfor 3D creation. Our approach, Make-It-3D, employs a two-stage optimization\npipeline: the first stage optimizes a neural radiance field by incorporating\nconstraints from the reference image at the frontal view and diffusion prior at\nnovel views; the second stage transforms the coarse model into textured point\nclouds and further elevates the realism with diffusion prior while leveraging\nthe high-quality textures from the reference image. Extensive experiments\ndemonstrate that our method outperforms prior works by a large margin,\nresulting in faithful reconstructions and impressive visual quality. 
Our method\npresents the first attempt to achieve high-quality 3D creation from a single\nimage for general objects and enables various applications such as text-to-3D\ncreation and texture editing.", + "authors": "Junshu Tang, Tengfei Wang, Bo Zhang, Ting Zhang, Ran Yi, Lizhuang Ma, Dong Chen", + "published": "2023-03-24", + "updated": "2023-04-03", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2006.01683v1", + "title": "Channel Distillation: Channel-Wise Attention for Knowledge Distillation", + "abstract": "Knowledge distillation is to transfer the knowledge from the data learned by\nthe teacher network to the student network, so that the student has the\nadvantage of less parameters and less calculations, and the accuracy is close\nto the teacher. In this paper, we propose a new distillation method, which\ncontains two transfer distillation strategies and a loss decay strategy. The\nfirst transfer strategy is based on channel-wise attention, called Channel\nDistillation (CD). CD transfers the channel information from the teacher to the\nstudent. The second is Guided Knowledge Distillation (GKD). Unlike Knowledge\nDistillation (KD), which allows the student to mimic each sample's prediction\ndistribution of the teacher, GKD only enables the student to mimic the correct\noutput of the teacher. The last part is Early Decay Teacher (EDT). During the\ntraining process, we gradually decay the weight of the distillation loss. The\npurpose is to enable the student to gradually control the optimization rather\nthan the teacher. Our proposed method is evaluated on ImageNet and CIFAR100. On\nImageNet, we achieve 27.68% of top-1 error with ResNet18, which outperforms\nstate-of-the-art methods. On CIFAR100, we achieve surprising result that the\nstudent outperforms the teacher. Code is available at\nhttps://github.com/zhouzaida/channel-distillation.", + "authors": "Zaida Zhou, Chaoran Zhuge, Xinwei Guan, Wen Liu", + "published": "2020-06-02", + "updated": "2020-06-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.01392v1", + "title": "No-go theorem for probabilistic one-way secret-key distillation", + "abstract": "The probabilistic one-way distillable secret key is equal to the largest\nexpected rate at which perfect secret key bits can be probabilistically\ndistilled from a bipartite state by means of local operations and one-way\nclassical communication. Here we define the set of super two-extendible states\nand prove that an arbitrary state in this set cannot be used for probabilistic\none-way secret-key distillation. This broad class of states includes both\nerased states and all full-rank states. Comparing the probabilistic one-way\ndistillable secret key with the more commonly studied approximate one-way\ndistillable secret key, our results demonstrate an extreme gap between them for\nmany states of interest, with the approximate one-way distillable secret key\nbeing much larger. Our findings naturally extend to probabilistic one-way\nentanglement distillation, with similar conclusions.", + "authors": "Vishal Singh, Mark M. 
Wilde", + "published": "2024-04-01", + "updated": "2024-04-01", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph", + "cs.IT", + "math.IT" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2305.18381v3", + "title": "Distill Gold from Massive Ores: Efficient Dataset Distillation via Critical Samples Selection", + "abstract": "Data-efficient learning has garnered significant attention, especially given\nthe current trend of large multi-modal models. Recently, dataset distillation\nbecomes an effective approach for data-efficiency; however, the distillation\nprocess itself can still be inefficient. In this work, we model the dataset\ndistillation task within the context of information transport. By observing the\nsubstantial data redundancy inherent in the distillation, we argue to put more\nemphasis on the samples' utility for the distillation task. We introduce and\nvalidate a family of data utility estimators and optimal data selection methods\nto exploit the most valuable samples. This strategy significantly reduces the\ntraining costs and extends various existing distillation algorithms to larger\nand more diversified datasets, e.g., in some cases only 0.04% training data is\nsufficient for comparable distillation performance. Our method consistently\nenhances the distillation algorithms, even on much larger-scale and more\nheterogeneous datasets, e.g. ImageNet-1K and Kinetics-400. This paradigm opens\nup new avenues in the dynamics of distillation and paves the way for efficient\ndataset distillation. Our code is available on\nhttps://github.com/silicx/GoldFromOres .", + "authors": "Yue Xu, Yong-Lu Li, Kaitong Cui, Ziyu Wang, Cewu Lu, Yu-Wing Tai, Chi-Keung Tang", + "published": "2023-05-28", + "updated": "2023-11-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2211.08071v2", + "title": "Knowledge Distillation for Detection Transformer with Consistent Distillation Points Sampling", + "abstract": "DETR is a novel end-to-end transformer architecture object detector, which\nsignificantly outperforms classic detectors when scaling up the model size. In\nthis paper, we focus on the compression of DETR with knowledge distillation.\nWhile knowledge distillation has been well-studied in classic detectors, there\nis a lack of researches on how to make it work effectively on DETR. We first\nprovide experimental and theoretical analysis to point out that the main\nchallenge in DETR distillation is the lack of consistent distillation points.\nDistillation points refer to the corresponding inputs of the predictions for\nstudent to mimic, and reliable distillation requires sufficient distillation\npoints which are consistent between teacher and student. Based on this\nobservation, we propose a general knowledge distillation paradigm for\nDETR(KD-DETR) with consistent distillation points sampling. Specifically, we\ndecouple detection and distillation tasks by introducing a set of specialized\nobject queries to construct distillation points. In this paradigm, we further\npropose a general-to-specific distillation points sampling strategy to explore\nthe extensibility of KD-DETR. Extensive experiments on different DETR\narchitectures with various scales of backbones and transformer layers validate\nthe effectiveness and generalization of KD-DETR. 
KD-DETR boosts the performance\nof DAB-DETR with ResNet-18 and ResNet-50 backbone to 41.4$\\%$, 45.7$\\%$ mAP,\nrespectively, which are 5.2$\\%$, 3.5$\\%$ higher than the baseline, and\nResNet-50 even surpasses the teacher model by $2.2\\%$.", + "authors": "Yu Wang, Xin Li, Shengzhao Wen, Fukui Yang, Wanping Zhang, Gang Zhang, Haocheng Feng, Junyu Han, Errui Ding", + "published": "2022-11-15", + "updated": "2022-11-16", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/0708.3699v2", + "title": "Convolutional Entanglement Distillation", + "abstract": "We develop a theory of entanglement distillation that exploits a\nconvolutional coding structure. We provide a method for converting an arbitrary\nclassical binary or quaternary convolutional code into a convolutional\nentanglement distillation protocol. The imported classical convolutional code\ndoes not have to be dual-containing or self-orthogonal. The yield and\nerror-correcting properties of such a protocol depend respectively on the rate\nand error-correcting properties of the imported classical convolutional code. A\nconvolutional entanglement distillation protocol has several other benefits.\nTwo parties sharing noisy ebits can distill noiseless ebits ``online'' as they\nacquire more noisy ebits. Distillation yield is high and decoding complexity is\nsimple for a convolutional entanglement distillation protocol. Our theory of\nconvolutional entanglement distillation reduces the problem of finding a good\nconvolutional entanglement distillation protocol to the well-established\nproblem of finding a good classical convolutional code.", + "authors": "Mark M. Wilde, Hari Krovi, Todd A. Brun", + "published": "2007-08-28", + "updated": "2007-09-19", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph", + "cs.IT", + "math.IT" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2308.07719v1", + "title": "The coherent measurement cost of coherence distillation", + "abstract": "Quantum coherence is an indispensable resource for quantum technological\napplications. It is known to be distillable from a noisy form using operations\nthat cannot create coherence. However, distillation exacts a hidden coherent\nmeasurement cost, whose extent has not previously been estimated. Here we show\nthat this cost (quantified by an equivalent number of Hadamard measurements) is\nrelated to what we call the irretrievable coherence: the difference between the\ncoherence of formation and the distillable coherence. We conjecture (and make\npartial progress towards proving) that when distilling from many copies of a\ngiven noisy coherent state, the coherent measurement cost scales extensively in\nthe number of copies, at an asymptotic rate exactly equalling the input's\nirretrievable coherence. This cost applies to any application whereof coherence\ndistillation is an incidental outcome (e.g. 
incoherent randomness extraction),\nbut the implications are more dramatic if pure coherence is the only desired\noutcome: the measurement cost may often be higher than the distilled yield, in\nwhich case coherence should rather be prepared afresh than distilled from a\nnoisy input.", + "authors": "Varun Narasimhachar", + "published": "2023-08-15", + "updated": "2023-08-15", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2309.09920v1", + "title": "Distilling HuBERT with LSTMs via Decoupled Knowledge Distillation", + "abstract": "Much research effort is being applied to the task of compressing the\nknowledge of self-supervised models, which are powerful, yet large and memory\nconsuming. In this work, we show that the original method of knowledge\ndistillation (and its more recently proposed extension, decoupled knowledge\ndistillation) can be applied to the task of distilling HuBERT. In contrast to\nmethods that focus on distilling internal features, this allows for more\nfreedom in the network architecture of the compressed model. We thus propose to\ndistill HuBERT's Transformer layers into an LSTM-based distilled model that\nreduces the number of parameters even below DistilHuBERT and at the same time\nshows improved performance in automatic speech recognition.", + "authors": "Danilo de Oliveira, Timo Gerkmann", + "published": "2023-09-18", + "updated": "2023-09-18", + "primary_cat": "eess.AS", + "cats": [ + "eess.AS", + "cs.LG", + "cs.SD", + "eess.SP" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.04057v1", + "title": "Score identity Distillation: Exponentially Fast Distillation of Pretrained Diffusion Models for One-Step Generation", + "abstract": "We introduce Score identity Distillation (SiD), an innovative data-free\nmethod that distills the generative capabilities of pretrained diffusion models\ninto a single-step generator. SiD not only facilitates an exponentially fast\nreduction in Fr\\'echet inception distance (FID) during distillation but also\napproaches or even exceeds the FID performance of the original teacher\ndiffusion models. By reformulating forward diffusion processes as semi-implicit\ndistributions, we leverage three score-related identities to create an\ninnovative loss mechanism. This mechanism achieves rapid FID reduction by\ntraining the generator using its own synthesized images, eliminating the need\nfor real data or reverse-diffusion-based generation, all accomplished within\nsignificantly shortened generation time. Upon evaluation across four benchmark\ndatasets, the SiD algorithm demonstrates high iteration efficiency during\ndistillation and surpasses competing distillation approaches, whether they are\none-step or few-step, data-free, or dependent on training data, in terms of\ngeneration quality. This achievement not only redefines the benchmarks for\nefficiency and effectiveness in diffusion distillation but also in the broader\nfield of diffusion-based generation. 
Our PyTorch implementation will be\npublicly accessible on GitHub.", + "authors": "Mingyuan Zhou, Huangjie Zheng, Zhendong Wang, Mingzhang Yin, Hai Huang", + "published": "2024-04-05", + "updated": "2024-04-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2303.05958v1", + "title": "Robust Knowledge Distillation from RNN-T Models With Noisy Training Labels Using Full-Sum Loss", + "abstract": "This work studies knowledge distillation (KD) and addresses its constraints\nfor recurrent neural network transducer (RNN-T) models. In hard distillation, a\nteacher model transcribes large amounts of unlabelled speech to train a student\nmodel. Soft distillation is another popular KD method that distills the output\nlogits of the teacher model. Due to the nature of RNN-T alignments, applying\nsoft distillation between RNN-T architectures having different posterior\ndistributions is challenging. In addition, bad teachers having high\nword-error-rate (WER) reduce the efficacy of KD. We investigate how to\neffectively distill knowledge from variable quality ASR teachers, which has not\nbeen studied before to the best of our knowledge. We show that a sequence-level\nKD, full-sum distillation, outperforms other distillation methods for RNN-T\nmodels, especially for bad teachers. We also propose a variant of full-sum\ndistillation that distills the sequence discriminative knowledge of the teacher\nleading to further improvement in WER. We conduct experiments on public\ndatasets namely SpeechStew and LibriSpeech, and on in-house production data.", + "authors": "Mohammad Zeineldeen, Kartik Audhkhasi, Murali Karthick Baskar, Bhuvana Ramabhadran", + "published": "2023-03-10", + "updated": "2023-03-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.SD", + "eess.AS", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2305.08076v1", + "title": "Improving Defensive Distillation using Teacher Assistant", + "abstract": "Adversarial attacks pose a significant threat to the security and safety of\ndeep neural networks being applied to modern applications. More specifically,\nin computer vision-based tasks, experts can use the knowledge of model\narchitecture to create adversarial samples imperceptible to the human eye.\nThese attacks can lead to security problems in popular applications such as\nself-driving cars, face recognition, etc. Hence, building networks which are\nrobust to such attacks is highly desirable and essential. Among the various\nmethods present in literature, defensive distillation has shown promise in\nrecent years. Using knowledge distillation, researchers have been able to\ncreate models robust against some of those attacks. However, more attacks have\nbeen developed exposing weakness in defensive distillation. In this project, we\nderive inspiration from teacher assistant knowledge distillation and propose\nthat introducing an assistant network can improve the robustness of the\ndistilled model. Through a series of experiments, we evaluate the distilled\nmodels for different distillation temperatures in terms of accuracy,\nsensitivity, and robustness. Our experiments demonstrate that the proposed\nhypothesis can improve robustness in most cases. 
Additionally, we show that\nmulti-step distillation can further improve robustness with very little impact\non model accuracy.", + "authors": "Maniratnam Mandal, Suna Gao", + "published": "2023-05-14", + "updated": "2023-05-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CR", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2205.02399v1", + "title": "Spot-adaptive Knowledge Distillation", + "abstract": "Knowledge distillation (KD) has become a well established paradigm for\ncompressing deep neural networks. The typical way of conducting knowledge\ndistillation is to train the student network under the supervision of the\nteacher network to harness the knowledge at one or multiple spots (i.e.,\nlayers) in the teacher network. The distillation spots, once specified, will\nnot change for all the training samples, throughout the whole distillation\nprocess. In this work, we argue that distillation spots should be adaptive to\ntraining samples and distillation epochs. We thus propose a new distillation\nstrategy, termed spot-adaptive KD (SAKD), to adaptively determine the\ndistillation spots in the teacher network per sample, at every training\niteration during the whole distillation period. As SAKD actually focuses on\n\"where to distill\" instead of \"what to distill\" that is widely investigated by\nmost existing works, it can be seamlessly integrated into existing distillation\nmethods to further improve their performance. Extensive experiments with 10\nstate-of-the-art distillers are conducted to demonstrate the effectiveness of\nSAKD for improving their distillation performance, under both homogeneous and\nheterogeneous distillation settings. Code is available at\nhttps://github.com/zju-vipa/spot-adaptive-pytorch", + "authors": "Jie Song, Ying Chen, Jingwen Ye, Mingli Song", + "published": "2022-05-05", + "updated": "2022-05-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2112.10047v1", + "title": "Controlling the Quality of Distillation in Response-Based Network Compression", + "abstract": "The performance of a distillation-based compressed network is governed by the\nquality of distillation. The reason for the suboptimal distillation of a large\nnetwork (teacher) to a smaller network (student) is largely attributed to the\ngap in the learning capacities of given teacher-student pair. While it is hard\nto distill all the knowledge of a teacher, the quality of distillation can be\ncontrolled to a large extent to achieve better performance. Our experiments\nshow that the quality of distillation is largely governed by the quality of\nteacher's response, which in turn is heavily affected by the presence of\nsimilarity information in its response. A well-trained large capacity teacher\nloses similarity information between classes in the process of learning\nfine-grained discriminative properties for classification. The absence of\nsimilarity information causes the distillation process to be reduced from one\nexample-many class learning to one example-one class learning, thereby\nthrottling the flow of diverse knowledge from the teacher. With the implicit\nassumption that only the instilled knowledge can be distilled, instead of\nfocusing only on the knowledge distilling process, we scrutinize the knowledge\ninculcation process. 
We argue that for a given teacher-student pair, the\nquality of distillation can be improved by finding the sweet spot between batch\nsize and number of epochs while training the teacher. We discuss the steps to\nfind this sweet spot for better distillation. We also propose the distillation\nhypothesis to differentiate the behavior of the distillation process between\nknowledge distillation and regularization effect. We conduct all our\nexperiments on three different datasets.", + "authors": "Vibhas Vats, David Crandall", + "published": "2021-12-19", + "updated": "2021-12-19", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2206.12370v2", + "title": "Mixed Sample Augmentation for Online Distillation", + "abstract": "Mixed Sample Regularization (MSR), such as MixUp or CutMix, is a powerful\ndata augmentation strategy to generalize convolutional neural networks.\nPrevious empirical analysis has illustrated an orthogonal performance gain\nbetween MSR and conventional offline Knowledge Distillation (KD). To be more\nspecific, student networks can be enhanced with the involvement of MSR in the\ntraining stage of sequential distillation. Yet, the interplay between MSR and\nonline knowledge distillation, where an ensemble of peer students learn\nmutually from each other, remains unexplored. To bridge the gap, we make the\nfirst attempt at incorporating CutMix into online distillation, where we\nempirically observe a significant improvement. Encouraged by this fact, we\npropose an even stronger MSR specifically for online distillation, named as\nCut\\textsuperscript{n}Mix. Furthermore, a novel online distillation framework\nis designed upon Cut\\textsuperscript{n}Mix, to enhance the distillation with\nfeature level mutual learning and a self-ensemble teacher. Comprehensive\nevaluations on CIFAR10 and CIFAR100 with six network architectures show that\nour approach can consistently outperform state-of-the-art distillation methods.", + "authors": "Yiqing Shen, Liwu Xu, Yuzhe Yang, Yaqian Li, Yandong Guo", + "published": "2022-06-24", + "updated": "2023-03-02", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2305.05233v1", + "title": "DynamicKD: An Effective Knowledge Distillation via Dynamic Entropy Correction-Based Distillation for Gap Optimizing", + "abstract": "The knowledge distillation uses a high-performance teacher network to guide\nthe student network. However, the performance gap between the teacher and\nstudent networks can affect the student's training. This paper proposes a novel\nknowledge distillation algorithm based on dynamic entropy correction to reduce\nthe gap by adjusting the student instead of the teacher. Firstly, the effect of\nchanging the output entropy (short for output information entropy) in the\nstudent on the distillation loss is analyzed in theory. This paper shows that\ncorrecting the output entropy can reduce the gap. Then, a knowledge\ndistillation algorithm based on dynamic entropy correction is created, which\ncan correct the output entropy in real-time with an entropy controller updated\ndynamically by the distillation loss. The proposed algorithm is validated on\nthe CIFAR100 and ImageNet. The comparison with various state-of-the-art\ndistillation algorithms shows impressive results, especially in the experiment\non the CIFAR100 regarding teacher-student pair resnet32x4-resnet8x4. 
The\nproposed algorithm raises 2.64 points over the traditional distillation\nalgorithm and 0.87 points over the state-of-the-art algorithm CRD in\nclassification accuracy, demonstrating its effectiveness and efficiency.", + "authors": "Songling Zhu, Ronghua Shang, Bo Yuan, Weitong Zhang, Yangyang Li, Licheng Jiao", + "published": "2023-05-09", + "updated": "2023-05-09", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2011.06110v1", + "title": "Efficient Knowledge Distillation for RNN-Transducer Models", + "abstract": "Knowledge Distillation is an effective method of transferring knowledge from\na large model to a smaller model. Distillation can be viewed as a type of model\ncompression, and has played an important role for on-device ASR applications.\nIn this paper, we develop a distillation method for RNN-Transducer (RNN-T)\nmodels, a popular end-to-end neural network architecture for streaming speech\nrecognition. Our proposed distillation loss is simple and efficient, and uses\nonly the \"y\" and \"blank\" posterior probabilities from the RNN-T output\nprobability lattice. We study the effectiveness of the proposed approach in\nimproving the accuracy of sparse RNN-T models obtained by gradually pruning a\nlarger uncompressed model, which also serves as the teacher during\ndistillation. With distillation of 60% and 90% sparse multi-domain RNN-T\nmodels, we obtain WER reductions of 4.3% and 12.1% respectively, on a noisy\nFarField eval set. We also present results of experiments on LibriSpeech, where\nthe introduction of the distillation loss yields a 4.8% relative WER reduction\non the test-other dataset for a small Conformer model.", + "authors": "Sankaran Panchapagesan, Daniel S. Park, Chung-Cheng Chiu, Yuan Shangguan, Qiao Liang, Alexander Gruenstein", + "published": "2020-11-11", + "updated": "2020-11-11", + "primary_cat": "eess.AS", + "cats": [ + "eess.AS", + "cs.SD" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/0803.0345v2", + "title": "Secret key distillation from shielded two-qubit states", + "abstract": "The quantum states corresponding to a secret key are characterized using the\nso-called private states, where the key part consisting of a secret key is\nshielded by the additional systems. Based on the construction, it was shown\nthat a secret key can be distilled from bound entangled states. In this work, I\nconsider the shielded two-qubit states in a key-distillation scenario and\nderive the conditions under which a secret key can be distilled using the\nrecurrence protocol or the two-way classical distillation, advantage\ndistillation together with one-way postprocessing. From the security\nconditions, it is shown that a secret key can be distilled from bound entangled\nstates in a much wider range. 
In addition, I consider the case that in which\nwhite noise is added to quantum states and show that the classical distillation\nprotocol still works despite a certain amount of noise although the recurrence\nprotocol does not.", + "authors": "Joonwoo Bae", + "published": "2008-03-03", + "updated": "2010-09-22", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2203.11932v1", + "title": "Dataset Distillation by Matching Training Trajectories", + "abstract": "Dataset distillation is the task of synthesizing a small dataset such that a\nmodel trained on the synthetic set will match the test accuracy of the model\ntrained on the full dataset. In this paper, we propose a new formulation that\noptimizes our distilled data to guide networks to a similar state as those\ntrained on real data across many training steps. Given a network, we train it\nfor several iterations on our distilled data and optimize the distilled data\nwith respect to the distance between the synthetically trained parameters and\nthe parameters trained on real data. To efficiently obtain the initial and\ntarget network parameters for large-scale datasets, we pre-compute and store\ntraining trajectories of expert networks trained on the real dataset. Our\nmethod handily outperforms existing methods and also allows us to distill\nhigher-resolution visual data.", + "authors": "George Cazenavette, Tongzhou Wang, Antonio Torralba, Alexei A. Efros, Jun-Yan Zhu", + "published": "2022-03-22", + "updated": "2022-03-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2403.10045v1", + "title": "Towards Adversarially Robust Dataset Distillation by Curvature Regularization", + "abstract": "Dataset distillation (DD) allows datasets to be distilled to fractions of\ntheir original size while preserving the rich distributional information so\nthat models trained on the distilled datasets can achieve a comparable accuracy\nwhile saving significant computational loads. Recent research in this area has\nbeen focusing on improving the accuracy of models trained on distilled\ndatasets. In this paper, we aim to explore a new perspective of DD. We study\nhow to embed adversarial robustness in distilled datasets, so that models\ntrained on these datasets maintain the high accuracy and meanwhile acquire\nbetter adversarial robustness. We propose a new method that achieves this goal\nby incorporating curvature regularization into the distillation process with\nmuch less computational overhead than standard adversarial training. Extensive\nempirical experiments suggest that our method not only outperforms standard\nadversarial training on both accuracy and robustness with less computation\noverhead but is also capable of generating robust distilled datasets that can\nwithstand various adversarial attacks.", + "authors": "Eric Xue, Yijiang Li, Haoyang Liu, Yifan Shen, Haohan Wang", + "published": "2024-03-15", + "updated": "2024-03-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2208.10068v1", + "title": "Tree-structured Auxiliary Online Knowledge Distillation", + "abstract": "Traditional knowledge distillation adopts a two-stage training process in\nwhich a teacher model is pre-trained and then transfers the knowledge to a\ncompact student model. 
To overcome the limitation, online knowledge\ndistillation is proposed to perform one-stage distillation when the teacher is\nunavailable. Recent researches on online knowledge distillation mainly focus on\nthe design of the distillation objective, including attention or gate\nmechanism. Instead, in this work, we focus on the design of the global\narchitecture and propose Tree-Structured Auxiliary online knowledge\ndistillation (TSA), which adds more parallel peers for layers close to the\noutput hierarchically to strengthen the effect of knowledge distillation.\nDifferent branches construct different views of the inputs, which can be the\nsource of the knowledge. The hierarchical structure implies that the knowledge\ntransfers from general to task-specific with the growth of the layers.\nExtensive experiments on 3 computer vision and 4 natural language processing\ndatasets show that our method achieves state-of-the-art performance without\nbells and whistles. To the best of our knowledge, we are the first to\ndemonstrate the effectiveness of online knowledge distillation for machine\ntranslation tasks.", + "authors": "Wenye Lin, Yangning Li, Yifeng Ding, Hai-Tao Zheng", + "published": "2022-08-22", + "updated": "2022-08-22", + "primary_cat": "cs.NI", + "cats": [ + "cs.NI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1607.04311v1", + "title": "Defensive Distillation is Not Robust to Adversarial Examples", + "abstract": "We show that defensive distillation is not secure: it is no more resistant to\ntargeted misclassification attacks than unprotected neural networks.", + "authors": "Nicholas Carlini, David Wagner", + "published": "2016-07-14", + "updated": "2016-07-14", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0108029v1", + "title": "Distillability, Bell inequalities and multiparticle bound entanglement", + "abstract": "We study the relation between violation of Bell inequalities and\ndistillability properties of quantum states. Recently, D\\\"ur has shown that\nthere are some multiparticle bound entangled states, non-separable and\nnon-distillable, that violate a Bell inequality. We prove that for all the\nstates violating this inequality there exist at least one splitting of the\nparties into two groups such that some pure-state entanglement can be\ndistilled, obtaining a connection between Bell inequalities and bipartite\ndistillable entanglement.", + "authors": "A. Acin", + "published": "2001-08-07", + "updated": "2001-08-07", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2206.08491v1", + "title": "Revisiting Self-Distillation", + "abstract": "Knowledge distillation is the procedure of transferring \"knowledge\" from a\nlarge model (the teacher) to a more compact one (the student), often being used\nin the context of model compression. When both models have the same\narchitecture, this procedure is called self-distillation. Several works have\nanecdotally shown that a self-distilled student can outperform the teacher on\nheld-out data. In this work, we systematically study self-distillation in a\nnumber of settings. 
We first show that even with a highly accurate teacher,\nself-distillation allows a student to surpass the teacher in all cases.\nSecondly, we revisit existing theoretical explanations of (self) distillation\nand identify contradicting examples, revealing possible drawbacks of these\nexplanations. Finally, we provide an alternative explanation for the dynamics\nof self-distillation through the lens of loss landscape geometry. We conduct\nextensive experiments to show that self-distillation leads to flatter minima,\nthereby resulting in better generalization.", + "authors": "Minh Pham, Minsu Cho, Ameya Joshi, Chinmay Hegde", + "published": "2022-06-17", + "updated": "2022-06-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1812.00249v1", + "title": "On Compressing U-net Using Knowledge Distillation", + "abstract": "We study the use of knowledge distillation to compress the U-net\narchitecture. We show that, while standard distillation is not sufficient to\nreliably train a compressed U-net, introducing other regularization methods,\nsuch as batch normalization and class re-weighting, in knowledge distillation\nsignificantly improves the training process. This allows us to compress a U-net\nby over 1000x, i.e., to 0.1% of its original number of parameters, at a\nnegligible decrease in performance.", + "authors": "Karttikeya Mangalam, Mathieu Salzamann", + "published": "2018-12-01", + "updated": "2018-12-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2403.12040v1", + "title": "Distilling Datasets Into Less Than One Image", + "abstract": "Dataset distillation aims to compress a dataset into a much smaller one so\nthat a model trained on the distilled dataset achieves high accuracy. Current\nmethods frame this as maximizing the distilled classification accuracy for a\nbudget of K distilled images-per-class, where K is a positive integer. In this\npaper, we push the boundaries of dataset distillation, compressing the dataset\ninto less than an image-per-class. It is important to realize that the\nmeaningful quantity is not the number of distilled images-per-class but the\nnumber of distilled pixels-per-dataset. We therefore, propose Poster Dataset\nDistillation (PoDD), a new approach that distills the entire original dataset\ninto a single poster. The poster approach motivates new technical solutions for\ncreating training images and learnable labels. Our method can achieve\ncomparable or better performance with less than an image-per-class compared to\nexisting methods that use one image-per-class. Specifically, our method\nestablishes a new state-of-the-art performance on CIFAR-10, CIFAR-100, and\nCUB200 using as little as 0.3 images-per-class.", + "authors": "Asaf Shul, Eliahu Horwitz, Yedid Hoshen", + "published": "2024-03-18", + "updated": "2024-03-18", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2302.11472v1", + "title": "Distilling Calibrated Student from an Uncalibrated Teacher", + "abstract": "Knowledge distillation is a common technique for improving the performance of\na shallow student network by transferring information from a teacher network,\nwhich in general, is comparatively large and deep. These teacher networks are\npre-trained and often uncalibrated, as no calibration technique is applied to\nthe teacher model while training. 
Calibration of a network measures the\nprobability of correctness for any of its predictions, which is critical in\nhigh-risk domains. In this paper, we study how to obtain a calibrated student\nfrom an uncalibrated teacher. Our approach relies on the fusion of the\ndata-augmentation techniques, including but not limited to cutout, mixup, and\nCutMix, with knowledge distillation. We extend our approach beyond traditional\nknowledge distillation and find it suitable for Relational Knowledge\nDistillation and Contrastive Representation Distillation as well. The novelty\nof the work is that it provides a framework to distill a calibrated student\nfrom an uncalibrated teacher model without compromising the accuracy of the\ndistilled student. We perform extensive experiments to validate our approach on\nvarious datasets, including CIFAR-10, CIFAR-100, CINIC-10 and TinyImageNet, and\nobtained calibrated student models. We also observe robust performance of our\napproach while evaluating it on corrupted CIFAR-100C data.", + "authors": "Ishan Mishra, Sethu Vamsi Krishna, Deepak Mishra", + "published": "2023-02-22", + "updated": "2023-02-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2302.14643v1", + "title": "Graph-based Knowledge Distillation: A survey and experimental evaluation", + "abstract": "Graph, such as citation networks, social networks, and transportation\nnetworks, are prevalent in the real world. Graph Neural Networks (GNNs) have\ngained widespread attention for their robust expressiveness and exceptional\nperformance in various graph applications. However, the efficacy of GNNs is\nheavily reliant on sufficient data labels and complex network models, with the\nformer obtaining hardly and the latter computing costly. To address the labeled\ndata scarcity and high complexity of GNNs, Knowledge Distillation (KD) has been\nintroduced to enhance existing GNNs. This technique involves transferring the\nsoft-label supervision of the large teacher model to the small student model\nwhile maintaining prediction performance. This survey offers a comprehensive\noverview of Graph-based Knowledge Distillation methods, systematically\ncategorizing and summarizing them while discussing their limitations and future\ndirections. This paper first introduces the background of graph and KD. It then\nprovides a comprehensive summary of three types of Graph-based Knowledge\nDistillation methods, namely Graph-based Knowledge Distillation for deep neural\nnetworks (DKD), Graph-based Knowledge Distillation for GNNs (GKD), and\nSelf-Knowledge Distillation based Graph-based Knowledge Distillation (SKD).\nEach type is further divided into knowledge distillation methods based on the\noutput layer, middle layer, and constructed graph. Subsequently, various\nalgorithms' ideas are analyzed and compared, concluding with the advantages and\ndisadvantages of each algorithm supported by experimental results. In addition,\nthe applications of graph-based knowledge distillation in CV, NLP, RS, and\nother fields are listed. Finally, the graph-based knowledge distillation is\nsummarized and prospectively discussed. 
We have also released related resources\nat https://github.com/liujing1023/Graph-based-Knowledge-Distillation.", + "authors": "Jing Liu, Tongya Zheng, Guanzheng Zhang, Qinfen Hao", + "published": "2023-02-27", + "updated": "2023-02-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2103.16367v1", + "title": "Complementary Relation Contrastive Distillation", + "abstract": "Knowledge distillation aims to transfer representation ability from a teacher\nmodel to a student model. Previous approaches focus on either individual\nrepresentation distillation or inter-sample similarity preservation. While we\nargue that the inter-sample relation conveys abundant information and needs to\nbe distilled in a more effective way. In this paper, we propose a novel\nknowledge distillation method, namely Complementary Relation Contrastive\nDistillation (CRCD), to transfer the structural knowledge from the teacher to\nthe student. Specifically, we estimate the mutual relation in an anchor-based\nway and distill the anchor-student relation under the supervision of its\ncorresponding anchor-teacher relation. To make it more robust, mutual relations\nare modeled by two complementary elements: the feature and its gradient.\nFurthermore, the low bound of mutual information between the anchor-teacher\nrelation distribution and the anchor-student relation distribution is maximized\nvia relation contrastive loss, which can distill both the sample representation\nand the inter-sample relations. Experiments on different benchmarks demonstrate\nthe effectiveness of our proposed CRCD.", + "authors": "Jinguo Zhu, Shixiang Tang, Dapeng Chen, Shijie Yu, Yakun Liu, Aijun Yang, Mingzhe Rong, Xiaohua Wang", + "published": "2021-03-29", + "updated": "2021-03-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2311.13811v2", + "title": "Education distillation:getting student models to learn in shcools", + "abstract": "Knowledge distillation is one of the methods for model compression, and\nexisting knowledge distillation techniques focus on how to improve the\ndistillation algorithm so as to enhance the distillation efficiency. This paper\nintroduces dynamic incremental learning into knowledge distillation and\nproposes a distillation strategy for education distillation. Specifically, it\nis proposed to take fragmented student models divided from the complete student\nmodel as lower-grade models. As the grade level rises, fragmented student\nmodels deepen in conjunction with designed teaching reference layers, while\nlearning and distilling from more teacher models. By moving from lower to\nhigher grades, fragmented student models were gradually integrated into a\ncomplete target student model, and the performance of the student models\ngradually improved from lower to higher grades of the stage. 
Education\ndistillation strategies combined with distillation algorithms outperform the\nresults of single distillation algorithms on the public dataset\nCIFAR100,Caltech256, Food-101 dataset.", + "authors": "Ling Feng, Danyang Li, Tianhao Wu, Xuliang Duan", + "published": "2023-11-23", + "updated": "2023-11-27", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2308.14286v2", + "title": "Bridging Cross-task Protocol Inconsistency for Distillation in Dense Object Detection", + "abstract": "Knowledge distillation (KD) has shown potential for learning compact models\nin dense object detection. However, the commonly used softmax-based\ndistillation ignores the absolute classification scores for individual\ncategories. Thus, the optimum of the distillation loss does not necessarily\nlead to the optimal student classification scores for dense object detectors.\nThis cross-task protocol inconsistency is critical, especially for dense object\ndetectors, since the foreground categories are extremely imbalanced. To address\nthe issue of protocol differences between distillation and classification, we\npropose a novel distillation method with cross-task consistent protocols,\ntailored for the dense object detection. For classification distillation, we\naddress the cross-task protocol inconsistency problem by formulating the\nclassification logit maps in both teacher and student models as multiple\nbinary-classification maps and applying a binary-classification distillation\nloss to each map. For localization distillation, we design an IoU-based\nLocalization Distillation Loss that is free from specific network structures\nand can be compared with existing localization distillation losses. Our\nproposed method is simple but effective, and experimental results demonstrate\nits superiority over existing methods. Code is available at\nhttps://github.com/TinyTigerPan/BCKD.", + "authors": "Longrong Yang, Xianpan Zhou, Xuewei Li, Liang Qiao, Zheyang Li, Ziwei Yang, Gaoang Wang, Xi Li", + "published": "2023-08-28", + "updated": "2024-03-12", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2108.12905v1", + "title": "Lipschitz Continuity Guided Knowledge Distillation", + "abstract": "Knowledge distillation has become one of the most important model compression\ntechniques by distilling knowledge from larger teacher networks to smaller\nstudent ones. Although great success has been achieved by prior distillation\nmethods via delicately designing various types of knowledge, they overlook the\nfunctional properties of neural networks, which makes the process of applying\nthose techniques to new tasks unreliable and non-trivial. To alleviate such\nproblem, in this paper, we initially leverage Lipschitz continuity to better\nrepresent the functional characteristic of neural networks and guide the\nknowledge distillation process. In particular, we propose a novel Lipschitz\nContinuity Guided Knowledge Distillation framework to faithfully distill\nknowledge by minimizing the distance between two neural networks' Lipschitz\nconstants, which enables teacher networks to better regularize student networks\nand improve the corresponding performance. We derive an explainable\napproximation algorithm with an explicit theoretical derivation to address the\nNP-hard problem of calculating the Lipschitz constant. 
Experimental results\nhave shown that our method outperforms other benchmarks over several knowledge\ndistillation tasks (e.g., classification, segmentation and object detection) on\nCIFAR-100, ImageNet, and PASCAL VOC datasets.", + "authors": "Yuzhang Shang, Bin Duan, Ziliang Zong, Liqiang Nie, Yan Yan", + "published": "2021-08-29", + "updated": "2021-08-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1108.0537v2", + "title": "Isotropic non-locality cannot be distilled", + "abstract": "We investigate non-locality distillation protocols for isotropic\ncorrelations. These correlations are the hardest instances which respect to\ndistillability and only partial results are known about their behaviour under\nnon-locality distillation protocols. We completely resolve this issue by\nproving that non-locality distillation is impossible for all non-local\nisotropic correlations.", + "authors": "Dejan D. Dukaric", + "published": "2011-08-02", + "updated": "2011-09-20", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1910.02551v3", + "title": "Soft-Label Dataset Distillation and Text Dataset Distillation", + "abstract": "Dataset distillation is a method for reducing dataset sizes by learning a\nsmall number of synthetic samples containing all the information of a large\ndataset. This has several benefits like speeding up model training, reducing\nenergy consumption, and reducing required storage space. Currently, each\nsynthetic sample is assigned a single `hard' label, and also, dataset\ndistillation can currently only be used with image data.\n We propose to simultaneously distill both images and their labels, thus\nassigning each synthetic sample a `soft' label (a distribution of labels). Our\nalgorithm increases accuracy by 2-4% over the original algorithm for several\nimage classification tasks. Using `soft' labels also enables distilled datasets\nto consist of fewer samples than there are classes as each sample can encode\ninformation for multiple classes. For example, training a LeNet model with 10\ndistilled images (one per class) results in over 96% accuracy on MNIST, and\nalmost 92% accuracy when trained on just 5 distilled images.\n We also extend the dataset distillation algorithm to distill sequential\ndatasets including texts. We demonstrate that text distillation outperforms\nother methods across multiple datasets. For example, models attain almost their\noriginal accuracy on the IMDB sentiment analysis task using just 20 distilled\nsentences.\n Our code can be found at\n$\\href{https://github.com/ilia10000/dataset-distillation}{\\text{https://github.com/ilia10000/dataset-distillation}}$.", + "authors": "Ilia Sucholutsky, Matthias Schonlau", + "published": "2019-10-06", + "updated": "2020-05-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.14827v1", + "title": "Sentence-Level or Token-Level? A Comprehensive Study on Knowledge Distillation", + "abstract": "Knowledge distillation, transferring knowledge from a teacher model to a\nstudent model, has emerged as a powerful technique in neural machine\ntranslation for compressing models or simplifying training targets. Knowledge\ndistillation encompasses two primary methods: sentence-level distillation and\ntoken-level distillation. 
In sentence-level distillation, the student model is\ntrained to align with the output of the teacher model, which can alleviate the\ntraining difficulty and give student model a comprehensive understanding of\nglobal structure. Differently, token-level distillation requires the student\nmodel to learn the output distribution of the teacher model, facilitating a\nmore fine-grained transfer of knowledge. Studies have revealed divergent\nperformances between sentence-level and token-level distillation across\ndifferent scenarios, leading to the confusion on the empirical selection of\nknowledge distillation methods. In this study, we argue that token-level\ndistillation, with its more complex objective (i.e., distribution), is better\nsuited for ``simple'' scenarios, while sentence-level distillation excels in\n``complex'' scenarios. To substantiate our hypothesis, we systematically\nanalyze the performance of distillation methods by varying the model size of\nstudent models, the complexity of text, and the difficulty of decoding\nprocedure. While our experimental results validate our hypothesis, defining the\ncomplexity level of a given scenario remains a challenging task. So we further\nintroduce a novel hybrid method that combines token-level and sentence-level\ndistillation through a gating mechanism, aiming to leverage the advantages of\nboth individual methods. Experiments demonstrate that the hybrid method\nsurpasses the performance of token-level or sentence-level distillation methods\nand the previous works by a margin, demonstrating the effectiveness of the\nproposed hybrid method.", + "authors": "Jingxuan Wei, Linzhuang Sun, Yichong Leng, Xu Tan, Bihui Yu, Ruifeng Guo", + "published": "2024-04-23", + "updated": "2024-04-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2104.11928v1", + "title": "Extract then Distill: Efficient and Effective Task-Agnostic BERT Distillation", + "abstract": "Task-agnostic knowledge distillation, a teacher-student framework, has been\nproved effective for BERT compression. Although achieving promising results on\nNLP tasks, it requires enormous computational resources. In this paper, we\npropose Extract Then Distill (ETD), a generic and flexible strategy to reuse\nthe teacher's parameters for efficient and effective task-agnostic\ndistillation, which can be applied to students of any size. Specifically, we\nintroduce two variants of ETD, ETD-Rand and ETD-Impt, which extract the\nteacher's parameters in a random manner and by following an importance metric\nrespectively. In this way, the student has already acquired some knowledge at\nthe beginning of the distillation process, which makes the distillation process\nconverge faster. We demonstrate the effectiveness of ETD on the GLUE benchmark\nand SQuAD. The experimental results show that: (1) compared with the baseline\nwithout an ETD strategy, ETD can save 70\\% of computation cost. Moreover, it\nachieves better results than the baseline when using the same computing\nresource. (2) ETD is generic and has been proven effective for different\ndistillation methods (e.g., TinyBERT and MiniLM) and students of different\nsizes. 
The source code will be publicly available upon publication.", + "authors": "Cheng Chen, Yichun Yin, Lifeng Shang, Zhi Wang, Xin Jiang, Xiao Chen, Qun Liu", + "published": "2021-04-24", + "updated": "2021-04-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2208.08840v1", + "title": "Mind the Gap in Distilling StyleGANs", + "abstract": "StyleGAN family is one of the most popular Generative Adversarial Networks\n(GANs) for unconditional generation. Despite its impressive performance, its\nhigh demand on storage and computation impedes their deployment on\nresource-constrained devices. This paper provides a comprehensive study of\ndistilling from the popular StyleGAN-like architecture. Our key insight is that\nthe main challenge of StyleGAN distillation lies in the output discrepancy\nissue, where the teacher and student model yield different outputs given the\nsame input latent code. Standard knowledge distillation losses typically fail\nunder this heterogeneous distillation scenario. We conduct thorough analysis\nabout the reasons and effects of this discrepancy issue, and identify that the\nmapping network plays a vital role in determining semantic information of\ngenerated images. Based on this finding, we propose a novel initialization\nstrategy for the student model, which can ensure the output consistency to the\nmaximum extent. To further enhance the semantic consistency between the teacher\nand student model, we present a latent-direction-based distillation loss that\npreserves the semantic relations in latent space. Extensive experiments\ndemonstrate the effectiveness of our approach in distilling StyleGAN2 and\nStyleGAN3, outperforming existing GAN distillation methods by a large margin.", + "authors": "Guodong Xu, Yuenan Hou, Ziwei Liu, Chen Change Loy", + "published": "2022-08-18", + "updated": "2022-08-18", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0303009v2", + "title": "Security bounds in Quantum Cryptography using d-level systems", + "abstract": "We analyze the security of quantum cryptography schemes for $d$-level systems\nusing 2 or $d+1$ maximally conjugated bases, under individual eavesdropping\nattacks based on cloning machines and measurement after the basis\nreconciliation. We consider classical advantage distillation protocols, that\nallow to extract a key even in situations where the mutual information between\nthe honest parties is smaller than the eavesdropper's information. In this\nscenario, advantage distillation protocols are shown to be as powerful as\nquantum distillation: key distillation is possible using classical techniques\nif and only if the corresponding state in the entanglement based protocol is\ndistillable.", + "authors": "Antonio Acin, Nicolas Gisin, Valerio Scarani", + "published": "2003-03-03", + "updated": "2003-11-03", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2011.09969v1", + "title": "Neural network algorithm and its application in reactive distillation", + "abstract": "Reactive distillation is a special distillation technology based on the\ncoupling of chemical reaction and distillation. It has the characteristics of\nlow energy consumption and high separation efficiency. 
However, because the\ncombination of reaction and separation produces highly nonlinear robust\nbehavior, the control and optimization of the reactive distillation process\ncannot use conventional methods, but must rely on neural network algorithms.\nThis paper briefly describes the characteristics and research progress of\nreactive distillation technology and neural network algorithms, and summarizes\nthe application of neural network algorithms in reactive distillation, aiming\nto provide reference for the development and innovation of industry technology.", + "authors": "Huihui Wang, Ruyang Mo", + "published": "2020-11-16", + "updated": "2020-11-16", + "primary_cat": "cs.NE", + "cats": [ + "cs.NE", + "cs.LG", + "I.2.8" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2401.06370v1", + "title": "Graph Relation Distillation for Efficient Biomedical Instance Segmentation", + "abstract": "Instance-aware embeddings predicted by deep neural networks have\nrevolutionized biomedical instance segmentation, but its resource requirements\nare substantial. Knowledge distillation offers a solution by transferring\ndistilled knowledge from heavy teacher networks to lightweight yet\nhigh-performance student networks. However, existing knowledge distillation\nmethods struggle to extract knowledge for distinguishing instances and overlook\nglobal relation information. To address these challenges, we propose a graph\nrelation distillation approach for efficient biomedical instance segmentation,\nwhich considers three essential types of knowledge: instance-level features,\ninstance relations, and pixel-level boundaries. We introduce two graph\ndistillation schemes deployed at both the intra-image level and the inter-image\nlevel: instance graph distillation (IGD) and affinity graph distillation (AGD).\nIGD constructs a graph representing instance features and relations,\ntransferring these two types of knowledge by enforcing instance graph\nconsistency. AGD constructs an affinity graph representing pixel relations to\ncapture structured knowledge of instance boundaries, transferring\nboundary-related knowledge by ensuring pixel affinity consistency. Experimental\nresults on a number of biomedical datasets validate the effectiveness of our\napproach, enabling student models with less than $ 1\\%$ parameters and less\nthan $10\\%$ inference time while achieving promising performance compared to\nteacher models.", + "authors": "Xiaoyu Liu, Yueyi Zhang, Zhiwei Xiong, Wei Huang, Bo Hu, Xiaoyan Sun, Feng Wu", + "published": "2024-01-12", + "updated": "2024-01-12", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2304.06461v2", + "title": "Multi-Mode Online Knowledge Distillation for Self-Supervised Visual Representation Learning", + "abstract": "Self-supervised learning (SSL) has made remarkable progress in visual\nrepresentation learning. Some studies combine SSL with knowledge distillation\n(SSL-KD) to boost the representation learning performance of small models. In\nthis study, we propose a Multi-mode Online Knowledge Distillation method (MOKD)\nto boost self-supervised visual representation learning. Different from\nexisting SSL-KD methods that transfer knowledge from a static pre-trained\nteacher to a student, in MOKD, two different models learn collaboratively in a\nself-supervised manner. Specifically, MOKD consists of two distillation modes:\nself-distillation and cross-distillation modes. 
Among them, self-distillation\nperforms self-supervised learning for each model independently, while\ncross-distillation realizes knowledge interaction between different models. In\ncross-distillation, a cross-attention feature search strategy is proposed to\nenhance the semantic feature alignment between different models. As a result,\nthe two models can absorb knowledge from each other to boost their\nrepresentation learning performance. Extensive experimental results on\ndifferent backbones and datasets demonstrate that two heterogeneous models can\nbenefit from MOKD and outperform their independently trained baseline. In\naddition, MOKD also outperforms existing SSL-KD methods for both the student\nand teacher models.", + "authors": "Kaiyou Song, Jin Xie, Shan Zhang, Zimeng Luo", + "published": "2023-04-13", + "updated": "2023-06-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/9908047v2", + "title": "On bound entanglement assisted distillation", + "abstract": "We investigate asymptotic distillation of entanglement in the presence of an\nunlimited amount of bound entanglement for bi-partite systems. We show that the\ndistillability is still bounded by the relative entropy of entanglement. This\noffers a strong support to the fact that bound entanglement does not improve\ndistillation of entanglement.", + "authors": "V. Vedral", + "published": "1999-08-14", + "updated": "1999-11-17", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2004.03097v1", + "title": "Towards Non-task-specific Distillation of BERT via Sentence Representation Approximation", + "abstract": "Recently, BERT has become an essential ingredient of various NLP deep models\ndue to its effectiveness and universal-usability. However, the online\ndeployment of BERT is often blocked by its large-scale parameters and high\ncomputational cost. There are plenty of studies showing that the knowledge\ndistillation is efficient in transferring the knowledge from BERT into the\nmodel with a smaller size of parameters. Nevertheless, current BERT\ndistillation approaches mainly focus on task-specified distillation, such\nmethodologies lead to the loss of the general semantic knowledge of BERT for\nuniversal-usability. In this paper, we propose a sentence representation\napproximating oriented distillation framework that can distill the pre-trained\nBERT into a simple LSTM based model without specifying tasks. Consistent with\nBERT, our distilled model is able to perform transfer learning via fine-tuning\nto adapt to any sentence-level downstream task. Besides, our model can further\ncooperate with task-specific distillation procedures. 
The experimental results\non multiple NLP tasks from the GLUE benchmark show that our approach\noutperforms other task-specific distillation methods or even much larger\nmodels, i.e., ELMO, with efficiency well-improved.", + "authors": "Bowen Wu, Huan Zhang, Mengyuan Li, Zongsheng Wang, Qihang Feng, Junhong Huang, Baoxun Wang", + "published": "2020-04-07", + "updated": "2020-04-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0012022v1", + "title": "Distilling a Greenberger-Horne-Zeilinger State From an Arbitrary Pure State of Three Qubits", + "abstract": "We present a general algorithm to achieve local operators which can produce\nthe GHZ state for an arbitrary given three-qubit state. Thus the distillation\nprocess of the state can be realized optimally. The algorithm is shown to be\nsufficient for the three-qubit state on account of the fact that any state for\nwhich this distillation algorithm is invalid cannot be distilled to the GHZ\nstate by any local actions. Moreover, an analytical result of distillation\noperations is achieved for the general state of three qubits.", + "authors": "Li-Xiang Cen, Shun-Jin Wang", + "published": "2000-12-05", + "updated": "2000-12-05", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1905.09747v2", + "title": "Adversarially Robust Distillation", + "abstract": "Knowledge distillation is effective for producing small, high-performance\nneural networks for classification, but these small networks are vulnerable to\nadversarial attacks. This paper studies how adversarial robustness transfers\nfrom teacher to student during knowledge distillation. We find that a large\namount of robustness may be inherited by the student even when distilled on\nonly clean images. Second, we introduce Adversarially Robust Distillation (ARD)\nfor distilling robustness onto student networks. In addition to producing small\nmodels with high test accuracy like conventional distillation, ARD also passes\nthe superior robustness of large networks onto the student. In our experiments,\nwe find that ARD student models decisively outperform adversarially trained\nnetworks of identical architecture in terms of robust accuracy, surpassing\nstate-of-the-art methods on standard robustness benchmarks. Finally, we adapt\nrecent fast adversarial training methods to ARD for accelerated robust\ndistillation.", + "authors": "Micah Goldblum, Liam Fowl, Soheil Feizi, Tom Goldstein", + "published": "2019-05-23", + "updated": "2019-12-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2106.12591v1", + "title": "Magic State Distillation from Entangled States", + "abstract": "Magic can be distributed non-locally in many-body entangled states, such as\nthe low energy states of condensed matter systems. Using the Bravyi-Kitaev\nmagic state distillation protocol, we find that non-local magic is distillable\nand can improve the distillation outcome. We analyze a few explicit examples\nand show that spin squeezing can be used to convert non-distillable states into\ndistillable ones.\n Our analysis also suggests that the conventional product input states assumed\nby magic distillation protocols are extremely atypical among general states\nwith distillable magic. 
It further justifies the need for studying a diverse\nrange of entangled inputs that yield magic states with high probability.", + "authors": "Ning Bao, ChunJun Cao, Vincent Paul Su", + "published": "2021-06-23", + "updated": "2021-06-23", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0305188v1", + "title": "Dynamics of Distillability", + "abstract": "The time evolution of a maximally entangled bipartite systems is presented in\nthis paper. The distillability criterion is given in terms of Kraus operators.\nUsing the criterion, we discuss the distillability of $2\\times 2$ and $n\\times\nn (n>2)$ systems in their evolution process. There are two distinguished\nprocesses, dissipation and decoherence, which may destroy the distillability.\nWe discuss the effects of those processes on distillability in details.", + "authors": "W. Wu, W. Wang, X. X. Yi", + "published": "2003-05-30", + "updated": "2003-05-30", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0001084v2", + "title": "Distillation of GHZ states by selective information manipulation", + "abstract": "Methods for distilling maximally entangled tripartite (GHZ) states from\narbitrary entangled tripartite pure states are described. These techniques work\nfor virtually any input state. Each technique has two stages which we call\nprimary and secondary distillation. Primary distillation produces a GHZ state\nwith some probability, so that when applied to an ensemble of systems, a\ncertain percentage is discarded. Secondary distillation produces further GHZs\nfrom the discarded systems. These protocols are developed with the help of an\napproach to quantum information theory based on absolutely selective\ninformation, which has other potential applications.", + "authors": "Oliver Cohen, Todd A. Brun", + "published": "2000-01-23", + "updated": "2000-02-02", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/9809078v2", + "title": "A rigorous treatment of distillable entanglement", + "abstract": "The notion of distillable entanglement is one of the fundamental concepts of\nquantum information theory. Unfortunately, there is an apparent mismatch\nbetween the intuitive and rigorous definitions of distillable entanglement. To\nbe precise, the existing rigorous definitions impose the constraint that the\ndistilation protocol produce an output of constant dimension. It is therefore\nconceivable that this unnecessary constraint might have led to underestimation\nof the true distillable entanglement. We give a new definition of distillable\nentanglement which removes this constraint, but could conceivably overestimate\nthe true value. Since the definitions turn out to be equivalent, neither\nunderestimation nor overestimation is possible, and both definitions are\narguably correct", + "authors": "Eric M. 
Rains", + "published": "1998-09-24", + "updated": "1998-10-12", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1912.12630v1", + "title": "Real-time Policy Distillation in Deep Reinforcement Learning", + "abstract": "Policy distillation in deep reinforcement learning provides an effective way\nto transfer control policies from a larger network to a smaller untrained\nnetwork without a significant degradation in performance. However, policy\ndistillation is underexplored in deep reinforcement learning, and existing\napproaches are computationally inefficient, resulting in a long distillation\ntime. In addition, the effectiveness of the distillation process is still\nlimited to the model capacity. We propose a new distillation mechanism, called\nreal-time policy distillation, in which training the teacher model and\ndistilling the policy to the student model occur simultaneously. Accordingly,\nthe teacher's latest policy is transferred to the student model in real time.\nThis reduces the distillation time to half the original time or even less and\nalso makes it possible for extremely small student models to learn skills at\nthe expert level. We evaluated the proposed algorithm in the Atari 2600 domain.\nThe results show that our approach can achieve full distillation in most games,\neven with compression ratios up to 1.7%.", + "authors": "Yuxiang Sun, Pooyan Fazli", + "published": "2019-12-29", + "updated": "2019-12-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2304.14800v1", + "title": "Multi-to-Single Knowledge Distillation for Point Cloud Semantic Segmentation", + "abstract": "3D point cloud semantic segmentation is one of the fundamental tasks for\nenvironmental understanding. Although significant progress has been made in\nrecent years, the performance of classes with few examples or few points is\nstill far from satisfactory. In this paper, we propose a novel multi-to-single\nknowledge distillation framework for the 3D point cloud semantic segmentation\ntask to boost the performance of those hard classes. Instead of fusing all the\npoints of multi-scans directly, only the instances that belong to the\npreviously defined hard classes are fused. To effectively and sufficiently\ndistill valuable knowledge from multi-scans, we leverage a multilevel\ndistillation framework, i.e., feature representation distillation, logit\ndistillation, and affinity distillation. We further develop a novel\ninstance-aware affinity distillation algorithm for capturing high-level\nstructural knowledge to enhance the distillation efficacy for hard classes.\nFinally, we conduct experiments on the SemanticKITTI dataset, and the results\non both the validation and test sets demonstrate that our method yields\nsubstantial improvements compared with the baseline method. The code is\navailable at \\Url{https://github.com/skyshoumeng/M2SKD}.", + "authors": "Shoumeng Qiu, Feng Jiang, Haiqiang Zhang, Xiangyang Xue, Jian Pu", + "published": "2023-04-28", + "updated": "2023-04-28", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/0908.2142v1", + "title": "Distillation of Bell states in open systems", + "abstract": "In this work we review the entire classification of 2x2 distillable states\nfor protocols with a finite numbers of copies. 
We show a distillation protocol\nthat allows to distill Bell states with non zero probability at any time for an\ninitial singlet in vacuum. It is shown that the same protocol used in non zero\nthermal baths yields a considerable recovering of entanglement.", + "authors": "E. Isasi, D. Mundarain", + "published": "2009-08-14", + "updated": "2009-08-14", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2301.01615v2", + "title": "StereoDistill: Pick the Cream from LiDAR for Distilling Stereo-based 3D Object Detection", + "abstract": "In this paper, we propose a cross-modal distillation method named\nStereoDistill to narrow the gap between the stereo and LiDAR-based approaches\nvia distilling the stereo detectors from the superior LiDAR model at the\nresponse level, which is usually overlooked in 3D object detection\ndistillation. The key designs of StereoDistill are: the X-component Guided\nDistillation~(XGD) for regression and the Cross-anchor Logit Distillation~(CLD)\nfor classification. In XGD, instead of empirically adopting a threshold to\nselect the high-quality teacher predictions as soft targets, we decompose the\npredicted 3D box into sub-components and retain the corresponding part for\ndistillation if the teacher component pilot is consistent with ground truth to\nlargely boost the number of positive predictions and alleviate the mimicking\ndifficulty of the student model. For CLD, we aggregate the probability\ndistribution of all anchors at the same position to encourage the highest\nprobability anchor rather than individually distill the distribution at the\nanchor level. Finally, our StereoDistill achieves state-of-the-art results for\nstereo-based 3D detection on the KITTI test benchmark and extensive experiments\non KITTI and Argoverse Dataset validate the effectiveness.", + "authors": "Zhe Liu, Xiaoqing Ye, Xiao Tan, Errui Ding, Xiang Bai", + "published": "2023-01-04", + "updated": "2023-01-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1903.04197v7", + "title": "Structured Knowledge Distillation for Dense Prediction", + "abstract": "In this work, we consider transferring the structure information from large\nnetworks to compact ones for dense prediction tasks in computer vision.\nPrevious knowledge distillation strategies used for dense prediction tasks\noften directly borrow the distillation scheme for image classification and\nperform knowledge distillation for each pixel separately, leading to\nsub-optimal performance. Here we propose to distill structured knowledge from\nlarge networks to compact networks, taking into account the fact that dense\nprediction is a structured prediction problem. Specifically, we study two\nstructured distillation schemes: i) pair-wise distillation that distills the\npair-wise similarities by building a static graph; and ii) holistic\ndistillation that uses adversarial training to distill holistic knowledge. The\neffectiveness of our knowledge distillation approaches is demonstrated by\nexperiments on three dense prediction tasks: semantic segmentation, depth\nestimation and object detection. 
Code is available at: https://git.io/StructKD", + "authors": "Yifan Liu, Changyong Shun, Jingdong Wang, Chunhua Shen", + "published": "2019-03-11", + "updated": "2020-06-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2305.12330v1", + "title": "Task-agnostic Distillation of Encoder-Decoder Language Models", + "abstract": "Finetuning pretrained language models (LMs) have enabled appealing\nperformance on a diverse array of tasks. The intriguing task-agnostic property\nhas driven a shifted focus from task-specific to task-agnostic distillation of\nLMs. While task-agnostic, compute-efficient, performance-preserved LMs can be\nyielded by task-agnostic distillation, previous studies mainly sit in\ndistillation of either encoder-only LMs (e.g., BERT) or decoder-only ones\n(e.g., GPT) yet largely neglect that distillation of encoder-decoder LMs (e.g.,\nT5) can posit very distinguished behaviors. Frustratingly, we discover that\nexisting task-agnostic distillation methods can fail to handle the distillation\nof encoder-decoder LMs. To the demand, we explore a few paths and uncover a\npath named as MiniEnD that successfully tackles the distillation of\nencoder-decoder LMs in a task-agnostic fashion. We examine MiniEnD on language\nunderstanding and abstractive summarization. The results showcase that MiniEnD\nis generally effective and is competitive compared to other alternatives. We\nfurther scale MiniEnD up to distillation of 3B encoder-decoder language models\nwith interpolated distillation. The results imply the opportunities and\nchallenges in distilling large language models (e.g., LLaMA).", + "authors": "Chen Zhang, Yang Yang, Jingang Wang, Dawei Song", + "published": "2023-05-21", + "updated": "2023-05-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2401.15863v1", + "title": "Importance-Aware Adaptive Dataset Distillation", + "abstract": "Herein, we propose a novel dataset distillation method for constructing small\ninformative datasets that preserve the information of the large original\ndatasets. The development of deep learning models is enabled by the\navailability of large-scale datasets. Despite unprecedented success,\nlarge-scale datasets considerably increase the storage and transmission costs,\nresulting in a cumbersome model training process. Moreover, using raw data for\ntraining raises privacy and copyright concerns. To address these issues, a new\ntask named dataset distillation has been introduced, aiming to synthesize a\ncompact dataset that retains the essential information from the large original\ndataset. State-of-the-art (SOTA) dataset distillation methods have been\nproposed by matching gradients or network parameters obtained during training\non real and synthetic datasets. The contribution of different network\nparameters to the distillation process varies, and uniformly treating them\nleads to degraded distillation performance. Based on this observation, we\npropose an importance-aware adaptive dataset distillation (IADD) method that\ncan improve distillation performance by automatically assigning importance\nweights to different network parameters during distillation, thereby\nsynthesizing more robust distilled datasets. 
IADD demonstrates superior\nperformance over other SOTA dataset distillation methods based on parameter\nmatching on multiple benchmark datasets and outperforms them in terms of\ncross-architecture generalization. In addition, the analysis of self-adaptive\nweights demonstrates the effectiveness of IADD. Furthermore, the effectiveness\nof IADD is validated in a real-world medical application such as COVID-19\ndetection.", + "authors": "Guang Li, Ren Togo, Takahiro Ogawa, Miki Haseyama", + "published": "2024-01-29", + "updated": "2024-01-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2109.15014v1", + "title": "Deep Neural Compression Via Concurrent Pruning and Self-Distillation", + "abstract": "Pruning aims to reduce the number of parameters while maintaining performance\nclose to the original network. This work proposes a novel\n\\emph{self-distillation} based pruning strategy, whereby the representational\nsimilarity between the pruned and unpruned versions of the same network is\nmaximized. Unlike previous approaches that treat distillation and pruning\nseparately, we use distillation to inform the pruning criteria, without\nrequiring a separate student network as in knowledge distillation. We show that\nthe proposed {\\em cross-correlation objective for self-distilled pruning}\nimplicitly encourages sparse solutions, naturally complementing magnitude-based\npruning criteria. Experiments on the GLUE and XGLUE benchmarks show that\nself-distilled pruning increases mono- and cross-lingual language model\nperformance. Self-distilled pruned models also outperform smaller Transformers\nwith an equal number of parameters and are competitive against (6 times) larger\ndistilled networks. We also observe that self-distillation (1) maximizes class\nseparability, (2) increases the signal-to-noise ratio, and (3) converges faster\nafter pruning steps, providing further insights into why self-distilled pruning\nimproves generalization.", + "authors": "James O' Neill, Sourav Dutta, Haytham Assem", + "published": "2021-09-30", + "updated": "2021-09-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2006.08572v3", + "title": "Flexible Dataset Distillation: Learn Labels Instead of Images", + "abstract": "We study the problem of dataset distillation - creating a small set of\nsynthetic examples capable of training a good model. In particular, we study\nthe problem of label distillation - creating synthetic labels for a small set\nof real images, and show it to be more effective than the prior image-based\napproach to dataset distillation. Methodologically, we introduce a more robust\nand flexible meta-learning algorithm for distillation, as well as an effective\nfirst-order strategy based on convex optimization layers. Distilling labels\nwith our new algorithm leads to improved results over prior image-based\ndistillation. More importantly, it leads to clear improvements in flexibility\nof the distilled dataset in terms of compatibility with off-the-shelf\noptimizers and diverse neural architectures. 
Interestingly, label distillation\ncan also be applied across datasets, for example enabling learning Japanese\ncharacter recognition by training only on synthetically labeled English\nletters.", + "authors": "Ondrej Bohdal, Yongxin Yang, Timothy Hospedales", + "published": "2020-06-15", + "updated": "2020-12-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/0908.0836v3", + "title": "Bound States for Magic State Distillation in Fault-Tolerant Quantum Computation", + "abstract": "Magic state distillation is an important primitive in fault-tolerant quantum\ncomputation. The magic states are pure non-stabilizer states which can be\ndistilled from certain mixed non-stabilizer states via Clifford group\noperations alone. Because of the Gottesman-Knill theorem, mixtures of Pauli\neigenstates are not expected to be magic state distillable, but it has been an\nopen question whether all mixed states outside this set may be distilled. In\nthis Letter we show that, when resources are finitely limited, non-distillable\nstates exist outside the stabilizer octahedron. In analogy with the bound\nentangled states, which arise in entanglement theory, we call such states bound\nstates for magic state distillation.", + "authors": "Earl T. Campbell, Dan E. Browne", + "published": "2009-08-06", + "updated": "2010-02-01", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2102.02973v1", + "title": "Show, Attend and Distill:Knowledge Distillation via Attention-based Feature Matching", + "abstract": "Knowledge distillation extracts general knowledge from a pre-trained teacher\nnetwork and provides guidance to a target student network. Most studies\nmanually tie intermediate features of the teacher and student, and transfer\nknowledge through pre-defined links. However, manual selection often constructs\nineffective links that limit the improvement from the distillation. There has\nbeen an attempt to address the problem, but it is still challenging to identify\neffective links under practical scenarios. In this paper, we introduce an\neffective and efficient feature distillation method utilizing all the feature\nlevels of the teacher without manually selecting the links. Specifically, our\nmethod utilizes an attention-based meta-network that learns relative\nsimilarities between features, and applies identified similarities to control\ndistillation intensities of all possible pairs. As a result, our method\ndetermines competent links more efficiently than the previous approach and\nprovides better performance on model compression and transfer learning tasks.\nFurther qualitative analyses and ablative studies describe how our method\ncontributes to better distillation. The implementation code is available at\ngithub.com/clovaai/attention-feature-distillation.", + "authors": "Mingi Ji, Byeongho Heo, Sungrae Park", + "published": "2021-02-05", + "updated": "2021-02-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1504.05965v2", + "title": "Qutrit Magic State Distillation Tight in Some Directions", + "abstract": "Magic state distillation is a crucial component in the leading approaches to\nimplementing universal fault tolerant quantum computation, with existing\nprotocols for both qubit and higher dimensional systems. 
Early work focused on\ndetermining the region of distillable states for qubit protocols, yet\ncomparatively little is known about which states can be distilled and with what\ndistillable region for d>2. Here we focus on d=3 and present new four-qutrit\ndistillation schemes that improve upon the known distillable region, and\nachieve distillation tight to the boundary of undistillable states for some\nclasses of state. As a consequence of recent results, this implies that there\nis a family of quantum states that enable universality if and only if they\nexhibit contextuality with respect to stabilizer measurements. We also identify\na new routine whose fixed point is a magic state with maximal sum-negativity\ni.e., it is maximally non-stabilizer in a specific sense.", + "authors": "Hillary Dawkins, Mark Howard", + "published": "2015-04-22", + "updated": "2015-09-21", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0202165v1", + "title": "Distinguishing locally of quantum states and the distillation of entanglement", + "abstract": "This paper try to probe the relation of distinguishing locally and\ndistillation of entanglement. The distinguishing information (DI) and the\nmaximal distinguishing information (MDI) of a set of pure states are defined.\nThe interpretation of distillation of entanglement in term of information is\ngiven. The relation between the maximal distinguishing information and\ndistillable entanglement is gained. As a application of this relation the\ndistillable entanglement of Bell-diagonal states is present.", + "authors": "ping-xing. chen, Cheng-zu Li", + "published": "2002-02-27", + "updated": "2002-02-27", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2104.02857v2", + "title": "Soft-Label Anonymous Gastric X-ray Image Distillation", + "abstract": "This paper presents a soft-label anonymous gastric X-ray image distillation\nmethod based on a gradient descent approach. The sharing of medical data is\ndemanded to construct high-accuracy computer-aided diagnosis (CAD) systems.\nHowever, the large size of the medical dataset and privacy protection are\nremaining problems in medical data sharing, which hindered the research of CAD\nsystems. The idea of our distillation method is to extract the valid\ninformation of the medical dataset and generate a tiny distilled dataset that\nhas a different data distribution. Different from model distillation, our\nmethod aims to find the optimal distilled images, distilled labels and the\noptimized learning rate. Experimental results show that the proposed method can\nnot only effectively compress the medical dataset but also anonymize medical\nimages to protect the patient's private information. The proposed approach can\nimprove the efficiency and security of medical data sharing.", + "authors": "Guang Li, Ren Togo, Takahiro Ogawa, Miki Haseyama", + "published": "2021-04-07", + "updated": "2024-03-21", + "primary_cat": "eess.IV", + "cats": [ + "eess.IV", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2307.08436v1", + "title": "DOT: A Distillation-Oriented Trainer", + "abstract": "Knowledge distillation transfers knowledge from a large model to a small one\nvia task and distillation losses. 
In this paper, we observe a trade-off between\ntask and distillation losses, i.e., introducing distillation loss limits the\nconvergence of task loss. We believe that the trade-off results from the\ninsufficient optimization of distillation loss. The reason is: The teacher has\na lower task loss than the student, and a lower distillation loss drives the\nstudent more similar to the teacher, then a better-converged task loss could be\nobtained. To break the trade-off, we propose the Distillation-Oriented Trainer\n(DOT). DOT separately considers gradients of task and distillation losses, then\napplies a larger momentum to distillation loss to accelerate its optimization.\nWe empirically prove that DOT breaks the trade-off, i.e., both losses are\nsufficiently optimized. Extensive experiments validate the superiority of DOT.\nNotably, DOT achieves a +2.59% accuracy improvement on ImageNet-1k for the\nResNet50-MobileNetV1 pair. Conclusively, DOT greatly benefits the student's\noptimization properties in terms of loss convergence and model generalization.\nCode will be made publicly available.", + "authors": "Borui Zhao, Quan Cui, Renjie Song, Jiajun Liang", + "published": "2023-07-17", + "updated": "2023-07-17", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0607126v3", + "title": "Random bipartite entanglement from W and W-like states", + "abstract": "We describe a protocol for distilling maximally entangled bipartite states\nbetween random pairs of parties from those sharing a tripartite W state, and\nshow that, rather surprisingly, the total distillation rate (the total number\nof EPR pairs distilled per W, irrespective of who shares them) may be done at a\nhigher rate than distillation of bipartite entanglement between specified pairs\nof parties. Specifically, the optimal distillation rate for specified\nentanglement for the W has been previously shown to be the asymptotic\nentanglement of assistance of 0.92 EPR pairs per W, while our protocol can\nasymptotically distill 1 EPR pair per W between random pairs of parties, which\nwe conjecture to be optimal. We thus demonstrate a tradeoff between the overall\nasymptotic rate of EPR distillation and the distribution of final EPR pairs\nbetween parties. We further show that by increasing the number of parties in\nthe protocol that there exist states with fixed lower-bounded distillable\nentanglement for random parties but arbitrarily small distillable entanglement\nfor specified parties.", + "authors": "Ben Fortescue, Hoi-Kwong Lo", + "published": "2006-07-18", + "updated": "2007-02-23", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2403.03846v1", + "title": "On the Effectiveness of Distillation in Mitigating Backdoors in Pre-trained Encoder", + "abstract": "In this paper, we study a defense against poisoned encoders in SSL called\ndistillation, which is a defense used in supervised learning originally.\nDistillation aims to distill knowledge from a given model (a.k.a the teacher\nnet) and transfer it to another (a.k.a the student net). Now, we use it to\ndistill benign knowledge from poisoned pre-trained encoders and transfer it to\na new encoder, resulting in a clean pre-trained encoder. In particular, we\nconduct an empirical study on the effectiveness and performance of distillation\nagainst poisoned encoders. 
Using two state-of-the-art backdoor attacks against\npre-trained image encoders and four commonly used image classification\ndatasets, our experimental results show that distillation can reduce attack\nsuccess rate from 80.87% to 27.51% while suffering a 6.35% loss in accuracy.\nMoreover, we investigate the impact of three core components of distillation on\nperformance: teacher net, student net, and distillation loss. By comparing 4\ndifferent teacher nets, 3 student nets, and 6 distillation losses, we find that\nfine-tuned teacher nets, warm-up-training-based student nets, and\nattention-based distillation loss perform best, respectively.", + "authors": "Tingxu Han, Shenghan Huang, Ziqi Ding, Weisong Sun, Yebo Feng, Chunrong Fang, Jun Li, Hanwei Qian, Cong Wu, Quanjun Zhang, Yang Liu, Zhenyu Chen", + "published": "2024-03-06", + "updated": "2024-03-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2112.05638v2", + "title": "DistilCSE: Effective Knowledge Distillation For Contrastive Sentence Embeddings", + "abstract": "Large-scale contrastive learning models can learn very informative sentence\nembeddings, but are hard to serve online due to the huge model size. Therefore,\nthey often play the role of \"teacher\", transferring abilities to small\n\"student\" models through knowledge distillation. However, knowledge\ndistillation inevitably brings some drop in embedding effect. To tackle that,\nwe propose an effective knowledge distillation framework for contrastive\nsentence embeddings, termed DistilCSE. It first applies knowledge distillation\non a large amount of unlabeled data, and then fine-tunes student models through\ncontrastive learning on limited labeled data. To achieve better distillation\nresults, we further propose Contrastive Knowledge Distillation (CKD). CKD uses\nInfoNCE as the loss function in knowledge distillation, enhancing the objective\nconsistency among teacher model training, knowledge distillation, and student\nmodel fine-tuning. Extensive experiments show that student models trained with\nthe proposed DistilCSE and CKD suffer from little or even no performance\ndecrease and consistently outperform the corresponding counterparts of the same\nparameter size. Impressively, our 110M student model outperforms the latest\nstate-of-the-art model, i.e., Sentence-T5 (11B), with only 1% parameters and\n0.25% unlabeled data.", + "authors": "Chaochen Gao, Xing Wu, Peng Wang, Jue Wang, Liangjun Zang, Zhongyuan Wang, Songlin Hu", + "published": "2021-12-10", + "updated": "2023-01-30", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1707.02573v1", + "title": "Distilling Entanglement with Noisy Operations", + "abstract": "Entanglement distillation is a fundamental task in quantum information\nprocessing. It not only extracts entanglement out of corrupted systems but also\nleads to protecting systems of interest against intervention with environment.\nIn this work, we consider a realistic scenario of entanglement distillation\nwhere noisy quantum operations are applied. In particular, the two-way\ndistillation protocol that tolerates the highest error rate is considered. We\nshow that among all types of noise there are only four equivalence classes\naccording to the distillability condition. 
Since the four classes are connected\nby local unitary transformations, our results can be used to improve\nentanglement distillability in practice when entanglement distillation is\nperformed in a realistic setting.", + "authors": "Jinho Chang, Joonwoo Bae, Younghun Kwon", + "published": "2017-07-09", + "updated": "2017-07-09", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.06170v1", + "title": "CLIP-Embed-KD: Computationally Efficient Knowledge Distillation Using Embeddings as Teachers", + "abstract": "Contrastive Language-Image Pre-training (CLIP) has been shown to improve\nzero-shot generalization capabilities of language and vision models. In this\npaper, we extend CLIP for efficient knowledge distillation, by utilizing\nembeddings as teachers. Typical knowledge distillation frameworks require\nrunning forward passes through a teacher model, which is often prohibitive in\nthe case of billion or trillion parameter teachers. In these cases, using only\nthe embeddings of the teacher models to guide the distillation can yield\nsignificant computational savings. Our preliminary findings show that\nCLIP-based knowledge distillation with embeddings can outperform full scale\nknowledge distillation using $9\\times$ less memory and $8\\times$ less training\ntime. Code available at: https://github.com/lnairGT/CLIP-Distillation/", + "authors": "Lakshmi Nair", + "published": "2024-04-09", + "updated": "2024-04-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2205.16004v3", + "title": "What Knowledge Gets Distilled in Knowledge Distillation?", + "abstract": "Knowledge distillation aims to transfer useful information from a teacher\nnetwork to a student network, with the primary goal of improving the student's\nperformance for the task at hand. Over the years, there has a been a deluge of\nnovel techniques and use cases of knowledge distillation. Yet, despite the\nvarious improvements, there seems to be a glaring gap in the community's\nfundamental understanding of the process. Specifically, what is the knowledge\nthat gets distilled in knowledge distillation? In other words, in what ways\ndoes the student become similar to the teacher? Does it start to localize\nobjects in the same way? Does it get fooled by the same adversarial samples?\nDoes its data invariance properties become similar? Our work presents a\ncomprehensive study to try to answer these questions. We show that existing\nmethods can indeed indirectly distill these properties beyond improving task\nperformance. We further study why knowledge distillation might work this way,\nand show that our findings have practical implications as well.", + "authors": "Utkarsh Ojha, Yuheng Li, Anirudh Sundara Rajan, Yingyu Liang, Yong Jae Lee", + "published": "2022-05-31", + "updated": "2023-11-06", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2302.09632v1", + "title": "HomoDistil: Homotopic Task-Agnostic Distillation of Pre-trained Transformers", + "abstract": "Knowledge distillation has been shown to be a powerful model compression\napproach to facilitate the deployment of pre-trained language models in\npractice. This paper focuses on task-agnostic distillation. 
It produces a\ncompact pre-trained model that can be easily fine-tuned on various tasks with\nsmall computational costs and memory footprints. Despite the practical\nbenefits, task-agnostic distillation is challenging. Since the teacher model\nhas a significantly larger capacity and stronger representation power than the\nstudent model, it is very difficult for the student to produce predictions that\nmatch the teacher's over a massive amount of open-domain training data. Such a\nlarge prediction discrepancy often diminishes the benefits of knowledge\ndistillation. To address this challenge, we propose Homotopic Distillation\n(HomoDistil), a novel task-agnostic distillation approach equipped with\niterative pruning. Specifically, we initialize the student model from the\nteacher model, and iteratively prune the student's neurons until the target\nwidth is reached. Such an approach maintains a small discrepancy between the\nteacher's and student's predictions throughout the distillation process, which\nensures the effectiveness of knowledge transfer. Extensive experiments\ndemonstrate that HomoDistil achieves significant improvements on existing\nbaselines.", + "authors": "Chen Liang, Haoming Jiang, Zheng Li, Xianfeng Tang, Bin Yin, Tuo Zhao", + "published": "2023-02-19", + "updated": "2023-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2011.02255v2", + "title": "On Self-Distilling Graph Neural Network", + "abstract": "Recently, the teacher-student knowledge distillation framework has\ndemonstrated its potential in training Graph Neural Networks (GNNs). However,\ndue to the difficulty of training over-parameterized GNN models, one may not\neasily obtain a satisfactory teacher model for distillation. Furthermore, the\ninefficient training process of teacher-student knowledge distillation also\nimpedes its applications in GNN models. In this paper, we propose the first\nteacher-free knowledge distillation method for GNNs, termed GNN\nSelf-Distillation (GNN-SD), that serves as a drop-in replacement of the\nstandard training process. The method is built upon the proposed neighborhood\ndiscrepancy rate (NDR), which quantifies the non-smoothness of the embedded\ngraph in an efficient way. Based on this metric, we propose the adaptive\ndiscrepancy retaining (ADR) regularizer to empower the transferability of\nknowledge that maintains high neighborhood discrepancy across GNN layers. We\nalso summarize a generic GNN-SD framework that could be exploited to induce\nother distillation strategies. Experiments further prove the effectiveness and\ngeneralization of our approach, as it brings: 1) state-of-the-art GNN\ndistillation performance with less training cost, 2) consistent and\nconsiderable performance enhancement for various popular backbones.", + "authors": "Yuzhao Chen, Yatao Bian, Xi Xiao, Yu Rong, Tingyang Xu, Junzhou Huang", + "published": "2020-11-04", + "updated": "2021-04-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1901.09135v1", + "title": "Progressive Label Distillation: Learning Input-Efficient Deep Neural Networks", + "abstract": "Much of the focus in the area of knowledge distillation has been on\ndistilling knowledge from a larger teacher network to a smaller student\nnetwork. 
However, there has been little research on how the concept of\ndistillation can be leveraged to distill the knowledge encapsulated in the\ntraining data itself into a reduced form. In this study, we explore the concept\nof progressive label distillation, where we leverage a series of\nteacher-student network pairs to progressively generate distilled training data\nfor learning deep neural networks with greatly reduced input dimensions. To\ninvestigate the efficacy of the proposed progressive label distillation\napproach, we experimented with learning a deep limited vocabulary speech\nrecognition network based on generated 500ms input utterances distilled\nprogressively from 1000ms source training data, and demonstrated a significant\nincrease in test accuracy of almost 78% compared to direct learning.", + "authors": "Zhong Qiu Lin, Alexander Wong", + "published": "2019-01-26", + "updated": "2019-01-26", + "primary_cat": "cs.NE", + "cats": [ + "cs.NE", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2302.05637v2", + "title": "Dual Relation Knowledge Distillation for Object Detection", + "abstract": "Knowledge distillation is an effective method for model compression. However,\nit is still a challenging topic to apply knowledge distillation to detection\ntasks. There are two key points resulting in poor distillation performance for\ndetection tasks. One is the serious imbalance between foreground and background\nfeatures, another one is that small object lacks enough feature representation.\nTo solve the above issues, we propose a new distillation method named dual\nrelation knowledge distillation (DRKD), including pixel-wise relation\ndistillation and instance-wise relation distillation. The pixel-wise relation\ndistillation embeds pixel-wise features in the graph space and applies graph\nconvolution to capture the global pixel relation. By distilling the global\npixel relation, the student detector can learn the relation between foreground\nand background features, and avoid the difficulty of distilling features\ndirectly for the feature imbalance issue. Besides, we find that instance-wise\nrelation supplements valuable knowledge beyond independent features for small\nobjects. Thus, the instance-wise relation distillation is designed, which\ncalculates the similarity of different instances to obtain a relation matrix.\nMore importantly, a relation filter module is designed to highlight valuable\ninstance relations. The proposed dual relation knowledge distillation is\ngeneral and can be easily applied for both one-stage and two-stage detectors.\nOur method achieves state-of-the-art performance, which improves Faster R-CNN\nbased on ResNet50 from 38.4% to 41.6% mAP and improves RetinaNet based on\nResNet50 from 37.4% to 40.3% mAP on COCO 2017.", + "authors": "Zhenliang Ni, Fukui Yang, Shengzhao Wen, Gang Zhang", + "published": "2023-02-11", + "updated": "2023-06-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2312.06899v1", + "title": "LoRA-Enhanced Distillation on Guided Diffusion Models", + "abstract": "Diffusion models, such as Stable Diffusion (SD), offer the ability to\ngenerate high-resolution images with diverse features, but they come at a\nsignificant computational and memory cost. In classifier-free guided diffusion\nmodels, prolonged inference times are attributed to the necessity of computing\ntwo separate diffusion models at each denoising step. 
Recent work has shown\npromise in improving inference time through distillation techniques, teaching\nthe model to perform similar denoising steps with reduced computations.\nHowever, the application of distillation introduces additional memory overhead\nto these already resource-intensive diffusion models, making it less practical.\n To address these challenges, our research explores a novel approach that\ncombines Low-Rank Adaptation (LoRA) with model distillation to efficiently\ncompress diffusion models. This approach not only reduces inference time but\nalso mitigates memory overhead, and notably decreases memory consumption even\nbefore applying distillation. The results are remarkable, featuring a\nsignificant reduction in inference time due to the distillation process and a\nsubstantial 50% reduction in memory consumption. Our examination of the\ngenerated images underscores that the incorporation of LoRA-enhanced\ndistillation maintains image quality and alignment with the provided prompts.\nIn summary, while conventional distillation tends to increase memory\nconsumption, LoRA-enhanced distillation offers optimization without any\ntrade-offs or compromises in quality.", + "authors": "Pareesa Ameneh Golnari", + "published": "2023-12-12", + "updated": "2023-12-12", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2303.05015v2", + "title": "Smooth and Stepwise Self-Distillation for Object Detection", + "abstract": "Distilling the structured information captured in feature maps has\ncontributed to improved results for object detection tasks, but requires\ncareful selection of baseline architectures and substantial pre-training.\nSelf-distillation addresses these limitations and has recently achieved\nstate-of-the-art performance for object detection despite making several\nsimplifying architectural assumptions. Building on this work, we propose Smooth\nand Stepwise Self-Distillation (SSSD) for object detection. Our SSSD\narchitecture forms an implicit teacher from object labels and a feature pyramid\nnetwork backbone to distill label-annotated feature maps using Jensen-Shannon\ndistance, which is smoother than distillation losses used in prior work. We\nadditionally add a distillation coefficient that is adaptively configured based\non the learning rate. We extensively benchmark SSSD against a baseline and two\nstate-of-the-art object detector architectures on the COCO dataset by varying\nthe coefficients and backbone and detector networks. We demonstrate that SSSD\nachieves higher average precision in most experimental settings, is robust to a\nwide range of coefficients, and benefits from our stepwise distillation\nprocedure.", + "authors": "Jieren Deng, Xin Zhou, Hao Tian, Zhihong Pan, Derek Aguiar", + "published": "2023-03-09", + "updated": "2024-01-19", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2205.09153v1", + "title": "ERNIE-Search: Bridging Cross-Encoder with Dual-Encoder via Self On-the-fly Distillation for Dense Passage Retrieval", + "abstract": "Neural retrievers based on pre-trained language models (PLMs), such as\ndual-encoders, have achieved promising performance on the task of open-domain\nquestion answering (QA). 
Their effectiveness can further reach new\nstate-of-the-arts by incorporating cross-architecture knowledge distillation.\nHowever, most of the existing studies just directly apply conventional\ndistillation methods. They fail to consider the particular situation where the\nteacher and student have different structures. In this paper, we propose a\nnovel distillation method that significantly advances cross-architecture\ndistillation for dual-encoders. Our method 1) introduces a self on-the-fly\ndistillation method that can effectively distill late interaction (i.e.,\nColBERT) to vanilla dual-encoder, and 2) incorporates a cascade distillation\nprocess to further improve the performance with a cross-encoder teacher.\nExtensive experiments are conducted to validate that our proposed solution\noutperforms strong baselines and establish a new state-of-the-art on\nopen-domain QA benchmarks.", + "authors": "Yuxiang Lu, Yiding Liu, Jiaxiang Liu, Yunsheng Shi, Zhengjie Huang, Shikun Feng Yu Sun, Hao Tian, Hua Wu, Shuaiqiang Wang, Dawei Yin, Haifeng Wang", + "published": "2022-05-18", + "updated": "2022-05-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2011.14554v1", + "title": "A Selective Survey on Versatile Knowledge Distillation Paradigm for Neural Network Models", + "abstract": "This paper aims to provide a selective survey about knowledge\ndistillation(KD) framework for researchers and practitioners to take advantage\nof it for developing new optimized models in the deep neural network field. To\nthis end, we give a brief overview of knowledge distillation and some related\nworks including learning using privileged information(LUPI) and generalized\ndistillation(GD). Even though knowledge distillation based on the\nteacher-student architecture was initially devised as a model compression\ntechnique, it has found versatile applications over various frameworks.\n In this paper, we review the characteristics of knowledge distillation from\nthe hypothesis that the three important ingredients of knowledge distillation\nare distilled knowledge and loss,teacher-student paradigm, and the distillation\nprocess. In addition, we survey the versatility of the knowledge distillation\nby studying its direct applications and its usage in combination with other\ndeep learning paradigms. Finally we present some future works in knowledge\ndistillation including explainable knowledge distillation where the analytical\nanalysis of the performance gain is studied and the self-supervised learning\nwhich is a hot research topic in deep learning community.", + "authors": "Jeong-Hoe Ku, JiHun Oh, YoungYoon Lee, Gaurav Pooniwala, SangJeong Lee", + "published": "2020-11-30", + "updated": "2020-11-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2401.11365v1", + "title": "Confidence Preservation Property in Knowledge Distillation Abstractions", + "abstract": "Social media platforms prevent malicious activities by detecting harmful\ncontent of posts and comments. To that end, they employ large-scale deep neural\nnetwork language models for sentiment analysis and content understanding. Some\nmodels, like BERT, are complex, and have numerous parameters, which makes them\nexpensive to operate and maintain. 
To overcome these deficiencies, industry\nexperts employ a knowledge distillation compression technique, where a\ndistilled model is trained to reproduce the classification behavior of the\noriginal model. The distillation processes terminates when the distillation\nloss function reaches the stopping criteria. This function is mainly designed\nto ensure that the original and the distilled models exhibit alike\nclassification behaviors. However, besides classification accuracy, there are\nadditional properties of the original model that the distilled model should\npreserve to be considered as an appropriate abstraction. In this work, we\nexplore whether distilled TinyBERT models preserve confidence values of the\noriginal BERT models, and investigate how this confidence preservation property\ncould guide tuning hyperparameters of the distillation process.", + "authors": "Dmitry Vengertsev, Elena Sherman", + "published": "2024-01-21", + "updated": "2024-01-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2304.04615v1", + "title": "A Survey on Recent Teacher-student Learning Studies", + "abstract": "Knowledge distillation is a method of transferring the knowledge from a\ncomplex deep neural network (DNN) to a smaller and faster DNN, while preserving\nits accuracy. Recent variants of knowledge distillation include teaching\nassistant distillation, curriculum distillation, mask distillation, and\ndecoupling distillation, which aim to improve the performance of knowledge\ndistillation by introducing additional components or by changing the learning\nprocess. Teaching assistant distillation involves an intermediate model called\nthe teaching assistant, while curriculum distillation follows a curriculum\nsimilar to human education. Mask distillation focuses on transferring the\nattention mechanism learned by the teacher, and decoupling distillation\ndecouples the distillation loss from the task loss. Overall, these variants of\nknowledge distillation have shown promising results in improving the\nperformance of knowledge distillation.", + "authors": "Minghong Gao", + "published": "2023-04-10", + "updated": "2023-04-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0008047v2", + "title": "A semidefinite program for distillable entanglement", + "abstract": "We show that the maximum fidelity obtained by a p.p.t. distillation protocol\nis given by the solution to a certain semidefinite program. This gives a number\nof new lower and upper bounds on p.p.t. distillable entanglement (and thus new\nupper bounds on 2-locally distillable entanglement). In the presence of\nsymmetry, the semidefinite program simplifies considerably, becoming a linear\nprogram in the case of isotropic and Werner states. Using these techniques, we\ndetermine the p.p.t. distillable entanglement of asymmetric Werner states and\n``maximally correlated'' states. We conclude with a discussion of possible\napplications of semidefinite programming to quantum codes and 1-local\ndistillation.", + "authors": "Eric M. 
Rains", + "published": "2000-08-10", + "updated": "2001-04-12", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1907.09682v2", + "title": "Similarity-Preserving Knowledge Distillation", + "abstract": "Knowledge distillation is a widely applicable technique for training a\nstudent neural network under the guidance of a trained teacher network. For\nexample, in neural network compression, a high-capacity teacher is distilled to\ntrain a compact student; in privileged learning, a teacher trained with\nprivileged data is distilled to train a student without access to that data.\nThe distillation loss determines how a teacher's knowledge is captured and\ntransferred to the student. In this paper, we propose a new form of knowledge\ndistillation loss that is inspired by the observation that semantically similar\ninputs tend to elicit similar activation patterns in a trained network.\nSimilarity-preserving knowledge distillation guides the training of a student\nnetwork such that input pairs that produce similar (dissimilar) activations in\nthe teacher network produce similar (dissimilar) activations in the student\nnetwork. In contrast to previous distillation methods, the student is not\nrequired to mimic the representation space of the teacher, but rather to\npreserve the pairwise similarities in its own representation space. Experiments\non three public datasets demonstrate the potential of our approach.", + "authors": "Frederick Tung, Greg Mori", + "published": "2019-07-23", + "updated": "2019-08-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.09740v1", + "title": "Leveraging Zero-Level Distillation to Generate High-Fidelity Magic States", + "abstract": "Magic state distillation plays an important role in universal fault-tolerant\nquantum computing, and its overhead is one of the major obstacles to realizing\nfault-tolerant quantum computers. Hence, many studies have been conducted to\nreduce this overhead. Among these, Litinski has provided a concrete assessment\nof resource-efficient distillation protocol implementations on the rotated\nsurface code. On the other hand, recently, Itogawa et al. have proposed\nzero-level distillation, a distillation protocol offering very small spatial\nand temporal overhead to generate relatively low-fidelity magic states. While\nzero-level distillation offers preferable spatial and temporal overhead, it\ncannot directly generate high-fidelity magic states since it only reduces the\nlogical error rate of the magic state quadratically. In this study, we evaluate\nthe spatial and temporal overhead of two-level distillation implementations\ngenerating relatively high-fidelity magic states, including ones incorporating\nzero-level distillation. To this end, we introduce (0+1)-level distillation, a\ntwo-level distillation protocol which combines zero-level distillation and the\n15-to-1 distillation protocol. We refine the second-level 15-to-1\nimplementation in it to capitalize on the small footprint of zero-level\ndistillation. 
Under conditions of a physical error probability of\n$p_{\\mathrm{phys}} = 10^{-4}$ ($10^{-3}$) and targeting an error rate for the\nmagic state within $[5 \\times 10^{-17}, 10^{-11}]$ ($[5 \\times 10^{-11},\n10^{-8}]$), (0+1)-level distillation reduces the spatiotemporal overhead by\nmore than 63% (61%) compared to the (15-to-1)$\\times$(15-to-1) protocol and\nmore than 43% (44%) compared to the (15-to-1)$\\times$(20-to-4) protocol,\noffering a substantial efficiency gain over the traditional protocols.", + "authors": "Yutaka Hirano, Tomohiro Itogawa, Keisuke Fujii", + "published": "2024-04-15", + "updated": "2024-04-15", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.17732v1", + "title": "Generative Dataset Distillation: Balancing Global Structure and Local Details", + "abstract": "In this paper, we propose a new dataset distillation method that considers\nbalancing global structure and local details when distilling the information\nfrom a large dataset into a generative model. Dataset distillation has been\nproposed to reduce the size of the required dataset when training models. The\nconventional dataset distillation methods face the problem of long redeployment\ntime and poor cross-architecture performance. Moreover, previous methods\nfocused too much on the high-level semantic attributes between the synthetic\ndataset and the original dataset while ignoring the local features such as\ntexture and shape. Based on the above understanding, we propose a new method\nfor distilling the original image dataset into a generative model. Our method\ninvolves using a conditional generative adversarial network to generate the\ndistilled dataset. Subsequently, we ensure balancing global structure and local\ndetails in the distillation process, continuously optimizing the generator for\nmore information-dense dataset generation.", + "authors": "Longzhen Li, Guang Li, Ren Togo, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama", + "published": "2024-04-26", + "updated": "2024-04-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1807.04705v2", + "title": "Non-asymptotic assisted distillation of quantum coherence", + "abstract": "We characterize the operational task of environment-assisted distillation of\nquantum coherence under different sets of free operations when only a finite\nsupply of copies of a given state is available. We first evaluate the one-shot\nassisted distillable coherence exactly, and introduce a semidefinite\nprogramming bound on it in terms of a smooth entropic quantity. We prove the\nbound to be tight for all systems in dimensions 2 and 3, which allows us to\nobtain computable expressions for the one-shot rate of distillation, establish\nan analytical expression for the best achievable fidelity of assisted\ndistillation for any finite number of copies, and fully solve the problem of\nasymptotic zero-error assisted distillation for qubit and qutrit systems. 
Our\ncharacterization shows that all relevant sets of free operations in the\nresource theory of coherence have exactly the same power in the task of\none-shot assisted coherence distillation, and furthermore resolves a conjecture\nregarding the additivity of coherence of assistance in dimension 3.", + "authors": "Bartosz Regula, Ludovico Lami, Alexander Streltsov", + "published": "2018-07-12", + "updated": "2018-10-16", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2312.00739v1", + "title": "Adversarial Score Distillation: When score distillation meets GAN", + "abstract": "Existing score distillation methods are sensitive to classifier-free guidance\n(CFG) scale: manifested as over-smoothness or instability at small CFG scales,\nwhile over-saturation at large ones. To explain and analyze these issues, we\nrevisit the derivation of Score Distillation Sampling (SDS) and decipher\nexisting score distillation with the Wasserstein Generative Adversarial Network\n(WGAN) paradigm. With the WGAN paradigm, we find that existing score\ndistillation either employs a fixed sub-optimal discriminator or conducts\nincomplete discriminator optimization, resulting in the scale-sensitive issue.\nWe propose the Adversarial Score Distillation (ASD), which maintains an\noptimizable discriminator and updates it using the complete optimization\nobjective. Experiments show that the proposed ASD performs favorably in 2D\ndistillation and text-to-3D tasks against existing methods. Furthermore, to\nexplore the generalization ability of our WGAN paradigm, we extend ASD to the\nimage editing task, which achieves competitive results. The project page and\ncode are at https://github.com/2y7c3/ASD.", + "authors": "Min Wei, Jingkai Zhou, Junyao Sun, Xuesong Zhang", + "published": "2023-12-01", + "updated": "2023-12-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2106.07137v1", + "title": "Why Can You Lay Off Heads? Investigating How BERT Heads Transfer", + "abstract": "The huge size of the widely used BERT family models has led to recent efforts\nabout model distillation. The main goal of distillation is to create a\ntask-agnostic pre-trained model that can be fine-tuned on downstream tasks\nwithout fine-tuning its full-sized version. Despite the progress of\ndistillation, to what degree and for what reason a task-agnostic model can be\ncreated from distillation has not been well studied. Also, the mechanisms\nbehind transfer learning of those BERT models are not well investigated either.\nTherefore, this work focuses on analyzing the acceptable deduction when\ndistillation for guiding the future distillation procedure. Specifically, we\nfirst inspect the prunability of the Transformer heads in RoBERTa and ALBERT\nusing their head importance estimation proposed by Michel et al. (2019), and\nthen check the coherence of the important heads between the pre-trained task\nand downstream tasks. Hence, the acceptable deduction of performance on the\npre-trained task when distilling a model can be derived from the results, and\nwe further compare the behavior of the pruned model before and after\nfine-tuning. 
Our studies provide guidance for future directions about BERT\nfamily model distillation.", + "authors": "Ting-Rui Chiang, Yun-Nung Chen", + "published": "2021-06-14", + "updated": "2021-06-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2403.09053v1", + "title": "Towards a theory of model distillation", + "abstract": "Distillation is the task of replacing a complicated machine learning model\nwith a simpler model that approximates the original [BCNM06,HVD15]. Despite\nmany practical applications, basic questions about the extent to which models\ncan be distilled, and the runtime and amount of data needed to distill, remain\nlargely open.\n To study these questions, we initiate a general theory of distillation,\ndefining PAC-distillation in an analogous way to PAC-learning [Val84]. As\napplications of this theory: (1) we propose new algorithms to extract the\nknowledge stored in the trained weights of neural networks -- we show how to\nefficiently distill neural networks into succinct, explicit decision tree\nrepresentations when possible by using the ``linear representation\nhypothesis''; and (2) we prove that distillation can be much cheaper than\nlearning from scratch, and make progress on characterizing its complexity.", + "authors": "Enric Boix-Adsera", + "published": "2024-03-14", + "updated": "2024-03-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.NE" + ], + "category": "Distillation" + } + ], + [ + { + "url": "http://arxiv.org/abs/2402.16361v1", + "title": "Layer-wise Regularized Dropout for Neural Language Models", + "abstract": "Among the various pre-trained neural language models that are popular today,\ndropout is already an indispensable regularization technique. To solve the\ninconsistency between training and inference caused by the randomness of\ndropout, some studies use consistency training to regularize dropout at the\noutput layer. In this paper, we propose a novel Layer-wise Regularized Dropout\n(LR-Drop), which is specially designed for Transformer-based Language models.\nSpecifically, LR-Drop layer-wise regularizes each Transformer layer using the\nconsistency training strategy. Each training sample passes through the two\nsiamese sub-models sampled by dropout, and then LR-Drop forces the hidden\nstates, multi-head attention matrices, and output distribution of the two\nsiamese sub-models to be consistent. The proposed LR-Drop can be regarded as a\n\"self-distillation\" framework, in which each sub-model generated by dropout is\nthe other's \"teacher\" model and \"student\" model. Through extensive experiments\non 8 natural language understanding datasets, 6 neural machine translation\ndatasets, and 1 abstractive summarization dataset (a total of 15 datasets), we\nshow that LR-Drop achieves superior performances, including state-of-the-art\nresults.", + "authors": "Shiwen Ni, Min Yang, Ruifeng Xu, Chengming Li, Xiping Hu", + "published": "2024-02-26", + "updated": "2024-02-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "Distillation", + "gt": "2.1. Regularization Methods The susceptibility of large and deep neural network models to overfitting is a well-established fact. It has been observed that the most effective models are typically large ones, but they are also paired with appropriate regularization techniques. 
A plethora of regularization techniques have been suggested to enhance the generalization capacity of these models. (Krogh and Hertz, 1992) introduced simple weight decay as a regularization technique to improve generalizability . (Kang et al., 2016) proposed the Shakeout method, which randomly enhances or inverts the contribution of each cell to the next layer, effectively applying L1 and L2 regularization to the weights. Normalization techniques have also been utilized for regularization by researchers such as (Ba et al., 2016; Salimans and Kingma, 2016; Wu and He, 2018). (Hochreiter and Schmidhuber, 1995; Poole et al., 2014) found that adding noise can have a regularization effect. Label smoothing, a simple regularization technique particularly effective in the presence of noisy labels, has been explored by (M\u00fcller et al., 2019; Zhang et al., 2021; Li et al., 2020). Adversarial training, as proposed by (Goodfellow et al., 2015; Zhu et al., 2020; Ni et al., 2021, 2022a,b) has shown significant improvement in model performance, but it comes at the cost of increased computational effort. Dropout and its derivatives, including Adaptive Dropout by (Wan et al., 2013; Ba and Frey, 2013; Srivastava et al., 2014; Ni and Kao, 2023) have gained popularity due to their effectiveness and compatibility with other regularization techniques. Dropout enables the generation of sub-models with exponentially shared parameters during training, providing powerful regularization capabilities. 2.2. Knowledge Distillation The concept of minimizing the output or parameter distribution between two models is commonly referred to as knowledge distillation (Hinton et al.; Furlanello et al., 2018). In knowledge distillation, a teacher model and a student model are typically employed, where the student model learns from both the ground truth labels and the teacher model during training. The teacher model serves as a guide for the student model, allowing it to learn the parameters and output distribution of the teacher model. This process can be viewed as the student model distilling knowledge from the teacher model. In the case of R-Drop (Wu et al., 2021), the generated sub-models can be seen as reciprocal teacher and student models, similar to the concept of selfdistillation (Mobahi et al., 2020; Zhang et al., 2019; Zhang and Sabuncu, 2020). However, R-Drop only applies self-distillation to the output of the model, without considering the internal representations. On the other hand, our proposed LR-Drop incorporates a layer-wise self-distillation approach, similar to the knowledge distillation technique employed in TinyBERT (Jiao et al., 2020). This allows for a more comprehensive knowledge interaction between the sub-models within our LR-Drop framework. It is important to note that while knowledge distillation is typically used to compress models, the primary objective of LR-Drop is to facilitate mutual learning among the sub-models within the larger model, thereby enhancing overall model performance.", + "pre_questions": [], + "main_content": "Introduction In recent years, pre-trained language models (PLMs) based on the Transformer architecture have revolutionized the field of natural language processing (NLP) by achieving state-of-the-art performance on a wide range of NLP tasks. 
These models, such as BERT (Bidirectional Encoder Representations from Transformers) (Kenton and Toutanova, 2019), ALBERT (A Lite BERT) (Lan et al., 2019), XLNet (Yang et al., 2019), RoBERTa (Liu et al., 2019), and ELECTRA (Efficiently Learning an Encoder that Classifies Token Replacements Accurately) (Clark et al., 2019b), have demonstrated their effectiveness in tasks such as text classification, named entity recognition, sentiment analysis, machine translation, question answering, and more. One of the key reasons for the success of these Transformer-based PLMs is their ability to capture contextualized representations of words and sentences. By leveraging the self-attention mechanism, these models can efficiently encode the relationships between different words in a sentence, allowing them to capture long-range dependencies and context. The pre-training stage involves training the models on large amounts of unlabeled text, followed by fine-tuning on specific downstream tasks using labeled data. This transfer learning approach has proven to be highly effective, as the pre-trained models can leverage the knowledge learned from the vast amount of unlabeled data to perform well on a variety of NLP tasks. To prevent overfitting and improve the general\u2020Corresponding author ization ability of PLMs, dropout regularization techniques (Srivastava et al., 2014) are commonly employed during both the pre-training and fine-tuning stages. Dropout randomly deactivates a portion of the neural units during training, effectively creating an ensemble of sub-models. This ensemble approach helps in reducing over-reliance on specific units and encourages the model to learn more robust and generalizable representations. However, the use of dropout introduces a challenge in terms of inconsistency between training and inference. During training, dropout is applied to create the ensemble, but during inference, the full model without dropout is used, leading to a mismatch in behavior. Several studies (Ma et al., 2016; Zolna et al., 2018) have highlighted this inconsistency and its potential impact on model performance. They have proposed methods to address this issue by introducing L2 regularization to the hidden unit state. However, the effectiveness of this approach is limited, and it does not fully resolve the inconsistency problem. To tackle this challenge more effectively, recent research (Wu et al., 2021) has introduced a novel consistency training method called R-Drop. R-Drop aims to align the output distributions of identical data samples processed by different submodels created through dropout. It involves performing two forward passes for each data sample, with each pass handled by a distinct sub-model that randomly deactivates some hidden units. By minimizing the bidirectional Kullback-Leibler (KL) divergence between the output distributions of these two sub-models, R-Drop encourages consistency in the predictions made by the ensemble. This arXiv:2402.16361v1 [cs.CL] 26 Feb 2024 approach provides a more robust and consistent regularization of dropout, addressing the inconsistency issue between training and inference. In addition to regulating dropout at the output layer, it is also important to ensure consistency in other representations within the PLM. For instance, the multi-head attention mechanism, which is a crucial component of Transformer-based models, typically employs dropout. 
Previous studies (Clark et al., 2019a) have shown that the attention weight matrix captures substantial linguistic knowledge. Therefore, it is essential to maintain consistency between the multi-head attention matrices of different sub-models to preserve the learned linguistic knowledge. By extending the principles of R-Drop, we propose LR-Drop to introduce regularization constraints into each Transformer layer of the model. In particular, we formulate three loss functions to regulate different representations from the PLM layers: 1) the hidden states and 2) the multi-head attention matrices extracted from the Transformer layers; 3) the output distributions generated by the prediction layer. Since the multi-head attention in PLMs also employs dropout, we likewise enforce consistency between the two sub-models' multi-head attention matrices. To summarize, the main contributions of this paper are as follows: \u2022 In this work, we propose the layer-wise regularized dropout (LR-Drop), a simple but effective regularization technique built upon dropout, designed for Transformer-based pre-trained language models. \u2022 For the special structure of Transformer-based pre-trained language models, we are the first to propose Transformer-layer regularization, which includes regularization for hidden states and multi-headed attention. \u2022 Our LR-Drop does not introduce additional model parameters and does not change the original architecture of the language model. \u2022 By conducting rigorous experiments on 8 natural language understanding datasets, 6 neural machine translation datasets, and 1 abstractive summarization dataset, we provide evidence that LR-Drop excels in performance, even achieving state-of-the-art results. This section presents a novel regularization method called LR-Drop, specifically designed for Transformer-based language models. The LR-Drop technique is illustrated in Figure 1, where it is applied to the Transformer-based model. The process begins by inputting a sample x into the model with dropout applied twice, resulting in two output distributions denoted as P1 and P2. Subsequently, the cross-entropy loss is calculated using P1, P2, and the hard label y: LCE = \u2212log P1(y|x) \u2212log P2(y|x). (1) In addition to the losses obtained from the computation with labels, LR-Drop contains three regularization losses, which are Transformer-layer regularization (containing (1) a hidden states regularization loss and (2) a multi-head attention regularization loss) and (3) an output regularization loss. Next we describe each of these three regularization processes in detail. 3.1. Transformer-layer Regularization As shown in Figure 1, the red box marks the Transformer-layer regularization, and we regularize between the two sub-Transformer-layers sampled in each layer of the model. The two sub-models obtained by dropout random sampling are mutually teacher and student. Therefore, Transformer-layer regularization can also be seen as Transformer-layer self-distillation. The right side of Figure 1 shows a concrete representation of the Transformer-layer regularization, which contains hidden states regularization and multi-head attention regularization.
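To make the two stochastic forward passes and the cross-entropy term in Equation (1) concrete, here is a minimal PyTorch-style sketch. It assumes a Hugging Face-style sequence classification model whose outputs expose logits, hidden states, and attention maps; the function name and interface are illustrative assumptions, not the authors' released implementation.

```python
import torch.nn.functional as F

def lr_drop_forward(model, input_ids, attention_mask, labels):
    # Dropout must be active (train mode) so the same input is routed through
    # two different sub-models randomly sampled from the full model.
    model.train()
    out1 = model(input_ids, attention_mask=attention_mask,
                 output_hidden_states=True, output_attentions=True)
    out2 = model(input_ids, attention_mask=attention_mask,
                 output_hidden_states=True, output_attentions=True)

    # Eq. (1): cross-entropy of both predicted distributions against the hard label y.
    l_ce = F.cross_entropy(out1.logits, labels) + F.cross_entropy(out2.logits, labels)
    return out1, out2, l_ce
```

The two sets of outputs collected here (logits, per-layer hidden states, per-layer attention maps) are exactly what the layer-wise regularizers described next operate on.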
Hidden States Regularization. A fully connected feed-forward network is included in each Transformer layer, which is expressed as follows: HS(x) = max(0, xW1 + b1)W2 + b2, (2) where there are two linear transformations and one ReLU activation in each feed-forward network. We regularize the knowledge from the Transformer-layer outputs of the two sub-models with the following objective: LHSR = MSE(HS(x)1, HS(x)2), (3) where the matrices HS(x)1 \u2208 R^{l\u00d7d} and HS(x)2 \u2208 R^{l\u00d7d} are the hidden states of the first sub-model and the second sub-model respectively, which are calculated by Equation 2, MSE() is the mean squared error loss function, l is the input text length, and d is the hidden size of the two sub-models. Multi-head Attention Regularization. A key reason the Transformer-based model works well is that each Transformer layer contains a multi-head attention module whose attention function is computed from the query, key and value, represented as matrices Q, K and V. The attention function is called \u201cScaled Dot-Product Attention\u201d and can be expressed as: Attention(Q, K, V) = softmax(QK^T / \u221adk)V, (4) where dk is the dimension of the queries and keys. The dot product of the query with all keys is divided by \u221adk, and the softmax() function is applied to obtain the weights on the values. Multi-head attention concatenates the attention of independently initialized weights in Equation (4), which enables the model to jointly attend to information from different representation subspaces. It can be expressed as: MHA(Q, K, V) = Concat(A1_head, ..., Ah_head)W^O, (5) where h is the number of attention heads, Ai_head denotes the i-th attention head, which is calculated by Equation (4), and the matrix W^O acts as a linear transformation. Multi-head attention learns a substantial amount of linguistic knowledge during training, and it is necessary to regularize it. Therefore, we propose multi-head attention regularization to encourage mutual learning of attention weights between the two sub-models. The optimization objective is defined as: LMHAR = (1/h) \u03a3_{i=1}^{h} MSE(A1_i, A2_i), (6) where h is the number of attention heads, A1_i \u2208 R^{l\u00d7l} and A2_i \u2208 R^{l\u00d7l} refer to the attention matrices of the i-th head of the first sub-model and the second sub-model, l is the input text length, and MSE() refers to the mean squared error loss function. 3.2. Output Regularization In addition to regularizing the Transformer layers within the model, we also apply regularization to the output of the model, similar to R-Drop. Specifically, LR-Drop minimizes the bidirectional KL-divergence between the output distributions of the two sub-models obtained through dropout sampling. The optimization objective is defined as: LOR = (1/2)[KL(P1, P2) + KL(P2, P1)], (7) where P1 and P2 are the output distributions of the first and second sub-models, respectively, and KL() denotes the KL-divergence loss function. Figure 1: The proposed LR-Drop to regularize a Transformer-based PLM. The left figure shows that one input goes through the two different sub-models produced by dropout twice and obtains two distributions P1 and P2. The right one shows a Transformer-layer regularization containing hidden states regularization and MHA regularization.
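The three regularizers in Equations (3), (6), and (7) can be sketched as follows, again assuming Hugging Face-style outputs whose hidden_states and attentions fields are tuples with one tensor per layer. Summing the per-layer terms and using the batchmean reduction are illustrative assumptions, not details taken from the authors' code.

```python
import torch.nn.functional as F

def lr_drop_regularizers(out1, out2):
    # Eq. (3): hidden-states regularization -- MSE between the two sub-models'
    # layer outputs (summed over layers here; the cross-layer aggregation is assumed).
    l_hsr = sum(F.mse_loss(h1, h2)
                for h1, h2 in zip(out1.hidden_states, out2.hidden_states))

    # Eq. (6): multi-head attention regularization -- MSE between the attention
    # maps of corresponding layers; the mean reduction also averages over the
    # head dimension, which plays the role of the 1/h factor.
    l_mhar = sum(F.mse_loss(a1, a2)
                 for a1, a2 in zip(out1.attentions, out2.attentions))

    # Eq. (7): output regularization -- bidirectional KL between P1 and P2.
    logp1 = F.log_softmax(out1.logits, dim=-1)
    logp2 = F.log_softmax(out2.logits, dim=-1)
    l_or = 0.5 * (F.kl_div(logp1, logp2, log_target=True, reduction="batchmean")
                  + F.kl_div(logp2, logp1, log_target=True, reduction="batchmean"))

    return l_hsr, l_mhar, l_or
```

These three terms are then combined with the task loss using the weighting coefficients described in the next subsection.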
3.3. Total Optimization Objective To summarize, the total optimization objective of our proposed LR-Drop during training is expressed as follows: LTotal = LCE + \u03b1LHSR + \u03b2LMHAR + \u03b3LOR, (8) where \u03b1, \u03b2, and \u03b3 are the weight coefficients for the regularization loss functions LHSR, LMHAR, and LOR, respectively. 4. Experiments We assessed the effectiveness of LR-Drop on various natural language processing tasks. Our evaluation involved eight datasets for natural language understanding, six datasets for neural machine translation, and one dataset for abstractive summarization. In the tables below, we use the abbreviations \"RD\" to refer to R-Drop and \"LRD\" to refer to LR-Drop in the presentation of the experimental results. 4.1. Natural Language Understanding Datasets We begin by assessing the effectiveness of LR-Drop on natural language understanding tasks. The GLUE Benchmark consists of eight English natural language understanding tasks, which vary in domains, data volumes, and difficulty levels. (1) RTE (Dagan et al., 2006; Bar Haim et al., 2006; Giampiccolo et al., 2007): This dataset comprises a series of natural language inference datasets used in annual text challenges. (2) MNLI (Williams et al., 2018): In this task, a premise and a hypothesis are given, and the objective is to predict whether the premise supports or contradicts the hypothesis, or neither. (3) MRPC (Dolan and Brockett, 2005): Given a pair of sentences, the task is to determine whether their semantics are the same. (4) STS-B (Agirre et al., 2007): Each data instance consists of a pair of sentences along with a similarity score ranging from 1 to 5. The task involves discrete regression to predict the scores. (5) QQP (Iyer et al., 2017): This task involves identifying whether a pair of questions are semantically identical. (6) SST-2 (Socher et al., 2013): This binary task requires predicting whether a sentence is positive or negative. (7) QNLI (Rajpurkar et al., 2016): The objective of this task is to determine if a given question can be answered using the context sentence. (8) CoLA (Warstadt et al., 2018): This task focuses on assessing the grammatical accuracy of a sentence. Experimental Settings In this subsection, we employ three publicly available pre-trained language models (PLMs) as the baseline models for our experiments to evaluate the effectiveness of LR-Drop. The chosen PLMs are BERT-base, RoBERTa-large, and ELECTRA-large. Different tasks may require different parameter settings, so we dynamically adjust the coefficients \u03b1, \u03b2, and \u03b3 from the set {0.01, 0.05, 0.1, 0.5} accordingly. Model MNLI MRPC QNLI QQP RTE SST-2 STS-B CoLA Avg | BERT-base 83.8 85.3 90.8 91.0 68.2 92.4 89.3 62.3 82.85 | BERT-base + RD 85.5 87.3 92.0 91.4 71.1 93.0 89.6 62.6 84.06 | BERT-base + LRD 86.1 88.0 92.3 91.4 71.8 93.2 90.2 63.4 84.55 | RoBERTa-large 90.2 90.9 94.7 92.2 86.6 96.4 92.4 68.0 88.93 | RoBERTa-large + RD 90.9 91.4 95.2 92.5 88.4 96.9 92.5 70.0 89.73 | RoBERTa-large + LRD 91.3 91.8 95.9 93.2 89.2 97.4 92.7 71.8 90.41 | ELECTRA-large 90.9 90.8 95.0 92.4 88.0 96.9 92.6 69.1 89.46 | ELECTRA-large + RD 91.2 91.3 95.6 92.6 88.9 97.4 92.8 70.5 90.03 | ELECTRA-large + LRD 91.7 92.1 96.2 93.3 89.5 97.6 93.1 71.9 90.68 | Table 1: Performances on natural language understanding tasks of GLUE benchmark. Significance test: the average performance of LR-Drop and R-Drop on the GLUE datasets was t-tested to obtain a p-value of 0.0034 < 0.01.
The experimental methodology for the comparative models follows the approach outlined in previous research. For the STS-B task, we use the Pearson correlation as the evaluation metric, while for CoLA, we employ Matthew\u2019s correlation. The remaining tasks are evaluated based on Accuracy. We report the mean results of 5 runs to ensure statistical reliability. The experiments were conducted using an RTX 3090 GPU. Experimental Results The experimental results are presented in Table 1. When applying LR-Drop to the BERT-base model, we observed improvements in fine-tuning scores across multiple tasks. For the MNLI task, the fine-tuning score increased from 83.8 to 86.1. In the MRPC task, the score improved from 85.3 to 88.8. The QNLI task saw an improvement from 90.8 to 92.3, and the QQP task improved from 91.0 to 91.4. For the RTE task, the score increased from 68.2 to 71.8. The SST-2 task showed an improvement from 92.4 to 93.2, and the STS-B task improved from 89.3 to 90.2. Similarly, the RoBERTa-large and ELECTRA-large models also exhibited performance improvements of more than 1 point per dataset when using LR-Drop. Across the eight datasets, the BERT-base + LRDrop, RoBERTa-large + LR-Drop, and ELECTRAlarge + LR-Drop models achieved impressive average scores of 84.55, 90.41, and 90.68, respectively. LR-Drop significantly improved the performance of the three baseline models: BERT-base, RoBERTalarge, and ELECTRA-large, by 1.70 points, 1.48 points, and 1.22 points, respectively. Furthermore, when compared to the previous method R-Drop, our proposed LR-Drop demonstrated average improvements of 0.49 points, 0.65 points, and 0.68 points, respectively. These results indicate that the performance of our LR-Drop regularization is particularly enhanced when applied to stronger baseline models. Additionally, the effectiveness of LR-Drop is evident across different neural language models, resulting in improved performance in natural language understanding tasks. 4.2. Neural Machine Translation Datasets The datasets used for neural machine translation were obtained from the International Workshop on Spoken Language Translation (IWSLT) competitions. These datasets consist of translations between English and German (En \u2194 De), English and Spanish (En \u2194Es), English and French (En \u2194Fr), and English and Chinese (En \u2194 Zh). Specifically, we used the IWSLT14 dataset for English to German and vice versa, the IWSLT14 dataset for English to Spanish and vice versa, the IWSLT17 dataset for English to French and vice versa, and the IWSLT dataset for English to Chinese and vice versa. The IWSLT dataset contains approximately 170,000 pairs of sentences for training, 7,000 pairs for validation, and 7,000 pairs for testing. These datasets serve as valuable resources for training and evaluating our neural machine translation models. Experimental Settings The experimental configuration outlined in (Wu et al., 2021) is followed in this study. Our benchmark model is the Transformer network proposed by (Vaswani et al., 2017). The specific configurations for the IWSLT translations are specified under the transformer_iwslt_de_en setting. To explore different settings, the coefficients \u03b1, \u03b2, and \u03b3 are dynamically varied within the set {0.1, 0.5, 1}. The implementation of our models is carried out using the Fairseq framework. For evaluating the performance of the models on neural machine translation tasks, we utilize BLEU scores. 
The reported results are the averages obtained from five trial runs to ensure robustness. The experiments are conducted on an RTX 3090 GPU, which serves as the hardware for the experiments. Experimental Results The experimental results are presented in Table 2. Model En to De De to En En to Fr Fr to En En to Zh Zh to En En to Es Es to En Avg | Transformer 28.57 34.64 35.9 36.1 26.3 18.4 39.0 40.6 32.44 | Transformer + RD 30.72 37.25 38.0 38.9 28.1 19.5 41.8 43.2 34.68 | Transformer + LRD 30.95 37.87 38.8 39.6 28.6 20.3 42.6 44.1 35.35 | Table 2: BLEU scores on 8 IWSLT machine translation tasks. Significance test: the average performance of LR-Drop and R-Drop on the 8 datasets was t-tested to obtain a p-value of 0.0037 < 0.01. Model RG-1 RG-2 RG-L | Transformer 39.50 16.06 36.63 | ProphetNet 44.02 21.17 41.30 | BART 44.16 21.28 40.90 | PEGASUS 44.17 21.47 41.11 | BART + R3F 44.38 21.53 41.17 | BART + RD 44.51 21.58 41.24 | BART + LRD 44.58 21.63 41.30 | Table 3: The ROUGE scores, consisting of ROUGE-1, ROUGE-2, and ROUGE-L, are presented for the CNN/Daily Mail summarization dataset. A significance test was carried out, comparing the mean performance of LR-Drop and R-Drop. A t-test yielded a p-value of 0.0064, which is less than 0.01. When applying the Transformer model to our LR-Drop technique, we" + }, + { + "url": "http://arxiv.org/abs/1805.04770v2", + "title": "Born Again Neural Networks", + "abstract": "Knowledge Distillation (KD) consists of transferring \u201cknowledge\u201d from one\nmachine learning model (the teacher) to another (the student). Commonly, the\nteacher is a high-capacity model with formidable performance, while the student\nis more compact. By transferring knowledge, one hopes to benefit from the\nstudent\u2019s compactness, without sacrificing too much performance. We study KD\nfrom a new perspective: rather than compressing models, we train students\nparameterized identically to their teachers. Surprisingly, these Born-Again\nNetworks (BANs), outperform their teachers significantly, both on computer\nvision and language modeling tasks. Our experiments with BANs based on\nDenseNets demonstrate state-of-the-art performance on the CIFAR-10 (3.5%) and\nCIFAR-100 (15.5%) datasets, by validation error. Additional experiments explore\ntwo distillation objectives: (i) Confidence-Weighted by Teacher Max (CWTM) and\n(ii) Dark Knowledge with Permuted Predictions (DKPP). Both methods elucidate\nthe essential components of KD, demonstrating the effect of the teacher outputs\non both predicted and non-predicted classes.", + "authors": "Tommaso Furlanello, Zachary C. Lipton, Michael Tschannen, Laurent Itti, Anima Anandkumar", + "published": "2018-05-12", + "updated": "2018-06-29", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1602.07868v3", + "title": "Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks", + "abstract": "We present weight normalization: a reparameterization of the weight vectors\nin a neural network that decouples the length of those weight vectors from\ntheir direction. By reparameterizing the weights in this way we improve the\nconditioning of the optimization problem and we speed up convergence of\nstochastic gradient descent. Our reparameterization is inspired by batch\nnormalization but does not introduce any dependencies between the examples in a\nminibatch.
This means that our method can also be applied successfully to\nrecurrent models such as LSTMs and to noise-sensitive applications such as deep\nreinforcement learning or generative models, for which batch normalization is\nless well suited. Although our method is much simpler, it still provides much\nof the speed-up of full batch normalization. In addition, the computational\noverhead of our method is lower, permitting more optimization steps to be taken\nin the same amount of time. We demonstrate the usefulness of our method on\napplications in supervised image recognition, generative modelling, and deep\nreinforcement learning.", + "authors": "Tim Salimans, Diederik P. Kingma", + "published": "2016-02-25", + "updated": "2016-06-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.NE" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2001.01900v2", + "title": "Regularization via Structural Label Smoothing", + "abstract": "Regularization is an effective way to promote the generalization performance\nof machine learning models. In this paper, we focus on label smoothing, a form\nof output distribution regularization that prevents overfitting of a neural\nnetwork by softening the ground-truth labels in the training data in an attempt\nto penalize overconfident outputs. Existing approaches typically use\ncross-validation to impose this smoothing, which is uniform across all training\ndata. In this paper, we show that such label smoothing imposes a quantifiable\nbias in the Bayes error rate of the training data, with regions of the feature\nspace with high overlap and low marginal likelihood having a lower bias and\nregions of low overlap and high marginal likelihood having a higher bias. These\ntheoretical results motivate a simple objective function for data-dependent\nsmoothing to mitigate the potential negative consequences of the operation\nwhile maintaining its desirable properties as a regularizer. We call this\napproach Structural Label Smoothing (SLS). We implement SLS and empirically\nvalidate on synthetic, Higgs, SVHN, CIFAR-10, and CIFAR-100 datasets. The\nresults confirm our theoretical insights and demonstrate the effectiveness of\nthe proposed method in comparison to traditional label smoothing.", + "authors": "Weizhi Li, Gautam Dasarathy, Visar Berisha", + "published": "2020-01-07", + "updated": "2020-07-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1909.11764v5", + "title": "FreeLB: Enhanced Adversarial Training for Natural Language Understanding", + "abstract": "Adversarial training, which minimizes the maximal risk for label-preserving\ninput perturbations, has proved to be effective for improving the\ngeneralization of language models. In this work, we propose a novel adversarial\ntraining algorithm, FreeLB, that promotes higher invariance in the embedding\nspace, by adding adversarial perturbations to word embeddings and minimizing\nthe resultant adversarial risk inside different regions around input samples.\nTo validate the effectiveness of the proposed approach, we apply it to\nTransformer-based models for natural language understanding and commonsense\nreasoning tasks. 
Experiments on the GLUE benchmark show that when applied only\nto the finetuning stage, it is able to improve the overall test scores of\nBERT-base model from 78.3 to 79.4, and RoBERTa-large model from 88.5 to 88.8.\nIn addition, the proposed approach achieves state-of-the-art single-model test\naccuracies of 85.44\\% and 67.75\\% on ARC-Easy and ARC-Challenge. Experiments on\nCommonsenseQA benchmark further demonstrate that FreeLB can be generalized and\nboost the performance of RoBERTa-large model on other tasks as well. Code is\navailable at \\url{https://github.com/zhuchen03/FreeLB .", + "authors": "Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, Jingjing Liu", + "published": "2019-09-25", + "updated": "2020-04-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2002.05715v3", + "title": "Self-Distillation Amplifies Regularization in Hilbert Space", + "abstract": "Knowledge distillation introduced in the deep learning context is a method to\ntransfer knowledge from one architecture to another. In particular, when the\narchitectures are identical, this is called self-distillation. The idea is to\nfeed in predictions of the trained model as new target values for retraining\n(and iterate this loop possibly a few times). It has been empirically observed\nthat the self-distilled model often achieves higher accuracy on held out data.\nWhy this happens, however, has been a mystery: the self-distillation dynamics\ndoes not receive any new information about the task and solely evolves by\nlooping over training. To the best of our knowledge, there is no rigorous\nunderstanding of this phenomenon. This work provides the first theoretical\nanalysis of self-distillation. We focus on fitting a nonlinear function to\ntraining data, where the model space is Hilbert space and fitting is subject to\n$\\ell_2$ regularization in this function space. We show that self-distillation\niterations modify regularization by progressively limiting the number of basis\nfunctions that can be used to represent the solution. This implies (as we also\nverify empirically) that while a few rounds of self-distillation may reduce\nover-fitting, further rounds may lead to under-fitting and thus worse\nperformance.", + "authors": "Hossein Mobahi, Mehrdad Farajtabar, Peter L. Bartlett", + "published": "2020-02-13", + "updated": "2020-10-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1406.1831v1", + "title": "Analyzing noise in autoencoders and deep networks", + "abstract": "Autoencoders have emerged as a useful framework for unsupervised learning of\ninternal representations, and a wide variety of apparently conceptually\ndisparate regularization techniques have been proposed to generate useful\nfeatures. Here we extend existing denoising autoencoders to additionally inject\nnoise before the nonlinearity, and at the hidden unit activations. We show that\na wide variety of previous methods, including denoising, contractive, and\nsparse autoencoders, as well as dropout can be interpreted using this\nframework. This noise injection framework reaps practical benefits by providing\na unified strategy to develop new internal representations by designing the\nnature of the injected noise. 
We show that noisy autoencoders outperform\ndenoising autoencoders at the very task of denoising, and are competitive with\nother single-layer techniques on MNIST, and CIFAR-10. We also show that types\nof noise other than dropout improve performance in a deep network through\nsparsifying, decorrelating, and spreading information across representations.", + "authors": "Ben Poole, Jascha Sohl-Dickstein, Surya Ganguli", + "published": "2014-06-06", + "updated": "2014-06-06", + "primary_cat": "cs.NE", + "cats": [ + "cs.NE", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2108.12805v1", + "title": "DropAttack: A Masked Weight Adversarial Training Method to Improve Generalization of Neural Networks", + "abstract": "Adversarial training has been proven to be a powerful regularization method\nto improve the generalization of models. However, current adversarial training\nmethods only attack the original input sample or the embedding vectors, and\ntheir attacks lack coverage and diversity. To further enhance the breadth and\ndepth of attack, we propose a novel masked weight adversarial training method\ncalled DropAttack, which enhances generalization of model by adding\nintentionally worst-case adversarial perturbations to both the input and hidden\nlayers in different dimensions and minimize the adversarial risks generated by\neach layer. DropAttack is a general technique and can be adopt to a wide\nvariety of neural networks with different architectures. To validate the\neffectiveness of the proposed method, we used five public datasets in the\nfields of natural language processing (NLP) and computer vision (CV) for\nexperimental evaluating. We compare the proposed method with other adversarial\ntraining methods and regularization methods, and our method achieves\nstate-of-the-art on all datasets. In addition, Dropattack can achieve the same\nperformance when it use only a half training data compared to other standard\ntraining method. Theoretical analysis reveals that DropAttack can perform\ngradient regularization at random on some of the input and wight parameters of\nthe model. Further visualization experiments show that DropAttack can push the\nminimum risk of the model to a lower and flatter loss landscapes. Our source\ncode is publicly available on https://github.com/nishiwen1214/DropAttack.", + "authors": "Shiwen Ni, Jiawen Li, Hung-Yu Kao", + "published": "2021-08-29", + "updated": "2021-08-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1607.06450v1", + "title": "Layer Normalization", + "abstract": "Training state-of-the-art, deep neural networks is computationally expensive.\nOne way to reduce the training time is to normalize the activities of the\nneurons. A recently introduced technique called batch normalization uses the\ndistribution of the summed input to a neuron over a mini-batch of training\ncases to compute a mean and variance which are then used to normalize the\nsummed input to that neuron on each training case. This significantly reduces\nthe training time in feed-forward neural networks. However, the effect of batch\nnormalization is dependent on the mini-batch size and it is not obvious how to\napply it to recurrent neural networks. 
In this paper, we transpose batch\nnormalization into layer normalization by computing the mean and variance used\nfor normalization from all of the summed inputs to the neurons in a layer on a\nsingle training case. Like batch normalization, we also give each neuron its\nown adaptive bias and gain which are applied after the normalization but before\nthe non-linearity. Unlike batch normalization, layer normalization performs\nexactly the same computation at training and test times. It is also\nstraightforward to apply to recurrent neural networks by computing the\nnormalization statistics separately at each time step. Layer normalization is\nvery effective at stabilizing the hidden state dynamics in recurrent networks.\nEmpirically, we show that layer normalization can substantially reduce the\ntraining time compared with previously published techniques.", + "authors": "Jimmy Lei Ba, Jamie Ryan Kiros, Geoffrey E. Hinton", + "published": "2016-07-21", + "updated": "2016-07-21", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1912.08777v3", + "title": "PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization", + "abstract": "Recent work pre-training Transformers with self-supervised objectives on\nlarge text corpora has shown great success when fine-tuned on downstream NLP\ntasks including text summarization. However, pre-training objectives tailored\nfor abstractive text summarization have not been explored. Furthermore there is\na lack of systematic evaluation across diverse domains. In this work, we\npropose pre-training large Transformer-based encoder-decoder models on massive\ntext corpora with a new self-supervised objective. In PEGASUS, important\nsentences are removed/masked from an input document and are generated together\nas one output sequence from the remaining sentences, similar to an extractive\nsummary. We evaluated our best PEGASUS model on 12 downstream summarization\ntasks spanning news, science, stories, instructions, emails, patents, and\nlegislative bills. Experiments demonstrate it achieves state-of-the-art\nperformance on all 12 downstream datasets measured by ROUGE scores. Our model\nalso shows surprising performance on low-resource summarization, surpassing\nprevious state-of-the-art results on 6 datasets with only 1000 examples.\nFinally we validated our results using human evaluation and show that our model\nsummaries achieve human performance on multiple datasets.", + "authors": "Jingqing Zhang, Yao Zhao, Mohammad Saleh, Peter J. Liu", + "published": "2019-12-18", + "updated": "2020-07-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2006.05065v2", + "title": "Self-Distillation as Instance-Specific Label Smoothing", + "abstract": "It has been recently demonstrated that multi-generational self-distillation\ncan improve generalization. Despite this intriguing observation, reasons for\nthe enhancement remain poorly understood. In this paper, we first demonstrate\nexperimentally that the improved performance of multi-generational\nself-distillation is in part associated with the increasing diversity in\nteacher predictions. With this in mind, we offer a new interpretation for\nteacher-student training as amortized MAP estimation, such that teacher\npredictions enable instance-specific regularization. 
Our framework allows us to\ntheoretically relate self-distillation to label smoothing, a commonly used\ntechnique that regularizes predictive uncertainty, and suggests the importance\nof predictive diversity in addition to predictive uncertainty. We present\nexperimental results using multiple datasets and neural network architectures\nthat, overall, demonstrate the utility of predictive diversity. Finally, we\npropose a novel instance-specific label smoothing technique that promotes\npredictive diversity without the need for a separately trained teacher model.\nWe provide an empirical evaluation of the proposed method, which, we find,\noften outperforms classical label smoothing.", + "authors": "Zhilu Zhang, Mert R. Sabuncu", + "published": "2020-06-09", + "updated": "2020-10-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1909.10351v5", + "title": "TinyBERT: Distilling BERT for Natural Language Understanding", + "abstract": "Language model pre-training, such as BERT, has significantly improved the\nperformances of many natural language processing tasks. However, pre-trained\nlanguage models are usually computationally expensive, so it is difficult to\nefficiently execute them on resource-restricted devices. To accelerate\ninference and reduce model size while maintaining accuracy, we first propose a\nnovel Transformer distillation method that is specially designed for knowledge\ndistillation (KD) of the Transformer-based models. By leveraging this new KD\nmethod, the plenty of knowledge encoded in a large teacher BERT can be\neffectively transferred to a small student Tiny-BERT. Then, we introduce a new\ntwo-stage learning framework for TinyBERT, which performs Transformer\ndistillation at both the pretraining and task-specific learning stages. This\nframework ensures that TinyBERT can capture he general-domain as well as the\ntask-specific knowledge in BERT.\n TinyBERT with 4 layers is empirically effective and achieves more than 96.8%\nthe performance of its teacher BERTBASE on GLUE benchmark, while being 7.5x\nsmaller and 9.4x faster on inference. TinyBERT with 4 layers is also\nsignificantly better than 4-layer state-of-the-art baselines on BERT\ndistillation, with only about 28% parameters and about 31% inference time of\nthem. Moreover, TinyBERT with 6 layers performs on-par with its teacher\nBERTBASE.", + "authors": "Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, Qun Liu", + "published": "2019-09-23", + "updated": "2020-10-16", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1905.08094v1", + "title": "Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation", + "abstract": "Convolutional neural networks have been widely deployed in various\napplication scenarios. In order to extend the applications' boundaries to some\naccuracy-crucial domains, researchers have been investigating approaches to\nboost accuracy through either deeper or wider network structures, which brings\nwith them the exponential increment of the computational and storage cost,\ndelaying the responding time. In this paper, we propose a general training\nframework named self distillation, which notably enhances the performance\n(accuracy) of convolutional neural networks through shrinking the size of the\nnetwork rather than aggrandizing it. 
Different from traditional knowledge\ndistillation - a knowledge transformation methodology among networks, which\nforces student neural networks to approximate the softmax layer outputs of\npre-trained teacher neural networks, the proposed self distillation framework\ndistills knowledge within network itself. The networks are firstly divided into\nseveral sections. Then the knowledge in the deeper portion of the networks is\nsqueezed into the shallow ones. Experiments further prove the generalization of\nthe proposed self distillation framework: enhancement of accuracy at average\nlevel is 2.65%, varying from 0.61% in ResNeXt as minimum to 4.07% in VGG19 as\nmaximum. In addition, it can also provide flexibility of depth-wise scalable\ninference on resource-limited edge devices.Our codes will be released on github\nsoon.", + "authors": "Linfeng Zhang, Jiebo Song, Anni Gao, Jingwei Chen, Chenglong Bao, Kaisheng Ma", + "published": "2019-05-17", + "updated": "2019-05-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2401.06370v1", + "title": "Graph Relation Distillation for Efficient Biomedical Instance Segmentation", + "abstract": "Instance-aware embeddings predicted by deep neural networks have\nrevolutionized biomedical instance segmentation, but its resource requirements\nare substantial. Knowledge distillation offers a solution by transferring\ndistilled knowledge from heavy teacher networks to lightweight yet\nhigh-performance student networks. However, existing knowledge distillation\nmethods struggle to extract knowledge for distinguishing instances and overlook\nglobal relation information. To address these challenges, we propose a graph\nrelation distillation approach for efficient biomedical instance segmentation,\nwhich considers three essential types of knowledge: instance-level features,\ninstance relations, and pixel-level boundaries. We introduce two graph\ndistillation schemes deployed at both the intra-image level and the inter-image\nlevel: instance graph distillation (IGD) and affinity graph distillation (AGD).\nIGD constructs a graph representing instance features and relations,\ntransferring these two types of knowledge by enforcing instance graph\nconsistency. AGD constructs an affinity graph representing pixel relations to\ncapture structured knowledge of instance boundaries, transferring\nboundary-related knowledge by ensuring pixel affinity consistency. Experimental\nresults on a number of biomedical datasets validate the effectiveness of our\napproach, enabling student models with less than $ 1\\%$ parameters and less\nthan $10\\%$ inference time while achieving promising performance compared to\nteacher models.", + "authors": "Xiaoyu Liu, Yueyi Zhang, Zhiwei Xiong, Wei Huang, Bo Hu, Xiaoyan Sun, Feng Wu", + "published": "2024-01-12", + "updated": "2024-01-12", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2403.12040v1", + "title": "Distilling Datasets Into Less Than One Image", + "abstract": "Dataset distillation aims to compress a dataset into a much smaller one so\nthat a model trained on the distilled dataset achieves high accuracy. Current\nmethods frame this as maximizing the distilled classification accuracy for a\nbudget of K distilled images-per-class, where K is a positive integer. 
In this\npaper, we push the boundaries of dataset distillation, compressing the dataset\ninto less than an image-per-class. It is important to realize that the\nmeaningful quantity is not the number of distilled images-per-class but the\nnumber of distilled pixels-per-dataset. We therefore, propose Poster Dataset\nDistillation (PoDD), a new approach that distills the entire original dataset\ninto a single poster. The poster approach motivates new technical solutions for\ncreating training images and learnable labels. Our method can achieve\ncomparable or better performance with less than an image-per-class compared to\nexisting methods that use one image-per-class. Specifically, our method\nestablishes a new state-of-the-art performance on CIFAR-10, CIFAR-100, and\nCUB200 using as little as 0.3 images-per-class.", + "authors": "Asaf Shul, Eliahu Horwitz, Yedid Hoshen", + "published": "2024-03-18", + "updated": "2024-03-18", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0312123v2", + "title": "Many copies may be required for entanglement distillation", + "abstract": "A mixed quantum state shared between two parties is said to be distillable\nif, by means of a protocol involving only local quantum operations and\nclassical communication, the two parties can transform some number of copies of\nthat state into a single shared pair of qubits having high fidelity with a\nmaximally entangled state state. In this paper it is proved that there exist\nstates that are distillable, but for which an arbitrarily large number of\ncopies is required before any distillation procedure can produce a shared pair\nof qubits with even a small amount of entanglement. Specifically, for every\npositive integer n there exists a state that is distillable, but given n or\nfewer copies of that state every distillation procedure outputting a single\nshared pair of qubits will output those qubits in a separable state.\nEssentially all previous examples of states proved to be distillable were such\nthat some distillation procedure could output an entangled pair of qubits given\na single copy of the state in question.", + "authors": "John Watrous", + "published": "2003-12-15", + "updated": "2004-05-31", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1912.12630v1", + "title": "Real-time Policy Distillation in Deep Reinforcement Learning", + "abstract": "Policy distillation in deep reinforcement learning provides an effective way\nto transfer control policies from a larger network to a smaller untrained\nnetwork without a significant degradation in performance. However, policy\ndistillation is underexplored in deep reinforcement learning, and existing\napproaches are computationally inefficient, resulting in a long distillation\ntime. In addition, the effectiveness of the distillation process is still\nlimited to the model capacity. We propose a new distillation mechanism, called\nreal-time policy distillation, in which training the teacher model and\ndistilling the policy to the student model occur simultaneously. Accordingly,\nthe teacher's latest policy is transferred to the student model in real time.\nThis reduces the distillation time to half the original time or even less and\nalso makes it possible for extremely small student models to learn skills at\nthe expert level. 
We evaluated the proposed algorithm in the Atari 2600 domain.\nThe results show that our approach can achieve full distillation in most games,\neven with compression ratios up to 1.7%.", + "authors": "Yuxiang Sun, Pooyan Fazli", + "published": "2019-12-29", + "updated": "2019-12-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0202165v1", + "title": "Distinguishing locally of quantum states and the distillation of entanglement", + "abstract": "This paper try to probe the relation of distinguishing locally and\ndistillation of entanglement. The distinguishing information (DI) and the\nmaximal distinguishing information (MDI) of a set of pure states are defined.\nThe interpretation of distillation of entanglement in term of information is\ngiven. The relation between the maximal distinguishing information and\ndistillable entanglement is gained. As a application of this relation the\ndistillable entanglement of Bell-diagonal states is present.", + "authors": "ping-xing. chen, Cheng-zu Li", + "published": "2002-02-27", + "updated": "2002-02-27", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2305.05233v1", + "title": "DynamicKD: An Effective Knowledge Distillation via Dynamic Entropy Correction-Based Distillation for Gap Optimizing", + "abstract": "The knowledge distillation uses a high-performance teacher network to guide\nthe student network. However, the performance gap between the teacher and\nstudent networks can affect the student's training. This paper proposes a novel\nknowledge distillation algorithm based on dynamic entropy correction to reduce\nthe gap by adjusting the student instead of the teacher. Firstly, the effect of\nchanging the output entropy (short for output information entropy) in the\nstudent on the distillation loss is analyzed in theory. This paper shows that\ncorrecting the output entropy can reduce the gap. Then, a knowledge\ndistillation algorithm based on dynamic entropy correction is created, which\ncan correct the output entropy in real-time with an entropy controller updated\ndynamically by the distillation loss. The proposed algorithm is validated on\nthe CIFAR100 and ImageNet. The comparison with various state-of-the-art\ndistillation algorithms shows impressive results, especially in the experiment\non the CIFAR100 regarding teacher-student pair resnet32x4-resnet8x4. The\nproposed algorithm raises 2.64 points over the traditional distillation\nalgorithm and 0.87 points over the state-of-the-art algorithm CRD in\nclassification accuracy, demonstrating its effectiveness and efficiency.", + "authors": "Songling Zhu, Ronghua Shang, Bo Yuan, Weitong Zhang, Yangyang Li, Licheng Jiao", + "published": "2023-05-09", + "updated": "2023-05-09", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.01392v1", + "title": "No-go theorem for probabilistic one-way secret-key distillation", + "abstract": "The probabilistic one-way distillable secret key is equal to the largest\nexpected rate at which perfect secret key bits can be probabilistically\ndistilled from a bipartite state by means of local operations and one-way\nclassical communication. Here we define the set of super two-extendible states\nand prove that an arbitrary state in this set cannot be used for probabilistic\none-way secret-key distillation. 
This broad class of states includes both\nerased states and all full-rank states. Comparing the probabilistic one-way\ndistillable secret key with the more commonly studied approximate one-way\ndistillable secret key, our results demonstrate an extreme gap between them for\nmany states of interest, with the approximate one-way distillable secret key\nbeing much larger. Our findings naturally extend to probabilistic one-way\nentanglement distillation, with similar conclusions.", + "authors": "Vishal Singh, Mark M. Wilde", + "published": "2024-04-01", + "updated": "2024-04-01", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph", + "cs.IT", + "math.IT" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2304.05563v2", + "title": "Entanglement distillation in terms of Schmidt rank and matrix rank", + "abstract": "Entanglement distillation is a key task in quantum-information processing. In\nthis paper, we distill non-positive-partial-transpose (NPT) bipartite states of\nsome given Schmidt rank and matrix rank. We show that all bipartite states of\nSchmidt rank two are locally equivalent to classical-classical states, and all\nbipartite states of Schmidt rank three are 1-undistillable. Subsequently, we\nshow that low-rank B-irreducible NPT states are distillable for large-rank\nreduced density operators by proving low-rank B-irreducible NPT state whose\nrange contains a product vector is distillable. Eventually, we present an\nequivalent condition to distill $M\\times N$ bipartite states of rank\n$\\max\\{M,N\\}+1$.", + "authors": "Tianyi Ding, Lin Chen", + "published": "2023-04-12", + "updated": "2023-07-06", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2307.08436v1", + "title": "DOT: A Distillation-Oriented Trainer", + "abstract": "Knowledge distillation transfers knowledge from a large model to a small one\nvia task and distillation losses. In this paper, we observe a trade-off between\ntask and distillation losses, i.e., introducing distillation loss limits the\nconvergence of task loss. We believe that the trade-off results from the\ninsufficient optimization of distillation loss. The reason is: The teacher has\na lower task loss than the student, and a lower distillation loss drives the\nstudent more similar to the teacher, then a better-converged task loss could be\nobtained. To break the trade-off, we propose the Distillation-Oriented Trainer\n(DOT). DOT separately considers gradients of task and distillation losses, then\napplies a larger momentum to distillation loss to accelerate its optimization.\nWe empirically prove that DOT breaks the trade-off, i.e., both losses are\nsufficiently optimized. Extensive experiments validate the superiority of DOT.\nNotably, DOT achieves a +2.59% accuracy improvement on ImageNet-1k for the\nResNet50-MobileNetV1 pair. 
Conclusively, DOT greatly benefits the student's\noptimization properties in terms of loss convergence and model generalization.\nCode will be made publicly available.", + "authors": "Borui Zhao, Quan Cui, Renjie Song, Jiajun Liang", + "published": "2023-07-17", + "updated": "2023-07-17", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1807.04705v2", + "title": "Non-asymptotic assisted distillation of quantum coherence", + "abstract": "We characterize the operational task of environment-assisted distillation of\nquantum coherence under different sets of free operations when only a finite\nsupply of copies of a given state is available. We first evaluate the one-shot\nassisted distillable coherence exactly, and introduce a semidefinite\nprogramming bound on it in terms of a smooth entropic quantity. We prove the\nbound to be tight for all systems in dimensions 2 and 3, which allows us to\nobtain computable expressions for the one-shot rate of distillation, establish\nan analytical expression for the best achievable fidelity of assisted\ndistillation for any finite number of copies, and fully solve the problem of\nasymptotic zero-error assisted distillation for qubit and qutrit systems. Our\ncharacterization shows that all relevant sets of free operations in the\nresource theory of coherence have exactly the same power in the task of\none-shot assisted coherence distillation, and furthermore resolves a conjecture\nregarding the additivity of coherence of assistance in dimension 3.", + "authors": "Bartosz Regula, Ludovico Lami, Alexander Streltsov", + "published": "2018-07-12", + "updated": "2018-10-16", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.06170v1", + "title": "CLIP-Embed-KD: Computationally Efficient Knowledge Distillation Using Embeddings as Teachers", + "abstract": "Contrastive Language-Image Pre-training (CLIP) has been shown to improve\nzero-shot generalization capabilities of language and vision models. In this\npaper, we extend CLIP for efficient knowledge distillation, by utilizing\nembeddings as teachers. Typical knowledge distillation frameworks require\nrunning forward passes through a teacher model, which is often prohibitive in\nthe case of billion or trillion parameter teachers. In these cases, using only\nthe embeddings of the teacher models to guide the distillation can yield\nsignificant computational savings. Our preliminary findings show that\nCLIP-based knowledge distillation with embeddings can outperform full scale\nknowledge distillation using $9\\times$ less memory and $8\\times$ less training\ntime. Code available at: https://github.com/lnairGT/CLIP-Distillation/", + "authors": "Lakshmi Nair", + "published": "2024-04-09", + "updated": "2024-04-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.14827v1", + "title": "Sentence-Level or Token-Level? A Comprehensive Study on Knowledge Distillation", + "abstract": "Knowledge distillation, transferring knowledge from a teacher model to a\nstudent model, has emerged as a powerful technique in neural machine\ntranslation for compressing models or simplifying training targets. Knowledge\ndistillation encompasses two primary methods: sentence-level distillation and\ntoken-level distillation. 
In sentence-level distillation, the student model is\ntrained to align with the output of the teacher model, which can alleviate the\ntraining difficulty and give student model a comprehensive understanding of\nglobal structure. Differently, token-level distillation requires the student\nmodel to learn the output distribution of the teacher model, facilitating a\nmore fine-grained transfer of knowledge. Studies have revealed divergent\nperformances between sentence-level and token-level distillation across\ndifferent scenarios, leading to the confusion on the empirical selection of\nknowledge distillation methods. In this study, we argue that token-level\ndistillation, with its more complex objective (i.e., distribution), is better\nsuited for ``simple'' scenarios, while sentence-level distillation excels in\n``complex'' scenarios. To substantiate our hypothesis, we systematically\nanalyze the performance of distillation methods by varying the model size of\nstudent models, the complexity of text, and the difficulty of decoding\nprocedure. While our experimental results validate our hypothesis, defining the\ncomplexity level of a given scenario remains a challenging task. So we further\nintroduce a novel hybrid method that combines token-level and sentence-level\ndistillation through a gating mechanism, aiming to leverage the advantages of\nboth individual methods. Experiments demonstrate that the hybrid method\nsurpasses the performance of token-level or sentence-level distillation methods\nand the previous works by a margin, demonstrating the effectiveness of the\nproposed hybrid method.", + "authors": "Jingxuan Wei, Linzhuang Sun, Yichong Leng, Xu Tan, Bihui Yu, Ruifeng Guo", + "published": "2024-04-23", + "updated": "2024-04-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0001084v2", + "title": "Distillation of GHZ states by selective information manipulation", + "abstract": "Methods for distilling maximally entangled tripartite (GHZ) states from\narbitrary entangled tripartite pure states are described. These techniques work\nfor virtually any input state. Each technique has two stages which we call\nprimary and secondary distillation. Primary distillation produces a GHZ state\nwith some probability, so that when applied to an ensemble of systems, a\ncertain percentage is discarded. Secondary distillation produces further GHZs\nfrom the discarded systems. These protocols are developed with the help of an\napproach to quantum information theory based on absolutely selective\ninformation, which has other potential applications.", + "authors": "Oliver Cohen, Todd A. 
Brun", + "published": "2000-01-23", + "updated": "2000-02-02", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1607.04311v1", + "title": "Defensive Distillation is Not Robust to Adversarial Examples", + "abstract": "We show that defensive distillation is not secure: it is no more resistant to\ntargeted misclassification attacks than unprotected neural networks.", + "authors": "Nicholas Carlini, David Wagner", + "published": "2016-07-14", + "updated": "2016-07-14", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2304.04615v1", + "title": "A Survey on Recent Teacher-student Learning Studies", + "abstract": "Knowledge distillation is a method of transferring the knowledge from a\ncomplex deep neural network (DNN) to a smaller and faster DNN, while preserving\nits accuracy. Recent variants of knowledge distillation include teaching\nassistant distillation, curriculum distillation, mask distillation, and\ndecoupling distillation, which aim to improve the performance of knowledge\ndistillation by introducing additional components or by changing the learning\nprocess. Teaching assistant distillation involves an intermediate model called\nthe teaching assistant, while curriculum distillation follows a curriculum\nsimilar to human education. Mask distillation focuses on transferring the\nattention mechanism learned by the teacher, and decoupling distillation\ndecouples the distillation loss from the task loss. Overall, these variants of\nknowledge distillation have shown promising results in improving the\nperformance of knowledge distillation.", + "authors": "Minghong Gao", + "published": "2023-04-10", + "updated": "2023-04-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/0704.3661v1", + "title": "Complementarity, distillable secret key, and distillable entanglement", + "abstract": "We consider controllability of two conjugate observables Z and X by two\nparties with classical communication. The ability is specified by two\nalternative tasks, (i) agreement on Z and (ii) preparation of an eigenstate of\nX with use of an extra communication channel. We prove that their feasibility\nis equivalent to that of key distillation if the extra channel is quantum, and\nto that of entanglement distillation if it is classical. This clarifies the\ndistinction between two entanglement measures, distillable key and distillable\nentanglement.", + "authors": "Masato Koashi", + "published": "2007-04-27", + "updated": "2007-04-27", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2307.12732v1", + "title": "CLIP-KD: An Empirical Study of Distilling CLIP Models", + "abstract": "CLIP has become a promising language-supervised visual pre-training framework\nand achieves excellent performance over a wide range of tasks. This paper aims\nto distill small CLIP models supervised by a large teacher CLIP model. We\npropose several distillation strategies, including relation, feature, gradient\nand contrastive paradigm, to examine the impact on CLIP distillation. We show\nthat the simplest feature mimicry with MSE loss performs best. Moreover,\ninteractive contrastive learning and relation-based distillation are also\ncritical in performance improvement. 
We apply the unified method to distill\nseveral student networks trained on 15 million (image, text) pairs.\nDistillation improves the student CLIP models consistently over zero-shot\nImageNet classification and cross-modal retrieval benchmarks. We hope our\nempirical study will become an important baseline for future CLIP distillation\nresearch. The code is available at \\url{https://github.com/winycg/CLIP-KD}.", + "authors": "Chuanguang Yang, Zhulin An, Libo Huang, Junyu Bi, Xinqiang Yu, Han Yang, Yongjun Xu", + "published": "2023-07-24", + "updated": "2023-07-24", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0108029v1", + "title": "Distillability, Bell inequalities and multiparticle bound entanglement", + "abstract": "We study the relation between violation of Bell inequalities and\ndistillability properties of quantum states. Recently, D\\\"ur has shown that\nthere are some multiparticle bound entangled states, non-separable and\nnon-distillable, that violate a Bell inequality. We prove that for all the\nstates violating this inequality there exist at least one splitting of the\nparties into two groups such that some pure-state entanglement can be\ndistilled, obtaining a connection between Bell inequalities and bipartite\ndistillable entanglement.", + "authors": "A. Acin", + "published": "2001-08-07", + "updated": "2001-08-07", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2208.08840v1", + "title": "Mind the Gap in Distilling StyleGANs", + "abstract": "StyleGAN family is one of the most popular Generative Adversarial Networks\n(GANs) for unconditional generation. Despite its impressive performance, its\nhigh demand on storage and computation impedes their deployment on\nresource-constrained devices. This paper provides a comprehensive study of\ndistilling from the popular StyleGAN-like architecture. Our key insight is that\nthe main challenge of StyleGAN distillation lies in the output discrepancy\nissue, where the teacher and student model yield different outputs given the\nsame input latent code. Standard knowledge distillation losses typically fail\nunder this heterogeneous distillation scenario. We conduct thorough analysis\nabout the reasons and effects of this discrepancy issue, and identify that the\nmapping network plays a vital role in determining semantic information of\ngenerated images. Based on this finding, we propose a novel initialization\nstrategy for the student model, which can ensure the output consistency to the\nmaximum extent. To further enhance the semantic consistency between the teacher\nand student model, we present a latent-direction-based distillation loss that\npreserves the semantic relations in latent space. 
Extensive experiments\ndemonstrate the effectiveness of our approach in distilling StyleGAN2 and\nStyleGAN3, outperforming existing GAN distillation methods by a large margin.", + "authors": "Guodong Xu, Yuenan Hou, Ziwei Liu, Chen Change Loy", + "published": "2022-08-18", + "updated": "2022-08-18", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2301.01615v2", + "title": "StereoDistill: Pick the Cream from LiDAR for Distilling Stereo-based 3D Object Detection", + "abstract": "In this paper, we propose a cross-modal distillation method named\nStereoDistill to narrow the gap between the stereo and LiDAR-based approaches\nvia distilling the stereo detectors from the superior LiDAR model at the\nresponse level, which is usually overlooked in 3D object detection\ndistillation. The key designs of StereoDistill are: the X-component Guided\nDistillation~(XGD) for regression and the Cross-anchor Logit Distillation~(CLD)\nfor classification. In XGD, instead of empirically adopting a threshold to\nselect the high-quality teacher predictions as soft targets, we decompose the\npredicted 3D box into sub-components and retain the corresponding part for\ndistillation if the teacher component pilot is consistent with ground truth to\nlargely boost the number of positive predictions and alleviate the mimicking\ndifficulty of the student model. For CLD, we aggregate the probability\ndistribution of all anchors at the same position to encourage the highest\nprobability anchor rather than individually distill the distribution at the\nanchor level. Finally, our StereoDistill achieves state-of-the-art results for\nstereo-based 3D detection on the KITTI test benchmark and extensive experiments\non KITTI and Argoverse Dataset validate the effectiveness.", + "authors": "Zhe Liu, Xiaoqing Ye, Xiao Tan, Errui Ding, Xiang Bai", + "published": "2023-01-04", + "updated": "2023-01-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2308.07719v1", + "title": "The coherent measurement cost of coherence distillation", + "abstract": "Quantum coherence is an indispensable resource for quantum technological\napplications. It is known to be distillable from a noisy form using operations\nthat cannot create coherence. However, distillation exacts a hidden coherent\nmeasurement cost, whose extent has not previously been estimated. Here we show\nthat this cost (quantified by an equivalent number of Hadamard measurements) is\nrelated to what we call the irretrievable coherence: the difference between the\ncoherence of formation and the distillable coherence. We conjecture (and make\npartial progress towards proving) that when distilling from many copies of a\ngiven noisy coherent state, the coherent measurement cost scales extensively in\nthe number of copies, at an asymptotic rate exactly equalling the input's\nirretrievable coherence. This cost applies to any application whereof coherence\ndistillation is an incidental outcome (e.g. 
incoherent randomness extraction),\nbut the implications are more dramatic if pure coherence is the only desired\noutcome: the measurement cost may often be higher than the distilled yield, in\nwhich case coherence should rather be prepared afresh than distilled from a\nnoisy input.", + "authors": "Varun Narasimhachar", + "published": "2023-08-15", + "updated": "2023-08-15", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2304.14800v1", + "title": "Multi-to-Single Knowledge Distillation for Point Cloud Semantic Segmentation", + "abstract": "3D point cloud semantic segmentation is one of the fundamental tasks for\nenvironmental understanding. Although significant progress has been made in\nrecent years, the performance of classes with few examples or few points is\nstill far from satisfactory. In this paper, we propose a novel multi-to-single\nknowledge distillation framework for the 3D point cloud semantic segmentation\ntask to boost the performance of those hard classes. Instead of fusing all the\npoints of multi-scans directly, only the instances that belong to the\npreviously defined hard classes are fused. To effectively and sufficiently\ndistill valuable knowledge from multi-scans, we leverage a multilevel\ndistillation framework, i.e., feature representation distillation, logit\ndistillation, and affinity distillation. We further develop a novel\ninstance-aware affinity distillation algorithm for capturing high-level\nstructural knowledge to enhance the distillation efficacy for hard classes.\nFinally, we conduct experiments on the SemanticKITTI dataset, and the results\non both the validation and test sets demonstrate that our method yields\nsubstantial improvements compared with the baseline method. The code is\navailable at \\Url{https://github.com/skyshoumeng/M2SKD}.", + "authors": "Shoumeng Qiu, Feng Jiang, Haiqiang Zhang, Xiangyang Xue, Jian Pu", + "published": "2023-04-28", + "updated": "2023-04-28", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/9809078v2", + "title": "A rigorous treatment of distillable entanglement", + "abstract": "The notion of distillable entanglement is one of the fundamental concepts of\nquantum information theory. Unfortunately, there is an apparent mismatch\nbetween the intuitive and rigorous definitions of distillable entanglement. To\nbe precise, the existing rigorous definitions impose the constraint that the\ndistilation protocol produce an output of constant dimension. It is therefore\nconceivable that this unnecessary constraint might have led to underestimation\nof the true distillable entanglement. We give a new definition of distillable\nentanglement which removes this constraint, but could conceivably overestimate\nthe true value. Since the definitions turn out to be equivalent, neither\nunderestimation nor overestimation is possible, and both definitions are\narguably correct", + "authors": "Eric M. Rains", + "published": "1998-09-24", + "updated": "1998-10-12", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1707.02573v1", + "title": "Distilling Entanglement with Noisy Operations", + "abstract": "Entanglement distillation is a fundamental task in quantum information\nprocessing. 
It not only extracts entanglement out of corrupted systems but also\nleads to protecting systems of interest against intervention with environment.\nIn this work, we consider a realistic scenario of entanglement distillation\nwhere noisy quantum operations are applied. In particular, the two-way\ndistillation protocol that tolerates the highest error rate is considered. We\nshow that among all types of noise there are only four equivalence classes\naccording to the distillability condition. Since the four classes are connected\nby local unitary transformations, our results can be used to improve\nentanglement distillability in practice when entanglement distillation is\nperformed in a realistic setting.", + "authors": "Jinho Chang, Joonwoo Bae, Younghun Kwon", + "published": "2017-07-09", + "updated": "2017-07-09", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2308.14286v2", + "title": "Bridging Cross-task Protocol Inconsistency for Distillation in Dense Object Detection", + "abstract": "Knowledge distillation (KD) has shown potential for learning compact models\nin dense object detection. However, the commonly used softmax-based\ndistillation ignores the absolute classification scores for individual\ncategories. Thus, the optimum of the distillation loss does not necessarily\nlead to the optimal student classification scores for dense object detectors.\nThis cross-task protocol inconsistency is critical, especially for dense object\ndetectors, since the foreground categories are extremely imbalanced. To address\nthe issue of protocol differences between distillation and classification, we\npropose a novel distillation method with cross-task consistent protocols,\ntailored for the dense object detection. For classification distillation, we\naddress the cross-task protocol inconsistency problem by formulating the\nclassification logit maps in both teacher and student models as multiple\nbinary-classification maps and applying a binary-classification distillation\nloss to each map. For localization distillation, we design an IoU-based\nLocalization Distillation Loss that is free from specific network structures\nand can be compared with existing localization distillation losses. Our\nproposed method is simple but effective, and experimental results demonstrate\nits superiority over existing methods. Code is available at\nhttps://github.com/TinyTigerPan/BCKD.", + "authors": "Longrong Yang, Xianpan Zhou, Xuewei Li, Liang Qiao, Zheyang Li, Ziwei Yang, Gaoang Wang, Xi Li", + "published": "2023-08-28", + "updated": "2024-03-12", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2004.03097v1", + "title": "Towards Non-task-specific Distillation of BERT via Sentence Representation Approximation", + "abstract": "Recently, BERT has become an essential ingredient of various NLP deep models\ndue to its effectiveness and universal-usability. However, the online\ndeployment of BERT is often blocked by its large-scale parameters and high\ncomputational cost. There are plenty of studies showing that the knowledge\ndistillation is efficient in transferring the knowledge from BERT into the\nmodel with a smaller size of parameters. Nevertheless, current BERT\ndistillation approaches mainly focus on task-specified distillation, such\nmethodologies lead to the loss of the general semantic knowledge of BERT for\nuniversal-usability. 
In this paper, we propose a sentence representation\napproximating oriented distillation framework that can distill the pre-trained\nBERT into a simple LSTM based model without specifying tasks. Consistent with\nBERT, our distilled model is able to perform transfer learning via fine-tuning\nto adapt to any sentence-level downstream task. Besides, our model can further\ncooperate with task-specific distillation procedures. The experimental results\non multiple NLP tasks from the GLUE benchmark show that our approach\noutperforms other task-specific distillation methods or even much larger\nmodels, i.e., ELMO, with efficiency well-improved.", + "authors": "Bowen Wu, Huan Zhang, Mengyuan Li, Zongsheng Wang, Qihang Feng, Junhong Huang, Baoxun Wang", + "published": "2020-04-07", + "updated": "2020-04-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2403.03846v1", + "title": "On the Effectiveness of Distillation in Mitigating Backdoors in Pre-trained Encoder", + "abstract": "In this paper, we study a defense against poisoned encoders in SSL called\ndistillation, which is a defense used in supervised learning originally.\nDistillation aims to distill knowledge from a given model (a.k.a the teacher\nnet) and transfer it to another (a.k.a the student net). Now, we use it to\ndistill benign knowledge from poisoned pre-trained encoders and transfer it to\na new encoder, resulting in a clean pre-trained encoder. In particular, we\nconduct an empirical study on the effectiveness and performance of distillation\nagainst poisoned encoders. Using two state-of-the-art backdoor attacks against\npre-trained image encoders and four commonly used image classification\ndatasets, our experimental results show that distillation can reduce attack\nsuccess rate from 80.87% to 27.51% while suffering a 6.35% loss in accuracy.\nMoreover, we investigate the impact of three core components of distillation on\nperformance: teacher net, student net, and distillation loss. By comparing 4\ndifferent teacher nets, 3 student nets, and 6 distillation losses, we find that\nfine-tuned teacher nets, warm-up-training-based student nets, and\nattention-based distillation loss perform best, respectively.", + "authors": "Tingxu Han, Shenghan Huang, Ziqi Ding, Weisong Sun, Yebo Feng, Chunrong Fang, Jun Li, Hanwei Qian, Cong Wu, Quanjun Zhang, Yang Liu, Zhenyu Chen", + "published": "2024-03-06", + "updated": "2024-03-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2205.16004v3", + "title": "What Knowledge Gets Distilled in Knowledge Distillation?", + "abstract": "Knowledge distillation aims to transfer useful information from a teacher\nnetwork to a student network, with the primary goal of improving the student's\nperformance for the task at hand. Over the years, there has a been a deluge of\nnovel techniques and use cases of knowledge distillation. Yet, despite the\nvarious improvements, there seems to be a glaring gap in the community's\nfundamental understanding of the process. Specifically, what is the knowledge\nthat gets distilled in knowledge distillation? In other words, in what ways\ndoes the student become similar to the teacher? Does it start to localize\nobjects in the same way? Does it get fooled by the same adversarial samples?\nDoes its data invariance properties become similar? Our work presents a\ncomprehensive study to try to answer these questions. 
We show that existing\nmethods can indeed indirectly distill these properties beyond improving task\nperformance. We further study why knowledge distillation might work this way,\nand show that our findings have practical implications as well.", + "authors": "Utkarsh Ojha, Yuheng Li, Anirudh Sundara Rajan, Yingyu Liang, Yong Jae Lee", + "published": "2022-05-31", + "updated": "2023-11-06", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2312.00739v1", + "title": "Adversarial Score Distillation: When score distillation meets GAN", + "abstract": "Existing score distillation methods are sensitive to classifier-free guidance\n(CFG) scale: manifested as over-smoothness or instability at small CFG scales,\nwhile over-saturation at large ones. To explain and analyze these issues, we\nrevisit the derivation of Score Distillation Sampling (SDS) and decipher\nexisting score distillation with the Wasserstein Generative Adversarial Network\n(WGAN) paradigm. With the WGAN paradigm, we find that existing score\ndistillation either employs a fixed sub-optimal discriminator or conducts\nincomplete discriminator optimization, resulting in the scale-sensitive issue.\nWe propose the Adversarial Score Distillation (ASD), which maintains an\noptimizable discriminator and updates it using the complete optimization\nobjective. Experiments show that the proposed ASD performs favorably in 2D\ndistillation and text-to-3D tasks against existing methods. Furthermore, to\nexplore the generalization ability of our WGAN paradigm, we extend ASD to the\nimage editing task, which achieves competitive results. The project page and\ncode are at https://github.com/2y7c3/ASD.", + "authors": "Min Wei, Jingkai Zhou, Junyao Sun, Xuesong Zhang", + "published": "2023-12-01", + "updated": "2023-12-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2006.01683v1", + "title": "Channel Distillation: Channel-Wise Attention for Knowledge Distillation", + "abstract": "Knowledge distillation is to transfer the knowledge from the data learned by\nthe teacher network to the student network, so that the student has the\nadvantage of less parameters and less calculations, and the accuracy is close\nto the teacher. In this paper, we propose a new distillation method, which\ncontains two transfer distillation strategies and a loss decay strategy. The\nfirst transfer strategy is based on channel-wise attention, called Channel\nDistillation (CD). CD transfers the channel information from the teacher to the\nstudent. The second is Guided Knowledge Distillation (GKD). Unlike Knowledge\nDistillation (KD), which allows the student to mimic each sample's prediction\ndistribution of the teacher, GKD only enables the student to mimic the correct\noutput of the teacher. The last part is Early Decay Teacher (EDT). During the\ntraining process, we gradually decay the weight of the distillation loss. The\npurpose is to enable the student to gradually control the optimization rather\nthan the teacher. Our proposed method is evaluated on ImageNet and CIFAR100. On\nImageNet, we achieve 27.68% of top-1 error with ResNet18, which outperforms\nstate-of-the-art methods. On CIFAR100, we achieve surprising result that the\nstudent outperforms the teacher. 
Code is available at\nhttps://github.com/zhouzaida/channel-distillation.", + "authors": "Zaida Zhou, Chaoran Zhuge, Xinwei Guan, Wen Liu", + "published": "2020-06-02", + "updated": "2020-06-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2302.14643v1", + "title": "Graph-based Knowledge Distillation: A survey and experimental evaluation", + "abstract": "Graph, such as citation networks, social networks, and transportation\nnetworks, are prevalent in the real world. Graph Neural Networks (GNNs) have\ngained widespread attention for their robust expressiveness and exceptional\nperformance in various graph applications. However, the efficacy of GNNs is\nheavily reliant on sufficient data labels and complex network models, with the\nformer obtaining hardly and the latter computing costly. To address the labeled\ndata scarcity and high complexity of GNNs, Knowledge Distillation (KD) has been\nintroduced to enhance existing GNNs. This technique involves transferring the\nsoft-label supervision of the large teacher model to the small student model\nwhile maintaining prediction performance. This survey offers a comprehensive\noverview of Graph-based Knowledge Distillation methods, systematically\ncategorizing and summarizing them while discussing their limitations and future\ndirections. This paper first introduces the background of graph and KD. It then\nprovides a comprehensive summary of three types of Graph-based Knowledge\nDistillation methods, namely Graph-based Knowledge Distillation for deep neural\nnetworks (DKD), Graph-based Knowledge Distillation for GNNs (GKD), and\nSelf-Knowledge Distillation based Graph-based Knowledge Distillation (SKD).\nEach type is further divided into knowledge distillation methods based on the\noutput layer, middle layer, and constructed graph. Subsequently, various\nalgorithms' ideas are analyzed and compared, concluding with the advantages and\ndisadvantages of each algorithm supported by experimental results. In addition,\nthe applications of graph-based knowledge distillation in CV, NLP, RS, and\nother fields are listed. Finally, the graph-based knowledge distillation is\nsummarized and prospectively discussed. We have also released related resources\nat https://github.com/liujing1023/Graph-based-Knowledge-Distillation.", + "authors": "Jing Liu, Tongya Zheng, Guanzheng Zhang, Qinfen Hao", + "published": "2023-02-27", + "updated": "2023-02-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1108.0537v2", + "title": "Isotropic non-locality cannot be distilled", + "abstract": "We investigate non-locality distillation protocols for isotropic\ncorrelations. These correlations are the hardest instances which respect to\ndistillability and only partial results are known about their behaviour under\nnon-locality distillation protocols. We completely resolve this issue by\nproving that non-locality distillation is impossible for all non-local\nisotropic correlations.", + "authors": "Dejan D. 
Dukaric", + "published": "2011-08-02", + "updated": "2011-09-20", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1910.02551v3", + "title": "Soft-Label Dataset Distillation and Text Dataset Distillation", + "abstract": "Dataset distillation is a method for reducing dataset sizes by learning a\nsmall number of synthetic samples containing all the information of a large\ndataset. This has several benefits like speeding up model training, reducing\nenergy consumption, and reducing required storage space. Currently, each\nsynthetic sample is assigned a single `hard' label, and also, dataset\ndistillation can currently only be used with image data.\n We propose to simultaneously distill both images and their labels, thus\nassigning each synthetic sample a `soft' label (a distribution of labels). Our\nalgorithm increases accuracy by 2-4% over the original algorithm for several\nimage classification tasks. Using `soft' labels also enables distilled datasets\nto consist of fewer samples than there are classes as each sample can encode\ninformation for multiple classes. For example, training a LeNet model with 10\ndistilled images (one per class) results in over 96% accuracy on MNIST, and\nalmost 92% accuracy when trained on just 5 distilled images.\n We also extend the dataset distillation algorithm to distill sequential\ndatasets including texts. We demonstrate that text distillation outperforms\nother methods across multiple datasets. For example, models attain almost their\noriginal accuracy on the IMDB sentiment analysis task using just 20 distilled\nsentences.\n Our code can be found at\n$\\href{https://github.com/ilia10000/dataset-distillation}{\\text{https://github.com/ilia10000/dataset-distillation}}$.", + "authors": "Ilia Sucholutsky, Matthias Schonlau", + "published": "2019-10-06", + "updated": "2020-05-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2401.11365v1", + "title": "Confidence Preservation Property in Knowledge Distillation Abstractions", + "abstract": "Social media platforms prevent malicious activities by detecting harmful\ncontent of posts and comments. To that end, they employ large-scale deep neural\nnetwork language models for sentiment analysis and content understanding. Some\nmodels, like BERT, are complex, and have numerous parameters, which makes them\nexpensive to operate and maintain. To overcome these deficiencies, industry\nexperts employ a knowledge distillation compression technique, where a\ndistilled model is trained to reproduce the classification behavior of the\noriginal model. The distillation processes terminates when the distillation\nloss function reaches the stopping criteria. This function is mainly designed\nto ensure that the original and the distilled models exhibit alike\nclassification behaviors. However, besides classification accuracy, there are\nadditional properties of the original model that the distilled model should\npreserve to be considered as an appropriate abstraction. 
In this work, we\nexplore whether distilled TinyBERT models preserve confidence values of the\noriginal BERT models, and investigate how this confidence preservation property\ncould guide tuning hyperparameters of the distillation process.", + "authors": "Dmitry Vengertsev, Elena Sherman", + "published": "2024-01-21", + "updated": "2024-01-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2302.05637v2", + "title": "Dual Relation Knowledge Distillation for Object Detection", + "abstract": "Knowledge distillation is an effective method for model compression. However,\nit is still a challenging topic to apply knowledge distillation to detection\ntasks. There are two key points resulting in poor distillation performance for\ndetection tasks. One is the serious imbalance between foreground and background\nfeatures, another one is that small object lacks enough feature representation.\nTo solve the above issues, we propose a new distillation method named dual\nrelation knowledge distillation (DRKD), including pixel-wise relation\ndistillation and instance-wise relation distillation. The pixel-wise relation\ndistillation embeds pixel-wise features in the graph space and applies graph\nconvolution to capture the global pixel relation. By distilling the global\npixel relation, the student detector can learn the relation between foreground\nand background features, and avoid the difficulty of distilling features\ndirectly for the feature imbalance issue. Besides, we find that instance-wise\nrelation supplements valuable knowledge beyond independent features for small\nobjects. Thus, the instance-wise relation distillation is designed, which\ncalculates the similarity of different instances to obtain a relation matrix.\nMore importantly, a relation filter module is designed to highlight valuable\ninstance relations. The proposed dual relation knowledge distillation is\ngeneral and can be easily applied for both one-stage and two-stage detectors.\nOur method achieves state-of-the-art performance, which improves Faster R-CNN\nbased on ResNet50 from 38.4% to 41.6% mAP and improves RetinaNet based on\nResNet50 from 37.4% to 40.3% mAP on COCO 2017.", + "authors": "Zhenliang Ni, Fukui Yang, Shengzhao Wen, Gang Zhang", + "published": "2023-02-11", + "updated": "2023-06-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2401.15863v1", + "title": "Importance-Aware Adaptive Dataset Distillation", + "abstract": "Herein, we propose a novel dataset distillation method for constructing small\ninformative datasets that preserve the information of the large original\ndatasets. The development of deep learning models is enabled by the\navailability of large-scale datasets. Despite unprecedented success,\nlarge-scale datasets considerably increase the storage and transmission costs,\nresulting in a cumbersome model training process. Moreover, using raw data for\ntraining raises privacy and copyright concerns. To address these issues, a new\ntask named dataset distillation has been introduced, aiming to synthesize a\ncompact dataset that retains the essential information from the large original\ndataset. State-of-the-art (SOTA) dataset distillation methods have been\nproposed by matching gradients or network parameters obtained during training\non real and synthetic datasets. 
The contribution of different network\nparameters to the distillation process varies, and uniformly treating them\nleads to degraded distillation performance. Based on this observation, we\npropose an importance-aware adaptive dataset distillation (IADD) method that\ncan improve distillation performance by automatically assigning importance\nweights to different network parameters during distillation, thereby\nsynthesizing more robust distilled datasets. IADD demonstrates superior\nperformance over other SOTA dataset distillation methods based on parameter\nmatching on multiple benchmark datasets and outperforms them in terms of\ncross-architecture generalization. In addition, the analysis of self-adaptive\nweights demonstrates the effectiveness of IADD. Furthermore, the effectiveness\nof IADD is validated in a real-world medical application such as COVID-19\ndetection.", + "authors": "Guang Li, Ren Togo, Takahiro Ogawa, Miki Haseyama", + "published": "2024-01-29", + "updated": "2024-01-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2403.10045v1", + "title": "Towards Adversarially Robust Dataset Distillation by Curvature Regularization", + "abstract": "Dataset distillation (DD) allows datasets to be distilled to fractions of\ntheir original size while preserving the rich distributional information so\nthat models trained on the distilled datasets can achieve a comparable accuracy\nwhile saving significant computational loads. Recent research in this area has\nbeen focusing on improving the accuracy of models trained on distilled\ndatasets. In this paper, we aim to explore a new perspective of DD. We study\nhow to embed adversarial robustness in distilled datasets, so that models\ntrained on these datasets maintain the high accuracy and meanwhile acquire\nbetter adversarial robustness. We propose a new method that achieves this goal\nby incorporating curvature regularization into the distillation process with\nmuch less computational overhead than standard adversarial training. Extensive\nempirical experiments suggest that our method not only outperforms standard\nadversarial training on both accuracy and robustness with less computation\noverhead but is also capable of generating robust distilled datasets that can\nwithstand various adversarial attacks.", + "authors": "Eric Xue, Yijiang Li, Haoyang Liu, Yifan Shen, Haohan Wang", + "published": "2024-03-15", + "updated": "2024-03-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1905.09747v2", + "title": "Adversarially Robust Distillation", + "abstract": "Knowledge distillation is effective for producing small, high-performance\nneural networks for classification, but these small networks are vulnerable to\nadversarial attacks. This paper studies how adversarial robustness transfers\nfrom teacher to student during knowledge distillation. We find that a large\namount of robustness may be inherited by the student even when distilled on\nonly clean images. Second, we introduce Adversarially Robust Distillation (ARD)\nfor distilling robustness onto student networks. In addition to producing small\nmodels with high test accuracy like conventional distillation, ARD also passes\nthe superior robustness of large networks onto the student. 
In our experiments,\nwe find that ARD student models decisively outperform adversarially trained\nnetworks of identical architecture in terms of robust accuracy, surpassing\nstate-of-the-art methods on standard robustness benchmarks. Finally, we adapt\nrecent fast adversarial training methods to ARD for accelerated robust\ndistillation.", + "authors": "Micah Goldblum, Liam Fowl, Soheil Feizi, Tom Goldstein", + "published": "2019-05-23", + "updated": "2019-12-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2311.13811v2", + "title": "Education distillation:getting student models to learn in shcools", + "abstract": "Knowledge distillation is one of the methods for model compression, and\nexisting knowledge distillation techniques focus on how to improve the\ndistillation algorithm so as to enhance the distillation efficiency. This paper\nintroduces dynamic incremental learning into knowledge distillation and\nproposes a distillation strategy for education distillation. Specifically, it\nis proposed to take fragmented student models divided from the complete student\nmodel as lower-grade models. As the grade level rises, fragmented student\nmodels deepen in conjunction with designed teaching reference layers, while\nlearning and distilling from more teacher models. By moving from lower to\nhigher grades, fragmented student models were gradually integrated into a\ncomplete target student model, and the performance of the student models\ngradually improved from lower to higher grades of the stage. Education\ndistillation strategies combined with distillation algorithms outperform the\nresults of single distillation algorithms on the public dataset\nCIFAR100,Caltech256, Food-101 dataset.", + "authors": "Ling Feng, Danyang Li, Tianhao Wu, Xuliang Duan", + "published": "2023-11-23", + "updated": "2023-11-27", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2305.08076v1", + "title": "Improving Defensive Distillation using Teacher Assistant", + "abstract": "Adversarial attacks pose a significant threat to the security and safety of\ndeep neural networks being applied to modern applications. More specifically,\nin computer vision-based tasks, experts can use the knowledge of model\narchitecture to create adversarial samples imperceptible to the human eye.\nThese attacks can lead to security problems in popular applications such as\nself-driving cars, face recognition, etc. Hence, building networks which are\nrobust to such attacks is highly desirable and essential. Among the various\nmethods present in literature, defensive distillation has shown promise in\nrecent years. Using knowledge distillation, researchers have been able to\ncreate models robust against some of those attacks. However, more attacks have\nbeen developed exposing weakness in defensive distillation. In this project, we\nderive inspiration from teacher assistant knowledge distillation and propose\nthat introducing an assistant network can improve the robustness of the\ndistilled model. Through a series of experiments, we evaluate the distilled\nmodels for different distillation temperatures in terms of accuracy,\nsensitivity, and robustness. Our experiments demonstrate that the proposed\nhypothesis can improve robustness in most cases. 
Additionally, we show that\nmulti-step distillation can further improve robustness with very little impact\non model accuracy.", + "authors": "Maniratnam Mandal, Suna Gao", + "published": "2023-05-14", + "updated": "2023-05-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CR", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2011.14554v1", + "title": "A Selective Survey on Versatile Knowledge Distillation Paradigm for Neural Network Models", + "abstract": "This paper aims to provide a selective survey about knowledge\ndistillation(KD) framework for researchers and practitioners to take advantage\nof it for developing new optimized models in the deep neural network field. To\nthis end, we give a brief overview of knowledge distillation and some related\nworks including learning using privileged information(LUPI) and generalized\ndistillation(GD). Even though knowledge distillation based on the\nteacher-student architecture was initially devised as a model compression\ntechnique, it has found versatile applications over various frameworks.\n In this paper, we review the characteristics of knowledge distillation from\nthe hypothesis that the three important ingredients of knowledge distillation\nare distilled knowledge and loss,teacher-student paradigm, and the distillation\nprocess. In addition, we survey the versatility of the knowledge distillation\nby studying its direct applications and its usage in combination with other\ndeep learning paradigms. Finally we present some future works in knowledge\ndistillation including explainable knowledge distillation where the analytical\nanalysis of the performance gain is studied and the self-supervised learning\nwhich is a hot research topic in deep learning community.", + "authors": "Jeong-Hoe Ku, JiHun Oh, YoungYoon Lee, Gaurav Pooniwala, SangJeong Lee", + "published": "2020-11-30", + "updated": "2020-11-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2303.05015v2", + "title": "Smooth and Stepwise Self-Distillation for Object Detection", + "abstract": "Distilling the structured information captured in feature maps has\ncontributed to improved results for object detection tasks, but requires\ncareful selection of baseline architectures and substantial pre-training.\nSelf-distillation addresses these limitations and has recently achieved\nstate-of-the-art performance for object detection despite making several\nsimplifying architectural assumptions. Building on this work, we propose Smooth\nand Stepwise Self-Distillation (SSSD) for object detection. Our SSSD\narchitecture forms an implicit teacher from object labels and a feature pyramid\nnetwork backbone to distill label-annotated feature maps using Jensen-Shannon\ndistance, which is smoother than distillation losses used in prior work. We\nadditionally add a distillation coefficient that is adaptively configured based\non the learning rate. We extensively benchmark SSSD against a baseline and two\nstate-of-the-art object detector architectures on the COCO dataset by varying\nthe coefficients and backbone and detector networks. 
We demonstrate that SSSD\nachieves higher average precision in most experimental settings, is robust to a\nwide range of coefficients, and benefits from our stepwise distillation\nprocedure.", + "authors": "Jieren Deng, Xin Zhou, Hao Tian, Zhihong Pan, Derek Aguiar", + "published": "2023-03-09", + "updated": "2024-01-19", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0303009v2", + "title": "Security bounds in Quantum Cryptography using d-level systems", + "abstract": "We analyze the security of quantum cryptography schemes for $d$-level systems\nusing 2 or $d+1$ maximally conjugated bases, under individual eavesdropping\nattacks based on cloning machines and measurement after the basis\nreconciliation. We consider classical advantage distillation protocols, that\nallow to extract a key even in situations where the mutual information between\nthe honest parties is smaller than the eavesdropper's information. In this\nscenario, advantage distillation protocols are shown to be as powerful as\nquantum distillation: key distillation is possible using classical techniques\nif and only if the corresponding state in the entanglement based protocol is\ndistillable.", + "authors": "Antonio Acin, Nicolas Gisin, Valerio Scarani", + "published": "2003-03-03", + "updated": "2003-11-03", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/0908.2142v1", + "title": "Distillation of Bell states in open systems", + "abstract": "In this work we review the entire classification of 2x2 distillable states\nfor protocols with a finite numbers of copies. We show a distillation protocol\nthat allows to distill Bell states with non zero probability at any time for an\ninitial singlet in vacuum. It is shown that the same protocol used in non zero\nthermal baths yields a considerable recovering of entanglement.", + "authors": "E. Isasi, D. Mundarain", + "published": "2009-08-14", + "updated": "2009-08-14", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.00264v1", + "title": "DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation", + "abstract": "Dataset distillation aims to compress a training dataset by creating a small\nnumber of informative synthetic samples such that neural networks trained on\nthem perform as well as those trained on the original training dataset. Current\ntext dataset distillation methods create each synthetic sample as a sequence of\nword embeddings instead of a text to apply gradient-based optimization;\nhowever, such embedding-level distilled datasets cannot be used for training\nother models whose word embedding weights are different from the model used for\ndistillation. To address this issue, we propose a novel text dataset\ndistillation approach, called Distilling dataset into Language Model (DiLM),\nwhich trains a language model to generate informative synthetic training\nsamples as text data, instead of directly optimizing synthetic samples. We\nevaluated DiLM on various text classification datasets and showed that\ndistilled synthetic datasets from DiLM outperform those from current coreset\nselection methods. DiLM achieved remarkable generalization performance in\ntraining different types of models and in-context learning of large language\nmodels. 
Our code will be available at https://github.com/arumaekawa/DiLM.", + "authors": "Aru Maekawa, Satoshi Kosugi, Kotaro Funakoshi, Manabu Okumura", + "published": "2024-03-30", + "updated": "2024-03-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2011.06110v1", + "title": "Efficient Knowledge Distillation for RNN-Transducer Models", + "abstract": "Knowledge Distillation is an effective method of transferring knowledge from\na large model to a smaller model. Distillation can be viewed as a type of model\ncompression, and has played an important role for on-device ASR applications.\nIn this paper, we develop a distillation method for RNN-Transducer (RNN-T)\nmodels, a popular end-to-end neural network architecture for streaming speech\nrecognition. Our proposed distillation loss is simple and efficient, and uses\nonly the \"y\" and \"blank\" posterior probabilities from the RNN-T output\nprobability lattice. We study the effectiveness of the proposed approach in\nimproving the accuracy of sparse RNN-T models obtained by gradually pruning a\nlarger uncompressed model, which also serves as the teacher during\ndistillation. With distillation of 60% and 90% sparse multi-domain RNN-T\nmodels, we obtain WER reductions of 4.3% and 12.1% respectively, on a noisy\nFarField eval set. We also present results of experiments on LibriSpeech, where\nthe introduction of the distillation loss yields a 4.8% relative WER reduction\non the test-other dataset for a small Conformer model.", + "authors": "Sankaran Panchapagesan, Daniel S. Park, Chung-Cheng Chiu, Yuan Shangguan, Qiao Liang, Alexander Gruenstein", + "published": "2020-11-11", + "updated": "2020-11-11", + "primary_cat": "eess.AS", + "cats": [ + "eess.AS", + "cs.SD" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2108.12905v1", + "title": "Lipschitz Continuity Guided Knowledge Distillation", + "abstract": "Knowledge distillation has become one of the most important model compression\ntechniques by distilling knowledge from larger teacher networks to smaller\nstudent ones. Although great success has been achieved by prior distillation\nmethods via delicately designing various types of knowledge, they overlook the\nfunctional properties of neural networks, which makes the process of applying\nthose techniques to new tasks unreliable and non-trivial. To alleviate such\nproblem, in this paper, we initially leverage Lipschitz continuity to better\nrepresent the functional characteristic of neural networks and guide the\nknowledge distillation process. In particular, we propose a novel Lipschitz\nContinuity Guided Knowledge Distillation framework to faithfully distill\nknowledge by minimizing the distance between two neural networks' Lipschitz\nconstants, which enables teacher networks to better regularize student networks\nand improve the corresponding performance. We derive an explainable\napproximation algorithm with an explicit theoretical derivation to address the\nNP-hard problem of calculating the Lipschitz constant. 
Experimental results\nhave shown that our method outperforms other benchmarks over several knowledge\ndistillation tasks (e.g., classification, segmentation and object detection) on\nCIFAR-100, ImageNet, and PASCAL VOC datasets.", + "authors": "Yuzhang Shang, Bin Duan, Ziliang Zong, Liqiang Nie, Yan Yan", + "published": "2021-08-29", + "updated": "2021-08-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2011.02255v2", + "title": "On Self-Distilling Graph Neural Network", + "abstract": "Recently, the teacher-student knowledge distillation framework has\ndemonstrated its potential in training Graph Neural Networks (GNNs). However,\ndue to the difficulty of training over-parameterized GNN models, one may not\neasily obtain a satisfactory teacher model for distillation. Furthermore, the\ninefficient training process of teacher-student knowledge distillation also\nimpedes its applications in GNN models. In this paper, we propose the first\nteacher-free knowledge distillation method for GNNs, termed GNN\nSelf-Distillation (GNN-SD), that serves as a drop-in replacement of the\nstandard training process. The method is built upon the proposed neighborhood\ndiscrepancy rate (NDR), which quantifies the non-smoothness of the embedded\ngraph in an efficient way. Based on this metric, we propose the adaptive\ndiscrepancy retaining (ADR) regularizer to empower the transferability of\nknowledge that maintains high neighborhood discrepancy across GNN layers. We\nalso summarize a generic GNN-SD framework that could be exploited to induce\nother distillation strategies. Experiments further prove the effectiveness and\ngeneralization of our approach, as it brings: 1) state-of-the-art GNN\ndistillation performance with less training cost, 2) consistent and\nconsiderable performance enhancement for various popular backbones.", + "authors": "Yuzhao Chen, Yatao Bian, Xi Xiao, Yu Rong, Tingyang Xu, Junzhou Huang", + "published": "2020-11-04", + "updated": "2021-04-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2402.02781v1", + "title": "Dual Knowledge Distillation for Efficient Sound Event Detection", + "abstract": "Sound event detection (SED) is essential for recognizing specific sounds and\ntheir temporal locations within acoustic signals. This becomes challenging\nparticularly for on-device applications, where computational resources are\nlimited. To address this issue, we introduce a novel framework referred to as\ndual knowledge distillation for developing efficient SED systems in this work.\nOur proposed dual knowledge distillation commences with temporal-averaging\nknowledge distillation (TAKD), utilizing a mean student model derived from the\ntemporal averaging of the student model's parameters. This allows the student\nmodel to indirectly learn from a pre-trained teacher model, ensuring a stable\nknowledge distillation. Subsequently, we introduce embedding-enhanced feature\ndistillation (EEFD), which involves incorporating an embedding distillation\nlayer within the student model to bolster contextual learning. On DCASE 2023\nTask 4A public evaluation dataset, our proposed SED system with dual knowledge\ndistillation having merely one-third of the baseline model's parameters,\ndemonstrates superior performance in terms of PSDS1 and PSDS2. 
This highlights\nthe importance of proposed dual knowledge distillation for compact SED systems,\nwhich can be ideal for edge devices.", + "authors": "Yang Xiao, Rohan Kumar Das", + "published": "2024-02-05", + "updated": "2024-02-05", + "primary_cat": "cs.SD", + "cats": [ + "cs.SD", + "cs.AI", + "cs.CL", + "cs.LG", + "eess.AS" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2405.00348v1", + "title": "Practical Dataset Distillation Based on Deep Support Vectors", + "abstract": "Conventional dataset distillation requires significant computational\nresources and assumes access to the entire dataset, an assumption impractical\nas it presumes all data resides on a central server. In this paper, we focus on\ndataset distillation in practical scenarios with access to only a fraction of\nthe entire dataset. We introduce a novel distillation method that augments the\nconventional process by incorporating general model knowledge via the addition\nof Deep KKT (DKKT) loss. In practical settings, our approach showed improved\nperformance compared to the baseline distribution matching distillation method\non the CIFAR-10 dataset. Additionally, we present experimental evidence that\nDeep Support Vectors (DSVs) offer unique information to the original\ndistillation, and their integration results in enhanced performance.", + "authors": "Hyunho Lee, Junhoo Lee, Nojun Kwak", + "published": "2024-05-01", + "updated": "2024-05-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.17732v1", + "title": "Generative Dataset Distillation: Balancing Global Structure and Local Details", + "abstract": "In this paper, we propose a new dataset distillation method that considers\nbalancing global structure and local details when distilling the information\nfrom a large dataset into a generative model. Dataset distillation has been\nproposed to reduce the size of the required dataset when training models. The\nconventional dataset distillation methods face the problem of long redeployment\ntime and poor cross-architecture performance. Moreover, previous methods\nfocused too much on the high-level semantic attributes between the synthetic\ndataset and the original dataset while ignoring the local features such as\ntexture and shape. Based on the above understanding, we propose a new method\nfor distilling the original image dataset into a generative model. Our method\ninvolves using a conditional generative adversarial network to generate the\ndistilled dataset. Subsequently, we ensure balancing global structure and local\ndetails in the distillation process, continuously optimizing the generator for\nmore information-dense dataset generation.", + "authors": "Longzhen Li, Guang Li, Ren Togo, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama", + "published": "2024-04-26", + "updated": "2024-04-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1903.04197v7", + "title": "Structured Knowledge Distillation for Dense Prediction", + "abstract": "In this work, we consider transferring the structure information from large\nnetworks to compact ones for dense prediction tasks in computer vision.\nPrevious knowledge distillation strategies used for dense prediction tasks\noften directly borrow the distillation scheme for image classification and\nperform knowledge distillation for each pixel separately, leading to\nsub-optimal performance. 
Here we propose to distill structured knowledge from\nlarge networks to compact networks, taking into account the fact that dense\nprediction is a structured prediction problem. Specifically, we study two\nstructured distillation schemes: i) pair-wise distillation that distills the\npair-wise similarities by building a static graph; and ii) holistic\ndistillation that uses adversarial training to distill holistic knowledge. The\neffectiveness of our knowledge distillation approaches is demonstrated by\nexperiments on three dense prediction tasks: semantic segmentation, depth\nestimation and object detection. Code is available at: https://git.io/StructKD", + "authors": "Yifan Liu, Changyong Shun, Jingdong Wang, Chunhua Shen", + "published": "2019-03-11", + "updated": "2020-06-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2104.11928v1", + "title": "Extract then Distill: Efficient and Effective Task-Agnostic BERT Distillation", + "abstract": "Task-agnostic knowledge distillation, a teacher-student framework, has been\nproved effective for BERT compression. Although achieving promising results on\nNLP tasks, it requires enormous computational resources. In this paper, we\npropose Extract Then Distill (ETD), a generic and flexible strategy to reuse\nthe teacher's parameters for efficient and effective task-agnostic\ndistillation, which can be applied to students of any size. Specifically, we\nintroduce two variants of ETD, ETD-Rand and ETD-Impt, which extract the\nteacher's parameters in a random manner and by following an importance metric\nrespectively. In this way, the student has already acquired some knowledge at\nthe beginning of the distillation process, which makes the distillation process\nconverge faster. We demonstrate the effectiveness of ETD on the GLUE benchmark\nand SQuAD. The experimental results show that: (1) compared with the baseline\nwithout an ETD strategy, ETD can save 70\\% of computation cost. Moreover, it\nachieves better results than the baseline when using the same computing\nresource. (2) ETD is generic and has been proven effective for different\ndistillation methods (e.g., TinyBERT and MiniLM) and students of different\nsizes. The source code will be publicly available upon publication.", + "authors": "Cheng Chen, Yichun Yin, Lifeng Shang, Zhi Wang, Xin Jiang, Xiao Chen, Qun Liu", + "published": "2021-04-24", + "updated": "2021-04-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.04057v1", + "title": "Score identity Distillation: Exponentially Fast Distillation of Pretrained Diffusion Models for One-Step Generation", + "abstract": "We introduce Score identity Distillation (SiD), an innovative data-free\nmethod that distills the generative capabilities of pretrained diffusion models\ninto a single-step generator. SiD not only facilitates an exponentially fast\nreduction in Fr\\'echet inception distance (FID) during distillation but also\napproaches or even exceeds the FID performance of the original teacher\ndiffusion models. By reformulating forward diffusion processes as semi-implicit\ndistributions, we leverage three score-related identities to create an\ninnovative loss mechanism. 
This mechanism achieves rapid FID reduction by\ntraining the generator using its own synthesized images, eliminating the need\nfor real data or reverse-diffusion-based generation, all accomplished within\nsignificantly shortened generation time. Upon evaluation across four benchmark\ndatasets, the SiD algorithm demonstrates high iteration efficiency during\ndistillation and surpasses competing distillation approaches, whether they are\none-step or few-step, data-free, or dependent on training data, in terms of\ngeneration quality. This achievement not only redefines the benchmarks for\nefficiency and effectiveness in diffusion distillation but also in the broader\nfield of diffusion-based generation. Our PyTorch implementation will be\npublicly accessible on GitHub.", + "authors": "Mingyuan Zhou, Huangjie Zheng, Zhendong Wang, Mingzhang Yin, Hai Huang", + "published": "2024-04-05", + "updated": "2024-04-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2309.09920v1", + "title": "Distilling HuBERT with LSTMs via Decoupled Knowledge Distillation", + "abstract": "Much research effort is being applied to the task of compressing the\nknowledge of self-supervised models, which are powerful, yet large and memory\nconsuming. In this work, we show that the original method of knowledge\ndistillation (and its more recently proposed extension, decoupled knowledge\ndistillation) can be applied to the task of distilling HuBERT. In contrast to\nmethods that focus on distilling internal features, this allows for more\nfreedom in the network architecture of the compressed model. We thus propose to\ndistill HuBERT's Transformer layers into an LSTM-based distilled model that\nreduces the number of parameters even below DistilHuBERT and at the same time\nshows improved performance in automatic speech recognition.", + "authors": "Danilo de Oliveira, Timo Gerkmann", + "published": "2023-09-18", + "updated": "2023-09-18", + "primary_cat": "eess.AS", + "cats": [ + "eess.AS", + "cs.LG", + "cs.SD", + "eess.SP" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2305.12330v1", + "title": "Task-agnostic Distillation of Encoder-Decoder Language Models", + "abstract": "Finetuning pretrained language models (LMs) have enabled appealing\nperformance on a diverse array of tasks. The intriguing task-agnostic property\nhas driven a shifted focus from task-specific to task-agnostic distillation of\nLMs. While task-agnostic, compute-efficient, performance-preserved LMs can be\nyielded by task-agnostic distillation, previous studies mainly sit in\ndistillation of either encoder-only LMs (e.g., BERT) or decoder-only ones\n(e.g., GPT) yet largely neglect that distillation of encoder-decoder LMs (e.g.,\nT5) can posit very distinguished behaviors. Frustratingly, we discover that\nexisting task-agnostic distillation methods can fail to handle the distillation\nof encoder-decoder LMs. To the demand, we explore a few paths and uncover a\npath named as MiniEnD that successfully tackles the distillation of\nencoder-decoder LMs in a task-agnostic fashion. We examine MiniEnD on language\nunderstanding and abstractive summarization. The results showcase that MiniEnD\nis generally effective and is competitive compared to other alternatives. We\nfurther scale MiniEnD up to distillation of 3B encoder-decoder language models\nwith interpolated distillation. 
The results imply the opportunities and\nchallenges in distilling large language models (e.g., LLaMA).", + "authors": "Chen Zhang, Yang Yang, Jingang Wang, Dawei Song", + "published": "2023-05-21", + "updated": "2023-05-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1907.09682v2", + "title": "Similarity-Preserving Knowledge Distillation", + "abstract": "Knowledge distillation is a widely applicable technique for training a\nstudent neural network under the guidance of a trained teacher network. For\nexample, in neural network compression, a high-capacity teacher is distilled to\ntrain a compact student; in privileged learning, a teacher trained with\nprivileged data is distilled to train a student without access to that data.\nThe distillation loss determines how a teacher's knowledge is captured and\ntransferred to the student. In this paper, we propose a new form of knowledge\ndistillation loss that is inspired by the observation that semantically similar\ninputs tend to elicit similar activation patterns in a trained network.\nSimilarity-preserving knowledge distillation guides the training of a student\nnetwork such that input pairs that produce similar (dissimilar) activations in\nthe teacher network produce similar (dissimilar) activations in the student\nnetwork. In contrast to previous distillation methods, the student is not\nrequired to mimic the representation space of the teacher, but rather to\npreserve the pairwise similarities in its own representation space. Experiments\non three public datasets demonstrate the potential of our approach.", + "authors": "Frederick Tung, Greg Mori", + "published": "2019-07-23", + "updated": "2019-08-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1812.00249v1", + "title": "On Compressing U-net Using Knowledge Distillation", + "abstract": "We study the use of knowledge distillation to compress the U-net\narchitecture. We show that, while standard distillation is not sufficient to\nreliably train a compressed U-net, introducing other regularization methods,\nsuch as batch normalization and class re-weighting, in knowledge distillation\nsignificantly improves the training process. This allows us to compress a U-net\nby over 1000x, i.e., to 0.1% of its original number of parameters, at a\nnegligible decrease in performance.", + "authors": "Karttikeya Mangalam, Mathieu Salzamann", + "published": "2018-12-01", + "updated": "2018-12-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2305.18381v3", + "title": "Distill Gold from Massive Ores: Efficient Dataset Distillation via Critical Samples Selection", + "abstract": "Data-efficient learning has garnered significant attention, especially given\nthe current trend of large multi-modal models. Recently, dataset distillation\nbecomes an effective approach for data-efficiency; however, the distillation\nprocess itself can still be inefficient. In this work, we model the dataset\ndistillation task within the context of information transport. By observing the\nsubstantial data redundancy inherent in the distillation, we argue to put more\nemphasis on the samples' utility for the distillation task. We introduce and\nvalidate a family of data utility estimators and optimal data selection methods\nto exploit the most valuable samples. 
This strategy significantly reduces the\ntraining costs and extends various existing distillation algorithms to larger\nand more diversified datasets, e.g., in some cases only 0.04% training data is\nsufficient for comparable distillation performance. Our method consistently\nenhances the distillation algorithms, even on much larger-scale and more\nheterogeneous datasets, e.g. ImageNet-1K and Kinetics-400. This paradigm opens\nup new avenues in the dynamics of distillation and paves the way for efficient\ndataset distillation. Our code is available on\nhttps://github.com/silicx/GoldFromOres .", + "authors": "Yue Xu, Yong-Lu Li, Kaitong Cui, Ziyu Wang, Cewu Lu, Yu-Wing Tai, Chi-Keung Tang", + "published": "2023-05-28", + "updated": "2023-11-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0012022v1", + "title": "Distilling a Greenberger-Horne-Zeilinger State From an Arbitrary Pure State of Three Qubits", + "abstract": "We present a general algorithm to achieve local operators which can produce\nthe GHZ state for an arbitrary given three-qubit state. Thus the distillation\nprocess of the state can be realized optimally. The algorithm is shown to be\nsufficient for the three-qubit state on account of the fact that any state for\nwhich this distillation algorithm is invalid cannot be distilled to the GHZ\nstate by any local actions. Moreover, an analytical result of distillation\noperations is achieved for the general state of three qubits.", + "authors": "Li-Xiang Cen, Shun-Jin Wang", + "published": "2000-12-05", + "updated": "2000-12-05", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/0803.0345v2", + "title": "Secret key distillation from shielded two-qubit states", + "abstract": "The quantum states corresponding to a secret key are characterized using the\nso-called private states, where the key part consisting of a secret key is\nshielded by the additional systems. Based on the construction, it was shown\nthat a secret key can be distilled from bound entangled states. In this work, I\nconsider the shielded two-qubit states in a key-distillation scenario and\nderive the conditions under which a secret key can be distilled using the\nrecurrence protocol or the two-way classical distillation, advantage\ndistillation together with one-way postprocessing. From the security\nconditions, it is shown that a secret key can be distilled from bound entangled\nstates in a much wider range. In addition, I consider the case that in which\nwhite noise is added to quantum states and show that the classical distillation\nprotocol still works despite a certain amount of noise although the recurrence\nprotocol does not.", + "authors": "Joonwoo Bae", + "published": "2008-03-03", + "updated": "2010-09-22", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.09740v1", + "title": "Leveraging Zero-Level Distillation to Generate High-Fidelity Magic States", + "abstract": "Magic state distillation plays an important role in universal fault-tolerant\nquantum computing, and its overhead is one of the major obstacles to realizing\nfault-tolerant quantum computers. Hence, many studies have been conducted to\nreduce this overhead. 
Among these, Litinski has provided a concrete assessment\nof resource-efficient distillation protocol implementations on the rotated\nsurface code. On the other hand, recently, Itogawa et al. have proposed\nzero-level distillation, a distillation protocol offering very small spatial\nand temporal overhead to generate relatively low-fidelity magic states. While\nzero-level distillation offers preferable spatial and temporal overhead, it\ncannot directly generate high-fidelity magic states since it only reduces the\nlogical error rate of the magic state quadratically. In this study, we evaluate\nthe spatial and temporal overhead of two-level distillation implementations\ngenerating relatively high-fidelity magic states, including ones incorporating\nzero-level distillation. To this end, we introduce (0+1)-level distillation, a\ntwo-level distillation protocol which combines zero-level distillation and the\n15-to-1 distillation protocol. We refine the second-level 15-to-1\nimplementation in it to capitalize on the small footprint of zero-level\ndistillation. Under conditions of a physical error probability of\n$p_{\\mathrm{phys}} = 10^{-4}$ ($10^{-3}$) and targeting an error rate for the\nmagic state within $[5 \\times 10^{-17}, 10^{-11}]$ ($[5 \\times 10^{-11},\n10^{-8}]$), (0+1)-level distillation reduces the spatiotemporal overhead by\nmore than 63% (61%) compared to the (15-to-1)$\\times$(15-to-1) protocol and\nmore than 43% (44%) compared to the (15-to-1)$\\times$(20-to-4) protocol,\noffering a substantial efficiency gain over the traditional protocols.", + "authors": "Yutaka Hirano, Tomohiro Itogawa, Keisuke Fujii", + "published": "2024-04-15", + "updated": "2024-04-15", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2206.12370v2", + "title": "Mixed Sample Augmentation for Online Distillation", + "abstract": "Mixed Sample Regularization (MSR), such as MixUp or CutMix, is a powerful\ndata augmentation strategy to generalize convolutional neural networks.\nPrevious empirical analysis has illustrated an orthogonal performance gain\nbetween MSR and conventional offline Knowledge Distillation (KD). To be more\nspecific, student networks can be enhanced with the involvement of MSR in the\ntraining stage of sequential distillation. Yet, the interplay between MSR and\nonline knowledge distillation, where an ensemble of peer students learn\nmutually from each other, remains unexplored. To bridge the gap, we make the\nfirst attempt at incorporating CutMix into online distillation, where we\nempirically observe a significant improvement. Encouraged by this fact, we\npropose an even stronger MSR specifically for online distillation, named as\nCut\\textsuperscript{n}Mix. Furthermore, a novel online distillation framework\nis designed upon Cut\\textsuperscript{n}Mix, to enhance the distillation with\nfeature level mutual learning and a self-ensemble teacher. 
Comprehensive\nevaluations on CIFAR10 and CIFAR100 with six network architectures show that\nour approach can consistently outperform state-of-the-art distillation methods.", + "authors": "Yiqing Shen, Liwu Xu, Yuzhe Yang, Yaqian Li, Yandong Guo", + "published": "2022-06-24", + "updated": "2023-03-02", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2010.13002v2", + "title": "Pre-trained Summarization Distillation", + "abstract": "Recent state-of-the-art approaches to summarization utilize large pre-trained\nTransformer models. Distilling these models to smaller student models has\nbecome critically important for practical use; however there are many different\ndistillation methods proposed by the NLP literature. Recent work on distilling\nBERT for classification and regression tasks shows strong performance using\ndirect knowledge distillation. Alternatively, machine translation practitioners\ndistill using pseudo-labeling, where a small model is trained on the\ntranslations of a larger model. A third, simpler approach is to 'shrink and\nfine-tune' (SFT), which avoids any explicit distillation by copying parameters\nto a smaller student model and then fine-tuning. We compare these three\napproaches for distillation of Pegasus and BART, the current and former state\nof the art, pre-trained summarization models, and find that SFT outperforms\nknowledge distillation and pseudo-labeling on the CNN/DailyMail dataset, but\nunder-performs pseudo-labeling on the more abstractive XSUM dataset. PyTorch\nCode and checkpoints of different sizes are available through Hugging Face\ntransformers here http://tiny.cc/4iy0tz.", + "authors": "Sam Shleifer, Alexander M. Rush", + "published": "2020-10-24", + "updated": "2020-10-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2204.00548v1", + "title": "Unified and Effective Ensemble Knowledge Distillation", + "abstract": "Ensemble knowledge distillation can extract knowledge from multiple teacher\nmodels and encode it into a single student model. Many existing methods learn\nand distill the student model on labeled data only. However, the teacher models\nare usually learned on the same labeled data, and their predictions have high\ncorrelations with groudtruth labels. Thus, they cannot provide sufficient\nknowledge complementary to task labels for student teaching. Distilling on\nunseen unlabeled data has the potential to enhance the knowledge transfer from\nthe teachers to the student. In this paper, we propose a unified and effective\nensemble knowledge distillation method that distills a single student model\nfrom an ensemble of teacher models on both labeled and unlabeled data. Since\ndifferent teachers may have diverse prediction correctness on the same sample,\non labeled data we weight the predictions of different teachers according to\ntheir correctness. In addition, we weight the distillation loss based on the\noverall prediction correctness of the teacher ensemble to distill high-quality\nknowledge. On unlabeled data, there is no groundtruth to evaluate prediction\ncorrectness. 
Fortunately, the disagreement among teachers is an indication of\nsample hardness, and thereby we weight the distillation loss based on teachers'\ndisagreement to emphasize knowledge distillation on important samples.\nExtensive experiments on four datasets show the effectiveness of our proposed\nensemble distillation method.", + "authors": "Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang", + "published": "2022-04-01", + "updated": "2022-04-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2206.08491v1", + "title": "Revisiting Self-Distillation", + "abstract": "Knowledge distillation is the procedure of transferring \"knowledge\" from a\nlarge model (the teacher) to a more compact one (the student), often being used\nin the context of model compression. When both models have the same\narchitecture, this procedure is called self-distillation. Several works have\nanecdotally shown that a self-distilled student can outperform the teacher on\nheld-out data. In this work, we systematically study self-distillation in a\nnumber of settings. We first show that even with a highly accurate teacher,\nself-distillation allows a student to surpass the teacher in all cases.\nSecondly, we revisit existing theoretical explanations of (self) distillation\nand identify contradicting examples, revealing possible drawbacks of these\nexplanations. Finally, we provide an alternative explanation for the dynamics\nof self-distillation through the lens of loss landscape geometry. We conduct\nextensive experiments to show that self-distillation leads to flatter minima,\nthereby resulting in better generalization.", + "authors": "Minh Pham, Minsu Cho, Ameya Joshi, Chinmay Hegde", + "published": "2022-06-17", + "updated": "2022-06-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2112.05638v2", + "title": "DistilCSE: Effective Knowledge Distillation For Contrastive Sentence Embeddings", + "abstract": "Large-scale contrastive learning models can learn very informative sentence\nembeddings, but are hard to serve online due to the huge model size. Therefore,\nthey often play the role of \"teacher\", transferring abilities to small\n\"student\" models through knowledge distillation. However, knowledge\ndistillation inevitably brings some drop in embedding effect. To tackle that,\nwe propose an effective knowledge distillation framework for contrastive\nsentence embeddings, termed DistilCSE. It first applies knowledge distillation\non a large amount of unlabeled data, and then fine-tunes student models through\ncontrastive learning on limited labeled data. To achieve better distillation\nresults, we further propose Contrastive Knowledge Distillation (CKD). CKD uses\nInfoNCE as the loss function in knowledge distillation, enhancing the objective\nconsistency among teacher model training, knowledge distillation, and student\nmodel fine-tuning. Extensive experiments show that student models trained with\nthe proposed DistilCSE and CKD suffer from little or even no performance\ndecrease and consistently outperform the corresponding counterparts of the same\nparameter size. 
Impressively, our 110M student model outperforms the latest\nstate-of-the-art model, i.e., Sentence-T5 (11B), with only 1% parameters and\n0.25% unlabeled data.", + "authors": "Chaochen Gao, Xing Wu, Peng Wang, Jue Wang, Liangjun Zang, Zhongyuan Wang, Songlin Hu", + "published": "2021-12-10", + "updated": "2023-01-30", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2302.09632v1", + "title": "HomoDistil: Homotopic Task-Agnostic Distillation of Pre-trained Transformers", + "abstract": "Knowledge distillation has been shown to be a powerful model compression\napproach to facilitate the deployment of pre-trained language models in\npractice. This paper focuses on task-agnostic distillation. It produces a\ncompact pre-trained model that can be easily fine-tuned on various tasks with\nsmall computational costs and memory footprints. Despite the practical\nbenefits, task-agnostic distillation is challenging. Since the teacher model\nhas a significantly larger capacity and stronger representation power than the\nstudent model, it is very difficult for the student to produce predictions that\nmatch the teacher's over a massive amount of open-domain training data. Such a\nlarge prediction discrepancy often diminishes the benefits of knowledge\ndistillation. To address this challenge, we propose Homotopic Distillation\n(HomoDistil), a novel task-agnostic distillation approach equipped with\niterative pruning. Specifically, we initialize the student model from the\nteacher model, and iteratively prune the student's neurons until the target\nwidth is reached. Such an approach maintains a small discrepancy between the\nteacher's and student's predictions throughout the distillation process, which\nensures the effectiveness of knowledge transfer. Extensive experiments\ndemonstrate that HomoDistil achieves significant improvements on existing\nbaselines.", + "authors": "Chen Liang, Haoming Jiang, Zheng Li, Xianfeng Tang, Bin Yin, Tuo Zhao", + "published": "2023-02-19", + "updated": "2023-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2211.08071v2", + "title": "Knowledge Distillation for Detection Transformer with Consistent Distillation Points Sampling", + "abstract": "DETR is a novel end-to-end transformer architecture object detector, which\nsignificantly outperforms classic detectors when scaling up the model size. In\nthis paper, we focus on the compression of DETR with knowledge distillation.\nWhile knowledge distillation has been well-studied in classic detectors, there\nis a lack of researches on how to make it work effectively on DETR. We first\nprovide experimental and theoretical analysis to point out that the main\nchallenge in DETR distillation is the lack of consistent distillation points.\nDistillation points refer to the corresponding inputs of the predictions for\nstudent to mimic, and reliable distillation requires sufficient distillation\npoints which are consistent between teacher and student. Based on this\nobservation, we propose a general knowledge distillation paradigm for\nDETR(KD-DETR) with consistent distillation points sampling. Specifically, we\ndecouple detection and distillation tasks by introducing a set of specialized\nobject queries to construct distillation points. In this paradigm, we further\npropose a general-to-specific distillation points sampling strategy to explore\nthe extensibility of KD-DETR. 
Extensive experiments on different DETR\narchitectures with various scales of backbones and transformer layers validate\nthe effectiveness and generalization of KD-DETR. KD-DETR boosts the performance\nof DAB-DETR with ResNet-18 and ResNet-50 backbone to 41.4$\\%$, 45.7$\\%$ mAP,\nrespectively, which are 5.2$\\%$, 3.5$\\%$ higher than the baseline, and\nResNet-50 even surpasses the teacher model by $2.2\\%$.", + "authors": "Yu Wang, Xin Li, Shengzhao Wen, Fukui Yang, Wanping Zhang, Gang Zhang, Haocheng Feng, Junyu Han, Errui Ding", + "published": "2022-11-15", + "updated": "2022-11-16", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2304.06461v2", + "title": "Multi-Mode Online Knowledge Distillation for Self-Supervised Visual Representation Learning", + "abstract": "Self-supervised learning (SSL) has made remarkable progress in visual\nrepresentation learning. Some studies combine SSL with knowledge distillation\n(SSL-KD) to boost the representation learning performance of small models. In\nthis study, we propose a Multi-mode Online Knowledge Distillation method (MOKD)\nto boost self-supervised visual representation learning. Different from\nexisting SSL-KD methods that transfer knowledge from a static pre-trained\nteacher to a student, in MOKD, two different models learn collaboratively in a\nself-supervised manner. Specifically, MOKD consists of two distillation modes:\nself-distillation and cross-distillation modes. Among them, self-distillation\nperforms self-supervised learning for each model independently, while\ncross-distillation realizes knowledge interaction between different models. In\ncross-distillation, a cross-attention feature search strategy is proposed to\nenhance the semantic feature alignment between different models. As a result,\nthe two models can absorb knowledge from each other to boost their\nrepresentation learning performance. Extensive experimental results on\ndifferent backbones and datasets demonstrate that two heterogeneous models can\nbenefit from MOKD and outperform their independently trained baseline. In\naddition, MOKD also outperforms existing SSL-KD methods for both the student\nand teacher models.", + "authors": "Kaiyou Song, Jin Xie, Shan Zhang, Zimeng Luo", + "published": "2023-04-13", + "updated": "2023-06-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2106.07137v1", + "title": "Why Can You Lay Off Heads? Investigating How BERT Heads Transfer", + "abstract": "The huge size of the widely used BERT family models has led to recent efforts\nabout model distillation. The main goal of distillation is to create a\ntask-agnostic pre-trained model that can be fine-tuned on downstream tasks\nwithout fine-tuning its full-sized version. Despite the progress of\ndistillation, to what degree and for what reason a task-agnostic model can be\ncreated from distillation has not been well studied. Also, the mechanisms\nbehind transfer learning of those BERT models are not well investigated either.\nTherefore, this work focuses on analyzing the acceptable deduction when\ndistillation for guiding the future distillation procedure. Specifically, we\nfirst inspect the prunability of the Transformer heads in RoBERTa and ALBERT\nusing their head importance estimation proposed by Michel et al. (2019), and\nthen check the coherence of the important heads between the pre-trained task\nand downstream tasks. 
Hence, the acceptable deduction of performance on the\npre-trained task when distilling a model can be derived from the results, and\nwe further compare the behavior of the pruned model before and after\nfine-tuning. Our studies provide guidance for future directions about BERT\nfamily model distillation.", + "authors": "Ting-Rui Chiang, Yun-Nung Chen", + "published": "2021-06-14", + "updated": "2021-06-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0305188v1", + "title": "Dynamics of Distillability", + "abstract": "The time evolution of a maximally entangled bipartite systems is presented in\nthis paper. The distillability criterion is given in terms of Kraus operators.\nUsing the criterion, we discuss the distillability of $2\\times 2$ and $n\\times\nn (n>2)$ systems in their evolution process. There are two distinguished\nprocesses, dissipation and decoherence, which may destroy the distillability.\nWe discuss the effects of those processes on distillability in details.", + "authors": "W. Wu, W. Wang, X. X. Yi", + "published": "2003-05-30", + "updated": "2003-05-30", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2007.09029v1", + "title": "Knowledge Distillation in Deep Learning and its Applications", + "abstract": "Deep learning based models are relatively large, and it is hard to deploy\nsuch models on resource-limited devices such as mobile phones and embedded\ndevices. One possible solution is knowledge distillation whereby a smaller\nmodel (student model) is trained by utilizing the information from a larger\nmodel (teacher model). In this paper, we present a survey of knowledge\ndistillation techniques applied to deep learning models. To compare the\nperformances of different techniques, we propose a new metric called\ndistillation metric. Distillation metric compares different knowledge\ndistillation algorithms based on sizes and accuracy scores. Based on the\nsurvey, some interesting conclusions are drawn and presented in this paper.", + "authors": "Abdolmaged Alkhulaifi, Fahad Alsahli, Irfan Ahmad", + "published": "2020-07-17", + "updated": "2020-07-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2306.06629v1", + "title": "GKD: A General Knowledge Distillation Framework for Large-scale Pre-trained Language Model", + "abstract": "Currently, the reduction in the parameter scale of large-scale pre-trained\nlanguage models (PLMs) through knowledge distillation has greatly facilitated\ntheir widespread deployment on various devices. However, the deployment of\nknowledge distillation systems faces great challenges in real-world\nindustrial-strength applications, which require the use of complex distillation\nmethods on even larger-scale PLMs (over 10B), limited by memory on GPUs and the\nswitching of methods. To overcome these challenges, we propose GKD, a general\nknowledge distillation framework that supports distillation on larger-scale\nPLMs using various distillation methods. With GKD, developers can build larger\ndistillation models on memory-limited GPUs and easily switch and combine\ndifferent distillation methods within a single framework. 
Experimental results\nshow that GKD can support the distillation of at least 100B-scale PLMs and 25\nmainstream methods on 8 NVIDIA A100 (40GB) GPUs.", + "authors": "Shicheng Tan, Weng Lam Tam, Yuanchun Wang, Wenwen Gong, Yang Yang, Hongyin Tang, Keqing He, Jiahao Liu, Jingang Wang, Shu Zhao, Peng Zhang, Jie Tang", + "published": "2023-06-11", + "updated": "2023-06-11", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2109.14960v3", + "title": "Prune Your Model Before Distill It", + "abstract": "Knowledge distillation transfers the knowledge from a cumbersome teacher to a\nsmall student. Recent results suggest that the student-friendly teacher is more\nappropriate to distill since it provides more transferable knowledge. In this\nwork, we propose the novel framework, \"prune, then distill,\" that prunes the\nmodel first to make it more transferrable and then distill it to the student.\nWe provide several exploratory examples where the pruned teacher teaches better\nthan the original unpruned networks. We further show theoretically that the\npruned teacher plays the role of regularizer in distillation, which reduces the\ngeneralization error. Based on this result, we propose a novel neural network\ncompression scheme where the student network is formed based on the pruned\nteacher and then apply the \"prune, then distill\" strategy. The code is\navailable at https://github.com/ososos888/prune-then-distill", + "authors": "Jinhyuk Park, Albert No", + "published": "2021-09-30", + "updated": "2022-07-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/9908047v2", + "title": "On bound entanglement assisted distillation", + "abstract": "We investigate asymptotic distillation of entanglement in the presence of an\nunlimited amount of bound entanglement for bi-partite systems. We show that the\ndistillability is still bounded by the relative entropy of entanglement. This\noffers a strong support to the fact that bound entanglement does not improve\ndistillation of entanglement.", + "authors": "V. Vedral", + "published": "1999-08-14", + "updated": "1999-11-17", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2302.11472v1", + "title": "Distilling Calibrated Student from an Uncalibrated Teacher", + "abstract": "Knowledge distillation is a common technique for improving the performance of\na shallow student network by transferring information from a teacher network,\nwhich in general, is comparatively large and deep. These teacher networks are\npre-trained and often uncalibrated, as no calibration technique is applied to\nthe teacher model while training. Calibration of a network measures the\nprobability of correctness for any of its predictions, which is critical in\nhigh-risk domains. In this paper, we study how to obtain a calibrated student\nfrom an uncalibrated teacher. Our approach relies on the fusion of the\ndata-augmentation techniques, including but not limited to cutout, mixup, and\nCutMix, with knowledge distillation. We extend our approach beyond traditional\nknowledge distillation and find it suitable for Relational Knowledge\nDistillation and Contrastive Representation Distillation as well. 
The novelty\nof the work is that it provides a framework to distill a calibrated student\nfrom an uncalibrated teacher model without compromising the accuracy of the\ndistilled student. We perform extensive experiments to validate our approach on\nvarious datasets, including CIFAR-10, CIFAR-100, CINIC-10 and TinyImageNet, and\nobtained calibrated student models. We also observe robust performance of our\napproach while evaluating it on corrupted CIFAR-100C data.", + "authors": "Ishan Mishra, Sethu Vamsi Krishna, Deepak Mishra", + "published": "2023-02-22", + "updated": "2023-02-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2011.09969v1", + "title": "Neural network algorithm and its application in reactive distillation", + "abstract": "Reactive distillation is a special distillation technology based on the\ncoupling of chemical reaction and distillation. It has the characteristics of\nlow energy consumption and high separation efficiency. However, because the\ncombination of reaction and separation produces highly nonlinear robust\nbehavior, the control and optimization of the reactive distillation process\ncannot use conventional methods, but must rely on neural network algorithms.\nThis paper briefly describes the characteristics and research progress of\nreactive distillation technology and neural network algorithms, and summarizes\nthe application of neural network algorithms in reactive distillation, aiming\nto provide reference for the development and innovation of industry technology.", + "authors": "Huihui Wang, Ruyang Mo", + "published": "2020-11-16", + "updated": "2020-11-16", + "primary_cat": "cs.NE", + "cats": [ + "cs.NE", + "cs.LG", + "I.2.8" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2109.15014v1", + "title": "Deep Neural Compression Via Concurrent Pruning and Self-Distillation", + "abstract": "Pruning aims to reduce the number of parameters while maintaining performance\nclose to the original network. This work proposes a novel\n\\emph{self-distillation} based pruning strategy, whereby the representational\nsimilarity between the pruned and unpruned versions of the same network is\nmaximized. Unlike previous approaches that treat distillation and pruning\nseparately, we use distillation to inform the pruning criteria, without\nrequiring a separate student network as in knowledge distillation. We show that\nthe proposed {\\em cross-correlation objective for self-distilled pruning}\nimplicitly encourages sparse solutions, naturally complementing magnitude-based\npruning criteria. Experiments on the GLUE and XGLUE benchmarks show that\nself-distilled pruning increases mono- and cross-lingual language model\nperformance. Self-distilled pruned models also outperform smaller Transformers\nwith an equal number of parameters and are competitive against (6 times) larger\ndistilled networks. 
We also observe that self-distillation (1) maximizes class\nseparability, (2) increases the signal-to-noise ratio, and (3) converges faster\nafter pruning steps, providing further insights into why self-distilled pruning\nimproves generalization.", + "authors": "James O' Neill, Sourav Dutta, Haytham Assem", + "published": "2021-09-30", + "updated": "2021-09-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2208.10068v1", + "title": "Tree-structured Auxiliary Online Knowledge Distillation", + "abstract": "Traditional knowledge distillation adopts a two-stage training process in\nwhich a teacher model is pre-trained and then transfers the knowledge to a\ncompact student model. To overcome the limitation, online knowledge\ndistillation is proposed to perform one-stage distillation when the teacher is\nunavailable. Recent researches on online knowledge distillation mainly focus on\nthe design of the distillation objective, including attention or gate\nmechanism. Instead, in this work, we focus on the design of the global\narchitecture and propose Tree-Structured Auxiliary online knowledge\ndistillation (TSA), which adds more parallel peers for layers close to the\noutput hierarchically to strengthen the effect of knowledge distillation.\nDifferent branches construct different views of the inputs, which can be the\nsource of the knowledge. The hierarchical structure implies that the knowledge\ntransfers from general to task-specific with the growth of the layers.\nExtensive experiments on 3 computer vision and 4 natural language processing\ndatasets show that our method achieves state-of-the-art performance without\nbells and whistles. To the best of our knowledge, we are the first to\ndemonstrate the effectiveness of online knowledge distillation for machine\ntranslation tasks.", + "authors": "Wenye Lin, Yangning Li, Yifeng Ding, Hai-Tao Zheng", + "published": "2022-08-22", + "updated": "2022-08-22", + "primary_cat": "cs.NI", + "cats": [ + "cs.NI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2112.10047v1", + "title": "Controlling the Quality of Distillation in Response-Based Network Compression", + "abstract": "The performance of a distillation-based compressed network is governed by the\nquality of distillation. The reason for the suboptimal distillation of a large\nnetwork (teacher) to a smaller network (student) is largely attributed to the\ngap in the learning capacities of given teacher-student pair. While it is hard\nto distill all the knowledge of a teacher, the quality of distillation can be\ncontrolled to a large extent to achieve better performance. Our experiments\nshow that the quality of distillation is largely governed by the quality of\nteacher's response, which in turn is heavily affected by the presence of\nsimilarity information in its response. A well-trained large capacity teacher\nloses similarity information between classes in the process of learning\nfine-grained discriminative properties for classification. The absence of\nsimilarity information causes the distillation process to be reduced from one\nexample-many class learning to one example-one class learning, thereby\nthrottling the flow of diverse knowledge from the teacher. With the implicit\nassumption that only the instilled knowledge can be distilled, instead of\nfocusing only on the knowledge distilling process, we scrutinize the knowledge\ninculcation process. 
We argue that for a given teacher-student pair, the\nquality of distillation can be improved by finding the sweet spot between batch\nsize and number of epochs while training the teacher. We discuss the steps to\nfind this sweet spot for better distillation. We also propose the distillation\nhypothesis to differentiate the behavior of the distillation process between\nknowledge distillation and regularization effect. We conduct all our\nexperiments on three different datasets.", + "authors": "Vibhas Vats, David Crandall", + "published": "2021-12-19", + "updated": "2021-12-19", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0607126v3", + "title": "Random bipartite entanglement from W and W-like states", + "abstract": "We describe a protocol for distilling maximally entangled bipartite states\nbetween random pairs of parties from those sharing a tripartite W state, and\nshow that, rather surprisingly, the total distillation rate (the total number\nof EPR pairs distilled per W, irrespective of who shares them) may be done at a\nhigher rate than distillation of bipartite entanglement between specified pairs\nof parties. Specifically, the optimal distillation rate for specified\nentanglement for the W has been previously shown to be the asymptotic\nentanglement of assistance of 0.92 EPR pairs per W, while our protocol can\nasymptotically distill 1 EPR pair per W between random pairs of parties, which\nwe conjecture to be optimal. We thus demonstrate a tradeoff between the overall\nasymptotic rate of EPR distillation and the distribution of final EPR pairs\nbetween parties. We further show that by increasing the number of parties in\nthe protocol that there exist states with fixed lower-bounded distillable\nentanglement for random parties but arbitrarily small distillable entanglement\nfor specified parties.", + "authors": "Ben Fortescue, Hoi-Kwong Lo", + "published": "2006-07-18", + "updated": "2007-02-23", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2403.09053v1", + "title": "Towards a theory of model distillation", + "abstract": "Distillation is the task of replacing a complicated machine learning model\nwith a simpler model that approximates the original [BCNM06,HVD15]. Despite\nmany practical applications, basic questions about the extent to which models\ncan be distilled, and the runtime and amount of data needed to distill, remain\nlargely open.\n To study these questions, we initiate a general theory of distillation,\ndefining PAC-distillation in an analogous way to PAC-learning [Val84]. 
As\napplications of this theory: (1) we propose new algorithms to extract the\nknowledge stored in the trained weights of neural networks -- we show how to\nefficiently distill neural networks into succinct, explicit decision tree\nrepresentations when possible by using the ``linear representation\nhypothesis''; and (2) we prove that distillation can be much cheaper than\nlearning from scratch, and make progress on characterizing its complexity.", + "authors": "Enric Boix-Adsera", + "published": "2024-03-14", + "updated": "2024-03-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.NE" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/0708.3699v2", + "title": "Convolutional Entanglement Distillation", + "abstract": "We develop a theory of entanglement distillation that exploits a\nconvolutional coding structure. We provide a method for converting an arbitrary\nclassical binary or quaternary convolutional code into a convolutional\nentanglement distillation protocol. The imported classical convolutional code\ndoes not have to be dual-containing or self-orthogonal. The yield and\nerror-correcting properties of such a protocol depend respectively on the rate\nand error-correcting properties of the imported classical convolutional code. A\nconvolutional entanglement distillation protocol has several other benefits.\nTwo parties sharing noisy ebits can distill noiseless ebits ``online'' as they\nacquire more noisy ebits. Distillation yield is high and decoding complexity is\nsimple for a convolutional entanglement distillation protocol. Our theory of\nconvolutional entanglement distillation reduces the problem of finding a good\nconvolutional entanglement distillation protocol to the well-established\nproblem of finding a good classical convolutional code.", + "authors": "Mark M. Wilde, Hari Krovi, Todd A. Brun", + "published": "2007-08-28", + "updated": "2007-09-19", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph", + "cs.IT", + "math.IT" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2310.18628v2", + "title": "Personalised Distillation: Empowering Open-Sourced LLMs with Adaptive Learning for Code Generation", + "abstract": "With the rise of powerful closed-sourced LLMs (ChatGPT, GPT-4), there are\nincreasing interests in distilling the capabilies of close-sourced LLMs to\nsmaller open-sourced LLMs. Previous distillation methods usually prompt ChatGPT\nto generate a set of instructions and answers, for the student model to learn.\nHowever, such standard distillation approach neglects the merits and conditions\nof the student model. Inspired by modern teaching principles, we design a\npersonalised distillation process, in which the student attempts to solve a\ntask first, then the teacher provides an adaptive refinement for the student to\nimprove. Instead of feeding the student with teacher's prior, personalised\ndistillation enables personalised learning for the student model, as it only\nlearns on examples it makes mistakes upon and learns to improve its own\nsolution. On code generation, personalised distillation consistently\noutperforms standard distillation with only one third of the data. 
With only\n2.5-3K personalised examples that incur a data-collection cost of 4-6$, we\nboost CodeGen-mono-16B by 7% to achieve 36.4% pass@1 and StarCoder by 12.2% to\nachieve 45.8% pass@1 on HumanEval.", + "authors": "Hailin Chen, Amrita Saha, Steven Hoi, Shafiq Joty", + "published": "2023-10-28", + "updated": "2024-01-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1504.05965v2", + "title": "Qutrit Magic State Distillation Tight in Some Directions", + "abstract": "Magic state distillation is a crucial component in the leading approaches to\nimplementing universal fault tolerant quantum computation, with existing\nprotocols for both qubit and higher dimensional systems. Early work focused on\ndetermining the region of distillable states for qubit protocols, yet\ncomparatively little is known about which states can be distilled and with what\ndistillable region for d>2. Here we focus on d=3 and present new four-qutrit\ndistillation schemes that improve upon the known distillable region, and\nachieve distillation tight to the boundary of undistillable states for some\nclasses of state. As a consequence of recent results, this implies that there\nis a family of quantum states that enable universality if and only if they\nexhibit contextuality with respect to stabilizer measurements. We also identify\na new routine whose fixed point is a magic state with maximal sum-negativity\ni.e., it is maximally non-stabilizer in a specific sense.", + "authors": "Hillary Dawkins, Mark Howard", + "published": "2015-04-22", + "updated": "2015-09-21", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/0908.0836v3", + "title": "Bound States for Magic State Distillation in Fault-Tolerant Quantum Computation", + "abstract": "Magic state distillation is an important primitive in fault-tolerant quantum\ncomputation. The magic states are pure non-stabilizer states which can be\ndistilled from certain mixed non-stabilizer states via Clifford group\noperations alone. Because of the Gottesman-Knill theorem, mixtures of Pauli\neigenstates are not expected to be magic state distillable, but it has been an\nopen question whether all mixed states outside this set may be distilled. In\nthis Letter we show that, when resources are finitely limited, non-distillable\nstates exist outside the stabilizer octahedron. In analogy with the bound\nentangled states, which arise in entanglement theory, we call such states bound\nstates for magic state distillation.", + "authors": "Earl T. Campbell, Dan E. Browne", + "published": "2009-08-06", + "updated": "2010-02-01", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2311.09874v1", + "title": "Experimental virtual distillation of entanglement and coherence", + "abstract": "Noise is in general inevitable and detrimental to practical and useful\nquantum communication and computation. Under the resource theory framework,\nresource distillation serves as a generic tool to overcome the effect of noise.\nYet, conventional resource distillation protocols generally require operations\non multi-copies of resource states, and strong limitations exist that restrict\ntheir practical utilities. 
Recently, by relaxing the setting of resource\ndistillation to only approximating the measurement statistics instead of the\nquantum state, a resource-frugal protocol, virtual resource distillation, is\nproposed, which allows more effective distillation of noisy resources. Here, we\nreport its experimental implementation on a four-qubit photonic quantum system\nfor the distillation of quantum coherence (up to dimension 4) and bipartite\nentanglement. We show the virtual distillation of the maximal superposed state\nof dimension four from the state of dimension two, an impossible task in\nconventional coherence distillation. Furthermore, we demonstrate the virtual\ndistillation of entanglement with operations acting only on a single copy of\nthe noisy EPR pair and showcase the quantum teleportation task using the\nvirtually distilled EPR pair with a significantly improved fidelity of the\nteleported state. These results illustrate the feasibility of the virtual\nresource distillation method and pave the way for accurate manipulation of\nquantum resources with noisy quantum hardware.", + "authors": "Ting Zhang, Yukun Zhang, Lu Liu, Xiao-Xu Fang, Qian-Xi Zhang, Xiao Yuan, He Lu", + "published": "2023-11-16", + "updated": "2023-11-16", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + } + ] + ] + }, + { + "url": "http://arxiv.org/abs/2308.04079v1", + "title": "3D Gaussian Splatting for Real-Time Radiance Field Rendering", + "abstract": "Radiance Field methods have recently revolutionized novel-view synthesis of\nscenes captured with multiple photos or videos. However, achieving high visual\nquality still requires neural networks that are costly to train and render,\nwhile recent faster methods inevitably trade off speed for quality. For\nunbounded and complete scenes (rather than isolated objects) and 1080p\nresolution rendering, no current method can achieve real-time display rates. We\nintroduce three key elements that allow us to achieve state-of-the-art visual\nquality while maintaining competitive training times and importantly allow\nhigh-quality real-time (>= 30 fps) novel-view synthesis at 1080p resolution.\nFirst, starting from sparse points produced during camera calibration, we\nrepresent the scene with 3D Gaussians that preserve desirable properties of\ncontinuous volumetric radiance fields for scene optimization while avoiding\nunnecessary computation in empty space; Second, we perform interleaved\noptimization/density control of the 3D Gaussians, notably optimizing\nanisotropic covariance to achieve an accurate representation of the scene;\nThird, we develop a fast visibility-aware rendering algorithm that supports\nanisotropic splatting and both accelerates training and allows realtime\nrendering. We demonstrate state-of-the-art visual quality and real-time\nrendering on several established datasets.", + "authors": "Bernhard Kerbl, Georgios Kopanas, Thomas Leimk\u00fchler, George Drettakis", + "published": "2023-08-08", + "updated": "2023-08-08", + "primary_cat": "cs.GR", + "cats": [ + "cs.GR", + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2201.05989v2", + "title": "Instant Neural Graphics Primitives with a Multiresolution Hash Encoding", + "abstract": "Neural graphics primitives, parameterized by fully connected neural networks,\ncan be costly to train and evaluate. 
We reduce this cost with a versatile new\ninput encoding that permits the use of a smaller network without sacrificing\nquality, thus significantly reducing the number of floating point and memory\naccess operations: a small neural network is augmented by a multiresolution\nhash table of trainable feature vectors whose values are optimized through\nstochastic gradient descent. The multiresolution structure allows the network\nto disambiguate hash collisions, making for a simple architecture that is\ntrivial to parallelize on modern GPUs. We leverage this parallelism by\nimplementing the whole system using fully-fused CUDA kernels with a focus on\nminimizing wasted bandwidth and compute operations. We achieve a combined\nspeedup of several orders of magnitude, enabling training of high-quality\nneural graphics primitives in a matter of seconds, and rendering in tens of\nmilliseconds at a resolution of ${1920\\!\\times\\!1080}$.", + "authors": "Thomas M\u00fcller, Alex Evans, Christoph Schied, Alexander Keller", + "published": "2022-01-16", + "updated": "2022-05-04", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.GR", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.07600v1", + "title": "Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures", + "abstract": "Text-guided image generation has progressed rapidly in recent years,\ninspiring major breakthroughs in text-guided shape generation. Recently, it has\nbeen shown that using score distillation, one can successfully text-guide a\nNeRF model to generate a 3D object. We adapt the score distillation to the\npublicly available, and computationally efficient, Latent Diffusion Models,\nwhich apply the entire diffusion process in a compact latent space of a\npretrained autoencoder. As NeRFs operate in image space, a naive solution for\nguiding them with latent score distillation would require encoding to the\nlatent space at each guidance step. Instead, we propose to bring the NeRF to\nthe latent space, resulting in a Latent-NeRF. Analyzing our Latent-NeRF, we\nshow that while Text-to-3D models can generate impressive results, they are\ninherently unconstrained and may lack the ability to guide or enforce a\nspecific 3D structure. To assist and direct the 3D generation, we propose to\nguide our Latent-NeRF using a Sketch-Shape: an abstract geometry that defines\nthe coarse structure of the desired object. Then, we present means to integrate\nsuch a constraint directly into a Latent-NeRF. This unique combination of text\nand shape guidance allows for increased control over the generation process. We\nalso show that latent score distillation can be successfully applied directly\non 3D meshes. This allows for generating high-quality textures on a given\ngeometry. Our experiments validate the power of our different forms of guidance\nand the efficiency of using latent rendering. 
Implementation is available at\nhttps://github.com/eladrich/latent-nerf", + "authors": "Gal Metzer, Elad Richardson, Or Patashnik, Raja Giryes, Daniel Cohen-Or", + "published": "2022-11-14", + "updated": "2022-11-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.GR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2306.17843v2", + "title": "Magic123: One Image to High-Quality 3D Object Generation Using Both 2D and 3D Diffusion Priors", + "abstract": "We present Magic123, a two-stage coarse-to-fine approach for high-quality,\ntextured 3D meshes generation from a single unposed image in the wild using\nboth 2D and 3D priors. In the first stage, we optimize a neural radiance field\nto produce a coarse geometry. In the second stage, we adopt a memory-efficient\ndifferentiable mesh representation to yield a high-resolution mesh with a\nvisually appealing texture. In both stages, the 3D content is learned through\nreference view supervision and novel views guided by a combination of 2D and 3D\ndiffusion priors. We introduce a single trade-off parameter between the 2D and\n3D priors to control exploration (more imaginative) and exploitation (more\nprecise) of the generated geometry. Additionally, we employ textual inversion\nand monocular depth regularization to encourage consistent appearances across\nviews and to prevent degenerate solutions, respectively. Magic123 demonstrates\na significant improvement over previous image-to-3D techniques, as validated\nthrough extensive experiments on synthetic benchmarks and diverse real-world\nimages. Our code, models, and generated 3D assets are available at\nhttps://github.com/guochengqian/Magic123.", + "authors": "Guocheng Qian, Jinjie Mai, Abdullah Hamdi, Jian Ren, Aliaksandr Siarohin, Bing Li, Hsin-Ying Lee, Ivan Skorokhodov, Peter Wonka, Sergey Tulyakov, Bernard Ghanem", + "published": "2023-06-30", + "updated": "2023-07-23", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2111.04276v1", + "title": "Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis", + "abstract": "We introduce DMTet, a deep 3D conditional generative model that can\nsynthesize high-resolution 3D shapes using simple user guides such as coarse\nvoxels. It marries the merits of implicit and explicit 3D representations by\nleveraging a novel hybrid 3D representation. Compared to the current implicit\napproaches, which are trained to regress the signed distance values, DMTet\ndirectly optimizes for the reconstructed surface, which enables us to\nsynthesize finer geometric details with fewer artifacts. Unlike deep 3D\ngenerative models that directly generate explicit representations such as\nmeshes, our model can synthesize shapes with arbitrary topology. The core of\nDMTet includes a deformable tetrahedral grid that encodes a discretized signed\ndistance function and a differentiable marching tetrahedra layer that converts\nthe implicit signed distance representation to the explicit surface mesh\nrepresentation. This combination allows joint optimization of the surface\ngeometry and topology as well as generation of the hierarchy of subdivisions\nusing reconstruction and adversarial losses defined explicitly on the surface\nmesh. Our approach significantly outperforms existing work on conditional shape\nsynthesis from coarse voxel inputs, trained on a dataset of complex 3D animal\nshapes. 
Project page: https://nv-tlabs.github.io/DMTet/.", + "authors": "Tianchang Shen, Jun Gao, Kangxue Yin, Ming-Yu Liu, Sanja Fidler", + "published": "2021-11-08", + "updated": "2021-11-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2303.13873v3", + "title": "Fantasia3D: Disentangling Geometry and Appearance for High-quality Text-to-3D Content Creation", + "abstract": "Automatic 3D content creation has achieved rapid progress recently due to the\navailability of pre-trained, large language models and image diffusion models,\nforming the emerging topic of text-to-3D content creation. Existing text-to-3D\nmethods commonly use implicit scene representations, which couple the geometry\nand appearance via volume rendering and are suboptimal in terms of recovering\nfiner geometries and achieving photorealistic rendering; consequently, they are\nless effective for generating high-quality 3D assets. In this work, we propose\na new method of Fantasia3D for high-quality text-to-3D content creation. Key to\nFantasia3D is the disentangled modeling and learning of geometry and\nappearance. For geometry learning, we rely on a hybrid scene representation,\nand propose to encode surface normal extracted from the representation as the\ninput of the image diffusion model. For appearance modeling, we introduce the\nspatially varying bidirectional reflectance distribution function (BRDF) into\nthe text-to-3D task, and learn the surface material for photorealistic\nrendering of the generated surface. Our disentangled framework is more\ncompatible with popular graphics engines, supporting relighting, editing, and\nphysical simulation of the generated 3D assets. We conduct thorough experiments\nthat show the advantages of our method over existing ones under different\ntext-to-3D task settings. Project page and source codes:\nhttps://fantasia3d.github.io/.", + "authors": "Rui Chen, Yongwei Chen, Ningxin Jiao, Kui Jia", + "published": "2023-03-24", + "updated": "2023-09-27", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2304.12439v1", + "title": "TextMesh: Generation of Realistic 3D Meshes From Text Prompts", + "abstract": "The ability to generate highly realistic 2D images from mere text prompts has\nrecently made huge progress in terms of speed and quality, thanks to the advent\nof image diffusion models. Naturally, the question arises if this can be also\nachieved in the generation of 3D content from such text prompts. To this end, a\nnew line of methods recently emerged trying to harness diffusion models,\ntrained on 2D images, for supervision of 3D model generation using view\ndependent prompts. While achieving impressive results, these methods, however,\nhave two major drawbacks. First, rather than commonly used 3D meshes, they\ninstead generate neural radiance fields (NeRFs), making them impractical for\nmost real applications. Second, these approaches tend to produce over-saturated\nmodels, giving the output a cartoonish looking effect. Therefore, in this work\nwe propose a novel method for generation of highly realistic-looking 3D meshes.\nTo this end, we extend NeRF to employ an SDF backbone, leading to improved 3D\nmesh extraction. 
In addition, we propose a novel way to finetune the mesh\ntexture, removing the effect of high saturation and improving the details of\nthe output 3D mesh.", + "authors": "Christina Tsalicoglou, Fabian Manhardt, Alessio Tonioni, Michael Niemeyer, Federico Tombari", + "published": "2023-04-24", + "updated": "2023-04-24", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2209.14988v1", + "title": "DreamFusion: Text-to-3D using 2D Diffusion", + "abstract": "Recent breakthroughs in text-to-image synthesis have been driven by diffusion\nmodels trained on billions of image-text pairs. Adapting this approach to 3D\nsynthesis would require large-scale datasets of labeled 3D data and efficient\narchitectures for denoising 3D data, neither of which currently exist. In this\nwork, we circumvent these limitations by using a pretrained 2D text-to-image\ndiffusion model to perform text-to-3D synthesis. We introduce a loss based on\nprobability density distillation that enables the use of a 2D diffusion model\nas a prior for optimization of a parametric image generator. Using this loss in\na DeepDream-like procedure, we optimize a randomly-initialized 3D model (a\nNeural Radiance Field, or NeRF) via gradient descent such that its 2D\nrenderings from random angles achieve a low loss. The resulting 3D model of the\ngiven text can be viewed from any angle, relit by arbitrary illumination, or\ncomposited into any 3D environment. Our approach requires no 3D training data\nand no modifications to the image diffusion model, demonstrating the\neffectiveness of pretrained image diffusion models as priors.", + "authors": "Ben Poole, Ajay Jain, Jonathan T. Barron, Ben Mildenhall", + "published": "2022-09-29", + "updated": "2022-09-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2301.09629v2", + "title": "LEGO-Net: Learning Regular Rearrangements of Objects in Rooms", + "abstract": "Humans universally dislike the task of cleaning up a messy room. If machines\nwere to help us with this task, they must understand human criteria for regular\narrangements, such as several types of symmetry, co-linearity or\nco-circularity, spacing uniformity in linear or circular patterns, and further\ninter-object relationships that relate to style and functionality. Previous\napproaches for this task relied on human input to explicitly specify goal\nstate, or synthesized scenes from scratch -- but such methods do not address\nthe rearrangement of existing messy scenes without providing a goal state. In\nthis paper, we present LEGO-Net, a data-driven transformer-based iterative\nmethod for LEarning reGular rearrangement of Objects in messy rooms. LEGO-Net\nis partly inspired by diffusion models -- it starts with an initial messy state\nand iteratively ''de-noises'' the position and orientation of objects to a\nregular state while reducing distance traveled. Given randomly perturbed object\npositions and orientations in an existing dataset of professionally-arranged\nscenes, our method is trained to recover a regular re-arrangement. Results\ndemonstrate that our method is able to reliably rearrange room scenes and\noutperform other methods. 
We additionally propose a metric for evaluating\nregularity in room arrangements using number-theoretic machinery.", + "authors": "Qiuhong Anna Wei, Sijie Ding, Jeong Joon Park, Rahul Sajnani, Adrien Poulenard, Srinath Sridhar, Leonidas Guibas", + "published": "2023-01-23", + "updated": "2023-03-24", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2207.13751v1", + "title": "GAUDI: A Neural Architect for Immersive 3D Scene Generation", + "abstract": "We introduce GAUDI, a generative model capable of capturing the distribution\nof complex and realistic 3D scenes that can be rendered immersively from a\nmoving camera. We tackle this challenging problem with a scalable yet powerful\napproach, where we first optimize a latent representation that disentangles\nradiance fields and camera poses. This latent representation is then used to\nlearn a generative model that enables both unconditional and conditional\ngeneration of 3D scenes. Our model generalizes previous works that focus on\nsingle objects by removing the assumption that the camera pose distribution can\nbe shared across samples. We show that GAUDI obtains state-of-the-art\nperformance in the unconditional generative setting across multiple datasets\nand allows for conditional generation of 3D scenes given conditioning variables\nlike sparse image observations or text that describes the scene.", + "authors": "Miguel Angel Bautista, Pengsheng Guo, Samira Abnar, Walter Talbott, Alexander Toshev, Zhuoyuan Chen, Laurent Dinh, Shuangfei Zhai, Hanlin Goh, Daniel Ulbricht, Afshin Dehghan, Josh Susskind", + "published": "2022-07-27", + "updated": "2022-07-27", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.GR", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.02596v2", + "title": "SweetDreamer: Aligning Geometric Priors in 2D Diffusion for Consistent Text-to-3D", + "abstract": "It is inherently ambiguous to lift 2D results from pre-trained diffusion\nmodels to a 3D world for text-to-3D generation. 2D diffusion models solely\nlearn view-agnostic priors and thus lack 3D knowledge during the lifting,\nleading to the multi-view inconsistency problem. We find that this problem\nprimarily stems from geometric inconsistency, and avoiding misplaced geometric\nstructures substantially mitigates the problem in the final outputs. Therefore,\nwe improve the consistency by aligning the 2D geometric priors in diffusion\nmodels with well-defined 3D shapes during the lifting, addressing the vast\nmajority of the problem. This is achieved by fine-tuning the 2D diffusion model\nto be viewpoint-aware and to produce view-specific coordinate maps of\ncanonically oriented 3D objects. In our process, only coarse 3D information is\nused for aligning. This \"coarse\" alignment not only resolves the multi-view\ninconsistency in geometries but also retains the ability in 2D diffusion models\nto generate detailed and diversified high-quality objects unseen in the 3D\ndatasets. Furthermore, our aligned geometric priors (AGP) are generic and can\nbe seamlessly integrated into various state-of-the-art pipelines, obtaining\nhigh generalizability in terms of unseen shapes and visual appearance while\ngreatly alleviating the multi-view inconsistency problem. Our method represents\na new state-of-the-art performance with an 85+% consistency rate by human\nevaluation, while many previous methods are around 30%. 
Our project page is\nhttps://sweetdreamer3d.github.io/", + "authors": "Weiyu Li, Rui Chen, Xuelin Chen, Ping Tan", + "published": "2023-10-04", + "updated": "2023-10-20", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.18766v4", + "title": "HiFA: High-fidelity Text-to-3D Generation with Advanced Diffusion Guidance", + "abstract": "The advancements in automatic text-to-3D generation have been remarkable.\nMost existing methods use pre-trained text-to-image diffusion models to\noptimize 3D representations like Neural Radiance Fields (NeRFs) via\nlatent-space denoising score matching. Yet, these methods often result in\nartifacts and inconsistencies across different views due to their suboptimal\noptimization approaches and limited understanding of 3D geometry. Moreover, the\ninherent constraints of NeRFs in rendering crisp geometry and stable textures\nusually lead to a two-stage optimization to attain high-resolution details.\nThis work proposes holistic sampling and smoothing approaches to achieve\nhigh-quality text-to-3D generation, all in a single-stage optimization. We\ncompute denoising scores in the text-to-image diffusion model's latent and\nimage spaces. Instead of randomly sampling timesteps (also referred to as noise\nlevels in denoising score matching), we introduce a novel timestep annealing\napproach that progressively reduces the sampled timestep throughout\noptimization. To generate high-quality renderings in a single-stage\noptimization, we propose regularization for the variance of z-coordinates along\nNeRF rays. To address texture flickering issues in NeRFs, we introduce a kernel\nsmoothing technique that refines importance sampling weights coarse-to-fine,\nensuring accurate and thorough sampling in high-density regions. Extensive\nexperiments demonstrate the superiority of our method over previous approaches,\nenabling the generation of highly detailed and view-consistent 3D assets\nthrough a single-stage training process.", + "authors": "Junzhe Zhu, Peiye Zhuang, Sanmi Koyejo", + "published": "2023-05-30", + "updated": "2024-03-11", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2302.10663v2", + "title": "RealFusion: 360\u00b0 Reconstruction of Any Object from a Single Image", + "abstract": "We consider the problem of reconstructing a full 360{\deg} photographic model\nof an object from a single image of it. We do so by fitting a neural radiance\nfield to the image, but find this problem to be severely ill-posed. We thus\ntake an off-the-shelf conditional image generator based on diffusion and\nengineer a prompt that encourages it to \"dream up\" novel views of the object.\nUsing an approach inspired by DreamFields and DreamFusion, we fuse the given\ninput view, the conditional prior, and other regularizers in a final,\nconsistent reconstruction. We demonstrate state-of-the-art reconstruction\nresults on benchmark images when compared to prior methods for monocular 3D\nreconstruction of objects. 
Qualitatively, our reconstructions provide a\nfaithful match of the input view and a plausible extrapolation of its\nappearance and 3D shape, including to the side of the object not visible in the\nimage.", + "authors": "Luke Melas-Kyriazi, Christian Rupprecht, Iro Laina, Andrea Vedaldi", + "published": "2023-02-21", + "updated": "2023-02-23", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.16818v2", + "title": "DreamCraft3D: Hierarchical 3D Generation with Bootstrapped Diffusion Prior", + "abstract": "We present DreamCraft3D, a hierarchical 3D content generation method that\nproduces high-fidelity and coherent 3D objects. We tackle the problem by\nleveraging a 2D reference image to guide the stages of geometry sculpting and\ntexture boosting. A central focus of this work is to address the consistency\nissue that existing works encounter. To sculpt geometries that render\ncoherently, we perform score distillation sampling via a view-dependent\ndiffusion model. This 3D prior, alongside several training strategies,\nprioritizes the geometry consistency but compromises the texture fidelity. We\nfurther propose Bootstrapped Score Distillation to specifically boost the\ntexture. We train a personalized diffusion model, Dreambooth, on the augmented\nrenderings of the scene, imbuing it with 3D knowledge of the scene being\noptimized. The score distillation from this 3D-aware diffusion prior provides\nview-consistent guidance for the scene. Notably, through an alternating\noptimization of the diffusion prior and 3D scene representation, we achieve\nmutually reinforcing improvements: the optimized 3D scene aids in training the\nscene-specific diffusion model, which offers increasingly view-consistent\nguidance for 3D optimization. The optimization is thus bootstrapped and leads\nto substantial texture boosting. With tailored 3D priors throughout the\nhierarchical generation, DreamCraft3D generates coherent 3D objects with\nphotorealistic renderings, advancing the state-of-the-art in 3D content\ngeneration. Code available at https://github.com/deepseek-ai/DreamCraft3D.", + "authors": "Jingxiang Sun, Bo Zhang, Ruizhi Shao, Lizhen Wang, Wen Liu, Zhenda Xie, Yebin Liu", + "published": "2023-10-25", + "updated": "2023-10-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.10440v2", + "title": "Magic3D: High-Resolution Text-to-3D Content Creation", + "abstract": "DreamFusion has recently demonstrated the utility of a pre-trained\ntext-to-image diffusion model to optimize Neural Radiance Fields (NeRF),\nachieving remarkable text-to-3D synthesis results. However, the method has two\ninherent limitations: (a) extremely slow optimization of NeRF and (b)\nlow-resolution image space supervision on NeRF, leading to low-quality 3D\nmodels with a long processing time. In this paper, we address these limitations\nby utilizing a two-stage optimization framework. First, we obtain a coarse\nmodel using a low-resolution diffusion prior and accelerate with a sparse 3D\nhash grid structure. Using the coarse representation as the initialization, we\nfurther optimize a textured 3D mesh model with an efficient differentiable\nrenderer interacting with a high-resolution latent diffusion model. 
Our method,\ndubbed Magic3D, can create high quality 3D mesh models in 40 minutes, which is\n2x faster than DreamFusion (reportedly taking 1.5 hours on average), while also\nachieving higher resolution. User studies show 61.7% raters to prefer our\napproach over DreamFusion. Together with the image-conditioned generation\ncapabilities, we provide users with new ways to control 3D synthesis, opening\nup new avenues to various creative applications.", + "authors": "Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu, Tsung-Yi Lin", + "published": "2022-11-18", + "updated": "2023-03-25", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.GR", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2308.16512v4", + "title": "MVDream: Multi-view Diffusion for 3D Generation", + "abstract": "We introduce MVDream, a diffusion model that is able to generate consistent\nmulti-view images from a given text prompt. Learning from both 2D and 3D data,\na multi-view diffusion model can achieve the generalizability of 2D diffusion\nmodels and the consistency of 3D renderings. We demonstrate that such a\nmulti-view diffusion model is implicitly a generalizable 3D prior agnostic to\n3D representations. It can be applied to 3D generation via Score Distillation\nSampling, significantly enhancing the consistency and stability of existing\n2D-lifting methods. It can also learn new concepts from a few 2D examples, akin\nto DreamBooth, but for 3D generation.", + "authors": "Yichun Shi, Peng Wang, Jianglong Ye, Mai Long, Kejie Li, Xiao Yang", + "published": "2023-08-31", + "updated": "2024-04-18", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2303.11328v1", + "title": "Zero-1-to-3: Zero-shot One Image to 3D Object", + "abstract": "We introduce Zero-1-to-3, a framework for changing the camera viewpoint of an\nobject given just a single RGB image. To perform novel view synthesis in this\nunder-constrained setting, we capitalize on the geometric priors that\nlarge-scale diffusion models learn about natural images. Our conditional\ndiffusion model uses a synthetic dataset to learn controls of the relative\ncamera viewpoint, which allow new images to be generated of the same object\nunder a specified camera transformation. Even though it is trained on a\nsynthetic dataset, our model retains a strong zero-shot generalization ability\nto out-of-distribution datasets as well as in-the-wild images, including\nimpressionist paintings. 
Our viewpoint-conditioned diffusion approach can\nfurther be used for the task of 3D reconstruction from a single image.\nQualitative and quantitative experiments show that our method significantly\noutperforms state-of-the-art single-view 3D reconstruction and novel view\nsynthesis models by leveraging Internet-scale pre-training.", + "authors": "Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, Carl Vondrick", + "published": "2023-03-20", + "updated": "2023-03-20", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.GR", + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2106.10689v3", + "title": "NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction", + "abstract": "We present a novel neural surface reconstruction method, called NeuS, for\nreconstructing objects and scenes with high fidelity from 2D image inputs.\nExisting neural surface reconstruction approaches, such as DVR and IDR, require\nforeground mask as supervision, easily get trapped in local minima, and\ntherefore struggle with the reconstruction of objects with severe\nself-occlusion or thin structures. Meanwhile, recent neural methods for novel\nview synthesis, such as NeRF and its variants, use volume rendering to produce\na neural scene representation with robustness of optimization, even for highly\ncomplex objects. However, extracting high-quality surfaces from this learned\nimplicit representation is difficult because there are not sufficient surface\nconstraints in the representation. In NeuS, we propose to represent a surface\nas the zero-level set of a signed distance function (SDF) and develop a new\nvolume rendering method to train a neural SDF representation. We observe that\nthe conventional volume rendering method causes inherent geometric errors (i.e.\nbias) for surface reconstruction, and therefore propose a new formulation that\nis free of bias in the first order of approximation, thus leading to more\naccurate surface reconstruction even without the mask supervision. Experiments\non the DTU dataset and the BlendedMVS dataset show that NeuS outperforms the\nstate-of-the-arts in high-quality surface reconstruction, especially for\nobjects and scenes with complex structures and self-occlusion.", + "authors": "Peng Wang, Lingjie Liu, Yuan Liu, Christian Theobalt, Taku Komura, Wenping Wang", + "published": "2021-06-20", + "updated": "2023-02-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.GR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2304.04262v1", + "title": "A Comprehensive Survey on Knowledge Distillation of Diffusion Models", + "abstract": "Diffusion Models (DMs), also referred to as score-based diffusion models,\nutilize neural networks to specify score functions. Unlike most other\nprobabilistic models, DMs directly model the score functions, which makes them\nmore flexible to parametrize and potentially highly expressive for\nprobabilistic modeling. DMs can learn fine-grained knowledge, i.e., marginal\nscore functions, of the underlying distribution. Therefore, a crucial research\ndirection is to explore how to distill the knowledge of DMs and fully utilize\ntheir potential. Our objective is to provide a comprehensible overview of the\nmodern approaches for distilling DMs, starting with an introduction to DMs and\na discussion of the challenges involved in distilling them into neural vector\nfields. 
We also provide an overview of the existing works on distilling DMs\ninto both stochastic and deterministic implicit generators. Finally, we review\nthe accelerated diffusion sampling algorithms as a training-free method for\ndistillation. Our tutorial is intended for individuals with a basic\nunderstanding of generative models who wish to apply DM's distillation or\nembark on a research project in this field.", + "authors": "Weijian Luo", + "published": "2023-04-09", + "updated": "2023-04-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2304.04968v3", + "title": "Re-imagine the Negative Prompt Algorithm: Transform 2D Diffusion into 3D, alleviate Janus problem and Beyond", + "abstract": "Although text-to-image diffusion models have made significant strides in\ngenerating images from text, they are sometimes more inclined to generate\nimages like the data on which the model was trained rather than the provided\ntext. This limitation has hindered their usage in both 2D and 3D applications.\nTo address this problem, we explored the use of negative prompts but found that\nthe current implementation fails to produce desired results, particularly when\nthere is an overlap between the main and negative prompts. To overcome this\nissue, we propose Perp-Neg, a new algorithm that leverages the geometrical\nproperties of the score space to address the shortcomings of the current\nnegative prompts algorithm. Perp-Neg does not require any training or\nfine-tuning of the model. Moreover, we experimentally demonstrate that Perp-Neg\nprovides greater flexibility in generating images by enabling users to edit out\nunwanted concepts from the initially generated images in 2D cases. Furthermore,\nto extend the application of Perp-Neg to 3D, we conducted a thorough\nexploration of how Perp-Neg can be used in 2D to condition the diffusion model\nto generate desired views, rather than being biased toward the canonical views.\nFinally, we applied our 2D intuition to integrate Perp-Neg with the\nstate-of-the-art text-to-3D (DreamFusion) method, effectively addressing its\nJanus (multi-head) problem. Our project page is available at\nhttps://Perp-Neg.github.io/", + "authors": "Mohammadreza Armandpour, Ali Sadeghian, Huangjie Zheng, Amir Sadeghian, Mingyuan Zhou", + "published": "2023-04-11", + "updated": "2023-04-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.GR", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2112.10752v2", + "title": "High-Resolution Image Synthesis with Latent Diffusion Models", + "abstract": "By decomposing the image formation process into a sequential application of\ndenoising autoencoders, diffusion models (DMs) achieve state-of-the-art\nsynthesis results on image data and beyond. Additionally, their formulation\nallows for a guiding mechanism to control the image generation process without\nretraining. However, since these models typically operate directly in pixel\nspace, optimization of powerful DMs often consumes hundreds of GPU days and\ninference is expensive due to sequential evaluations. To enable DM training on\nlimited computational resources while retaining their quality and flexibility,\nwe apply them in the latent space of powerful pretrained autoencoders. 
In\ncontrast to previous work, training diffusion models on such a representation\nallows for the first time to reach a near-optimal point between complexity\nreduction and detail preservation, greatly boosting visual fidelity. By\nintroducing cross-attention layers into the model architecture, we turn\ndiffusion models into powerful and flexible generators for general conditioning\ninputs such as text or bounding boxes and high-resolution synthesis becomes\npossible in a convolutional manner. Our latent diffusion models (LDMs) achieve\na new state of the art for image inpainting and highly competitive performance\non various tasks, including unconditional image generation, semantic scene\nsynthesis, and super-resolution, while significantly reducing computational\nrequirements compared to pixel-based DMs. Code is available at\nhttps://github.com/CompVis/latent-diffusion .", + "authors": "Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Bj\u00f6rn Ommer", + "published": "2021-12-20", + "updated": "2022-04-13", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2309.16653v2", + "title": "DreamGaussian: Generative Gaussian Splatting for Efficient 3D Content Creation", + "abstract": "Recent advances in 3D content creation mostly leverage optimization-based 3D\ngeneration via score distillation sampling (SDS). Though promising results have\nbeen exhibited, these methods often suffer from slow per-sample optimization,\nlimiting their practical usage. In this paper, we propose DreamGaussian, a\nnovel 3D content generation framework that achieves both efficiency and quality\nsimultaneously. Our key insight is to design a generative 3D Gaussian Splatting\nmodel with companioned mesh extraction and texture refinement in UV space. In\ncontrast to the occupancy pruning used in Neural Radiance Fields, we\ndemonstrate that the progressive densification of 3D Gaussians converges\nsignificantly faster for 3D generative tasks. To further enhance the texture\nquality and facilitate downstream applications, we introduce an efficient\nalgorithm to convert 3D Gaussians into textured meshes and apply a fine-tuning\nstage to refine the details. Extensive experiments demonstrate the superior\nefficiency and competitive generation quality of our proposed approach.\nNotably, DreamGaussian produces high-quality textured meshes in just 2 minutes\nfrom a single-view image, achieving approximately 10 times acceleration\ncompared to existing methods.", + "authors": "Jiaxiang Tang, Jiawei Ren, Hang Zhou, Ziwei Liu, Gang Zeng", + "published": "2023-09-28", + "updated": "2024-03-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2303.12074v2", + "title": "CC3D: Layout-Conditioned Generation of Compositional 3D Scenes", + "abstract": "In this work, we introduce CC3D, a conditional generative model that\nsynthesizes complex 3D scenes conditioned on 2D semantic scene layouts, trained\nusing single-view images. Different from most existing 3D GANs that limit their\napplicability to aligned single objects, we focus on generating complex scenes\nwith multiple objects, by modeling the compositional nature of 3D scenes. 
By\ndevising a 2D layout-based approach for 3D synthesis and implementing a new 3D\nfield representation with a stronger geometric inductive bias, we have created\na 3D GAN that is both efficient and of high quality, while allowing for a more\ncontrollable generation process. Our evaluations on synthetic 3D-FRONT and\nreal-world KITTI-360 datasets demonstrate that our model generates scenes of\nimproved visual and geometric quality in comparison to previous works.", + "authors": "Sherwin Bahmani, Jeong Joon Park, Despoina Paschalidou, Xingguang Yan, Gordon Wetzstein, Leonidas Guibas, Andrea Tagliasacchi", + "published": "2023-03-21", + "updated": "2023-09-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2003.08934v2", + "title": "NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis", + "abstract": "We present a method that achieves state-of-the-art results for synthesizing\nnovel views of complex scenes by optimizing an underlying continuous volumetric\nscene function using a sparse set of input views. Our algorithm represents a\nscene using a fully-connected (non-convolutional) deep network, whose input is\na single continuous 5D coordinate (spatial location $(x,y,z)$ and viewing\ndirection $(\\theta, \\phi)$) and whose output is the volume density and\nview-dependent emitted radiance at that spatial location. We synthesize views\nby querying 5D coordinates along camera rays and use classic volume rendering\ntechniques to project the output colors and densities into an image. Because\nvolume rendering is naturally differentiable, the only input required to\noptimize our representation is a set of images with known camera poses. We\ndescribe how to effectively optimize neural radiance fields to render\nphotorealistic novel views of scenes with complicated geometry and appearance,\nand demonstrate results that outperform prior work on neural rendering and view\nsynthesis. View synthesis results are best viewed as videos, so we urge readers\nto view our supplementary video for convincing comparisons.", + "authors": "Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng", + "published": "2020-03-19", + "updated": "2020-08-03", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.GR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2212.00774v1", + "title": "Score Jacobian Chaining: Lifting Pretrained 2D Diffusion Models for 3D Generation", + "abstract": "A diffusion model learns to predict a vector field of gradients. We propose\nto apply chain rule on the learned gradients, and back-propagate the score of a\ndiffusion model through the Jacobian of a differentiable renderer, which we\ninstantiate to be a voxel radiance field. This setup aggregates 2D scores at\nmultiple camera viewpoints into a 3D score, and repurposes a pretrained 2D\nmodel for 3D data generation. We identify a technical challenge of distribution\nmismatch that arises in this application, and propose a novel estimation\nmechanism to resolve it. We run our algorithm on several off-the-shelf\ndiffusion image generative models, including the recently released Stable\nDiffusion trained on the large-scale LAION dataset.", + "authors": "Haochen Wang, Xiaodan Du, Jiahao Li, Raymond A. 
Yeh, Greg Shakhnarovich", + "published": "2022-12-01", + "updated": "2022-12-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.11784v2", + "title": "Progressive3D: Progressively Local Editing for Text-to-3D Content Creation with Complex Semantic Prompts", + "abstract": "Recent text-to-3D generation methods achieve impressive 3D content creation\ncapacity thanks to the advances in image diffusion models and optimizing\nstrategies. However, current methods struggle to generate correct 3D content\nfor a complex prompt in semantics, i.e., a prompt describing multiple\ninteracted objects binding with different attributes. In this work, we propose\na general framework named Progressive3D, which decomposes the entire generation\ninto a series of locally progressive editing steps to create precise 3D content\nfor complex prompts, and we constrain the content change to only occur in\nregions determined by user-defined region prompts in each editing step.\nFurthermore, we propose an overlapped semantic component suppression technique\nto encourage the optimization process to focus more on the semantic differences\nbetween prompts. Extensive experiments demonstrate that the proposed\nProgressive3D framework generates precise 3D content for prompts with complex\nsemantics and is general for various text-to-3D methods driven by different 3D\nrepresentations.", + "authors": "Xinhua Cheng, Tianyu Yang, Jianan Wang, Yu Li, Lei Zhang, Jian Zhang, Li Yuan", + "published": "2023-10-18", + "updated": "2024-03-16", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.15008v3", + "title": "Wonder3D: Single Image to 3D using Cross-Domain Diffusion", + "abstract": "In this work, we introduce Wonder3D, a novel method for efficiently\ngenerating high-fidelity textured meshes from single-view images.Recent methods\nbased on Score Distillation Sampling (SDS) have shown the potential to recover\n3D geometry from 2D diffusion priors, but they typically suffer from\ntime-consuming per-shape optimization and inconsistent geometry. In contrast,\ncertain works directly produce 3D information via fast network inferences, but\ntheir results are often of low quality and lack geometric details. To\nholistically improve the quality, consistency, and efficiency of image-to-3D\ntasks, we propose a cross-domain diffusion model that generates multi-view\nnormal maps and the corresponding color images. To ensure consistency, we\nemploy a multi-view cross-domain attention mechanism that facilitates\ninformation exchange across views and modalities. Lastly, we introduce a\ngeometry-aware normal fusion algorithm that extracts high-quality surfaces from\nthe multi-view 2D representations. 
Our extensive evaluations demonstrate that\nour method achieves high-quality reconstruction results, robust generalization,\nand reasonably good efficiency compared to prior works.", + "authors": "Xiaoxiao Long, Yuan-Chen Guo, Cheng Lin, Yuan Liu, Zhiyang Dou, Lingjie Liu, Yuexin Ma, Song-Hai Zhang, Marc Habermann, Christian Theobalt, Wenping Wang", + "published": "2023-10-23", + "updated": "2023-11-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2203.13333v2", + "title": "CLIP-Mesh: Generating textured meshes from text using pretrained image-text models", + "abstract": "We present a technique for zero-shot generation of a 3D model using only a\ntarget text prompt. Without any 3D supervision our method deforms the control\nshape of a limit subdivided surface along with its texture map and normal map\nto obtain a 3D asset that corresponds to the input text prompt and can be\neasily deployed into games or modeling applications. We rely only on a\npre-trained CLIP model that compares the input text prompt with differentiably\nrendered images of our 3D model. While previous works have focused on\nstylization or required training of generative models we perform optimization\non mesh parameters directly to generate shape, texture or both. To constrain\nthe optimization to produce plausible meshes and textures we introduce a number\nof techniques using image augmentations and the use of a pretrained prior that\ngenerates CLIP image embeddings given a text embedding.", + "authors": "Nasir Mohammad Khalid, Tianhao Xie, Eugene Belilovsky, Tiberiu Popa", + "published": "2022-03-24", + "updated": "2022-09-02", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.GR", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2012.09793v2", + "title": "SceneFormer: Indoor Scene Generation with Transformers", + "abstract": "We address the task of indoor scene generation by generating a sequence of\nobjects, along with their locations and orientations conditioned on a room\nlayout. Large-scale indoor scene datasets allow us to extract patterns from\nuser-designed indoor scenes, and generate new scenes based on these patterns.\nExisting methods rely on the 2D or 3D appearance of these scenes in addition to\nobject positions, and make assumptions about the possible relations between\nobjects. In contrast, we do not use any appearance information, and implicitly\nlearn object relations using the self-attention mechanism of transformers. We\nshow that our model design leads to faster scene generation with similar or\nimproved levels of realism compared to previous methods. Our method is also\nflexible, as it can be conditioned not only on the room layout but also on text\ndescriptions of the room, using only the cross-attention mechanism of\ntransformers. Our user study shows that our generated scenes are preferred to\nthe state-of-the-art FastSynth scenes 53.9% and 56.7% of the time for bedroom\nand living room scenes, respectively. 
At the same time, we generate a scene in\n1.48 seconds on average, 20% faster than FastSynth.", + "authors": "Xinpeng Wang, Chandan Yeshwanth, Matthias Nie\u00dfner", + "published": "2020-12-17", + "updated": "2021-04-02", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2106.12052v2", + "title": "Volume Rendering of Neural Implicit Surfaces", + "abstract": "Neural volume rendering became increasingly popular recently due to its\nsuccess in synthesizing novel views of a scene from a sparse set of input\nimages. So far, the geometry learned by neural volume rendering techniques was\nmodeled using a generic density function. Furthermore, the geometry itself was\nextracted using an arbitrary level set of the density function leading to a\nnoisy, often low fidelity reconstruction. The goal of this paper is to improve\ngeometry representation and reconstruction in neural volume rendering. We\nachieve that by modeling the volume density as a function of the geometry. This\nis in contrast to previous work modeling the geometry as a function of the\nvolume density. In more detail, we define the volume density function as\nLaplace's cumulative distribution function (CDF) applied to a signed distance\nfunction (SDF) representation. This simple density representation has three\nbenefits: (i) it provides a useful inductive bias to the geometry learned in\nthe neural volume rendering process; (ii) it facilitates a bound on the opacity\napproximation error, leading to an accurate sampling of the viewing ray.\nAccurate sampling is important to provide a precise coupling of geometry and\nradiance; and (iii) it allows efficient unsupervised disentanglement of shape\nand appearance in volume rendering. Applying this new density representation to\nchallenging scene multiview datasets produced high quality geometry\nreconstructions, outperforming relevant baselines. Furthermore, switching shape\nand appearance between scenes is possible due to the disentanglement of the\ntwo.", + "authors": "Lior Yariv, Jiatao Gu, Yoni Kasten, Yaron Lipman", + "published": "2021-06-22", + "updated": "2021-12-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2309.16585v4", + "title": "Text-to-3D using Gaussian Splatting", + "abstract": "Automatic text-to-3D generation that combines Score Distillation Sampling\n(SDS) with the optimization of volume rendering has achieved remarkable\nprogress in synthesizing realistic 3D objects. Yet most existing text-to-3D\nmethods by SDS and volume rendering suffer from inaccurate geometry, e.g., the\nJanus issue, since it is hard to explicitly integrate 3D priors into implicit\n3D representations. Besides, it is usually time-consuming for them to generate\nelaborate 3D models with rich colors. In response, this paper proposes GSGEN, a\nnovel method that adopts Gaussian Splatting, a recent state-of-the-art\nrepresentation, to text-to-3D generation. GSGEN aims at generating high-quality\n3D objects and addressing existing shortcomings by exploiting the explicit\nnature of Gaussian Splatting that enables the incorporation of 3D prior.\nSpecifically, our method adopts a progressive optimization strategy, which\nincludes a geometry optimization stage and an appearance refinement stage. 
In\ngeometry optimization, a coarse representation is established under 3D point\ncloud diffusion prior along with the ordinary 2D SDS optimization, ensuring a\nsensible and 3D-consistent rough shape. Subsequently, the obtained Gaussians\nundergo an iterative appearance refinement to enrich texture details. In this\nstage, we increase the number of Gaussians by compactness-based densification\nto enhance continuity and improve fidelity. With these designs, our approach\ncan generate 3D assets with delicate details and accurate geometry. Extensive\nevaluations demonstrate the effectiveness of our method, especially for\ncapturing high-frequency components. Our code is available at\nhttps://github.com/gsgen3d/gsgen", + "authors": "Zilong Chen, Feng Wang, Yikai Wang, Huaping Liu", + "published": "2023-09-28", + "updated": "2024-04-02", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2205.11487v1", + "title": "Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding", + "abstract": "We present Imagen, a text-to-image diffusion model with an unprecedented\ndegree of photorealism and a deep level of language understanding. Imagen\nbuilds on the power of large transformer language models in understanding text\nand hinges on the strength of diffusion models in high-fidelity image\ngeneration. Our key discovery is that generic large language models (e.g. T5),\npretrained on text-only corpora, are surprisingly effective at encoding text\nfor image synthesis: increasing the size of the language model in Imagen boosts\nboth sample fidelity and image-text alignment much more than increasing the\nsize of the image diffusion model. Imagen achieves a new state-of-the-art FID\nscore of 7.27 on the COCO dataset, without ever training on COCO, and human\nraters find Imagen samples to be on par with the COCO data itself in image-text\nalignment. To assess text-to-image models in greater depth, we introduce\nDrawBench, a comprehensive and challenging benchmark for text-to-image models.\nWith DrawBench, we compare Imagen with recent methods including VQ-GAN+CLIP,\nLatent Diffusion Models, and DALL-E 2, and find that human raters prefer Imagen\nover other models in side-by-side comparisons, both in terms of sample quality\nand image-text alignment. See https://imagen.research.google/ for an overview\nof the results.", + "authors": "Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho, David J Fleet, Mohammad Norouzi", + "published": "2022-05-23", + "updated": "2022-05-23", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2308.10608v2", + "title": "FocalDreamer: Text-driven 3D Editing via Focal-fusion Assembly", + "abstract": "While text-3D editing has made significant strides in leveraging score\ndistillation sampling, emerging approaches still fall short in delivering\nseparable, precise and consistent outcomes that are vital to content creation.\nIn response, we introduce FocalDreamer, a framework that merges base shape with\neditable parts according to text prompts for fine-grained editing within\ndesired regions. 
Specifically, equipped with geometry union and dual-path\nrendering, FocalDreamer assembles independent 3D parts into a complete object,\ntailored for convenient instance reuse and part-wise control. We propose\ngeometric focal loss and style consistency regularization, which encourage\nfocal fusion and congruent overall appearance. Furthermore, FocalDreamer\ngenerates high-fidelity geometry and PBR textures which are compatible with\nwidely-used graphics engines. Extensive experiments have highlighted the\nsuperior editing capabilities of FocalDreamer in both quantitative and\nqualitative evaluations.", + "authors": "Yuhan Li, Yishun Dou, Yue Shi, Yu Lei, Xuanhong Chen, Yi Zhang, Peng Zhou, Bingbing Ni", + "published": "2023-08-21", + "updated": "2023-08-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.GR", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2306.13455v3", + "title": "DreamEditor: Text-Driven 3D Scene Editing with Neural Fields", + "abstract": "Neural fields have achieved impressive advancements in view synthesis and\nscene reconstruction. However, editing these neural fields remains challenging\ndue to the implicit encoding of geometry and texture information. In this\npaper, we propose DreamEditor, a novel framework that enables users to perform\ncontrolled editing of neural fields using text prompts. By representing scenes\nas mesh-based neural fields, DreamEditor allows localized editing within\nspecific regions. DreamEditor utilizes the text encoder of a pretrained\ntext-to-Image diffusion model to automatically identify the regions to be\nedited based on the semantics of the text prompts. Subsequently, DreamEditor\noptimizes the editing region and aligns its geometry and texture with the text\nprompts through score distillation sampling [29]. Extensive experiments have\ndemonstrated that DreamEditor can accurately edit neural fields of real-world\nscenes according to the given text prompts while ensuring consistency in\nirrelevant areas. DreamEditor generates highly realistic textures and geometry,\nsignificantly surpassing previous works in both quantitative and qualitative\nevaluations.", + "authors": "Jingyu Zhuang, Chen Wang, Lingjie Liu, Liang Lin, Guanbin Li", + "published": "2023-06-23", + "updated": "2023-09-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2103.13415v3", + "title": "Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields", + "abstract": "The rendering procedure used by neural radiance fields (NeRF) samples a scene\nwith a single ray per pixel and may therefore produce renderings that are\nexcessively blurred or aliased when training or testing images observe scene\ncontent at different resolutions. The straightforward solution of supersampling\nby rendering with multiple rays per pixel is impractical for NeRF, because\nrendering each ray requires querying a multilayer perceptron hundreds of times.\nOur solution, which we call \"mip-NeRF\" (a la \"mipmap\"), extends NeRF to\nrepresent the scene at a continuously-valued scale. By efficiently rendering\nanti-aliased conical frustums instead of rays, mip-NeRF reduces objectionable\naliasing artifacts and significantly improves NeRF's ability to represent fine\ndetails, while also being 7% faster than NeRF and half the size. 
Compared to\nNeRF, mip-NeRF reduces average error rates by 17% on the dataset presented with\nNeRF and by 60% on a challenging multiscale variant of that dataset that we\npresent. Mip-NeRF is also able to match the accuracy of a brute-force\nsupersampled NeRF on our multiscale dataset while being 22x faster.", + "authors": "Jonathan T. Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, Pratul P. Srinivasan", + "published": "2021-03-24", + "updated": "2021-08-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.GR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.16213v2", + "title": "ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation", + "abstract": "Score distillation sampling (SDS) has shown great promise in text-to-3D\ngeneration by distilling pretrained large-scale text-to-image diffusion models,\nbut suffers from over-saturation, over-smoothing, and low-diversity problems.\nIn this work, we propose to model the 3D parameter as a random variable instead\nof a constant as in SDS and present variational score distillation (VSD), a\nprincipled particle-based variational framework to explain and address the\naforementioned issues in text-to-3D generation. We show that SDS is a special\ncase of VSD and leads to poor samples with both small and large CFG weights. In\ncomparison, VSD works well with various CFG weights as ancestral sampling from\ndiffusion models and simultaneously improves the diversity and sample quality\nwith a common CFG weight (i.e., $7.5$). We further present various improvements\nin the design space for text-to-3D such as distillation time schedule and\ndensity initialization, which are orthogonal to the distillation algorithm yet\nnot well explored. Our overall approach, dubbed ProlificDreamer, can generate\nhigh rendering resolution (i.e., $512\\times512$) and high-fidelity NeRF with\nrich structure and complex effects (e.g., smoke and drops). Further,\ninitialized from NeRF, meshes fine-tuned by VSD are meticulously detailed and\nphoto-realistic. Project page and codes:\nhttps://ml.cs.tsinghua.edu.cn/prolificdreamer/", + "authors": "Zhengyi Wang, Cheng Lu, Yikai Wang, Fan Bao, Chongxuan Li, Hang Su, Jun Zhu", + "published": "2023-05-25", + "updated": "2023-11-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1607.04311v1", + "title": "Defensive Distillation is Not Robust to Adversarial Examples", + "abstract": "We show that defensive distillation is not secure: it is no more resistant to\ntargeted misclassification attacks than unprotected neural networks.", + "authors": "Nicholas Carlini, David Wagner", + "published": "2016-07-14", + "updated": "2016-07-14", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2304.04615v1", + "title": "A Survey on Recent Teacher-student Learning Studies", + "abstract": "Knowledge distillation is a method of transferring the knowledge from a\ncomplex deep neural network (DNN) to a smaller and faster DNN, while preserving\nits accuracy. Recent variants of knowledge distillation include teaching\nassistant distillation, curriculum distillation, mask distillation, and\ndecoupling distillation, which aim to improve the performance of knowledge\ndistillation by introducing additional components or by changing the learning\nprocess. 
Teaching assistant distillation involves an intermediate model called\nthe teaching assistant, while curriculum distillation follows a curriculum\nsimilar to human education. Mask distillation focuses on transferring the\nattention mechanism learned by the teacher, and decoupling distillation\ndecouples the distillation loss from the task loss. Overall, these variants of\nknowledge distillation have shown promising results in improving the\nperformance of knowledge distillation.", + "authors": "Minghong Gao", + "published": "2023-04-10", + "updated": "2023-04-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2311.13811v2", + "title": "Education distillation:getting student models to learn in shcools", + "abstract": "Knowledge distillation is one of the methods for model compression, and\nexisting knowledge distillation techniques focus on how to improve the\ndistillation algorithm so as to enhance the distillation efficiency. This paper\nintroduces dynamic incremental learning into knowledge distillation and\nproposes a distillation strategy for education distillation. Specifically, it\nis proposed to take fragmented student models divided from the complete student\nmodel as lower-grade models. As the grade level rises, fragmented student\nmodels deepen in conjunction with designed teaching reference layers, while\nlearning and distilling from more teacher models. By moving from lower to\nhigher grades, fragmented student models were gradually integrated into a\ncomplete target student model, and the performance of the student models\ngradually improved from lower to higher grades of the stage. Education\ndistillation strategies combined with distillation algorithms outperform the\nresults of single distillation algorithms on the public dataset\nCIFAR100,Caltech256, Food-101 dataset.", + "authors": "Ling Feng, Danyang Li, Tianhao Wu, Xuliang Duan", + "published": "2023-11-23", + "updated": "2023-11-27", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2203.11932v1", + "title": "Dataset Distillation by Matching Training Trajectories", + "abstract": "Dataset distillation is the task of synthesizing a small dataset such that a\nmodel trained on the synthetic set will match the test accuracy of the model\ntrained on the full dataset. In this paper, we propose a new formulation that\noptimizes our distilled data to guide networks to a similar state as those\ntrained on real data across many training steps. Given a network, we train it\nfor several iterations on our distilled data and optimize the distilled data\nwith respect to the distance between the synthetically trained parameters and\nthe parameters trained on real data. To efficiently obtain the initial and\ntarget network parameters for large-scale datasets, we pre-compute and store\ntraining trajectories of expert networks trained on the real dataset. Our\nmethod handily outperforms existing methods and also allows us to distill\nhigher-resolution visual data.", + "authors": "George Cazenavette, Tongzhou Wang, Antonio Torralba, Alexei A. 
Efros, Jun-Yan Zhu", + "published": "2022-03-22", + "updated": "2022-03-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2403.09053v1", + "title": "Towards a theory of model distillation", + "abstract": "Distillation is the task of replacing a complicated machine learning model\nwith a simpler model that approximates the original [BCNM06,HVD15]. Despite\nmany practical applications, basic questions about the extent to which models\ncan be distilled, and the runtime and amount of data needed to distill, remain\nlargely open.\n To study these questions, we initiate a general theory of distillation,\ndefining PAC-distillation in an analogous way to PAC-learning [Val84]. As\napplications of this theory: (1) we propose new algorithms to extract the\nknowledge stored in the trained weights of neural networks -- we show how to\nefficiently distill neural networks into succinct, explicit decision tree\nrepresentations when possible by using the ``linear representation\nhypothesis''; and (2) we prove that distillation can be much cheaper than\nlearning from scratch, and make progress on characterizing its complexity.", + "authors": "Enric Boix-Adsera", + "published": "2024-03-14", + "updated": "2024-03-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.NE" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2402.02781v1", + "title": "Dual Knowledge Distillation for Efficient Sound Event Detection", + "abstract": "Sound event detection (SED) is essential for recognizing specific sounds and\ntheir temporal locations within acoustic signals. This becomes challenging\nparticularly for on-device applications, where computational resources are\nlimited. To address this issue, we introduce a novel framework referred to as\ndual knowledge distillation for developing efficient SED systems in this work.\nOur proposed dual knowledge distillation commences with temporal-averaging\nknowledge distillation (TAKD), utilizing a mean student model derived from the\ntemporal averaging of the student model's parameters. This allows the student\nmodel to indirectly learn from a pre-trained teacher model, ensuring a stable\nknowledge distillation. Subsequently, we introduce embedding-enhanced feature\ndistillation (EEFD), which involves incorporating an embedding distillation\nlayer within the student model to bolster contextual learning. On DCASE 2023\nTask 4A public evaluation dataset, our proposed SED system with dual knowledge\ndistillation having merely one-third of the baseline model's parameters,\ndemonstrates superior performance in terms of PSDS1 and PSDS2. This highlights\nthe importance of proposed dual knowledge distillation for compact SED systems,\nwhich can be ideal for edge devices.", + "authors": "Yang Xiao, Rohan Kumar Das", + "published": "2024-02-05", + "updated": "2024-02-05", + "primary_cat": "cs.SD", + "cats": [ + "cs.SD", + "cs.AI", + "cs.CL", + "cs.LG", + "eess.AS" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2011.14554v1", + "title": "A Selective Survey on Versatile Knowledge Distillation Paradigm for Neural Network Models", + "abstract": "This paper aims to provide a selective survey about knowledge\ndistillation(KD) framework for researchers and practitioners to take advantage\nof it for developing new optimized models in the deep neural network field. 
To\nthis end, we give a brief overview of knowledge distillation and some related\nworks including learning using privileged information(LUPI) and generalized\ndistillation(GD). Even though knowledge distillation based on the\nteacher-student architecture was initially devised as a model compression\ntechnique, it has found versatile applications over various frameworks.\n In this paper, we review the characteristics of knowledge distillation from\nthe hypothesis that the three important ingredients of knowledge distillation\nare distilled knowledge and loss,teacher-student paradigm, and the distillation\nprocess. In addition, we survey the versatility of the knowledge distillation\nby studying its direct applications and its usage in combination with other\ndeep learning paradigms. Finally we present some future works in knowledge\ndistillation including explainable knowledge distillation where the analytical\nanalysis of the performance gain is studied and the self-supervised learning\nwhich is a hot research topic in deep learning community.", + "authors": "Jeong-Hoe Ku, JiHun Oh, YoungYoon Lee, Gaurav Pooniwala, SangJeong Lee", + "published": "2020-11-30", + "updated": "2020-11-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.01392v1", + "title": "No-go theorem for probabilistic one-way secret-key distillation", + "abstract": "The probabilistic one-way distillable secret key is equal to the largest\nexpected rate at which perfect secret key bits can be probabilistically\ndistilled from a bipartite state by means of local operations and one-way\nclassical communication. Here we define the set of super two-extendible states\nand prove that an arbitrary state in this set cannot be used for probabilistic\none-way secret-key distillation. This broad class of states includes both\nerased states and all full-rank states. Comparing the probabilistic one-way\ndistillable secret key with the more commonly studied approximate one-way\ndistillable secret key, our results demonstrate an extreme gap between them for\nmany states of interest, with the approximate one-way distillable secret key\nbeing much larger. Our findings naturally extend to probabilistic one-way\nentanglement distillation, with similar conclusions.", + "authors": "Vishal Singh, Mark M. Wilde", + "published": "2024-04-01", + "updated": "2024-04-01", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph", + "cs.IT", + "math.IT" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2305.12330v1", + "title": "Task-agnostic Distillation of Encoder-Decoder Language Models", + "abstract": "Finetuning pretrained language models (LMs) have enabled appealing\nperformance on a diverse array of tasks. The intriguing task-agnostic property\nhas driven a shifted focus from task-specific to task-agnostic distillation of\nLMs. While task-agnostic, compute-efficient, performance-preserved LMs can be\nyielded by task-agnostic distillation, previous studies mainly sit in\ndistillation of either encoder-only LMs (e.g., BERT) or decoder-only ones\n(e.g., GPT) yet largely neglect that distillation of encoder-decoder LMs (e.g.,\nT5) can posit very distinguished behaviors. Frustratingly, we discover that\nexisting task-agnostic distillation methods can fail to handle the distillation\nof encoder-decoder LMs. 
To the demand, we explore a few paths and uncover a\npath named as MiniEnD that successfully tackles the distillation of\nencoder-decoder LMs in a task-agnostic fashion. We examine MiniEnD on language\nunderstanding and abstractive summarization. The results showcase that MiniEnD\nis generally effective and is competitive compared to other alternatives. We\nfurther scale MiniEnD up to distillation of 3B encoder-decoder language models\nwith interpolated distillation. The results imply the opportunities and\nchallenges in distilling large language models (e.g., LLaMA).", + "authors": "Chen Zhang, Yang Yang, Jingang Wang, Dawei Song", + "published": "2023-05-21", + "updated": "2023-05-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2205.09153v1", + "title": "ERNIE-Search: Bridging Cross-Encoder with Dual-Encoder via Self On-the-fly Distillation for Dense Passage Retrieval", + "abstract": "Neural retrievers based on pre-trained language models (PLMs), such as\ndual-encoders, have achieved promising performance on the task of open-domain\nquestion answering (QA). Their effectiveness can further reach new\nstate-of-the-arts by incorporating cross-architecture knowledge distillation.\nHowever, most of the existing studies just directly apply conventional\ndistillation methods. They fail to consider the particular situation where the\nteacher and student have different structures. In this paper, we propose a\nnovel distillation method that significantly advances cross-architecture\ndistillation for dual-encoders. Our method 1) introduces a self on-the-fly\ndistillation method that can effectively distill late interaction (i.e.,\nColBERT) to vanilla dual-encoder, and 2) incorporates a cascade distillation\nprocess to further improve the performance with a cross-encoder teacher.\nExtensive experiments are conducted to validate that our proposed solution\noutperforms strong baselines and establish a new state-of-the-art on\nopen-domain QA benchmarks.", + "authors": "Yuxiang Lu, Yiding Liu, Jiaxiang Liu, Yunsheng Shi, Zhengjie Huang, Shikun Feng Yu Sun, Hao Tian, Hua Wu, Shuaiqiang Wang, Dawei Yin, Haifeng Wang", + "published": "2022-05-18", + "updated": "2022-05-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0202165v1", + "title": "Distinguishing locally of quantum states and the distillation of entanglement", + "abstract": "This paper try to probe the relation of distinguishing locally and\ndistillation of entanglement. The distinguishing information (DI) and the\nmaximal distinguishing information (MDI) of a set of pure states are defined.\nThe interpretation of distillation of entanglement in term of information is\ngiven. The relation between the maximal distinguishing information and\ndistillable entanglement is gained. As a application of this relation the\ndistillable entanglement of Bell-diagonal states is present.", + "authors": "ping-xing. chen, Cheng-zu Li", + "published": "2002-02-27", + "updated": "2002-02-27", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2010.13002v2", + "title": "Pre-trained Summarization Distillation", + "abstract": "Recent state-of-the-art approaches to summarization utilize large pre-trained\nTransformer models. 
Distilling these models to smaller student models has\nbecome critically important for practical use; however there are many different\ndistillation methods proposed by the NLP literature. Recent work on distilling\nBERT for classification and regression tasks shows strong performance using\ndirect knowledge distillation. Alternatively, machine translation practitioners\ndistill using pseudo-labeling, where a small model is trained on the\ntranslations of a larger model. A third, simpler approach is to 'shrink and\nfine-tune' (SFT), which avoids any explicit distillation by copying parameters\nto a smaller student model and then fine-tuning. We compare these three\napproaches for distillation of Pegasus and BART, the current and former state\nof the art, pre-trained summarization models, and find that SFT outperforms\nknowledge distillation and pseudo-labeling on the CNN/DailyMail dataset, but\nunder-performs pseudo-labeling on the more abstractive XSUM dataset. PyTorch\nCode and checkpoints of different sizes are available through Hugging Face\ntransformers here http://tiny.cc/4iy0tz.", + "authors": "Sam Shleifer, Alexander M. Rush", + "published": "2020-10-24", + "updated": "2020-10-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2302.14643v1", + "title": "Graph-based Knowledge Distillation: A survey and experimental evaluation", + "abstract": "Graph, such as citation networks, social networks, and transportation\nnetworks, are prevalent in the real world. Graph Neural Networks (GNNs) have\ngained widespread attention for their robust expressiveness and exceptional\nperformance in various graph applications. However, the efficacy of GNNs is\nheavily reliant on sufficient data labels and complex network models, with the\nformer obtaining hardly and the latter computing costly. To address the labeled\ndata scarcity and high complexity of GNNs, Knowledge Distillation (KD) has been\nintroduced to enhance existing GNNs. This technique involves transferring the\nsoft-label supervision of the large teacher model to the small student model\nwhile maintaining prediction performance. This survey offers a comprehensive\noverview of Graph-based Knowledge Distillation methods, systematically\ncategorizing and summarizing them while discussing their limitations and future\ndirections. This paper first introduces the background of graph and KD. It then\nprovides a comprehensive summary of three types of Graph-based Knowledge\nDistillation methods, namely Graph-based Knowledge Distillation for deep neural\nnetworks (DKD), Graph-based Knowledge Distillation for GNNs (GKD), and\nSelf-Knowledge Distillation based Graph-based Knowledge Distillation (SKD).\nEach type is further divided into knowledge distillation methods based on the\noutput layer, middle layer, and constructed graph. Subsequently, various\nalgorithms' ideas are analyzed and compared, concluding with the advantages and\ndisadvantages of each algorithm supported by experimental results. In addition,\nthe applications of graph-based knowledge distillation in CV, NLP, RS, and\nother fields are listed. Finally, the graph-based knowledge distillation is\nsummarized and prospectively discussed. 
We have also released related resources\nat https://github.com/liujing1023/Graph-based-Knowledge-Distillation.", + "authors": "Jing Liu, Tongya Zheng, Guanzheng Zhang, Qinfen Hao", + "published": "2023-02-27", + "updated": "2023-02-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1905.09747v2", + "title": "Adversarially Robust Distillation", + "abstract": "Knowledge distillation is effective for producing small, high-performance\nneural networks for classification, but these small networks are vulnerable to\nadversarial attacks. This paper studies how adversarial robustness transfers\nfrom teacher to student during knowledge distillation. We find that a large\namount of robustness may be inherited by the student even when distilled on\nonly clean images. Second, we introduce Adversarially Robust Distillation (ARD)\nfor distilling robustness onto student networks. In addition to producing small\nmodels with high test accuracy like conventional distillation, ARD also passes\nthe superior robustness of large networks onto the student. In our experiments,\nwe find that ARD student models decisively outperform adversarially trained\nnetworks of identical architecture in terms of robust accuracy, surpassing\nstate-of-the-art methods on standard robustness benchmarks. Finally, we adapt\nrecent fast adversarial training methods to ARD for accelerated robust\ndistillation.", + "authors": "Micah Goldblum, Liam Fowl, Soheil Feizi, Tom Goldstein", + "published": "2019-05-23", + "updated": "2019-12-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2401.15863v1", + "title": "Importance-Aware Adaptive Dataset Distillation", + "abstract": "Herein, we propose a novel dataset distillation method for constructing small\ninformative datasets that preserve the information of the large original\ndatasets. The development of deep learning models is enabled by the\navailability of large-scale datasets. Despite unprecedented success,\nlarge-scale datasets considerably increase the storage and transmission costs,\nresulting in a cumbersome model training process. Moreover, using raw data for\ntraining raises privacy and copyright concerns. To address these issues, a new\ntask named dataset distillation has been introduced, aiming to synthesize a\ncompact dataset that retains the essential information from the large original\ndataset. State-of-the-art (SOTA) dataset distillation methods have been\nproposed by matching gradients or network parameters obtained during training\non real and synthetic datasets. The contribution of different network\nparameters to the distillation process varies, and uniformly treating them\nleads to degraded distillation performance. Based on this observation, we\npropose an importance-aware adaptive dataset distillation (IADD) method that\ncan improve distillation performance by automatically assigning importance\nweights to different network parameters during distillation, thereby\nsynthesizing more robust distilled datasets. IADD demonstrates superior\nperformance over other SOTA dataset distillation methods based on parameter\nmatching on multiple benchmark datasets and outperforms them in terms of\ncross-architecture generalization. In addition, the analysis of self-adaptive\nweights demonstrates the effectiveness of IADD. 
Furthermore, the effectiveness\nof IADD is validated in a real-world medical application such as COVID-19\ndetection.", + "authors": "Guang Li, Ren Togo, Takahiro Ogawa, Miki Haseyama", + "published": "2024-01-29", + "updated": "2024-01-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0312123v2", + "title": "Many copies may be required for entanglement distillation", + "abstract": "A mixed quantum state shared between two parties is said to be distillable\nif, by means of a protocol involving only local quantum operations and\nclassical communication, the two parties can transform some number of copies of\nthat state into a single shared pair of qubits having high fidelity with a\nmaximally entangled state state. In this paper it is proved that there exist\nstates that are distillable, but for which an arbitrarily large number of\ncopies is required before any distillation procedure can produce a shared pair\nof qubits with even a small amount of entanglement. Specifically, for every\npositive integer n there exists a state that is distillable, but given n or\nfewer copies of that state every distillation procedure outputting a single\nshared pair of qubits will output those qubits in a separable state.\nEssentially all previous examples of states proved to be distillable were such\nthat some distillation procedure could output an entangled pair of qubits given\na single copy of the state in question.", + "authors": "John Watrous", + "published": "2003-12-15", + "updated": "2004-05-31", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2304.06461v2", + "title": "Multi-Mode Online Knowledge Distillation for Self-Supervised Visual Representation Learning", + "abstract": "Self-supervised learning (SSL) has made remarkable progress in visual\nrepresentation learning. Some studies combine SSL with knowledge distillation\n(SSL-KD) to boost the representation learning performance of small models. In\nthis study, we propose a Multi-mode Online Knowledge Distillation method (MOKD)\nto boost self-supervised visual representation learning. Different from\nexisting SSL-KD methods that transfer knowledge from a static pre-trained\nteacher to a student, in MOKD, two different models learn collaboratively in a\nself-supervised manner. Specifically, MOKD consists of two distillation modes:\nself-distillation and cross-distillation modes. Among them, self-distillation\nperforms self-supervised learning for each model independently, while\ncross-distillation realizes knowledge interaction between different models. In\ncross-distillation, a cross-attention feature search strategy is proposed to\nenhance the semantic feature alignment between different models. As a result,\nthe two models can absorb knowledge from each other to boost their\nrepresentation learning performance. Extensive experimental results on\ndifferent backbones and datasets demonstrate that two heterogeneous models can\nbenefit from MOKD and outperform their independently trained baseline. 
In\naddition, MOKD also outperforms existing SSL-KD methods for both the student\nand teacher models.", + "authors": "Kaiyou Song, Jin Xie, Shan Zhang, Zimeng Luo", + "published": "2023-04-13", + "updated": "2023-06-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2303.05958v1", + "title": "Robust Knowledge Distillation from RNN-T Models With Noisy Training Labels Using Full-Sum Loss", + "abstract": "This work studies knowledge distillation (KD) and addresses its constraints\nfor recurrent neural network transducer (RNN-T) models. In hard distillation, a\nteacher model transcribes large amounts of unlabelled speech to train a student\nmodel. Soft distillation is another popular KD method that distills the output\nlogits of the teacher model. Due to the nature of RNN-T alignments, applying\nsoft distillation between RNN-T architectures having different posterior\ndistributions is challenging. In addition, bad teachers having high\nword-error-rate (WER) reduce the efficacy of KD. We investigate how to\neffectively distill knowledge from variable quality ASR teachers, which has not\nbeen studied before to the best of our knowledge. We show that a sequence-level\nKD, full-sum distillation, outperforms other distillation methods for RNN-T\nmodels, especially for bad teachers. We also propose a variant of full-sum\ndistillation that distills the sequence discriminative knowledge of the teacher\nleading to further improvement in WER. We conduct experiments on public\ndatasets namely SpeechStew and LibriSpeech, and on in-house production data.", + "authors": "Mohammad Zeineldeen, Kartik Audhkhasi, Murali Karthick Baskar, Bhuvana Ramabhadran", + "published": "2023-03-10", + "updated": "2023-03-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.SD", + "eess.AS", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2401.06370v1", + "title": "Graph Relation Distillation for Efficient Biomedical Instance Segmentation", + "abstract": "Instance-aware embeddings predicted by deep neural networks have\nrevolutionized biomedical instance segmentation, but its resource requirements\nare substantial. Knowledge distillation offers a solution by transferring\ndistilled knowledge from heavy teacher networks to lightweight yet\nhigh-performance student networks. However, existing knowledge distillation\nmethods struggle to extract knowledge for distinguishing instances and overlook\nglobal relation information. To address these challenges, we propose a graph\nrelation distillation approach for efficient biomedical instance segmentation,\nwhich considers three essential types of knowledge: instance-level features,\ninstance relations, and pixel-level boundaries. We introduce two graph\ndistillation schemes deployed at both the intra-image level and the inter-image\nlevel: instance graph distillation (IGD) and affinity graph distillation (AGD).\nIGD constructs a graph representing instance features and relations,\ntransferring these two types of knowledge by enforcing instance graph\nconsistency. AGD constructs an affinity graph representing pixel relations to\ncapture structured knowledge of instance boundaries, transferring\nboundary-related knowledge by ensuring pixel affinity consistency. 
Experimental\nresults on a number of biomedical datasets validate the effectiveness of our\napproach, enabling student models with less than $ 1\\%$ parameters and less\nthan $10\\%$ inference time while achieving promising performance compared to\nteacher models.", + "authors": "Xiaoyu Liu, Yueyi Zhang, Zhiwei Xiong, Wei Huang, Bo Hu, Xiaoyan Sun, Feng Wu", + "published": "2024-01-12", + "updated": "2024-01-12", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.17732v1", + "title": "Generative Dataset Distillation: Balancing Global Structure and Local Details", + "abstract": "In this paper, we propose a new dataset distillation method that considers\nbalancing global structure and local details when distilling the information\nfrom a large dataset into a generative model. Dataset distillation has been\nproposed to reduce the size of the required dataset when training models. The\nconventional dataset distillation methods face the problem of long redeployment\ntime and poor cross-architecture performance. Moreover, previous methods\nfocused too much on the high-level semantic attributes between the synthetic\ndataset and the original dataset while ignoring the local features such as\ntexture and shape. Based on the above understanding, we propose a new method\nfor distilling the original image dataset into a generative model. Our method\ninvolves using a conditional generative adversarial network to generate the\ndistilled dataset. Subsequently, we ensure balancing global structure and local\ndetails in the distillation process, continuously optimizing the generator for\nmore information-dense dataset generation.", + "authors": "Longzhen Li, Guang Li, Ren Togo, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama", + "published": "2024-04-26", + "updated": "2024-04-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2307.12732v1", + "title": "CLIP-KD: An Empirical Study of Distilling CLIP Models", + "abstract": "CLIP has become a promising language-supervised visual pre-training framework\nand achieves excellent performance over a wide range of tasks. This paper aims\nto distill small CLIP models supervised by a large teacher CLIP model. We\npropose several distillation strategies, including relation, feature, gradient\nand contrastive paradigm, to examine the impact on CLIP distillation. We show\nthat the simplest feature mimicry with MSE loss performs best. Moreover,\ninteractive contrastive learning and relation-based distillation are also\ncritical in performance improvement. We apply the unified method to distill\nseveral student networks trained on 15 million (image, text) pairs.\nDistillation improves the student CLIP models consistently over zero-shot\nImageNet classification and cross-modal retrieval benchmarks. We hope our\nempirical study will become an important baseline for future CLIP distillation\nresearch. 
The code is available at \\url{https://github.com/winycg/CLIP-KD}.", + "authors": "Chuanguang Yang, Zhulin An, Libo Huang, Junyu Bi, Xinqiang Yu, Han Yang, Yongjun Xu", + "published": "2023-07-24", + "updated": "2023-07-24", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2304.05563v2", + "title": "Entanglement distillation in terms of Schmidt rank and matrix rank", + "abstract": "Entanglement distillation is a key task in quantum-information processing. In\nthis paper, we distill non-positive-partial-transpose (NPT) bipartite states of\nsome given Schmidt rank and matrix rank. We show that all bipartite states of\nSchmidt rank two are locally equivalent to classical-classical states, and all\nbipartite states of Schmidt rank three are 1-undistillable. Subsequently, we\nshow that low-rank B-irreducible NPT states are distillable for large-rank\nreduced density operators by proving low-rank B-irreducible NPT state whose\nrange contains a product vector is distillable. Eventually, we present an\nequivalent condition to distill $M\\times N$ bipartite states of rank\n$\\max\\{M,N\\}+1$.", + "authors": "Tianyi Ding, Lin Chen", + "published": "2023-04-12", + "updated": "2023-07-06", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0001084v2", + "title": "Distillation of GHZ states by selective information manipulation", + "abstract": "Methods for distilling maximally entangled tripartite (GHZ) states from\narbitrary entangled tripartite pure states are described. These techniques work\nfor virtually any input state. Each technique has two stages which we call\nprimary and secondary distillation. Primary distillation produces a GHZ state\nwith some probability, so that when applied to an ensemble of systems, a\ncertain percentage is discarded. Secondary distillation produces further GHZs\nfrom the discarded systems. These protocols are developed with the help of an\napproach to quantum information theory based on absolutely selective\ninformation, which has other potential applications.", + "authors": "Oliver Cohen, Todd A. Brun", + "published": "2000-01-23", + "updated": "2000-02-02", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2305.18381v3", + "title": "Distill Gold from Massive Ores: Efficient Dataset Distillation via Critical Samples Selection", + "abstract": "Data-efficient learning has garnered significant attention, especially given\nthe current trend of large multi-modal models. Recently, dataset distillation\nbecomes an effective approach for data-efficiency; however, the distillation\nprocess itself can still be inefficient. In this work, we model the dataset\ndistillation task within the context of information transport. By observing the\nsubstantial data redundancy inherent in the distillation, we argue to put more\nemphasis on the samples' utility for the distillation task. We introduce and\nvalidate a family of data utility estimators and optimal data selection methods\nto exploit the most valuable samples. This strategy significantly reduces the\ntraining costs and extends various existing distillation algorithms to larger\nand more diversified datasets, e.g., in some cases only 0.04% training data is\nsufficient for comparable distillation performance. 
Our method consistently\nenhances the distillation algorithms, even on much larger-scale and more\nheterogeneous datasets, e.g. ImageNet-1K and Kinetics-400. This paradigm opens\nup new avenues in the dynamics of distillation and paves the way for efficient\ndataset distillation. Our code is available on\nhttps://github.com/silicx/GoldFromOres .", + "authors": "Yue Xu, Yong-Lu Li, Kaitong Cui, Ziyu Wang, Cewu Lu, Yu-Wing Tai, Chi-Keung Tang", + "published": "2023-05-28", + "updated": "2023-11-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0305188v1", + "title": "Dynamics of Distillability", + "abstract": "The time evolution of a maximally entangled bipartite systems is presented in\nthis paper. The distillability criterion is given in terms of Kraus operators.\nUsing the criterion, we discuss the distillability of $2\\times 2$ and $n\\times\nn (n>2)$ systems in their evolution process. There are two distinguished\nprocesses, dissipation and decoherence, which may destroy the distillability.\nWe discuss the effects of those processes on distillability in details.", + "authors": "W. Wu, W. Wang, X. X. Yi", + "published": "2003-05-30", + "updated": "2003-05-30", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1903.04197v7", + "title": "Structured Knowledge Distillation for Dense Prediction", + "abstract": "In this work, we consider transferring the structure information from large\nnetworks to compact ones for dense prediction tasks in computer vision.\nPrevious knowledge distillation strategies used for dense prediction tasks\noften directly borrow the distillation scheme for image classification and\nperform knowledge distillation for each pixel separately, leading to\nsub-optimal performance. Here we propose to distill structured knowledge from\nlarge networks to compact networks, taking into account the fact that dense\nprediction is a structured prediction problem. Specifically, we study two\nstructured distillation schemes: i) pair-wise distillation that distills the\npair-wise similarities by building a static graph; and ii) holistic\ndistillation that uses adversarial training to distill holistic knowledge. The\neffectiveness of our knowledge distillation approaches is demonstrated by\nexperiments on three dense prediction tasks: semantic segmentation, depth\nestimation and object detection. Code is available at: https://git.io/StructKD", + "authors": "Yifan Liu, Changyong Shun, Jingdong Wang, Chunhua Shen", + "published": "2019-03-11", + "updated": "2020-06-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1912.12630v1", + "title": "Real-time Policy Distillation in Deep Reinforcement Learning", + "abstract": "Policy distillation in deep reinforcement learning provides an effective way\nto transfer control policies from a larger network to a smaller untrained\nnetwork without a significant degradation in performance. However, policy\ndistillation is underexplored in deep reinforcement learning, and existing\napproaches are computationally inefficient, resulting in a long distillation\ntime. In addition, the effectiveness of the distillation process is still\nlimited to the model capacity. 
We propose a new distillation mechanism, called\nreal-time policy distillation, in which training the teacher model and\ndistilling the policy to the student model occur simultaneously. Accordingly,\nthe teacher's latest policy is transferred to the student model in real time.\nThis reduces the distillation time to half the original time or even less and\nalso makes it possible for extremely small student models to learn skills at\nthe expert level. We evaluated the proposed algorithm in the Atari 2600 domain.\nThe results show that our approach can achieve full distillation in most games,\neven with compression ratios up to 1.7%.", + "authors": "Yuxiang Sun, Pooyan Fazli", + "published": "2019-12-29", + "updated": "2019-12-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2011.09969v1", + "title": "Neural network algorithm and its application in reactive distillation", + "abstract": "Reactive distillation is a special distillation technology based on the\ncoupling of chemical reaction and distillation. It has the characteristics of\nlow energy consumption and high separation efficiency. However, because the\ncombination of reaction and separation produces highly nonlinear robust\nbehavior, the control and optimization of the reactive distillation process\ncannot use conventional methods, but must rely on neural network algorithms.\nThis paper briefly describes the characteristics and research progress of\nreactive distillation technology and neural network algorithms, and summarizes\nthe application of neural network algorithms in reactive distillation, aiming\nto provide reference for the development and innovation of industry technology.", + "authors": "Huihui Wang, Ruyang Mo", + "published": "2020-11-16", + "updated": "2020-11-16", + "primary_cat": "cs.NE", + "cats": [ + "cs.NE", + "cs.LG", + "I.2.8" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2006.01683v1", + "title": "Channel Distillation: Channel-Wise Attention for Knowledge Distillation", + "abstract": "Knowledge distillation is to transfer the knowledge from the data learned by\nthe teacher network to the student network, so that the student has the\nadvantage of less parameters and less calculations, and the accuracy is close\nto the teacher. In this paper, we propose a new distillation method, which\ncontains two transfer distillation strategies and a loss decay strategy. The\nfirst transfer strategy is based on channel-wise attention, called Channel\nDistillation (CD). CD transfers the channel information from the teacher to the\nstudent. The second is Guided Knowledge Distillation (GKD). Unlike Knowledge\nDistillation (KD), which allows the student to mimic each sample's prediction\ndistribution of the teacher, GKD only enables the student to mimic the correct\noutput of the teacher. The last part is Early Decay Teacher (EDT). During the\ntraining process, we gradually decay the weight of the distillation loss. The\npurpose is to enable the student to gradually control the optimization rather\nthan the teacher. Our proposed method is evaluated on ImageNet and CIFAR100. On\nImageNet, we achieve 27.68% of top-1 error with ResNet18, which outperforms\nstate-of-the-art methods. On CIFAR100, we achieve surprising result that the\nstudent outperforms the teacher. 
Code is available at\nhttps://github.com/zhouzaida/channel-distillation.", + "authors": "Zaida Zhou, Chaoran Zhuge, Xinwei Guan, Wen Liu", + "published": "2020-06-02", + "updated": "2020-06-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.09740v1", + "title": "Leveraging Zero-Level Distillation to Generate High-Fidelity Magic States", + "abstract": "Magic state distillation plays an important role in universal fault-tolerant\nquantum computing, and its overhead is one of the major obstacles to realizing\nfault-tolerant quantum computers. Hence, many studies have been conducted to\nreduce this overhead. Among these, Litinski has provided a concrete assessment\nof resource-efficient distillation protocol implementations on the rotated\nsurface code. On the other hand, recently, Itogawa et al. have proposed\nzero-level distillation, a distillation protocol offering very small spatial\nand temporal overhead to generate relatively low-fidelity magic states. While\nzero-level distillation offers preferable spatial and temporal overhead, it\ncannot directly generate high-fidelity magic states since it only reduces the\nlogical error rate of the magic state quadratically. In this study, we evaluate\nthe spatial and temporal overhead of two-level distillation implementations\ngenerating relatively high-fidelity magic states, including ones incorporating\nzero-level distillation. To this end, we introduce (0+1)-level distillation, a\ntwo-level distillation protocol which combines zero-level distillation and the\n15-to-1 distillation protocol. We refine the second-level 15-to-1\nimplementation in it to capitalize on the small footprint of zero-level\ndistillation. Under conditions of a physical error probability of\n$p_{\\mathrm{phys}} = 10^{-4}$ ($10^{-3}$) and targeting an error rate for the\nmagic state within $[5 \\times 10^{-17}, 10^{-11}]$ ($[5 \\times 10^{-11},\n10^{-8}]$), (0+1)-level distillation reduces the spatiotemporal overhead by\nmore than 63% (61%) compared to the (15-to-1)$\\times$(15-to-1) protocol and\nmore than 43% (44%) compared to the (15-to-1)$\\times$(20-to-4) protocol,\noffering a substantial efficiency gain over the traditional protocols.", + "authors": "Yutaka Hirano, Tomohiro Itogawa, Keisuke Fujii", + "published": "2024-04-15", + "updated": "2024-04-15", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2104.11928v1", + "title": "Extract then Distill: Efficient and Effective Task-Agnostic BERT Distillation", + "abstract": "Task-agnostic knowledge distillation, a teacher-student framework, has been\nproved effective for BERT compression. Although achieving promising results on\nNLP tasks, it requires enormous computational resources. In this paper, we\npropose Extract Then Distill (ETD), a generic and flexible strategy to reuse\nthe teacher's parameters for efficient and effective task-agnostic\ndistillation, which can be applied to students of any size. Specifically, we\nintroduce two variants of ETD, ETD-Rand and ETD-Impt, which extract the\nteacher's parameters in a random manner and by following an importance metric\nrespectively. In this way, the student has already acquired some knowledge at\nthe beginning of the distillation process, which makes the distillation process\nconverge faster. We demonstrate the effectiveness of ETD on the GLUE benchmark\nand SQuAD. 
The experimental results show that: (1) compared with the baseline\nwithout an ETD strategy, ETD can save 70\\% of computation cost. Moreover, it\nachieves better results than the baseline when using the same computing\nresource. (2) ETD is generic and has been proven effective for different\ndistillation methods (e.g., TinyBERT and MiniLM) and students of different\nsizes. The source code will be publicly available upon publication.", + "authors": "Cheng Chen, Yichun Yin, Lifeng Shang, Zhi Wang, Xin Jiang, Xiao Chen, Qun Liu", + "published": "2021-04-24", + "updated": "2021-04-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1901.09135v1", + "title": "Progressive Label Distillation: Learning Input-Efficient Deep Neural Networks", + "abstract": "Much of the focus in the area of knowledge distillation has been on\ndistilling knowledge from a larger teacher network to a smaller student\nnetwork. However, there has been little research on how the concept of\ndistillation can be leveraged to distill the knowledge encapsulated in the\ntraining data itself into a reduced form. In this study, we explore the concept\nof progressive label distillation, where we leverage a series of\nteacher-student network pairs to progressively generate distilled training data\nfor learning deep neural networks with greatly reduced input dimensions. To\ninvestigate the efficacy of the proposed progressive label distillation\napproach, we experimented with learning a deep limited vocabulary speech\nrecognition network based on generated 500ms input utterances distilled\nprogressively from 1000ms source training data, and demonstrated a significant\nincrease in test accuracy of almost 78% compared to direct learning.", + "authors": "Zhong Qiu Lin, Alexander Wong", + "published": "2019-01-26", + "updated": "2019-01-26", + "primary_cat": "cs.NE", + "cats": [ + "cs.NE", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2304.14800v1", + "title": "Multi-to-Single Knowledge Distillation for Point Cloud Semantic Segmentation", + "abstract": "3D point cloud semantic segmentation is one of the fundamental tasks for\nenvironmental understanding. Although significant progress has been made in\nrecent years, the performance of classes with few examples or few points is\nstill far from satisfactory. In this paper, we propose a novel multi-to-single\nknowledge distillation framework for the 3D point cloud semantic segmentation\ntask to boost the performance of those hard classes. Instead of fusing all the\npoints of multi-scans directly, only the instances that belong to the\npreviously defined hard classes are fused. To effectively and sufficiently\ndistill valuable knowledge from multi-scans, we leverage a multilevel\ndistillation framework, i.e., feature representation distillation, logit\ndistillation, and affinity distillation. We further develop a novel\ninstance-aware affinity distillation algorithm for capturing high-level\nstructural knowledge to enhance the distillation efficacy for hard classes.\nFinally, we conduct experiments on the SemanticKITTI dataset, and the results\non both the validation and test sets demonstrate that our method yields\nsubstantial improvements compared with the baseline method. 
The code is\navailable at \\Url{https://github.com/skyshoumeng/M2SKD}.", + "authors": "Shoumeng Qiu, Feng Jiang, Haiqiang Zhang, Xiangyang Xue, Jian Pu", + "published": "2023-04-28", + "updated": "2023-04-28", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0012022v1", + "title": "Distilling a Greenberger-Horne-Zeilinger State From an Arbitrary Pure State of Three Qubits", + "abstract": "We present a general algorithm to achieve local operators which can produce\nthe GHZ state for an arbitrary given three-qubit state. Thus the distillation\nprocess of the state can be realized optimally. The algorithm is shown to be\nsufficient for the three-qubit state on account of the fact that any state for\nwhich this distillation algorithm is invalid cannot be distilled to the GHZ\nstate by any local actions. Moreover, an analytical result of distillation\noperations is achieved for the general state of three qubits.", + "authors": "Li-Xiang Cen, Shun-Jin Wang", + "published": "2000-12-05", + "updated": "2000-12-05", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2004.03097v1", + "title": "Towards Non-task-specific Distillation of BERT via Sentence Representation Approximation", + "abstract": "Recently, BERT has become an essential ingredient of various NLP deep models\ndue to its effectiveness and universal-usability. However, the online\ndeployment of BERT is often blocked by its large-scale parameters and high\ncomputational cost. There are plenty of studies showing that the knowledge\ndistillation is efficient in transferring the knowledge from BERT into the\nmodel with a smaller size of parameters. Nevertheless, current BERT\ndistillation approaches mainly focus on task-specified distillation, such\nmethodologies lead to the loss of the general semantic knowledge of BERT for\nuniversal-usability. In this paper, we propose a sentence representation\napproximating oriented distillation framework that can distill the pre-trained\nBERT into a simple LSTM based model without specifying tasks. Consistent with\nBERT, our distilled model is able to perform transfer learning via fine-tuning\nto adapt to any sentence-level downstream task. Besides, our model can further\ncooperate with task-specific distillation procedures. The experimental results\non multiple NLP tasks from the GLUE benchmark show that our approach\noutperforms other task-specific distillation methods or even much larger\nmodels, i.e., ELMO, with efficiency well-improved.", + "authors": "Bowen Wu, Huan Zhang, Mengyuan Li, Zongsheng Wang, Qihang Feng, Junhong Huang, Baoxun Wang", + "published": "2020-04-07", + "updated": "2020-04-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2108.12905v1", + "title": "Lipschitz Continuity Guided Knowledge Distillation", + "abstract": "Knowledge distillation has become one of the most important model compression\ntechniques by distilling knowledge from larger teacher networks to smaller\nstudent ones. Although great success has been achieved by prior distillation\nmethods via delicately designing various types of knowledge, they overlook the\nfunctional properties of neural networks, which makes the process of applying\nthose techniques to new tasks unreliable and non-trivial. 
To alleviate such\nproblem, in this paper, we initially leverage Lipschitz continuity to better\nrepresent the functional characteristic of neural networks and guide the\nknowledge distillation process. In particular, we propose a novel Lipschitz\nContinuity Guided Knowledge Distillation framework to faithfully distill\nknowledge by minimizing the distance between two neural networks' Lipschitz\nconstants, which enables teacher networks to better regularize student networks\nand improve the corresponding performance. We derive an explainable\napproximation algorithm with an explicit theoretical derivation to address the\nNP-hard problem of calculating the Lipschitz constant. Experimental results\nhave shown that our method outperforms other benchmarks over several knowledge\ndistillation tasks (e.g., classification, segmentation and object detection) on\nCIFAR-100, ImageNet, and PASCAL VOC datasets.", + "authors": "Yuzhang Shang, Bin Duan, Ziliang Zong, Liqiang Nie, Yan Yan", + "published": "2021-08-29", + "updated": "2021-08-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2312.06899v1", + "title": "LoRA-Enhanced Distillation on Guided Diffusion Models", + "abstract": "Diffusion models, such as Stable Diffusion (SD), offer the ability to\ngenerate high-resolution images with diverse features, but they come at a\nsignificant computational and memory cost. In classifier-free guided diffusion\nmodels, prolonged inference times are attributed to the necessity of computing\ntwo separate diffusion models at each denoising step. Recent work has shown\npromise in improving inference time through distillation techniques, teaching\nthe model to perform similar denoising steps with reduced computations.\nHowever, the application of distillation introduces additional memory overhead\nto these already resource-intensive diffusion models, making it less practical.\n To address these challenges, our research explores a novel approach that\ncombines Low-Rank Adaptation (LoRA) with model distillation to efficiently\ncompress diffusion models. This approach not only reduces inference time but\nalso mitigates memory overhead, and notably decreases memory consumption even\nbefore applying distillation. The results are remarkable, featuring a\nsignificant reduction in inference time due to the distillation process and a\nsubstantial 50% reduction in memory consumption. Our examination of the\ngenerated images underscores that the incorporation of LoRA-enhanced\ndistillation maintains image quality and alignment with the provided prompts.\nIn summary, while conventional distillation tends to increase memory\nconsumption, LoRA-enhanced distillation offers optimization without any\ntrade-offs or compromises in quality.", + "authors": "Pareesa Ameneh Golnari", + "published": "2023-12-12", + "updated": "2023-12-12", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2109.14960v3", + "title": "Prune Your Model Before Distill It", + "abstract": "Knowledge distillation transfers the knowledge from a cumbersome teacher to a\nsmall student. Recent results suggest that the student-friendly teacher is more\nappropriate to distill since it provides more transferable knowledge. 
In this\nwork, we propose the novel framework, \"prune, then distill,\" that prunes the\nmodel first to make it more transferrable and then distill it to the student.\nWe provide several exploratory examples where the pruned teacher teaches better\nthan the original unpruned networks. We further show theoretically that the\npruned teacher plays the role of regularizer in distillation, which reduces the\ngeneralization error. Based on this result, we propose a novel neural network\ncompression scheme where the student network is formed based on the pruned\nteacher and then apply the \"prune, then distill\" strategy. The code is\navailable at https://github.com/ososos888/prune-then-distill", + "authors": "Jinhyuk Park, Albert No", + "published": "2021-09-30", + "updated": "2022-07-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2303.05015v2", + "title": "Smooth and Stepwise Self-Distillation for Object Detection", + "abstract": "Distilling the structured information captured in feature maps has\ncontributed to improved results for object detection tasks, but requires\ncareful selection of baseline architectures and substantial pre-training.\nSelf-distillation addresses these limitations and has recently achieved\nstate-of-the-art performance for object detection despite making several\nsimplifying architectural assumptions. Building on this work, we propose Smooth\nand Stepwise Self-Distillation (SSSD) for object detection. Our SSSD\narchitecture forms an implicit teacher from object labels and a feature pyramid\nnetwork backbone to distill label-annotated feature maps using Jensen-Shannon\ndistance, which is smoother than distillation losses used in prior work. We\nadditionally add a distillation coefficient that is adaptively configured based\non the learning rate. We extensively benchmark SSSD against a baseline and two\nstate-of-the-art object detector architectures on the COCO dataset by varying\nthe coefficients and backbone and detector networks. We demonstrate that SSSD\nachieves higher average precision in most experimental settings, is robust to a\nwide range of coefficients, and benefits from our stepwise distillation\nprocedure.", + "authors": "Jieren Deng, Xin Zhou, Hao Tian, Zhihong Pan, Derek Aguiar", + "published": "2023-03-09", + "updated": "2024-01-19", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2301.01615v2", + "title": "StereoDistill: Pick the Cream from LiDAR for Distilling Stereo-based 3D Object Detection", + "abstract": "In this paper, we propose a cross-modal distillation method named\nStereoDistill to narrow the gap between the stereo and LiDAR-based approaches\nvia distilling the stereo detectors from the superior LiDAR model at the\nresponse level, which is usually overlooked in 3D object detection\ndistillation. The key designs of StereoDistill are: the X-component Guided\nDistillation~(XGD) for regression and the Cross-anchor Logit Distillation~(CLD)\nfor classification. In XGD, instead of empirically adopting a threshold to\nselect the high-quality teacher predictions as soft targets, we decompose the\npredicted 3D box into sub-components and retain the corresponding part for\ndistillation if the teacher component pilot is consistent with ground truth to\nlargely boost the number of positive predictions and alleviate the mimicking\ndifficulty of the student model. 
For CLD, we aggregate the probability\ndistribution of all anchors at the same position to encourage the highest\nprobability anchor rather than individually distill the distribution at the\nanchor level. Finally, our StereoDistill achieves state-of-the-art results for\nstereo-based 3D detection on the KITTI test benchmark and extensive experiments\non KITTI and Argoverse Dataset validate the effectiveness.", + "authors": "Zhe Liu, Xiaoqing Ye, Xiao Tan, Errui Ding, Xiang Bai", + "published": "2023-01-04", + "updated": "2023-01-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0607126v3", + "title": "Random bipartite entanglement from W and W-like states", + "abstract": "We describe a protocol for distilling maximally entangled bipartite states\nbetween random pairs of parties from those sharing a tripartite W state, and\nshow that, rather surprisingly, the total distillation rate (the total number\nof EPR pairs distilled per W, irrespective of who shares them) may be done at a\nhigher rate than distillation of bipartite entanglement between specified pairs\nof parties. Specifically, the optimal distillation rate for specified\nentanglement for the W has been previously shown to be the asymptotic\nentanglement of assistance of 0.92 EPR pairs per W, while our protocol can\nasymptotically distill 1 EPR pair per W between random pairs of parties, which\nwe conjecture to be optimal. We thus demonstrate a tradeoff between the overall\nasymptotic rate of EPR distillation and the distribution of final EPR pairs\nbetween parties. We further show that by increasing the number of parties in\nthe protocol that there exist states with fixed lower-bounded distillable\nentanglement for random parties but arbitrarily small distillable entanglement\nfor specified parties.", + "authors": "Ben Fortescue, Hoi-Kwong Lo", + "published": "2006-07-18", + "updated": "2007-02-23", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2405.00348v1", + "title": "Practical Dataset Distillation Based on Deep Support Vectors", + "abstract": "Conventional dataset distillation requires significant computational\nresources and assumes access to the entire dataset, an assumption impractical\nas it presumes all data resides on a central server. In this paper, we focus on\ndataset distillation in practical scenarios with access to only a fraction of\nthe entire dataset. We introduce a novel distillation method that augments the\nconventional process by incorporating general model knowledge via the addition\nof Deep KKT (DKKT) loss. In practical settings, our approach showed improved\nperformance compared to the baseline distribution matching distillation method\non the CIFAR-10 dataset. Additionally, we present experimental evidence that\nDeep Support Vectors (DSVs) offer unique information to the original\ndistillation, and their integration results in enhanced performance.", + "authors": "Hyunho Lee, Junhoo Lee, Nojun Kwak", + "published": "2024-05-01", + "updated": "2024-05-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2309.09920v1", + "title": "Distilling HuBERT with LSTMs via Decoupled Knowledge Distillation", + "abstract": "Much research effort is being applied to the task of compressing the\nknowledge of self-supervised models, which are powerful, yet large and memory\nconsuming. 
In this work, we show that the original method of knowledge\ndistillation (and its more recently proposed extension, decoupled knowledge\ndistillation) can be applied to the task of distilling HuBERT. In contrast to\nmethods that focus on distilling internal features, this allows for more\nfreedom in the network architecture of the compressed model. We thus propose to\ndistill HuBERT's Transformer layers into an LSTM-based distilled model that\nreduces the number of parameters even below DistilHuBERT and at the same time\nshows improved performance in automatic speech recognition.", + "authors": "Danilo de Oliveira, Timo Gerkmann", + "published": "2023-09-18", + "updated": "2023-09-18", + "primary_cat": "eess.AS", + "cats": [ + "eess.AS", + "cs.LG", + "cs.SD", + "eess.SP" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2307.08436v1", + "title": "DOT: A Distillation-Oriented Trainer", + "abstract": "Knowledge distillation transfers knowledge from a large model to a small one\nvia task and distillation losses. In this paper, we observe a trade-off between\ntask and distillation losses, i.e., introducing distillation loss limits the\nconvergence of task loss. We believe that the trade-off results from the\ninsufficient optimization of distillation loss. The reason is: The teacher has\na lower task loss than the student, and a lower distillation loss drives the\nstudent more similar to the teacher, then a better-converged task loss could be\nobtained. To break the trade-off, we propose the Distillation-Oriented Trainer\n(DOT). DOT separately considers gradients of task and distillation losses, then\napplies a larger momentum to distillation loss to accelerate its optimization.\nWe empirically prove that DOT breaks the trade-off, i.e., both losses are\nsufficiently optimized. Extensive experiments validate the superiority of DOT.\nNotably, DOT achieves a +2.59% accuracy improvement on ImageNet-1k for the\nResNet50-MobileNetV1 pair. Conclusively, DOT greatly benefits the student's\noptimization properties in terms of loss convergence and model generalization.\nCode will be made publicly available.", + "authors": "Borui Zhao, Quan Cui, Renjie Song, Jiajun Liang", + "published": "2023-07-17", + "updated": "2023-07-17", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/9809078v2", + "title": "A rigorous treatment of distillable entanglement", + "abstract": "The notion of distillable entanglement is one of the fundamental concepts of\nquantum information theory. Unfortunately, there is an apparent mismatch\nbetween the intuitive and rigorous definitions of distillable entanglement. To\nbe precise, the existing rigorous definitions impose the constraint that the\ndistilation protocol produce an output of constant dimension. It is therefore\nconceivable that this unnecessary constraint might have led to underestimation\nof the true distillable entanglement. We give a new definition of distillable\nentanglement which removes this constraint, but could conceivably overestimate\nthe true value. Since the definitions turn out to be equivalent, neither\nunderestimation nor overestimation is possible, and both definitions are\narguably correct", + "authors": "Eric M. 
Rains", + "published": "1998-09-24", + "updated": "1998-10-12", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2305.08076v1", + "title": "Improving Defensive Distillation using Teacher Assistant", + "abstract": "Adversarial attacks pose a significant threat to the security and safety of\ndeep neural networks being applied to modern applications. More specifically,\nin computer vision-based tasks, experts can use the knowledge of model\narchitecture to create adversarial samples imperceptible to the human eye.\nThese attacks can lead to security problems in popular applications such as\nself-driving cars, face recognition, etc. Hence, building networks which are\nrobust to such attacks is highly desirable and essential. Among the various\nmethods present in literature, defensive distillation has shown promise in\nrecent years. Using knowledge distillation, researchers have been able to\ncreate models robust against some of those attacks. However, more attacks have\nbeen developed exposing weakness in defensive distillation. In this project, we\nderive inspiration from teacher assistant knowledge distillation and propose\nthat introducing an assistant network can improve the robustness of the\ndistilled model. Through a series of experiments, we evaluate the distilled\nmodels for different distillation temperatures in terms of accuracy,\nsensitivity, and robustness. Our experiments demonstrate that the proposed\nhypothesis can improve robustness in most cases. Additionally, we show that\nmulti-step distillation can further improve robustness with very little impact\non model accuracy.", + "authors": "Maniratnam Mandal, Suna Gao", + "published": "2023-05-14", + "updated": "2023-05-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CR", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1504.05965v2", + "title": "Qutrit Magic State Distillation Tight in Some Directions", + "abstract": "Magic state distillation is a crucial component in the leading approaches to\nimplementing universal fault tolerant quantum computation, with existing\nprotocols for both qubit and higher dimensional systems. Early work focused on\ndetermining the region of distillable states for qubit protocols, yet\ncomparatively little is known about which states can be distilled and with what\ndistillable region for d>2. Here we focus on d=3 and present new four-qutrit\ndistillation schemes that improve upon the known distillable region, and\nachieve distillation tight to the boundary of undistillable states for some\nclasses of state. As a consequence of recent results, this implies that there\nis a family of quantum states that enable universality if and only if they\nexhibit contextuality with respect to stabilizer measurements. We also identify\na new routine whose fixed point is a magic state with maximal sum-negativity\ni.e., it is maximally non-stabilizer in a specific sense.", + "authors": "Hillary Dawkins, Mark Howard", + "published": "2015-04-22", + "updated": "2015-09-21", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2104.02857v2", + "title": "Soft-Label Anonymous Gastric X-ray Image Distillation", + "abstract": "This paper presents a soft-label anonymous gastric X-ray image distillation\nmethod based on a gradient descent approach. 
The sharing of medical data is\ndemanded to construct high-accuracy computer-aided diagnosis (CAD) systems.\nHowever, the large size of the medical dataset and privacy protection are\nremaining problems in medical data sharing, which hindered the research of CAD\nsystems. The idea of our distillation method is to extract the valid\ninformation of the medical dataset and generate a tiny distilled dataset that\nhas a different data distribution. Different from model distillation, our\nmethod aims to find the optimal distilled images, distilled labels and the\noptimized learning rate. Experimental results show that the proposed method can\nnot only effectively compress the medical dataset but also anonymize medical\nimages to protect the patient's private information. The proposed approach can\nimprove the efficiency and security of medical data sharing.", + "authors": "Guang Li, Ren Togo, Takahiro Ogawa, Miki Haseyama", + "published": "2021-04-07", + "updated": "2024-03-21", + "primary_cat": "eess.IV", + "cats": [ + "eess.IV", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2206.08491v1", + "title": "Revisiting Self-Distillation", + "abstract": "Knowledge distillation is the procedure of transferring \"knowledge\" from a\nlarge model (the teacher) to a more compact one (the student), often being used\nin the context of model compression. When both models have the same\narchitecture, this procedure is called self-distillation. Several works have\nanecdotally shown that a self-distilled student can outperform the teacher on\nheld-out data. In this work, we systematically study self-distillation in a\nnumber of settings. We first show that even with a highly accurate teacher,\nself-distillation allows a student to surpass the teacher in all cases.\nSecondly, we revisit existing theoretical explanations of (self) distillation\nand identify contradicting examples, revealing possible drawbacks of these\nexplanations. Finally, we provide an alternative explanation for the dynamics\nof self-distillation through the lens of loss landscape geometry. We conduct\nextensive experiments to show that self-distillation leads to flatter minima,\nthereby resulting in better generalization.", + "authors": "Minh Pham, Minsu Cho, Ameya Joshi, Chinmay Hegde", + "published": "2022-06-17", + "updated": "2022-06-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2310.18628v2", + "title": "Personalised Distillation: Empowering Open-Sourced LLMs with Adaptive Learning for Code Generation", + "abstract": "With the rise of powerful closed-sourced LLMs (ChatGPT, GPT-4), there are\nincreasing interests in distilling the capabilies of close-sourced LLMs to\nsmaller open-sourced LLMs. Previous distillation methods usually prompt ChatGPT\nto generate a set of instructions and answers, for the student model to learn.\nHowever, such standard distillation approach neglects the merits and conditions\nof the student model. Inspired by modern teaching principles, we design a\npersonalised distillation process, in which the student attempts to solve a\ntask first, then the teacher provides an adaptive refinement for the student to\nimprove. Instead of feeding the student with teacher's prior, personalised\ndistillation enables personalised learning for the student model, as it only\nlearns on examples it makes mistakes upon and learns to improve its own\nsolution. 
On code generation, personalised distillation consistently\noutperforms standard distillation with only one third of the data. With only\n2.5-3K personalised examples that incur a data-collection cost of 4-6$, we\nboost CodeGen-mono-16B by 7% to achieve 36.4% pass@1 and StarCoder by 12.2% to\nachieve 45.8% pass@1 on HumanEval.", + "authors": "Hailin Chen, Amrita Saha, Steven Hoi, Shafiq Joty", + "published": "2023-10-28", + "updated": "2024-01-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2403.10045v1", + "title": "Towards Adversarially Robust Dataset Distillation by Curvature Regularization", + "abstract": "Dataset distillation (DD) allows datasets to be distilled to fractions of\ntheir original size while preserving the rich distributional information so\nthat models trained on the distilled datasets can achieve a comparable accuracy\nwhile saving significant computational loads. Recent research in this area has\nbeen focusing on improving the accuracy of models trained on distilled\ndatasets. In this paper, we aim to explore a new perspective of DD. We study\nhow to embed adversarial robustness in distilled datasets, so that models\ntrained on these datasets maintain the high accuracy and meanwhile acquire\nbetter adversarial robustness. We propose a new method that achieves this goal\nby incorporating curvature regularization into the distillation process with\nmuch less computational overhead than standard adversarial training. Extensive\nempirical experiments suggest that our method not only outperforms standard\nadversarial training on both accuracy and robustness with less computation\noverhead but is also capable of generating robust distilled datasets that can\nwithstand various adversarial attacks.", + "authors": "Eric Xue, Yijiang Li, Haoyang Liu, Yifan Shen, Haohan Wang", + "published": "2024-03-15", + "updated": "2024-03-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2403.03846v1", + "title": "On the Effectiveness of Distillation in Mitigating Backdoors in Pre-trained Encoder", + "abstract": "In this paper, we study a defense against poisoned encoders in SSL called\ndistillation, which is a defense used in supervised learning originally.\nDistillation aims to distill knowledge from a given model (a.k.a the teacher\nnet) and transfer it to another (a.k.a the student net). Now, we use it to\ndistill benign knowledge from poisoned pre-trained encoders and transfer it to\na new encoder, resulting in a clean pre-trained encoder. In particular, we\nconduct an empirical study on the effectiveness and performance of distillation\nagainst poisoned encoders. Using two state-of-the-art backdoor attacks against\npre-trained image encoders and four commonly used image classification\ndatasets, our experimental results show that distillation can reduce attack\nsuccess rate from 80.87% to 27.51% while suffering a 6.35% loss in accuracy.\nMoreover, we investigate the impact of three core components of distillation on\nperformance: teacher net, student net, and distillation loss. 
By comparing 4\ndifferent teacher nets, 3 student nets, and 6 distillation losses, we find that\nfine-tuned teacher nets, warm-up-training-based student nets, and\nattention-based distillation loss perform best, respectively.", + "authors": "Tingxu Han, Shenghan Huang, Ziqi Ding, Weisong Sun, Yebo Feng, Chunrong Fang, Jun Li, Hanwei Qian, Cong Wu, Quanjun Zhang, Yang Liu, Zhenyu Chen", + "published": "2024-03-06", + "updated": "2024-03-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0108029v1", + "title": "Distillability, Bell inequalities and multiparticle bound entanglement", + "abstract": "We study the relation between violation of Bell inequalities and\ndistillability properties of quantum states. Recently, D\\\"ur has shown that\nthere are some multiparticle bound entangled states, non-separable and\nnon-distillable, that violate a Bell inequality. We prove that for all the\nstates violating this inequality there exist at least one splitting of the\nparties into two groups such that some pure-state entanglement can be\ndistilled, obtaining a connection between Bell inequalities and bipartite\ndistillable entanglement.", + "authors": "A. Acin", + "published": "2001-08-07", + "updated": "2001-08-07", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.06170v1", + "title": "CLIP-Embed-KD: Computationally Efficient Knowledge Distillation Using Embeddings as Teachers", + "abstract": "Contrastive Language-Image Pre-training (CLIP) has been shown to improve\nzero-shot generalization capabilities of language and vision models. In this\npaper, we extend CLIP for efficient knowledge distillation, by utilizing\nembeddings as teachers. Typical knowledge distillation frameworks require\nrunning forward passes through a teacher model, which is often prohibitive in\nthe case of billion or trillion parameter teachers. In these cases, using only\nthe embeddings of the teacher models to guide the distillation can yield\nsignificant computational savings. Our preliminary findings show that\nCLIP-based knowledge distillation with embeddings can outperform full scale\nknowledge distillation using $9\\times$ less memory and $8\\times$ less training\ntime. Code available at: https://github.com/lnairGT/CLIP-Distillation/", + "authors": "Lakshmi Nair", + "published": "2024-04-09", + "updated": "2024-04-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2205.16004v3", + "title": "What Knowledge Gets Distilled in Knowledge Distillation?", + "abstract": "Knowledge distillation aims to transfer useful information from a teacher\nnetwork to a student network, with the primary goal of improving the student's\nperformance for the task at hand. Over the years, there has a been a deluge of\nnovel techniques and use cases of knowledge distillation. Yet, despite the\nvarious improvements, there seems to be a glaring gap in the community's\nfundamental understanding of the process. Specifically, what is the knowledge\nthat gets distilled in knowledge distillation? In other words, in what ways\ndoes the student become similar to the teacher? Does it start to localize\nobjects in the same way? Does it get fooled by the same adversarial samples?\nDoes its data invariance properties become similar? Our work presents a\ncomprehensive study to try to answer these questions. 
We show that existing\nmethods can indeed indirectly distill these properties beyond improving task\nperformance. We further study why knowledge distillation might work this way,\nand show that our findings have practical implications as well.", + "authors": "Utkarsh Ojha, Yuheng Li, Anirudh Sundara Rajan, Yingyu Liang, Yong Jae Lee", + "published": "2022-05-31", + "updated": "2023-11-06", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2011.06110v1", + "title": "Efficient Knowledge Distillation for RNN-Transducer Models", + "abstract": "Knowledge Distillation is an effective method of transferring knowledge from\na large model to a smaller model. Distillation can be viewed as a type of model\ncompression, and has played an important role for on-device ASR applications.\nIn this paper, we develop a distillation method for RNN-Transducer (RNN-T)\nmodels, a popular end-to-end neural network architecture for streaming speech\nrecognition. Our proposed distillation loss is simple and efficient, and uses\nonly the \"y\" and \"blank\" posterior probabilities from the RNN-T output\nprobability lattice. We study the effectiveness of the proposed approach in\nimproving the accuracy of sparse RNN-T models obtained by gradually pruning a\nlarger uncompressed model, which also serves as the teacher during\ndistillation. With distillation of 60% and 90% sparse multi-domain RNN-T\nmodels, we obtain WER reductions of 4.3% and 12.1% respectively, on a noisy\nFarField eval set. We also present results of experiments on LibriSpeech, where\nthe introduction of the distillation loss yields a 4.8% relative WER reduction\non the test-other dataset for a small Conformer model.", + "authors": "Sankaran Panchapagesan, Daniel S. Park, Chung-Cheng Chiu, Yuan Shangguan, Qiao Liang, Alexander Gruenstein", + "published": "2020-11-11", + "updated": "2020-11-11", + "primary_cat": "eess.AS", + "cats": [ + "eess.AS", + "cs.SD" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/0908.2142v1", + "title": "Distillation of Bell states in open systems", + "abstract": "In this work we review the entire classification of 2x2 distillable states\nfor protocols with a finite numbers of copies. We show a distillation protocol\nthat allows to distill Bell states with non zero probability at any time for an\ninitial singlet in vacuum. It is shown that the same protocol used in non zero\nthermal baths yields a considerable recovering of entanglement.", + "authors": "E. Isasi, D. Mundarain", + "published": "2009-08-14", + "updated": "2009-08-14", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2306.06629v1", + "title": "GKD: A General Knowledge Distillation Framework for Large-scale Pre-trained Language Model", + "abstract": "Currently, the reduction in the parameter scale of large-scale pre-trained\nlanguage models (PLMs) through knowledge distillation has greatly facilitated\ntheir widespread deployment on various devices. However, the deployment of\nknowledge distillation systems faces great challenges in real-world\nindustrial-strength applications, which require the use of complex distillation\nmethods on even larger-scale PLMs (over 10B), limited by memory on GPUs and the\nswitching of methods. 
To overcome these challenges, we propose GKD, a general\nknowledge distillation framework that supports distillation on larger-scale\nPLMs using various distillation methods. With GKD, developers can build larger\ndistillation models on memory-limited GPUs and easily switch and combine\ndifferent distillation methods within a single framework. Experimental results\nshow that GKD can support the distillation of at least 100B-scale PLMs and 25\nmainstream methods on 8 NVIDIA A100 (40GB) GPUs.", + "authors": "Shicheng Tan, Weng Lam Tam, Yuanchun Wang, Wenwen Gong, Yang Yang, Hongyin Tang, Keqing He, Jiahao Liu, Jingang Wang, Shu Zhao, Peng Zhang, Jie Tang", + "published": "2023-06-11", + "updated": "2023-06-11", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2112.10047v1", + "title": "Controlling the Quality of Distillation in Response-Based Network Compression", + "abstract": "The performance of a distillation-based compressed network is governed by the\nquality of distillation. The reason for the suboptimal distillation of a large\nnetwork (teacher) to a smaller network (student) is largely attributed to the\ngap in the learning capacities of given teacher-student pair. While it is hard\nto distill all the knowledge of a teacher, the quality of distillation can be\ncontrolled to a large extent to achieve better performance. Our experiments\nshow that the quality of distillation is largely governed by the quality of\nteacher's response, which in turn is heavily affected by the presence of\nsimilarity information in its response. A well-trained large capacity teacher\nloses similarity information between classes in the process of learning\nfine-grained discriminative properties for classification. The absence of\nsimilarity information causes the distillation process to be reduced from one\nexample-many class learning to one example-one class learning, thereby\nthrottling the flow of diverse knowledge from the teacher. With the implicit\nassumption that only the instilled knowledge can be distilled, instead of\nfocusing only on the knowledge distilling process, we scrutinize the knowledge\ninculcation process. We argue that for a given teacher-student pair, the\nquality of distillation can be improved by finding the sweet spot between batch\nsize and number of epochs while training the teacher. We discuss the steps to\nfind this sweet spot for better distillation. We also propose the distillation\nhypothesis to differentiate the behavior of the distillation process between\nknowledge distillation and regularization effect. We conduct all our\nexperiments on three different datasets.", + "authors": "Vibhas Vats, David Crandall", + "published": "2021-12-19", + "updated": "2021-12-19", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2007.09029v1", + "title": "Knowledge Distillation in Deep Learning and its Applications", + "abstract": "Deep learning based models are relatively large, and it is hard to deploy\nsuch models on resource-limited devices such as mobile phones and embedded\ndevices. One possible solution is knowledge distillation whereby a smaller\nmodel (student model) is trained by utilizing the information from a larger\nmodel (teacher model). In this paper, we present a survey of knowledge\ndistillation techniques applied to deep learning models. 
To compare the\nperformances of different techniques, we propose a new metric called\ndistillation metric. Distillation metric compares different knowledge\ndistillation algorithms based on sizes and accuracy scores. Based on the\nsurvey, some interesting conclusions are drawn and presented in this paper.", + "authors": "Abdolmaged Alkhulaifi, Fahad Alsahli, Irfan Ahmad", + "published": "2020-07-17", + "updated": "2020-07-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/9908047v2", + "title": "On bound entanglement assisted distillation", + "abstract": "We investigate asymptotic distillation of entanglement in the presence of an\nunlimited amount of bound entanglement for bi-partite systems. We show that the\ndistillability is still bounded by the relative entropy of entanglement. This\noffers a strong support to the fact that bound entanglement does not improve\ndistillation of entanglement.", + "authors": "V. Vedral", + "published": "1999-08-14", + "updated": "1999-11-17", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0303009v2", + "title": "Security bounds in Quantum Cryptography using d-level systems", + "abstract": "We analyze the security of quantum cryptography schemes for $d$-level systems\nusing 2 or $d+1$ maximally conjugated bases, under individual eavesdropping\nattacks based on cloning machines and measurement after the basis\nreconciliation. We consider classical advantage distillation protocols, that\nallow to extract a key even in situations where the mutual information between\nthe honest parties is smaller than the eavesdropper's information. In this\nscenario, advantage distillation protocols are shown to be as powerful as\nquantum distillation: key distillation is possible using classical techniques\nif and only if the corresponding state in the entanglement based protocol is\ndistillable.", + "authors": "Antonio Acin, Nicolas Gisin, Valerio Scarani", + "published": "2003-03-03", + "updated": "2003-11-03", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/0704.3661v1", + "title": "Complementarity, distillable secret key, and distillable entanglement", + "abstract": "We consider controllability of two conjugate observables Z and X by two\nparties with classical communication. The ability is specified by two\nalternative tasks, (i) agreement on Z and (ii) preparation of an eigenstate of\nX with use of an extra communication channel. We prove that their feasibility\nis equivalent to that of key distillation if the extra channel is quantum, and\nto that of entanglement distillation if it is classical. This clarifies the\ndistinction between two entanglement measures, distillable key and distillable\nentanglement.", + "authors": "Masato Koashi", + "published": "2007-04-27", + "updated": "2007-04-27", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1812.00249v1", + "title": "On Compressing U-net Using Knowledge Distillation", + "abstract": "We study the use of knowledge distillation to compress the U-net\narchitecture. 
We show that, while standard distillation is not sufficient to\nreliably train a compressed U-net, introducing other regularization methods,\nsuch as batch normalization and class re-weighting, in knowledge distillation\nsignificantly improves the training process. This allows us to compress a U-net\nby over 1000x, i.e., to 0.1% of its original number of parameters, at a\nnegligible decrease in performance.", + "authors": "Karttikeya Mangalam, Mathieu Salzamann", + "published": "2018-12-01", + "updated": "2018-12-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + } +] \ No newline at end of file