diff --git "a/abs_29K_G/test_abstract_long_2405.02730v1.json" "b/abs_29K_G/test_abstract_long_2405.02730v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.02730v1.json" @@ -0,0 +1,330 @@ +{ + "url": "http://arxiv.org/abs/2405.02730v1", + "title": "U-DiTs: Downsample Tokens in U-Shaped Diffusion Transformers", + "abstract": "Diffusion Transformers (DiTs) introduce the transformer architecture to\ndiffusion tasks for latent-space image generation. With an isotropic\narchitecture that chains a series of transformer blocks, DiTs demonstrate\ncompetitive performance and good scalability; but meanwhile, the abandonment of\nU-Net by DiTs and their following improvements is worth rethinking. To this\nend, we conduct a simple toy experiment by comparing a U-Net architectured DiT\nwith an isotropic one. It turns out that the U-Net architecture only gain a\nslight advantage amid the U-Net inductive bias, indicating potential\nredundancies within the U-Net-style DiT. Inspired by the discovery that U-Net\nbackbone features are low-frequency-dominated, we perform token downsampling on\nthe query-key-value tuple for self-attention and bring further improvements\ndespite a considerable amount of reduction in computation. Based on\nself-attention with downsampled tokens, we propose a series of U-shaped DiTs\n(U-DiTs) in the paper and conduct extensive experiments to demonstrate the\nextraordinary performance of U-DiT models. The proposed U-DiT could outperform\nDiT-XL/2 with only 1/6 of its computation cost. Codes are available at\nhttps://github.com/YuchuanTian/U-DiT.", + "authors": "Yuchuan Tian, Zhijun Tu, Hanting Chen, Jie Hu, Chao Xu, Yunhe Wang", + "published": "2024-05-04", + "updated": "2024-05-04", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Diffusion Transformers (DiTs) introduce the transformer architecture to\ndiffusion tasks for latent-space image generation. With an isotropic\narchitecture that chains a series of transformer blocks, DiTs demonstrate\ncompetitive performance and good scalability; but meanwhile, the abandonment of\nU-Net by DiTs and their following improvements is worth rethinking. To this\nend, we conduct a simple toy experiment by comparing a U-Net architectured DiT\nwith an isotropic one. It turns out that the U-Net architecture only gain a\nslight advantage amid the U-Net inductive bias, indicating potential\nredundancies within the U-Net-style DiT. Inspired by the discovery that U-Net\nbackbone features are low-frequency-dominated, we perform token downsampling on\nthe query-key-value tuple for self-attention and bring further improvements\ndespite a considerable amount of reduction in computation. Based on\nself-attention with downsampled tokens, we propose a series of U-shaped DiTs\n(U-DiTs) in the paper and conduct extensive experiments to demonstrate the\nextraordinary performance of U-DiT models. The proposed U-DiT could outperform\nDiT-XL/2 with only 1/6 of its computation cost. Codes are available at\nhttps://github.com/YuchuanTian/U-DiT.", + "main_content": "Introduction Thanks to the attention mechanism that establishes long-range spatial dependencies, Transformers [32] are proved highly effective on various vision tasks including image classification [13], object detection [5], segmentation [37], and image restoration [6]. 
DiTs [24] introduce full transformer backbones to diffusion, which demonstrate outstanding performance and scalability on image-space and latent-space generation tasks. Recent follow-up works have demonstrated the promising prospect of diffusion transformers by extending their applications to flexible-resolution image generation [22], realistic video generation [2], et cetera. Interestingly, DiTs have discarded the U-Net architecture [26] that is universally applied in manifold previous works, either in pixel [17; 11] or latent space [25]. The use of isotropic architectures in DiTs is indeed successful, as scaled-up DiT models achieve supreme performance. However, the abandonment of the widely-applied U-Net architecture by DiTs and their improvements [16; 8; 22] on latent-space image generation tasks triggers our curiosity, because the U-Net inductive bias is always believed to help denoising. Hence, we rethink deploying DiTs on a canonical U-Net architecture. In order to experiment with the combination of U-Net with DiT, we first propose a naive DiT in U-Net style (DiT-UNet) and compare it with an isotropic DiT of similar size. Results turn out that DiT-UNets are merely comparable to DiTs at similar computation costs. From this toy experiment, it \u2217Equal Contribution. \u2020Corresponding Author. Preprint. Under review. arXiv:2405.02730v1 [cs.CV] 4 May 2024 \f101 102 Transformer GFLOPs 10 20 30 40 50 60 70 FID-50K DiT SiT SiT-LLAMA U-DiT (Ours) Figure 1: Comparing U-DiTs with DiTs and their improvements. We plot FID-50K versus denoiser GFLOPs (in log scale) after 400K training steps. U-DiTs could achieve better performance than its counterparts. 200 400 600 800 Training Iterations (K) 0 10 20 30 40 50 60 FID-50K DiT-B/2 DiT-L/2 DiT-XL/2 U-DiT-B U-DiT-L Figure 2: The performance of U-DiTs and DiTs of various size. U-DiTs perform consistently better than DiTs with the increase of training steps. The marker size represents the computation cost of the model qualitatively. is inferred that the inductive bias of U-Net is not fully leveraged when U-Nets and plain transformer blocks are simply combined. Hence, we rethink the self-attention mechanism in DiT-UNet. The backbone in a latent U-Net denoiser provides a feature where low-frequency components dominate [27]. The discovery implies the existence of redundancies in backbone features: the attention module in the U-Net diffuser should highlight low-frequency domains. As previous theories praised downsampling for filtering high-frequency noises in diffusion [35], we seek to leverage this natural low-pass filter by performing token downsampling on the features for self-attention. Unlike previous transformer works [15; 38; 28] that downsample key-value pairs only, we radically downsample the query-key-value tuple altogether, such that self-attention is performed among downsampled latent tokens. It is surprising that when we incorporate self-attention with downsampled tokens into DiT-UNet, better results are achieved on latent U-Net diffusers with a significant reduction of computation. Based on this discovery, we scale U-Nets with downsampled self-attention up and propose a series of State-of-the-Art U-shaped Diffusion Transformers (U-DiTs). We conduct manifold experiments to verify the outstanding performance and scalability of our U-DiT models over isotropic DiTs. As shown in Fig. 1 & Fig. 2, U-DiTs could outperform DiTs by large margins. 
Amazingly, the proposed U-DiT model could perform better than DiT-XL/2 which is 6 times larger in terms of FLOPs. 2 Preliminaries Vision Transformers. ViTs [13] have introduced a transformer backbone to vision tasks by patchifying the input and viewing an image as a sequence of patch tokens and have proved its effectiveness on large-scale image classification tasks. While ViTs adopt an isotropic architecture, some following works on vision transformers [33; 21] propose a pyramid-like hierarchical architecture that gradually downsamples the feature. The pyramid architecture has proved highly effective in classification and other downstream tasks. Vision transformers are also mainstream backbones for denoising models. IPT [6] introduces an isotropic transformer backbone for denoising and other low-level tasks. Some later works [19; 18; 7] follow the isotropic convention, but other denoising works [34; 36] shift to U-Net backbones as their design. The pioneering work of U-ViT [1] and DiT [24] introduces full-transformer backbones to diffusion as denoisers. Recent Advancements in Diffusion Transformers. Following DiTs, some works investigate the training and diffusion [14; 23] strategies of Diffusion Transformers. Other works focus on the design of the DiT backbone. DiffiT [16] introduces a new fusion method for conditions; FiT [22] and VisionLLaMA [8] strengthens DiT by introducing LLM tricks including RoPE2D [30] and SwishGLU. These transformer-based diffusion works agree on adopting isotropic architectures on latents, i.e. the latent feature space is not downsampled throughout the whole diffusion model. The authors of DiT [24] even regard the inductive bias of U-Net as \u201cnot crucial\u201d. 2 \fNoised Latent 32\u00d732\u00d74 Embed Transformer Block Transformer Block Transformer Block Layer Norm Linear and Reshape Noise 32\u00d732\u00d74 \u2211 32\u00d732\u00d74 Transformer Block .... (a) DiT Noised Latent 32\u00d732\u00d74 Embed Transformer Block Transformer Block Layer Norm Linear and Reshape Noise 32\u00d732\u00d74 \u2211 32\u00d732\u00d74 Transformer Block Transformer Block Transformer Block (b) DiT-UNet Layer Norm MHSA Layer Norm FFN Noised Latent 32\u00d732\u00d74 Transformer Block Transformer Block Layer Norm Linear and Reshape Noise 32\u00d732\u00d74 \u2211 32\u00d732\u00d74 Transformer Block Transformer Block Transformer Block (c) U-DiT (Ours) Layer Norm MHSA Layer Norm Embed Downsampler FFN Figure 3: The evolution from the DiT to the proposed U-DiT. Left (a): the original DiT, which uses an isotropic architecture. Middle (b): DiT-UNet, which is a plain U-Net-style DiT. We try this as a simple combination of DiT and U-Net in the toy experiment. Right (c): the proposed U-DiT. We propose to downsample the input features for self-attention. The downsampling operation could amazingly improve DiT-UNet with a huge cut on the amount of computation. U-Nets for Diffusion. From canonical works [17; 29; 11; 25], the design philosophy of U-Net [26] is generally accepted in diffusion. Specifically, Stable Diffusion [25] uses a U-Net-based denoiser on the compressed latent space for high-resolution image synthesis, which is highly successful in manifold generative tasks. Some previous trials on diffusion transformers [4; 16; 9] also adopt U-Net on pixel-space generation tasks; but strangely, they shifted to isotropic DiT-like structures for latent-space diffusion. 
Despite its popularity in pixel-space diffusion, the U-Net architecture is not widely accepted in recent transformer-oriented works on latent-space diffusion. Motivated by this, we are dedicated to investigating the potential of Transformer-backboned U-Net on latent-space diffusion. 3 Investigating U-Net DiTs in Latent As is recapped, the U-Net architecture is widely adopted in diffusion applications; theoretical evaluations on U-Net denoisers also reveal their advantage, as downsampling U-Net stage transitions could filter noises that dominate high frequencies [35]. The unprecedented desertion of isotropic architectures for latent diffusion transformers is thus counter-intuitive. We are rethinking and elucidating the potentials of transformer-backboned U-Net denoisers in latent diffusion via a toy experiment. A canonical U-Net-style DiT. To start with, we propose a naive Transformer-backboned U-Net denoiser named DiT-UNet by embedding DiT blocks into a canonical U-Net architecture. Following previous U-Net designs, The DiT-UNet consists of an encoder and a decoder with an equal number of stages. When the encoder processes the input image by downsampling the image as stage-level amounts, the decoder scales up the encoded image from the most compressed stage to input size. At each encoder stage transition, spatial downsampling by the factor of 2 is performed while the feature dimension is doubled as well. Skip connections are provided at each stage transition. The skipped feature is concatenated and fused with the upsampled output from the previous decoder stage, replenishing information loss to decoders brought by feature downsampling. Considering the small, cramped latent space (32\u00d7 32 for 256\u00d7256-sized generation), we designate 3 stages in total, i.e. the feature is downsampled two times and subsequently recovered to its original size. In order to fit time and condition embeddings for various feature dimensions across multiscale stages, we use independent embedders for respective stages. In addition, we avoid patchifying the latent, as the U-Net architecture itself downsamples the latent space and there is no need for further spatial compression. 3 \fVia toy experiments, we compare the proposed U-Net-style DiT with the original DiT that adopts an isotropic architecture. In order to align the model with the DiT design, we repeatedly use plain DiT blocks in each stage. Each DiT block includes a self-attention module as the token mixer and a two-layer feed-forward network as the channel mixer. We conduct the experiment by training the U-Net-Style DiT for 400K iterations and compare it with DiT-S/4 which is comparable in size. All training hyperparameters are kept unchanged. It occurs that the U-Net style DiT only gains a limited advantage over the original isotropic DiT. The inductive bias of U-Net is insufficiently utilized. ImageNet 256\u00d7256 Model GFLOPs FID\u2193 sFID\u2193 IS\u2191 Precision\u2191 Recall\u2191 DiT-S/4 1.41 97.85 21.19 13.27 0.26 0.41 DiT-UNet 1.40 93.48 20.41 14.20 0.27 0.42 + Token Downsampling 0.90 89.43 21.36 15.13 0.29 0.44 Table 1: Toy experiments on U-Net-style DiTs. The naive DiT-UNet performs slightly better than the isotropic DiT-S/4; but interestingly, when we apply token downsampling for self-attention, the DiT-UNet performs better with fewer costs. Improved U-Net-style DiT via token downsampling. In seeking to incorporate attention in transformers to diffusion U-Nets better, we review the role of the U-Net backbone as the diffusion denoiser. 
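Before moving on, the DiT-UNet used in the toy experiment can be made concrete with a minimal sketch, assuming PyTorch. It follows the textual description above (three stages, 2x spatial downsampling with doubled width at each encoder transition, skip connections fused by concatenation, independent per-stage timestep embedders, no patchification of the latent), but the strided-conv stage transitions, the additive timestep injection and the class names are our simplifications rather than the released implementation; DiT-style adaLN conditioning and label embedding are omitted for brevity.

# Minimal DiT-UNet sketch; illustration only, see the hedges above.
import torch
import torch.nn as nn


class TransformerBlock(nn.Module):
    """Plain DiT-style block: LN -> MHSA -> LN -> two-layer FFN, pre-norm residual."""
    def __init__(self, dim, heads=4, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, mlp_ratio * dim), nn.GELU(),
                                 nn.Linear(mlp_ratio * dim, dim))

    def forward(self, x):                        # x: (B, N, C) tokens
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.ffn(self.norm2(x))


class DiTUNet(nn.Module):
    def __init__(self, in_ch=4, dim=64, depth=2, heads=4):
        super().__init__()
        dims = [dim, dim * 2, dim * 4]                         # width doubles per stage
        stack = lambda d: nn.Sequential(*[TransformerBlock(d, heads) for _ in range(depth)])
        self.embed = nn.Conv2d(in_ch, dim, 1)                  # no patchification of the latent
        self.t_embed = nn.ModuleList([nn.Linear(1, d) for d in dims])  # independent embedders
        self.enc = nn.ModuleList([stack(d) for d in dims[:2]])
        self.mid = stack(dims[2])
        self.dec = nn.ModuleList([stack(d) for d in dims[:2]])
        self.down = nn.ModuleList([nn.Conv2d(dims[i], dims[i + 1], 2, 2) for i in range(2)])
        self.up = nn.ModuleList([nn.ConvTranspose2d(dims[i + 1], dims[i], 2, 2) for i in range(2)])
        self.fuse = nn.ModuleList([nn.Conv2d(2 * dims[i], dims[i], 1) for i in range(2)])
        self.head = nn.Conv2d(dim, in_ch, 1)

    @staticmethod
    def run(blocks, x, t_emb):                   # apply transformer blocks to a 2D feature map
        b, c, h, w = x.shape
        tok = x.flatten(2).transpose(1, 2) + t_emb[:, None, :]
        return blocks(tok).transpose(1, 2).reshape(b, c, h, w)

    def forward(self, x, t):                     # x: (B, 4, 32, 32) latent, t: (B,) timesteps
        t = t.float().view(-1, 1)
        x, skips = self.embed(x), []
        for i in range(2):                       # encoder: blocks, keep skip, downsample
            x = self.run(self.enc[i], x, self.t_embed[i](t))
            skips.append(x)
            x = self.down[i](x)
        x = self.run(self.mid, x, self.t_embed[2](t))
        for i in (1, 0):                         # decoder: upsample, fuse skip, blocks
            x = self.fuse[i](torch.cat([self.up[i](x), skips[i]], dim=1))
            x = self.run(self.dec[i], x, self.t_embed[i](t))
        return self.head(x)


if __name__ == "__main__":
    net = DiTUNet()
    print(net(torch.randn(2, 4, 32, 32), torch.randint(0, 1000, (2,))).shape)  # (2, 4, 32, 32)

The point of the skeleton is only to fix the stage layout and the skip topology; the blocks inside each stage remain plain DiT blocks, which is exactly the configuration compared in Tab. 1.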
A recent work on latent diffusion models [27] conducted frequency analysis on intermediate features from the U-Net backbone, and concluded that energy concentrates at the low-frequency domain. This frequency-domain discovery hints at potential redundancies in the backbone: the U-Net backbone should highlight the coarse object from a global perspective rather than the high-frequency details. Naturally, we resort to attention with downsampled tokens. The operation of downsampling is a natural low-pass filter that discards high-frequency components. The low-pass feature of downsampling has been investigated under the diffusion scenario, which concludes that downsampling helps denoisers in diffusion as it automatically \u201cdiscards those higher-frequency subspaces which are dominated by noise\u201d [35]. Hence, we opt to downsample tokens for attention. In fact, attention to downsampled tokens is not new. Previous works regarding vision transformers [15; 38] have proposed methods to downsample key-value pairs for computation cost reduction. Recent work on training-free acceleration of diffusion [28] also applies key-value downsampling on Stable Diffusion models. But these works maintain the number of queries, and thus the downsampling operation is not completely performed. Besides, these downsampling measures usually involves a reduction of tensor size, which could result in a significant loss in information. Different from these works, we propose a simple yet radical token downsampling method for DiTUNets: we downsample queries, keys, and values at the same time for diffusion-friendly self-attention, but meanwhile we keep the overall tensor size to avoid information loss. The procedure is detailed as follows: the feature-map input is first converted into four 2\u00d7 downsampled features by the downsampler (the downsampler design is detailed in Sec. 4.2). Then, the downsampled features are mapped to Q, K, V for self-attention. Self-attention is performed within each downsampled feature. After the attention operation, the downsampled tokens are spatially merged as a unity to recover the original number of tokens. Notably, the feature dimension is kept intact during the whole process. Unlike U-Net downsampling, we are not reducing or increasing the number of elements in the feature during the downsampling process. Rather, we send four downsampled tokens into self-attention in a parallel manner. Self-attention with downsampled tokens does help DiT-UNets on the task of latent diffusion. As shown in Tab. 1, the substitution of downsampled self-attention to full-scale self-attention brings slight improvement in the Fr\u00e9chet Inception Distance (FID) metric despite a significant reduction in FLOPs. Complexity analysis. Apart from the performance benefits, we are aware that downsampled selfattention could save as much as 1/3 of the overall computation cost compared to full-scale selfattention. We conduct a brief computation complexity analysis on the self-attention mechanism to explain where the savings come from. Given an input feature of size N \u00d7 N and dimension d, we denote Q, K, V \u2208RN 2\u00d7d as mapped query-key-value tuples. The complexity of self-attention is analyzed as: 4 \fX = AV |{z} O(N 4D) s.t. A = Softmax \u0000QKT \u0001 | {z } O(N 4D) . In the proposed self-attention on downsampled tokens, four sets of downsampled query-key-value tuples 4\u00d7(Q\u21932, K\u21932, V\u21932) \u2208R( N 2 )2\u00d7d performs self-attention respectively. 
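Before quantifying the savings, the four-way downsampled attention just described can be sketched as follows, assuming PyTorch. The 2x2 strided slicing plays the role of the downsampler (the actual U-DiT downsampler adds a depthwise convolution with a shortcut, see Sec. 4.2), and the module name and default shapes are ours for illustration.

# Sketch of self-attention over downsampled tokens; illustration only.
import torch
import torch.nn as nn


class DownsampledSelfAttention(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                            # x: (B, C, H, W) latent feature map
        B, C, H, W = x.shape
        # 1) split the map into four 2x-downsampled sub-features; nothing is discarded,
        #    so the overall tensor size (and information budget) is preserved
        subs = [x[:, :, dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)]
        y = torch.stack(subs, dim=1).flatten(0, 1)   # (4B, C, H/2, W/2)
        # 2) Q, K and V are all taken from the downsampled tokens; attention runs inside
        #    each sub-feature, so each call is roughly 1/16 of full attention and the
        #    four calls together roughly 1/4, per the surrounding complexity analysis
        tok = y.flatten(2).transpose(1, 2)           # (4B, HW/4, C)
        tok = self.attn(tok, tok, tok, need_weights=False)[0]
        y = tok.transpose(1, 2).reshape(B, 2, 2, C, H // 2, W // 2)
        # 3) spatially merge the four sub-features to recover the original token count
        return y.permute(0, 3, 4, 1, 5, 2).reshape(B, C, H, W)


if __name__ == "__main__":
    m = DownsampledSelfAttention(dim=64)
    print(m(torch.randn(2, 64, 32, 32)).shape)       # torch.Size([2, 64, 32, 32])

Because the merge step restores the original token count and feature dimension, the module is a drop-in replacement for full-scale self-attention inside a DiT block.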
While each self-attention operation costs only 1/16 of full-scale self-attention, the total cost for downsampled self-attention is 1/4 of full-scale self-attention. 3/4 of the computation costs by self-attention is saved via token downsampling. In a nutshell, we show from toy experiments that the redundancy of DiT-UNet is reduced by downsampling the tokens for self-attention. 4 Scaling the Model Up Based on the discovery in our toy experiment, we propose a series of U-shaped DiTs (U-DiT) by applying the downsampled self-attention (proposed in Sec. 3) and scaling U-Net-Style DiT up. Settings. We adopt the training setting of DiT. The same VAE (i.e. sd-vae-ft-ema) for latent diffusion models [25] and the AdamW optimizer is adopted. The training hyperparameters are kept unchanged, including global batch size 256, learning rate 1e \u22124, weight decay 0, and global seed 0. The training is conducted with the training set of ImageNet 2012 [10]. Apart from the self-attention on downsampling as introduced in the toy experiment (Section 3), we further introduce a series of modifications to U-DiTs, including cosine similarity attention [20; 18], RoPE2D [30; 22; 8], depthwise conv FFN [34; 3; 38], and re-parametrization [12; 31]. The contribution of each modification is quantitatively evaluated in Sec. 6. 4.1 U-DiT at Larger Scales ImageNet 256\u00d7256 Model FLOPs(G) FID\u2193 sFID\u2193 IS\u2191 Precision\u2191 Recall\u2191 DiT-S/2 [24] 6.06 68.40 DiT-S/2\u2217 6.07 67.40 11.93 20.44 0.368 0.559 U-DiT-S (Ours) 6.04 31.51 8.97 51.62 0.543 0.633 DiT-L/4 [24] 19.70 45.64 DiT-L/4\u2217 19.70 46.10 9.17 31.05 0.472 0.612 DiT-B/2 [24] 23.01 43.47 DiT-B/2\u2217 23.02 42.84 8.24 33.66 0.491 0.629 U-DiT-B (Ours) 22.22 16.64 6.33 85.15 0.642 0.639 DiT-L/2 [24] 80.71 23.33 DiT-L/2\u2217 80.75 23.27 6.35 59.63 0.611 0.635 DiT-XL/2 [24] 118.64 19.47 DiT-XL/2\u2217 118.68 20.05 6.25 66.74 0.632 0.629 U-DiT-L (Ours) 85.00 10.08 5.21 112.44 0.702 0.631 Table 2: Comparing U-DiTs against DiTs on ImageNet 256\u00d7256 generation. Experiments with a supermark \u2217are replicated according to the official code of DiT. We compare models trained for 400K iterations with the standard training hyperparameters of DiT. The performance of U-DiTs is outstanding: U-DiT-B could beat DiT-XL/2 with only 1/6 of inference FLOPs; U-DiT-L could outcompete DiT-XL/2 by 10 FIDs. Comparison with DiTs and their improvements. In order to validate the effectiveness of the proposed U-DiT models beyond simple toy experiments, we scale them up and compare them with DiTs [24] of larger sizes. For a fair comparison, we use the same sets of training hyperparameters as DiT; all models are trained for 400K iterations. The results on ImageNet 256\u00d7256 are shown in Tab. 2, where we scale U-DiTs to \u223c6e9, \u223c20e9, \u223c80e9 FLOPs respectively and compare them with DiTs of similar computation costs. 5 \fIt could be concluded from Tab. 2 that all U-DiT models could outcompete their isotropic counterparts by considerable margins. Specifically, U-DiT-S and U-DiT-B could outperform DiTs of comparable size by \u223c30 FIDs; U-DiT-L could outperform DiT-XL/2 by \u223c10 FIDs. It is shocking that U-DiT-B could outcompete DiT-XL/2 with only 1/6 of the computation costs. To present the advantage of our method better, we also include the performance of U-DiTs in an FID-50K versus FLOPs plot (Fig. 1). 
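For reference, the Settings paragraph above translates roughly into the following training scaffold, assuming PyTorch, torchvision and the Hugging Face diffusers VAE. The data pipeline, the augmentation choices and the `diffusion_loss` placeholder are ours and stand in for the actual training script rather than reproducing it.

# Rough training scaffold for the stated hyperparameters; not the authors' script.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from diffusers import AutoencoderKL            # assumption: Hugging Face `diffusers` package


def diffusion_loss(model, latents, labels):
    """Placeholder for the DDPM noise-prediction loss used by DiT; elided here."""
    raise NotImplementedError


def train(model, data_root, steps=400_000, device="cuda"):
    torch.manual_seed(0)                                        # global seed 0
    model = model.to(device).train()
    vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-ema").to(device).eval()
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.0)
    tf = transforms.Compose([transforms.Resize(256), transforms.CenterCrop(256),
                             transforms.RandomHorizontalFlip(), transforms.ToTensor(),
                             transforms.Normalize([0.5] * 3, [0.5] * 3)])   # approximate augmentation
    loader = DataLoader(datasets.ImageFolder(data_root, tf), batch_size=256,
                        shuffle=True, num_workers=8, drop_last=True)
    it, data = 0, iter(loader)
    while it < steps:
        try:
            x, y = next(data)
        except StopIteration:
            data = iter(loader)
            continue
        x, y = x.to(device), y.to(device)
        with torch.no_grad():                                   # 32x32x4 latents, SD scaling factor
            z = vae.encode(x).latent_dist.sample().mul_(0.18215)
        loss = diffusion_loss(model, z, y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        it += 1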
Apart from DiTs and U-DiTs, we also include other state-of-the-art methods: SiT [23] that proposes an interpolant framework for DiTs, and SiT-LLaMA [8] that combines state-of-the-art DiT backbone VisionLLaMA and SiT. The advantages of U-DiTs over other baselines are prominent in the plot. The results highlight the extraordinary scalability of the proposed U-DiT models. U-DiTs are also performant in generation scenarios with classifier-free guidance. In Tab. 3, we compare U-DiTs with DiTs at cfg = 1.5. For a fair comparison, we train U-DiTs and DiTs for 400K iterations under identical settings. ImageNet 256\u00d7256 Model Cfg-Scale FLOPs(G) FID\u2193 sFID\u2193 IS\u2191 Precision\u2191 Recall\u2191 DiT-L/2\u2217 1.5 80.75 7.53 4.78 134.69 0.780 0.532 DiT-XL/2\u2217 1.5 118.68 6.24 4.66 150.10 0.794 0.514 U-DiT-B 1.5 22.22 4.26 4.74 199.18 0.825 0.507 U-DiT-L 1.5 85.00 3.37 4.49 246.03 0.862 0.502 Table 3: Generation performance with classifier-free guidance. We measure the performance of U-DiTs and DiTs at 400K training steps with cfg = 1.5. Experiments with a supermark \u2217are replicated according to the official code of DiT. U-DiTs are also performant on conditional generation. Extended training steps. We evacuate the potentials of U-DiTs by extending training steps to 1 Million. Fig. 2 further demonstrate that the advantage of U-DiTs is consistent at all training steps. As training steps gradually goes up to 1 Million, the performance of U-DiTs is improving (Tab. 4). We visualize the process where the image quality is gradually getting better (Fig. 4). Notably, U-DiT-L at only 600K training steps could outperform DiT-XL/2 at 7M training steps without classifier-free guidance. As additionally shown in Fig. 5, U-DiT models could conditionally generate authentic images at merely 1M iterations. U-DiT-B U-DiT-L 200K 400K 600K 800K 200K 400K 600K 800K Figure 4: Quality improvements of generated samples as training continues. We sample from U-DiT models trained for different numbers of iterations on ImageNet 256\u00d7256. More training does improve generation quality. Best viewed on screen. 4.2 Ablations The design of downsampler. The downsampling operation in the proposed U-DiT transforms a complete feature into multiple spatially downsampled features. Based on previous wisdom, we figured out that previous works either directly perform pixel shuffling, or apply a convolution layer before pixel shuffling. While we hold that it is much too rigid to shuffle pixels directly as downsampling, 6 \fImageNet 256\u00d7256 Model Training Steps FID\u2193 sFID\u2193 IS\u2191 Precision\u2191 Recall\u2191 DiT-XL/2 7M 9.62 U-DiT-B 200K 23.23 6.84 64.42 0.610 0.621 U-DiT-B 400K 16.64 6.33 85.15 0.642 0.639 U-DiT-B 600K 14.51 6.30 94.56 0.652 0.643 U-DiT-B 800K 13.53 6.27 98.99 0.654 0.645 U-DiT-B 1M 12.87 6.33 103.79 0.661 0.653 U-DiT-L 200K 15.26 5.60 86.01 0.685 0.615 U-DiT-L 400K 10.08 5.21 112.44 0.702 0.631 U-DiT-L 600K 8.71 5.17 122.45 0.705 0.645 U-DiT-L 800K 7.96 5.21 131.35 0.705 0.648 U-DiT-L 1M 7.54 5.27 135.49 0.706 0.659 Table 4: The performance of U-DiT-B and U-DiT-L models with respect to training iterations. The unconditional generation performance of both models on ImageNet 256\u00d7256 consistently improves as training goes on, where U-DiT-L at 600K steps strikingly beats DiT-XL/2 at 7M steps. ImageNet 256\u00d7256 Model FLOPs(G) FID\u2193 sFID\u2193 IS\u2191 Precision\u2191 Recall\u2191 Pixel Shuffle (PS) 0.89 96.15 23.90 13.93 0.272 0.389 Depthwise (DW) Conv. 
+ PS 0.91 89.87 20.99 14.92 0.288 0.419 DW Conv. || Shortcut + PS 0.91 89.43 21.36 15.13 0.291 0.436 Table 5: Ablations on the choice of downsampler. We have tried several downsampler designs, and it turns out that the parallel connection of a shortcut and a depthwise convolution is the best fit. We avoid using ordinary convolution (i.e. Conv.+PS) because channel-mixing is costly: conventional convolution-based downsamplers could double the amount of computation. The U-DiT with a conventional downsampler costs as many as 2.22G FLOPs in total. applying convolution is hardly affordable in terms of computation costs. Specifically, ordinary convolutions are costly as extensive dense connections on the channel dimension are involved: using convolution-based downsamplers could double computation costs. As a compromise, we apply depthwise convolution instead. We also add a shortcut that short-circuits this depthwise convolution, which has proved crucial for better performance. The shortcut adds negligible computation cost to the model, and in fact, it could be removed during the inference stage with re-parameterization tricks. The results are shown in Tab. 5. The contribution of each individual modification. In this part, we start from a plain U-Net-style DiT (DiT-UNet) and evaluate the contribution of individual components. Firstly, we inspect the advantage of downsampled self-attention. Recapping the toy experiment results in Sec. 3, replacing the full-scale self-attention with downsampled self-attention would result in an improvement in FID and 1/3 reduction in FLOPs. In order to evaluate the improvement of downsampling via model performance, we also design a slim version of DiT-UNet (i.e. DIT-UNet (Slim)). The DiT-UNet (Slim) serves as a full-scale self-attention baseline that spends approximately the same amount (\u223c0.9GFLOPs) of computation as our U-DiT. As shown in the upper part of Tab. 6, by comparing U-DiT against DiT-UNet (Slim), it turns out that downsampling tokens in DiT-UNet could bring a performance improvement of \u223c18FIDs. Next, we inspect other modifications that further refine U-DiTs (lower part of Tab. 6). Swin Transformer V2 [20] proposes a stronger variant of self-attention: instead of directly multiplying Q and K matrices, cosine similarities between queries and keys are used. We apply the design to our selfattention, which yields \u223c2.5FIDs of improvement. RoPE [30] is a powerful positional embedding method, which has been widely applied in Large Language Models. Following the latest diffusion transformer works [22; 8], we inject 2-dimensional RoPE (RoPE2D) into queries and keys right before self-attention. The introduction of RoPE2D improves performance by \u223c2.5FIDs. Some recent transformer works strengthen MLP by inserting a depthwise convolution layer between two linear mappings [34; 3; 38]. As the measure is proved effective in these works, we borrow it to our 7 \fFigure 5: Generated samples by U-DiT-L at 1M iterations. It is astonishing that U-DiT could achieve authentic visual quality at merely 1 Million training steps. Best viewed on screen. U-DiT model, improving \u223c5FIDs. As re-parametrization during training [12] could improve model performance, we apply the trick to FFN [31] and bring an additional improvement of \u223c3.5FIDs. Above all, based on the components mentioned above, the proposed U-DiTs could outcompete plain DiT-UNets and isotropic DiTs by large margins. 
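The downsampler selected in Tab. 5 ("DW Conv. || Shortcut + PS") can be sketched as below, assuming PyTorch: a 3x3 depthwise convolution in parallel with an identity shortcut, followed by a 2x pixel unshuffle that yields the four sub-features fed to downsampled attention. The `fold_shortcut` routine is our reading of the inference-time re-parameterization mentioned above (absorb the identity branch into the depthwise kernel) and has not been checked against the released code.

# Sketch of the "DW Conv. || Shortcut + PS" downsampler; illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Downsampler(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # depthwise: groups == channels, so no costly cross-channel mixing
        self.dw = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
        self.use_shortcut = True

    def forward(self, x):                            # x: (B, C, H, W)
        y = self.dw(x) + (x if self.use_shortcut else 0)
        return F.pixel_unshuffle(y, 2)               # (B, 4C, H/2, W/2): four 2x-downsampled features

    @torch.no_grad()
    def fold_shortcut(self):
        """Fold the identity branch into the 3x3 depthwise kernel (center tap += 1)."""
        self.dw.weight[:, 0, 1, 1] += 1.0
        self.use_shortcut = False


if __name__ == "__main__":
    m = Downsampler(64)
    x = torch.randn(2, 64, 32, 32)
    y1 = m(x)
    m.fold_shortcut()
    y2 = m(x)
    print(y1.shape, torch.allclose(y1, y2, atol=1e-5))   # torch.Size([2, 256, 16, 16]) True

The shortcut therefore costs nothing at inference time, which is consistent with the "negligible computation cost" remark above.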
ImageNet 256\u00d7256 Model FLOPs(G) FID\u2193 sFID\u2193 IS\u2191 Precision\u2191 Recall\u2191 DiT-UNet (Slim) 0.92 107.00 24.66 11.95 0.230 0.315 DiT-UNet 1.40 93.48 20.41 14.20 0.274 0.415 U-DiT-T (DiT-UNet+Downsampling) 0.91 89.43 21.36 15.13 0.291 0.436 U-DiT-T (+Cos.Sim.) 0.91 86.96 19.98 15.63 0.299 0.450 U-DiT-T (+RoPE2D) 0.91 84.64 19.38 16.19 0.306 0.454 U-DiT-T (+DWconv FFN) 0.95 79.30 17.84 17.48 0.326 0.494 U-DiT-T (+Re-param.) 0.95 75.71 16.27 18.59 0.336 0.512 Table 6: Ablations on U-DiT components. Apart from the toy example in Sec. 3, we further validate the effectiveness of downsampled by comparing the U-DiT with a slimmed version of DiT-UNet at equal FLOPs. Results reveal that downsampling could bring \u223c18FIDs on DiT-UNet. Further modifications on top of the U-DiT architecture could improve 2 to 5 FIDs each. 5", + "additional_graph_info": { + "graph": [ + [ + "Yuchuan Tian", + "Hanting Chen" + ], + [ + "Yuchuan Tian", + "Chao Xu" + ], + [ + "Yuchuan Tian", + "Jie Hu" + ], + [ + "Hanting Chen", + "Chao Xu" + ], + [ + "Hanting Chen", + "Chunjing Xu" + ], + [ + "Chao Xu", + "Jiazheng Xing" + ], + [ + "Chao Xu", + "Mingze Sun" + ], + [ + "Jie Hu", + "Piercarlo Bonifacio" + ] + ], + "node_feat": { + "Yuchuan Tian": [ + { + "url": "http://arxiv.org/abs/2405.02730v1", + "title": "U-DiTs: Downsample Tokens in U-Shaped Diffusion Transformers", + "abstract": "Diffusion Transformers (DiTs) introduce the transformer architecture to\ndiffusion tasks for latent-space image generation. With an isotropic\narchitecture that chains a series of transformer blocks, DiTs demonstrate\ncompetitive performance and good scalability; but meanwhile, the abandonment of\nU-Net by DiTs and their following improvements is worth rethinking. To this\nend, we conduct a simple toy experiment by comparing a U-Net architectured DiT\nwith an isotropic one. It turns out that the U-Net architecture only gain a\nslight advantage amid the U-Net inductive bias, indicating potential\nredundancies within the U-Net-style DiT. Inspired by the discovery that U-Net\nbackbone features are low-frequency-dominated, we perform token downsampling on\nthe query-key-value tuple for self-attention and bring further improvements\ndespite a considerable amount of reduction in computation. Based on\nself-attention with downsampled tokens, we propose a series of U-shaped DiTs\n(U-DiTs) in the paper and conduct extensive experiments to demonstrate the\nextraordinary performance of U-DiT models. The proposed U-DiT could outperform\nDiT-XL/2 with only 1/6 of its computation cost. Codes are available at\nhttps://github.com/YuchuanTian/U-DiT.", + "authors": "Yuchuan Tian, Zhijun Tu, Hanting Chen, Jie Hu, Chao Xu, Yunhe Wang", + "published": "2024-05-04", + "updated": "2024-05-04", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Thanks to the attention mechanism that establishes long-range spatial dependencies, Transformers [32] are proved highly effective on various vision tasks including image classification [13], object detection [5], segmentation [37], and image restoration [6]. DiTs [24] introduce full transformer backbones to diffusion, which demonstrate outstanding performance and scalability on image-space and latent-space generation tasks. Recent follow-up works have demonstrated the promising prospect of diffusion transformers by extending their applications to flexible-resolution image generation [22], realistic video generation [2], et cetera. 
Interestingly, DiTs have discarded the U-Net architecture [26] that is universally applied in manifold previous works, either in pixel [17; 11] or latent space [25]. The use of isotropic architectures in DiTs is indeed successful, as scaled-up DiT models achieve supreme performance. However, the abandonment of the widely-applied U-Net architecture by DiTs and their improvements [16; 8; 22] on latent-space image generation tasks triggers our curiosity, because the U-Net inductive bias is always believed to help denoising. Hence, we rethink deploying DiTs on a canonical U-Net architecture. In order to experiment with the combination of U-Net with DiT, we first propose a naive DiT in U-Net style (DiT-UNet) and compare it with an isotropic DiT of similar size. Results turn out that DiT-UNets are merely comparable to DiTs at similar computation costs. From this toy experiment, it \u2217Equal Contribution. \u2020Corresponding Author. Preprint. Under review. arXiv:2405.02730v1 [cs.CV] 4 May 2024 \f101 102 Transformer GFLOPs 10 20 30 40 50 60 70 FID-50K DiT SiT SiT-LLAMA U-DiT (Ours) Figure 1: Comparing U-DiTs with DiTs and their improvements. We plot FID-50K versus denoiser GFLOPs (in log scale) after 400K training steps. U-DiTs could achieve better performance than its counterparts. 200 400 600 800 Training Iterations (K) 0 10 20 30 40 50 60 FID-50K DiT-B/2 DiT-L/2 DiT-XL/2 U-DiT-B U-DiT-L Figure 2: The performance of U-DiTs and DiTs of various size. U-DiTs perform consistently better than DiTs with the increase of training steps. The marker size represents the computation cost of the model qualitatively. is inferred that the inductive bias of U-Net is not fully leveraged when U-Nets and plain transformer blocks are simply combined. Hence, we rethink the self-attention mechanism in DiT-UNet. The backbone in a latent U-Net denoiser provides a feature where low-frequency components dominate [27]. The discovery implies the existence of redundancies in backbone features: the attention module in the U-Net diffuser should highlight low-frequency domains. As previous theories praised downsampling for filtering high-frequency noises in diffusion [35], we seek to leverage this natural low-pass filter by performing token downsampling on the features for self-attention. Unlike previous transformer works [15; 38; 28] that downsample key-value pairs only, we radically downsample the query-key-value tuple altogether, such that self-attention is performed among downsampled latent tokens. It is surprising that when we incorporate self-attention with downsampled tokens into DiT-UNet, better results are achieved on latent U-Net diffusers with a significant reduction of computation. Based on this discovery, we scale U-Nets with downsampled self-attention up and propose a series of State-of-the-Art U-shaped Diffusion Transformers (U-DiTs). We conduct manifold experiments to verify the outstanding performance and scalability of our U-DiT models over isotropic DiTs. As shown in Fig. 1 & Fig. 2, U-DiTs could outperform DiTs by large margins. Amazingly, the proposed U-DiT model could perform better than DiT-XL/2 which is 6 times larger in terms of FLOPs. 2 Preliminaries Vision Transformers. ViTs [13] have introduced a transformer backbone to vision tasks by patchifying the input and viewing an image as a sequence of patch tokens and have proved its effectiveness on large-scale image classification tasks. 
While ViTs adopt an isotropic architecture, some following works on vision transformers [33; 21] propose a pyramid-like hierarchical architecture that gradually downsamples the feature. The pyramid architecture has proved highly effective in classification and other downstream tasks. Vision transformers are also mainstream backbones for denoising models. IPT [6] introduces an isotropic transformer backbone for denoising and other low-level tasks. Some later works [19; 18; 7] follow the isotropic convention, but other denoising works [34; 36] shift to U-Net backbones as their design. The pioneering work of U-ViT [1] and DiT [24] introduces full-transformer backbones to diffusion as denoisers. Recent Advancements in Diffusion Transformers. Following DiTs, some works investigate the training and diffusion [14; 23] strategies of Diffusion Transformers. Other works focus on the design of the DiT backbone. DiffiT [16] introduces a new fusion method for conditions; FiT [22] and VisionLLaMA [8] strengthens DiT by introducing LLM tricks including RoPE2D [30] and SwishGLU. These transformer-based diffusion works agree on adopting isotropic architectures on latents, i.e. the latent feature space is not downsampled throughout the whole diffusion model. The authors of DiT [24] even regard the inductive bias of U-Net as \u201cnot crucial\u201d. 2 \fNoised Latent 32\u00d732\u00d74 Embed Transformer Block Transformer Block Transformer Block Layer Norm Linear and Reshape Noise 32\u00d732\u00d74 \u2211 32\u00d732\u00d74 Transformer Block .... (a) DiT Noised Latent 32\u00d732\u00d74 Embed Transformer Block Transformer Block Layer Norm Linear and Reshape Noise 32\u00d732\u00d74 \u2211 32\u00d732\u00d74 Transformer Block Transformer Block Transformer Block (b) DiT-UNet Layer Norm MHSA Layer Norm FFN Noised Latent 32\u00d732\u00d74 Transformer Block Transformer Block Layer Norm Linear and Reshape Noise 32\u00d732\u00d74 \u2211 32\u00d732\u00d74 Transformer Block Transformer Block Transformer Block (c) U-DiT (Ours) Layer Norm MHSA Layer Norm Embed Downsampler FFN Figure 3: The evolution from the DiT to the proposed U-DiT. Left (a): the original DiT, which uses an isotropic architecture. Middle (b): DiT-UNet, which is a plain U-Net-style DiT. We try this as a simple combination of DiT and U-Net in the toy experiment. Right (c): the proposed U-DiT. We propose to downsample the input features for self-attention. The downsampling operation could amazingly improve DiT-UNet with a huge cut on the amount of computation. U-Nets for Diffusion. From canonical works [17; 29; 11; 25], the design philosophy of U-Net [26] is generally accepted in diffusion. Specifically, Stable Diffusion [25] uses a U-Net-based denoiser on the compressed latent space for high-resolution image synthesis, which is highly successful in manifold generative tasks. Some previous trials on diffusion transformers [4; 16; 9] also adopt U-Net on pixel-space generation tasks; but strangely, they shifted to isotropic DiT-like structures for latent-space diffusion. Despite its popularity in pixel-space diffusion, the U-Net architecture is not widely accepted in recent transformer-oriented works on latent-space diffusion. Motivated by this, we are dedicated to investigating the potential of Transformer-backboned U-Net on latent-space diffusion. 
3 Investigating U-Net DiTs in Latent As is recapped, the U-Net architecture is widely adopted in diffusion applications; theoretical evaluations on U-Net denoisers also reveal their advantage, as downsampling U-Net stage transitions could filter noises that dominate high frequencies [35]. The unprecedented desertion of isotropic architectures for latent diffusion transformers is thus counter-intuitive. We are rethinking and elucidating the potentials of transformer-backboned U-Net denoisers in latent diffusion via a toy experiment. A canonical U-Net-style DiT. To start with, we propose a naive Transformer-backboned U-Net denoiser named DiT-UNet by embedding DiT blocks into a canonical U-Net architecture. Following previous U-Net designs, The DiT-UNet consists of an encoder and a decoder with an equal number of stages. When the encoder processes the input image by downsampling the image as stage-level amounts, the decoder scales up the encoded image from the most compressed stage to input size. At each encoder stage transition, spatial downsampling by the factor of 2 is performed while the feature dimension is doubled as well. Skip connections are provided at each stage transition. The skipped feature is concatenated and fused with the upsampled output from the previous decoder stage, replenishing information loss to decoders brought by feature downsampling. Considering the small, cramped latent space (32\u00d7 32 for 256\u00d7256-sized generation), we designate 3 stages in total, i.e. the feature is downsampled two times and subsequently recovered to its original size. In order to fit time and condition embeddings for various feature dimensions across multiscale stages, we use independent embedders for respective stages. In addition, we avoid patchifying the latent, as the U-Net architecture itself downsamples the latent space and there is no need for further spatial compression. 3 \fVia toy experiments, we compare the proposed U-Net-style DiT with the original DiT that adopts an isotropic architecture. In order to align the model with the DiT design, we repeatedly use plain DiT blocks in each stage. Each DiT block includes a self-attention module as the token mixer and a two-layer feed-forward network as the channel mixer. We conduct the experiment by training the U-Net-Style DiT for 400K iterations and compare it with DiT-S/4 which is comparable in size. All training hyperparameters are kept unchanged. It occurs that the U-Net style DiT only gains a limited advantage over the original isotropic DiT. The inductive bias of U-Net is insufficiently utilized. ImageNet 256\u00d7256 Model GFLOPs FID\u2193 sFID\u2193 IS\u2191 Precision\u2191 Recall\u2191 DiT-S/4 1.41 97.85 21.19 13.27 0.26 0.41 DiT-UNet 1.40 93.48 20.41 14.20 0.27 0.42 + Token Downsampling 0.90 89.43 21.36 15.13 0.29 0.44 Table 1: Toy experiments on U-Net-style DiTs. The naive DiT-UNet performs slightly better than the isotropic DiT-S/4; but interestingly, when we apply token downsampling for self-attention, the DiT-UNet performs better with fewer costs. Improved U-Net-style DiT via token downsampling. In seeking to incorporate attention in transformers to diffusion U-Nets better, we review the role of the U-Net backbone as the diffusion denoiser. A recent work on latent diffusion models [27] conducted frequency analysis on intermediate features from the U-Net backbone, and concluded that energy concentrates at the low-frequency domain. 
This frequency-domain discovery hints at potential redundancies in the backbone: the U-Net backbone should highlight the coarse object from a global perspective rather than the high-frequency details. Naturally, we resort to attention with downsampled tokens. The operation of downsampling is a natural low-pass filter that discards high-frequency components. The low-pass feature of downsampling has been investigated under the diffusion scenario, which concludes that downsampling helps denoisers in diffusion as it automatically \u201cdiscards those higher-frequency subspaces which are dominated by noise\u201d [35]. Hence, we opt to downsample tokens for attention. In fact, attention to downsampled tokens is not new. Previous works regarding vision transformers [15; 38] have proposed methods to downsample key-value pairs for computation cost reduction. Recent work on training-free acceleration of diffusion [28] also applies key-value downsampling on Stable Diffusion models. But these works maintain the number of queries, and thus the downsampling operation is not completely performed. Besides, these downsampling measures usually involves a reduction of tensor size, which could result in a significant loss in information. Different from these works, we propose a simple yet radical token downsampling method for DiTUNets: we downsample queries, keys, and values at the same time for diffusion-friendly self-attention, but meanwhile we keep the overall tensor size to avoid information loss. The procedure is detailed as follows: the feature-map input is first converted into four 2\u00d7 downsampled features by the downsampler (the downsampler design is detailed in Sec. 4.2). Then, the downsampled features are mapped to Q, K, V for self-attention. Self-attention is performed within each downsampled feature. After the attention operation, the downsampled tokens are spatially merged as a unity to recover the original number of tokens. Notably, the feature dimension is kept intact during the whole process. Unlike U-Net downsampling, we are not reducing or increasing the number of elements in the feature during the downsampling process. Rather, we send four downsampled tokens into self-attention in a parallel manner. Self-attention with downsampled tokens does help DiT-UNets on the task of latent diffusion. As shown in Tab. 1, the substitution of downsampled self-attention to full-scale self-attention brings slight improvement in the Fr\u00e9chet Inception Distance (FID) metric despite a significant reduction in FLOPs. Complexity analysis. Apart from the performance benefits, we are aware that downsampled selfattention could save as much as 1/3 of the overall computation cost compared to full-scale selfattention. We conduct a brief computation complexity analysis on the self-attention mechanism to explain where the savings come from. Given an input feature of size N \u00d7 N and dimension d, we denote Q, K, V \u2208RN 2\u00d7d as mapped query-key-value tuples. The complexity of self-attention is analyzed as: 4 \fX = AV |{z} O(N 4D) s.t. A = Softmax \u0000QKT \u0001 | {z } O(N 4D) . In the proposed self-attention on downsampled tokens, four sets of downsampled query-key-value tuples 4\u00d7(Q\u21932, K\u21932, V\u21932) \u2208R( N 2 )2\u00d7d performs self-attention respectively. While each self-attention operation costs only 1/16 of full-scale self-attention, the total cost for downsampled self-attention is 1/4 of full-scale self-attention. 
3/4 of the computation costs by self-attention is saved via token downsampling. In a nutshell, we show from toy experiments that the redundancy of DiT-UNet is reduced by downsampling the tokens for self-attention. 4 Scaling the Model Up Based on the discovery in our toy experiment, we propose a series of U-shaped DiTs (U-DiT) by applying the downsampled self-attention (proposed in Sec. 3) and scaling U-Net-Style DiT up. Settings. We adopt the training setting of DiT. The same VAE (i.e. sd-vae-ft-ema) for latent diffusion models [25] and the AdamW optimizer is adopted. The training hyperparameters are kept unchanged, including global batch size 256, learning rate 1e \u22124, weight decay 0, and global seed 0. The training is conducted with the training set of ImageNet 2012 [10]. Apart from the self-attention on downsampling as introduced in the toy experiment (Section 3), we further introduce a series of modifications to U-DiTs, including cosine similarity attention [20; 18], RoPE2D [30; 22; 8], depthwise conv FFN [34; 3; 38], and re-parametrization [12; 31]. The contribution of each modification is quantitatively evaluated in Sec. 6. 4.1 U-DiT at Larger Scales ImageNet 256\u00d7256 Model FLOPs(G) FID\u2193 sFID\u2193 IS\u2191 Precision\u2191 Recall\u2191 DiT-S/2 [24] 6.06 68.40 DiT-S/2\u2217 6.07 67.40 11.93 20.44 0.368 0.559 U-DiT-S (Ours) 6.04 31.51 8.97 51.62 0.543 0.633 DiT-L/4 [24] 19.70 45.64 DiT-L/4\u2217 19.70 46.10 9.17 31.05 0.472 0.612 DiT-B/2 [24] 23.01 43.47 DiT-B/2\u2217 23.02 42.84 8.24 33.66 0.491 0.629 U-DiT-B (Ours) 22.22 16.64 6.33 85.15 0.642 0.639 DiT-L/2 [24] 80.71 23.33 DiT-L/2\u2217 80.75 23.27 6.35 59.63 0.611 0.635 DiT-XL/2 [24] 118.64 19.47 DiT-XL/2\u2217 118.68 20.05 6.25 66.74 0.632 0.629 U-DiT-L (Ours) 85.00 10.08 5.21 112.44 0.702 0.631 Table 2: Comparing U-DiTs against DiTs on ImageNet 256\u00d7256 generation. Experiments with a supermark \u2217are replicated according to the official code of DiT. We compare models trained for 400K iterations with the standard training hyperparameters of DiT. The performance of U-DiTs is outstanding: U-DiT-B could beat DiT-XL/2 with only 1/6 of inference FLOPs; U-DiT-L could outcompete DiT-XL/2 by 10 FIDs. Comparison with DiTs and their improvements. In order to validate the effectiveness of the proposed U-DiT models beyond simple toy experiments, we scale them up and compare them with DiTs [24] of larger sizes. For a fair comparison, we use the same sets of training hyperparameters as DiT; all models are trained for 400K iterations. The results on ImageNet 256\u00d7256 are shown in Tab. 2, where we scale U-DiTs to \u223c6e9, \u223c20e9, \u223c80e9 FLOPs respectively and compare them with DiTs of similar computation costs. 5 \fIt could be concluded from Tab. 2 that all U-DiT models could outcompete their isotropic counterparts by considerable margins. Specifically, U-DiT-S and U-DiT-B could outperform DiTs of comparable size by \u223c30 FIDs; U-DiT-L could outperform DiT-XL/2 by \u223c10 FIDs. It is shocking that U-DiT-B could outcompete DiT-XL/2 with only 1/6 of the computation costs. To present the advantage of our method better, we also include the performance of U-DiTs in an FID-50K versus FLOPs plot (Fig. 1). Apart from DiTs and U-DiTs, we also include other state-of-the-art methods: SiT [23] that proposes an interpolant framework for DiTs, and SiT-LLaMA [8] that combines state-of-the-art DiT backbone VisionLLaMA and SiT. The advantages of U-DiTs over other baselines are prominent in the plot. 
The results highlight the extraordinary scalability of the proposed U-DiT models. U-DiTs are also performant in generation scenarios with classifier-free guidance. In Tab. 3, we compare U-DiTs with DiTs at cfg = 1.5. For a fair comparison, we train U-DiTs and DiTs for 400K iterations under identical settings. ImageNet 256\u00d7256 Model Cfg-Scale FLOPs(G) FID\u2193 sFID\u2193 IS\u2191 Precision\u2191 Recall\u2191 DiT-L/2\u2217 1.5 80.75 7.53 4.78 134.69 0.780 0.532 DiT-XL/2\u2217 1.5 118.68 6.24 4.66 150.10 0.794 0.514 U-DiT-B 1.5 22.22 4.26 4.74 199.18 0.825 0.507 U-DiT-L 1.5 85.00 3.37 4.49 246.03 0.862 0.502 Table 3: Generation performance with classifier-free guidance. We measure the performance of U-DiTs and DiTs at 400K training steps with cfg = 1.5. Experiments with a supermark \u2217are replicated according to the official code of DiT. U-DiTs are also performant on conditional generation. Extended training steps. We evacuate the potentials of U-DiTs by extending training steps to 1 Million. Fig. 2 further demonstrate that the advantage of U-DiTs is consistent at all training steps. As training steps gradually goes up to 1 Million, the performance of U-DiTs is improving (Tab. 4). We visualize the process where the image quality is gradually getting better (Fig. 4). Notably, U-DiT-L at only 600K training steps could outperform DiT-XL/2 at 7M training steps without classifier-free guidance. As additionally shown in Fig. 5, U-DiT models could conditionally generate authentic images at merely 1M iterations. U-DiT-B U-DiT-L 200K 400K 600K 800K 200K 400K 600K 800K Figure 4: Quality improvements of generated samples as training continues. We sample from U-DiT models trained for different numbers of iterations on ImageNet 256\u00d7256. More training does improve generation quality. Best viewed on screen. 4.2 Ablations The design of downsampler. The downsampling operation in the proposed U-DiT transforms a complete feature into multiple spatially downsampled features. Based on previous wisdom, we figured out that previous works either directly perform pixel shuffling, or apply a convolution layer before pixel shuffling. While we hold that it is much too rigid to shuffle pixels directly as downsampling, 6 \fImageNet 256\u00d7256 Model Training Steps FID\u2193 sFID\u2193 IS\u2191 Precision\u2191 Recall\u2191 DiT-XL/2 7M 9.62 U-DiT-B 200K 23.23 6.84 64.42 0.610 0.621 U-DiT-B 400K 16.64 6.33 85.15 0.642 0.639 U-DiT-B 600K 14.51 6.30 94.56 0.652 0.643 U-DiT-B 800K 13.53 6.27 98.99 0.654 0.645 U-DiT-B 1M 12.87 6.33 103.79 0.661 0.653 U-DiT-L 200K 15.26 5.60 86.01 0.685 0.615 U-DiT-L 400K 10.08 5.21 112.44 0.702 0.631 U-DiT-L 600K 8.71 5.17 122.45 0.705 0.645 U-DiT-L 800K 7.96 5.21 131.35 0.705 0.648 U-DiT-L 1M 7.54 5.27 135.49 0.706 0.659 Table 4: The performance of U-DiT-B and U-DiT-L models with respect to training iterations. The unconditional generation performance of both models on ImageNet 256\u00d7256 consistently improves as training goes on, where U-DiT-L at 600K steps strikingly beats DiT-XL/2 at 7M steps. ImageNet 256\u00d7256 Model FLOPs(G) FID\u2193 sFID\u2193 IS\u2191 Precision\u2191 Recall\u2191 Pixel Shuffle (PS) 0.89 96.15 23.90 13.93 0.272 0.389 Depthwise (DW) Conv. + PS 0.91 89.87 20.99 14.92 0.288 0.419 DW Conv. || Shortcut + PS 0.91 89.43 21.36 15.13 0.291 0.436 Table 5: Ablations on the choice of downsampler. We have tried several downsampler designs, and it turns out that the parallel connection of a shortcut and a depthwise convolution is the best fit. 
We avoid using ordinary convolution (i.e. Conv.+PS) because channel-mixing is costly: conventional convolution-based downsamplers could double the amount of computation. The U-DiT with a conventional downsampler costs as many as 2.22G FLOPs in total. applying convolution is hardly affordable in terms of computation costs. Specifically, ordinary convolutions are costly as extensive dense connections on the channel dimension are involved: using convolution-based downsamplers could double computation costs. As a compromise, we apply depthwise convolution instead. We also add a shortcut that short-circuits this depthwise convolution, which has proved crucial for better performance. The shortcut adds negligible computation cost to the model, and in fact, it could be removed during the inference stage with re-parameterization tricks. The results are shown in Tab. 5. The contribution of each individual modification. In this part, we start from a plain U-Net-style DiT (DiT-UNet) and evaluate the contribution of individual components. Firstly, we inspect the advantage of downsampled self-attention. Recapping the toy experiment results in Sec. 3, replacing the full-scale self-attention with downsampled self-attention would result in an improvement in FID and 1/3 reduction in FLOPs. In order to evaluate the improvement of downsampling via model performance, we also design a slim version of DiT-UNet (i.e. DIT-UNet (Slim)). The DiT-UNet (Slim) serves as a full-scale self-attention baseline that spends approximately the same amount (\u223c0.9GFLOPs) of computation as our U-DiT. As shown in the upper part of Tab. 6, by comparing U-DiT against DiT-UNet (Slim), it turns out that downsampling tokens in DiT-UNet could bring a performance improvement of \u223c18FIDs. Next, we inspect other modifications that further refine U-DiTs (lower part of Tab. 6). Swin Transformer V2 [20] proposes a stronger variant of self-attention: instead of directly multiplying Q and K matrices, cosine similarities between queries and keys are used. We apply the design to our selfattention, which yields \u223c2.5FIDs of improvement. RoPE [30] is a powerful positional embedding method, which has been widely applied in Large Language Models. Following the latest diffusion transformer works [22; 8], we inject 2-dimensional RoPE (RoPE2D) into queries and keys right before self-attention. The introduction of RoPE2D improves performance by \u223c2.5FIDs. Some recent transformer works strengthen MLP by inserting a depthwise convolution layer between two linear mappings [34; 3; 38]. As the measure is proved effective in these works, we borrow it to our 7 \fFigure 5: Generated samples by U-DiT-L at 1M iterations. It is astonishing that U-DiT could achieve authentic visual quality at merely 1 Million training steps. Best viewed on screen. U-DiT model, improving \u223c5FIDs. As re-parametrization during training [12] could improve model performance, we apply the trick to FFN [31] and bring an additional improvement of \u223c3.5FIDs. Above all, based on the components mentioned above, the proposed U-DiTs could outcompete plain DiT-UNets and isotropic DiTs by large margins. ImageNet 256\u00d7256 Model FLOPs(G) FID\u2193 sFID\u2193 IS\u2191 Precision\u2191 Recall\u2191 DiT-UNet (Slim) 0.92 107.00 24.66 11.95 0.230 0.315 DiT-UNet 1.40 93.48 20.41 14.20 0.274 0.415 U-DiT-T (DiT-UNet+Downsampling) 0.91 89.43 21.36 15.13 0.291 0.436 U-DiT-T (+Cos.Sim.) 
0.91 86.96 19.98 15.63 0.299 0.450 U-DiT-T (+RoPE2D) 0.91 84.64 19.38 16.19 0.306 0.454 U-DiT-T (+DWconv FFN) 0.95 79.30 17.84 17.48 0.326 0.494 U-DiT-T (+Re-param.) 0.95 75.71 16.27 18.59 0.336 0.512 Table 6: Ablations on U-DiT components. Apart from the toy example in Sec. 3, we further validate the effectiveness of downsampled by comparing the U-DiT with a slimmed version of DiT-UNet at equal FLOPs. Results reveal that downsampling could bring \u223c18FIDs on DiT-UNet. Further modifications on top of the U-DiT architecture could improve 2 to 5 FIDs each. 5" + }, + { + "url": "http://arxiv.org/abs/2311.17493v1", + "title": "Towards Higher Ranks via Adversarial Weight Pruning", + "abstract": "Convolutional Neural Networks (CNNs) are hard to deploy on edge devices due\nto its high computation and storage complexities. As a common practice for\nmodel compression, network pruning consists of two major categories:\nunstructured and structured pruning, where unstructured pruning constantly\nperforms better. However, unstructured pruning presents a structured pattern at\nhigh pruning rates, which limits its performance. To this end, we propose a\nRank-based PruninG (RPG) method to maintain the ranks of sparse weights in an\nadversarial manner. In each step, we minimize the low-rank approximation error\nfor the weight matrices using singular value decomposition, and maximize their\ndistance by pushing the weight matrices away from its low rank approximation.\nThis rank-based optimization objective guides sparse weights towards a\nhigh-rank topology. The proposed method is conducted in a gradual pruning\nfashion to stabilize the change of rank during training. Experimental results\non various datasets and different tasks demonstrate the effectiveness of our\nalgorithm in high sparsity. The proposed RPG outperforms the state-of-the-art\nperformance by 1.13% top-1 accuracy on ImageNet in ResNet-50 with 98% sparsity.\nThe codes are available at\nhttps://github.com/huawei-noah/Efficient-Computing/tree/master/Pruning/RPG and\nhttps://gitee.com/mindspore/models/tree/master/research/cv/RPG.", + "authors": "Yuchuan Tian, Hanting Chen, Tianyu Guo, Chao Xu, Yunhe Wang", + "published": "2023-11-29", + "updated": "2023-11-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction As Convolutional Neural Networks (CNNs) are adapted to various tasks at better performance, their sizes also explode accordingly. From shallow CNNs like LeNet [30], larger CNNs like AlexNet [27], to deeper modern CNNs like ResNets [22] and DenseNets [25], CNNs are growing larger for more complex tasks and representations, including large-scale image classification and downstream tasks like object detection [42], segmentation [21], etc. The evolution of CNN gives rise to various realworld applications, such as autonomous driving [2], camera image processing [3], optical character recognition [46], and facial recognition [50]. However, it is difficult to deploy large CNN models on mobile devices since they require heavy storage and computation. For example, deploying a ResNet-50 [22] model costs 8.2G FLOPs for processing a single image with 224 \u00d7 224 size, which is unaffordable for edge devices with limited computing power such as cellphones and drones. In order to compress heavy deep models, various methodologies have been proposed, including Weight quantization [19; 56], knowledge distillation [24; 44], and network pruning. 
Network pruning prunes the redundant weights in convolutional neural networks to shrink models. Weight (or unstructured) pruning [20] and filter (or structured) pruning [32] are two main pathways to prune CNNs. Weight \u2217Corresponding Author. 37th Conference on Neural Information Processing Systems (NeurIPS 2023). arXiv:2311.17493v1 [cs.CV] 29 Nov 2023 \fpruning sparsifies dense kernel weight tensors in convolutional layers in an unstructured manner including iterative pruning [20], gradual pruning [60; 17], and iterative rewinding [15; 16; 43]. Some other works [31; 53; 48] propose gradient or hessian based weight saliencies that proved effective in certain scenarios. Filter pruning [37; 23; 35; 36; 51] prunes filters in convolutional layers as a whole, reducing the redundant width of network layers. 0.95 0.96 0.97 0.98 0.99 1.00 Sparsity 0 20 40 60 Avg. Rank Oneshot RigL AC/DC RPG(Ours) Figure 1. Average weight matrix rank of ResNet32 [22] pruning baselines versus sparsity. Our rankbased method is effective in maintaining weight ranks at high sparsities compared with baselines. Although structured pruning algorithms can be well supported by existing hardwares and bring large runtime acceleration benefits, their performance is much lower than that of unstructured pruning. For example, SOTA unstructured pruning methods could achieve 80% sparsity on ResNet-50 with little performance drop [48; 45] while structured pruning could only reach less than 50% [49], since filter pruning is a subset of weight pruning by further imposing structural constraints. However, under circumstances of high sparsities, we observe that unstructured pruning partially degrade to structured pruning. When weights are with a large proportion of zeros, it is highly likely that a structured pattern appears, where a whole channel or filter is almost completely pruned. Therefore, existing weight pruning methods usually meet dramatic performance decay at high sparsities. \u2248 \u2248 \u2248 \u00d7 \u00d7 Min-max optimization Sparse weights Low-rank approximation \u00b7\u00b7\u00b7 \u00d7 Figure 2. An illustrative diagram of our Rank-based Pruning (RPG) method. Inspired by the comparison of the two pruning categories, we propose to reduce structural patterns in weight pruning. Structured pruning is factually a reduction of weight rank in deep Convnets. Thus, rank could be adopted as a metric for evaluating the \"structuredness\" of unstructured sparse weights: a sparse weight is considered highly structured if it possesses low rank. To keep unstructured pruning from being too structured, we hope to maintain weight ranks under high sparsities in pruning. Based on the goal of rank improvement, we propose an adversarial Rank-based PruninG (RPG) approach for unstructured pruning. First, we find a lowrank approximation of the weight by minimizing the approximation error. The best low-rank approximation is found via singular value decomposition. Second, to enhance weight ranks, we maximize the distance between the weight and its low-rank counterpart to increase weight rank. This adversarial rank-based optimization objective guides sparse weights towards a highrank topology. The proposed method is conducted in a gradual pruning fashion to stabilize the change of rank during training. The advantage of the proposed RPG method is evaluated through extensive experiments on image classification and downstream tasks, and Figure 1 demonstrates that our method gains a matrix rank advantage compared to baselines. 
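To make the rank metric behind Figure 1 concrete, the sketch below approximates the average rank of magnitude-pruned weight matrices: each convolutional or linear weight is reshaped to 2-D, pruned layer-wise at a given sparsity, and its rank is counted as the number of non-negligible singular values. The relative tolerance, the layer-wise (rather than global) pruning, and the use of a torchvision ResNet-18 as a stand-in for ResNet-32 are illustrative assumptions, not the paper's evaluation code.

```python
import torch

def average_weight_rank(model: torch.nn.Module, sparsity: float, tol: float = 1e-3) -> float:
    """Approximate the average rank of magnitude-pruned conv/linear weight matrices."""
    ranks = []
    for module in model.modules():
        if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
            w2d = module.weight.detach().flatten(1)        # (out_channels, rest)
            k = int(w2d.numel() * sparsity)                # number of weights to prune
            if k > 0:
                thresh = w2d.abs().flatten().kthvalue(k).values
                w2d = torch.where(w2d.abs() > thresh, w2d, torch.zeros_like(w2d))
            s = torch.linalg.svdvals(w2d)                  # singular values, descending
            ranks.append(int((s > tol * s[0]).sum()))      # count non-negligible ones
    return sum(ranks) / len(ranks)

if __name__ == "__main__":
    import torchvision
    model = torchvision.models.resnet18()                  # stand-in for ResNet-32
    for sp in (0.95, 0.98, 0.99):
        print(sp, average_weight_rank(model, sp))
```

Plotting the returned averages against sparsity gives a curve in the spirit of Figure 1.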
2 Maintaining Rank via Adversarial Pruning 2.1 Problem Formulation In conventional terms of supervised neural network learning, given a target loss function L, neural network weight W, and input output pairs X = {xi}i=1...n, Y = {yi}i=1...n, weight training of a 2 \fneural network W is formulated as: arg min W L(Y, WX), (2.1) Weight pruning limits the total number of non-zero weights in weight W; or mathematically, weight pruning imposes a l0-norm constraint on neural network learning. Given sparsity budget c, the constraint is described as: \u2225W\u22250 \u2264c, (2.2) A common practice is to reparameterize weight W with the Hadamard elementwise product of a weight tensor W and a binary mask M. The binary mask M has the same shape as W, and each element in M represents whether its corresponding parameter in W is pruned. After reparametrization, the weight pruning problem is then formulated as: arg min W \u2299M L(Y, (W \u2299M)X) s.t. \u2225M\u22250 \u2264c. (2.3) In Equation (2.3), \u2299is the Hadamard elementwise product of matrices. At high sparsities in unstructured pruning, the rank of sparse networks could decrease substantially. In the following sections, we will demonstrate the problem and propose a solution to maintain sparse weight ranks. 2.2 Analyzing Weight Pruning in High Sparsity Unstructured and structured pruning are two major pruning methodologies. In unstructured pruning practices, weight tensors of CNNs are pruned in a fine-grained manner: each and every solitary weight parameters could be turned off (i.e. set to zero) within the network, but the whole weight tensor structure is left unchanged. In contrast, structured pruning focuses on the pruning of filters: filters are cut-off as the smallest prunable unit in the pruning process. Comparing the two pruning paradigms under the same sparsity budget, Zhu and Gupta [60] illustrate that unstructured pruning performs much better than structured pruning under the same pruning budget. This phenomenon could be explained from the perspective of matrix ranks. In fact, structured pruning is a direct rank reduce imposed on weight matrices, which means filter pruning is basically weight pruning with low rank. The rank of a matrix represents the upper bound of the amount of information contained in the matrix. A powerful network should be rich in information, and we hope features of the sparse network could have high ranks. Feature ranks is closely related to ranks of sparse weight matrices because of the formula below that describes the relationship of ranks in matrix multiplication: Rank (Y ) = Rank (WX) \u2264min \u0000Rank (W) , Rank (X) \u0001 . (2.4) According to Equation (2.4), when filter pruning is applied on weight W that directly impacts its rank, the rank of the output feature will also degrade, causing dramatic loss in information richness. On the other hand, unstructured pruning is free from the structural constraint of filter pruning, and thus maintain more amount of information. However, under circumstances of high sparsities, we observe that unstructured pruning partially degrades to structured pruning. When weights are filled with a large proportion of zeros, it is very probably that some filters or channels are almost entirely turned-off: \"quasi-structured\" sparse weight pattern is then formed. A baseline evaluation of matrix ranks in Figure 1 illustrates this concern. Therefore, existing weight pruning methods usually meet dramatic performance decay at high sparsities. 
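As one concrete reading of the formulation in Eq. (2.3), the sketch below reparameterizes a layer's effective weight as W ⊙ M and builds the binary mask under a global l0 budget by magnitude. The MaskedLinear module and the threshold helper are hypothetical names used only for illustration; this is not the RPG training code.

```python
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    """Linear layer whose effective weight is W ⊙ M, as in Eq. (2.3)."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_features))
        # binary mask M, stored as a buffer so it is not trained by gradient descent
        self.register_buffer("mask", torch.ones_like(self.weight))

    def forward(self, x):
        return nn.functional.linear(x, self.weight * self.mask, self.bias)

@torch.no_grad()
def set_global_magnitude_mask(layers, sparsity: float):
    """Keep the largest-magnitude weights globally so that ||M||_0 <= c."""
    all_scores = torch.cat([l.weight.abs().flatten() for l in layers])
    k = int(all_scores.numel() * sparsity)                 # number of weights to prune
    threshold = all_scores.kthvalue(k).values if k > 0 else all_scores.min() - 1
    for l in layers:
        l.mask.copy_((l.weight.abs() > threshold).float())

layers = [MaskedLinear(64, 64) for _ in range(3)]
set_global_magnitude_mask(layers, sparsity=0.98)
print([float(l.mask.mean()) for l in layers])              # roughly 0.02 kept per layer
```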
Inspired by the properties of the two categories of pruning, we propose to reduce the structured pattern in unstructured pruning, and therefore to maintain weight ranks under high sparsities. 2.3 Low Rank Approximation through SVD Now that weight ranks are important in weight pruning, we need a practical way to compute ranks in the context of deep neural networks. Previous deep learning works on ranks apply matrix rank theories to CNN low-rank tensor decomposition. In these works, low-rank approximations are proposed to fit 3 \fweight tensors. Denil et al. [10] decomposites W \u2208Rm\u00d7n into the multiplication of U \u2208Rm\u00d7r and V \u2208Rr\u00d7n where r is much smaller than m and n. UV provides a low-rank approximation (rank bounded by r) of weight W. Denton et al. [11] uses the sum of k rank-one approximations to provide the k-rank approximation of a feature tensor. Zhang et al. [57] multiplies a low-rank matrix M to weight matrix W, and solves the low-rank M with Singular Value Decomposition (SVD). In modern works [41; 7], low-rank approximations are widely studied as well. Since the weight values are always discrete, as an alternative solution and inspired by low-rank approximation works, we converge to an approximated rank rather than compute a precise rank solution. Hence, we define the approximated rank as following: Definition 1 (\u03b4-rank of a matrix). Given a matrix W and a small error tolarance \u03b4 > 0, the \u03b4-rank of W is defined as the smallest positive integer k such that there exist a k-rank matrix, whose l2 distance to W is smaller than \u03b4. In previous works, ranks are evaluated via singular values computed from Singular Value Decomposition (SVD). Zhang et al. [57] uses the sum of the top-k PCA eigenvalues to approximate ranks of layer responses; Lin et al. [33] defines rank as the number of non-negligible singular values and does SVD analysis on feature maps; Shu et al. [47] performs SVD on attention maps and augment model performance by keeping a fatter tail in the singular value distribution. These discoveries all acknowledge that singular values from SVD estimates ranks of matrices. We also leverage SVD to compute \u03b4-rank as defined in Definition 1. First, we illustrate that SVD could generate the best low-rank approximation: Theorem 1 (The best low-rank approximation). Suppose W is decomposed via SVD and yield W = Pr i=1 \u03c3iuivT i where singular values {\u03c3i} are sorted in descending order. Given integer k < r, the best k-rank approximation of W, namely the k-rank matrix that has the smallest l2 distance to W is f W = k X i=1 \u03c3iuivT i . The proof of Theorem 1 will be shown in Appendix. Since SVD could yield the best low-rank approximation, we could use this property to solve \u03b4-rank defined in Definition 1. Given weight matrix W, we search for the smallest k such that the l2 approximation error of best k-rank approximation f W as formulated in Theorem 1 is below the error tolerance \u03b4. In this way, we are able to solve the rank of W. 2.4 Adversarial Optimization for Rank Maintenance Equipped with the method for matrix rank computation, we hope to formulate a target loss function according to this heuristic such that optimization of the loss could maintain weight ranks. In contrast to low-rank approximations, high-rank matrices should be hard for low-rank matrices to approximate. Assume S is the set of all low-rank matrices, W should keep its distance away from this set S to increase its rank. 
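Definition 1 and Theorem 1 translate directly into a small numerical routine: the best k-rank approximation is the truncated SVD, and the δ-rank is the smallest k whose approximation error falls below the tolerance. The sketch below is one straightforward realization, assuming the Frobenius norm as the l2 distance and a placeholder tolerance.

```python
import torch

def best_k_rank_approx(W: torch.Tensor, k: int) -> torch.Tensor:
    """Theorem 1: the k-rank matrix closest to W in l2 is the k-truncated SVD."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)    # singular values in descending order
    return U[:, :k] @ torch.diag(S[:k]) @ Vh[:k, :]

def delta_rank(W: torch.Tensor, delta: float) -> int:
    """Definition 1: smallest k such that some k-rank matrix lies within delta of W."""
    for k in range(1, min(W.shape) + 1):
        if torch.linalg.matrix_norm(W - best_k_rank_approx(W, k)) <= delta:
            return k
    return min(W.shape)

W_full = torch.randn(64, 256)
W_low = torch.randn(64, 4) @ torch.randn(4, 256)           # a genuinely rank-4 matrix
print(delta_rank(W_full, delta=1e-2), delta_rank(W_low, delta=1e-2))   # 64 and 4
```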
But this is a hard problem, for we have to figure out all low-rank matrices. To further simplify the problem, we find the best low-rank approximation rather than all low-rank approximations. W should estrange itself from the best low-rank approximation whose distance is the farthest from W. This simplification is valid and will be proved later. Using this heuristic as motivation, we design an adversarial mechanism that increase the difficulty for W to be approximated by low-rank matrices, and consequently to advocate higher matrix ranks of W while pruning. At first, the best low-rank approximation f W of a small rank k is generated via Singular Value Decomposition, for the purpose of minimizing its distance to weight W; next, W is optimized to increase the distance from f W. The procedures could be understood as an adversarial combat between W and f W: as the low-rank f W tries to fit W, W is optimized to keep itself far away from f W. Mathematically, the combat could be expressed as a min-max problem. But unluckily, the problem may suffer the risk of not getting converged. When f W is fixed, the best W is taken when W \u2192\u221e. To resolve this issue during optimization, we constrain W within a euclidean norm ball. In other words, we plug W \u2225W \u2225F instead of W into the max-min problem. The reasons we use l2 normalization are: 1. W is bounded rather than growing to infinity; 2. the rank of 4 \fW could increase if we l2 normalize W when optimizing the min-max problem, which will be shown in the mathematical proof in the appendix; 3. l2 normalization on weight is equivalent to imposing l2 normalization on its singular values, providing a fair standard for rank comparisons based on the definition of rank in Definition 1 given fixed error tolerance. Before the introduction of this min-max problem, we introduce several notations: \u2225\u00b7 \u2225F is the Frobenius norm (2-norm) of matrices; I is the identity matrix; W := W \u2225W \u2225is the l2 normalized weight matrix W; U, \u03a3, V are matrices reached from the SVD of W, where U = {u1, u2, ...} and V = {v1, v2, ...} are orthonormal bases; \u03a3 is a diagonal matrix where singular values {\u03c31, \u03c32, ...} are sorted in descending order on the diagonal; operator Trun \u0000U\u03a3V T \u0001 = Pk i=1 \u03c3iuivT i stands for k-rank truncated SVD, or the k-rank best approximation of W. Then formally, we express the min-max problem as follows: min W max U,\u03a3,V \u2212\u2225W \u2212Trun \u0010 U\u03a3V T \u0011 \u22252 F , s.t. U T U = I, V T V = I, W = W \u2225W\u2225. (2.5) The optimization target is defined as the adversarial rank loss: Lrank = \u2212\u2225W \u2212Trun \u0010 U\u03a3V T \u0011 \u22252 F . (2.6) In deep learning, gradient descent is the most widely applied method for optimization problems, and we also adopt gradient descent for our experiments. Hence in this context, we propose the following theorem, stating that our adversarial rank loss could guide weight W towards higher rank: Theorem 2 (Effectiveness of the adversarial rank loss). Given the adversarial rank loss as defined in Equation (2.6). If we optimize W in rank loss via gradient descent, the rank of W will increase. The theorem could be mathematically proved, and the detailed proof will be provided in the appendix. With the proposed adversarial rank loss, our optimization objective consists of two goals: 1. we hope to reduce the loss for a certain task (e.g. classification, detection, etc.) for good sparse network performance; 2. 
we hope to reduce rank loss for higher weight ranks. We formulate the Rank-based Pruning objective by doing affine combination of the two goals. Given affine hyperparmeter \u03bb, the loss for a certain task Ltask, the adversarial rank loss Lrank, the Rank-based Pruning (RPG) objective L is defined as: L := Ltask + \u03bbLrank. (2.7) 2.5 The Gradual Pruning Framework Previous works have proposed various pruning framework, including One-shot Pruning [20], Sparseto-sparse Pruning [8; 1; 39; 12; 14], and Iterative Magnitude Pruning for Lottery Tickets [15; 43]. Compared with these frameworks, Gradual Pruning (GP) [60] could reach better performance with modest training budget. We adopt Gradual Pruning as the pruning framework, which is a usual practice in many works [48; 59; 34]. GP prunes a small portion of weights once every \u2206T training steps, trying to maintain sparse network performance via iterative \"pruning and training\" procedures. However, it is hard to associate rank loss with Gradual Pruning; we hope the factor of rank could be considered in the choice of weights via the proposed rank loss. Loss gradients are widely-applied weight saliency criteria, because gradient magnitudes reflect the potential importance of pruned weights: if a turned-off weight possesses large gradients with respect to the objective loss function, it is expected for significant contributions to loss reduction [14]. We use periodic gradient-based weight grow similar to previous pruning works [14; 34; 6], i.e. the weights are periodicly grown at each binary mask update step. But differently, the rank-based pruning objective (defined as Equation (2.7)) is used for gradients computation with respect to each model weight in our case. In this way, the rank factor is considered during the selection of active weights: there is a tendency that RPG chooses an active set of weights that features high-rank. 5 \fModels VGG19 ResNet32 Sparsity 99% 99.5% 99.9% 99% 99.5% 99.9% Dense 93.84 94.78 PBW [19] 90.89 10.00 10.00 77.03 73.03 38.64 MLPrune [55] 91.44 88.18 65.38 76.88 67.66 36.09 ProbMask [59] 93.38 92.65 89.79 91.79 89.34 76.87 AC/DC [40] 93.35 80.38 78.91 91.97 88.91 85.07 RPG (Ours) 93.62 93.13 90.49 91.61 91.14 89.36 Table 1. Sparsified VGG-19 and ResNet-32 on CIFAR-10. Baseline results are obtained from [59]. An embedded benefit of periodic gradient-based weight grow lies in computation cost considerations. Singular Value Decomposition (SVD) that is essential for rank computation is costly for large weight tensors. Calculating rank loss for each optimization step is hardly affordable. The adoption of periodic weight updating, however, amortizes the cost of rank loss computations. We also provide an SVD overhead analysis in Sec 3.6. In summary, Our Rank-based Pruning (RPG) method is formulated as follows: once every \u2206T training steps, the prune-and-grow procedures that updates binary mask M is performed. Firstly, we plan the number of parameters to prune and to grow, such that after mask updating, the whole network will reach the target sparsity at the current iteration. Target sparsity will increase gradually as training goes on, which is identical to GP. Secondly, we globally sort all parameters based on magnitude and perform the pruning operation. Thirdly, we grow the parameters based on gradient. For other training steps, mask M is left unchanged; the active weight values are updated. Specifically, HRank [33] also leverages matrix rank evaluations in pruning. 
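As a sketch of how Eqs. (2.5) to (2.7) could be realized for a single masked weight tensor, the code below normalizes the sparse weight, solves the inner maximization in closed form with a detached truncated SVD (Theorem 1), and combines the resulting adversarial rank loss with the task loss. The truncation rank k = 4 and λ = 0.25 are placeholder values, and the snippet is an illustrative reading rather than the released RPG implementation.

```python
import torch

def adversarial_rank_loss(weight: torch.Tensor, mask: torch.Tensor, k: int = 4) -> torch.Tensor:
    """L_rank = -|| W_bar - Trun_k(W_bar) ||_F^2 with W_bar = W / ||W||_F (Eq. 2.6)."""
    w = (weight * mask).flatten(1)                         # sparse weight, reshaped to 2-D
    w_bar = w / w.norm()                                   # l2 normalization keeps the min-max bounded
    # inner problem: best k-rank approximation via truncated SVD (Theorem 1);
    # it is detached so that the outer step only optimizes the weight itself
    U, S, Vh = torch.linalg.svd(w_bar.detach(), full_matrices=False)
    low_rank = U[:, :k] @ torch.diag(S[:k]) @ Vh[:k, :]
    return -((w_bar - low_rank) ** 2).sum()

# RPG objective (Eq. 2.7): L = L_task + lambda * L_rank, here with placeholder values
lam = 0.25
weight = torch.nn.Parameter(torch.randn(32, 16, 3, 3))
mask = (torch.rand_like(weight) > 0.9).float()
task_loss = torch.tensor(1.0)                              # stand-in for the classification loss
loss = task_loss + lam * adversarial_rank_loss(weight, mask, k=4)
loss.backward()
print(weight.grad.abs().mean())
```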
Our idea is significantly different from HRank [33] in the following aspects: 1. HRank performs filter pruning while our work focuses on weight pruning; 2. HRank evaluates ranks of feature maps, but we evaluate ranks of weight tensors; 3. HRank uses feature rank as filter saliency; our work uses weight rank to guide the update of a sparse network topology. 3 Experiments Our Rank-based PruninG (RPG) method is evaluated on several behchmarks and proved outstanding among recent unstructured pruning baselines. This section presents the experiment results to empirically prove the effectiveness of our RPG method, especially on high sparsities. First, we will show the results of RPG on two image classification datasets: the comparatively small-scaled CIFAR-10, and the large-scaled ImageNet. Then, we will present the results of RPG on downstream vision tasks. Finally, an ablation study will be given. 3.1 CIFAR Experiments Experiment settings. We first compare our RPG pruning method with other methods on CIFAR-10 classification. CIFAR-10 is one of the most widely used benchmark for image classification. It consists of 60000 32 \u00d7 32 images: 50000 for training, and 10000 for validation. We hope to try our RPG method first on this relatively small dataset and look for heuristic patterns. Among the pruning baselines, we choose ProbMask [59] and AC/DC [40]for comparison because these two methods are intended for high-sparsity pruning. Additionally, ProbMask is a recent baseline that provides both CIFAR and ImageNet classification results, enabling us for further investigation on larger-scale datasets. Other baselines including PBW [19] and MLPrune [55] are earlier conventional pruning baselines for references. For fair comparison, our RPG method is applied to modern CNN structures, i.e. VGG-19 and ResNet-32, and prune for 300 epochs, according to the setting of ProbMask [59]. The results are shown in Table 1. Results analysis. At relatively low sparsities, the gap between recent baselines are small. ProbMask [59], AC/DC [40], and RPG all give satisfactory results at 99% compared with early pruning works. But as sparsity further increases, the three methods undergo significant performance decay on 6 \feither network. At 99.5% and 99.9%, our RPG method shows great advantage over the other two baselines. This discovery inspires us further investigate the high-sparsity potential of RPG on the large-scale ImageNet dataset. 3.2 ResNet-50 on ImageNet Algorithm Sparsity Accuracy ResNet-50 [22] 0 76.80 STR [29] 0.8 76.19 WoodFisher [48] 0.8 76.73 GraNet [34] 0.8 76.00 AC/DC [40] 0.8 76.30 PowerPropagation [45] 0.8 76.24 RPG (Ours) 0.8 76.66 STR [29] 0.9 74.31 WoodFisher [48] 0.9 75.26 GraNet [34] 0.9 74.50 AC/DC [40] 0.9 75.03 ProbMask [59] 0.9 74.68 PowerPropagation [45] 0.9 75.23 RPG (Ours) 0.9 75.80 STR [29] 0.95 70.40 WoodFisher [48] 0.95 72.16 AC/DC [40] 0.95 73.14 ProbMask [59] 0.95 71.50 PowerPropagation [45] 0.95 73.25 RPG (Ours) 0.95 74.05 STR [29] 0.98 62.84 WoodFisher [48] 0.98 65.55 AC/DC [40] 0.98 68.44 ProbMask [59] 0.98 66.83 PowerPropagation [45] 0.98 68.00 RPG (Ours) 0.98 69.57 Table 2. Sparsified ResNet-50 on ImageNet. All results are official reports from the original works. Best and second best results are bolded and underlined. Experiment settings. Sparse ResNet-50 networks evaluated on the ImageNet dataset are the most commonly-used and recognized weight pruning benchmarks. ImageNet ISLVRC2012 [9] is a large scale image classification dataset. 
It contains 1281K images in the training set and 50K images in the validation set. All the images are shaped 224 \u00d7 224 and distributed in 1000 classes. ResNet-50 [22] is a medium-size canonical CNN with 25.5M parameters and 8.2G FLOPs, designed for ImageNet classification. Our RPG method is applied on ResNet-50 under high sparsities: 80%, 90%, 95%, and 98%. We compare RPG with recent baselines. Among the baselines, STR [29] automatically learns pruning sparsity; WoodFisher [48], GraNet [34] and ProbMask [60] are methods based on gradual pruning; AC/DC [40] and ProbMask [60] are baselines targeted at high sparsities; PowerPropagation [45] is an improvement of Top-KAST [26] that relies on a pre-set layerwise sparsity distribution. For fair comparison, all results are 100-epoch baselines; we used standard ImageNet configs, detailed in the Appendix. The results are presented in Table 2. The advantage of adversarial rank-based pruning is manifested at high sparsities. Results analysis. Our method could achieve outstanding performance for sparsities 90%, 95%, and 98%. At lower sparsities (e.g. 80%, 90%), WoodFisher [48] takes the lead among the baselines. Our RPG method is slightly lower than WoodFisher [48] by 0.07% in ImageNet accuracy at 80% sparsity. At higher sparsities, our method outcompetes other baselines. Other competitive baselines at high sparsities include PowerPropagation [45] and AC/DC [40]. However, the gap between our RPG method and these baselines widened at high sparsities. Specificlly, our method outperforms current top baseline by 1.13% of ImageNet Top-1 accuracy at 98% sparsity. Erdos-Renyi-Kernel (ERK) [14] is a layerwise sparsity distribution that is commonly used for performance boosting in weight pruning methods that require a pre-set sparsity distribution. However, ERK-based sparse models are computationally costly. Differently, RPG automatically maintains a more balanced sparsity throughout the whole network under the same total sparsity constraint. Though our sparse model slightly lags behind the current ERK variant of SOTA [45] under lower sparsities in certain accuracy, it is much cost-effective. Quantatitively, for 80% sparse ResNet-50, the reported ERK-based State-of-the-Art ImageNet accuracy is merely 0.10% higher than our RPG method (reaching 76.76% for [45]), but costing an extra 58% of FLOPs. The advantage of our RPG method over ERK-based methods is clearly illustrated in Figure 3, where we compare RPG with the ERK variant of TOP-KAST [26] and the State-of-the-Art PowerPropagation [45]. DeepSparse [28] is a recent sparse acceration framework on CPU that makes unstructured-sparse network accerlation possible in applications. We time sparse ResNet-50 on DeepSparse for singleimage inference. Results in Table 3 shows that highly-sparse ResNet-50 could achieve around 7 \fSp. Acc. Runtime Dense 76.80 40.25ms 0.8 76.66 39.26ms 0.9 75.80 27.98ms 0.95 74.05 22.20ms 0.98 69.57 20.89ms Table 3. Sparse acceleration of sparse ResNet-50 on DeepSparse. Unstructured pruning could bring 2\u00d7 acceleration effects on CPU at high sparsities. Algorithm Sp. BoxAP MaskAP Mask R-CNN 0 38.6 35.2 RigL [14] 0.5 36.4 32.8 AC/DC [40] 0.5 37.9 34.6 RPG (Ours) 0.5 37.7 34.4 RigL [14] 0.7 32.3 29.1 AC/DC [40] 0.7 36.6 33.5 RPG (Ours) 0.7 37.6 34.4 RigL [14] 0.8 26.0 23.7 AC/DC [40] 0.8 34.9 32.1 RPG (Ours) 0.8 37.1 33.8 Table 4. Mask R-CNN pruning on COCO val2017. \"Sp.\" stands for model sparsity. Best results are bolded. Algorithm Sp. 
Accuracy DeiT-S [52] 0 79.85 SViT-E [6] 0.5 79.72 AC/DC [40] 0.5 80.15 RPG (Ours) 0.5 80.15 SViT-E [6] 0.6 79.41 AC/DC [40] 0.6 79.69 RPG (Ours) 0.6 79.89 AC/DC [40] 0.8 76.24 RPG (Ours) 0.8 77.42 Table 5. Sparse DeiT-S on ImageNet. \"Sp.\" stands for model sparsity. The best results are bolded. 2\u00d7 accerlation on CPU. This observation reveals that highly unstructured-sparse networks have promising applicative prospects on edge devices that could not afford power and cost-intensive GPUs, e.g. micro robots, wearable devices, et cetera. These devices feature limited memory and power, but high inference speed demands. In this sense, our RPG unstructured pruning method is of great application value. 3.3 Downstream Vision Tasks 0.0 0.1 0.2 0.3 0.4 FLOPs (\u00d78.2e9) 70 72 74 76 T est Accuracy RPG(Ours) WoodFisher AC/DC PowerProp-ERK T opKAST-ERK Figure 3. ImageNet accuracy versus FLOPs on sparse ResNet-50. Our method achieves better AccuracyFLOPs trade-off compared with competitive pruning baselines, especially at high sparsities. We also test our weight pruning method on downstream vision tasks. Mask R-CNN [21] is a widely used benchmark for conventional downstream tasks, namely, object detection and instance segmentation. We try to apply our weight pruning method to Mask R-CNN and compare its detection and segmentation performance against other pruning baselines. As for the choice of baselines, we found that limited weight pruning works conducted experiments on downstream vision tasks. We choose the following baselines for comparison: RigL [14] is a commonly used sparse-to-sparse baseline. AC/DC [40] is good at high-sparsity pruning on ImageNet classification. All methods are applied on Mask R-CNN ResNet-50 FPN variants to measure the mAP for bounding boxes and segmentation masks. For all Mask R-CNN experiments, we follow the official training of COCO 1\u00d7 [21]: pruning and finetuning lasts for 90K iterations in total. The pruning results evaluated on COCO val2017 are illustrated in Table 4. Similar to the trend in classification experiments, our RPG method gains an advantage at high sparsities compared with AC/DC [40]. As sparsity increases from 70% to 80%, the gap between AC/DC and RPG widens from 1.0 to nearly 2.0 for both detection and segmentation mAPs. This finding shows that RPG is a weight pruning method that could be generalized to various vision tasks: it always works well at high sparsities without the need for significant modifications. 3.4 Vision Transformers Recent works on vision model architectures focus on transformers [13; 52]. Transformer architecture models are proven particularly effective on large-scale image recognition tasks and are well applied 8 \fto various downstream tasks [4; 58; 5], but they are still struggling for industrial applications due to large model size and computation cost. To address these problems, works like SViT-E [6] attempted to apply unstructured pruning on vision transformers. Though our method is not specifically designed for models with the attention mechanism, we explore the effect of our weight pruning method on DeiT-S [52] and compare it with high-sparsity weight pruning baseline [40] and the transformer pruning baseline [6] in Table 5. For fair comparison, all pruning experiments follow the setting of SViT-E [6]: the DeiT-S model is pruned for 600 epochs on ImageNet [9]. All other settings are identical the official training setting of [52], including batchsize, learning rate, etc. 3.5 Ablations 0 1 2 3 61.5 62.0 62.5 63.0 Avg. 
Rank 90.46 90.63 91.14 90.87 90.76 Figure 4. Average weight matrix rank of ResNet-32 [22] versus affine hyperparameter \u03bb. Accuracies on CIFAR-10 are marked. In this section, we inspect the effect of rank loss. The rank-based pruning objective involves an affine parameter \u03bb that controls the amount of rank loss with respect to the original task loss. When \u03bb = 0, rank loss is turned off. Investigating the relations of rank versus \u03bb and accuracy versus \u03bb on a ResNet-32 of 99.5% sparsity as shown in Figure 4, we found rank loss could significantly increase the average rank throughout all layers of the sparse network. A substantial increase of accuracy is also observed. But as \u03bb further increases, the average rank will be saturated. Reversely, as \u03bb further increases, the classification accuracy will decrease. This could be attributed to the property of affine combination in Equation (2.7). When \u03bb is large, the pruning objective will pay too much attention to maintain weight ranks and neglect the goal of performing the task well. Hence, it is necessary to tune \u03bb and find the most optimal one. Type Time FLOPs SVD 16.5min 5.07e15 RPG90% 1003min 1.34e18 Table 6. SVD overhead compared with the overall pruning & finetuning cost of RPG on 90% sparse ResNet-50. Baseline Sparsity Train FLOPs ResNet-50 (Dense) 3.14e18 AC/DC 0.9 0.58\u00d7 PowerProp. 0.9 0.49\u00d7 RPG(Ours) 0.9 0.43\u00d7 Table 7. Training FLOPs comparison with pruning baselines on sparse ResNet-50. 3.6 Overhead Analysis As introduced in Section 2.4, RPG involves costly SVD calculations. However, we conduct experiments and illustrate that SVD accounts for very minimal cost overhead during pruning in terms of both time and FLOPs. As shown in Table 6, the overall time and FLOPs for SVD calculations only accounts for < 2% of the whole RPG pruning cost. We also compare the FLOPs overhead of RPG with other pruning methods. Observing from Table 7, our method is the most cost-effective compared with baselines. Above all, the extra overhead brought by rank loss calculations is not a concern. 4" + }, + { + "url": "http://arxiv.org/abs/2305.18149v4", + "title": "Multiscale Positive-Unlabeled Detection of AI-Generated Texts", + "abstract": "Recent releases of Large Language Models (LLMs), e.g. ChatGPT, are\nastonishing at generating human-like texts, but they may impact the\nauthenticity of texts. Previous works proposed methods to detect these\nAI-generated texts, including simple ML classifiers, pretrained-model-based\nzero-shot methods, and finetuned language classification models. However,\nmainstream detectors always fail on short texts, like SMSes, Tweets, and\nreviews. In this paper, a Multiscale Positive-Unlabeled (MPU) training\nframework is proposed to address the difficulty of short-text detection without\nsacrificing long-texts. Firstly, we acknowledge the human-resemblance property\nof short machine texts, and rephrase AI text detection as a partial\nPositive-Unlabeled (PU) problem by regarding these short machine texts as\npartially ``unlabeled\". Then in this PU context, we propose the\nlength-sensitive Multiscale PU Loss, where a recurrent model in abstraction is\nused to estimate positive priors of scale-variant corpora. Additionally, we\nintroduce a Text Multiscaling module to enrich training corpora. Experiments\nshow that our MPU method augments detection performance on long AI-generated\ntexts, and significantly improves short-text detection of language model\ndetectors. 
Language Models trained with MPU could outcompete existing detectors\non various short-text and long-text detection benchmarks. The codes are\navailable at\nhttps://github.com/mindspore-lab/mindone/tree/master/examples/detect_chatgpt\nand https://github.com/YuchuanTian/AIGC_text_detector.", + "authors": "Yuchuan Tian, Hanting Chen, Xutao Wang, Zheyuan Bai, Qinghua Zhang, Ruifeng Li, Chao Xu, Yunhe Wang", + "published": "2023-05-29", + "updated": "2024-03-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "main_content": "INTRODUCTION Recent developments in Large Language Models (LLMs) have brought astonishing changes to people\u2019s lives. The GPT-2 (Radford et al., 2019) model, created in early 2019, is capable of simple question-answering tasks; GPT-3 (Brown et al., 2020) is a great leap in model size and capability; ChatGPT (OpenAI, 2022), announced in late 2022, shows comparable performance to humans as a chatbot; GPT-4 (OpenAI, 2023a), released this year, has even better generative performance. These advancements are making people\u2019s lives easier with applications like writing aids, search engines, and Office Suites. However, they could be used to generate deceptive fake texts for illegal and unethical purposes. Previous works have proposed numerous approaches to distinguish fake AI-generated text from genuine human languages. Canonical work (Solaiman et al., 2019) used simple machine learning classifiers as baselines; some works (Gehrmann et al., 2019; Mitchell et al., 2023) proposed zero-shot detection measures based on pretrained models; numerous works (Solaiman et al., 2019; Crothers et al., 2022; Guo et al., 2023; Mitrovic et al., 2023) perform simple finetuning of pretrained language models on the AI-text classification task. Despite various methods, few mainstream methods investigated the negative impact of text length: the difficulty to detect significantly increases as texts become shorter. Some latest online ChatGPT detectors have noticed this issue, but they dodge rather than address it by putting up minimum text \u2217Corresponding Author. 1 arXiv:2305.18149v4 [cs.CL] 5 Mar 2024 \fPublished as a conference paper at ICLR 2024 length requirements (Tian, 2022; FudanNLPLab, 2023; OpenAI, 2023b). In the era of smartphones where people rely heavily on fragmented mobile media, fake short articles like SMSes, Tweets, and reviews generated by LLMs could pose huge threats to one\u2019s daily life, yet we still lack a comprehensive detector that is capable of detecting both short texts and long-texts. To improve detectors\u2019 performance on short texts, we rethink the plain \u201cBinary Classification\u201d setting that is intuitively applied. It is seemingly natural to phrase text detection as a binary classification task, as texts have clear origins (from human works or AI outputs) and thus, clear binary labels (real or fake); but interestingly, we observe a handful of machine-generated texts that are overly short and simple, such that these texts are highly similar to human (e.g. Ex. 2 in Table 1). It is not suitable to assign these simple machine texts with either clear human or AI labels; rather, they are in an \u201cUnlabeled\u201d state. Though the case is occasional and most short machine texts (e.g. Ex. 1 in Table 1) are still distinguishable based on manifold features, it prompts us to question the rationality of clear binary labels on general short machine texts. 
On the contrary, we hold that short machine-generated texts are partially \u201cUnlabeled\u201d. As machine-generated texts become shorter and simpler, the \u201cUnlabeled\u201d property could gradually dominate the text. Example 1: The first sentence in benchmark HC3-Sent (Guo et al., 2023) Human: You can\u2019t just go around assassinating the leaders of countries you don\u2019t like! AI: It is generally not acceptable or ethical to advocate for or condone the assassination of any individual, regardless of their actions or beliefs. Example 2: Answer to \u201cWhen is the independence day of the United States?\u201d Human: Independence Day is annually celebrated on July 4th. AI: The Independence Day of the United States is celebrated on July 4th. Table 1: Short example answers from human and AI. In general, short answers are distinguishable based on features like punctuations, emotions, and formality (see non-cherrypicked case Ex. 1). But in extreme cases (see Ex. 2), short simple answers are indistinguishable, and the unlabeled property is manifest. In this sense, we model the task of AI-generated text detection as a partial Positive-Unlabeled (PU) problem and formulate the Multiscale Positive-Unlabeled (MPU) training framework to address the challenging task of short text detection without sacrificing long texts. PU problems typically address binary classification tasks where positive data and unlabeled data are offered for training. Considering the partially \u201cUnlabeled\u201d property of short machine texts, we rephrase detector training as a partial PU problem and boost detectors\u2019 performance on multiscale texts. In order to improve conventional PU optimization targets for texts of various lengths, a length-aware Multiscale PU (MPU) loss is proposed and applied during the training process. We are aware that the PU prior probability of a text being positive is length-variant. To this end, an abstract recurrent model is designed to adjust the PU prior probability automatically based on corpus length. Further, a Text Multiscaling module is also proposed to exert the effect of Multiscale PU loss by diversifying training corpora in terms of length. Experiments demonstrate that the MPU framework is significantly effective in improving short-text detection performance; meanwhile, detection on long texts is also augmented. 2 RELATED WORK Text Detection Methods. Since the introduction of GPT-2 (Radford et al., 2019) and its successors, fake texts generated by powerful LLMs are causing ethical and legal issues. Methods are developed to discriminate against these generated texts in various misuse scenarios. Zellers et al. (2019) shed light on machine-generated fake news by proposing a GPT-based news generator GROVER, and uses GROVER itself to sort fake news out; Adelani et al. (2020) looks at detection of fake online reviews; Fagni et al. (2020) focuses on machine-generated fake tweets and proposes the TweepFake dataset. Other proposed detection methods are for general scenarios. Several canonical baselines are mentioned by Solaiman et al. (2019) to detect GPT-2 texts, including simple TF-IDF classifiers and finetuned RoBERTa (Liu et al., 2019); GLTR (Gehrmann et al., 2019) detect generated texts in a zero-shot manner by using token prediction probabilities from available pretrained NLP models like BERT (Devlin et al., 2018) and GPT-2 (Radford et al., 2019). 
After the introduction 2 \fPublished as a conference paper at ICLR 2024 of ChatGPT (OpenAI, 2022), some new detection methods (Liu et al., 2022; Mitchell et al., 2023; Mitrovic et al., 2023; Guo et al., 2023) are released. PU Methods. Previous works have proposed methods to train a binary classifier with positive and unlabeled data. Many PU methods (Bekker & Davis, 2020; Du Plessis et al., 2014; Kiryo et al., 2017; Su et al., 2021; Hammoudeh & Lowd, 2020; Chen et al., 2020) constructs PU loss based on positive and unlabeled samples, for classifying unlabeled data. Other PU methods include two-step learning and bias learning (Liu et al., 2003). The two-step technique first identifies reliable negative examples and then performs learning based on the positives and negatives of the mark (He et al., 2018; Ienco & Pensa, 2016); biased learning treats unlabeled data as a negative sample of class-labeled noise (Hsieh et al., 2015; Shao et al., 2015). Above all, we refer to applying a PU loss during training to address the task of multiscale AI-generated text detection, because PU losses could be generally applied on powerful finetuning text detectors without much additional computation costs. 3 MULTISCALE POSITIVE-UNLABELED TEXT DETECTION 3.1 TEXT DETECTION AS POSITIVE-UNLABELED CLASSIFICATION Despite manifold methods for detecting AI-generated texts, mainstream detectors seldom take the factor of text length into account, and thus they always fail on short texts. We have tried several existing detection methods for short LLM-generated texts (shown in Table 4), but none of them perform well. As people nowadays are immersed in short, fragmented forms of mobile media, they are vulnerable to LLM attacks with no reliable means to defend themselves. Hence, we are in urgent need of a performant short AI-generated text detector. Intuitively, past works formulated the task of AI text detection as a binary classification problem, i.e. classifying texts as AI or Human. However, the formulation could be problematic for shorter texts as we found high similarities between extremely simple AI texts and human texts. The phenomenon could be rare in actual applications. But it is fundamentally reasonable, because LLMs learn from human languages; and for sentences whose structures are overly simple, they are seemingly \u201ccopied\u201d by LLMs from what they have learned. Therefore, the attribution of these simple machine texts is uncertain: on one hand, they are indeed outputs from Language Models; on the other hand, they are ordinary human languages. Though the completely non-classifiable case mostly happens for extremely short texts or commonly used phrases (that rarely occurs in our benchmarks and detection of which is of no application value), it inspires us to think about the partially \u201cunlabeled\u201d property behind the vast majority of short, distinguishable texts despite their definite labels. To overcome this issue, we model the task of multiscale text detection as a partial Positive Unlabeled problem (PU). In this problem, corpora from human are regarded as \u201cPositive\u201d, but short texts from machines are given an additional \u201cUnlabeled\u201d mark for PU loss calculations (detailed in Sec. 3.3). Then our detector model is optimized within this partial PU context. 3.2 PRELIMINARIES: CANONICAL PU LOSS FUNCTIONS PU losses are derived from the traditional Positive-Negative (PN, i.e. Binary Classification) setting, detailed in Appendix A. 
Some works (Du Plessis et al., 2014; Plessis et al., 2015) perform indirect approximation of the negative risk in the PN framework, yielding the unbiased PU (uPU) loss as follows: \u02c6 RuP U(g) = \u03c0 \u02c6 RP (g, +1) \u2212\u03c0 \u02c6 RP (g, \u22121) + \u02c6 RU(g, \u22121), (1) where \u02c6 RP (g, \u22121) := 1 nP PnP i=1 L(g(xP i ), \u22121) and \u02c6 RU(g, \u22121) := 1 nU PnU i=1 L(g(xU i ), \u22121) are estimations calculated from positive and unlabeled training samples respectively. However, the deep learning classifier may be too flexible, leading to \u02c6 RU(g, \u22121) \u2212\u02dc \u03c0 \u02c6 RP (g, \u22121) < 0 and causing the model to overfit. As a remedy, Kiryo et al. (2017) proposes the non-negative risk estimator based on the uPU loss. The non-negative PU (nnPU) loss is thus derived as follows: \u02c6 RnnP U(g) = \u02dc \u03c0 \u02c6 RP (g, +1) + max{0, \u02c6 RU(g, \u22121) \u2212\u02dc \u03c0 \u02c6 RP (g, \u22121)}. (2) 3 \fPublished as a conference paper at ICLR 2024 The nnPU loss Kiryo et al. (2017) is performant and thus widely referred by later PU works and applications (Kato et al., 2019; Bepler et al., 2019; Peng et al., 2019; Xu et al., 2019; Chen et al., 2020; Su et al., 2021; Tang et al., 2022). However, to the best of our knowledge, no previous works have applied PU to scenario of length-variant texts, in which simple usage of the nnPU loss might not be effective. We hope to develop an effective PU mechanism in aid of detecting length-variant texts. 3.3 MPU: A LENGTH-SENSITIVE PU APPROACH In PU loss conventions as stated in Sec. 3.2, the estimation for the prior probability of a data being positive \u02dc \u03c0 is always kept at a constant. The reason is that prior probability \u03c0 is closely associated with the dataset distribution, which is always assumed to be uniform. However, this might not be case with texts of different lengths. As explained in Section 1, short texts and long texts hold different properties; in other words, they do not share the same distribution. In this regard, the assumption of dataset distribution being uniform is flawed; fixing the prior estimation at a certain constant value is problematic in the case of multiscale text detection (i.e. where texts to be processed are of manifold length). Though long texts and short texts have different distributions, the distribution shift from long text to short text is a gradual process with respect to text lengths. To deal with the gradual shift of distribution, we look at this shift with respect to text length from a differentiation perspective. Texts of a certain length l could be regarded as a small subset that features its own distribution, and also its own prior \u03c0(l). We hope to provide a smooth, length-variant estimation \u02dc \u03c0(l) for the prior at length l, in order to fit the PU framework for the multiscale text detection problem. In this fashion, we propose the Multiscale PU loss \u02c6 RMP U that uses length-sensitive priors \u02dc \u03c0 for multiscale texts. However, we are faced with the challenge of modeling the length-variant prior \u02dc \u03c0 in abstraction. Namely, we need to investigate the general probability of all sentences (of a certain length) being human, without access to specific details of any piece of text. 
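Before turning to length-dependent priors, the snippet below gives a minimal sketch of the non-negative PU risk in Eq. (2) with a constant prior. The logit-output classifier and binary cross-entropy surrogate for L are illustrative choices, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def nnpu_risk(logits_pos, logits_unl, prior: float):
    """Non-negative PU risk (Eq. 2) with a constant class prior."""
    loss = lambda z, y: F.binary_cross_entropy_with_logits(z, torch.full_like(z, y))
    risk_p_pos = loss(logits_pos, 1.0)                     # R_P(g, +1)
    risk_p_neg = loss(logits_pos, 0.0)                     # R_P(g, -1)
    risk_u_neg = loss(logits_unl, 0.0)                     # R_U(g, -1)
    negative_part = risk_u_neg - prior * risk_p_neg
    # clamp implements the max{0, .} correction that keeps the negative risk non-negative
    return prior * risk_p_pos + torch.clamp(negative_part, min=0.0)

# toy usage: classifier scores for human-written (positive) and unlabeled texts
print(nnpu_risk(torch.randn(8), torch.randn(16), prior=0.5))
```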
To this end, we use the general recurrent language model (Mikolov et al., 2010; Sundermeyer et al., 2012) in abstraction as a discriminator for positive, human-spoken corpora, which is formulated as follows: given a sequence Sl of l tokens: Sl = [ti]n i=1, abstract recurrent discriminator \u2206: seq \u2192[0, 1] that is bounded one-dimensional (because from the discriminator we expect a confidence of a sequence being positive), the recurrent model in abstraction is expressed as: \u2206(Si+1) = f (\u2206(Si), ti+1) , \u2200i \u2208[l \u22121] , (3) where f is some function that merges the classification of all previous tokens Si\u22121 with the classification of the last token ti. Next, the abstraction is concretized based on task characteristics of human-generated text discrimination. Since relatively short texts tend to have simple semantic correlations to be captured, human text discrimination is performed via capturing signals from tokens. We hold that each token has a hidden property of origin, and the attribution contributes to the classification of the whole sequence. Tokens, as extreme cases of short texts, could be sorted into two categories: \u201cclear positive\u201d, i.e. the token could hardly be generated by AI; or \u201cunlabeled\u201d, i.e. the token is mediocre and universally used, giving no signal as \u201chuman-spoken\u201d. Each token is expected to provide an equal contribution to the overall sequence classification towards the orientation of its own category (Kang et al., 2018). In this sense, the merging function f is formulated as equally-weighted addition: f (\u2206(Si), ti+1) = wS\u2206(Si) + wt\u03b4(ti+1) s.t. wS = wt, (4) where \u03b4(ti+1) is defined as the contribution of \u03b4(ti+1). For simplicity, we discretize the transition of classification from i \u2192i + 1 and each token contribution is designated as binary. We also take text length into consideration by normalizing \u03b4(ti+1) with a factor of sequence length l. Under these assumptions, the transition is formulated as: \u2206(si+1) = clip(\u2206(Sn) + \u03b4(ti), [0, 1]), s.t. \u03b4(ti) = \u001a 1/l if ti is clear positive, \u22121/l otherwise. (5) Notably, we use a hard clip function to bound the overall classification results in interval [0, 1] rather than other non-linear functions, e.g. sigmoid. This is because clear positive tokens could be rare in 4 \fPublished as a conference paper at ICLR 2024 practice. This assumption is particularly true when we consider recent advancements of generative language models, where human and AI languages are more resembling. In other words, a majority of words are both frequently used by human and AI, while only a few signal words manifest unique human characteristics. This property requires the discriminate model to be highly sensitive to positive token signals. Hence, we set hard boundaries rather than using non-linear standardizing functions to scale the output between [0, 1]. Further, to encourage positive responses, we initially positive as the initial state \u2206(S0) of the discriminator. Return to the original objective, we tend to calculate the prior probability of a sample being positive \u02dc \u03c0 based on the introduced recurrent language model. \u02dc \u03c0 could also be interpreted as the expectation of confidence from the recurrent discriminator E [\u2206(Sl)]. 
The discretization of contribution is beneficial to reducing the continuous discriminator \u2206to discrete states: for a sequence Sl with l tokens, the confidence could only take values as i/l, \u2200i \u2208[l]. Therefore, discriminator \u2206has a total of i + 1 equally spaced states as confidence output. We will show that the expectation E [\u2206(Sl)] of all length-l sequences could be exactly calculated given the positive probability p of a single token, i.e. the general probability of a token showing clear-human signal. As stated previously, p tends to be a small value. State transition matrix P \u2208R(l+1)\u00d7(l+1) that represents the contribution of the last token is a band sparse matrix consisting of positive transition p and negative transition 1 \u2212p to adjacent states from the current state. Defining probability vector at state i as \u03c3i \u2208R(l+1), a single transition shown as Eq.5 and the final state probability vector could be described as: \u03c3i+1 = \u03c3iP, \u03c3l = \u03c30Pl. (6) Thus, given one-hot initial state \u03c30, we could calculate the final state probability vector and the overall expecation \u02dc \u03c0 for a sequence of length l: \u02dc \u03c0(l) = E [\u2206(Sl)] = \u27e8\u03c3l, \u03b1\u27e9= \u03c30Pl\u03b1T , (7) where vector \u03b1 \u2208R(l+1) is the sequence vector of all possible positive confidence: \u03b1 = [i/l]l i=0. Further details and derivations are mentioned in Appendix B. As a result, as text length decreases, the prior positive probability in samples of this length \u02dc \u03c0length decreases as well. This is in line with our expectation in Sec 3.1 that shorter texts tend to demonstrate more \u201cunlabeled\u201d properties. Finally, on top of the canonical non-negative PU loss as defined in Eq. 2, we define the Multiscale PU Loss with text-length-variant priors: \u02c6 RMP U(g) = \u27e8\u02dc \u03a0, \u02c6 RP (g, +1)\u27e9+ \u02c6 RU(g, \u22121) \u2212\u27e8\u02dc \u03a0, \u02c6 RP (g, \u22121)\u27e9, (8) where \u02dc \u03a0 stands for an array: [\u02dc \u03c0(lg)] that records the corresponding prior of training texts, calculated based on respective text lengths using Eq. 7. As is emphasized, short machine-generated texts should be viewed as partially \u201cunlabeled\u201d rather than entirely \u201cunlabeled\u201d. Hence, we weight-sum the multiscale PU loss and the canonical PN classification loss to get the final loss for detector model finetuning: \u02c6 R(g) = \u02c6 RP N(g) + \u03b3 \u02c6 RMP U(g). (9) 3.4 TEXT MULTISCALING The proposed Multiscale PU Loss expects training texts of highly variant lengths, but training sets may contain lengthy paragraphs only. Therefore, we introduce Text Multiscaling Module that generates a variety of short texts to exert the potential of the length-sensitive Multiscale PU loss. We propose random deletion at sentence scale as a solution. Text Multiscaling module consists of 3 steps: first, a complete training text is first tokenized into n sentences, denoted as sentence array C; then the sentences are independently and randomly masked based on a sentence-wise mask probability psent. In probabilistic terms, each sentence is decided by an independent Bernoulli trial in the sample space {0, 1}. In the sample space, 0 means the sentence is discarded and 1 stands for the sentence is maintained. Finally, all sentences are merged again for the multiscaled training text cmul. 
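Below is a minimal sketch of the three-step procedure just described; the masking itself is formalized in Eq. (10) right after. The period-based sentence splitting and the guard that keeps at least one sentence are simplifying assumptions made only for this illustration.

```python
import random

def text_multiscale(text: str, p_sent: float = 0.25, seed=None) -> str:
    """Randomly drop whole sentences with probability p_sent, keeping their order."""
    rng = random.Random(seed)
    # step 1: tokenize into sentences (naive split; a proper sentence tokenizer would be used in practice)
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    # step 2: independent Bernoulli(1 - p_sent) keep/drop decision per sentence
    kept = [s for s in sentences if rng.random() >= p_sent]
    if not kept:                                           # guard so the sketch never returns an empty text
        kept = [rng.choice(sentences)]
    # step 3: merge the remaining sentences back into one multiscaled training text
    return ". ".join(kept) + "."

print(text_multiscale("LLMs write fluent text. Detection is hard. Short texts are harder. We study this.",
                      p_sent=0.25, seed=0))
```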
5 \fPublished as a conference paper at ICLR 2024 Mathematically, with \u2299stands for the element-wise Hadamard product, the above process could be concluded as: cmul = C \u2299M, where M \u223cBernoullin(1 \u2212psent). (10) The proposed Text Multiscaling module is a one-to-one mapping from C \u2192cmul; we are not generating more training samples, but substituting the original sample for fair comparison in experiments. Notably, it is probable that multiscale could leave the original text intact, or only one sentence is left. The relative sequence of remaining sentences is maintained to avoid breaking excess logical relations between sentences. Multiscaled texts automatically inherit class labels of their original text. The concern for attribution change due to length reduction is to be addressed by the use of Multiscale PU Loss. Though random deletion is also applied in Easy Data Augmentation (EDA) (Wei & Zou, 2019), our method is different from theirs in two aspects. Firstly, our method is focused on multiscaling, while word-level random deletion proposed by EDA has limited effect in generating texts of various lengths. Secondly, EDA could break semantic meanings in sentences: deletion of keywords could change the class of a sentence; while a more integrated, sentence-level deletion reduces the chance of class property change. 4 EXPERIMENTS 4.1 SETTING OVERVIEW Datasets. We choose TweepFake (Fagni et al., 2020) and HC3 (Guo et al., 2023) as benchmarks for our experiments. TweepFake (Fagni et al., 2020) is a dataset of tweets for AI-generated microblog detection. Since latest LLMs have completely reshaped the task of AI text detection, we also adopt HC3 (Guo et al., 2023), which is an up-to-date ChatGPT text detection dataset including both English and Chinese. Additionally, HC3 has short-text benchmarks: HC3-English-Sent and HC3-Chinese-Sent. We use these datasets to demonstrate the effectiveness of our method. The length statistics in Table 2 show the distribution similarity of English short-text benchmarks, i.e. TweepFake (that consists of tweets) and HC3-En-Sent. We conclude from the statistics that the adopted HC3 short-text benchmark could simulate the fragmented language environment (e.g. Twitter) on mobile apps. Detector evaluation on these short-text benchmarks could reflect their real-world detection capabilities in smartphone-related scenarios. Benchmark Mean Std Q1 Q2 Q3 TweepFake (Fagni et al., 2020) 24.82 15.19 13 21 34 HC3-En-Sent (Guo et al., 2023) 24.98 15.47 15 22 31 Table 2: Token length statistics of short-text benchmarks. HC3-English-Sent has a similar length distribution as TweepFake. These short-text benchmarks could simulate languages that we encounter in Instant Messaging and Microblogging Apps, like Twitter. Detectors. BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) are adopted to apply our MPU method, due to their popularity and supreme performance in previous AI text detection works (Solaiman et al., 2019; Fagni et al., 2020; Liu et al., 2022; Guo et al., 2023). Training-agnostic detection algorithms are excluded from our consideration. 4.2 TWEEPFAKE DETECTION RESULTS In TweepFake experiments, we follow Kumarage et al. (2023) for our training settings. Kumarage et al. (2023) is one of the latest works on AI-generated text detection, and it claims outstanding performance on short-text detection. We strictly follow the original training strategy in Kumarage et al. 
(2023): the model is trained with the AdamW optimizer at batchsize 16 and learning rate 1e \u22125. TweepFake mainly consists of short tweets. we inspect the dataset and find that a vast majority of texts are single or a handful of sentences. Hence, we refrain from using Text Multiscaling that 6 \fPublished as a conference paper at ICLR 2024 Method Acc. BERT-Finetuned (Devlin et al., 2018) 89.1 RoBERTa-Finetuned (Liu et al., 2019) 89.6 RoBERTa-Stylo (Kumarage et al., 2023) 91.1 RoBERTa-MPU (Ours) 91.4 Table 3: Experiments on short-text dataset TweepFake (Fagni et al., 2020). randomly delete sentences for TweepFake datasets; rather, we directly apply Multiscale PU loss during training. As shown in Table 3, the experiment result of the proposed MPU is promising: it greatly improves the performance of finetuned RoBERTa, and its performance outcompetes the latest TweepFake baseline RoBERTa-Stylo (Kumarage et al., 2023) that requires an additional module for stylometric feature extraction during finetuning. 4.3 HC3-ENGLISH DETECTION RESULTS Method (F1 scores) HC3-En-Full HC3-En-Sent GLTR (Gehrmann et al., 2019) 96.52 40.19 PPL (Guo et al., 2023) 95.20 62.04 OpenAI (OpenAI, 2023b) 91.00 69.27 DetectGPT (Mitchell et al., 2023) 87.39 63.32 BERT-Finetuned (Devlin et al., 2018) 97.62\u00b10.91 57.65\u00b115.45 RoBERTa-Finetuned (Liu et al., 2019) 97.42\u00b10.92 58.60\u00b110.53 RoBERTa-Stylo (Kumarage et al., 2023) 96.48 81.46 BERT-MPU (Ours) 98.60\u00b10.52 79.76\u00b13.07 RoBERTa-MPU (Ours) 98.40\u00b10.31 85.31\u00b11.80 Table 4: Comparison with English AI-generated text detection baselines on HC3 Guo et al. (2023). Most baselines perform poorly on short texts (i.e. HC3-En-Sent); in contrast, our method improves short-text detection greatly. We also experiment our method on ChatGPT corpora that are much harder to detect. In the ChatGPT text detection experiments, we follow the setting of HC3 (Guo et al., 2023) to test the performance of our method. HC3 (Guo et al., 2023) is a dataset targeted at ChatGPT text detection. All texts are reduced into shorter texts for a sentence-level variant. We apply the MPU framework on the full-scale dataset of HC3 (Guo et al., 2023). Several baseline detectors are chosen to demonstrate the outstanding detection performance of our MPU method. These baselines are open-source and replicable. Among these baselines, GLTR (Gehrmann et al., 2019), PPL (Guo et al., 2023), and DetectGPT (Mitchell et al., 2023) are zero-shot methods that do not require further training: they rely on the likelihood outputs of a pretrained language model. The OpenAI Detector (OpenAI, 2023b) is a RoBERTa detector finetuned on OpenAI\u2019s GPT-2 (Radford et al., 2019) corpora. RoBERTa-Stylo Kumarage et al. (2023) is one of the latest detection baseline targeted for short texts. BERT-Finetuned and RoBERTa-Finetuned are language models plainly finetuned on HC3 (Guo et al., 2023), following the official setting; while BERT-MPU and RoBERTa-MPU are language models trained on HC3 (Guo et al., 2023) via the proposed MPU method. It could be observed from Table 4 that most existing methods perform poorly on short texts. The statistics verify our previous claim that the detection of shorter texts is a difficult problem. Specifically, finetuned BERT and RoBERTa are good at detecting long, full-level texts, but they fail to filter out shorter AI-generated texts. On the contrary, our MPU method could greatly improve short-text performances and boost long AI-generated text detection as well. 
We will further investigate the effect of solitary MPU components in Sec. 4.5. 7 \fPublished as a conference paper at ICLR 2024 Method HC3-Ch-Full HC3-Ch-Sent GLTR (Gehrmann et al., 2019) 87.40 49.94 RoBERTa-Finetuned (Liu et al., 2019) 96.28\u00b13.42 83.07\u00b16.85 RoBERTa-MPU (Ours) 97.42\u00b10.24 89.37\u00b11.94 Table 5: Comparison with Chinese AI-generated text detection baselines. Our method is also proved effective on Chinese corpora. 4.4 HC3-CHINESE DETECTION RESULTS To verify the generality of the proposed MPU method in other languages, we also compare our method with baselines on Chinese AI text detection benchmark HC3-Chinese (Guo et al., 2023). Following Guo et al. (2023), we use chinese-roberta-wwm-ext (Cui et al., 2020) as the pretrained language model. The results are shown in Table 5. Our method could still outcompete other methods by large margins in terms of short-text detection, reaching an F1 score of 89.37 on HC3-Chinese-Sent. 4.5 ABLATIONS Harmful Short Texts. We elaborate in Section 3.1 that short texts could manifest a partially unlabeled property, which impacts the normal training process of the detector. To demonstrate that short texts are indeed harmful for training, we design an experiment based on the HC3-English dataset Guo et al. (2023) as follows: when the detector encounters a short training text during training, the training text is omitted from backward operations. Other settings are identical to Section 4.3. As shown in Table 6, finetuning without short texts demonstrates better performance compared with plain finetuning. This reveals that short sentences are harmful to detector training due to their partially unlabeled properties. Hence, PU frameworks need to be leveraged to address this issue. Method HC3-En-Full HC3-En-Sent Finetuning with all texts 97.42 \u00b1 0.92 58.60 \u00b1 10.53 Finetuning without short sentences 98.19 \u00b1 0.66 62.42 \u00b1 5.60 Table 6: Performance comparison between the detector finetuned with all texts and detector finetuned without short texts. Measures HC3-English HC3-Chinese Text Mul. MPU loss Full Sent Full Sent % % 97.42\u00b10.92 58.60\u00b110.53 96.28\u00b13.42 83.07\u00b16.85 ! % 96.42\u00b12.27 82.76\u00b12.76 95.89\u00b14.18 84.79\u00b15.94 % ! 97.48\u00b12.41 45.30\u00b18.78 96.87\u00b10.89 83.46\u00b15.78 ! ! 98.40\u00b10.31 85.31\u00b11.80 97.42\u00b10.24 89.37\u00b11.94 Table 7: F1 scores of Finetuned RoBERTa on ChatGPT benchmark HC3. \u201cFull\u201d and \u201cSent\u201d stands for model validated on long-text and short-text benchmarks, respectively. Framework Components. We perform ablations on the solitary effects of Text Multiscaling and Multiscale PU loss. From Table 7, it is firm that the addition of Text Multiscaling to training corpus greatly improves performance on sentence-level corpus detection as expected. Unfortunately, the detector\u2019s capability on full corpus decays. This performance drop is attributed to the unreasonable label assignment to short corpus from random sentence deletion: the generated short corpora automatically inherit labels from their full-level predecessors in Text Multiscaling Module, neglecting \u201cunlabeled\u201d properties as introduced in Sec. 3.1. The addition of MPU loss reverses full-level corpus detection performance drop and boosts short-text performance as well. Solitary addition of MPU loss only would have little help for detection performance for lack of short texts. MPU Loss. 
We further investigate MPU loss configurations on ChatGPT text detection benchmark HC3-English (Guo et al., 2023). 8 \fPublished as a conference paper at ICLR 2024 The performance of Multiscale PU loss is evaluated against ordinary PU loss that disregards changes in sentence lengths, as shown in Table 8. Multiscale PU loss is sensitive to training corpora of various lengths and thus is more performant compared with its ordinary counterpart. PU type Full Sent Ordinary 97.05\u00b12.15 83.53\u00b13.14 Multiscale 98.40\u00b10.31 85.31\u00b11.80 Table 8: Performance comparison between ordinary PU loss and the proposed Multiscale PU loss. Introduced in the abstract recurrent detection model (Sec. 3.3), token-wise prior p estimates the probability of a token being highly characteristic as human-spoken. As shown in Table 9, we carefully tune p and found that the best performance is reached at p = 0.2, which is small as we expect. \u03b3 Full Sent p Full Sent psent Full Sent 0 96.42\u00b12.27 82.76\u00b12.76 0.1 96.29\u00b11.31 86.06\u00b11.97 0 97.48\u00b12.41 45.30\u00b18.78 0.2 96.52\u00b10.38 83.94\u00b14.07 0.2 98.40\u00b10.31 85.31\u00b11.80 0.1 97.73\u00b11.42 76.84\u00b17.93 0.4 98.40\u00b10.31 85.31\u00b11.80 0.3 96.81\u00b11.70 84.17\u00b12.78 0.25 98.40\u00b10.31 85.31\u00b11.80 0.6 97.42\u00b10.13 85.78\u00b11.19 0.4 97.44\u00b11.06 82.88\u00b13.32 0.4 97.45\u00b11.34 87.11\u00b11.41 0.8 96.90\u00b11.49 84.54\u00b12.09 Table 9: Ablation experiment results on hyperparameters: loss proportion \u03b3, the estimated probability of a token being clear-human p, and sentence mask probability psent. We also carefully adjust the affine weight hyperparameter for PU loss \u03b3, as shown in Table 9. As the affine weight \u03b3 for PU loss gradually increases, the full-level corpus detection performance reaches the peak at \u03b3 = 0.4 and then drops, while the sentence-level performance reaches its peak at \u03b3 = 0.6. From a comprehensive perspective, the best overall performance is reached at \u03b3 = 0.4 where both performances on full and sentence-level corpus are satisfactory. The climb-and-drop trend reveals that short machine-generated sentences are not completely unlabeled; short-text classification should be viewed as a partial PU problem rather than a complete PU problem. Further, we test the advantage of the non-negative risk estimator in the nnPU loss (Kiryo et al., 2017) against uPU loss (Du Plessis et al., 2014), as introduced in Sec. 3.2. The results are shown in Table 10. Loss type Full Sent Unbiased PU (Du Plessis et al., 2014) 97.90\u00b10.25 84.87\u00b11.28 Non-negative PU (Kiryo et al., 2017) 98.40\u00b10.31 85.31\u00b11.80 Table 10: Performance comparison between ordinary PU loss and the proposed Multiscale PU loss. Text Multiscaling. As introduced in Sec. 3.4, we randomly mask sentences of the training set at probability psent for multiscale text augmentation. We investigate on tuning psent for the optimal value. The statistics are shown in Table 9. When psent is set at 0.25, the test performance on both full and sentence level corpus are satisfactory; when psent is set too high, sentence-level detection performance is enhanced, but full-level performance is negatively impacted because the full-scale training texts are overly damaged. 
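The uPU/nnPU comparison above rests on the standard positive-unlabeled risk estimators. The sketch below shows the generic form of both, with binary cross-entropy as a stand-in surrogate loss; it does not reproduce the Multiscale weighting or the token-wise prior p of the paper's MPU loss, and the default prior value of 0.2 simply mirrors the ablation above.

```python
import torch
import torch.nn.functional as F

def pu_risk(scores: torch.Tensor, is_positive: torch.Tensor,
            prior: float = 0.2, non_negative: bool = True) -> torch.Tensor:
    """Generic (un)biased PU risk on a batch of binary detector scores.

    scores:       raw logits, shape (B,)
    is_positive:  boolean mask of labeled-positive samples, shape (B,)
    prior:        class prior pi_p of positives among the unlabeled data
    non_negative: True -> nnPU (Kiryo et al., 2017), False -> uPU (Du Plessis et al., 2014)
    """
    pos, unl = scores[is_positive], scores[~is_positive]
    bce = lambda s, y: F.binary_cross_entropy_with_logits(s, torch.full_like(s, y))
    risk_p_pos = bce(pos, 1.0) if pos.numel() else scores.new_tensor(0.0)
    risk_p_neg = bce(pos, 0.0) if pos.numel() else scores.new_tensor(0.0)
    risk_u_neg = bce(unl, 0.0) if unl.numel() else scores.new_tensor(0.0)
    neg_term = risk_u_neg - prior * risk_p_neg
    if non_negative:
        # nnPU clamps the possibly-negative term at zero to avoid overfitting.
        neg_term = torch.clamp(neg_term, min=0.0)
    return prior * risk_p_pos + neg_term
```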
5" + } + ], + "Hanting Chen": [ + { + "url": "http://arxiv.org/abs/2305.12972v2", + "title": "VanillaNet: the Power of Minimalism in Deep Learning", + "abstract": "At the heart of foundation models is the philosophy of \"more is different\",\nexemplified by the astonishing success in computer vision and natural language\nprocessing. However, the challenges of optimization and inherent complexity of\ntransformer models call for a paradigm shift towards simplicity. In this study,\nwe introduce VanillaNet, a neural network architecture that embraces elegance\nin design. By avoiding high depth, shortcuts, and intricate operations like\nself-attention, VanillaNet is refreshingly concise yet remarkably powerful.\nEach layer is carefully crafted to be compact and straightforward, with\nnonlinear activation functions pruned after training to restore the original\narchitecture. VanillaNet overcomes the challenges of inherent complexity,\nmaking it ideal for resource-constrained environments. Its easy-to-understand\nand highly simplified architecture opens new possibilities for efficient\ndeployment. Extensive experimentation demonstrates that VanillaNet delivers\nperformance on par with renowned deep neural networks and vision transformers,\nshowcasing the power of minimalism in deep learning. This visionary journey of\nVanillaNet has significant potential to redefine the landscape and challenge\nthe status quo of foundation model, setting a new path for elegant and\neffective model design. Pre-trained models and codes are available at\nhttps://github.com/huawei-noah/VanillaNet and\nhttps://gitee.com/mindspore/models/tree/master/research/cv/vanillanet.", + "authors": "Hanting Chen, Yunhe Wang, Jianyuan Guo, Dacheng Tao", + "published": "2023-05-22", + "updated": "2023-05-23", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Over the past few decades, artificial neural networks have made remarkable progress, driven by the idea that increasing network complexity leads to improved performance. These networks, which consist of numerous layers with a large number of neurons or transformer blocks [43, 31], are capable of performing a variety of human-like tasks, such as face recognition [25], speech recognition [8], object detection [38], natural language processing [43], and content generation [2]. The impressive computational power of modern hardware allows neural networks to complete these tasks with both high accuracy and efficiency. As a result, AI-embedded devices are becoming increasingly prevalent in our lives, including smartphones, AI cameras, voice assistants, and autonomous vehicles. Admittedly, one notable breakthrough in this field is the development of AlexNet [24], which consists of 12 layers and achieves state-of-the-art performance on the large-scale image recognition benchmark [7]. Building on this success, ResNet [18] introduces identity mappings through shortcut connections, enabling the training of deep neural networks with high performance across a wide range of computer vision applications, such as image classification [40], object detection [38], and semantic segmentation [33]. The incorporation of human-designed modules in these models, as well as the continued increase in network complexity, has undeniably enhanced the representational \u2217corresponding author Preprint. Under review. 
arXiv:2305.12972v2 [cs.CV] 23 May 2023 \fFigure 1: The architecture of VanillaNet-6 model, which consists of only 6 convolutional layers, which are very easily to be employed on any modern hardwares. The size of input features are downsampled while the channels are doubled in each stage, which borrows from the design of classical neural networks such as AlexNet [24] and VGGNet [40]. capabilities of deep neural networks, leading to a surge of research on how to train networks with more complex architectures [23, 19, 47] to achieve even higher performance. Apart from convolutional architectures, Dosovitskiy et al.[13] have introduced the transformer architecture to image recognition tasks, demonstrating its potential for leveraging large-scale training data. Zhai et al.[53] investigated the scaling laws of vision transformer architectures, achieving an impressive 90.45% top-1 accuracy on the ImageNet dataset, which indicates that deeper transformer architectures, like convolutional networks, tend to exhibit better performance. Wang et al.[44] further proposed scaling the depth of transformers to 1,000 layers for even higher accuracy. Liu et al.[32] revisited the design space of neural networks and introduced ConvNext, achieving similar performance to state-of-the-art transformer architectures. Although well-optimized deep and complex neural networks achieve satisfying performance, their increasing complexity poses challenges for deployment. For example, shortcut operations in ResNets consume significant off-chip memory traffic as they merge features from different layers [27]. Furthermore, complicated operations such as axial shift in AS-MLP [28] and shift window selfattention in Swin Transformer [31] require sophisticated engineering implementation, including rewriting CUDA codes. These challenges call for a paradigm shift towards simplicity in neural network design. However, the development of ResNet has seemingly led to the abandonment of neural architectures with pure convolutional layers (without extra modules such as shortcuts). This is mainly due to the performance enhancement achieved by adding convolutional layers not meeting expectations. As discussed in [18], plain networks without shortcuts suffer from gradient vanishing, causing a 34-layer plain network to perform worse than an 18-layer one. Moreover, the performance of simpler networks like AlexNet [24] and VGGNet [40] has been largely outpaced by deep and complex networks, such as ResNets [18] and ViT [7]. Consequently, less attention has been paid to the design and optimization of neural networks with simple architectures. Addressing this issue and developing concise models with high performance would be of great value. To this end, we propose VanillaNet, a novel neural network architecture emphasizing the elegance and simplicity of design while retaining remarkable performance in computer vision tasks. VanillaNet achieves this by eschewing excessive depth, shortcuts, and intricate operations such as self-attention, leading to a series of streamlined networks that address the issue of inherent complexity and are well-suited for resource-limited environments. To train our proposed VanillaNets, we conduct a comprehensive analysis of the challenges associated with their simplified architectures and devise a \"deep training\" strategy. This approach starts with several layers containing non-linear activation functions. 
As the training proceeds, we progressively eliminate these non-linear layers, allowing for easy merging while preserving inference speed. To augment the networks\u2019 non-linearity, we put forward an efficient, series-based activation function incorporating multiple learnable affine transfor2 \fmations. Applying these techniques has been demonstrated to significantly boost the performance of less complex neural networks. As illustrated in Figure 3, VanillaNet surpasses contemporary networks with elaborate architectures concerning both efficiency and accuracy, highlighting the potential of a minimalist approach in deep learning. This pioneering investigation of VanillaNet paves the way for a new direction in neural network design, challenging the established norms of foundation models and establishing a new trajectory for refined and effective model creation. 2 A Vanilla Neural Architecture Over the past decades, researchers have reach some consensus in the basic design of neural networks. Most of the state-of-the-art image classification network architectures should consist of three parts: a stem block to transform the input images from 3 channels into multiple channels with downsampling, a main body to learn useful information, a fully connect layer for classification outputs. The main body usually have four stages, where each stage is derived by stacking same blocks. After each stage, the channels of features will expand while the height and width will decrease. Different networks utilize and stack different kinds of blocks to construct deep models. Despite the success of existing deep networks, they utilize large number of complex layers to extract high-level features for the following tasks. For example, the well-known ResNet [18] requires 34 or 50 layers with shortcuts for achieving over 70% top-1 accuracy on ImageNet. The base version of ViT [13] consists of 62 layers since the query, key and value in self-attention require multiple layers to calculate. With the growing of AI chips, the bottleneck of inference speed of neural networks would not be FLOPs or parameters, since modern GPUs can easily do parallel calculation with strong computing power. In contrast, their complex designs and large depths block their speed. To this end, we propose the vanilla network, i.e., VanillaNet, whose architecture is shown in Figure 1. We follow the popular design of neural network with the stem, main body and fully connect layer. Different with existing deep networks, we only employ one layer in each stage to establish a extremely simple network with as few layer as possible. Here we show the architecture of the VanillaNet in details, which takes 6 layers as an example. For the stem, we utilize a 4 \u00d7 4 \u00d7 3 \u00d7 C convolutional layer with stride 4 following the popular settings in [18, 31, 32] to map the images with 3 channels to features with C channels. At stage 1, 2 and 3, a maxpooling layer with stride 2 is used to decrease the size and feature map and the number of channels is increased by 2. At stage 4, we do not increasing the number of channels as it follows an average pooling layer. The last layer is a fully connected layer to output the classification results. The kernel size of each convolutional layer is 1 \u00d7 1, since we aim to use minimal calculation cost for each layer while keep the information of feature maps. The activation function is applied after each 1 \u00d7 1 convolutional layer. 
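A minimal PyTorch sketch of the layout described above follows, with the batch normalization that the next paragraph adds after each layer already included. The channel widths, the plain ReLU standing in for the series activation of Sec. 3.2, and the class and parameter names are illustrative assumptions, not the released configuration.

```python
import torch
import torch.nn as nn

class VanillaNet6Sketch(nn.Module):
    """Rough 6-layer layout of the vanilla network; widths are illustrative."""
    def __init__(self, num_classes: int = 1000, c: int = 128):
        super().__init__()
        def block(cin, cout):
            # A stage is just 1x1 conv -> BN -> activation (plain ReLU stands in
            # for the series activation of Sec. 3.2).
            return nn.Sequential(nn.Conv2d(cin, cout, 1), nn.BatchNorm2d(cout), nn.ReLU())
        self.stem = nn.Sequential(nn.Conv2d(3, c, 4, stride=4), nn.BatchNorm2d(c), nn.ReLU())
        self.stages = nn.ModuleList([block(c, 2 * c), block(2 * c, 4 * c),
                                     block(4 * c, 8 * c), block(8 * c, 8 * c)])
        self.down = nn.MaxPool2d(2)          # stride-2 max pooling after stages 1-3
        self.gap = nn.AdaptiveAvgPool2d(1)   # average pooling after stage 4
        self.head = nn.Linear(8 * c, num_classes)

    def forward(self, x):
        x = self.stem(x)
        for i, stage in enumerate(self.stages):
            x = stage(x)
            if i < 3:
                x = self.down(x)
        return self.head(torch.flatten(self.gap(x), 1))
```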
To ease the training procedure of the network, batch normalization is also added after each layer. For the VanillaNet with different number of layers, we add blocks in each stage, which will be detailed in the supplementary material. It should be noted that the VanillaNet has no shortcut, since we empirically find adding shortcut shows little performance improvement. This also gives another benefit that the proposed architecture is extremely easy to implemented since there are no branch and extra blocks such as squeeze and excitation block [22]. Although the architecture of VanillaNet is simple and relatively shallow, its weak non-linearity caused limit the performance, Therefore, we propose a series of techniques to solve the problem. 3 Training of Vanilla Networks It is common in deep learning to enhance the performance of models by introducing stronger capacity in the training phase [4, 50]. To this end, we propose to utilize a deep training technique to bring up the ability during training in the proposed VanillaNet, since deep network have stronger non-linearity than shallow network. 3.1 Deep Training Strategy The main idea of deep training strategy is to train two convolutional layers with an activation function instead of a single convolution layer in the beginning of training procedure. The activation function 3 \fis gradually reduce to an identity mapping with the increasing number of training epochs. At the end of training, two convolutions can be easily merged into the one convolution to reduce the inference time. This kind of idea is also widely used in CNNs [10, 12, 9, 11]. We then describe how to conduct this technique in detail. For an activation function A(x) (which can be the usual functions such ReLU and Tanh), we combine it with an identity mapping, which can be formulated as: A\u2032(x) = (1 \u2212\u03bb)A(x) + \u03bbx, (1) where \u03bb is a hyper-parameter to balance the non-linearity of the modified activation function A\u2032(x). Denote the current epoch and the number of deep training epochs as e and E, respectively. We set \u03bb = e E . Therefore, at the beginning of training (e = 0), A\u2032(x) = A(x), which means the network have strong non-linearity. When the training converged, we have A\u2032(x) = x, which means the two convolutional layers have no activation functions in the middle. We further demonstrate how to merge these two convolutional layers. We first convert every batch normalization layer and its preceding convolution into a single convolution. We denote W \u2208RCout\u00d7(Cin\u00d7k\u00d7k), B \u2208RCout as the weight and bias matrices of convolutional kernel with Cin input channels, Cout output channels and kernel size k. The scale, shift, mean and variance in batch normalization are represented as \u03b3, \u03b2, \u00b5, \u03c3 \u2208RCout, respectively. The merged weight and bias matrices are: W \u2032 i = \u03b3i \u03c3i Wi, B\u2032 i = (Bi \u2212\u00b5i)\u03b3i \u03c3i + \u03b2i, (2) where subscript i \u2208{1, 2, ..., Cout} denotes the value in i-th output channels. After merging the convolution with batch normalization, we begin to merge the two 1\u00d71 convolutions. 
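Before turning to the 1×1 merging, the two steps introduced so far, namely the decaying activation of Eq. (1) and the convolution-batch-normalization folding of Eq. (2), can be sketched as follows. The helper names are assumptions, and the epsilon inside the square root is the usual numerical-stability constant of batch normalization.

```python
import torch
import torch.nn as nn

class DecayingActivation(nn.Module):
    """A'(x) = (1 - lambda) * A(x) + lambda * x, with lambda = e / E (Eq. 1)."""
    def __init__(self, act: nn.Module = None):
        super().__init__()
        self.act = act if act is not None else nn.ReLU()
        self.lam = 0.0

    def set_epoch(self, epoch: int, total_epochs: int):
        self.lam = epoch / total_epochs

    def forward(self, x):
        return (1.0 - self.lam) * self.act(x) + self.lam * x

@torch.no_grad()
def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold a BatchNorm layer into its preceding convolution (Eq. 2)."""
    std = torch.sqrt(bn.running_var + bn.eps)  # sigma in Eq. (2)
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      conv.stride, conv.padding, bias=True)
    fused.weight.copy_(conv.weight * (bn.weight / std).reshape(-1, 1, 1, 1))
    bias = conv.bias if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.copy_((bias - bn.running_mean) * bn.weight / std + bn.bias)
    return fused
```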
Denote x \u2208RCin\u00d7H\u00d7W and y \u2208RCout\u00d7H\u2032\u00d7W \u2032 as the input and output features, the convolution can be formulated as: y = W \u2217x = W \u00b7 im2col(x) = W \u00b7 X, (3) where \u2217denotes the convolution operation, \u00b7 denotes the matrix multiplication and X \u2208 R(Cin\u00d71\u00d71)\u00d7(H\u2032\u00d7W \u2032) is derived from the im2col operation to transform the input into a matrix corresponding to the kernel shape. Fortunately, for 1 \u00d7 1 convolution, we find that the im2col operation becomes a simple reshape operation since there are no need for sliding kernels with overlap. Therefore, denote the weight matrix of two convolution layers as W 1 and W 2, the two convolution without activation function is formulated as: y = W 1 \u2217(W 2 \u2217x) = W 1 \u00b7 W 2 \u00b7 im2col(x) = (W 1 \u00b7 W 2) \u2217X, (4) Therefore, 1 \u00d7 1 convolution can merged without increasing the inference speed. 3.2 Series Informed Activation Function There have been proposed several different activation functions for deep neural networks, including the most popular Rectified Linear Unit (ReLU) and its variants (PReLU [17], GeLU [20] and Swish [37]). They focus on bring up the performance of deep and complex networks using different activation functions. However, as theoretically proved by the existing works [35, 14, 42], the limited power of simple and shallow network are mainly caused by the poor non-linearity, which is different with deep and complex networks and thus has not been fully investigated. In fact, there are two ways to improve the non-linearity of a neural network: stacking the non-linear activation layers or increase the non-linearity of each activation layer, while the trend of existing networks choose the former one, which results in high latency when the parallel computation ability is excess. One straight forward idea to improve non-linearity of activation layer is stacking. The serially stacking of activation function is the key idea of deep networks. In contrast, we turn to concurrently stacking the activation function. Denote a single activation function for input x in neural network as A(x), which can be the usual functions such ReLU and Tanh. The concurrently stacking of A(x) can be formulated as: As(x) = n X i=1 aiA(x + bi), (5) 4 \fwhere n denotes the number of stacked activation function and ai, bi is the scale and bias of each activation to avoid simple accumulation. The non-linearity of the activation function can be largely enhanced by concurrently stacking. Equation 5 can be regarded as a series in mathematics, which is the operation of adding many quantities. To further enrich the approximation ability of the series, we enable the series based function to learn the global information by varying the inputs from their neighbors, which is similar with BNET [49]. Specifically, given a input feature x \u2208RH\u00d7W \u00d7C, where H, W and C are the number of its width, height and channel, the activation function is formulated as: As(xh,w,c) = X i,j\u2208{\u2212n,n} ai,j,cA(xi+h,j+w,c + bc), (6) where h \u2208{1, 2, ..., H}, w \u2208{1, 2, ..., W} and c \u2208{1, 2, ..., C}. It is easy to see that when n = 0, the series based activation function As(x) degenerates to the plain activation function A(x), which means that the proposed method can be regarded as a general extension of existing activation functions. We use ReLU as the basic activation function to construct our series since it is efficient for inference in GPUs. 
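The series-informed activation of Eq. (6) and the merging of Eq. (4) admit a compact realization: Eq. (6) amounts to a per-channel bias, a ReLU, and a depthwise (2n+1)x(2n+1) convolution whose kernel holds the learnable scales a_{i,j,c}, while Eq. (4) is a single matrix product of the flattened 1×1 kernels. The sketch below is one such reading (bias-free convolutions assumed), not the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeriesActivation(nn.Module):
    """One way to realize Eq. (6): bias -> ReLU -> depthwise (2n+1)x(2n+1) conv."""
    def __init__(self, channels: int, n: int = 3):
        super().__init__()
        self.bias = nn.Parameter(torch.zeros(1, channels, 1, 1))  # b_c
        self.weight = nn.Parameter(                               # a_{i,j,c}
            torch.randn(channels, 1, 2 * n + 1, 2 * n + 1) * 0.01)
        self.n, self.channels = n, channels

    def forward(self, x):
        return F.conv2d(F.relu(x + self.bias), self.weight,
                        padding=self.n, groups=self.channels)

@torch.no_grad()
def merge_two_1x1(first: nn.Conv2d, second: nn.Conv2d) -> nn.Conv2d:
    """Merge second(first(x)) for two bias-free 1x1 convolutions into one (Eq. 4)."""
    w = second.weight.flatten(1) @ first.weight.flatten(1)  # (C_out, C_mid) @ (C_mid, C_in)
    merged = nn.Conv2d(first.in_channels, second.out_channels, 1, bias=False)
    merged.weight.copy_(w.reshape(second.out_channels, first.in_channels, 1, 1))
    return merged
```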
We further analyze the computation complexity of the proposed activation function compared with its corresponding convolutional layer. For a convolutional layer with K kernel size, Cin input channels and Cout output channels, the computational complexity is: O(CONV) = H \u00d7 W \u00d7 Cin \u00d7 Cout \u00d7 k2, (7) while computation cost of its series activation layer is: O(SA) = H \u00d7 W \u00d7 Cin \u00d7 n2. (8) Therefore, we have: O(CONV) O(SA) = H \u00d7 W \u00d7 Cin \u00d7 Cout \u00d7 K2 H \u00d7 W \u00d7 Cin \u00d7 n2 = Cout \u00d7 k2 n2 . (9) Taking the 4th stage in VanillaNet-B as an example, where Cout = 2048, k = 1 and n = 7, the ratio is about 84. In conclusion, the computation cost of the proposed activation function is still much lower than the convolutional layers. More experimental complexity analysis will be shown in the following section. 4 Experiments In this section, we conduct experiments to verify the performance of the proposed VanillaNet on large scale image classification. Ablation study is provided to investigate effectiveness of each component of the proposed VanillaNet. We also visualize the feature of VanillaNet to further study how the proposed network learns from images. 4.1 Ablation Study In this section, we conduct ablation study to investigate the effectiveness of proposed modules, including the series activation function and the deep training technique. Besides, we analyze the influence of adding shortcuts in the proposed VanillaNet. Table 1: Ablation study on the number of series. n FLOPs (B) Latency (ms) Top-1 (%) 0 5.83 1.96 60.53 1 5.86 1.97 74.53 2 5.91 1.99 75.62 3 5.99 2.01 76.36 4 6.10 2.18 76.43 Influence of number of series in activation function. In the above section, we propose a series activation function to enhance the performance of plain activation function and enable global information exchange in feature maps. Table 1 shows the performance of the proposed VanillaNet using different number of n in Equation 6. When n = 0, the activation function degenerate into the plain ReLU activation function. Although the inference speed of this network is higher than using the series activation function, 5 \fthe network can only achieve a 60.53% top-1 accuracy on the ImageNet dataset, which cannot be applied in real-world applications. It proves that the poor non-linearity of activation function results in poor performance of vanilla networks. To this end, we propose the series activation function. When n = 1, the network can achieve a 74.53% accuracy, which is a huge improvement compared with 60.53%. The result demonstrate the effectiveness of the proposed activation function. When the number of n increases, the performance of the network brings up. We find that n = 3 is a good balance in the top-1 accuracy and latency. Therefore, we use n = 3 for the rest experiments. It should be noted that the FLOPs of the proposed activation function is very small compared with the original network, which is the same as the conclusion we derive in Equation 9. Table 2: Ablation study on different networks. Network Deep train. Series act. Top-1 (%) VanillaNet-6 59.58 \u2713 60.53 \u2713 75.23 \u2713 \u2713 76.36 AlexNet 57.52 \u2713 59.09 \u2713 61.12 \u2713 \u2713 63.59 ResNet-50 76.13 \u2713 76.16 \u2713 76.30 \u2713 \u2713 76.27 Influence of deep training. As the VanillaNet is very shallow, we propose to increase the training nonlinearity to bring up its performance. We then analyze the effectiveness of the proposed deep training technique. 
Table 2 shows the results on using deep training technique with VanillaNet-6. As a result, the original VanillaNet achieves a 75.23% top-1 accuracy, which is the baseline. By using the deep training technique, the proposed VanillaNet can achieve a 76.36% accuracy. The results demonstrate that the proposed deep training technique is useful for the shallow network. Moreover, we further apply the deep training and series activation function in other networks to show the generalization ability of the two techniques. Table 2 reports the results of AlexNet and ResNet-50, which are two classical deep neural networks, on the ImageNet dataset. The original AlexNet can only acheive a 57.52% accuracy with 12 layers. By applying the proposed deep training and series activation function, the performance of AlexNet can be largely brought up by about 6%, which demonstrates that the proposed technique is highly effective for shallow networks. When it turns to ResNet-50 whose architecture are relatively complex, the performance gain is little. This results suggests the deep and complex networks already have enough non-linearity without the proposed techniques. Table 3: Ablation on adding shortcuts. Type Top-1 (%) no shortcut 76.36 shortcut before act 75.92 shortcut after act 75.72 Influence of shortcuts. In deep neural networks, a common sense is that adding shortcut can largely ease the training procedure and improve the performance [18]. To this end, we investigate whether shortcut would benefit the performance of shallow and ximple network. We propose to use two kinds of location of shortcut, i.e., shortcut after the activation function and shortcut before the activation function, which are proposed in the original ResNet [18] and PreAct-ResNet [19], respectively. Since the number of channels is large and the original convolution is with kernel size 1 \u00d7 1 in VanillaNet, adding a shortcut (even with 1 \u00d7 1 kernel size) would largely increase the FLOPs. Therefore, we use the parameter-free shortcut. It should be noted that if the stride is 2, the parameter-free shortcut will use an average pooling to decrease the size of feature maps and if the number of channel is increasing, the parameter-free shortcut utilizes padding for the extra channels following the original setting in [18]. Table 3 shows the ablation study on adding shortcuts. We surprisingly find that using shortcuts, in spite of any type of shortcuts, has little improvement on the performance of the proposed VanillaNet. We suggest that the bottleneck of vanilla networks is not the identity mapping, but the weak nonlinearity. The shortcut is useless for bringing up the non-linearity and may decrease non-linearity since the shortcut skips the activation function to decrease the depth of the vanilla network, therefore results in lower performance. 6 \f(a)Mis-classified by ResNet-50-TNR (b)Correctly classified by ResNet-50-TNR (c)Mis-classified by VanillaNet-9 (d)Correctly classified by VanillaNet-9 Figure 2: Visualization of attention maps of the classified samples by ResNet-50 and VanillaNet-9. We show the attention maps of their mis-classified samples and correctly classified samples for comparison. 
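For reference, the parameter-free shortcut used in this ablation (and not kept in the final VanillaNet) can be sketched as below; it follows the ResNet "option A" style identity branch, with average pooling when the stride is 2 and zero-padded channels when the width grows.

```python
import torch
import torch.nn.functional as F

def parameter_free_shortcut(x: torch.Tensor, out_channels: int, stride: int = 1) -> torch.Tensor:
    """Identity branch used in the shortcut ablation (not part of VanillaNet)."""
    if stride > 1:
        # Reduce spatial size without parameters.
        x = F.avg_pool2d(x, kernel_size=stride, stride=stride)
    extra = out_channels - x.shape[1]
    if extra > 0:
        # Zero-pad the missing channels; pad tuple covers (W, H, C) pairs for NCHW.
        x = F.pad(x, (0, 0, 0, 0, 0, extra))
    return x
```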
4.2 Visualization of Attention To have a better understanding of the proposed VanillaNet, we further visualize the features using GradCam++ [3], which utilizes a weighted combination of the positive partial derivatives of the feature maps generated by the last convolutional layer with respect to the specific class to generate a good visual explanation. Figure 2 shows the visualization results for VanillaNet-9 and ResNets-50-TNR [45] with similar performance. The red color denotes that there are high activation in this region while the blue color denotes the weak activation for the predicted class. We can find that these two networks have different attention maps for different samples. It can be easily found that for ResNet-50, the area of active region is smaller. For the VanillaNet with only 9 depth, the active region is much larger than that of deep networks. We suggest that VanillaNet may be strong in extract all relative activations in the input images and thoroughly extract their information by using large number of parameters and FLOPs. In contrast, VanillaNet may be weak on analyzing part of the useful region since the non-linearity is relatively low. 4.3 Comparison with SOTA architectures To illustrate the effectiveness of the proposed method, we conduct experiments on the ImageNet [7] dataset, which consists of 224 \u00d7 224 pixel RGB color images. The ImageNet dataset contains 1.28M training images and 50K validation images with 1000 categories. We utilize strong regularization since the proposed VanillaNet has large number of parameters in each layer to capture useful information from the images with limited non-linearity. We also report the ImageNet Real results where the labels are refined. The latency is tested on Nvidia A100 GPU. We propose architecture for VanillaNet with different number of layers. Table 4 shows the classification results on the ImageNet dataset using different networks. We list the number of parameters, FLOPs, depth, GPU latency and accuracy for comparison. In the past decades, researchers focus on minimize the FLOPs or the latency in ARM/CPU for portable networks since they assume that the computing power in edge devices is limited. As the development of modern AI chips, several mobile devices such as driverless vehicle [26] and robots [15] are required and able to carry multiple GPUs with huge computing power for seeking real-time feedback of external inputs. Therefore, we test the GPU latency with batch size 1, which means that the AI chip has enough computing power to calculate each network. Under this situation, we find that the inference speed has little relationship with the number of FLOPs and parameters. Taking MobileNetV3-Large a an example, though it has a very low FLOPs (0.22B), its GPU latency is 7.83, which is even larger than our VanillaNet-13 with a 11.9B FLOPs. In fact, the inference speed in this setting is highly related to the complexity and number of layers. We can compare the inference speed of ShuffleNetV2x1.5 and ShuffleNetV2x2. In fact, their difference only lies in the number of channels. Therefore, although their number of parameters and FLOPs differs a lot. (0.3B v.s. 0.6B), their inference speed is nearly the same (7.23 and 7.84). We can also find in Table 4 that the straightforward architecture including 7 \fTable 4: Comparison on ImageNet. Latency is tested on Nvidia A100 GPU with batch size of 1. 
Model Params (M) FLOPs (B) Depth Latency (ms) Acc (%) Real Acc (%) MobileNetV3-Small [21] 2.5 0.06 48 6.65 67.67 74.33 MobileNetV3-Large [21] 5.5 0.22 48 7.83 74.04 80.01 ShuffleNetV2x1.5 [39] 3.5 0.30 51 7.23 73.00 80.19 ShuffleNetV2x2 [21] 7.4 0.58 51 7.84 76.23 82.72 RepVGG-A0 [12] 8.1 1.36 23 3.22 72.41 79.33 RepVGG-A1 [12] 12.8 2.37 23 3.24 74.46 81.02 RepVGG-B0 [12] 14.3 3.1 29 3.88 75.14 81.74 RepVGG-B3 [12] 110.9 26.2 29 4.21 80.50 86.44 ViTAE-T [48] 4.8 1.5 67 13.37 75.3 82.9 ViTAE-S [48] 23.6 5.6 116 22.13 82.0 87.0 ViTAEV2-S [55] 19.2 5.7 130 24.53 82.6 87.6 ConvNextV2-A [46] 3.7 0.55 41 6.07 76.2 82.79 ConvNextV2-F [46] 5.2 0.78 41 6.17 78.0 84.08 ConvNextV2-P [46] 9.1 1.37 41 6.29 79.7 85.60 ConvNextV2-N [46] 15.6 2.45 47 6.85 81.2 ConvNextV2-T [46] 28.6 4.47 59 8.40 82.5 ConvNextV2-B [46] 88.7 15.4 113 15.41 84.3 Swin-T [31] 28.3 4.5 48 10.51 81.18 86.64 Swin-S [31] 49.6 8.7 96 20.25 83.21 87.60 ResNet-18-TNR [45] 11.7 1.8 18 3.12 70.6 79.4 ResNet-34-TNR [45] 21.8 3.7 34 5.57 75.5 83.4 ResNet-50-TNR [45] 25.6 4.1 50 7.64 79.8 85.7 VanillaNet-5 15.5 5.2 5 1.61 72.49 79.66 VanillaNet-6 32.5 6.0 6 2.01 76.36 82.86 VanillaNet-7 32.8 6.9 7 2.27 77.98 84.16 VanillaNet-8 37.1 7.7 8 2.56 79.13 85.14 VanillaNet-9 41.4 8.6 9 2.91 79.87 85.66 VanillaNet-10 45.7 9.4 10 3.24 80.57 86.25 VanillaNet-11 50.0 10.3 11 3.59 81.08 86.54 VanillaNet-12 54.3 11.1 12 3.82 81.55 86.81 VanillaNet-13 58.6 11.9 13 4.26 82.05 87.15 VanillaNet-13-1.5\u00d7 127.8 26.5 13 7.83 82.53 87.48 VanillaNet-13-1.5\u00d7\u2020 127.8 48.9 13 9.72 83.11 87.85 ResNet, VGGNet and our VanillaNet without extra branch and complex blocks (e.g., squeeze and excitation block or densely connects) achieves the highest inference speed. To this end, we propose the VanillaNet, which is simple and has few convolutional layers without any branch (even without shortcut). We set different number of layers in VanillaNets to construct a series of networks. As shown in Table 4, the VanillaNet-9 achieves a 79.87% accuracy with only a 2.91ms inference speed in GPU, which is over 50% faster than the ResNet-50 and ConvNextV2-P with similar performance. The surprising result demonstrate the potential of VanillaNet in real-time processing over the existing deep networks. We also scale the number of channels and the pooling size to obtain the proposed VanillaNet-13-1.5\u00d7\u2020, which achieves an 83.11% Top-1 accuracy on ImageNet, which suggests that the proposed vanilla neural network still have power to obtain such a high performance on large scale image classification task. It is suggested that we may not need deep and complex networks on image classification since scaling up VanillaNets can achieve similar performance with deep networks. The Figure 3 shows the depth and inference speed of different architectures. The inference speed with batch size 1 is highly related to the depth of the network instead of the number of parameters, which suggest that simple and shallow networks have huge potential in real-time processing. It can be easily find that the proposed VanillaNet achieve the best speed-accuracy trade-off among all these architectures with low GPU latency, which demonstrates the superiority of the proposed VanillaNet when the computing power is sufficient. 
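The batch-size-1 GPU latency figures above depend on the measurement protocol, so a hedged sketch of a typical procedure (warm-up iterations followed by synchronized timing) is given below; the warm-up and run counts are arbitrary choices, and absolute numbers will differ across hardware, precision, and framework overhead.

```python
import time
import torch

@torch.no_grad()
def gpu_latency_ms(model: torch.nn.Module, input_size=(1, 3, 224, 224),
                   warmup: int = 50, runs: int = 200) -> float:
    """Rough batch-size-1 GPU latency measurement of the kind reported in Table 4."""
    device = torch.device("cuda")
    model = model.eval().to(device)
    x = torch.randn(*input_size, device=device)
    for _ in range(warmup):
        model(x)                    # warm up kernels / cudnn autotuning
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    torch.cuda.synchronize()
    return (time.perf_counter() - start) * 1000.0 / runs
```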
8 \f5 10 20 40 80 160 Depth (in log scale) 70 72 74 76 78 80 82 Accuracy (%) VanillaNet-5 VanillaNet-6 VanillaNet-8 VanillaNet-11 VanillaNet-13-1.5x ResNet-18-TNR ResNet-34-TNR ResNet-50-TNR DenseNet-121 ParNet-S ParNet-M ParNet-L ParNet-XL MobileNetV2 RegNet-04 RegNet-08 MobileNetV3-L VGG11-BN VGG13-BN VGG16-BN RepVGG-A0 RepVGG-A1 RepVGG-B0 RepVGG-B3 ConvNeXtV2-F ConvNeXtV2-P ConvNeXtV2-T Swin-T Swin-S EfficientNet-B0 ViT-L DenseNet-161 ViTAE-T ViTAE-S ViTAEV2-S 2 4 8 16 28 GPU Latency with bs=1 (ms) 72 74 76 78 80 82 Accuracy (%) VanillaNet-5 VanillaNet-6 VanillaNet-8 VanillaNet-10 VanillaNet-13 ResNet-34-TNR ResNet-50-TNR MobileNetV3-L RepVGG-A0 RepVGG-A1 RepVGG-B0 RepVGG-B3 ConvNeXtV2-F ConvNeXtV2-P ConvNeXtV2-N ConvNeXtV2-T Swin-T Swin-S ViTAE-T ViTAE-S ViTAEV2-S VanillaNet-13-1.5x VanillaNet-13-1.5x (a) Accuracy vs. depth (b) Accuracy v.s. inference speed Figure 3: Top-1 Accuracy on ImageNet v.s. inference speed on Nvidia A100 GPU with batch size 1. Size of the circle is related to the depth and parameters of each architecture in (a) and (b), respectively. VanillaNet achieves comparable performance with deep neural networks while with much smaller depth and latency. Table 5: Performance on COCO detection and segmentation. FLOPs are calculated with image size (1280, 800)on Nvidia A100 GPU. Framework Backbone FLOPs Params FPS APb APb 50 APb 75 APm APm 50 APb 75 RetinaNet [29] Swin-T [31] 245G 38.5M 27.5 41.5 62.1 44.2 VanillaNet-13 397G 74.6M 29.8 41.8 62.8 44.3 Mask RCNN [16] Swin-T [31] 267G 47.8M 28.2 42.7 65.2 46.8 39.3 62.2 42.2 VanillaNet-13 421G 76.3M 32.6 42.9 65.5 46.9 39.6 62.5 42.2 4.4 Experiments on COCO To further demonstrate the effectiveness of the proposed VanillaNet on downstream tasks, we conduct evaluation in the COCO dataset [30]. We use RetinaNet [29] and Mask-RCNN [16] as the framework to evaluate the proposed method. FPS is measured on Nvidia A100 GPU. Table 5 shows the performance of the proposed VanillaNet on COCO detection and segmentation. The proposed VanillaNet can successfully achieve similar performance with the ConvNext and the Swin backbone. Although the FLOPs and Parameters of VanillaNet is much higher than Swin and ConvNext, it has much higher FPS, which demonstrates the effectiveness of vanilla architectures on object detection and instance segmentation tasks. 5" + }, + { + "url": "http://arxiv.org/abs/2012.00364v4", + "title": "Pre-Trained Image Processing Transformer", + "abstract": "As the computing power of modern hardware is increasing strongly, pre-trained\ndeep learning models (e.g., BERT, GPT-3) learned on large-scale datasets have\nshown their effectiveness over conventional methods. The big progress is mainly\ncontributed to the representation ability of transformer and its variant\narchitectures. In this paper, we study the low-level computer vision task\n(e.g., denoising, super-resolution and deraining) and develop a new pre-trained\nmodel, namely, image processing transformer (IPT). To maximally excavate the\ncapability of transformer, we present to utilize the well-known ImageNet\nbenchmark for generating a large amount of corrupted image pairs. The IPT model\nis trained on these images with multi-heads and multi-tails. In addition, the\ncontrastive learning is introduced for well adapting to different image\nprocessing tasks. The pre-trained model can therefore efficiently employed on\ndesired task after fine-tuning. With only one pre-trained model, IPT\noutperforms the current state-of-the-art methods on various low-level\nbenchmarks. 
Code is available at https://github.com/huawei-noah/Pretrained-IPT\nand https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/cv/IPT", + "authors": "Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, Wen Gao", + "published": "2020-12-01", + "updated": "2021-11-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "main_content": "Introduction Image processing is one component of the low-level part of a more global image analysis or computer vision system. Results from the image processing can largely in\ufb02uence the subsequent high-level part to perform recognition and understanding of the image data. Recently, deep learning has been widely applied to solve low-level vision tasks, such as image super-resolution, inpainting, deraining and colorization. As many image processing tasks are related, it is nat*Corresponding author 26.6 26.7 26.8 26.9 27 27.1 27.2 27.3 HAN (ECCV 2020) IPT 31.5 31.6 31.7 31.8 31.9 32 32.1 RDN (CVPR 2018) IPT 0.3dB\u2191 0.4dB\u2191 Denoising (30) Denoising (50) Deraining 33.1 33.2 33.3 33.4 33.5 33.6 33.7 33.8 HAN (ECCV 2020) IPT 39 39.5 40 40.5 41 41.5 42 RCDNet (CVPR 2020) IPT 28.9 29 29.1 29.2 29.3 29.4 29.5 29.6 HAN (ECCV 2020) IPT 28.95 29.1 29.25 29.4 29.55 29.7 29.85 RDN (CVPR 2018) IPT SISR x2 SISR x3 SISR x4 0.4dB\u2191 0.4dB\u2191 0.4dB\u2191 1.6dB\u2191 Figure 1. Comparison on the performance of the proposed IPT and the state-of-the-art image processing models on different tasks. ural to expect a model pre-trained on one dataset can be helpful for another. But few studies have generalized pretraining across image processing tasks. Pre-training has the potential to provide an attractive solution to image processing tasks by addressing the following two challenges: First, task-speci\ufb01c data can be limited. This problem is exacerbated in image processing task that involves the paid-for data or data privacy, such as medical images [8] and satellite images [83]. Various inconsistent factors (e.g. camera parameter, illumination and weather) can further perturb the distribution of the captured data for training. Second, it is unknown which type of image processing job will be requested until the test image is presented. We therefore have to prepare a series of image processing modules at hand. They have distinct aims, but some underlying operations could be shared. It is now common to have pre-training in natural language processing and computer vision [12]. For example, the backbones of object detection models [98, 97] are often pre-trained on ImageNet classi\ufb01cation [18]. A numarXiv:2012.00364v4 [cs.CV] 8 Nov 2021 \fber of well-trained networks can now be easily obtained from the Internet, including AlexNet [43], VGGNet [63] and ResNet [34]. The seminal work Transformers [70] have been widely used in many natural language processing (NLP) tasks, such as translation [73] and questionanswering [66]. The secret of its success is to pre-train transformer-based models on a large text corpus and \ufb01netune them on the task-speci\ufb01c dataset. Variants of Transformers, like BERT [19] and GPT-3 [5], further enriched the training data and improved the pre-training skills. There have been interesting attempts on extending the success of Transformers to the computer vision \ufb01eld. For example, Wang et al. [71] and Fu et al. [25] applied the self-attention based models to capture global information on images. Carion et al. 
[7] proposed DERT to use transformer architectures for an end-to-end object detection. Most recently, Dosovitskiy et al. [22] introduced Vision Transformer (ViT) to treat input images as 16\u00d716 words and attained excellent results on image recognition. The aforementioned pre-training in computer vision and natural language mostly investigate a pretest classi\ufb01cation task, but both the input and the output in an image processing task are images. A straightforward application of these existing pre-training strategies might not be feasible. Further, how to effectively address different target image processing tasks in the pre-training stage remains a hard challenge. It is also instructive to note that the pre-training of image processing models enjoys a convenience of selfgenerating training instances based on the original real images. The synthetically manipulated images are taken for training, while the original image itself is the ground-truth to be reconstructed. In this paper, we develop a pre-trained model for image processing using the transformer architecture, namely, Image Processing Transformer (IPT). As the pre-trained model needs to be compatible with different image processing tasks, including super-resolution, denoising, and deraining, the entire network is composed of multiple pairs of head and tail corresponding to different tasks and a single shared body. Since the potential of transformer needs to be excavated using large-scale dataset, we should prepair a great number of images with considerable diversity for training the IPT model. To this end, we select the ImageNet benchmark which contains various high-resolution with 1,000 categories. For each image in the ImageNet, we generate multiple corrupted counterparts using several carefully designed operations to serve different tasks. For example, training samples for the super-resolution task are generated by downsampling original images. The entired dataset we used for training IPT contains about over 10 millions of images. Then, the transformer architecture is trained on the huge dataset as follows. The training images are input to the speci\ufb01c head, and the generated features are cropped into patches (i.e., \u201cwords\u201d) and \ufb02attened to sequences subsequently. The transformer body is employed to process the \ufb02attened features in which position and task embedding are utilized for encoder and decoder, respectively. In addition, tails are forced to predict the original images with different output sizes according to the speci\ufb01c task. Moreover, a contrastive loss on the relationship between patches of different inputs is introduced for well adopting to different image processing tasks. The proposed image processing transformer is learned in an end-to-end manner. Experimental results conducted on several benchmarks show that the pre-trained IPT model can surpass most of existing methods on their own tasks by a signi\ufb01cant enhancement after \ufb01ne-tuning. 2. Related Works 2.1. Image Processing Image processing consists of the manipulation of images, including super-resolution, denoising, dehazing, deraining, debluring, etc. There are a variety of deep-learningbased methods proposed to conduct on one or many kinds of image processing tasks. For the super-resolution, Dong et al. propose SRCNN [20, 21] which are considered as pioneering works introducing end-to-end models that reconstructs HR images from their LR counterparts. Kim et al. 
[41] further explore the capacity of deep neural network with a more deeper convolutional network. Ahn et al. [2] and Lim et al. [50] propose introduce residual block into SR task. Zhang et al. [92] and Anwar and Barnes [3] utilize the power of attention to enhance the performance on SR task. A various excellent works are also proposed for the other tasks, such as denoising [68, 32, 37, 45, 24], dehazing [6, 46, 85, 80], deraining [36, 78, 62, 29, 74, 47], and debluring [67, 53, 23, 10]. Different from above methods, we dig the capacity of both big models and huge volume of data. Then a pre-training model handling several image processing tasks is introduced. 2.2. Transformer Transformer [70] and its variants have proven its success being powerful unsupervised or self-supervised pretraining frameworks in various natural language processing tasks. For example, GPTs [59, 60, 5] are pre-trained in a autoregressive way that predicting next word in huge text datasets. BERT [19] learns from data without explicit supervision and predicts a masking word based on context. Colin et al. [61] proposes a universal pre-training framework for several downstream tasks. Yinhan et al. [52] proposes a robust variant for original BERT. Due to the success of Transformer-based models in the NLP \ufb01eld, there are many attempts to explore the bene\ufb01ts \fReshape Transformer Encoder Multi-head Multi-tail Features Features Flatten features Task embedding \u2026 Denoising Head Deraining Head x2 Up Head x4 Up Head \u2026 x4 Up Tail Denoising Tail Deraining Tail x2 Up Tail \u2026 \u2026 Transformer Decoder Figure 2. The diagram of the proposed image processing transformer (IPT). The IPT model consists of multi-head and multi-tail for different tasks and a shared transformer body including encoder and decoder. The input images are \ufb01rst converted to visual features and then divided into patches as visual words for subsequent processing. The resulting images with high visual quality are reconstructed by ensembling output patches. of Transformer in computer vision tasks. These attempts can be roughly divided into two types. The \ufb01rst is to introduce self-attention into the traditional convolutional neural network. Yuan et al. [82] introduce spatial attention for image segmentation. Fu et al. [26] proposes DANET utilizing the context information by combining spatial and channel attention. Wang et al. [75], Chen et al. [15], Jiang et al. [38] and Zhang et al. [91] also augment features by selfattention to enhance model performance on several highlevel vision tasks. The other type is to replace convolutional neural network with self-attention block. For instance, Kolesnikov et al. [42] and Dosovitskiy [22] conduct image classi\ufb01cation with transformer block. Carion et al. [7] and Zhu et al. [100] implement transformer-based models in detection. Chen et al. [11] proposes a pre-trained GPT model for generative and classi\ufb01cation tasks. Wu et al. [77] and Zhao et al. [96] propose pre-training methods for teansformer-based models for image recognition task. Jiang et al. [39] propose the TransGAN to generate images using Transformer. However, few related works focus on low-level vision tasks. In this paper, we explore a universal pre-training approach for image processing tasks. 3. Image Processing Transformer To excavate the potential use of transformer on image processing tasks for achieving better results, here we present the image processing transformer by pre-training on large-scale dataset. 3.1. 
IPT architecture The overall architecture of our IPT consists of four components: heads for extracting features from the input corrupted images (e.g., images with noise and low-resolution images), an encoder-decoder transformer is established for recovering the missing information in input data, and tails are used formapping the features into restored images. Here we brie\ufb02y introduce our architecture, details can be found in the supplementary material. Heads. To adjust different image processing task, we use a multi-head architecture to deal with each task separately, where each head consists of three convolutional layers. Denote the input image as x \u2208R3\u00d7H\u00d7W (3 means R, G, and B), the head generates a feature map fH \u2208RC\u00d7H\u00d7W with C channels and same height and width (typical we use C = 64). The calculation can be formulated as fH = Hi(x), where Hi (i = {1, . . . , Nt}) denote the head for the ith task and Nt denotes the number of tasks. Transformer encoder. Before input features into the transformer body, we split the given features into patches and each patch is regarded as a \u201dword\u201d. Speci\ufb01cally, the features fH \u2208RC\u00d7H\u00d7W are reshaped into a sequence of patches, i.e., fpi \u2208RP 2\u00d7C, i = {1, . . . , N}, where N = HW P 2 is the number of patches (i.e., the length of sequence) and P is patch size. To maintain the position information of each patch, we add learnable position encodings Epi \u2208RP 2\u00d7C for each patch of feature fpi following [22, 7], and Epi + fpi will be directly input into the transformer encoder. The architecture of encoder layer is \ffollowing the original structure in [70], which has a multihead self-attention module and a feed forward network. The output of encoder fEi \u2208RP 2\u00d7C for each patch has the same size to that of the input patch fpi. The calculation can be formulated as y0 = [Ep1 + fp1, Ep2 + fp2, . . . , EpN + fpN ] , qi = ki = vi = LN(yi\u22121), y\u2032 i = MSA(qi, ki, vi) + yi\u22121, yi = FFN(LN(y\u2032 i)) + y\u2032 i, i = 1, . . . , l [fE1, fE2, . . . , fEN ] = yl, (1) where l denotes the number of layers in the encoder, MSA denotes the multi-head self-attention module in the conventional transformer model [70], LN denotes the layer normalization [4] and FFN denotes the feed forward network, which contains two fully connected layers. Transformer decoder. The decoder also follows the same architecture and takes the output of decoder as input in the transformer body, which consists of two multi-head self-attention (MSA) layers and one feed forward network (FFN). The difference to that of the original transformer here is that we utilize a task-speci\ufb01c embedding as an additional input of the decoder. These task-speci\ufb01c embeddings Ei t \u2208RP 2\u00d7C, i = {1, . . . , Nt} are learned to decode features for different tasks. The calculation of decoder can be formulated as: z0 = [fE1, fE2, . . . , fEN ] , qi = ki = LN(zi\u22121) + Et, vi = LN(zi\u22121), z\u2032 i = MSA(qi, ki, vi) + zi\u22121, q\u2032 i = LN(z\u2032 i) + Et, k\u2032 i = v\u2032 i = LN(z0), z\u2032\u2032 i = MSA(q\u2032 i, k\u2032 i, v\u2032 i) + z\u2032 i, zi = FFN(LN(z\u2032\u2032 i )) + z\u2032\u2032 i , i = 1, . . . , l [fD1, fD2, . . . , fDN ] = yl, (2) where fDi \u2208RP 2\u00d7C denotes the outputs of decoder. The decoded N patched features with size P 2 \u00d7 C are then reshaped into the features fD with size C \u00d7 H \u00d7 W. Tails. 
The properties of tails are same as those of heads, we use multi tails to deal with different tasks. The calculation can be formulated as fT = T i(fD), where T i (i = {1, . . . , Nt}) denote the head for the ith task and Nt denotes the number of tasks. The output fT is the resulted images size of 3 \u00d7 H\u2032 \u00d7 W \u2032 which is determined by the speci\ufb01c task. For example, H\u2032 = 2H, W = 2W for a 2\u00d7 super-resolution task. 3.2. Pre-training on ImageNet Besides the architecture of transformer itself, one of the key factors for successfully training an excellent transformer is that the well use of large-scale datasets. Compared with image classi\ufb01cation, the number of available data used for image processing task is relatively small (e.g., only 2000 images on DIV2K dataset for the image super-resolution task), we propose to utilize the well-known ImageNet as the baseline dataset for pre-training our IPT model, then we generate the entire dataset for several tasks (e.g., superresolution and denosing) as follows. As the images in the ImageNet benchmark are of high diversity, which contains over 1 million of natural images from 1,000 different categories. These images have abundant texture and color information. We \ufb01rst remove the semantic label and manually synthesize a variety of corrupted images from these unlabeled images with a variety of degradation models for different tasks. Note that synthesized dataset is also usually used in these image processing tasks and we use the same degeneration methods as suggested in [31, 1]. For example, super-resolution tasks often take bicubic degradation to generate low-resolution images, denoising tasks add Gaussian noise in clean images with different noise level to generate the noisy images. These synthesized images can signi\ufb01cantly improve the performance of learned deep networks including both CNN and transformer architectures, which will be shown in the experiment part. Basically, the corrupted images are synthesized as: Icorrupted = f(Iclean), (3) where f denotes the degradation transformation, which is depended on the speci\ufb01c task: for the super-resolution task, f sr is exactly the bicubic interpolation; for image denoising, f noise(I) = I + \u03b7, where \u03b7 is the additive Gaussian noise; for deraining, f rain(I) = I +r in which r is a handcrafted rain streak. The loss function for learning our IPT in the supervised fashion can be formulated as: Lsupervised = Nt X i=1 L1(IPT(Ii corrupted), Iclean), (4) where L1 denote the conventional L1 loss for reconstructing desired images and Ii corrupted denote the corrupted image for task i, respectively. In addition, Eq. 4 implies that the proposed framework is trained with multiple image process tasks simultaneously. Speci\ufb01cally, for each batch, we randomly select one task from Nt supervised tasks for training and each task will be processed using the corresponding head, tail and task embedding, simultaneously. After the pre-training the IPT model, it will capture the intrinsic features and transformations for a large variety of image processing tasks thus can be further \ufb01ne-tuned to apply on the desired task using the new provided dataset. Moreover, other heads and tails will be dropped for saving the computation costs and parameters in the remained head, tail and body will be updated according to the back-propagation. However, due to the variety of degradation models, we cannot synthesize images for all image processing tasks. 
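The degradation operators of Eq. (3) can be sketched as follows. The bicubic and Gaussian branches follow the description above, whereas the rain-streak branch is left as a placeholder because the paper relies on a hand-crafted streak generator; value ranges and defaults are assumptions.

```python
import torch
import torch.nn.functional as F

def degrade(clean: torch.Tensor, task: str, scale: int = 2, sigma: float = 30.0) -> torch.Tensor:
    """Synthesize a corrupted counterpart of a clean image batch (Eq. 3).

    clean: (B, 3, H, W) tensor, assumed to hold pixel values in [0, 255].
    """
    if task == "sr":        # bicubic down-sampling for x`scale` super-resolution
        return F.interpolate(clean, scale_factor=1.0 / scale,
                             mode="bicubic", align_corners=False)
    if task == "denoise":   # additive Gaussian noise with level sigma
        return clean + sigma * torch.randn_like(clean)
    if task == "derain":    # placeholder: a hand-crafted rain-streak layer r is added here
        return clean + torch.zeros_like(clean)
    raise ValueError(f"unknown task {task}")
```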
\fFor example, there is a wide range of possible noise levels in practice. Therefore, the generalization ability of the resulting IPT should be further enhanced. Similar to the pre-training natural language processing models, the relationship between patches of images is also informative. The patch in image scenario can be considered as a word in natural language processing. For example, patches cropped from the same feature map are more likely to appear together, which should be embedded into similar positions. Therefore, we introduce contrastive learning [13, 33] for learning universal features so that the pre-trained IPT model can be utilized to unseen tasks. In practice, denote the output patched features generated by IPT decoder for the given input xj as f j Di \u2208RP 2\u00d7C, i = {1, . . . , N}, where xj is selected from a batch of training images X = {x1, x2, . . . , xB}. We aims to minimize the distance between patched features from the same images while maximize the distance between patches from different images. The loss function for contrastive learning is formulated as: l(f j Di1 , f j Di2 ) = \u2212log exp(d(f j Di1 , f j Di2 )) PB k=1 Ik\u0338=jexp(d(f j Di1 , f k Di2 )) , Lconstrastive = 1 BN 2 N X i1=1 N X i2=1 B X j=1 l(f j Di1 , f j Di2 ), (5) where d(a, b) = aT b \u2225a\u2225\u2225b\u2225denotes the cosine similarity. Moreover, to make fully usage of both supervised and selfsupervised information, we reformulate the loss function as: LIP T = \u03bb \u00b7 Lcontrastive + Lsupervised. (6) Wherein, we combine the \u03bb-balanced contrastive loss with the supervised loss as the \ufb01nal objective function of IPT. Thus, the proposed transformer network trained using Eq. 6 can be effectively exploited on various existing image processing tasks. 4. Experiments In this section, we evaluate the performance of the proposed IPT on various image processing tasks including super-resolution and image denoising. We show that the pre-trained IPT model can achieve state-of-the-art performance on these tasks. Moreover, extensive experiments for ablation study show that the transformer-based models perform better than convolutional neural networks when using the large-scale dataset for solving the image processing problem. Datasets. To obtain better pre-trained results of the IPT model, we use the well-known ImageNet dataset, which consists of over 1M color images of high diversity. The training images are cropped into 48 \u00d7 48 patches with 3 channels for training, i.e., there are over 10M patches for training the IPT model. We then generate the corrupted images with 6 types of degradation: 2\u00d7, 3\u00d7, 4\u00d7 bicubic interpolation, 30, 50 noise level Gaussian noise and adding rainstreaks, respectively. For the rain-streak generation, we follow the method described in [79]. During the test, we crop the images in the test set into 48 \u00d7 48 patches with a 10 pixels overlap. Note that the same testing strategy is also adopted for CNN based models for a fair comparison, and the resulting PSNR values of CNN models are the same as that of their baselines. Training & Fine-tuning. We use 32 Nvidia NVIDIA Tesla V100 cards to train our IPT model using the conventional Adam optimizer with \u03b21 = 0.9, \u03b22 = 0.999 for 300 epochs on the modi\ufb01ed ImageNet dataset. The initial learning rate is set as 5e\u22125 and decayed to 2e\u22125 in 200 epoch with 256 batch size. 
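Returning to the self-supervised term, the following is a minimal sketch of the patch-level contrastive loss of Eq. (5) and the $\lambda$-weighted objective of Eq. (6). It assumes the decoder outputs have already been collected into a (B, N, D) tensor of flattened patch features; the tensor layout and the explicit loop over images are illustrative choices rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(decoder_feats):
    """Patch-level contrastive loss of Eq. (5).

    decoder_feats: (B, N, D) with B images, N decoded patches per image and
    D = P^2 * C flattened feature size. Patches from the same image are
    pulled together, patches from different images are pushed apart.
    """
    B, N, _ = decoder_feats.shape
    f = F.normalize(decoder_feats, dim=-1)       # cosine similarity d(a, b)
    # sim[j, i1, k, i2] = d(f^j_{D,i1}, f^k_{D,i2})
    sim = torch.einsum("jid,ked->jike", f, f)
    loss = 0.0
    for j in range(B):
        pos = sim[j, :, j, :]                    # same-image pairs, (N, N)
        other = torch.ones(B, dtype=torch.bool, device=f.device)
        other[j] = False
        neg = sim[j][:, other, :]                # pairs with the other B-1 images
        denom = neg.exp().sum(dim=1)             # sum over k != j, shape (N, N)
        loss = loss - (pos - denom.log()).sum()  # -log softmax form of Eq. (5)
    return loss / (B * N * N)

# Final pre-training objective of Eq. (6):
#   total = lam * contrastive_loss(feats) + supervised_loss(...)
# (the ablation reported below finds lam = 0.1 works best).
```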
Since the training set consists of different tasks, we cannot input all of them in a single batch due to the expensive memory cost. Therefore, we stack a batch of images from a randomly selected task in each iteration. After pre-training on the entire synthesized dataset, we \ufb01ne-tune the IPT model on the desired task (e.g., \u00d73 single image super-resolution) for 30 epochs with a learning rate of 2e\u22125. Note that SRCNN [20] also found that using ImageNet training can bring up the performance of the super-resolution task, while we propose a model \ufb01tting general low-level vision tasks. 4.1. Super-resolution We compare our model with several state-of-the-art CNN-based SR methods. As shown in Table 1, our pretrained IPT outperforms all the other methods and achieves the best performance in \u00d72, \u00d73, \u00d74 scale on all datasets. It is worth to highlight that our model achieves 33.76dB PSNR on the \u00d72 scale Urban100 dataset, which surpasses other methods with more than \u223c0.4dB, while previous SOTA methods can only achieve a <0.2dB improvement compared with others, which indicates the superiority of the proposed model by utilizing large scale pre-training. We further present the visualization results on our model in 4\u00d7 scale on Urban100 dataset. As shown in Figure 3, it is dif\ufb01cult for recover the original high resolution images since lots of information are lost due to the high scaling factor. Previous methods generated blurry images, while the super-resolution images produced by our model can well recover the details from the low-resolution images. 4.2. Denoising Since our pre-trained model can be well adapt to many tasks, we then evaluate the performance of our model on image denoising task. The training and testing data is generated by adding Gaussian noise with \u03c3 = 30, 50 to the clean images. To verify the effectiveness of the proposed method, \fUrban100 (\u00d74): img 004 HR VDSR [41] EDSR [51] RDN [94] OISR [35] SAN [17] RNAN [93] IGNN [99] IPT (ours) Urban100 (4\u00d7):img 012 HR Bicubic VDSR [41] EDSR [51] RDN [94] OISR [35] SAN [17] RNAN [93] IGNN [99] IPT (ours) Urban100 (4\u00d7): img 044 HR Bicubic VDSR [41] EDSR [51] RDN [94] OISR [35] SAN [17] RNAN [93] IGNN [99] IPT (ours) Figure 3. Visual results with bicubic downsampling (\u00d74) from Urban100. The proposed method recovers more details. Compared images are derived from [99]. BSD68: 163085 GT Noisy (\u03c3=50) CBM3D [16] TNRD [14] RDN [94] DnCNN [87] MemNet [65] IRCNN [88] FFDNet [89] IPT (ours) Figure 4. Color image denoising results with noise level \u03c3 = 50. Compared images are derived from [90]. we compare our results with various state-of-the-art models. Table 2 reported the color image denoising results on BSD68 and Urban100 dataset. As a result, our IPT achieves the best results among all denoising methods on different Gaussian noise level. Moreover, we surprisingly found that our model improve the state-of-the-art performance by \u223c0.3dB on the Urban100 dataset, which demonstrate the effectiveness of pre-training and the superiority of our transformer-based model. Figure 4 shows the visualization of the resulted images. As shown in the \ufb01gure, noisy images are hard to be recognized and it is dif\ufb01cult to recover the clean images. Therefore, existing methods fail to reconstruct enough details and generate abnormal pixels. 
As a result, our pre-trained model can well recover several details in the hair of this cat and our visual quality beats all the previous models obviously. 4.3. Deraining For the image deraining task, we evaluate our model on the synthesized Rain100L dataset [79], which consists of 100 rainy images. Quantitative results can be viewed in Table 3. Compared with the state-of-the-art methods, we achieve the best performance (41.62dB) with an 1.62dB improvement. Figure 5 shows the visualization results. Previous methods are failed to reconstruct the original clean images since they lack of image prior. As a result, our IPT model can present exactly the same image as the ground-truth and sur\fInput / Groundtruth 27.37 / 0.8154 DSC 29.34 / 0.8479 GMM 32.38 / 0.9306 JCAS 31.45 / 0.9151 Clear 31.59 / 0.9380 RESCAN 41.26 / 0.9887 PReNet 37.27 / 0.9793 SPANet 35.67 / 0.9700 JORDER_E 41.11 / 0.9894 SIRR 36.99 / 0.9692 RCDNet 42.15 / 0.9912 IPT (ours) 43.91 / 0.9922 Figure 5. Image deraining results on the Rain100L dataset. Compared images are derived from [72]. Table 1. Quantitative results on image super-resolution. Best and second best results are highlighted and underlined. Method Scale Set5 Set14 B100 Urban100 VDSR [41] \u00d72 37.53 33.05 31.90 30.77 EDSR [51] \u00d72 38.11 33.92 32.32 32.93 RCAN [92] \u00d72 38.27 34.12 32.41 33.34 RDN [94] \u00d72 38.24 34.01 32.34 32.89 OISR-RK3 [35] \u00d72 38.21 33.94 32.36 33.03 RNAN [93] \u00d72 38.17 33.87 32.32 32.73 SAN [17] \u00d72 38.31 34.07 32.42 33.10 HAN [55] \u00d72 38.27 34.16 32.41 33.35 IGNN [99] \u00d72 38.24 34.07 32.41 33.23 IPT (ours) \u00d72 38.37 34.43 32.48 33.76 VDSR [41] \u00d73 33.67 29.78 28.83 27.14 EDSR [51] \u00d73 34.65 30.52 29.25 28.80 RCAN [92] \u00d73 34.74 30.65 29.32 29.09 RDN [94] \u00d73 34.71 30.57 29.26 28.80 OISR-RK3 [35] \u00d73 34.72 30.57 29.29 28.95 RNAN [93] \u00d73 34.66 30.52 29.26 28.75 SAN [17] \u00d73 34.75 30.59 29.33 28.93 HAN [55] \u00d73 34.75 30.67 29.32 29.10 IGNN [99] \u00d73 34.72 30.66 29.31 29.03 IPT (ours) \u00d73 34.81 30.85 29.38 29.49 VDSR [41] \u00d74 31.35 28.02 27.29 25.18 EDSR [51] \u00d74 32.46 28.80 27.71 26.64 RCAN [92] \u00d74 32.63 28.87 27.77 26.82 SAN [17] \u00d74 32.64 28.92 27.78 26.79 RDN [94] \u00d74 32.47 28.81 27.72 26.61 OISR-RK3 [35] \u00d74 32.53 28.86 27.75 26.79 RNAN [93] \u00d74 32.49 28.83 27.72 26.61 HAN [55] \u00d74 32.64 28.90 27.80 26.85 IGNN [99] \u00d74 32.57 28.85 27.77 26.84 IPT (ours) \u00d74 32.64 29.01 27.82 27.26 passes all the previous algorithms in visual quality. This result substantiates the generality of the proposed model. Table 2. Quantitative results on color image denoising. Best and second best results are highlighted and underlined. Method BSD68 Urban100 30 50 30 50 CBM3D [16] 29.73 27.38 30.36 27.94 TNRD [14] 27.64 25.96 27.40 25.52 DnCNN [87] 30.40 28.01 30.28 28.16 MemNet [65] 28.39 26.33 28.93 26.53 IRCNN [88] 30.22 27.86 30.28 27.69 FFDNet [89] 30.31 27.96 30.53 28.05 SADNet [9] 30.64 28.32 N/A N/A RDN [95] 30.67 28.31 31.69 29.29 IPT (ours) 30.75 28.39 32.00 29.71 4.4. Generalization Ability Although we can generate various corrupted images, natural images are of high complexity and we cannot synthesize all possible images for pre-training the transformer model. However, a good pre-trained model should have the capacity for well adapting other tasks as those in the \ufb01eld of NLP. To this end, we then conduct several experiments to verify the generalization ability of our model. 
In practice, we test corrupted images that did not include in our synthesized ImageNet dataset, i.e., image denoising with noisy level 10 and 70, respectively. We use the heads and tails for image denoising tasks as the pre-trained model. The detailed results are shown in Table 4, we compare the performance of using the pre-trained IPT model and the state-of-the-art methods for image denoising. Obviously, IPT model outperforms other conventional methods, which \fTable 3. Quantitative results of image deraining on the Rain100L dataset. Best and second best results are highlighted and underlined. Method Input DSC [28] GMM [49] JCAS [31] Clear [27] DDN [28] PSNR 26.90 27.34 29.05 28.54 30.24 32.38 SSIM 0.8384 0.8494 0.8717 0.8524 0.9344 0.9258 RESCAN [48] PReNet [62] JORDER E [79] SPANet [74] SSIR [76] RCDNet [72] IPT (ours) 38.52 37.45 38.59 35.33 32.37 40.00 41.62 0.9812 0.9790 0.9834 0.9694 0.9258 0.9860 0.9880 Table 4. Generation ability of our IPT model on color image denoising with different noise levels. Best and second best results are highlighted and underlined. Method BSD68 Urban100 10 70 10 70 CBM3D [16] 35.91 26.00 36.00 26.31 TNRD [14] 33.36 23.83 33.60 22.63 DnCNN [87] 36.31 26.56 36.21 26.17 MemNet [65] N/A 25.08 N/A 24.96 IRCNN [88] 36.06 N/A 35.81 N/A FFDNet [89] 36.14 26.53 35.77 26.39 RDN [95] 36.47 26.85 36.69 27.63 IPT (ours) 36.53 26.92 36.99 27.90 0.0 0.2 0.4 0.6 0.8 1.0 Percentage of Usage of ImageNet (1.1M Images) 32.8 33.0 33.2 33.4 33.6 33.8 PSNR (dB) IPT EDSR IGNN RDN Figure 6. The performance of CNN and IPT models using different percentages of data. demonstrates that the pre-trained model can capture more useful information and features from the large-scale dataset. 4.5. Ablation Study Impact of data percentage. To evaluate the effectiveness of the transformer architecture, we conduct experiments to analyse the improvement of pre-training on CNNbased model and transformer-based model. We use 20%, 40%, 60%, 80% and 100% percentages of the synthesized ImageNet dataset to analyse the impact on the number of used data for resulting performance. Figure 6 shows the results of different pre-trained models. When the models are not pre-trained or pre-trained with small amount (< 60%) of the entire dataset, the CNN models achieve better performance. In contrast, when using large-scale data, the transformer-based models overwhelming CNN models, which demonstrates that the effectiveness of our IPT model for pre-training. Table 5. Impact of \u03bb for contrastive learning. \u03bb 0 0.05 0.1 0.2 0.5 PSNR 38.27 38.32 38.37 38.33 38.26 Impact of contrastive learning. As discussed above, to improve the representation ability of our pre-trained model, we embed the contrastive learning loss (Eq. 6) into the training procedure. We then evaluate its effectiveness on the \u00d72 scale super-resolution task using the Set4 dataset. Table 5 shows the impact of the hyper-parameter \u03bb for balancing the two terms in Eq. 6. When \u03bb=0, the IPT model is trained using only a supervised learning approach, the resulting PSNR value is 38.27dB. When employing the contrastive loss for self-supervised learning, the model can achieve a 38.37dB PSNR value (\u03bb = 0.1), which is about 0.1dB higher than that of the model trained with \u03bb = 0. These results further demonstrate the effectiveness of the contrastive learning for learning better pre-trained IPT model. 5." 
+ }, + { + "url": "http://arxiv.org/abs/2003.03519v1", + "title": "Distilling portable Generative Adversarial Networks for Image Translation", + "abstract": "Despite Generative Adversarial Networks (GANs) have been widely used in\nvarious image-to-image translation tasks, they can be hardly applied on mobile\ndevices due to their heavy computation and storage cost. Traditional network\ncompression methods focus on visually recognition tasks, but never deal with\ngeneration tasks. Inspired by knowledge distillation, a student generator of\nfewer parameters is trained by inheriting the low-level and high-level\ninformation from the original heavy teacher generator. To promote the\ncapability of student generator, we include a student discriminator to measure\nthe distances between real images, and images generated by student and teacher\ngenerators. An adversarial learning process is therefore established to\noptimize student generator and student discriminator. Qualitative and\nquantitative analysis by conducting experiments on benchmark datasets\ndemonstrate that the proposed method can learn portable generative models with\nstrong performance.", + "authors": "Hanting Chen, Yunhe Wang, Han Shu, Changyuan Wen, Chunjing Xu, Boxin Shi, Chao Xu, Chang Xu", + "published": "2020-03-07", + "updated": "2020-03-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG", + "eess.IV", + "stat.ML" + ], + "main_content": "Introduction Generative Adversarial Networks (GANs) have been successfully applied to a number of image-to-image translation tasks such as image synthesis (Karras et al. 2017), domain translation (Zhu et al. 2017; Isola et al. 2017; Choi et al. 2018; Huang et al. 2018; Lee et al. 2018), image denoising (Chen et al. 2018a) and image super-resolution (Ledig et al. 2017). The success of generative networks relies not only on the careful design of adversarial strategies but also on the growth of the computational capacities of neural networks. Executing most of the widely used GANs requires enormous computational resources, which limits GANs on PCs with modern GPUs. For example, (Zhu et al. 2017) uses a heavy GANs model that needs about 47.19G FLOPs for high \ufb01delity image synthesis. However, many fancy applications of GANs such as style transfer (Li and Wand 2016) and image enhancement (Chen et al. 2018b) are urgently required by portable devices, e.g. mobile phones and cameras. Considering the limited storage and CPU performance of mainstream mobile devices, it is essential to compress and accelerate generative networks. Copyright c \u20dd2020, Association for the Advancement of Arti\ufb01cial Intelligence (www.aaai.org). All rights reserved. Tremendous efforts have been made recently to compress and speed-up heavy deep models. For example, (Gong et al. 2014) utilized vector quantization approach to represent similar weights as cluster centers. (Wang et al. 2018a) introduced versatile \ufb01lters to replace conventional \ufb01lters and achieve high speed-up ratio. (Denton et al. 2014) exploited low-rank decomposition to process the weight matrices of fully-connected layers. (Chen et al. 2015) proposed a hashing based method to encode parameters in CNNs. (Wang et al. 2018c) proposed to packing neural networks in frequency domain. (Han, Mao, and Dally 2015) employed pruning, quantization and Huffman coding to obtain a compact deep CNN with lower computational complexity. (Wang et al. 2017) introduced circulant matrix to learn compact feature map of CNNs. (Courbariaux et al. 
2016; Rastegari et al. 2016) explored neural networks with binary weights, which drastically reduced the memory usage. Although these approaches can provide very high compression and speed-up ratios with slight degradation on performance, most of them are devoted to processing neural networks for image classi\ufb01cation and object detection tasks. Existing neural network compression methods cannot be straightforwardly applied to compress GANs models, because of the following major reasons. First, compared with classi\ufb01cation models, it is more challenging to identify redundant weights in generative networks, as the generator requires a large number of parameters to establish a highdimensional mapping of extremely complex structures (e.g. image-to-image translation (Zhu et al. 2017)). Second, different from visual recognition and detection tasks which usually have ground-truth (e.g. labels and bounding boxes) for the training data, GAN is a generative model that usually does not have speci\ufb01c ground-truth for evaluating the output images, e.g. super-resolution and style transfer. Thus, conventional methods cannot easily excavate redundant weights or \ufb01lters in GANs. Finally, GANs have a more complex framework that consists of a generator and a discriminator and the two networks are simultaneously trained following a minimax two-player game, which is fundamentally different to the training procedure of ordinary deep neural networks for classi\ufb01cation. To this end, it is necessary to develop a speci\ufb01c framework for compressing and acceleratarXiv:2003.03519v1 [cs.CV] 7 Mar 2020 \fStudent Discriminator Teacher Generator (~43.4MB) Student Generator (~2.8MB) L1 Loss Teacher Discriminator Output Input Perceptual Loss Teacher Image Loss + Triplet Loss Distilling Generator Distilling Discriminator Figure 1: The diagram of the proposed framework for learning an ef\ufb01cient generative network by distilling knowledge from the orginal heavy network. Images generated by the student generator will be compared with those generated by the teacher generator through several metrics to fully inherit useful information from the teacher GAN. ing GANs. (Aguinaldo et al. 2019) proposed to minimize the MSE Loss between tracher and student to compress GANs, which only deal with the noise-to-image task, yet most usage of GANs in mobile devices are based on image-to-image translation task. Moreover, they do not distill knowledge to the discriminator, which takes an important part in GANs\u2019 training. In this paper, we proposed a novel framework for learning portable generative networks by utlizing the knowledge distillation scheme. In practice, the teacher generator is utlized for minimizing the pixel-wise and perceptual difference between images generated by student and teacher networks. The discriminator in the student GAN is then optimized by learning the relationship between true samples and generated samples from teacher and student networks. By following a minimax optimization, the student GAN can fully inhert knowledge from the teacher GAN. Extensive experiments conducted on several benchmark datasets and generative models demonstrate that generators learned by the proposed method can achieve a comparable performance with signi\ufb01cantly lower memory usage and computational cost compared to the original heavy networks. Preliminaries To illustrate the proposed method, here we focus on the image-to-image translation problem and take the pix2pix (Isola et al. 
2017) as an example framework. Note that the proposed algorithm does not require special component of image translation and therefore can be easily embedded to any generative adversarial networks. In practice, the image translation problem aims to convert an input image in the source domain X to a output image in the target domain Y (e.g. a semantic label map to an RGB image). The goal of pix2pix is to learn mapping functions between domains X and Y . Denote the training samples in X as {x1, x2, \u00b7 \u00b7 \u00b7 , xn} and the corresponding samples in Y as {y1, y2, \u00b7 \u00b7 \u00b7 , yn}, the generator G is optimized to maps xi to yi (i.e. G : x \u2192y), which cannot be distinguished by the discriminator D. The discriminator is trained to detect the fake images generated by G. The objective of the GAN can be expressed as: LGAN(G, D) =Ex,y[log D(x, y)]+ Ex[log(1 \u2212D(x, G(x)))]. (1) Besides fooling the discriminator, the generator is to generate images which are close to the ground truth output. Therefore, the MSE loss is introduced for G: LMSE(G) = Ex,y[\u2225y \u2212G(x)\u22251]. (2) The entire objective of pix2pix is G\u2217= arg min G max D LGAN(G, D) + \u03bbLMSE(G). (3) To optimize the generator and discriminator in adversarial manner, the training of GAN is following a two-player minimax game. We alternate between optimizing D with \ufb01xed G and optimizing G with \ufb01xed D. With the help of the discriminator and L1 loss in Fcn. (3), the generator can translate images from the source domain to the target domain. \fAlthough GANs have already achieved satisfactory performance on domain translation tasks, the generators are designed to have a large number of parameters to generate images of high-dimensional semantic information, which prevents the applications of these networks in edge devices. Therefore, an effective method to learn portable GANs is urgently required. However, GANs consisting of a generator and a discriminator, has a completely different architecture and training procedures with the vanilla CNN. It is therefore dif\ufb01cult to adopt existing model compression algorithms, which are developed for image recognition tasks, to handle heavy GANs model directly. Moreover, the aim of GANs is to generate images which have complex structures instead of classi\ufb01cation or detection results. Thus, we are motivated to develop a novel framework for compressing generative models. There are a variety of schemes for network compression such as pruning and quantization. However, these methods need special supports for achieving satisfactory compression ratio and speed improvement, which cannot be directly embedded into mobile devices. Besides eliminating redundancy in pre-trained deep models, Knowledge Distillation presents an alternative approach to learn a portable student network with comparable performance and fewer parameters by inheriting knowledge from the teacher network (Hinton, Vinyals, and Dean 2015; Romero et al. 2014; You et al. 2017; Wang et al. 2018b; Heo et al. 2019), i.e. pretrained heavy network. Therefore, we introduce the teacherstudent learning paradigm (i.e. knowledge distillation) to learn portable GANs with fewer parameters and FLOPs. However, the existing teacher-student learning paradigm can only be applied to classi\ufb01cation tasks and needs to be redesigned for the generative models which have no ground truth. 
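For reference, the baseline pix2pix objective of Eqs. (1)-(3) can be written as one alternating update step. The sketch below is a hedged illustration rather than the paper's code: the discriminator is assumed to return logits on (input, image) pairs, the non-saturating BCE form of the adversarial loss is used, and halving the discriminator loss follows the usual pix2pix convention.

```python
import torch
import torch.nn.functional as F

def pix2pix_step(G, D, opt_G, opt_D, x, y, lam=1.0):
    """One alternating update of the objective in Eqs. (1)-(3).

    lam weights the reconstruction term of Eq. (2); the experiments reported
    below set it to 1.
    """
    fake = G(x)

    # Update D: real pairs (x, y) -> 1, fake pairs (x, G(x)) -> 0.
    d_real, d_fake = D(x, y), D(x, fake.detach())
    loss_D = 0.5 * (
        F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
        + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    )
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # Update G: fool D while staying close to the ground truth.
    d_fake = D(x, fake)
    loss_G = (
        F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
        + lam * F.l1_loss(fake, y)
    )
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
    return loss_G.item(), loss_D.item()
```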
Denote GT as the pretrained teacher generator and GS as the portable student generator, a straightforward method, which was proposed in (Aguinaldo et al. 2019), to adopt knowledge distillation to the student generator could be formulated as: LL1(GS) = 1 n n X i=1 \u2225GT (xi) \u2212GS(xi)\u22252 1, (4) where \u2225\u00b7\u22251 is the conventional \u21131-norm. By minimizing Fcn. (4), images resulting from the student generator can be similar with those of the teacher generator in a pixel wise. However, this vanilla approach asking GS to minimize the Euclidean distance between the synthesis images of the teacher and student, which tend to produce blurry results (Isola et al. 2017). This is because that the goal of Euclidean distance is to minimize all averaged plausible outputs. Moreover, GAN consists of a generator and a discriminator. Only considering the generator is not enough. Therefore, it is necessary to advance knowledge distillation to learn ef\ufb01cient generators. Knowledge Distillation for GANs In this section, we propose a novel algorithm to obtain portable GANs utilizing the teacher-student paradigm. To transfer the useful information from the teacher GAN to the student GAN, we introduce loss functions by excavating relationship between samples and features in generators and discriminators. Distilling Generator As mentioned above, the straightforward method of utilizing the knowledge of the teacher generator is to minimize the Euclidean distance between generated images from the teacher and student generators (i.e. Fcn. (4)). However, the solutions of MSE optimization problems often lose highfrequency content, which will result in images with oversmooth textures. Instead of optimizing the pix-wise objective function, (Johnson, Alahi, and Fei-Fei 2016) de\ufb01ne the perceptual loss function based on the 19-th activation layer of the pertrained VGG network (Simonyan and Zisserman 2014). Motivated by this distance measure, we ask the teacher discriminator to assist the student generator to produce highlevel features as the teacher generator. Compared with the VGG network which is trained for image classi\ufb01cation, the discriminator is more relevant to the task of the generator. Therefore, we extract features of images generated by the teacher and student generators using the teacher discriminator and introduce the objective function guided by the teacher discriminator for training GS: Lperc(GS) = 1 n n X i=1 \u2225\u02c6 DT (GT (xi))\u2212\u02c6 DT (GS(xi))\u22252 1, (5) where \u02c6 DT is the \ufb01rst several layers of the discriminator of the teacher network. Since DT has been well trained to discriminate the true and fake samples, it can capture the manifold of the target domain. The above function is more like a \u201csoft target\u201d in knowledge distillation than directly matching the generated images of the teacher and student generators and therefore is more \ufb02exible for transferring knowledge of the teacher generator. In order to learn not only low-level but also high-level information from the teacher generator, we merge the two above loss functions. Therefore, the knowledge distillation function of the proposed method for GS is LG KD(GS) = LL1(GS) + \u03b3Lperc(GS), (6) where \u03b3 is a trade-off parameter to balance the two terms of the objective. Distilling Discriminator Besides the generator, the discriminator also plays an important role in GANs training. It is necessary to distill the student discriminator to assist training of the student generator. 
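Before turning to the discriminator, the generator-side distillation of Eqs. (4)-(6) can be sketched as below. The teacher generator and the truncated teacher discriminator `D_T_feat` (its first several layers, used as a frozen feature extractor) are assumed to be given, and the conditional input to the discriminator is omitted for brevity.

```python
import torch
import torch.nn.functional as F

def generator_distill_loss(G_T, G_S, D_T_feat, x, gamma=1.0):
    """Distillation losses for the student generator, Eqs. (4)-(6)."""
    with torch.no_grad():
        y_t = G_T(x)                    # teacher output, treated as a fixed target
    y_s = G_S(x)

    loss_l1 = F.l1_loss(y_s, y_t)                        # Eq. (4): pixel-level match
    loss_perc = F.l1_loss(D_T_feat(y_s), D_T_feat(y_t))  # Eq. (5): match in D_T features
    return loss_l1 + gamma * loss_perc                   # Eq. (6)
```

During training, this term is added to the ordinary adversarial loss of the student generator, as in the overall objective given below.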
Different from the vanilla knowledge distillation algorithms which directly match the output of the teacher and student network, we introduce a adversarial teacher-student learning paradigm: the student discriminator is trained under the supervision of the teacher network, which will help the training of the student discriminator.Given a well-trained GANs model, images generated by the teacher generator network can mix the spurious with the genuine. The generated images of the teacher generator {G(xi)}n i=1 can be seen as an \fAlgorithm 1 Portable GAN learning via distillation. Require: A given teacher GAN consists of a generator GT and a discriminator DT , the training set X from domain X and Y from domain Y , hyper-parameters for knowledge distillation: \u03b2 and \u03b3. 1: Initialize the student generator GS and the student discriminator DS, where the number of parameters in GS in signi\ufb01cantly fewer than that in GT ; 2: repeat 3: Randomly select a batch of paired samples {xi}n i=1 from X and {yi}n i=1 from Y; 4: Employ GS and GT on the mini-batch: zS i \u2190GS(xi), zT i \u2190GT (xi); 5: Employ DT and DS to compute: DS(zS i ), DS(zT i ), DS(yi), DT (zS i ), DT (zT i ); 6: Calculate the loss function LL1(GS) (Fcn. (4)) and Lprec(GS) (Fcn. (5)) 7: Update weights in GS using back-propagation; 8: Calculate the loss function LGT (DS) (Fcn. (7)) and Ltri(DS) (Fcn. (8)) 9: Update weights in DS according to the gradient; 10: until convergence Ensure: The portable generative model GS. expansion of the target domain Y . Moreover, the ability of the teacher network exceeds that of the student network definitely. Therefore, images from teacher generator can be regarded as real samples for the student discriminator and the loss function for DS can be de\ufb01ned as: LGT (DS) = 1 n n X i=1 DS(GT (xi), True). (7) In the training of traditional GANs, the discriminator aims to classify the real images as the true samples while the fake images as the false samples, and the goal of the generator is to generate images whose outputs in the discriminator is true (i.e. to generate real images). By considering images from teacher generator as real samples, Fcn. (7) allows the student generator to imitate real images as well as the images generated by the teacher network, which makes the training of GS much more easier with abundant data. As mentioned above, we regard the true images and images generated by teacher generator as the same class (i.e. true labels) in DS. The distance between true images and images generated by teacher generator should be smaller than that between true images and the images generated by student generator. It is natural to use triplet loss to address this problem. Triplet loss, proposed by (Balntas et al. 2016), optimizes the black space such that samples with the same identity are closer to each other than those with different identity. It has been widely used in various \ufb01elds of computer vision such as face recognition (Schroff, Kalenichenko, and Philbin 2015) and person-ReID (Cheng et al. 2016). Therefore, we propose the triplet loss for DS: Ltri(DS) = 1 n n X i=1 h \u2225\u02c6 DS(yi) \u2212\u02c6 DS(GT (xi))\u22251 \u2212\u2225\u02c6 DS(yi) \u2212\u02c6 DS(GS(xi))\u22251 + \u03b1 i +, (8) where the \u03b1 is the triplet margin to decide the distance between different classes, [\u00b7]+ = max(\u00b7, 0) and \u02c6 DS is obtained by removing the last layer of the discriminator DS. 
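These two discriminator-side terms, treating teacher outputs as real samples (Eq. (7)) and the margin-based triplet constraint (Eq. (8)), can be sketched as follows. Here `D_S_feat` is assumed to be the student discriminator with its last layer removed, the conditional input is again omitted, and the BCE form of Eq. (7) is one possible reading of the notation $D_S(\cdot, \mathrm{True})$.

```python
import torch
import torch.nn.functional as F

def discriminator_distill_loss(D_S, D_S_feat, G_T, G_S, x, y, alpha=1.0):
    """Distillation losses for the student discriminator, Eqs. (7)-(8)."""
    with torch.no_grad():               # only D_S receives gradients here
        fake_t, fake_s = G_T(x), G_S(x)

    # Eq. (7): teacher images are treated as additional "real" samples.
    logits_t = D_S(fake_t)
    loss_gt = F.binary_cross_entropy_with_logits(
        logits_t, torch.ones_like(logits_t))

    # Eq. (8): real images should lie closer to teacher outputs than to
    # student outputs in D_S's feature space (hinge with margin alpha).
    f_real, f_t, f_s = D_S_feat(y), D_S_feat(fake_t), D_S_feat(fake_s)
    reduce_dims = tuple(range(1, f_real.dim()))
    d_pos = (f_real - f_t).abs().mean(dim=reduce_dims)
    d_neg = (f_real - f_s).abs().mean(dim=reduce_dims)
    loss_tri = F.relu(d_pos - d_neg + alpha).mean()
    return loss_gt, loss_tri

# Together with the generator losses above and the standard GAN loss, these
# terms form the overall objective of Eq. (9), optimized in the usual
# alternating min-max fashion.
```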
The advantage of this formulation is that the discriminator can construct a more speci\ufb01c manifold for the true samples than the traditional loss and then the generator will achieve higher performance with the help of the stronger discriminator. By exploiting knowledge distillation to the student generator and discriminator, we can learn strong and ef\ufb01cient GANs. The overall structure of the proposed method is illstratedillustrated in Fig. (1). Speci\ufb01cally, the objective function for the student GAN can be written as follows: LKD(GS, DS) = LGAN(GS, DS) + \u03b21LL1(GS)+ \u03b31Lperc(GS) + \u03b22LGT (DS) + \u03b32Ltri(DS). (9) where the LGAN denotes the traditional GAN loss for the generator and discriminator while \u03b21, \u03b22, \u03b31 and \u03b32 is the trade-off hyper-parameter to balance different objective. Note that this teacher-student learning paradigm does not require any speci\ufb01c architecture of GAN, and it can be easily adapted to other variants of GANs. Following the optimization of GANs (Goodfellow et al. 2014), DS and GS are trained alternatively. The objective of the proposed method is: G\u2217 S = arg min GS max DS LKD(GS, DS). (10) By optimizing the minimax problem, the student generator can not only work cooperatively with the teacher generator but also compete adversarially with the student discriminator. In conclusion, the procedure is formally presented in Alg. (1). Proposition 1. Denote the teacher generator, the student generator training with the teacher-student learning paradigm and the student generator trained without the guide of teacher as GT , GS and G\u2032 S, the number of parameters in GS and GT as pS and pT , the number of training sample as n. The upper bound of the expected error of GS (R(GS)) is smaller than that of G\u2032 S (R(G\u2032 S)), when n \u2265p4 T p4 S . The proof of Proposition (1) can be found in the supplementary materials. The inequality n \u2265p4 T p4 S can be easily hold for deep learning whose number of training samples is large. For example, in our experiments, the number of parameters of teachers is 2 or 4 times as that of students, where p4 T p4 S = 16 or 256. The number of training samples n is larger than 256 in our experiments (e.g. n \u22483000 in Cityscapes, n \u22482000 in horse to zebra task). \fInput Ground truth Scratch Aguinaldo et.al. Ours Teacher (a)Student GANs with 1/2 channels of the teacher GAN. (b)Student GANs with 1/4 channels of the teacher GAN. Figure 2: Different methods for mapping labels\u2192photos trained on Cityscapes images using pix2pix. Experiments In this section, we evaluated the proposed method on several benchmark datasets with two mainstream generative models on domain translation: CycleGAN and pix2pix. To demonstrate the superiority of the proposed algorithm, we will not only show the generated images for perceptual studies but also exploit the \u201cFCN-score\u201d introduced by (Isola et al. 2017) for the quantitative evaluation. Note that (Aguinaldo et al. 2019) is the same as vanilla distillation in our experiments. We \ufb01rst conducted the semantic label\u2192photo task on Cityscapes dataset (Cordts et al. 2016) using pix2pix, which consists of street scenes from different cities with high quality pixel-level annotations. The dataset is divided into about 3,000 training images, 500 validation images and about 1,500 test images, which are all paired data. We followed the settings in (Isola et al. 2017) to use Unet (Ronneberger, Fischer, and Brox 2015) as the generator. 
The hyper-parameter \u03bb in Fcn. (3) is set to 1. For the discriminator networks, we use 70 \u00d7 70 PatchGANs, whose goal is to classify 70 \u00d7 70 image patches instead of the whole image. When optimizing the networks, the objective value is divided by 2 while optimizing D. The networks are trained for 200 epochs using the Adam solver with the learning rate of 0.0002. When testing the GANs, the generator was run in the same manner as training but without dropout. To demonstrate the effectiveness of the proposed method, we used the U-net whose number of channels are 64 as the teacher network. We evaluated two different sizes of the student generator to have omnibearing results of the proposed method: the student generators with half channels of the teacher generator andwith 1/4 channels. The student generator has half of the \ufb01lters of the teacher. Since the discriminator is not required at inference time, we kept the structure of the student discriminator same as that of the teacher discriminator. We studied the performance of different generators: the teacher generator, the student generator trained from scratch, the student generator optimized using vanilla distillation (i.e. Fcn. (4)), and the student generator trained utilizing the proposed method. Fig. (2) shows the qualitative results of these variants on the labels\u2192photos task. The teacher generator achieved satisfactory results yet required enormous parameters and computational resources. The student generator, although has fewer FLOPs and parameters, generated simple images with repeated patches, which look fake. Using vanilla distillation to minimize the \u21131-norm improved the performance of the student generator, but causes blurry results. The images generated by the proposed method are much sharper and look realistic, which demonstrated that the proposed method can learn portable generative model with high quality. Quantitative Evaluation Besides the qualitative experiments, we also conducted quantitative evaluation of the proposed method. Evaluating the quality of images generated by GANs is a dif\ufb01cult problem. Naive metrics such as \u21131norm error cannot evaluate the visual quality of the images. To this end, we used the metrics following (Isola et al. 2017), i.e. the \u201cFCN-score\u201d, which uses a pretrained semantic segmentation model to classify the synthesized images as a pseudo metric. The intuition is that if the generated images have the same manifold structure as the true images, the segmentation model which trained on true samples would achieve comparable performance. Therefore, we adopt the pretrained FCN-8s (Long, Shelhamer, and Darrell \fTable 1: FCN-scores for different methods on Cityscapes dataset using pix2pix. Algorithm FLOPs Parameters Per-pixel acc. Per-class acc. Class IOU Teacher \u223c18.15G \u223c54.41M 52.17 12.39 8.20 Student from scratch \u223c4.65G \u223c13.61M 51.62 12.10 7.93 (Aguinaldo et al. 2019) 50.42 12.30 8.00 Student(Ours) 52.22 12.37 8.11 Student from scratch \u223c1.22G \u223c3.4M 50.80 11.86 7.95 (Aguinaldo et al. 2019) 50.32 11.98 7.96 Student(Ours) 51.57 11.98 8.06 Ground truth 80.42 26.72 21.13 Input Student (Scratch) Aguinaldo et.al. Student (Ours) Teacher Figure 3: Different methods for mapping horse\u2192zebra trained on ImageNet images using CycleGAN. 2015) model on cityscapes dataset to the generated images. The results included per-pixel accuracy, per-class accuracy and mean class IOU. Tab. 
(1) reported the quantitative results of different methods. The teacher GAN achieved high performance. However, the huge FLOPs and heavy parameters of this generator prevent its application on real-world edge devices. Therefore, we conducted a portable GANs model of fewer parameters by removing half of the \ufb01lters in the teacher generator. Reasonably, the student generator trained from scratch suffered degradation on all the three FCN-scores. To maintain the performance of the generator, we minimized the Euclidean distance between the images generated by the teacher network and the student network, which is shown as vanilla distillation in Tab. (1). However, the vanilla distillation performed worse than the student generator trained from scratch, which suggests the MSE loss cannot be directly used in GAN. The proposed method utilized not only low-level but also high-level information of the teacher network and achieved a 52.22% per-pixel accuracy, which was even higher than that of the teacher generator. Ablation Study We have evaluated and veri\ufb01ed the effectiveness of the proposed method for learning portable GANs Table 2: FCN-scores for different losses on Cityscapes dataset. Loss Per-pixel acc. Per-class acc. IOU baseline 51.62 12.10 7.93 Lperc 51.22 12.20 8.01 LL1 + Lperc 51.82 12.32 8.06 LGT 51.66 12.12 8.05 LGT + Ltri 52.05 12.15 8.08 LL1 + Lperc 52.22 12.37 8.11 +LGT + Ltri qualitatively and quantitatively. Since there are a number of components in the proposed approach, we further conducted ablation experiments for an explicit understanding. The settings are the same as the above experiments. The loss functions of the proposed method can be divided into two parts Ltotal(GS) and Ltotal(DS), i.e. the objective functions of the generator and the discriminator. We \ufb01rst evaluated the two objectives separately. As shown in Tab. (2), the generator using LL1 loss performed better than the baseline student which was trained from scratch. By com\fInput Student (Scratch) Aguinaldo et.al. Student (Ours) Teacher Figure 4: Different methods for mapping summer\u2192winter using CycleGAN. bining the perceptual loss, the student generator can learn high-level semantic information from the teacher network and achieved higher score. For the discriminator, applying the images generated from the teacher network can make the student discriminator learn a better black of the target domain. Moreover, the triplet loss can further improve the performance of the student GAN. Finally, by exploiting all the proposed loss functions, the student network achieved the highest score. The results of the ablation study demonstrate the effectiveness of the components in the proposed objective functions. Generalization Ability In the above experiments, we have veri\ufb01ed the performance of the proposed method on paired image-to-image translation by using pix2pix. In order to illustrate the generalization ability of the proposed algorithm, we further apply it on unpaired image-to-to image translation, which is more complex than paired translation, using CycleGAN (Zhu et al. 2017). We evaluate two datasets for CycleGAN: horse\u2192zebra and label\u2192photo. For the teacher-student learning paradigm, the structure of the teacher generator was followed (Zhu et al. 2017). Note that CycleGAN has two generators to translate from domain X to Y and Y to X, the number of \ufb01lters of all the two student generators was set to half or quarter of that of the teacher generator. 
We use the same discriminator for the teacher and student network. Fig. 3 presented the images generated by different methods on the horse\u2192zebra task. Since the task is not very hard, we use an extremely portable student generators, which have only 1/4 channels of the teacher generator. The teacher generator has about 11.38M parameters and 47.19G FLOPs while the student generator has only about 715.65K parameters and 3.19G FLOPs. The images generated by the teacher network performed well while the student network trained from the scratch resulted in poor performance. The student network utilizing vanilla distillation achieved better performance, but the images were blurry. By using the proposed method, the student network learned abundant information from the teacher network and generated images better than other methods with the same architecture. The proposed method achieved comparable performance with the teacher network but with fewer parameters, which demonstrates the effectiveness of the proposed algorithm. We also conduct the experiments to translate summer to winter. The student generator trained using the proposed algorithm achieved similar performance with the teacher network but with only about 1/16 parameters. Therefore, the proposed method can learn from the teacher network effectively and generate images, which mix the spurious with the genuine, with relatively few parameters." + }, + { + "url": "http://arxiv.org/abs/1912.13200v6", + "title": "AdderNet: Do We Really Need Multiplications in Deep Learning?", + "abstract": "Compared with cheap addition operation, multiplication operation is of much\nhigher computation complexity. The widely-used convolutions in deep neural\nnetworks are exactly cross-correlation to measure the similarity between input\nfeature and convolution filters, which involves massive multiplications between\nfloat values. In this paper, we present adder networks (AdderNets) to trade\nthese massive multiplications in deep neural networks, especially convolutional\nneural networks (CNNs), for much cheaper additions to reduce computation costs.\nIn AdderNets, we take the $\\ell_1$-norm distance between filters and input\nfeature as the output response. The influence of this new similarity measure on\nthe optimization of neural network have been thoroughly analyzed. To achieve a\nbetter performance, we develop a special back-propagation approach for\nAdderNets by investigating the full-precision gradient. We then propose an\nadaptive learning rate strategy to enhance the training procedure of AdderNets\naccording to the magnitude of each neuron's gradient. As a result, the proposed\nAdderNets can achieve 74.9% Top-1 accuracy 91.7% Top-5 accuracy using ResNet-50\non the ImageNet dataset without any multiplication in convolution layer. The\ncodes are publicly available at: https://github.com/huaweinoah/AdderNet.", + "authors": "Hanting Chen, Yunhe Wang, Chunjing Xu, Boxin Shi, Chao Xu, Qi Tian, Chang Xu", + "published": "2019-12-31", + "updated": "2021-07-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Given the advent of Graphics Processing Units (GPUs), deep convolutional neural networks (CNNs) with billions of \ufb02oating number multiplications could receive speed-ups and make important strides in a large variety of computer vision tasks, e.g. image classi\ufb01cation [26, 17], object detection [23], segmentation [19], and human face veri\ufb01ca*Equal contribution \u2020Corresponding author tion [30]. 
However, the high-power consumption of these high-end GPU cards (e.g. 250W+ for GeForce RTX 2080 Ti) has blocked modern deep learning systems from being deployed on mobile devices, e.g. smart phone, camera, and watch. Existing GPU cards are far from svelte and cannot be easily mounted on mobile devices. Though the GPU itself only takes up a small part of the card, we need many other hardware for supports, e.g. memory chips, power circuitry, voltage regulators and other controller chips. It is therefore necessary to study ef\ufb01cient deep neural networks that can run with affordable computation resources on mobile devices. Addition, subtraction, multiplication and division are the four most basic operations in mathematics. It is widely known that multiplication is slower than addition, but most of the computations in deep neural networks are multiplications between \ufb02oat-valued weights and \ufb02oat-valued activations during the forward inference. There are thus many papers on how to trade multiplications for additions, to speed up deep learning. The seminal work [5] proposed BinaryConnect to force the network weights to be binary (e.g.-1 or 1), so that many multiply-accumulate operations can be replaced by simple accumulations. After that, Hubara et al. [15] proposed BNNs, which binarized not only weights but also activations in convolutional neural networks at runtime. Moreover, Rastegari et al. [22] introduced scale factors to approximate convolutions using binary operations and outperform [15, 22] by large margins. Zhou et al. [38] utilized low bit-width gradient to accelerate the training of binarized networks. Cai et al. [4] proposed an halfwave Gaussian quantizer for forward approximation, which achieved much closer performance to full precision networks. Though binarizing \ufb01lters of deep neural networks signi\ufb01cantly reduces the computation cost, the original recognition accuracy often cannot be preserved. In addition, the training procedure of binary networks is not stable and usually requests a slower convergence speed with a small 1 arXiv:1912.13200v6 [cs.CV] 1 Jul 2021 \f(a) Visualization of features in AdderNets (b) Visualization of features in CNNs Figure 1. Visualization of features in AdderNets and CNNs. Features of CNNs in different classes are divided by their angles. In contrast, features of AdderNets tend to be clustered towards different class centers, since AdderNets use the \u21131-norm to distinguish different classes. The visualization results suggest that \u21131-distance can served as a similarity measure the distance between the \ufb01lter and the input feature in deep neural networks learning rate. Convolutions in classical CNNs are actually cross-correlation to measure the similarity of two inputs. Researchers and developers are used to taking convolution as a default operation to extract features from visual data, and introduce various methods to accelerate the convolution, even if there is a risk of sacri\ufb01cing network capability. But there is hardly no attempt to replace convolution with another more ef\ufb01cient similarity measure that is better to only involve additions. In fact, additions are of much lower computational complexities than multiplications. Thus, we are motivated to investigate the feasibility of replacing multiplications by additions in convolutional neural networks. In this paper, we propose adder networks that maximize the use of addition while abandoning convolution operations. 
Given a series of small template as \u201c\ufb01lters\u201d in the neural network, \u21131-distance could be an ef\ufb01cient measure to summarize absolute differences between the input single and the template as shown in Figure 1. Since subtraction can be easily implemented through addition by using its complement code, \u21131-distance could be a hardware-friendly measure that only has additions, and naturally becomes an ef\ufb01cient alternative of the convolution to construct neural networks. An improved back-propagation scheme with regularized gradients is designed to ensure suf\ufb01cient updates of the templates and a better network convergence. The proposed AdderNets are deployed on several benchmarks, and experimental results demonstrate AdderNets\u2019 advantages in accelerating inference of deep neural networks while preserving comparable recognition accuracy to conventional CNNs. This paper is organized as follows. Section 2 investigates related works on network compression. Section 3 proposes Adder Networks which replace the multiplication in the conventional convolution \ufb01lters with addition. Section 4 evaluates the proposed AdderNets on various benchmark datasets and models and Section 5 concludes this paper. 2. Related works To reduce the computational complexity of convolutional neural networks, a number of works have been proposed for eliminating useless calculations. Pruning based methods aims to remove redundant weights to compress and accelerate the original network. Denton et al. [6] decomposed weight matrices of fullyconnected layers into simple calculations by exploiting singular value decomposition (SVD). Han et al. [8] proposed discarding subtle weights in pre-trained deep networks to omit their original calculations without affecting the performance. Wang et al. [29] further converted convolution \ufb01lters into the DCT frequency domain and eliminated more \ufb02oating number multiplications. In addition, Hu et al. [13] discarded redundant \ufb01lters with less impacts to directly reduce the computations brought by these \ufb01lters. Luo et al. [21] discarded redundant \ufb01lters according to the reconstruction error. He et al. [10] utilized a LASSO regression to select important channels by solving least square reconstruction. Zhuang et al. [39] introduce additional losses to consider the discriminative power of channels and selected the most discriminative channels for the portable network. Instead of directly reducing the computational complexity of a pre-trained heavy neural network, lot of works focused on designing novel blocks or operations to replace the conventional convolution \ufb01lters. Iandola et al. [16] introduced a bottleneck architecture to largely decrease the computation cost of CNNs. Howard et al. [12] designed \fMobileNet, which decompose the conventional convolution \ufb01lters into the point-wise and depth-wise convolution \ufb01lters with much fewer FLOPs. Zhang et al. [36] combined group convolutions [33] and a channel shuf\ufb02e operation to build ef\ufb01cient neural networks with fewer computations. Hu et al. [14] proposed the squeeze and excitation block, which focuses on the relationship of channels by modeling interdependencies between channels, to improve the performance at slight additional computational cost. Wu et al. [32] presented a parameter-free \u201cshift\u201d operation with zero \ufb02op and zero parameter to replace conventional \ufb01lters and largely reduce the computational and storage cost of CNNs. 
Zhong et al. [37] further pushed the shift-based primitives into channel shift, address shift and shortcut shift to reduce the inference time on GPU while keep the performance. Wang et al. [28] developed versatile convolution \ufb01lters to generate more useful features utilizing fewer calculations and parameters. Besides eliminating redundant weights or \ufb01lters in deep convolutional neural networks, Hinton et al. [11] proposed the knowledge distillation (KD) scheme, which transfer useful information from a heavy teacher network to a portable student network by minimizing the KullbackLeibler divergence between their outputs. Besides mimic the \ufb01nal outputs of the teacher networks, Romero et al. [25] exploit the hint layer to distill the information in features of the teacher network to the student network. You et al. [35] utilized multiple teachers to guide the training of the student network and achieve better performance. Yim et al. [34] regarded the relationship between features from two layers in the teacher network as a novel knowledge and introduced the FSP (Flow of Solution Procedure) matrix to transfer this kind of information to the student network. Nevertheless, the compressed networks using these algorithms still contain massive multiplications, which costs enormous computation resources. As a result, subtractions or additions are of much lower computational complexities when compared with multiplications. However, they have not been widely investigated in deep neural networks, especially in the widely used convolutional networks. Therefore, we propose to minimize the numbers of multiplications in deep neural networks by replacing them with subtractions or additions. 3. Networks without Multiplication Consider a \ufb01lter F \u2208Rd\u00d7d\u00d7cin\u00d7cout in an intermediate layer of the deep neural network, where kernel size is d, input channel is cin and output channel is cout. The input feature is de\ufb01ned as X \u2208RH\u00d7W \u00d7cin, where H and W are the height and width of the feature, respectively. The output feature Y indicates the similarity between the \ufb01lter and the input feature, Y (m, n, t) = d X i=0 d X j=0 cin X k=0 S \u0000X(m + i, n + j, k), F(i, j, k, t) \u0001 , (1) where S(\u00b7, \u00b7) is a pre-de\ufb01ned similarity measure. If crosscorrelation is taken as the metric of distance, i.e. S(x, y) = x \u00d7 y, Eq. (1) becomes the convolution operation. Eq. (1) can also implies the calculation of a fully-connected layer when d = 1. In fact, there are many other metrics to measure the distance between the \ufb01lter and the input feature. However, most of these metrics involve multiplications, which bring in more computational cost than additions. 3.1. Adder Networks We are therefore interested in deploying distance metrics that maximize the use of additions. \u21131 distance calculates the sum of the absolute differences of two points\u2019 vector representations, which contains no multiplication. Hence, by calculating \u21131 distance between the \ufb01lter and the input feature, Eq. (1) can be reformulated as Y (m, n, t) = \u2212 d X i=0 d X j=0 cin X k=0 |X(m + i, n + j, k) \u2212F(i, j, k, t)|. (2) Addition is the major operation in \u21131 distance measure, since subtraction can be easily reduced to addition by using complement code. With the help of \u21131 distance, similarity between the \ufb01lters and features can be ef\ufb01ciently computed. Although both \u21131 distance (Eq. (2) and cross-correlation in Eq. 
(1) can measure the similarity between \ufb01lters and inputs, there are some differences in their outputs. The output of a convolution \ufb01lter, as a weighted summation of values in the input feature map, can be positive or negative, but the output of an adder \ufb01lter is always negative. Hence, we resort to batch normalization for help, and the output of adder layers will be normalized to an appropriate range and all the activation functions used in conventional CNNs can then be used in the proposed AdderNets. Although the batch normalization layer involves multiplications, its computational cost is signi\ufb01cantly lower than that of the convolutional layers and can be omitted. Considering a convolutional layer with a \ufb01lter F \u2208Rd\u00d7d\u00d7cin\u00d7cout, an input X \u2208 RH\u00d7W \u00d7cin and an output Y \u2208RH\u2032\u00d7W \u2032\u00d7cout, the computation complexity of convolution and batch normalization is O(d2cincoutHW) and O(coutH\u2032W \u2032), respectively. In practice, given an input channel number cin = 512 and a kernel size d = 3 in ResNet [9], we have d2cincoutHW coutH\u2032W \u2032 \u2248 4068. Since batch normalization layer has been widely used in the state-of-the-art convolutional neural networks, we can simply upgrade these networks into AddNets by replacing their convolutional layers into adder layers to speed up the inference and reduces the energy cost. \fIntuitively, Eq. (1) has a connection with template matching [3] in computer vision, which aims to \ufb01nd the parts of an image that match the template. F in Eq. (1) actually works as a template, and we calculate its matching scores with different regions of the input feature X. Since various metrics can be utilized in template matching, it is natural that \u21131 distance can be utilized to replace the crosscorrelation in Eq. (1). 3.2. Optimization Neural networks utilize back-propagation to compute the gradients of \ufb01lters and stochastic gradient descent to update the parameters. In CNNs, the partial derivative of output features Y with respect to the \ufb01lters F is calculated as: \u2202Y (m, n, t) \u2202F(i, j, k, t) = X(m + i, n + j, k), (3) where i \u2208[m, m + d] and j \u2208[n, n + d]. To achieve a better update of the parameters, it is necessary to derive informative gradients for SGD. In AdderNets, the partial derivative of Y with respect to the \ufb01lters F is: \u2202Y (m, n, t) \u2202F(i, j, k, t) = sgn(X(m+i, n+j, k)\u2212F(i, j, k, t)), (4) where sgn(\u00b7) denotes the sign function and the value of the gradient can only take +1, 0, or -1. Considering the derivative of \u21132-norm \u2202Y (m, n, t) \u2202F(i, j, k, t) = X(m + i, n + j, k) \u2212F(i, j, k, t), (5) Eq. (4) can therefore lead to a signSGD [2] update of \u21132norm. However, signSGD almost never takes the direction of steepest descent and the direction only gets worse as dimensionality grows [1]. It is unsuitable to optimize the neural networks of a huge number of parameters using signSGD. Therefore, we propose using Eq. (5) to update the gradients in our AdderNets. The convergence of taking these two kinds of gradient will be further investigated in the supplementary material. Therefore, by utilizing the full-precision gradient, the \ufb01lters can be updated precisely. Besides the gradient of the \ufb01lters, the gradient of the input features X is also important for the update of parameters. Therefore, we also use the full-precision gradient (Eq. (5)) to calculate the gradient of X. 
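A minimal PyTorch sketch of such an adder layer is given below: the forward pass computes the negative ℓ1 distance of Eq. (2) via unfold, and a custom backward applies the full-precision gradient of Eq. (5) to the filters, with the gradient with respect to the input clipped to [-1, 1] as motivated in the next paragraph. The class names and the dense unfold-based implementation are illustrative only (it materializes a large intermediate tensor, which the released CUDA kernels avoid), and a batch normalization layer is assumed to follow, as described above.

```python
import torch
import torch.nn as nn

class AdderFunction(torch.autograd.Function):
    """ell_1 'convolution' of Eq. (2) with the full-precision backward of Eq. (5)."""

    @staticmethod
    def forward(ctx, x_unf, w_flat):
        # x_unf: (B, L, K) unfolded patches, w_flat: (C_out, K) flattened filters
        ctx.save_for_backward(x_unf, w_flat)
        return -(x_unf.unsqueeze(2) - w_flat).abs().sum(dim=3)   # (B, L, C_out)

    @staticmethod
    def backward(ctx, grad_out):                    # grad_out: (B, L, C_out)
        x_unf, w_flat = ctx.saved_tensors
        diff = x_unf.unsqueeze(2) - w_flat          # X - F, shape (B, L, C_out, K)
        grad_w = (grad_out.unsqueeze(3) * diff).sum(dim=(0, 1))             # Eq. (5)
        grad_x = (grad_out.unsqueeze(3) * (-diff).clamp(-1, 1)).sum(dim=2)  # clipped
        return grad_x, grad_w

class Adder2d(nn.Module):
    """Drop-in replacement for nn.Conv2d that measures ell_1 similarity."""

    def __init__(self, c_in, c_out, k, stride=1, padding=0):
        super().__init__()
        self.k, self.stride, self.padding = k, stride, padding
        self.weight = nn.Parameter(0.01 * torch.randn(c_out, c_in, k, k))

    def forward(self, x):
        b, _, h, w = x.shape
        x_unf = nn.functional.unfold(
            x, self.k, stride=self.stride, padding=self.padding).transpose(1, 2)
        out = AdderFunction.apply(x_unf, self.weight.flatten(1))
        h_out = (h + 2 * self.padding - self.k) // self.stride + 1
        w_out = (w + 2 * self.padding - self.k) // self.stride + 1
        return out.transpose(1, 2).reshape(b, -1, h_out, w_out)
```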
However, the magnitude of the full-precision gradient may be larger than +1 or -1. Denote the \ufb01lters and inputs in layer i as Fi and Xi. Different from \u2202Y \u2202Fi which only affects the gradient of Fi itself, the change of \u2202Y \u2202Xi would in\ufb02uence the gradient in not only layer i but also layers before layer i according to the gradient chain rule. If we use the full-precision gradient instead of the sign gradient of \u2202Y \u2202X for each layer, the magnitude of the gradient in the layers before this layer would be increased, and the discrepancy brought by using full-precision gradient would be magni\ufb01ed. To this end, we clip the gradient of X to [\u22121, 1] to prevent gradients from exploding. Then the partial derivative of output features Y with respect to the input features X is calculated as: \u2202Y (m, n, t) \u2202X(m + i, n + j, k) = HT(F(i, j, k, t) \u2212X(m + i, n + j, k)). (6) where HT(\u00b7) denotes the HardTanh function: HT(x) = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 x if \u22121 < x < 1, 1 x > 1, \u22121 x < \u22121. (7) 3.3. Adaptive Learning Rate Scaling In conventional CNNs, assuming that the weights and the input features are independent and identically distributed following normal distribution, the variance of the output can be roughly estimated as: V ar[YCNN] = d X i=0 d X j=0 cin X k=0 V ar[X \u00d7 F] = d2cinV ar[X]V ar[F]. (8) If variance of the weight is V ar[F] = 1 d2cin , the variance of output would be consistent with that of the input, which will be bene\ufb01cial for the information \ufb02ow in the neural network. In contrast, for AdderNets, the variance of the output can be approximated as: V ar[YAdderNet] = d X i=0 d X j=0 cin X k=0 V ar[|X \u2212F|] = r\u03c0 2 d2cin(V ar[X] + V ar[F]), (9) when F and X follow normal distributions. In practice, the variance of weights V ar[F] is usually very small [7], e.g. 10\u22123 or 10\u22124 in an ordinary CNN. Hence, compared with multiplying V ar[X] with a small value in Eq. (8), the addition operation in Eq. (9) tends to bring in a much larger variance of outputs in AdderNets. We next proceed to show the in\ufb02uence of this larger variance of outputs on the update of AdderNets. To promote the effectiveness of activation functions, we introduce batch normalization after each adder layer. Given input x over a mini-batch B = {x1, \u00b7 \u00b7 \u00b7 , xm}, the batch normalization layer can be denoted as: y = \u03b3 x \u2212\u00b5B \u03c3B + \u03b2, (10) \fAlgorithm 1 The feed forward and back propagation of adder neural networks. Input: An initialized adder network N and its training set X and the corresponding labels Y, the global learning rate \u03b3 and the hyper-parameter \u03b7. 1: repeat 2: Randomly select a batch {(x, y)} from X and Y; 3: Employ the AdderNet N on the mini-batch: x \u2192 N(x); 4: Calculate the full-precision derivative \u2202Y \u2202F and \u2202Y \u2202X for adder \ufb01lters using Eq. (5) and Eq. (6); 5: Exploit the chain rule to generate the gradient of parameters in N; 6: Calculate the adapative learning rate \u03b1l for each adder layer according to Eq. (13). 7: Update the parameters in N using stochastic gradient descent. 8: until convergence Output: A well-trained adder network N with almost no multiplications. where \u03b3 and \u03b2 are parameters to be learned, and \u00b5B = 1 m P i xi and \u03c32 B = 1 m P i(xi \u2212\u00b5B)2 are the mean and variance over the mini-batch, respectively. 
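To see how the surrogate gradients of Eqs. (5) and (6) enter back-propagation, here is a minimal custom autograd function for the fully-connected case (d = 1 in Eq. (1)); this is an illustrative sketch rather than the released implementation: the filter gradient is the full-precision difference X − F, while the input gradient is the HardTanh-clipped F − X.

```python
import torch
from torch.autograd import Function


class AdderLinearFn(Function):
    """Fully-connected adder layer (the d = 1 case of Eq. (1)) with the modified
    backward pass: full-precision gradient (Eq. (5)) for the filters and the
    HardTanh-clipped gradient (Eq. (6)) for the inputs."""

    @staticmethod
    def forward(ctx, x, weight):
        # x: (N, C_in), weight: (C_out, C_in)
        diff = x.unsqueeze(1) - weight.unsqueeze(0)          # X - F, shape (N, C_out, C_in)
        ctx.save_for_backward(diff)
        return -diff.abs().sum(dim=2)                        # Eq. (2), shape (N, C_out)

    @staticmethod
    def backward(ctx, grad_out):
        (diff,) = ctx.saved_tensors
        g = grad_out.unsqueeze(2)                            # (N, C_out, 1)
        grad_weight = (g * diff).sum(dim=0)                  # Eq. (5): dY/dF = X - F
        grad_x = (g * (-diff).clamp(-1.0, 1.0)).sum(dim=1)   # Eq. (6): HT(F - X), clipped to [-1, 1]
        return grad_x, grad_weight


x = torch.randn(4, 8, requires_grad=True)
w = torch.randn(16, 8, requires_grad=True)
AdderLinearFn.apply(x, w).sum().backward()
print(x.grad.shape, w.grad.shape)     # torch.Size([4, 8]) torch.Size([16, 8])
```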
The gradient of loss \u2113with respect to x is then calculated as: \u2202\u2113 \u2202xi = m X j=1 \u03b3 m2\u03c3B \u001a \u2202\u2113 \u2202yi \u2212\u2202\u2113 \u2202yj [1 + (xi \u2212xj)(xj \u2212\u00b5B) \u03c3B ] \u001b . (11) Given a much larger variance V ar[Y ] = \u03c3B in Eq. (9), the magnitude of the gradient w.r.t X in AdderNets would be much smaller than that in CNNs according to Eq. (11), and then the magnitude of the gradient w.r.t the \ufb01lters in AdderNets would be decreased as a result of gradient chain rule. Table 1. The \u21132-norm of gradient of weight in each layer using different networks at 1st iteration. Model Layer 1 Layer 2 Layer 3 AdderNet 0.0009 0.0012 0.0146 CNN 0.2261 0.2990 0.4646 Table 1 reports the \u21132-norm of gradients of \ufb01lters \u2225F\u22252 in LeNet-5-BN using CNNs and AdderNets on the MNIST dataset during the 1st iteration. LeNet-5-BN denotes the LeNet-5 [18] adding an batch normalization layer after each convolutional layer. As shown in this table, the norms of gradients of \ufb01lters in AdderNets are much smaller than that in CNNs, which could slow down the update of \ufb01lters in AdderNets. A straightforward idea is to directly adopt a larger learning rate for \ufb01lters in AdderNets. However, it is worth noticing that the norm of gradient differs much in different layers of AdderNets as shown in Table 1, which requests special consideration of \ufb01lters in different layers. To this end, we propose an adaptive learning rate for different layers in AdderNets. Speci\ufb01cally, the update for each adder layer l is calculated by \u2206Fl = \u03b3 \u00d7 \u03b1l \u00d7 \u2206L(Fl), (12) where \u03b3 is a global learning rate of the whole neural network (e.g. for adder and BN layers), \u2206L(Fl) is the gradient of the \ufb01lter in layer l and \u03b1l is its corresponding local learning rate. As \ufb01lters in AdderNets act subtraction with the inputs, the magnitude of \ufb01lters and inputs are better to be similar to extract meaningful information from inputs. Because of the batch normalization layer, the magnitudes of inputs in different layers have been normalized, which then suggests a normalization for the magnitudes of \ufb01lters in different layers. The local learning rate can therefore be de\ufb01ned as: \u03b1l = \u03b7 \u221a k \u2225\u2206L(Fl)\u22252 , (13) where k denotes the number of elements in Fl, and \u03b7 is a hyper-parameter to control the learning rate of adder \ufb01lters. By using the proposed adaptive learning rate scaling, the adder \ufb01lters in different layers can be updated with nearly the same step. The training procedure of the proposed AdderNet is summarized in Algorithm 1. 4. Experiment In this section, we implement experiments to validate the effectiveness of the proposed AdderNets on several benchmark datasets, including MNIST, CIFAR and ImageNet. Ablation study and visualization of features are provided to further investigate the proposed method. The experiments are conducted on NVIDIA Tesla V100 GPU in PyTorch. 4.1. Experiments on MNIST To illustrate the effectiveness of the proposed AdderNets, we \ufb01rst train a LeNet-5-BN [18] on the MNIST dataset. The images are resized to 32 \u00d7 32 and are proprecessed following [18]. The networks are optimized using Nesterov Accelerated Gradient (NAG), and the weight decay and the momentum were set as 5 \u00d7 10\u22124 and 0.9, respectively. 
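The adaptive scaling of Eqs. (12)-(13) can then be dropped into a plain SGD step as in the simplified sketch below; momentum, weight decay, and the BN parameters (which keep the ordinary global rate, as in Algorithm 1) are omitted for clarity.

```python
import torch


def adaptive_adder_sgd_step(adder_filters, lr, eta=0.1, eps=1e-12):
    """One update of the adder filters with the layer-wise rate of Eq. (13):
    alpha_l = eta * sqrt(k) / ||grad_l||_2, with k the number of elements in
    the filter, so every adder layer moves with roughly the same step size."""
    with torch.no_grad():
        for f in adder_filters:                  # one weight tensor per adder layer
            if f.grad is None:
                continue
            g = f.grad
            alpha = eta * g.numel() ** 0.5 / (g.norm() + eps)   # Eq. (13)
            f -= lr * alpha * g                                 # Eq. (12)
```

In a full training script this step would apply only to the adder filters, while all other parameters are updated with the unscaled global rate γ.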
We train the networks for 50 epochs using the cosine learning rate decay [20] with an initial learning rate 0.1. The batch size is set as 256. For the proposed AdderNets, we replace the convolutional \ufb01lters in LeNet-5-BN with our adder \ufb01lters. Note that the fully connected layer can be regarded as a convolutional layer, we also replace the multiplications in the fully connect layers with subtractions. We set the hyper-parameter in Eq. (13) to be \u03b7 = 0.1, which achieves best performance compared with other values from the pool \b 1, 1 2, 1 5, 1 10, 1 20 \t . \fTable 2. Classi\ufb01cation results on the CIFAR-10 and CIFAR-100 datasets. Model Method #Mul. #Add. XNOR CIFAR-10 CIFAR-100 BNN 0 0.65G 0.65G 89.80% 65.41% VGG-small AddNN 0 1.30G 0 93.72% 72.64% CNN 0.65G 0.65G 0 93.80% 72.73% BNN 0 41.17M 41.17M 84.87% 54.14% ResNet-20 AddNN 0 82.34M 0 91.84% 67.60% CNN 41.17M 41.17M 0 92.25% 68.14% BNN 0 69.12M 69.12M 86.74% 56.21% ResNet-32 AddNN 0 138.24M 0 93.01% 69.02% CNN 69.12M 69.12M 0 93.29% 69.74% The convolutional neural network achieves a 99.4% accuracy with \u223c435K multiplications and \u223c435K additions. By replacing the multiplications in convolution with additions, the proposed AdderNet achieves a 99.4% accuracy, which is the same as that of CNNs, with \u223c870K additions and almost no multiplication.In fact, the theoretical latency of multiplications in CPUs is also larger than that of additions and subtractions. There is an instruction table 1 which lists the instruction latencies, throughputs and microoperation breakdowns for Intel, AMD and VIA CPUs. For example, in VIA Nano 2000 series, the latency of \ufb02oat multiplication and addition is 4 and 2, respectively. The AdderNet using LeNet-5 model will have \u223c1.7M latency while CNN will have \u223c2.6M latency in this CPU. In conclusion, the AdderNet can achieve similar accuracy with CNN but have fewer computational cost and latency. Noted that CUDA and cuDNN optimized adder convolutions are not yet available, we do not compare the actual inference time. 4.2. Experiments on CIFAR We then evaluate our method on the CIFAR dataset, which consist of 32\u00d732 pixel RGB color images. Since the binary networks [38] can use the XNOR operations to replace multiplications, we also compare the results of binary neural networks (BNNs). We use the same data augmentation and pro-precessing in He et al. [9] for training and testing. Following Zhou et al. [38], the learning rate is set to 0.1 in the beginning and then follows a polynomial learning rate schedule. The models are trained for 400 epochs with a 256 batch size. We follow the general setting in binary networks to set the \ufb01rst and last layers as full-precision convolutional layers. In AdderNets, we use the same setting for a fair comparison. The hyper-parameter \u03b7 is set to 0.1 following the experiments on the MNIST dataset. The classi\ufb01cation results are reported in Table 2. Since computational cost in batch normalization layer, the \ufb01rst layer and the last layer are signi\ufb01cantly less than other layers, we omit these layers when counting FLOPs. We \ufb01rst evaluate the VGG-small model [4] in the CIFAR10 and CIFAR-100 dataset. As a result, the AdderNets 1www.agner.org/optimize/instruction_tables.pdf achieve nearly the same results (93.72% in CIFAR-10 and 72.64% in CIFAR-100) with CNNs (93.80% in CIFAR-10 and 72.73% in CIFAR-100) with no multiplication. 
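The rough latency figures quoted for LeNet-5 follow directly from the cited instruction table; a quick arithmetic check using the operation counts above and the VIA Nano latencies (4 cycles per float multiply, 2 per add) reproduces the ~2.6M versus ~1.7M comparison:

```python
MUL_LAT, ADD_LAT = 4, 2                                   # VIA Nano 2000: float mul / add latency (cycles)
cnn_latency   = 435_000 * MUL_LAT + 435_000 * ADD_LAT     # ~435K mults + ~435K adds -> 2,610,000
adder_latency = 870_000 * ADD_LAT                         # ~870K adds, no mults     -> 1,740,000
print(cnn_latency, adder_latency)
```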
Although the model size of BNN is much smaller than those of AdderNet and CNN, its accuracies are much lower (89.80% in CIFAR-10 and 65.41% in CIFAR-100). We then turn to the widely used ResNet models (ResNet-20 and ResNet-32) to further investigate the performance of different networks. As for the ResNet-20, Tte convolutional neural networks achieve the highest accuracy (i.e. 92.25% in CIFAR-10 and 68.14% in CIFAR-100) but with a large number of multiplications (41.17M). The proposed AdderNets achieve a 91.84% accuracy in CIFAR-10 and a 67.60% accuracy in CIFAR-100 without multiplications, which is comparable with CNNs. In contrast, the BNNs only achieve 84.87% and 54.14% accuracies in CIFAR-10 and CIFAR-100. The results in ResNet-32 also suggest that the proposed AdderNets can achieve similar results with conventional CNNs. 4.3. Experiments on ImageNet We next conduct experiments on the ImageNet dataset [17], which consist of 224 \u00d7 224 pixel RGB color images. We use ResNet-18 model to evaluate the proposed AdderNets follow the same data augmentation and proprecessing in He et al. [9]. We train the AdderNets for 150 epochs utilizing the cosine learning rate decay [20]. These networks are optimized using Nesterov Accelerated Gradient (NAG), and the weight decay and the momentum are set as 10\u22124 and 0.9, respectively. The batch size is set as 256 and the hyper-parameter in AdderNets is the same as that in CIFAR experiments. Table 3 shows the classi\ufb01cation results on the ImageNet dataset by exploiting different nerual networks. The convolutional neural network achieves a 69.8% top-1 accuracy and an 89.1% top-5 accuracy in ResNet-18. However, there are 1.8G multiplications in this model, which bring enormous computational complexity. Since the addition operation has smaller computational cost than multiplication, we propose AdderNets to replace the multiplications in CNNs with subtractions. As a result, our AdderNet achieve a 66.8% top-1 accuracy and an 87.4% top-5 accuracy in \fTable 3. Classi\ufb01cation results on the ImageNet datasets. Model Method #Mul. #Add. XNOR Top-1 Acc. Top-5 Acc. BNN 0 1.8G 1.8G 51.2% 73.2% ResNet-18 AddNN 0 3.6G 0 67.0% 87.6% CNN 1.8G 1.8G 0 69.8% 89.1% BNN 0 3.9G 3.9G 55.8% 78.4% ResNet-50 AddNN 0 7.7G 0 74.9% 91.7% CNN 3.9G 3.9G 0 76.2% 92.9% (a) Visualization of \ufb01lters of AdderNets (b) Visualization of \ufb01lters of CNNs Figure 2. Visualization of \ufb01lters in the \ufb01rst layer of LeNet-5-BN on the MNIST dataset. Both of them can extract useful features for image classi\ufb01cation. ResNet-18, which demonstrate the adder \ufb01lters can extract useful information from images. Rastegari et al. [22] proposed the XNOR-net to replace the multiplications in neural networks with XNOR operations. Although the BNN can achieve high speed-up and compression ratio, it achieves only a 51.2% top-1 accuracy and a 73.2% top-5 accuracy in ResNet-18, which is much lower than the proposed AdderNet. We then conduct experiments on a deeper architecture (ResNet-50). The BNN could only achieve a 55.8% top-1 accuracy and a 78.4% top-5 accuracy using ResNet-50. In contrast, the proposed AdderNets can achieve a 74.9% top1 accuracy and a 91.7% top-5 accuracy, which is closed to that of CNN (76.2% top-1 accuracy and 92.9% top-5 accuracy). 4.4. Visualization Results Visualization on features. The AdderNets utilize the \u21131-distance to measure the relationship between \ufb01lters and input features instead of cross correlation in CNNs. 
Therefore, it is important to further investigate the difference of the feature space in AdderNets and CNNs. We train a LeNet++ on the MNIST dataset following [31], which has six convolutional layers and a fully-connected layer for extracting powerful 3D features. Numbers of neurons in each convolutional layer are 32, 32, 64, 64, 128, 128, and 2, respectively. For the proposed AdderNets, the last fully connected layers are replaced with the proposed add \ufb01lters. The visualization results are shown in Figure 1. The convolutional neural network calculates the cross correlation between \ufb01lters and inputs. If \ufb01lters and inputs are approximately normalized, convolution operation is then equivalent to calculate cosine distance between two vectors. That is probably the reason that features in different classes are divided by their angles in Figure 1. In contrast, AdderNets utilize the \u21131-norm to distinguish different classes. Thus, features tend to be clustered towards different class centers. The visualization results demonstrate that the proposed AdderNets could have the similar discrimination ability to classify images as CNNs. Visualization on \ufb01lters. We visualize the \ufb01lters of the LeNet-5-BN network in Figure 2. Although the AdderNets and CNNs utilize different distance metrics, \ufb01lters of the proposed adder networks (see Figure 2 (a)) still share some similar patterns with convolution \ufb01lters (see Figure 2 (b)). The visualization experiments further demonstrate that the \ufb01lters of AdderNets can effectively extract useful information from the input images and features. Visualization on distribution of weights. We then visualize the distribution of weights for the 3th convolution layer on LeNet-5-BN. As shown in Figure 4, the distribution of weights with AdderNets is close to a Laplace distribution while that with CNNs looks more like a Gaussian distribution. In fact, the prior distribution of \u21131-norm is Laplace distribution [27] and that of \u21132-norm is Gaussian distribution [24] and the \u21132-norm is exactly same as the cross correlation, which will be analyzed in the supplemental material. 4.5. Ablation Study We propose to use a full-precision gradient to update the \ufb01lters in our adder \ufb01lters and design an adaptive learning rate scaling for deal with different layers in AdderNets. It is essential to evaluate the effectiveness of these components. We \ufb01rst train the LeNet-5-BN without changing its learning rate, which results in 54.91% and 29.26% accuracies using full-precision gradient and sign gradient, respectively. The networks can be hardly trained since its gradients are very small. Therefore, it is necessary to increase the learning rate of adder \ufb01lters. We directly increase the learning rate for \ufb01lters in AdderNets by 100, which achieves best performance with fullprecision gradient compared with other values from the pool {10, 50, 100, 200, 500}. As shown in Figure 3, the AdderNets using adaptive learning rate (ALR) and increased learning rate (ILR) achieve 97.99% and 97.72% accuracy \f(a) Accuracy (b) Loss Figure 3. Learning curve of AdderNets using different optimization schemes. FP and Sgn gradient denotes the full-precision and sign gradient. The proposed adaptive learning rate scaling with full-precision gradient achieves the highest accuracy (99.40%) with the smallest loss. Figure 4. Histograms over the weights with AdderNet (left) and CNN (right). 
The weights of AdderNets follow Laplace distribution while those of CNNs follow Gaussian distribution. with sign gradient, which is much lower than the accuracy of CNN (99.40%). Therefore, we propose the full-precision gradient to precisely update the weights in AdderNets. As a result, the AdderNet with ILR achieves a 98.99% accuracy using the full-precision gradient. By using the adaptive learning rate (ALR), the AdderNet can achieve a 99.40% accuracy, which demonstrate the effectiveness of the proposed ALR method. Table 4. The impact of parameter \u03b7 using LeNet-5-BN on the MNIST dataset. \u03b7 1 0.5 0.2 0.1 0.05 Acc. (%) 99.26 99.30 99.35 99.40 99.32 Impact of parameters. As discussed above, the proposed adaptive learning rate scaling has a hyper-parameter: \u03b7. We then test its impact on the accuracy of the student network by conducting the experiments on the MNIST dataset. We use LeNet-5-BN as the backbone of AdderNet. Other experimental settings are same as mentioned in Sec. 4.1. It can be seen from Table 4 that the AdderNets trained utilizing the adaptive learning rate scaling achieves the highest accuracy (99.40%) when \u03b7 = 0.1. Based on the above analysis, we keep the setting of hyper-parameters for the proposed method. 5." + }, + { + "url": "http://arxiv.org/abs/1904.01186v4", + "title": "Data-Free Learning of Student Networks", + "abstract": "Learning portable neural networks is very essential for computer vision for\nthe purpose that pre-trained heavy deep models can be well applied on edge\ndevices such as mobile phones and micro sensors. Most existing deep neural\nnetwork compression and speed-up methods are very effective for training\ncompact deep models, when we can directly access the training dataset. However,\ntraining data for the given deep network are often unavailable due to some\npractice problems (e.g. privacy, legal issue, and transmission), and the\narchitecture of the given network are also unknown except some interfaces. To\nthis end, we propose a novel framework for training efficient deep neural\nnetworks by exploiting generative adversarial networks (GANs). To be specific,\nthe pre-trained teacher networks are regarded as a fixed discriminator and the\ngenerator is utilized for derivating training samples which can obtain the\nmaximum response on the discriminator. Then, an efficient network with smaller\nmodel size and computational complexity is trained using the generated data and\nthe teacher network, simultaneously. Efficient student networks learned using\nthe proposed Data-Free Learning (DAFL) method achieve 92.22% and 74.47%\naccuracies using ResNet-18 without any training data on the CIFAR-10 and\nCIFAR-100 datasets, respectively. Meanwhile, our student network obtains an\n80.56% accuracy on the CelebA benchmark.", + "authors": "Hanting Chen, Yunhe Wang, Chang Xu, Zhaohui Yang, Chuanjian Liu, Boxin Shi, Chunjing Xu, Chao Xu, Qi Tian", + "published": "2019-04-02", + "updated": "2019-12-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV", + "stat.ML" + ], + "main_content": "Introduction Deep convolutional neural networks (CNNs) have been successfully used in various computer vision applications such as image classi\ufb01cation [24, 11], object detection [21] and semantic segmentation [15]. However, launching most \u2217This work was done while visiting Huawei Noah\u2019s Ark Lab \u2020corresponding author of the widely used CNNs requires heavy computation and storage, which can only be used on PCs with modern GPU cards. 
For example, over 500MB of memory and over 1010\u00d7 multiplications are demanded for processing one image using VGGNet [24], which is almost impossible to be applied on edge devices such as autonomous cars and micro robots. Although these pre-trained CNNs have a number of parameters, Han et al. [6] showed that discarding over 85% of weights in a given neural network would not obviously damage its performance, which demonstrates that there is a signi\ufb01cant redundancy in these CNNs. In order to compress and speed-up pre-trained heavy deep models, various effective approaches have been proposed recently. For example, Gong et al. [5] utilized vector quantization approach to represent similar weights as cluster centers. Denton et al. [3] exploited low-rank decomposition to process the weight matrices of fully-connected layers. Chen et al. [1] proposed a hashing based method to encode parameters in CNNs. Han et al. [6] employed pruning, quantization and Huffman coding to obtain a compact deep CNN with lower computational complexity. Hinton et al. [8] proposed the knowledge distillation approach, which distills the information of the pre-trained teacher network for learning a portable student network, etc. Although the above mentioned methods have made tremendous efforts on benchmark datasets and models, an important issue has not been widely noticed, i.e. most existing network compression and speed-up algorithms have a strong assumption that training samples of the original network are available. However, the training dataset is routinely unknown in real-world applications due to privacy and transmission limitations. For instance, users do not want to let their photos leaked to others, and some of the training datasets are too huge to quickly upload to the cloud. In addition, parameters and architecture of pre-trained networks are also unknown sometimes except the input and output layers. Therefore, conventional methods cannot be 1 arXiv:1904.01186v4 [cs.LG] 31 Dec 2019 \fTeacher Network Student Network Random Signals Generated Images Generative Network Distillation Figure 1. The diagram of the proposed method for learning ef\ufb01cient deep neural networks without the training dataset. The generator is trained for approximating images in the original training set by extracting useful information from the given network. Then, the portable student network can be effective learned by using generated images and the teacher network directly used for learning portable deep models under these practice constrains. Nevertheless, only a few works have been proposed for compressing deep models without training data. Lopes et al. [16] utilized the \u201cmeta-data\u201d (e.g. means and standard deviation of activations from each layer) recorded from the original training dataset, which is not provided for most well-trained CNNs. Srinivas and Babu [26] compressed the pre-trained network by merging similar neurons in fullyconnected layers. However, the performance of compressed networks using these methods is much lower than that of the original network, due to they cannot effectively utilize the pre-trained neural networks. To address the aforementioned problem, we propose a novel framework for compressing deep neural networks without the original training dataset. To be speci\ufb01c, the given heavy neural network is regarded as a \ufb01xed discriminator. 
Then, a generative network is established for alternating the original training set by extracting information from the network during the adversarial procedure, which can be utlized for learning smaller networks with acceptable performance. The superiority of the proposed method is demonstrated through extensive experiments on benchmark datasets and models. Rest of this paper is organized as follows. Section 2 investigates related works on CNN compression algorithms. Section 3 proposes the data-free teacher-student paradigm by exploiting GAN. Section 4 illustrates experimental results of the proposed method on benchmark datasets and models and Section 5 concludes the paper. 2. Related Works Based on different assumptions and applications, existing portable network learning methods can be divided into two categories, i.e. data-driven and data-free methods. 2.1. Data-Driven Network Compression In order to learn ef\ufb01cient deep neural networks, a number of methods have been proposed to eliminate redundancy in pre-trained deep models. For example, Gong et al. [5] employed the vector quantization scheme to represent similar weights in neural networks. Denton et al. [3] exploited the singular value decomposition (SVD) approach to decompose weight matrices of fully-connected layers. Han et al. [6] proposed the pruning approach for removing subtle weights in pre-trained neural networks. Wang et al. [27] further introduced the discrete cosine transform (DCT) bases and converted convolution \ufb01lters into the frequency domain to achieve higher compression and speed-up ratios. Yang et al. [28] used a set of Lego \ufb01lters to build ef\ufb01cient CNNs. Besides eliminating redundant weights or \ufb01lters, Hinton et al. [8] proposed a knowledge distillation (KD) paradigm for transferring useful information from a given teacher network to a portable student network. Yim et al. [29] introduced the FSP (Flow of Solution Procedure) matrix to inherit the relationship between features from two layers. Li et al. [13] further presented a feature mimic framework to train ef\ufb01cient convolutional networks for objective detection. In addition, Rastegari et al. [20] and Courbariaux et al. [2] explored binarized neural networks to achieve considerable compression and speed-up ratios, which weights are -1/1 or -1/0/1, etc. Although the above mentioned algorithms obtained promising results on most of benchmark datasets and deep models, they cannot be effectively launched without the original training dataset. In practice, the training dataset could be unavailable for some reasons, e.g. transmission limitations and privacy. Therefore, it is necessary to study the data-free approach for compressing neural networks. \f2.2. Data-Free Network Compression There are only a few methods that are proposed for compressing deep neural networks without the original training dataset. Srinivas and Babu [26] proposed to directly merge similar neurons in fully-connected layers, which cannot be applied on convolutional layers and networks which detail architectures and parameters information are unknown. In addition, Lopes et al. [16] attempted to reconstruct the original data from \u201cmeta-data\u201d and utilize the knowledge distillation scheme to learn a smaller network. Since the \ufb01ne-tuning procedure cannot be accurately conducted without the original training dataset, performance of compressed methods by existing algorithms is worse than that of baseline models. 
Therefore, an effective data-free approach for learning ef\ufb01cient CNNs with comparable performance is highly required. 3. Data-free Student Network learning In this section, we will propose a novel data-free framework for compressing deep neural networks by embedding a generator network into the teacher-student learning paradigm. 3.1. Teacher-Student Interactions As mentioned above, the original training dataset is not usually provided by customers for various concerns. In addition, parameters and detailed architecture information could also be unavailable sometimes. Thus, we propose to utilized the teacher-student learning paradigm for learning portable CNNs. Knowledge Distillation (KD) [8] is a widely used approach to transfer the output information from a heavy network to a smaller network for achieving higher performance, which does not utilize parameters and the architecture of the given network. Although the given deep models may only be provided with limited interfaces (e.g. input and output interfaces), we can transfer the knowledge to inherit the useful information from the teacher networks. Let NT and NS denote the original pre-trained convolutional neural network (teacher network) and the desired portable network (student network), the student network can be optimized using the following loss function based on knowledge distillation: LKD = 1 n X i Hcross(yi S, yi T ). (1) where Hcross is the cross-entropy loss, yi T = NT (xi) and yi S = NS(xi) are the outputs of the teacher network NT and student network NS, respectively. Therefore, utilizing the knowledge transfer technique, a portable network can be optimized without the speci\ufb01c architecture of the given network. 3.2. GAN for Generating Training Samples In order to learn portable network without original data, we exploit GAN to generate training samples utilizing the available information of the given network. Generative adversarial networks (GANs) have been widely applied for generating samples. GANs consist of a generator G and a discriminator D. G is expected to generate desired data while D is trained to identify the differences between real images and those produced by the generator. To be speci\ufb01c, given an input noise vector z, G maps z to the desired data x, i.e. G : z \u2192x. On the other hand, the goal of D is to distinguish the real data from synthetic data G(z). For an aribitrary vanilla GAN, the objective function can be formulated as LGAN =Ey\u223cpdata(y)[log D(y)] +Ez\u223cpz(z)[log(1 \u2212D(G(z)))]. (2) In the adversarial procedure, the generator is continuously upgraded according to the training error produced by D. The optimal G is obtained by optimizing the following problem G\u2217= arg min G Ez\u223cpz(z)[log(1 \u2212D\u2217(G(z)))], (3) where D\u2217is the optimal discriminator. Adversarial learning techniques can be naturally employed to synthesize training data. However according to Eq. (2), the discriminator requires real images for training. In the absence of training data, it is thus impossible to train the discriminator as vanilla GANs. Recent works [19] have proved that the discriminator D can learn the hierarchy of representations from samples, which encourages the generalization of D in other tasks like image classi\ufb01cation. Odena [18] further suggested that the tasks of discrimination and classi\ufb01cation can improve each other. 
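For reference, the distillation objective of Eq. (1) in Section 3.1 amounts to a soft-target cross-entropy between the student's prediction and the teacher's output; a minimal sketch is given below, assuming both networks return raw logits and using no distillation temperature, since none is specified.

```python
import torch.nn.functional as F


def kd_loss(student_logits, teacher_logits):
    """L_KD of Eq. (1): cross-entropy of the student prediction against the
    teacher's soft output, averaged over the batch."""
    soft_targets = F.softmax(teacher_logits, dim=1)
    return -(soft_targets * F.log_softmax(student_logits, dim=1)).sum(dim=1).mean()
```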
Instead of training a new discriminator as vanilla GANs, the given deep neural network can extract semantic features from images as well, since it has already been well trained on large-scale datasets. Hence, we propose to regard this given deep neural network (e.g. ResNet-50 [7]) as a \ufb01xed discriminator. Therefore, G can be optimized directly without training D together, i.e. the parameters of original network D are \ufb01xed during training G. In addition, the output of the discriminator is a probability indicating whether an input image is real or fake in vanilla GANs. However, given the teacher deep neural network as the discriminator, the output is to classify images to different concept sets, instead of indicating the reality of images. The loss function in vanilla GANs is therefore inapplicable for approximating the original training set. Thus, we conduct thorough analysis on real images and their responses on this teacher network. Several new loss functions will be devised to re\ufb02ect our observations. \fOn the image classi\ufb01cation task, the teacher deep neural network adopts the cross entropy loss in the training stage, which enforces the outputs to be close to groundtruth labels of inputs. Speci\ufb01cally for multi-class classi\ufb01cation, the outputs are encouraged to be one-hot vectors, where only one entry is 1 and all the others are 0s. Denote the generator and the teacher network as G and NT , respectively. Given a set of random vector {z1, z2, \u00b7 \u00b7 \u00b7 , zn}, images generated from these vectors are {x1, x2, \u00b7 \u00b7 \u00b7 , xn}, where xi = G(zi). Inputting these images into the teacher network, we can obtain the outputs {y1 T , y2 T , \u00b7 \u00b7 \u00b7 , yn T } with yi T = NT (xi). The predicted labels {t1, t2, \u00b7 \u00b7 \u00b7 , tn} are then calculated by ti = arg max j (yi T )j. If images generated by G follow the same distribution as that of the training data of the teacher network, they should also have similar outputs as the training data. We thus introduce the one-hot loss, which encourages the outputs of generated images by the teacher network to be close to one-hot like vectors. By taking {t1, t2, \u00b7 \u00b7 \u00b7 , tn} as pseudo ground-truth labels, we formulate the one-hot loss function as Loh = 1 n X i Hcross(yi T , ti), (4) where Hcross is the cross-entropy loss function. By introducing the one-hot loss, we expect that a generated image can be classi\ufb01ed into one particular category concerned by the teacher network with a higher probability. In other words, we pursue synthetic images that are exclusively compatible with the teacher network, rather than general real images for any scenario. Besides predicted class labels by DNNs, intermediate features extracted by convolution layers are also important representations of input images. A large number of works have investigated the interpretability of deep neural networks [30, 22, 4]. Features extracted by convolution \ufb01lters are supposed to contain valuable information about the input images. In particular, Zhang et al. [31] assigned each \ufb01lter in a higher convolution layer with a part of object, which demonstrates that each \ufb01lter stands for different semantics. We denote features of xi extracted by the teacher network as f i T , which corresponds to the output before the fully-connected layer. 
Since \ufb01lters in the teacher DNNs have been trained to extract intrinsic patterns in training data, feature maps tend to receive higher activation value if input images are real rather than some random vectors. Hence, we de\ufb01ne an activation loss function as: La = \u22121 n X i \u2225f i T \u22251, (5) where \u2225\u00b7 \u22251 is the conventional l1 norm. Moreover, to ease the training procedure of a deep neural network, the number of training examples in each category is usually balanced, e.g. there are 6,000 images in Algorithm 1 DAFL for learning portable student networks. Input: A given teacher network NT , parameters of different objects: \u03b1 and \u03b2. 1: Initialize the generator G, the student network NS with fewer memory usage and computational complexity; 2: repeat 3: Module 1: Training the Generator. 4: Randomly generate a batch of vector: {zi}n i=1; 5: Generate the training samples: x \u2190G(z); 6: Employ the teacher network on the mini-batch: 7: [yT , t, fT ] \u2190NT (x); 8: Calculate the loss function LT otal (Fcn.7): 9: Update weights in G using back-propagation; 10: Module 2: Training the student network. 11: Randomly generate a batch of vector {zi}n i=1; 12: Utlize the generator on the mini-batch: x \u2190G(z); 13: Employ the teacher network and the student network on the mini-batch simultaneously: 14: yS \u2190NS(x), yT \u2190NT (x); 15: Calculate the knowledge distillation loss: 16: LKD \u21901 n P i H(yi S, yi T ); 17: Update weights in NS according to the gradient; 18: until convergence Output: The student network NS. each class in the MNIST dataset. We employ the information entropy loss to measure the class balance of generated images. Speci\ufb01cally, given a probability vector p = (p1, p2, \u00b7 \u00b7 \u00b7 , pk), the information entropy, which measures the degree of confusion, of p is calculated as Hinfo(p) = \u22121 k P i pi log(pi). The value of Hinfo(p) indicates the amount of information that p owns, which will take the maximum when all variables equal to 1 k. Given a set of output vectors {y1 T , y2 T , \u00b7 \u00b7 \u00b7 , yn T }, where yi T = NT (xi), the frequency distribution of generated images for every class is 1 n P i yi T . The information entropy loss of generated images is therefore de\ufb01ned as Lie = \u2212Hinfo( 1 n X i yi T ). (6) When the loss takes the minimum, every element in vector 1 n P i yi S would equal to 1 k, which implies that G could generate images of each category with roughly the same probability. Therefore, minimizing the information entropy of generated images can lead to a balanced set of synthetic images. By combining the aforementioned three loss functions, we obtain the \ufb01nal objective function LT otal = Loh + \u03b1La + \u03b2Lie, (7) \fTable 1. Classi\ufb01cation result on the MNIST dataset. 
Algorithm Required data LeNet-5 [12] HintonNet [8] Accuracy FLOPs #params Accuracy FLOPs #params Teacher Original data 98.91% \u223c436K \u223c62K 98.39% \u223c2.39M \u223c2.4M Standard back-propagation Original data 98.65% \u223c144K \u223c16K 98.11% \u223c1.28M \u223c1.28M Knowledge Distillation [8] Original data 98.91% \u223c144K \u223c16K 98.39% \u223c1.28M \u223c1.28M Normal distribution No data 88.01% \u223c144K \u223c16K 87.58% \u223c1.28M \u223c1.28M Alternative data USPS dataset 94.56% \u223c144K \u223c16K 93.99% \u223c1.28M \u223c1.28M Meta data [16] Meta data 92.47% \u223c144K \u223c16K 91.24% \u223c1.28M \u223c1.28M Data-Free Learning (DAFL) No data 98.20% \u223c144K \u223c16K 97.91% \u223c1.28M \u223c1.28M where \u03b1 and \u03b2 are hyper parameters for balancing three different terms. By minimizing the above function, the optimal generator can synthesize images that have the similar distribution as that of the training data previously used for training the teacher network (i.e. the discriminator network). It is noted that some previous works [23, 17] could synthesize images by optimizing the input of the neural network using back-propagation. But it is dif\ufb01cult to generate abundant images for the subsequent student network training, for each synthetic image leads to an independent optimization problem solved by back-propagation. In contrast, the proposed method can imitate the distribution of training data directly, which is more \ufb02exible and ef\ufb01cient to generate new images. 3.3. Optimization The learning procedure of our algorithm can be divided into two stages of training. First, we regard the well-trained teacher network as a \ufb01xed discriminator. Using the loss function LT otal in Eq. 7, we optimize a generator G to generate images that follow the similar distribution as that of the original training images for the teacher network. Second, we utilize the knowledge distillation approach to directly transfer knowledge from the teacher network to the student network. The student network with fewer parameters is then optimized using the KD loss LKD in Eq. 1. The diagram of the proposed method is shown in Figure 1. We use stochastic gradient descent (SGD) method to optimize the image generator G and the student network NS. In the training of G, the \ufb01rst term of LT otal is the cross entropy loss, which can be trained traditionally. The second term La in Eq. 7 is exactly a linear operation, and the gradient of La with respect to f i T can be easily calculated as: \u2202La \u2202f i T = \u22121 nsgn(f i T ), (8) where sgn(\u00b7) denotes sign function. Parameters WG in G will be updated by: \u2202La \u2202WG = X i \u2202La \u2202f i T \u00b7 \u2202f i T \u2202WG , (9) where \u2202f i T \u2202WG is the gradient of the feature f i T . The gradient of the \ufb01nal term Lie with respect to yi T can be easily calculated as: \u2202Lie \u2202yi T = \u22121 nyi[log( 1 n X j yj T ) + 1], (10) where 1 denotes n-dimensional vector with all values as 1. Parameters in G will be additionally updated by: \u2202Lie \u2202WG = X i \u2202Lie \u2202yi T \u00b7 \u2202yi T \u2202WG . (11) Detailed procedures of the proposed Data-Free Learning (DAFL) scheme for learning ef\ufb01cient student neural networks is summarized in Algorithm 1. 4. Experiments In this section, we will demonstrate the effectiveness of our proposed data-free knowledge distillation method and conduct massive ablation experiments to have an explicit understanding of each component in the proposed method. 
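Putting Eqs. (4)-(7) and the two training modules of Algorithm 1 together, a condensed sketch of one DAFL iteration is given below. It reuses the kd_loss sketch above; teacher_forward is a hypothetical helper assumed to return both the teacher's logits and its features before the fully-connected layer, the 1/k factor in the entropy definition is treated as a constant absorbed into β, and the defaults α = 0.1, β = 5 follow the values used in the MNIST experiments.

```python
import torch
import torch.nn.functional as F


def dafl_generator_loss(t_logits, t_features, alpha=0.1, beta=5.0):
    """Generator objective L_Total of Eq. (7) on a batch of generated images."""
    # One-hot loss (Eq. 4): the teacher's own argmax serves as a pseudo label.
    l_oh = F.cross_entropy(t_logits, t_logits.argmax(dim=1))
    # Activation loss (Eq. 5): reward large l1-norm teacher features.
    l_a = -t_features.abs().sum(dim=1).mean()
    # Information-entropy loss (Eq. 6): push the mean prediction towards uniform.
    p = F.softmax(t_logits, dim=1).mean(dim=0)
    l_ie = (p * (p + 1e-8).log()).sum()          # = -H(p)
    return l_oh + alpha * l_a + beta * l_ie


def dafl_iteration(teacher_forward, generator, student, g_opt, s_opt,
                   batch_size, z_dim, device):
    """One pass of Algorithm 1: Module 1 updates G, Module 2 distills the student.
    The teacher's parameters are assumed frozen (requires_grad=False); gradients
    still flow through it to the generator in Module 1."""
    # Module 1: train the generator against the fixed teacher (discriminator).
    z = torch.randn(batch_size, z_dim, device=device)
    t_logits, t_feats = teacher_forward(generator(z))
    g_loss = dafl_generator_loss(t_logits, t_feats)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

    # Module 2: train the student on freshly generated images via Eq. (1).
    z = torch.randn(batch_size, z_dim, device=device)
    fake = generator(z).detach()
    with torch.no_grad():
        t_logits, _ = teacher_forward(fake)
    s_loss = kd_loss(student(fake), t_logits)
    s_opt.zero_grad()
    s_loss.backward()
    s_opt.step()
    return g_loss.item(), s_loss.item()
```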
4.1. Experiments on MNIST We \ufb01rst implement experiments on the MNIST dataset, which is composed of 28 \u00d7 28 pixel images from 10 categories (from 0 to 9). The whole dataset consists of 60,000 training images and 10,000 testing images. For choosing hyper-parameters of the proposed methods, we take 10,000 images as a validation set from training images. Then, we train models on the full 60,000 images to obtain the ultimate network. To make a fair comparison, we follow the setting in [16]. Two architectures are used for investigating the performance of proposed method, i.e. a convolution-based architecture and a network consists of fully-connect layers. For convolution models, we use LeNet-5 [12] as the teacher model and LeNet-5-HALF (a modi\ufb01ed version with half the number of channels per layer) as the student model. For the second architecture, the teacher network consists of two hidden layers of 1,200 units (Hinton-784-1200-120010) [8] and student network consists of two hidden layers of \fTable 2. Effectiveness of different components of the proposed data-free learning method. One-hot loss ! ! ! ! Information entropy loss ! ! ! ! Feature maps activation loss ! ! ! ! Top 1 accuracy 88.01% 78.77% 88.14% 15.95% 42.07% 97.25% 95.53% 98.20% 800 units (Hinton-784-800-800-10). The student networks have signi\ufb01cantly fewer parameters than teacher networks. For our method, \u03b1 and \u03b2 in Fcn.7 are 0.1 and 5, respectively, and are tuned on the validation set. The generator was trained for 200 epochs using Adam. We use a deep convolutional generator1 following [19] and add a batch normalization at the end of the generator to smooth the sample values. Table 1 reports the results of different methods on the MNIST datasets. On LeNet-5 models, the teacher network achieves a 98.91% accuracy while the student network using the standard back-propagation achieves a 98.65% accuracy, respectively. Knowledge distillation improved the accuracy of student network to 98.91%. These methods use the original data to train the student network. We then train a student network exploiting the proposed method to evaluate the effectiveness of the synthetic data. We \ufb01rst use the data randomly generated from normal distribution to training the student network. By utilizing the knowledge distillation, the student network achieves only an 88.01% accuracy. In addition, we further use another handwritten digits dataset, namely USPS [9], to conduct the same experiment for training the student network. Although images in two datasets have similar properties, the student network learned using USPS can only obtain a 94.56% accuracy on the MNIST dataset, which demonstrates that it is extremely hard to \ufb01nd an alternative to the original training dataset. To this end, Lopes et al. [16] using the \u201cmeta data\u201d, which is the activation record of original data, to reconstruct the dataset and achieved only a 92.47% accuracy. Noted that the upper bound of the accuracy of student network is 98.65%, which could be achieved only if we could \ufb01nd a dataset whose distribution is same as the original dataset (i.e. MNIST dataset). The proposed method utilizing generative adversarial networks achieved a 98.20% accuracy, which is much close to this upper bound. 
Also, the accuracy of student network using the proposed algorithm is superior to these using other data (normal distribution, USPS dataset and reconstructed dataset using \u201cmeta data\u201d), which suggest that our method could imitate the distribution of training dataset better. On the fully-connected models, the classi\ufb01cation accuracies of teacher and student network are 98.39% and 98.11%, respectively. Knowledge Distillation brought the 1https://github.com/eriklindernoren/PyTorchGAN/blob/master/implementations/dcgan/dcgan.py performance of student network by transferring information from teacher network to 98.39%. However, in the absence of training data, the result became unacceptable. Randomly generated noise only achieves 87.58% accuracy and \u201cmeta data\u201d [16] achieves a higher accuracy of 91.24%. Using USPS dataset as alternatives achieves an accuracy of 93.99%. The proposed method results in the highest performance of 97.91% among all methods without the original data, which demonstrates the effectiveness of the generator. 4.2. Ablation Experiments In the above sections, we have tested and veri\ufb01ed the effectiveness of the proposed generative method for student network learning without training data. However, there are a number of components, i.e. three terms in Eq. 7, when optimizing the generator. We further conduct the ablation experiments for an explicit understanding and analysis. The ablation experiment is also conducted on the MNIST dataset. We used the LeNet-5 as a teacher network and LeNet-5-HALF as a student network. The training settings are same as those in Section 4.1. Table 2 reports the results of various design components. Using randomly generated samples, i.e. the generator G is not trained, the student network achieves an 88.01% accuracy. However, by utilizing one-hot loss and feature map activation loss or one of them, the generated samples are unbalanced, which results in the poor performance of the student networks. Only introducing information entropy loss, the student network achieves an 88.14% accuracy since the samples do not contain enough useful information. When combining Loh or La with Lie, the student network achieves higher performance of 97.25% and 95.53%, respectively. Moreover, the accuracy of student network is 98.20% when using all these loss functions, which achieves the best performance. It is worth noticing that the combination of one-hot loss and information entropy is essential for training the generator, which is also utilized in some previous works [25, 10]. The ablation experiments suggest that each component of the loss function of G is meaningful. By applying the proposed method, G can generate balanced samples from different classes with a similar distribution as that in the original dataset, which is effective for the training of the student network. \fTable 3. Classi\ufb01cation result on the CIFAR dataset. Algorithm Required data FLOPS #params CIFAR-10 CIFAR-100 Teacher Original data \u223c1.16G \u223c21M 95.58% 77.84% Standard back-propagation Original data \u223c557M \u223c11M 93.92% 76.53% Knowledge Distillation [8] Original data \u223c557M \u223c11M 94.34% 76.87% Normal distribution No data \u223c557M \u223c11M 14.89% 1.44% Alternative data Similar data \u223c557M \u223c11M 90.65% 69.88% Data-Free Learning (DAFL) No data \u223c557M \u223c11M 92.22% 74.47% 4.3. Visualization Results After investigating the effectiveness of the proposed method, we further conduct visualization experiments on the MNIST dataset. 
There are 10 categories of handwritten digits from 0 to 9 in the MNIST dataset. The settings are same as that in Section 4.1. (a) Averaged images on the MNIST dataset. (b) Averaged images on the generated dataset. Figure 2. Visualization of averaged image in each category (from 0 to 9) on the MNIST dataset. Figure 2 shows the visualization results of averaged images. Noted that the generated images are unlabeled, their classes are de\ufb01ned by the prediction of the teacher network. By exploiting the information of the given network as much as possible, we design loss function for the generator. Figure 2 (b) shows the mean of images of each class. Although no real image is provided, the generated images have similar patterns with the training images, which indicates that the generator can somehow learn the data distribution. Filter visualization. Moreover, we visualize the \ufb01lters of the LeNet-5 teacher network and student network in Figure 3. Though the student network is trained without realworld data, \ufb01lters of the student network learned by the proposed method (see Figure 3 (b)) are still similar to those of the teacher network (see Figure 3 (a)). The visualization experiments further demonstrate that the generator can produce images that have similar patterns as the original images, and by utilizing generated samples, the student network could acquire valuable knowledge from the teacher network. (a) Teacher \ufb01lters. (b) Student \ufb01lters. Figure 3. Visualization of \ufb01lters in the \ufb01rst convolutional layer learned on the MNIST dataset. The top line shows \ufb01lters trained using the original training dataset, and the bottom line shows \ufb01lters obtained using samples generated by the proposed method. 4.4. Experiments on CIFAR To further evaluate the effectiveness of our method, we conduct experiments on the CIFAR dataset. We used a ResNet-34 as the teacher network and ResNet-18 as the student network2, which is complex and advanced for further investigating the effectiveness of the proposed method. These networks are optimized using Nesterov Accelerated Gradient (NAG) and the weight decay and the momentum are set as 5 \u00d7 10\u22124 and 0.9, respectively. We train the networks for 200 epochs and the initial learning rate is set as 0.1 and divided by 10 at 80 and 120 epochs, respectively. Random \ufb02ipping, random crop and zero padding are used for data augmentation as suggested in [7]. G and the student networks of the proposed method are trained for 2,000 epochs and the other settings are same as those in MNIST experiments. Table 3 reports the classi\ufb01cation results on the CIFAR-10 and CIFAR-100 datasets. The teacher network achieves a 95.58% accuracy in CIFAR-10. The student network using knowledge distillation achieves a 94.34% accuracy, which is slightly higher than that of standard BP (93.92%). We then explore to optimize the student network without true data. Since the CIFAR dataset is more complex than MNIST, it is impossible to optimize a student network using randomly generated data which follows the normal distribution. Therefore, we then regard the MNIST dataset without labels as an alternative data to train the student network using the knowledge distillation. The student network only achieves a 28.29% accuracy on the CIFAR-10 dataset. 
Moreover, we train the student network using the CIFAR2https://github.com/kuangliu/pytorch-cifar \f100 dataset, which has considerable overlaps with the original CIFAR-10 dataset, but this network only achieves a 90.65% accuracy, which is obviously lower than that of the teacher model. In contrast, the student network trained utilizing the proposed method achieved a 92.22% accuracy with only synthetic data. Besides CIFAR-10, we further verify the capability of the proposed method on the CIFAR-100 dataset, which has 100 categories and 600 images per class. Therefore, the dimensionality of the input random vectors for the generator in our method is increased to 1,000. The accuracy of the teacher network is 77.84% and that of the student network is only 76.53%, respectively. Using normal distribution data, MNIST, and CIFAR-10 to train the student network cannot obtain promising results, as shown in Table 3. In contrast, the student network learned by exploiting the proposed method obtained a 74.47% accuracy without any real-world training data. 4.5. Experiments on CelebA Besides the CIFAR dataset, we conduct our experiments on the CelebA dataset, which contains 202,599 face images of pixel 224 \u00d7 224. To evaluate our approach fairly, we used AlexNet [11] to classify the most balanced attribute in CelebA [14] following the settings in [16]. The student network is AlexNet-Half, which number of \ufb01lters is half of AlexNet. The original teacher network has about 57M parameters while the student network has only about 40M parameters. The networks is optimized for 100 epochs using Adam with a learning rate of 10\u22124. We use an alternative model of DCGAN [19] to generate color images of 224 \u00d7 224. The hyper-parameters of the proposed method are same as those in MNIST and CIFAR experiments and G. Table 4 reported the classi\ufb01cation results of student networks on the CelebA dataset by exploiting the proposed method and state-of-the-art learning methods. The teacher network achieves an 81.59% accuracy and the student network using the standard BP achieves an 80.82% accuracy, respectively. Lopes et al. [16] achieves only a 77.56% accuracy rate using the \u201cmeta data\u201d. The accuracy of the student network trained using the proposed method is 80.03%, which is comparable with that of the teacher network. Table 4. Classi\ufb01cation result on the CelebA dataset. Algorithm FLOPS Accuracy Teacher \u223c711M 81.59% Standard back-propagation \u223c222M 80.82% Knowledge Distillation [8] \u223c222M 81.35% Meta data [16] \u223c222M 77.56% Data-Free Learning (DAFL) \u223c222M 80.03% 4.6. Extended Experiments Massive experiments are conducted on several benchmarks to verify the performance of the DAFL method for learning student networks using generated images. Wherein, architectures of used student networks are more portable than those of teacher networks. To investigate the difference between original training images and generated images, we use these generated images to train networks of the same architectures as those of teacher networks using the proposed methods. The results are reported in Table 5. It can be found in Table 5 that LeNet-5 and HintonNet on the MNIST dataset achieve a 98.91% accuracy and a 98.39% accuracy, respectively. In contrast, accuracies of student networks trained from scratch with same architectures are 98.47% and 98.08%, respectively, which are very close to those of teacher networks. 
In addition, student networks on the CIFAR-10 and the CIFAR-100 datasets also obtain similar results to those of teacher networks. These results demonstrate that the proposed method can effectively approximate the original training dataset by extracting information from teacher networks. If the network architectures are given, we can even replicate the teacher networks and achieve similar accuracies. Table 5. Classi\ufb01cation results on various datasets. Dataset Model Accuracy Teacher Student MNIST LeNet-5 [12] 98.91% 98.47% MNIST HintonNet [8] 98.39% 98.08% CIFAR-10 ResNet-34 [7] 95.58% 93.21% CIFAR-100 ResNet-34 [7] 77.84% 75.32% CelebA AlexNet [11] 81.59% 80.56% 5." + } + ], + "Chao Xu": [ + { + "url": "http://arxiv.org/abs/2403.01901v2", + "title": "FaceChain-ImagineID: Freely Crafting High-Fidelity Diverse Talking Faces from Disentangled Audio", + "abstract": "In this paper, we abstract the process of people hearing speech, extracting\nmeaningful cues, and creating various dynamically audio-consistent talking\nfaces, termed Listening and Imagining, into the task of high-fidelity diverse\ntalking faces generation from a single audio. Specifically, it involves two\ncritical challenges: one is to effectively decouple identity, content, and\nemotion from entangled audio, and the other is to maintain intra-video\ndiversity and inter-video consistency. To tackle the issues, we first dig out\nthe intricate relationships among facial factors and simplify the decoupling\nprocess, tailoring a Progressive Audio Disentanglement for accurate facial\ngeometry and semantics learning, where each stage incorporates a customized\ntraining module responsible for a specific factor. Secondly, to achieve\nvisually diverse and audio-synchronized animation solely from input audio\nwithin a single model, we introduce the Controllable Coherent Frame generation,\nwhich involves the flexible integration of three trainable adapters with frozen\nLatent Diffusion Models (LDMs) to focus on maintaining facial geometry and\nsemantics, as well as texture and temporal coherence between frames. In this\nway, we inherit high-quality diverse generation from LDMs while significantly\nimproving their controllability at a low training cost. Extensive experiments\ndemonstrate the flexibility and effectiveness of our method in handling this\nparadigm. The codes will be released at\nhttps://github.com/modelscope/facechain.", + "authors": "Chao Xu, Yang Liu, Jiazheng Xing, Weida Wang, Mingze Sun, Jun Dan, Tianxin Huang, Siyuan Li, Zhi-Qi Cheng, Ying Tai, Baigui Sun", + "published": "2024-03-04", + "updated": "2024-04-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Talking face generation [14, 51, 52, 55, 56, 63, 67] is a challenging task that aims to synthesize video based on provided audio and image. This technique finds wide application in various practical scenarios, especially virtual interaction. However, users encountered a dilemma during the process. They are concerned about privacy breaches when using real facial images, while the virtual avatars generated by off-theshelf methods often fail to align well with their own voices. Thus, we envision a new paradigm if it is possible to liberate the specified source face and directly infer the synchronized virtual portrait that matches the real audio input. arXiv:2403.01901v2 [cs.CV] 1 Apr 2024 \fIndeed, it is an intuitive process, as people often analyze a voice and then mentally visualize the corresponding video clip. 
To realize such Listening and Imagining paradigm, there are two critical issues. 1) How to disentangle face-related features solely from audio to ensure strong consistency between synthesized videos and given audio. We first explore the natural association between auditory and visual perception. It is true that the facial identity features are closely related to voice characteristics [2, 8, 40]. For example, a pronounced chin and prominent brow ridges usually accompany a deep voice, while women and children often have higher pitches. In addition, spoken content involves localized lip movement, and the emotion style reflects the global facial cues [11, 24]. Accurate decoupling of the above three factors, i.e., identity, content, and emotion, has a significant impact on subsequent generation. However, existing researches [20, 36] mainly focus on the disentanglement of the latter two elements, while other studies [33, 59], although exploring identity, are geared towards generating static face image instead of dynamic animations. 2) How can we maintain diversity between different videos while ensuring consistency within each video by a single network. It is the consensus that human imagination is boundless. With the same audio, we can imagine numerous different talking videos, but within each clip, all frames share the uniform information. Benefiting from the progress of diffusion models, we have two potential methods available. One is the combination of text-to-image synthesis, e.g., Latent Diffusion Models (LDMs) [42] and common driving methods like SadTalker [67] and DiffTalk [46]. The other is producing videos by borrowing the framework of text-to-video synthesis [12, 28, 38, 61]. However, the former involves two isolated models, while the latter is difficult to perfectly match the control conditions, resulting in noticeable differences between each frame. Additionally, neither of them fully utilizes the audio features. In this paper, we are dedicated to solving these two problems. Firstly, due to the high degree of coupling in audio, it is challenging to directly employ cross reconstruction under the guidance of pseudo training pairs for disentanglement [20, 36]. To simplify it, we turn to the prior knowledge 3DMM for help and propose a Progressive Audio Disentanglement (PAD) to gradually decouple identity, content, and emotion. As shown in Fig. 1, we start by predicting the most independent and fundamental identity cues, including the explicit facial geometry and implicit semantics, i.e., shape, gender, and age. Then the estimated shape is used as a condition to disentangle the localized content from the audio. We further supplement the remaining details of global emotion to generate comprehensive facial representations. In this way, we accurately estimate the face-related geometry and semantics from the audio. For the second concern, we propose a Controllable Coherent Frame (CCF) generation, which offers several appealing designs. Firstly, inspired by the subject-driven models [43, 48], we combine the implicit audio cues with pred-defined prompt and map them to the CLIP domain [39], which influence the inferred faces both on semantics and emotions. 
Secondly, to avoid introducing extra offline components and ensure that the diffusion model has both invariance and variability, we freeze the LDMs and propose three trainable adapters to inject hierarchical conditions, which are equipped with a flexible mechanism that determines whether the outputs have new appearance or remain faithful to the neighboring frame. We further collaborate the CCF with autoregressive inference for diverse and temporally consistent video generation. In summary, we present the following contributions. \u2022 We propose a new paradigm, Listening and Imagining, for generating diverse and coherent talking faces based on a single audio, which is the first attempt in this field. \u2022 We propose a novel PAD that gradually separates the identity, content, and emotion from the audio, providing a solid foundation for accurate and vivid face animations. \u2022 We propose a novel frozen LDMs-based CCF with three trainable adapters to faithfully integrate the audio features and address the dilemma of intra-video diversity and inter-video consistency within a single model. 2. Related Work 2.1. Audio-driven Talking Face Generation Talking face synthesis could be roughly divided into two folds. One of the research directions is GAN-based methods. Early efforts project the audio cues into latent feature space [6, 27, 35, 54, 55, 72, 73] and utilize a conditioned image generation framework to synthesize faces. To compensate for information loss in implicit codes, subsequent works have incorporated explicit structural information, such as landmark [46, 71] and 3DMM [41, 63, 65, 67], to more accurately reflect the audio features on the face. Similarly, 3DMM is used as an intermediate representation in our method. We further leverage it to fully decouple the audio signal. Motivated by progress of the diffusion model in image synthesize tasks, recent approaches [10, 46, 51, 64] focus on another research line. They carefully design conditional control modules and train them along with UNet to achieve high-fidelity faces. However, trainable diffusion models fail to inherit many appealing properties, such as diverse generation. In this work, we explore the feasibility of using pretrained models to maintain diversity. 2.2. Audio to Face Generation Human voices carry a large amount of personal information, including speaker identity [8, 40], age [15, 49], gender [25] and emotion style [58, 69]. Based on previous studies, it is possible to directly predict entire face from corresponding \faudio. Common methods [1, 4, 33, 59] adopt GANs to generate face images from voice input. However, the above reconstruction-based attempts are not reasonable because audio lacks specific visual features like hairstyles and backgrounds, while these attributes can vary significantly within the same person. Consequently, CMP [60] only models the correlation between facial geometry and voice, but the results lack authenticity. We propose an ideal solution that involves inferring facial structure information from audio and then combining it with a conditional generative model to achieve controllable and diverse generation. 2.3. Diffusion Models Diffusion models [18, 32] are popular for generating realistic and diverse samples. In practice, DDIM [50] converts the sampling process to a faster non-Markovian process, while LDMs [42] perform diffusion in a compressed latent space to reduce memory and runtime. Recently, textto-image generations [13, 19, 26, 44, 45] have gain significant attentions. 
For personalized control, DreamBooth [43] proposes a subject-driven generation by fine-tuning diffusion models. To reduce training costs, T2I-Adapter [31] only trains the extra encoder to influence the denoising process. Similarly, our work incorporates the above two characteristics. Another challenging topic is text-to-video generation [3, 12, 61, 62, 68]. However, they suffer from inconsistency across video frames both temporally and semantically. Additionally, they struggle to generate long videos, impacting the production of high-fidelity talking faces. Therefore, we utilize an autoregressive inference strategy to ensure smooth and consistent transitions between frames, even for long-duration videos. 3. Method Listening and imagining are instinctive behaviours for humans to perceive and comprehend the world. When people hear a voice, they first analyze the characteristics of the speaker, i.e., who is speaking (Identity), what is being said (Content), and how it feels (Emotion), and then imagine diverse and temporally consistent dynamic scenes that match the inferred audio features. To simulate such process, we follow a two-stage phase. Firstly, a Progressive Audio Disentanglement (PAD) is designed to gradually separate the identity, content, and emotion from the coupled audio in Sec. 3.1. Secondly, we design Controllable Coherent Frame (CCF) generation (Sec. 3.2), which inherits the diverse generation capabilities from Latent Diffusion Models (LDMs) and develops several techniques to enhance controllability: Texture Inversion Adapter (TIA), Spatial Conditional Adapter (SCA), and Mask-guided Blending Adapter (MBA) jointly ensure the generated result meets specific conditions. Autoregressive Inference is utilized to maintain temporal consistency throughout the video. 3.1. Progressive Audio Disentanglement To extract independent cues from the entangled audio signals, previous work [20] builds aligned pseudo pairs and adopts the cross-reconstruction manner for content and emotion disentanglement. However, due to the inevitable introduction of errors when constructing pseudo pairs, they become unreliable when more elements are involved. To this end, we only use a single audio for training and adopt 3DMM [9] to achieve accurate disentangled 3D facial prior, e.g., shape \u03b1 \u2208R80, expression sequences \u03b2 \u2208RL\u00d764, etc., which excludes the face-unrelated factors, such as hairstyle and makeup, and serve as the ground truth for supervision during training. Note that a video shares the same shape while the expression corresponds to each frame. In particular, we propose Progressive Audio Disentanglement to simplify the decoupling process by extracting the fundamental identity information, then separating the localized mouth movements, and finally obtaining the global expression, outlined in Fig. 2. For more discussions about the disentanglement order (identity \u2192content \u2192emotion), the superiority of PAD compared to other methods, and architecture details, please refer to the supplementary materials. Identity. Motivated by the recent studies that voice is articulatory related to skull shape, as well as gender and age [16, 29], the identity hinted in audio could refer to the facial geometry and semantic embedding. 
In practice, the identity encoder Φ^id_E is a Transformer-based architecture with an extended learnable face-shape token α̃ ∈ R^512, mapping the MFCC sequence A ∈ R^{L×1280} to both the estimated shape α̂ ∈ R^80 and the semantics θ_id ∈ R^512: ᾱ, θ̄_id = Φ^id_E([α̃, MLPs(A)] + PE), (1) and α̂ = MLPs(ᾱ), θ_id = Avg(θ̄_id), (2) where PE is the positional embedding, ᾱ ∈ R^512, and θ̄_id ∈ R^{L×512}. During training, we calculate a Reconstruction Loss between the predicted α̂ and the ground truth α: L_id = ∥α − α̂∥_2. Moreover, we apply a Contrastive Loss on the semantic embeddings by constructing a positive pair (θ^p_id, θ_id) with the same identity and a negative pair (θ^n_id, θ_id) with a different identity. The InfoNCE loss [34] with cosine similarity S is then enforced between the two pairs:
\mathcal{L}_{con} = -\log\left[\frac{\exp(\mathcal{S}(\theta_{id}^p, \theta_{id}))}{\exp(\mathcal{S}(\theta_{id}^p, \theta_{id})) + \exp(\mathcal{S}(\theta_{id}^n, \theta_{id}))}\right]. (3)
Content. The 3DMM expression coefficient involves local lip and global facial movements, while the spoken content is mainly related to the former. Inspired by [55], we incorporate the lip-reading expert [47] into the training phase and produce only the mouth-related expression coefficients l ∈ R^{L×64} for content disentanglement. As shown in Fig. 2, we construct a Transformer-based encoder-decoder architecture, where the linguistic features l̄ ∈ R^{L×64} embedded
(Figure 2 diagram: the Progressive Audio Disentanglement stage with its identity, content, and emotion encoders/decoders, and the Controllable Coherent Frame Generation stage with the Tokenizer & Lookup, CLIP text encoder, adapters F_TI, F_SC, F_MB, and the frozen denoising U-Net ε_θ.) Figure 2. Overview of the proposed method.
Our approach involves a two-stage framework that corresponds to the Listening and Imagining. For listening, the Progressive Audio Disentanglement gradually separates the identity, content, and emotion from the entangled audio. For Imagining, Controllable Coherent Frame generation receives the facial semantics (\u03b8id and \u03b8e inferred from PAD) and geometry (3D mesh Ird rendered from \u02c6 \u03b1, \u02c6 \u03b2 and other coefficients extracted from I) to synthesize the diverse audio-synchronized faces, while the Iad, Iid, and Ibg are further introduced to achieve highly controllable generation with complete visual and temporal consistency. Please refer to Alg. 1 for more details. In this way, we achieve diverse and high-fidelity face animation solely from audio. by the content encoder \u03a6c E and the identity template output by the frozen identity encoder are combined together and sent into the content decoder \u03a6c D to obtain l: \u00af l = \u03a6c E(MLPs(A) + PE), (4) l = MLPs(\u03a6c D(\u00af l + MLPs(\u02c6 \u03b1))). (5) We train this stage by using two losses: Regularization Loss calculates the distance between \u03b2 and l with a relative small weight to smooth training phase: Lreg = \u2225l \u2212\u03b2\u22252. Besides, we project the coefficients to the image domain with textures, and utilize lip-reading expert to predict the text \u02c6 X. Assuming that the original text content is X, we compute the Lip-reading Loss via cross-entropy: Llip = \u2212XlogP( \u02c6 X|V ), where V is the ground truth video. Emotion. In this stage, we directly utilize \u03b2 as constraint to decouple the remaining emotion styles. As shown in Fig. 2, based on the second stage, an additional emotion encoder \u03a6e E is introduced to generate the pooled emotion embeddings \u03b8e \u2208R512, which is combined with identity \u02c6 \u03b1 and linguistic features \u00af l for the expression coefficient prediction by the emotion decoder \u03a6e D: \u03b8e = Avg(\u03a6e E(MLPs(A) + PE)), (6) \u02c6 \u03b2 = MLPs(\u03a6e D(\u03b8e + \u00af l + MLPs(\u02c6 \u03b1))). (7) Similar to Eq. 3, we adopt Contrastive Loss to extract more discriminative emotion features, details of which are omitted here. Additionally, the regular Reconstruction Loss is used to supervise generated \u02c6 \u03b2. 3.2. Controllable Coherent Frame Generation Existing methods typically specify the input source face along with audio cues for video generation, while our objective is to achieve visually diverse and audio-consistent animation directly from input audio. Recent LDMs are naturally suited to diverse generation, and their conditional text prompts offer a pathway for freely editing attributes that cannot be deduced from the audio, as shown in Fig. 1. In exchange, they are relatively weak in controllability, especially in video generation [38, 61]. Thus, to tailor it to our task without introducing extra offline models (e.g., LDMs + DiffTalk[46]), we must tackle two challenges: ensuring that the synthesized video content aligns with the given conditions, and achieving smooth temporal transitions across frames. Our targeted designs are depicted in Fig. 2. Textual Inversion Adapter. The critical issue in injecting the identity \u03b8id and emotion \u03b8e inferred from PAD to the frozen LDMs lies in aligning them with the CLIP domain. 
Inspired by the inversion technique [43], we propose a Textual Inversion Adapter F_TI, which maps the input vector-based conditions into a set of token embeddings that sufficiently represent these conditions in the CLIP token embedding space. In detail, we first predefine the prompt for basic high-quality face generation. It is tokenized and mapped into the token embedding space by a CLIP embedding lookup module, obtaining {Y^1, . . . , Y^M}. Then, the adapter encodes θ_id and θ_e into the aligned pseudo-word token embeddings {Ŷ^1, . . . , Ŷ^N}. We concatenate these two sets of embeddings and feed them to the CLIP text encoder, whose output Y is applied to the cross-attention layers of the LDMs to guide the denoising process.
Algorithm 1 Autoregressive Inference
Input: z_T: random noise; {I^i_rd}^H_{i=1}: 3D mesh sequence ▷ inferred from audio; Y: token embeddings ▷ inferred from audio
▷ i = 1, diverse mode
Î^1 = D(CCF(z_T, E(I^1_rd), Y))
▷ i = 2 . . . H, coherent mode
for i = 2 . . . H do
    I^i_id = Î^1, I^i_bg = G_IA(Î^1)
    I^i_ad = M(Î^{i−1}), m^i = G_MOD(Î^{i−1})
    Î^i = D′(CCF(z_T, E([I^i_rd, I^i_id, I^i_ad]), Y), E(I^i_bg), m^i)
end for
Output: V̂: generated video
When training this adapter, we freeze all of the other parameters:
\mathbb{E}_{z_0, \varepsilon \sim N(0, I), t, Y}\left\|\varepsilon - \varepsilon_\theta(z_t, t, Y)\right\|_2^2. (8)
Spatial Conditional Adapter. Inspired by T2I-Adapter, we devise a Spatial Conditional Adapter F_SC to further fuse the explicit conditions. As shown in Fig. 2, the 3D face mesh I_rd contains rich audio-synchronized facial geometry, i.e., face shape, lip movement, and expression style. Besides, we sample a random frame I_id of the same identity to provide the face appearance and background. These two conditions are enough for common methods [67, 73] to animate the identity face to the desired pose and expression. However, it is difficult for frozen LDMs to learn such complex spatial transformations. Thus, we further incorporate the adjacent frame I_ad as an additional reference, and mask its mouth area with the mask M produced from the 3D mesh to prevent the network from learning shortcuts by directly copying from it. This condition makes the deformation learning much easier and also provides motion cues. We only train this adapter in this stage:
\mathbb{E}_{z_0, \varepsilon \sim N(0, I), t, F_{sc}}\left\|\varepsilon - \varepsilon_\theta(z_t, t, Y, F_{sc})\right\|_2^2, (9)
where F_sc = F_SC(E(I_rd) + E(I_id) + E(I_ad)) when p < 0.8, and F_sc = F_SC(E(I_rd)) otherwise. E is the VAE encoder, and p is a random number drawn uniformly from [0, 1].
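A minimal sketch of a single adapter training step corresponding to Eqs. (8)-(9) is given below, assuming a PyTorch-style setup; `eps_theta` (the frozen denoising UNet), `enc` (the frozen VAE encoder), `F_sc` (the trainable Spatial Conditional Adapter), and `add_noise` (a DDPM-style forward process) are placeholders for the actual components, and only the adapter would receive gradients.

```python
# Minimal sketch (assumptions noted above) of the adapter-conditioned noise-prediction
# objective in Eqs. (8)-(9), including the random condition dropping with p < 0.8.
import random
import torch
import torch.nn.functional as F

def sca_training_step(eps_theta, enc, F_sc, text_emb_Y, I_rd, I_id, I_ad, z0, add_noise, T=1000):
    # Occasionally keep only the mesh condition so the model also learns the "diverse" mode.
    if random.random() < 0.8:
        cond = F_sc(enc(I_rd) + enc(I_id) + enc(I_ad))
    else:
        cond = F_sc(enc(I_rd))
    t = torch.randint(0, T, (z0.shape[0],), device=z0.device)
    noise = torch.randn_like(z0)
    z_t = add_noise(z0, noise, t)                  # forward diffusion q(z_t | z_0)
    pred = eps_theta(z_t, t, text_emb_Y, cond)     # frozen UNet with adapter features injected
    return F.mse_loss(pred, noise)                 # epsilon-prediction loss
```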
Masked-guided Blending Adapter. Although spatial pixel-level conditions have been provided, we observe significant distortion artifacts in the background, resulting in a low fidelity of the synthesized video. Therefore, we introduce a Masked-guided Blending Adapter F_MB into the VAE decoder D, forming D′, which utilizes a mask to directly copy the masked region of the k-th decoder feature F^k_vd while combining the unmasked region of the k-th background feature F^k_ve from the VAE encoder. The dilated mask m and the background I_bg are produced by MODNet [23] (G_MOD) and Inpaint-Anything [66] (G_IA), respectively. The blending process is formulated as:
\hat{F}_{vd}^k = F_{vd}^k \otimes m + \text{Conv}(F_{ve}^k) \otimes \tilde{m}, (10)
where ⊗ is element-wise multiplication and m̃ = 1 − m. Experimentally, we insert this module into the decoder layer with a resolution of 512. The training only involves the frozen autoencoder and a trainable convolution layer, under the supervision of reconstruction and VGG losses. With this design, the synthesized video maintains a consistent background, and the fusion edge is seamlessly harmonious. Note that the above three adapters are trained sequentially.
Figure 3. Qualitative results of audio-to-face on MEAD. Icons of the same color indicate samples from the same audio. Ours-I, -C, and -E denote the identity, content, and emotion decoupling stages (panel labels: Neutral, Fearful, Ours-I Mesh, Ours-C Mesh, Ours-E Mesh, CMP Mesh, CMP Image, Ours Image).
Autoregressive Inference. The designed CCF has diverse and controllable image generation capabilities. We incorporate an Autoregressive Inference to further generate diverse and temporally coherent videos. As shown in Alg. 1, unlike DiffTalk [46], which starts from a given frame, for the first frame we switch CCF to the diverse mode, i.e., it only receives the audio-enhanced CLIP embedding Y and the encoded 3D face mesh features E(I^1_rd), generating a face Î^1 with varying appearance while maintaining geometry consistent with the audio. For the subsequent frames, we further apply the three adapters to achieve controllable generation. Specifically, I^i_id and I^i_bg are Î^1 and G_IA(Î^1) all the time, I^i_rd is the mesh at time stamp i, and I^i_ad is the masked former frame Î^{i−1}, with m^i = G_MOD(Î^{i−1}), since the difference between adjacent frames is small.
Figure 4. Visual comparison with recent SOTA methods (Ours, SadTalker, EAMM, PC-AVS, Wav2Lip, GT). Images are obtained from officially released codes for fair comparison. The first sample is selected from HDTF and the second from MEAD. For the third, based on the audio of the first row, our method generates an unseen face, which all competitors then use as the source face to produce talking faces; the first row provides the ground truth for the facial expression.
4. Experiments 4.1. Experimental Setup Datasets. We adopt three talking face datasets, MEAD [57], VoxCeleb2 [7], and HDTF [70], in our experiments. Metrics. We adopt PSNR, SSIM, and FID [17] to measure the image quality of the synthesized videos. The distance between the landmarks of the mouth (LMD) and the confidence score (Sync) proposed in SyncNet [5] are used to measure audio-visual synchronization. Besides, we compute emotion accuracy (EACC) to measure the generated emotions. Furthermore, we compute the cosine similarity of geometry (Shape) and semantics (Age, Gender, Emotion), and the overall quality (FID), between the ground truth and the faces inferred from audio by using several off-the-shelf tools, D3DFR [9], FairFace [22], and FAN [30]. Implementation Details. For PAD training, we adopt VoxCeleb2 for identity disentanglement and the MEAD training set for content and emotion. The audios are pre-processed to 16 kHz and converted to mel-spectrograms with an 80-bin Mel filterbank, as in Wav2Lip. The length of the sampled video clip is L = 32.
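A minimal sketch of the audio preprocessing just described (16 kHz waveforms converted to an 80-bin mel-spectrogram) is shown below; librosa is assumed, and the hop and window lengths are illustrative choices rather than values taken from the paper.

```python
# Minimal sketch (assumptions noted above) of the 16 kHz / 80-mel audio preprocessing.
import librosa
import numpy as np

def audio_to_mel(path, sr=16000, n_mels=80, hop_length=200, win_length=800):
    wav, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=wav, sr=sr, n_mels=n_mels,
                                         hop_length=hop_length, win_length=win_length)
    return librosa.power_to_db(mel).astype(np.float32)   # shape: (n_mels, frames)
```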
For CCF training, the input videos are cropped and resized to 512 × 512. We adopt MEAD and HDTF as the training sets and optimize TIA, SCA, and MBA sequentially. Training takes about 2.5 days in total on 8 V100 GPUs. N = 8, and M is the length of the prompts in SCA. For the quantitative assessment, we sample 12 identities from MEAD and 10 from HDTF, which serve as the test set. The denoising step T is set to 50 for both training and inference. 4.2. Comparison with State-of-the-Art Methods Qualitative Results. In this section, we first perform audio-to-face comparisons with CMP [60]. As shown in Fig. 3, we display the generated 3D meshes and their corresponding faces for our method in rows 4, 5, and for CMP in rows 6, 7. The real faces in the first row provide geometry and semantics references. It can be seen that CMP fails to generate the desired shapes, and its synthesized faces have blurred details with obvious artifacts. By contrast, benefiting from progressive disentanglement and LDMs, our method produces more accurate geometry, including shape, lip movement, and emotion styles, together with high-quality realistic textures. Besides, in each sample, the first two columns correspond to the same audio, while the third represents a different audio of the same person. Across different audio clips, our method produces more consistent results than CMP. We further compare our method with talking face methods including Wav2Lip, PC-AVS, EAMM, and SadTalker, which are reproduced from their officially released codes. As shown in Fig. 4, in terms of emotion accuracy, audio-visual synchronization, and image quality, our method outperforms these SOTA methods. Notably, for the third sample, the other competitors adhere to the two-stage paradigm, i.e., Audio-to-Image and then Image-to-Video, while our method autoregressively infers the video from the given audio.
Figure 5. Illustration of disentangled controllability. (a) is under the diverse mode (columns: Ref, Exchange Content, Exchange Semantic, Exchange Emotion, Output) and (b) is under the coherent mode (Angry + Content-1, Disgusted + Content-1, Happy + Content-1, Sad + Content-1, Sad + Content-2, Source, Output).
Table 1. Quantitative comparison with CMP. Each value is the average over all samples on the two test sets (same as in Tab. 2).
Method | Shape ↑ | Age ↑ | Gender ↑ | Emotion ↑ | FID ↓
CMP [60] | 0.60 | 0.41 | 0.87 | 0.23 | 228.05
Ours | 0.75 | 0.58 | 0.93 | 0.76 | 25.78
Table 2. Quantitative comparison with state-of-the-art talking face methods on the two test sets.
Method | EACC ↓ | LMD ↓ | Sync ↑ | FID ↓ | PSNR ↑ | SSIM ↑
Wav2Lip [37] | 0.129 | 2.96 | 5.93 | 40.79 | 29.05 | 0.59
PC-AVS [73] | 0.102 | 2.71 | 5.75 | 45.84 | 29.28 | 0.59
EAMM [21] | 0.097 | 2.75 | 5.46 | 50.55 | 29.14 | 0.61
SadTalker [67] | 0.121 | 2.80 | 5.84 | 31.62 | 29.55 | 0.64
Ours-C | 0.113 | 2.82 | 5.86 | 28.09 | 29.59 | 0.60
Ours-E | 0.090 | 2.67 | 5.91 | 28.37 | 29.66 | 0.67
Quantitative Results. As shown in Tab. 1, our method excels over CMP across all metrics, especially FID, which is consistent with the qualitative results in Fig. 3. Moreover, we observe that gender is relatively easy to predict, while the other attributes are more challenging; yet our method still gains significant improvements. We further present the quantitative comparison for talking face generation in Tab. 2, which reveals that our method achieves the best performance on most metrics. Wav2Lip surpasses all the competitors on Sync due to the supervision of the synchronization scoring model. We further report a user study in the supplementary materials.
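The attribute-consistency numbers in Tab. 1 are cosine similarities between features of generated and ground-truth faces. A minimal, tool-agnostic sketch is given below; `extract_feat` stands in for the off-the-shelf predictors mentioned in Sec. 4.1 (e.g., a 3D face reconstructor for Shape, or attribute classifiers for Age/Gender/Emotion), whose exact interfaces are not specified here.

```python
# Minimal sketch (placeholder feature extractors) of the attribute-similarity metrics.
import numpy as np

def cosine(u, v, eps=1e-8):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps))

def attribute_similarity(gen_faces, gt_faces, extract_feat):
    """extract_feat(face) -> 1-D feature vector for one attribute (shape, age, ...)."""
    scores = [cosine(extract_feat(g), extract_feat(r)) for g, r in zip(gen_faces, gt_faces)]
    return float(np.mean(scores))
```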
4.3. Further Analysis Disentangled Controllability. To demonstrate the facial factor disentanglement of our method, we first sample two distinctly different faces and exchange one factor at a time in the diverse mode. Although our method independently controls the desired factor, i.e., content (lip movement), semantics (gender and age), and emotion style in columns 3, 4, 5 of Fig. 5(a), the results inherit the weakness of LDMs in preserving visual consistency. As marked by the red rectangles in row 1, despite only editing the mouth shape, the faces exhibit noticeable differences in appearance, which indicates that simply generating frames and splicing them together cannot create visually and temporally consistent videos. Furthermore, we conduct a more rigorous analysis in the coherent mode. The emotion control (cols. 1-4) and content editing (cols. 4-5) are illustrated in Fig. 5(b), verifying the effectiveness of the disentanglement. Interpretability of Disentanglement. We further perform qualitative experiments to demonstrate the disentanglement of the learned identity and emotion representations. Firstly, we randomly sample four audios of different ages and genders from the CelebV-HQ [74] dataset and select their most consistent pairs according to the cosine similarity between the two semantic features. We display the corresponding images for visual comparison in Fig. 6, where the retrieved audios have similar ages and genders to those of the query. We also attach the generated face in column 2, which closely resembles the conditions in column 1, illustrating the effectiveness of our method. Secondly, to evaluate the disentanglement of the emotion, we use t-SNE [53] to visualize the emotion latent space in Fig. 7. It is obvious that the different emotions are clustered into separate groups. Based on these analyses, we conclude that PAD contributes to decoupling the entangled facial cues from the audio.
Figure 6. Illustration of the image retrieval using the identity semantic features (panels: query audios, output, retrieved audios). The faces in column 1 are only shown for reference.
Figure 7. Clusters of the disentangled emotion embeddings. Different emotions are clustered in different regions.
Figure 8. Ablation study on the number of pseudo-word tokens in TIA. We do not apply SCA and MBA in this stage.
Metric | N=1 | N=4 | N=8 | N=16 | w/o PAD
Age | 0.23 | 0.42 | 0.51 | 0.50 | 0.38
Gender | 0.90 | 0.91 | 0.91 | 0.92 | 0.91
Emotion | 0.49 | 0.62 | 0.71 | 0.73 | 0.53
Figure 9. Ablation study on the design of the conditions in SCA (panels: Ref, TIA, TIA + SCA (I_rd), TIA + SCA (I_rd + I_id), TIA + SCA (I_rd + I_ad), TIA + SCA (ALL), jointly training TIA and SCA, tuning the UNet).
4.4. Ablation Study and Efficiency Evaluation Impact of Disentangled Module in PAD. To verify the superiority of PAD, we show the generated meshes under the different stages in Fig. 3 rows 2-4. The first stage handles identity disentanglement, and the corresponding meshes accurately reflect the desired face shapes and outlines. The following content disentanglement ensures that the mouth movements synchronize with the spoken phonemes. The final emotion disentanglement enhances the global facial cues, leading to meshes that are highly consistent with the reference faces, e.g., more accurate lips with the fearful emotion (the right sample). Note that the left sample with neutral emotion shows no noticeable changes. Besides, the last two rows of Tab. 2 can also prove the effectiveness of PAD.
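For the embedding-space analysis behind Fig. 7 above, a minimal sketch of the t-SNE projection of the pooled emotion embeddings θ_e is as follows; scikit-learn and matplotlib are assumed, and the perplexity is an illustrative choice.

```python
# Minimal sketch (assumptions noted above): project emotion embeddings with t-SNE
# and color the points by their emotion label, as in Fig. 7.
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_emotion_clusters(emotion_embeddings, labels, perplexity=30, seed=0):
    # emotion_embeddings: (num_clips, 512) array of theta_e vectors; labels: (num_clips,)
    labels = np.asarray(labels)
    coords = TSNE(n_components=2, perplexity=perplexity, random_state=seed).fit_transform(
        np.asarray(emotion_embeddings))
    for lab in np.unique(labels):
        m = labels == lab
        plt.scatter(coords[m, 0], coords[m, 1], s=8, label=str(lab))
    plt.legend()
    plt.show()
```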
Impact of Pseudo-word Tokens in TIA. In Fig. 8, we show the results when varying the number of pseudo-word tokens in TIA. We observe that too few tokens cannot synthesize semantically consistent results, while too many do not lead to further visual improvement. To balance computational load and performance, we set N to 8 experimentally. Besides, the sixth column indicates that when the audio features are input into TIA directly, the output suffers degradation due to insufficient decoupling. We further attach the quantitative results in Fig. 8. Since TIA does not involve face geometry, the shape metric is not displayed. These results are based on 100 faces sampled from CelebV-HQ. Impact of Conditions in SCA. The conditions in SCA are the critical design in this work for generating controllable and coherent frames. As shown in Fig. 9 row 1, the model with TIA can generate semantically consistent faces but fails to preserve the geometric information (column 2). Introducing the SCA module with the I_rd input tackles the spatial misalignment (column 3). The adjacent frame I_ad is further incorporated to maintain an appearance similar to the reference face (column 4), but it introduces obvious artifacts, especially in the mouth area. When taking I_rd and I_id as input, as previously mentioned, the frozen LDMs encounter difficulties in learning complex transformations, so that neither the appearance nor the geometry is well aligned (column 5). In this work, we employ all three inputs together for SCA (column 6), where I_rd is responsible for the facial structure, I_ad for the appearance, while I_id further refines the details and enhances the image quality.
Figure 10. Ablation study on the design of the blending structure in MBA (panels: GT, w/o MBA, w. MBA (all layers), w. MBA (256 + 512), w. MBA (512), w. MBA (no mask)). This case is sampled from Fig. 4.
Impact of Blending Structure in MBA. In Fig. 10, the result in the second column is produced by the method without MBA, which fails to preserve the background details. To solve this challenge, we design several variants to explore the blending structure. Without mask guidance, the synthesized background still differs from the ground truth (column 3). For this reason, we introduce the mask into all VAE decoder layers (column 4); due to the edge aliasing in the low-resolution masks, although this solves the background issue, obvious artifacts appear around the blending edges. When the mask is applied at the 512 resolution (column 6), the generated face achieves the best visual performance, with a consistent background and seamless edge blending. Impact of Training Strategy in CCF. We explore the effectiveness of sequential training in CCF. As shown in Fig. 9, when jointly training TIA and SCA, the former does not work well because the facial semantics can take shortcuts, i.e., learning from I_id and I_ad instead of the token embeddings Y. Efficiency Evaluation. If we follow DiffTalk and unfreeze the UNet parameters during SCA training, the number of trainable parameters increases from 282.9M to 1.15G. Furthermore, tuning the UNet does not bring significant benefits in Fig. 9. Thus, our method is effective and user-friendly. 5."
+ } + ], + "Jie Hu": [ + { + "url": "http://arxiv.org/abs/2401.09665v1", + "title": "Accelerating Distributed Stochastic Optimization via Self-Repellent Random Walks", + "abstract": "We study a family of distributed stochastic optimization algorithms where\ngradients are sampled by a token traversing a network of agents in random-walk\nfashion. Typically, these random-walks are chosen to be Markov chains that\nasymptotically sample from a desired target distribution, and play a critical\nrole in the convergence of the optimization iterates. In this paper, we take a\nnovel approach by replacing the standard linear Markovian token by one which\nfollows a nonlinear Markov chain - namely the Self-Repellent Radom Walk (SRRW).\nDefined for any given 'base' Markov chain, the SRRW, parameterized by a\npositive scalar {\\alpha}, is less likely to transition to states that were\nhighly visited in the past, thus the name. In the context of MCMC sampling on a\ngraph, a recent breakthrough in Doshi et al. (2023) shows that the SRRW\nachieves O(1/{\\alpha}) decrease in the asymptotic variance for sampling. We\npropose the use of a 'generalized' version of the SRRW to drive token\nalgorithms for distributed stochastic optimization in the form of stochastic\napproximation, termed SA-SRRW. We prove that the optimization iterate errors of\nthe resulting SA-SRRW converge to zero almost surely and prove a central limit\ntheorem, deriving the explicit form of the resulting asymptotic covariance\nmatrix corresponding to iterate errors. This asymptotic covariance is always\nsmaller than that of an algorithm driven by the base Markov chain and decreases\nat rate O(1/{\\alpha}^2) - the performance benefit of using SRRW thereby\namplified in the stochastic optimization context. Empirical results support our\ntheoretical findings.", + "authors": "Jie Hu, Vishwaraj Doshi, Do Young Eun", + "published": "2024-01-18", + "updated": "2024-01-18", + "primary_cat": "math.PR", + "cats": [ + "math.PR", + "cs.LG", + "math.OC" + ], + "main_content": "INTRODUCTION Stochastic optimization algorithms solve optimization problems of the form \u03b8\u2217\u2208arg min \u03b8\u2208Rd f(\u03b8), where f(\u03b8) \u225cEX\u223c\u00b5 [F(\u03b8, X)] = X i\u2208N \u00b5iF(\u03b8, i), (1) with the objective function f : Rd \u2192R and X taking values in a finite state space N with distribution \u00b5 \u225c[\u00b5i]i\u2208N . Leveraging partial gradient information per iteration, these algorithms have been recognized for their scalability and efficiency with large datasets (Bottou et al., 2018; Even, 2023). For any given noise sequence {Xn}n\u22650 \u2282N, and step size sequence {\u03b2n}n\u22650 \u2282R+, most stochastic optimization algorithms can be classified as stochastic approximations (SA) of the form \u03b8n+1 = \u03b8n + \u03b2n+1H(\u03b8n, Xn+1), \u2200n \u22650, (2) where, roughly speaking, H(\u03b8, i) contains gradient information \u2207\u03b8F(\u03b8, i), such that \u03b8\u2217solves h(\u03b8) \u225cEX\u223c\u00b5[H(\u03b8, X)] = P i\u2208N \u00b5iH(\u03b8, i) = 0. Such SA iterations include the well-known stochastic gradient descent (SGD), stochastic heavy ball (SHB) (Gadat et al., 2018; Li et al., 2022), *Equal contributors. 1 arXiv:2401.09665v1 [math.PR] 18 Jan 2024 \fPublished as a conference paper at ICLR 2024 and some SGD-type algorithms employing additional auxiliary variables (Barakat et al., 2021).1 These algorithms typically have the stochastic noise term Xn generated by i.i.d. 
random variables with probability distribution \u00b5 in each iteration. In this paper, we study a stochastic optimization algorithm where the noise sequence governing access to the gradient information is generated from general stochastic processes in place of i.i.d. random variables. This is commonly the case in distributed learning, where {Xn} is a (typically Markovian) random walk, and should asymptotically be able to sample the gradients from the desired probability distribution \u00b5. This is equivalent to saying that the random walker\u2019s empirical distribution converges to \u00b5 almost surely (a.s.); that is, xn \u225c 1 n+1 Pn k=0 \u03b4Xk a.s. \u2212 \u2212 \u2212 \u2212 \u2192 n\u2192\u221e\u00b5 for any initial X0 \u2208N, where \u03b4Xk is the delta measure whose Xk\u2019th entry is one, the rest being zero. Such convergence is most commonly achieved by employing the Metropolis Hastings random walk (MHRW) which can be designed to sample from any target measure \u00b5 and implemented in a scalable manner (Sun et al., 2018). Unsurprisingly, convergence characteristics of the employed Markov chain affect that of the SA sequence (2), and appear in both finite-time and asymptotic analyses. Finite-time bounds typically involve the second largest eigenvalue in modulus (SLEM) of the Markov chain\u2019s transition kernel P, which is critically connected to the mixing time of a Markov chain (Levin & Peres, 2017); whereas asymptotic results such as central limit theorems (CLT) involve asymptotic covariance matrices that embed information regarding the entire spectrum of P, i.e., all eigenvalues as well as eigenvectors (Br\u00b4 emaud, 2013), which are key to understanding the sampling efficiency of a Markov chain. Thus, the choice of random walker can significantly impact the performance of (2), and simply ensuring that it samples from \u00b5 asymptotically is not enough to achieve optimal algorithmic performance. In this paper, we take a closer look at the distributed stochastic optimization problem through the lens of a non-linear Markov chain, known as the Self Repellent Random Walk (SRRW), which was shown in Doshi et al. (2023) to achieve asymptotically minimal sampling variance for large values of \u03b1, a positive scalar controlling the strength of the random walker\u2019s self-repellence behaviour. Our proposed modification of (2) can be implemented within the settings of decentralized learning applications in a scalable manner, while also enjoying significant performance benefit over distributed stochastic optimization algorithms driven by vanilla Markov chains. Token Algorithms for Decentralized Learning. In decentralized learning, agents like smartphones or IoT devices, each containing a subset of data, collaboratively train models on a graph G(N, E) by sharing information locally without a central server (McMahan et al., 2017). In this setup, N =|N| agents correspond to nodes i \u2208N, and an edge (i, j) \u2208E indicates direct communication between agents i and j. This decentralized approach offers several advantages compared to the traditional centralized learning setting, promoting data privacy and security by eliminating the need for raw data to be aggregated centrally and thus reducing the risk of data breach or misuse (Bottou et al., 2018; Nedic, 2020). 
Additionally, decentralized approaches are more scalable and can handle vast amounts of heterogeneous data from distributed agents without overwhelming a central server, alleviating concerns about single point of failure (Vogels et al., 2021). Among decentralized learning approaches, the class of \u2018Token\u2019 algorithms can be expressed as stochastic approximation iterations of the type (2), wherein the sequence {Xn} is realized as the sample path of a token that stochastically traverses the graph G, carrying with it the iterate \u03b8n for any time n \u22650 and allowing each visited node (agent) to incrementally update \u03b8n using locally available gradient information. Token algorithms have gained popularity in recent years (Hu et al., 2022; Triastcyn et al., 2022; Hendrikx, 2023), and are provably more communication efficient (Even, 2023) when compared to consensus-based algorithms another popular approach for solving distributed optimization problems (Boyd et al., 2006; Morral et al., 2017; Olshevsky, 2022). The construction of token algorithms means that they do not suffer from expensive costs of synchronization and communication that are typical of consensus-based approaches, where all agents (or a subset of agents selected by a coordinator (Boyd et al., 2006; Wang et al., 2019)) on the graph are required to take simultaneous actions, such as communicating on the graph at each iteration. While decentralized Federated learning has indeed helped mitigate the communication overhead by processing multiple SGD iterations prior to each aggregation (Lalitha et al., 2018; Ye et al., 2022; Chellapandi et al., 2023), they still cannot overcome challenges such as synchronization and straggler issues. 1Further illustrations of stochastic optimization algorithms of the form (2) are deferred to Appendix A. 2 \fPublished as a conference paper at ICLR 2024 Self Repellent Random Walk. As mentioned earlier, sample paths {Xn} of token algorithms are usually generated using Markov chains with \u00b5 \u2208Int(\u03a3) as their limiting distribution. Here, \u03a3 denotes the N-dimensional probability simplex, with Int(\u03a3) representing its interior. A recent work by Doshi et al. (2023) pioneers the use of non-linear Markov chains to, in some sense, improve upon any given time-reversible Markov chain with transition kernel P whose stationary distribution is \u00b5. They show that the non-linear transition kernel2 K[\u00b7] : Int(\u03a3) \u2192[0, 1]N\u00d7N, given by Kij[x] \u225c Pij(xj/\u00b5j)\u2212\u03b1 P k\u2208N Pik(xk/\u00b5k)\u2212\u03b1 , \u2200i, j \u2208N, (3) for any x \u2208Int(\u03a3), when simulated as a self-interacting random walk (Del Moral & Miclo, 2006; Del Moral & Doucet, 2010), can achieve smaller asymptotic variance than the base Markov chain when sampling over a graph G, for all \u03b1 > 0. The argument x for the kernel K[x] is taken to be the empirical distribution xn at each time step n \u22650. For instance, if node j has been visited more often than other nodes so far, the entry xj becomes larger (than target value \u00b5j), resulting in a smaller transition probability from i to j under K[x] in (3) compared to Pij. This ensures that a random walker prioritizes more seldom visited nodes in the process, and is thus \u2018self-repellent\u2019. This effect is made more drastic by increasing \u03b1, and leads to asymptotically near-zero variance at a rate of O(1/\u03b1). 
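A minimal sketch of one SRRW transition according to the kernel in (3) is given below; P is the base transition matrix, mu the target distribution, x the current empirical measure, and alpha the self-repellence parameter. This is only an illustration of the formula, not the authors' implementation.

```python
# Minimal sketch of the SRRW kernel (3): re-weight the i-th row of the base chain P by
# (x_j / mu_j)^(-alpha), row-normalize, and sample the next node.
import numpy as np

def srrw_step(P, mu, x, i, alpha, rng):
    weights = P[i] * (x / mu) ** (-alpha)     # P_ij * (x_j / mu_j)^{-alpha}
    probs = weights / weights.sum()           # row normalization, as in Eq. (3)
    return rng.choice(len(mu), p=probs)

# Example usage: rng = np.random.default_rng(0); j = srrw_step(P, mu, x, i=0, alpha=5.0, rng=rng)
```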
Moreover, the polynomial function (xi/\u00b5i)\u2212\u03b1 chosen to encode self-repellent behaviour is shown in Doshi et al. (2023) to be the only one that allows the SRRW to inherit the socalled \u2018scale-invariance\u2019 property of the underlying Markov chain \u2013 a necessary component for the scalable implementation of a random walker over a large network without requiring knowledge of any graph-related global constants. Conclusively, such attributes render SRRW especially suitable for distributed optimization.3 Effect of Stochastic Noise Finite time and Asymptotic Approaches. Most contemporary token algorithms driven by Markov chains are analyzed using the finite-time bounds approach for obtaining insights into their convergence rates (Sun et al., 2018; Doan et al., 2019; 2020; Triastcyn et al., 2022; Hendrikx, 2023). However, as also explained in Even (2023), in most cases these bounds are overly dependent on mixing time properties of the specific Markov chain employed therein. This makes them largely ineffective in capturing the exact contribution of the underlying random walk in a manner which is qualitative enough to be used for algorithm design; and performance enhancements are typically achieved via application of techniques such as variance reduction (Defazio et al., 2014; Schmidt et al., 2017), momentum/Nesterov\u2019s acceleration (Gadat et al., 2018; Li et al., 2022), adaptive step size (Kingma & Ba, 2015; Reddi et al., 2018), which work by modifying the algorithm iterations themselves, and never consider potential improvements to the stochastic input itself. Complimentary to finite-time approaches, asymptotic analysis using CLT has proven to be an excellent tool to approach the design of stochastic algorithms (Hu et al., 2022; Devraj & Meyn, 2017; Morral et al., 2017; Chen et al., 2020a; Mou et al., 2020; Devraj & Meyn, 2021). Hu et al. (2022) shows how asymptotic analysis can be used to compare the performance of SGD algorithms for various stochastic inputs using their notion of efficiency ordering, and, as mentioned in Devraj & Meyn (2017), the asymptotic benefits from minimizing the limiting covariance matrix are known to be a good predictor of finite-time algorithmic performance, also observed empirically in Section 4. From the perspective of both finite-time analysis as well as asymptotic analysis of token algorithms, it is now well established that employing \u2018better\u2019 Markov chains can enhance the performance of stochastic optimization algorithm. For instance, Markov chains with smaller SLEMs yield tighter finite-time upper bounds (Sun et al., 2018; Ayache & El Rouayheb, 2021; Even, 2023). Similarly, Markov chains with smaller asymptotic variance for MCMC sampling tasks also provide better performance, resulting in smaller covariance matrix of SGD algorithms (Hu et al., 2022). Therefore, with these breakthrough results via SRRW achieving near-zero sampling variance, it is within reason 2Here, non-linearity in the transition kernel implies that K[x] takes probability distribution x as the argument (Andrieu et al., 2007), as opposed to the kernel being a linear operator K[x] = P for a constant stochastic matrix P in a standard (linear) Markovian setting. 3Recently, Guo et al. (2020) introduce an optimization scheme, which designs self-repellence into the perturbation of the gradient descent iterates (Jin et al., 2017; 2018; 2021) with the goal of escaping saddle points. 
This notion of self-repellence is distinct from the SRRW, which is a probability kernel designed specifically for a token to sample from a target distribution \u00b5 over a set of nodes on an arbitrary graph. 3 \fPublished as a conference paper at ICLR 2024 Stochastic Optimization Algorithm Asymptotic Covariance \ud835\udc7d\ud835\udf03 High Variance Near-Zero Variance (Our Result) \ud835\udf03\ud835\udc5b+1 = \ud835\udf03\ud835\udc5b+ \ud835\udefd\ud835\udc5b+1\ud835\udc3b\ud835\udf03\ud835\udc5b, \ud835\udc4b\ud835\udc5b+1 \ud835\udefd\ud835\udc5b \u22121/2 \ud835\udf03\ud835\udc5b\u2212\ud835\udf03\u2217\u055c \ud835\udc51\ud835\udc41(0, \ud835\udc7d\ud835\udf03) Nonlinear MC (SRRW [Doshi et al. 2023]) Traditional MC, e.g., MHRW 1 2 4 1 1 4 1 2 4 1 1 4 Token\u2019s trajectory \ud835\udc4b\ud835\udc5b\ud835\udc5b\u22650 ? Figure 1: Visualization of token algorithms using SRRW versus traditional MC in distributed learning. Our CLT analysis, extended from SRRW itself to distributed stochastic approximation, leads to near-zero variance for the SA iteration \u03b8n. Node numbers on the left denote visit counts. to ask: Can we achieve near-zero variance in distributed stochastic optimization driven by SRRWlike token algorithms on any general graph?4 In this paper, we answer in the affirmative. SRRW Driven Algorithm and Analysis Approach. For any ergodic time-reversible Markov chain with transition probability matrix P \u225c[Pij]i,j\u2208N and stationary distribution \u00b5 \u2208Int(\u03a3), we consider a general step size version of the SRRW stochastic process analysed in Doshi et al. (2023) and use it to drive the noise sequence in (2). Our SA-SRRW algorithm is as follows: Draw: Xn+1 \u223cKXn,\u00b7[xn] (4a) Update: xn+1 = xn + \u03b3n+1(\u03b4Xn+1 \u2212xn), (4b) \u03b8n+1 = \u03b8n + \u03b2n+1H(\u03b8n, Xn+1), (4c) where {\u03b2n} and {\u03b3n} are step size sequences decreasing to zero, and K[x] is the SRRW kernel in (3). Current non-asymptotic analyses require globally Lipschitz mean field function (Chen et al., 2020b; Doan, 2021; Zeng et al., 2021; Even, 2023) and is thus inapplicable to SA-SRRW since the mean field function of the SRRW iterates (4b) is only locally Lipschitz (details deferred to Appendix B). Instead, we successfully obtain non-trivial results by taking an asymptotic CLT-based approach to analyze (4). This goes beyond just analyzing the asymptotic sampling covariance5 as in Doshi et al. (2023), the result therein forming a special case of ours by setting \u03b3n =1/(n+1) and considering only (4a) and (4b), that is, in the absence of optimization iteration (4c). Specifically, we capture the effect of SRRW\u2019s hyper-parameter \u03b1 on the asymptotic speed of convergence of the optimization error term \u03b8n \u2212\u03b8\u2217to zero via explicit deduction of its asymptotic covariance matrix. See Figure 1 for illustration. Our Contributions. 1. Given any time-reversible \u2018base\u2019 Markov chain with transition kernel P and stationary distribution \u00b5, we generalize first and second order convergence results of xn to target measure \u00b5 (Theorems 4.1 and 4.2 in Doshi et al., 2023) to a class of weighted empirical measures, through the use of more general step sizes \u03b3n. This includes showing that the asymptotic sampling covariance terms decrease to zero at rate O(1/\u03b1), thus quantifying the effect of self-repellent on xn. 
Our generalization is not for the sake thereof and is shown in Section 3 to be crucial for the design of step sizes \u03b2n, \u03b3n. 2. Building upon the convergence results for iterates xn, we analyze the algorithm (4) driven by the SRRW kernel in (3) with step sizes \u03b2n and \u03b3n separated into three disjoint cases: (i) \u03b2n = o(\u03b3n), and we say that \u03b8n is on the slower timescale compared to xn; (ii) \u03b2n =\u03b3n, and we say that \u03b8n and xn are on the same timescale; (iii) \u03b3n = o(\u03b2n), and we say that \u03b8n is on the faster timescale compared to xn. For any \u03b1 \u22650 and let k = 1, 2 and 3 refer to the corresponding cases (i), (ii) and (iii), we show that \u03b8n a.s. \u2212 \u2212 \u2212 \u2212 \u2192 n\u2192\u221e\u03b8\u2217 and (\u03b8n \u2212\u03b8\u2217)/ p \u03b2n dist. \u2212 \u2212 \u2212 \u2212 \u2192 n\u2192\u221eN \u0010 0, V(k) \u03b8 (\u03b1) \u0011 , featuring distinct asymptotic covariance matrices V(1) \u03b8 (\u03b1), V(2) \u03b8 (\u03b1) and V(3) \u03b8 (\u03b1), respectively. The three matrices coincide when \u03b1 = 0,6. Moreover, the derivation of the CLT for cases (i) and (iii), 4This near-zero sampling variance implies a significantly smaller variance than even an i.i.d. sampling counterpart, while adhering to graph topological constraints of token algorithms. 5Sampling covariance corresponds to only the empirical distribution xn in (4b). 6The \u03b1 = 0 case is equivalent to simply running the base Markov chain, since from (3) we have K[\u00b7] = P, thus bypassing the SRRW\u2019s effect and rendering all three cases nearly the same. 4 \fPublished as a conference paper at ICLR 2024 for which (4) corresponds to two-timescale SA with controlled Markov noise, is the first of its kind and thus a key technical contribution in this paper, as expanded upon in Section 3. 3. For case (i), we show that V(1) \u03b8 (\u03b1) decreases to zero (in the sense of Loewner ordering introduced in Section 2.1) as \u03b1 increases, with rate O(1/\u03b12). This is especially surprising, since the asymptotic performance benefit from using the SRRW kernel with \u03b1 in (3), to drive the noise terms Xn, is amplified in the context of distributed learning and estimating \u03b8\u2217; compared to the sampling case, for which the rate is O(1/\u03b1) as mentioned earlier. For case (iii), we show that V(3) \u03b8 (\u03b1) = V(3) \u03b8 (0) for all \u03b1 \u22650, implying that using the SRRW in this case provides no asymptotic benefit than the original base Markov chain, and thus performs worse than case (i). In summary, we deduce that V(1) \u03b8 (\u03b12) \u03b11 > 0 and \u03b1 > 0.7 4. We numerically simulate our SA-SRRW algorithm on various real-world datasets, focusing on a binary classification task, to evaluate its performance across all three cases. By carefully choosing the function H in SA-SRRW, we test the SGD and algorithms driven by SRRW. Our findings consistently highlight the superiority of case (i) over cases (ii) and (iii) for diverse \u03b1 values, even in their finite time performance. Notably, our tests validate the variance reduction at a rate of O(1/\u03b12) for case (i), suggesting it as the best algorithmic choice among the three cases. 2 PRELIMINARIES AND MODEL SETUP In Section 2.1, we first standardize the notations used throughout the paper, and define key mathematical terms and quantities used in our theoretical analyses. Then, in Section 2.2, we consolidate the model assumptions of our SA-SRRW algorithm (4). 
We then go on to discuss our assumptions, and provide additional interpretations of our use of generalized step-sizes. 2.1 BASIC NOTATIONS AND DEFINITIONS Vectors are denoted by lower-case bold letters, e.g., v \u225c[vi] \u2208RD, and matrices by upper-case bold, e.g., M \u225c[Mij] \u2208RD\u00d7D. M\u2212T is the transpose of the matrix inverse M\u22121. The diagonal matrix Dv is formed by vector v with vi as the i\u2019th diagonal entry. Let 1 and 0 denote vectors of all ones and zeros, respectively. The identity matrix is represented by I, with subscripts indicating dimensions as needed. A matrix is Hurwitz if all its eigenvalues possess strictly negative real parts. 1{\u00b7} denotes an indicator function with condition in parentheses. We use \u2225\u00b7\u2225to denote both the Euclidean norm of vectors and the spectral norm of matrices. Two symmetric matrices M1, M2 follow Loewner ordering M1 0 if and only if aij > 0. Markov chains satisfying the detailed balance equation, where \u00b5iPij = \u00b5jPji for all i, j \u2208N, are termed time-reversible. For such chains, we use (\u03bbi, ui) (resp. (\u03bbi, vi)) to denote the i\u2019th left (resp. right) eigenpair where the eigenvalues are ordered: \u22121 < \u03bb1 \u2264\u00b7 \u00b7 \u00b7 \u2264\u03bbN\u22121 < \u03bbN = 1, with uN = \u00b5 and vN = 1 in RN. We assume eigenvectors to be normalized such that uT i vi = 1 for all i, and we have ui =D\u00b5vi and uT i vj =0 for all i, j \u2208N. We direct the reader to Aldous & Fill (2002, Chapter 3.4) for a detailed exposition on spectral properties of time-reversible Markov chains. 2.2 SA-SRRW: KEY ASSUMPTIONS AND DISCUSSIONS Assumptions: All results in our paper are proved under the following assumptions. (A1) The function H : RD \u00d7 N \u2192RD, is a continuous at every \u03b8 \u2208RD, and there exists a positive constant L such that \u2225H(\u03b8, i)\u2225\u2264L(1 + \u2225\u03b8\u2225) for every \u03b8 \u2208RD, i \u2208N. (A2) Step sizes \u03b2n and \u03b3n follow \u03b2n =(n+1)\u2212b, and \u03b3n =(n+1)\u2212a, where a, b \u2208(0.5, 1]. 7In particular, this is the reason why we advocate for a more general step size \u03b3n = (n+1)\u2212a in the SRRW iterates with a < 1, allowing us to choose \u03b2n = (n + 1)\u2212b with b \u2208(a, 1] to satisfy \u03b2n = o(\u03b3n) for case (i). 5 \fPublished as a conference paper at ICLR 2024 (A3) Roots of function h(\u00b7) are disjoint, which comprise the globally attracting set \u0398 \u225c n \u03b8\u2217|h(\u03b8\u2217)=0, \u2207h(\u03b8\u2217) + 1{b=1} 2 I is Hurwitz o \u0338= \u2205of the associated ordinary differential equation (ODE) for iteration (4c), given by d\u03b8(t)/dt=h(\u03b8(t)). (A4) For any (\u03b80, x0, X0) \u2208RD \u00d7 Int(\u03a3) \u00d7 N, the iterate sequence {\u03b8n}n\u22650 (resp. {xn}n\u22650) is P\u03b80,x0,X0-almost surely contained within a compact subset of RD (resp. Int(\u03a3)). Discussions on Assumptions: Assumption A1 requires H to only be locally Lipschitz albeit with linear growth, and is less stringent than the globally Lipschitz assumption prevalent in optimization literature (Li & Wai, 2022; Hendrikx, 2023; Even, 2023). Assumption A2 is the general umbrella assumption under which cases (i), (ii) and (iii) mentioned in Section 1 are extracted by setting: (i) a < b, (ii) a = b, and (iii) a > b. 
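To make the three regimes of Assumption A2 concrete, the following sketch runs the SA-SRRW iteration (4a)-(4c) with the polynomial step sizes βn = (n+1)^{-b} and γn = (n+1)^{-a}; H is a user-supplied mean-field map (e.g., a local negative gradient), and the uniform initialization of x0 and the default exponents (which give case (i)) are illustrative choices rather than prescriptions from the paper.

```python
# Minimal sketch of SA-SRRW (4a)-(4c) under Assumption A2.
# Case (i): a < b;  case (ii): a = b;  case (iii): a > b, with a, b in (0.5, 1].
import numpy as np

def sa_srrw(P, mu, H, theta0, X0, n_iters, a=0.8, b=1.0, alpha=5.0, seed=0):
    rng = np.random.default_rng(seed)
    N = len(mu)
    x = np.full(N, 1.0 / N)                          # x_0 in the interior of the simplex
    theta, X = np.array(theta0, dtype=float), X0
    for n in range(n_iters):
        beta, gamma = (n + 1) ** (-b), (n + 1) ** (-a)
        w = P[X] * (x / mu) ** (-alpha)              # SRRW row re-weighting, Eq. (3)
        X = rng.choice(N, p=w / w.sum())             # (4a) draw X_{n+1}
        delta = np.zeros(N); delta[X] = 1.0
        x = x + gamma * (delta - x)                  # (4b) weighted empirical measure
        theta = theta + beta * H(theta, X)           # (4c) optimization iterate
    return theta, x
```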
Cases (i) and (iii) render \u03b8n, xn on different timescales; the polynomial form of \u03b2n, \u03b3n widely assumed in the twotimescale SA literature (Mokkadem & Pelletier, 2006; Zeng et al., 2021; Hong et al., 2023). Case (ii) characterizes the SA-SRRW algorithm (4) as a single-timescale SA with polynomially decreasing step size, and is among the most common assumptions in the SA literature (Borkar, 2022; Fort, 2015; Li et al., 2023). In all three cases, the form of \u03b3n ensures \u03b3n \u22641 such that the SRRW iterates xn in (4b) is within Int(\u03a3), ensuring that K[xn] is well-defined for all n \u22650. In Assumption A3, limiting dynamics of SA iterations {\u03b8n}n\u22650 closely follow trajectories {\u03b8(t)}t\u22650 of their associated ODE, and assuming the existence of globally stable equilibria is standard (Borkar, 2022; Fort, 2015; Li et al., 2023). In optimization problems, this is equivalent to assuming the existence of at most countably many local minima. Assumption A4 assumes almost sure boundedness of iterates \u03b8n and xn, which is a common assumption in SA algorithms (Kushner & Yin, 2003; Chen, 2006; Borkar, 2022; Karmakar & Bhatnagar, 2018; Li et al., 2023) for the stability of the SA iterations by ensuring the well-definiteness of all quantities involved. Stability of the weighted empirical measure xn of the SRRW process is practically ensured by studying (4b) with a truncation-based procedure (see Doshi et al., 2023, Remark 4.5 and Appendix E for a comprehensive explanation), while that for \u03b8n is usually ensured either as a by-product of the algorithm design, or via mechanisms such as projections onto a compact subset of RD, depending on the application context. We now provide additional discussions regarding the step-size assumptions and their implications on the SRRW iteration (4b). SRRW with General Step Size: As shown in Benaim & Cloez (2015, Remark 1.1), albeit for a completely different non-linear Markov kernel driving the algorithm therein, iterates xn of (4b) can also be expressed as weighted empirical measures of {Xn}n\u22650, in the following form: xn = Pn i=1 \u03c9i\u03b4Xi + \u03c90x0 Pn i=0 \u03c9i , where \u03c90 = 1, and \u03c9n = \u03b3n Qn i=1(1 \u2212\u03b3i), (5) for all n > 0. For the special case when \u03b3n = 1/(n+1) as in Doshi et al. (2023), we have \u03c9n = 1 for all n \u22650 and xn is the typical, unweighted empirical measure. For the additional case considered in our paper, when a < 1 for \u03b3n as in assumption A2, we can approximate 1 \u2212\u03b3n \u2248e\u2212\u03b3n and \u03c9n \u2248n\u2212aen(1\u2212a)/(1\u2212a). This implies that \u03c9n will increase at sub-exponential rate, giving more weight to recent visit counts and allowing it to quickly \u2018forget\u2019 the poor initial measure x0 and shed the correlation with the initial choice of X0. This \u2018speed up\u2019 effect by setting a < 1 is guaranteed in case (i) irrespective of the choice of b in Assumption A2, and in Section 3 we show how this can lead to further reduction in covariance of optimization error \u03b8n = \u03b8\u2217in the asymptotic regime. Additional assumption for case (iii): Before moving on to Section 3, we take another look at the case when \u03b3n = o(\u03b2n), and replace A3 with the following, stronger assumption only for case (iii). 
(A3\u2032) For any x\u2208Int(\u03a3), there exists a function \u03c1 : Int(\u03a3)\u2192RD such that \u2225\u03c1(x)\u2225\u2264L2(1+\u2225x\u2225) for some L2 >0, Ei\u223c\u03c0[x][H(\u03c1(x), i)]=0 and Ei\u223c\u03c0[x][\u2207H(\u03c1(x), i)] + 1{b=1} 2 I is Hurwitz. While Assumption A3\u2032 for case (iii) is much stronger than A3, it is not detrimental to the overall results of our paper, since case (i) is of far greater interest as impressed upon in Section 1. This is discussed further in Appendix C. 6 \fPublished as a conference paper at ICLR 2024 3 ASYMPTOTIC ANALYSIS OF THE SA-SRRW ALGORITHM In this section, we provide the main results for the SA-SRRW algorithm (4). We first present the a.s. convergence and the CLT result for SRRW with generalized step size, extending the results in Doshi et al. (2023). Building upon this, we present the a.s. convergence and the CLT result for the SA iterate \u03b8n under different settings of step sizes. We then shift our focus to the analysis of the different asymptotic covariance matrices emerging out of the CLT result, and capture the effect of \u03b1 and the step sizes, particularly in cases (i) and (iii), on \u03b8n \u2212\u03b8\u2217via performance ordering. Almost Sure convergence and CLT: The following result establishes first and second order convergence of the sequence {xn}n\u22650, which represents the weighted empirical measures of the SRRW process {Xn}n\u22650, based on the update rule in (4b). Lemma 3.1. Under Assumptions A1, A2 and A4, for the SRRW iterates (4b), we have xn a.s. \u2212 \u2212 \u2212 \u2212 \u2192 n\u2192\u221e\u00b5, and \u03b3\u22121/2 n (xn \u2212\u00b5) dist. \u2212 \u2212 \u2212 \u2212 \u2192 n\u2192\u221eN(0, Vx(\u03b1)), where Vx(\u03b1) = N\u22121 X i=1 1 2\u03b1(1 + \u03bbi) + 2 \u22121{a=1} \u00b7 1 + \u03bbi 1 \u2212\u03bbi uiuT i . (6) Moreover, for all \u03b12 > \u03b11 > 0, we have Vx(\u03b12) 0 restricts us from obtaining an explicit form for the covariance term corresponding to SA iterate errors \u03b8n \u2212\u03b8\u2217. On the other hand, for k \u2208{1, 3}, the nature of two-timescale structure causes \u03b8n and xn to become asymptotically independent with zero correlation terms inside V(k)(\u03b1) in (10), and we can explicitly deduce V(k) \u03b8 (\u03b1). We now take a deeper dive into \u03b1 and study its effect on V(k) \u03b8 (\u03b1). Covariance Ordering of SA-SRRW: We refer the reader to Appendix F for proofs of all remaining results. We begin by focusing on case (i) and capturing the impact of \u03b1 on V(1) \u03b8 (\u03b1). Proposition 3.4. For all \u03b12 > \u03b11 > 0, we have V(1) \u03b8 (\u03b12) 0, we have V(1) \u03b8 (\u03b1) 0. In view of Proposition 3.4 and Corollary 3.5, the advantages of case (i) become prominent. 4 SIMULATION In this section, we simulate our SA-SRRW algorithm on the wikiVote graph (Leskovec & Krevl, 2014), comprising 889 nodes and 2914 edges. We configure the SRRW\u2019s base Markov chain P as the MHRW with a uniform target distribution \u00b5 = 1 N 1. For distributed optimization, we consider the following L2 regularized binary classification problem: min\u03b8\u2208RD n f(\u03b8) \u225c1 N PN i=1 log \u0010 1 + e\u03b8T si \u0011 \u2212yi \u0000\u03b8T si \u0001 + \u03ba 2 \u2225\u03b8\u22252o , (12) where {(si, yi)}N i=1 is the ijcnn1 dataset (with 22 features, i.e., si \u2208R22) from LIBSVM (Chang & Lin, 2011), and penalty parameter \u03ba = 1. Each node in the wikiVote graph is assigned one data point, thus 889 data points in total. 
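For the simulation setup above, the per-node loss in (12) and the corresponding SA drift H(theta, i) = -grad f_i(theta) used by SGD-SRRW can be written down directly; a minimal sketch follows. The random placeholder features stand in for the ijcnn1 data (not loaded here), labels are assumed to lie in {0, 1}, and the resulting H (after fixing S, y) can be passed to the SA-SRRW sketch shown earlier.

```python
import numpy as np

def f_i(theta, s_i, y_i, kappa=1.0):
    # Per-node term of the L2-regularized logistic objective in (12).
    z = theta @ s_i
    return np.log1p(np.exp(z)) - y_i * z + 0.5 * kappa * theta @ theta

def H(theta, i, S, y, kappa=1.0):
    # SA drift for SGD-SRRW: H(theta, i) = -grad f_i(theta), so the mean
    # field h(theta) = E_{i ~ mu}[H(theta, i)] = -grad f(theta).
    s_i, y_i = S[i], y[i]
    sigma = 1.0 / (1.0 + np.exp(-(theta @ s_i)))
    return -(s_i * (sigma - y_i) + kappa * theta)

# Placeholder data standing in for the 22-feature ijcnn1 set (assumed).
rng = np.random.default_rng(0)
N, d = 889, 22
S = rng.normal(size=(N, d))
y = rng.integers(0, 2, size=N).astype(float)
theta = np.zeros(d)
print(f_i(theta, S[0], y[0]), H(theta, 0, S, y)[:3])
```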
We perform SRRW driven SGD (SGD-SRRW) and SRRW driven stochastic heavy ball (SHB-SRRW) algorithms (see (13) in Appendix A for its algorithm). We fix the step size \u03b2n = (n + 1)\u22120.9 for the SA iterates and adjust \u03b3n = (n + 1) \u2212a in the SRRW iterates to cover all three cases discussed in this paper: (i) a = 0.8; (ii) a = 0.9; (iii) a = 1. We use mean square error (MSE), i.e., E[\u2225\u03b8n\u2212\u03b8\u2217\u22252], to measure the error on the SA iterates. Our results are presented in Figures 2 and 3, where each experiment is repeated 100 times. Figures 2a and 2b, based on wikiVote graph, highlight the consistent performance ordering across different \u03b1 values for both algorithms over almost all time (not just asymptotically). Notably, curves for \u03b1 \u22655 outperform that of the i.i.d. sampling (in black) even under the graph constraints. Figure 2c on the smaller Dolphins graph (Rossi & Ahmed, 2015) 62 nodes and 159 edges illustrates that the points of (\u03b1, MSE) pair arising from SGD-SRRW at time n = 107 align with a curve in the form of g(x)= c1 (x+c2)2 +c3 to showcase O(1/\u03b12) rates. This smaller graph allows for longer simulations to observe the asymptotic behaviour. Additionally, among the three cases examined at identical \u03b1 values, Figures 3a 3c confirm that case (i) performs consistently better than the rest, underscoring its superiority in practice. Further results, including those from non-convex functions and additional datasets, are deferred to Appendix H due to space constraints. 9 \fPublished as a conference paper at ICLR 2024 102 103 104 105 106 Number of steps (n) 10 2 10 1 MSE n * 2 CASE (i):a = 0.8, b = 0.9 CASE (ii):a = 0.9, b = 0.9 CASE (iii):a = 1, b = 0.9 (a) \u03b1 = 1, SGD-SRRW 102 103 104 105 106 Number of steps (n) 10 2 10 1 MSE n * 2 CASE (i):a = 0.8, b = 0.9 CASE (ii):a = 0.9, b = 0.9 CASE (iii):a = 1, b = 0.9 (b) \u03b1 = 5, SGD-SRRW 102 103 104 105 106 Number of steps (n) 10 2 10 1 MSE n * 2 CASE (i):a = 0.8, b = 0.9 CASE (ii):a = 0.9, b = 0.9 CASE (iii):a = 1, b = 0.9 (c) \u03b1 = 10, SGD-SRRW Figure 3: Comparison of the performance among cases (i) (iii) for \u03b1 \u2208{1, 5, 10}. 5" + }, + { + "url": "http://arxiv.org/abs/2401.09339v2", + "title": "Central Limit Theorem for Two-Timescale Stochastic Approximation with Markovian Noise: Theory and Applications", + "abstract": "Two-timescale stochastic approximation (TTSA) is among the most general\nframeworks for iterative stochastic algorithms. This includes well-known\nstochastic optimization methods such as SGD variants and those designed for\nbilevel or minimax problems, as well as reinforcement learning like the family\nof gradient-based temporal difference (GTD) algorithms. In this paper, we\nconduct an in-depth asymptotic analysis of TTSA under controlled Markovian\nnoise via central limit theorem (CLT), uncovering the coupled dynamics of TTSA\ninfluenced by the underlying Markov chain, which has not been addressed by\nprevious CLT results of TTSA only with Martingale difference noise. Building\nupon our CLT, we expand its application horizon of efficient sampling\nstrategies from vanilla SGD to a wider TTSA context in distributed learning,\nthus broadening the scope of Hu et al. (2022). 
In addition, we leverage our CLT\nresult to deduce the statistical properties of GTD algorithms with nonlinear\nfunction approximation using Markovian samples and show their identical\nasymptotic performance, a perspective not evident from current finite-time\nbounds.", + "authors": "Jie Hu, Vishwaraj Doshi, Do Young Eun", + "published": "2024-01-17", + "updated": "2024-02-13", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG", + "math.OC" + ], + "main_content": "INTRODUCTION Two-timescale stochastic approximation (TTSA) serves as a cornerstone algorithm for identifying the root (x\u2217, y\u2217) of two coupled functions, i.e., Proceedings of the 27th International Conference on Artificial Intelligence and Statistics (AISTATS) 2024, Valencia, Spain. PMLR: Volume 238. Copyright 2024 by the author(s). \u00af h1(x\u2217, y\u2217) \u225cE\u03be\u223c\u00b5[h1(x\u2217, y\u2217, \u03be)] = 0, \u00af h2(x\u2217, y\u2217) \u225cE\u03be\u223c\u00b5[h2(x\u2217, y\u2217, \u03be)] = 0, (1) where \u00b5 is a probability vector and typically only noisy observations h1(x, y, \u03be), h2(x, y, \u03be) are accessible (Kushner and Yin, 2003; Borkar, 2022). If either of the two functions h1, h2 is decoupled, e.g., h1(x, y, \u03be) \u2261h1(x, \u03be), TTSA degenerates into stochastic approximation (SA) as a special case, which itself has a wide range of applications, including, but not limited to, stochastic optimization (Bottou et al., 2018; Gower et al., 2019), reinforcement learning (RL) (Srikant and Ying, 2019; Dalal et al., 2020; Patil et al., 2023), and adaptive Markov chain Monte Carlo (MCMC) (Benaim et al., 2012; Avrachenkov et al., 2021; Doshi et al., 2023). In this paper, our primary focus is the analysis of the asymptotic behavior exhibited by a general nonlinear TTSA with Markovian noise, establishing a central limit theorem (CLT) to explore the effect of coupled variables (x, y). By leveraging this CLT, we address two applications: improvement of asymptotic performance in optimization algorithms, and the derivation of statistical property from a family of gradient-based TD (GTD) algorithms in RL. The recursion of the TTSA algorithm considered in this work is described as follows: ( xn+1 = xn + \u03b2n+1h1(xn, yn, \u03ben+1), yn+1 = yn + \u03b3n+1h2(xn, yn, \u03ben+1), (2) where \u03b2n, \u03b3n are decreasing step sizes at different rates,1 {\u03ben}n\u22650 is a random sequence over a finite set \u039e. For instance, in stochastic bilevel optimizations, Hong et al. (2023) deploys TTSA to simultaneously optimize both primal and dual variables. Likewise, the work by Lin et al. (2020) highlights the applicability of TTSA in solving minimax problems for optimizing two competing objectives. In RL, a family of GTD algorithms utilize the two-timescale structure (Sutton et al., 2009; Dalal et al., 2018, 2020; Li 1For example, when the step size \u03b2n is much smaller than \u03b3n, i.e., \u03b2n = o(\u03b3n), iterates xn converges slower than iterates yn, thereby xn is on the slow timescale and yn is on the fast timescale. arXiv:2401.09339v2 [stat.ML] 13 Feb 2024 \fCLT for TTSA with Markovian Noise: Theory and Applications Table 1: Overview of TTSA literature. Loc. Lipschitz: locally Lipschitz; high-prob. bound: high-probability bound; Mart. diff.: Martingale difference noise; exo. MC: exogenous Markov chain, independent of TTSA iterates (x, y); ctrl. MC: controlled Markov chain, where the transition kernel is determined by iterates (x, y). Except for a.s. 
convergence, all other result types inherently include a.s. convergence. Existing Works Result Type Noise Type Loc. Lipschitz Nonlinear Konda and Tsitsiklis (2004) CLT Mart. diff. \u00d7 \u00d7 Mokkadem and Pelletier (2006) CLT Mart. diff. \u2713 \u2713 Dalal et al. (2018) high-prob. bound Mart. diff. \u00d7 \u00d7 Borkar and Pattathil (2018) high-prob. bound Mart. diff. \u00d7 \u2713 Doan (2022, 2024); Hong et al. (2023) finite-time bound Mart. diff. \u00d7 \u2713 Karmakar and Bhatnagar (2018) a.s. convergence ctrl. MC \u00d7 \u2713 Yaji and Bhatnagar (2020) a.s. convergence ctrl. MC \u2713 \u2713 Gupta et al. (2019); Haque et al. (2023) finite-time bound exo. MC \u00d7 \u00d7 Doan (2021b) finite-time bound exo. MC \u00d7 \u2713 Khodadadian et al. (2022) finite-time bound ctrl. MC \u00d7 \u00d7 Barakat et al. (2022) finite-time bound ctrl. MC \u00d7 \u00d7 Zeng et al. (2021) finite-time bound ctrl. MC \u00d7 \u2713 Our Work CLT ctrl. MC \u2713 \u2713 et al., 2023a). Specifically, in these algorithms, the primary value function estimates update on slower timescale, while auxiliary variables or correction terms update on faster timescale. Furthermore, modern energy systems, such as power systems and smart grids, use TTSA for dynamic decision making (Lopez-Ramos et al., 2017; Yang et al., 2019). In the realm of game theory, a noteworthy application is Generative Adversarial Networks, where the game between a generator and a discriminator can be tackled using TTSA (Prasad et al., 2015; Heusel et al., 2017). In this paper, we focus on the Markovian sequence {\u03ben}, which plays an important role in the TTSA algorithm and is inherent in many applications.2 In distributed learning, token algorithms utilize a random walk, enabling tokens to traverse distributed agents over a graph, each possessing local datasets, and iteratively update model parameters, thus facilitating collaborative stochastic optimization across agents (Hu et al., 2022; Triastcyn et al., 2022; Hendrikx, 2023; Even, 2023). Apart from employing SGD iterates to minimize an objective function, such token algorithms of the form (2) can also address distributed bilevel or minimax problems that have been recently studied in Gao (2022); Gao et al. (2023). Meanwhile, in RL, the environment itself is modeled as a Markov Decision Process (MDP), which by design incorporates Markovian properties. When an agent interacts with this environment, the trajectory {\u03ben} it follows, i.e., a se2While the noise sequence {\u03ben} is common to both recursions in the TTSA algorithm (2), it allows for two distinct Markov chains for each recursion. Further details can be found in Section 2.1. quence of states, actions, and rewards, is inherently Markovian. Notably, this Markovian sequence can be influenced by the agent\u2019s adaptive policy, as seen in actor-critic algorithms, yielding a controlled Markov chain dependent on the iterates (xn, yn) (Karmakar and Bhatnagar, 2018; Yaji and Bhatnagar, 2020; Zeng et al., 2021). These examples underscore the importance of the Markovian sequence in the development of both theoretical frameworks and practical implementations of various learning algorithms. 1.1 Related Works Finite-time vs Asymptotic Analysis: The convergence properties of SA have been studied extensively using both asymptotic (Kushner and Yin, 2003; Fort, 2015; Borkar, 2022; Li et al., 2023b) and finite-time (Srikant and Ying, 2019; Karimi et al., 2019; Chen et al., 2022) analyses. 
While recent trends have shown a preference for non-asymptotic analysis, discussions in Meyn (2022, Chapter 1.2) point out the oftenunderestimated significance of asymptotic statistics. This notion is highlighted in Mou et al. (2020); Chen et al. (2020); Srikant (2024), which demonstrate the broader applicability of CLT beyond purely asymptotic contexts. Specifically, the limiting covariance matrix, central to the CLT, finds its presence in highprobability bounds (Mou et al., 2020), and in finitetime bounds on mean square error (Chen et al., 2020) as well as 1-Wasserstein distance to measure the rate of convergence to normality (Srikant, 2024). Further underscoring its significance, Hu et al. (2022) showcases its accuracy in capturing the rate of conver\fJie Hu, Vishwaraj Doshi, Do Young Eun gence compared to the mixing rate of the underlying Markov chain, frequently employed in finite-time analysis (Karimi et al., 2019; Chen et al., 2022). TTSA with Martingale Difference Noise: For the TTSA algorithm (2), the stochastic sequence being an i.i.d. sequence {\u03ben} allows for the decomposition of the noisy observation h1(x, y, \u03ben) into \u00af h1(x, y) and a Martingale difference noise term h1(x, y, \u03ben)\u2212\u00af h1(x, y); a similar decomposition applying to h2. In the case of Martingale difference noise, an extensive body of research focuses on the analysis of CLT results (Konda and Tsitsiklis, 2004; Mokkadem and Pelletier, 2006), high-probability bounds (Dalal et al., 2018; Borkar and Pattathil, 2018), and finite-time bounds (Doan, 2022, 2024; Hong et al., 2023) for both linear and nonlinear TTSA, as shown in Table 1. TTSA with Markovian Noise \u2014 Asymptotic Results and Suboptimal Finite-Time Bounds: Recently, increasing attention has been shifted towards analyzing TTSA with Markovian noise sequences {\u03ben}, which introduces technical challenges due to inherent bias in hi(x, y, \u03ben) as an estimator of \u00af hi(x, y) for i = 1, 2. Karmakar and Bhatnagar (2018); Yaji and Bhatnagar (2020) delve into the almost sure convergence of nonlinear TTSA with Markovian noise, showing that the two iterates xn, yn asymptotically estimate the related differential inclusions, which are a generalized version of ordinary differential equations (ODEs). Yaji and Bhatnagar (2020) further relax to the locally Lipschitz functions h1, h2, which is commonly seen in the machine learning literature such as low-rank matrix recovery (Recht et al., 2010), tensor factorization problem (Kolda and Bader, 2009), and deep neural networks with unbounded Hessian matrices (Zhang and Hong, 2020, Appendix H). Meanwhile, the mixing rate properties of Markov chains have been predominantly utilized for the finitetime analysis of both linear (Gupta et al., 2019; Kaledin et al., 2020; Doan, 2021a; Khodadadian et al., 2022; Barakat et al., 2022; Haque et al., 2023) and nonlinear TTSA (Doan, 2021b; Zeng et al., 2021) with Markovian noise.3 Notably, the latter two works align closely with our TTSA settings. 
However, Doan (2021b) only provided a finite-time bound for the combined error of both iterations, i.e., E[\u2225xn \u2212x\u2217\u22252 + \u03b2n \u03b3n \u2225yn \u2212y\u2217\u22252] at a suboptimal rate of O(n\u22122/3) with a specific choice of step sizes \u03b2n = (n + 1)\u22121 and \u03b3n = (n + 1)\u22122/3, while we show in Section 2.3 that for large n, the combined error should approximately 3While nonlinear TTSA with Markovian noise is currently the most general framework, our emphasis is not solely on generalization. As we will demonstrate in Section 3, this setting has substantive implications in both stochastic optimization and RL. decrease to zero at the speed of O(\u03b2n) = O(n\u22121). A similar bound for E[\u2225xn \u2212x\u2217\u22252] under the more general controlled Markov noise setting is provided in Zeng et al. (2021) at the suboptimal rate of O(n\u22122/3) with the same choice of step sizes. Thus, even the state-of-the-art finite-time bounds in Doan (2021b); Zeng et al. (2021) do not preciously capture the leading term that determines the performance of each iterates xn, yn. A comprehensive non-asymptotic analysis with rate matching the CLT scale (i.e., O(\u03b2n), O(\u03b3n)) has yet to be performed in the nonlinear TTSA with controlled Markovian noise under general decreasing step sizes \u03b2n, \u03b3n. 1.2 Our Contributions In this paper, we study the CLT of both iterates xn and yn in nonlinear TTSA with controlled Markovian noise, where h1, h2 are only locally Lipschitz continuous. Although Yaji and Bhatnagar (2020) considered more general set-valued functions h1, h2, they only obtained almost sure convergence. In contrast, we here target single-valued functions that are more common in the machine learning literature and extend the scope to include CLT results. Our work further generalizes the CLT analysis of the two-timescale framework in Mokkadem and Pelletier (2006) still a state-of-the-art CLT result for Martingale difference noise by necessitating a deeper exploration into the Markovian noise {\u03ben}n\u22650, given that hi(x, y, \u03ben) \u2212\u00af hi(x, y), i = 1, 2 are no longer Martingale difference. Utilizing our CLT results, we demonstrate the impact of sampling strategies on the limiting covariance across a wide class of distributed optimization algorithms. Extending beyond the vanilla SGD setting studied in Hu et al. (2022), we show that improved sampling strategies lead to better performance for general TTSA including, but not restricted to, SGD variants and algorithms tailored for stochastic bilevel and minimax problems. Moreover, in the RL context, we introduce first of its kind statistical characterization of GTD2 and TDC algorithms with nonlinear function approximation (Maei et al., 2009) using Markovian samples. Using both theoretical and empirical results, we show that their asymptotic performance coincides, as evidenced by identical covariance matrix in our CLT. Such conclusions are not possible via current finitetime bounds (Doan, 2021b; Zeng et al., 2021). Notations. We use \u2225\u00b7\u2225to denote both the Euclidean norm of vectors and the spectral norm of matrices. Two symmetric matrices M1, M2 follow Loewner ordering M1 >L M2 (resp. \u2018\u2265L\u2019) if M1 \u2212M2 is positive definite (resp. positive semi-definite). A matrix is Hurwitz if all its eigenvalues possess strictly negative real parts. The function 1(\u00b7) is an indicator func\fCLT for TTSA with Markovian Noise: Theory and Applications tion. 
\u2207xh(x, y) stands for the Jacobian matrix of the vector-valued function h(x, y) with respect to the variable x. C1 function f means that function f is both continuous and differentiable. We use \u2018 d \u2212 \u2212 \u2192\u2019 for the convergence in distribution and N(0, V) is the Gaussian random vector with covariance matrix V. 2 MAIN RESULTS In this section, we analyze the asymptotic behavior of the TTSA algorithm (2) with Markovian noise. First, we provide assumptions and the almost sure convergence result in Section 2.1. Before presenting our main CLT result in Section 2.3, we explain how our result is achieved by transforming the TTSA iteration into a single-timescale SA-like recursion, and introduce some key components related to asymptotic covariance of the iterates. This transformation resembles that in Konda and Tsitsiklis (2004); Mokkadem and Pelletier (2006) but with a fresh perspective by accounting for biased errors due to Markovian noise, as elaborated upon in Section 2.2. 2.1 Key Assumptions and a.s. Convergence A1. The step sizes \u03b2n \u225c(n+1)\u2212b and \u03b3n \u225c(n+1)\u2212a, where 0.5 < a < b \u22641. A2. For the C1 function h1 : Rd1 \u00d7 Rd2 \u00d7 \u039e \u2192Rd1, there exists a positive constant L1 such that \u2225h1(x, y, \u03be)\u2225\u2264L1(1 + \u2225x\u2225+ \u2225y\u2225) for every x \u2208 Rd1, y \u2208Rd2, \u03be \u2208\u039e. The same condition holds for the C1 function h2 as well. A3. Consider a C1 function \u03bb : Rd1 \u2192Rd2. For every x \u2208Rd1, the following three properties hold: (i) \u03bb(x) is the globally attracting point of the related ODE \u02d9 y = \u00af h2(x, y); (ii) \u2207y\u00af h2(x, \u03bb(x)) is Hurwitz; (iii) \u2225\u03bb(x)\u2225\u2264L2(1 + \u2225x\u2225) for some positive constant L2. Additionally, let \u02c6 h1(x) \u225c \u00af h1(x, \u03bb(x)), there exists a set of disjoint roots \u039b \u225c {x\u2217: \u02c6 h1(x\u2217) = 0, \u2207x\u02c6 h1(x\u2217)+ 1{b=1} 2 I is Hurwitz}, which is also the globally attracting set for trajectories of the related ODE \u02d9 x = \u02c6 h1(x). A4. {\u03ben}n\u22650 is an iterate-dependent Markov chain on finite state space \u039e. For every n \u2265 0, P(\u03ben+1 = j|xm, ym, \u03bem, 0 \u2264m \u2264n) = P(\u03ben+1 = j|xn, yn, \u03ben = i) = Pi,j[xn, yn], where the transition kernel P[x, y] is continuous in x, y, and the Markov chain generated by P[x, y] is ergodic so that it admits a stationary distribution \u03c0(x, y), and \u03c0(x\u2217, \u03bb(x\u2217)) = \u00b5. A5. supn\u22650(\u2225xn\u2225+ \u2225yn\u2225) < \u221ea.s. In Assumption (A1), the step sizes \u03b2n, \u03b3n decay polynomially at distinct rates, i.e., \u03b2n = o(\u03b3n), which is standard in the TTSA literature (Zeng et al., 2021; Doan, 2021b; Hong et al., 2023). Assumption (A2) ensures that C1 functions h1, h2 are locally Lipschitz and grow at most linearly with respect to the norms of their parameters, as also assumed in Yaji and Bhatnagar (2020). This is a far less stringent condition compared to the globally Lipschitz assumption used in most of the recent works, as listed in Table 1. Assumption (A3) is crucial for the analysis of iterates (xn, yn), which can be seen as a stochastic discretization of the ODEs \u02d9 x = \u02c6 h1(x) and \u02d9 y = \u00af h2(x, y). This assumption guarantees the global asymptotic stability of these two ODEs, as demonstrated in Yaji and Bhatnagar (2020); Doan (2021b). 
The linear growth of \u03bb(x) is a milder condition than the globally Lipschitz assumption in Borkar and Pattathil (2018); Karmakar and Bhatnagar (2018); Zeng et al. (2021); Doan (2021b). Assumption (A4) is standard to guarantee the asymptotic unbiasedness of h1, h2 in the existing literature on TTSA with Markovian noise (Karmakar and Bhatnagar, 2018; Yaji and Bhatnagar, 2020; Khodadadian et al., 2022; Barakat et al., 2022). It is worth noting that {\u03ben} naturally allows for an augmentation of the form \u03ben \u225c(Xn, Yn), with two independent Markovian noise sequences {Xn}, {Yn} corresponding to iterates {xn} and {yn}, respectively. In this case, the functions h1 and h2 act only on the entries of \u03ben related to Xn and Yn. Assumption (A5) assumes the a.s. boundedness of the coupled iterates (xn, yn), which is commonly seen in the TTSA literature (Karmakar and Bhatnagar, 2018; Yaji and Bhatnagar, 2020). A similar stability condition is also found in the SA literature (Delyon et al., 1999; Borkar, 2022; Li et al., 2023b). In practice, to stabilize the TTSA algorithm (2) under Markovian noise, one could adopt algorithmic modifications from the SA literature, including the projection method onto (possibly expanding) compact sets (Chen, 2006; Andrieu and Vihola, 2014) or the truncation method with a restart process (Fort, 2015; Fort et al., 2016). Lemma 2.1 (Almost Sure Convergence). Under Assumptions (A1) (A5), iterates (xn, yn) in (2) almost surely converge to a set of roots, i.e., (xn, yn) \u2192 S x\u2217\u2208\u039b(x\u2217, \u03bb(x\u2217)) a.s. Lemma 2.1 follows from Yaji and Bhatnagar (2020, Theorem 4) by verifying the conditions therein and we defer the details to Appendix A.1. While they studied broader set-valued functions h1, h2 within the realm of stochastic recursive inclusion, they did not explore the CLT result. This is likely due to existing gaps in the \fJie Hu, Vishwaraj Doshi, Do Young Eun CLT analysis even for single-timescale stochastic recursive inclusion, as mentioned in Borkar (2022, Chapter 5). In contrast, we focus on single-valued functions h1, h2, as prevalent in the machine learning literature. This paves the way for the first CLT result, Theorem 2.2, for the general TTSA with controlled Markovian noise, as demonstrated in Table 1. In the following section, we will conduct a more detailed analysis of the asymptotic behavior of iterates (xn, yn) near equilibrium (x\u2217, \u03bb(x\u2217)) for some x\u2217\u2208\u039b. 2.2 Overview of the CLT Analysis for (xn, yn) Assumption (A1) puts {yn}n\u22650 on a \u2018faster timescale\u2019 compared to {xn}n\u22650, and has implications on convergence rates of the two sequences. Under additional conditions on the function \u00af h2(\u00b7, \u00b7) in Assumption (A3), the sequence {yn} can be approximated by {\u03bb(xn)} for large time step n, where \u03bb(x) is an implicit function solving \u00af h2(x, \u03bb(x)) = 0. Loosely speaking, when n is large enough, the fast iterates yn are nearly convergent to the root \u03bb(xn) of \u00af h2(xn, \u00b7). Iterates xn on the slower timescale then guide the roots \u03bb(xn) of the iterates yn until they reach y\u2217= \u03bb(x\u2217), which also satisfies \u00af h1(x\u2217, \u03bb(x\u2217)) = 0. 
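The timescale-separation intuition above, namely that the fast iterate y_n tracks lambda(x_n) while the slow iterate x_n drifts toward x*, is easy to see numerically. The scalar sketch below uses made-up mean fields with a known lambda(.), a two-state Markov noise, and step-size exponents satisfying (A1) with a < b; it is only an illustration, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up mean fields with a known fast-equilibrium map lambda(x).
lam = lambda x: 0.5 * np.sin(x) + 0.5 * x
h1 = lambda x, y, xi: -(x - 1.0) + 0.5 * (y - lam(x)) + 0.1 * xi  # slow drift, root x* = 1
h2 = lambda x, y, xi: (lam(x) - y) + 0.1 * xi                     # fast drift, root y = lam(x)

# Two-state Markov noise with zero stationary mean, standing in for {xi_n}.
states = np.array([-1.0, 1.0])
Pxi = np.array([[0.7, 0.3], [0.3, 0.7]])
idx = 0

a, b = 0.6, 0.9        # beta_n = (n+1)^(-b) = o(gamma_n): x slow, y fast (A1)
x, y = 3.0, 0.0
gap = []
for n in range(1, 100_001):
    idx = rng.choice(2, p=Pxi[idx])
    xi = states[idx]
    beta, gamma = (n + 1.0) ** (-b), (n + 1.0) ** (-a)
    x, y = x + beta * h1(x, y, xi), y + gamma * h2(x, y, xi)
    gap.append(abs(y - lam(x)))

print("mean |y_n - lambda(x_n)| over last 1000 steps:", np.mean(gap[-1000:]))
print("x_n:", x)       # drifts toward the root x* = 1 of hat h1(x) = -(x - 1)
```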
Consequently, resembling Konda and Tsitsiklis (2004, Section 2) and Mokkadem and Pelletier (2006, Section 2.3), we can show that {xn} is now approximated by iterating a singletimescale SA update rule, independent of {yn} but instead driven by {\u03bb(xn)}, whose derivation we detail in what follows. For i \u2208{1, 2}, define Qi1 \u225c\u2207x\u00af hi(x, y) \f \f (x,y)=(x\u2217, y\u2217), Qi2 \u225c\u2207y\u00af hi(x, y) \f \f (x,y)=(x\u2217, y\u2217), \u2206(i) n \u225chi(xn, yn, \u03ben+1) \u2212\u00af hi(xn, yn), \u02dc \u2206(i) n \u225chi(xn, \u03bb(xn), \u03ben+1) \u2212\u00af hi(xn, \u03bb(xn)). Adding and subtracting \u00af h1(xn, yn) and \u00af h2(xn, yn) to iterates xn and yn in (2) respectively, and taking their Taylor expansions at (xn, yn)=(x\u2217, y\u2217), gives us xn+1 = xn+\u03b2n+1(Q11(xn\u2212x\u2217) + Q12(yn\u2212y\u2217)) + \u03b2n+1\u2206(1) n +\u03b2n+1O(\u2225xn\u2212x\u2217\u22252+\u2225yn\u2212y\u2217\u22252), (3) yn+1 = yn+\u03b3n+1(Q21(xn\u2212x\u2217) + Q22(yn\u2212y\u2217)) + \u03b3n+1\u2206(2) n +\u03b3n+1O(\u2225xn\u2212x\u2217\u22252 + \u2225yn\u2212y\u2217\u22252). (4) Re-arranging (4) by placing the yn \u2212y\u2217on the lefthand side yields yn\u2212y\u2217=\u03b3\u22121 n+1Q\u22121 22 (yn+ 1\u2212yn)\u2212Q \u22121 22Q21(xn\u2212x\u2217) + Q\u22121 22 \u2206(2) n +O(\u2225xn\u2212x\u2217\u22252 + \u2225yn\u2212y\u2217\u22252). (5) By substituting the above into (3), and then replacing (approximating) yn with \u03bb(xn), we get xn+ 1 =xn+\u03b2n+ 1Kx(xn\u2212x\u2217)+\u03b2n+ 1 \u02dc \u2206x n+\u03b2n+ 1Rn, (6) where Kx \u225cQ11\u2212Q12Q\u22121 22 Q21, \u02dc \u2206x n \u225c\u02dc \u2206(1) n \u2212Q12Q\u22121 22 \u02dc \u2206(2) n , (7) and Rn is comprised of residual errors from the earlier Taylor expansion and approximation of yn by \u03bb(xn). The term \u02dc \u2206x n can be further decomposed using the Poisson equation technique (Benveniste et al., 2012; Meyn, 2022) as \u02dc \u2206x n =[M (1) n+1\u2212Q12Q\u22121 22 M (2) n+1] + [ \u02dc H(xn, \u03bb(xn), \u03ben+1) \u2212\u02dc H(xn, \u03bb(xn), \u03ben)], where M (1) n+1 and M (2) n+1 are Martingale difference terms adapted to filtration Fn \u225c\u03c3(x0, y0, \u03be0, \u00b7 \u00b7 \u00b7 , \u03ben). The exact expressions for the Martingale difference terms can be found in Appendix A.2.1, equations 9(a) and 9(b). The second summand including the \u02dc H terms, whose exact expression is provided in Appendix A.2.1, involves consecutive Markovian noise terms \u03ben+1 and \u03ben which are responsible for biased errors in the iteration for xn. These additional terms are not present in existing works that focus only on i.i.d. stochastic inputs (Konda and Tsitsiklis, 2004; Mokkadem and Pelletier, 2006), even though their analysis leads to equations similar to (6). In Appendix A.2.2, we show that the \u02dc H terms along with residual errors Rn at each step are o(\u221a\u03b2n), and thus do not influence the CLT result for iterates xn of the slower timescale. Consequently, the approximation yn = \u03bb(xn), together with the aforementioned analysis leading to o(\u221a\u03b2n), now allows us to analyze (6) as essentially a single-timescale SA with Markovian noise. 
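The matrix K_x = Q11 - Q12 Q22^{-1} Q21 in (7) can be assembled numerically from the mean fields once (x*, y*) is known, for example with finite-difference Jacobians as in the sketch below. The toy mean fields at the end are assumptions used only to check the computation.

```python
import numpy as np

def jacobian(f, z, eps=1e-6):
    # Forward-difference Jacobian of f at z, a simple numerical stand-in
    # for the derivatives defining the matrices Q_ij.
    z = np.asarray(z, float)
    f0 = np.asarray(f(z), float)
    J = np.zeros((f0.size, z.size))
    for k in range(z.size):
        dz = np.zeros_like(z); dz[k] = eps
        J[:, k] = (np.asarray(f(z + dz)) - f0) / eps
    return J

def K_x(h1_bar, h2_bar, x_star, y_star):
    # K_x = Q11 - Q12 Q22^{-1} Q21 with Q_i1 = d h_i/dx and Q_i2 = d h_i/dy
    # evaluated at (x*, y*), as in (7).
    Q11 = jacobian(lambda x: h1_bar(x, y_star), x_star)
    Q12 = jacobian(lambda y: h1_bar(x_star, y), y_star)
    Q21 = jacobian(lambda x: h2_bar(x, y_star), x_star)
    Q22 = jacobian(lambda y: h2_bar(x_star, y), y_star)
    return Q11 - Q12 @ np.linalg.solve(Q22, Q21)

# Toy mean fields (assumed, for illustration only); equilibrium (1, 1).
h1_bar = lambda x, y: -(x - 1.0) + 0.3 * (y - x)
h2_bar = lambda x, y: x - y
print(K_x(h1_bar, h2_bar, np.array([1.0]), np.array([1.0])))  # approx [[-1.]]
```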
We then apply Fort (2015, Proposition 4.1) to extract a CLT result, i.e., we prove that \u03b2\u22121/2 n (xn \u2212x\u2217) d \u2212 \u2192N(0, Vx), (8) where Vx solves the Lyapunov equation Ux + (Kx + 1{\u03b2n=O(1/n)} 2 I)Vx + Vx(Kx + 1{\u03b2n=O(1/n)} 2 I)T = 0, Ux \u225clim s\u2192\u221e 1 sE \uf8ee \uf8f0 s X n=1 \u02dc \u2206x\u2217 n ! s X n=1 \u02dc \u2206x\u2217 n !T \uf8f9 \uf8fb, (9) and \u02dc \u2206x\u2217 n represents \u02dc \u2206x n measured at xn = x\u2217for all n. Through Q22, Q21 and \u02dc \u2206(2) n , both Kx and Ux capture the effect of deterministic field \u00af h2(\u00b7, \u00b7) on the asymptotic behavior of xn. The matrix Ux incorporates the effect of Markovian noise sequence {\u03ben}n\u22650 through \u02dc \u2206(1) n and \u02dc \u2206(2) n , which will be utilized in Proposition 3.2 to identify the effect of the underlying Markov chain on the asymptotic behavior of iterates xn. We show in Appendix A.2.1 that Ux can also be written as \fCLT for TTSA with Markovian Noise: Theory and Applications Ux = \u0002 I \u2212Q12Q\u22121 22 \u0003 \u0014 U11 U12 U21 U22 \u0015 \u0002 I \u2212Q12Q\u22121 22 \u0003T , (10) where Uij \u225clim s\u2192\u221e 1 sE \uf8ee \uf8f0 s X n=1 \u02dc \u2206(i)\u2020 n ! s X n=1 \u02dc \u2206(j)\u2020 n !T \uf8f9 \uf8fb, (11) \u02dc \u2206(i)\u2020 n denotes \u02dc \u2206(i) n measured at xn = x\u2217for all n and i, and U12 = UT 21. For an i.i.d. sequence {\u03ben} with marginal \u00b5, Uij = E\u03be\u223c\u00b5[hi(x\u2217, y\u2217, \u03be)hj(x\u2217, y\u2217, \u03be)T ] degenerates to the marginal covariance of functions hi(x\u2217, y\u2217, \u00b7), hj(x\u2217, y\u2217, \u00b7), and (8) aligns with previously established CLT results for linear (Konda and Tsitsiklis, 2004) and nonlinear TTSA (Mokkadem and Pelletier, 2006), both with Martingale difference noise. 2.3 Central Limit Theorem of TTSA with Controlled Markovian Noise Without loss of generality, our remaining results are stated while conditioning on the event that {xn \u2192 x\u2217, yn \u2192y\u2217}, for some x\u2217\u2208\u039b and y\u2217= \u03bb(x\u2217). Our main CLT result is as follows, with its proof deferred to Appendix A.2. Theorem 2.2 (Central Limit Theorem). Under Assumptions (A1) \u2013 (A5), \u03b2\u22121/2 n (xn \u2212x\u2217) \u03b3\u22121/2 n (yn \u2212y\u2217) ! d \u2212 \u2212 \u2192N \u0012 0, \u0012Vx 0 0 Vy \u0013\u0013 , (12) where the limiting covariance matrices Vx \u2208 Rd1\u00d7d1, Vy \u2208Rd2\u00d7d2 are given by Vx = Z \u221e 0 e t \u0012 Kx+ 1{b=1} 2 I \u0013 Uxe t \u0012 Kx+ 1{b=1} 2 I \u0013T dt, Vy = Z \u221e 0 etQ22U22etQT 22dt, (13) with Kx, Ux and U22 defined in (7), (9) and (11), respectively. Theorem 2.2 suggests that iterates (xn, yn) evolve asymptotically independently, as evidenced by the zero covariance of off-diagonal terms in (12). This is due to the diminishing correlation between (xn \u2212x\u2217) and (yn \u2212y\u2217) at a rate of O(\u03b2n/\u03b3n), a characteristic of the two-timescale setup, aligning with existing CLT findings for TTSA with Martingale difference noise (Konda and Tsitsiklis, 2004; Mokkadem and Pelletier, 2006). The limiting covariance matrix Vy is solely determined by the local function h2 and x\u2217without an additional term 1{a=1} 2 I due to a < 1 by assumption (A1), implying minimal effect of xn on the asymptotic behavior of yn. In contrast, Vx is significantly impacted by iterates yn since matrices Kx, Ux are comprised of functions h2 and \u00af h2. 
As a special case, when h1(x, y, \u03be) in the TTSA algorithm is independent of the variable y, i.e., h1(x, y, \u03be) \u2261h1(x, \u03be), then \u2207yh1(x, \u03be) = 0 for any y \u2208Rd2, implying Q12 = 0. According to Theorem 2.2, xn is decoupled from iterates yn and reduces to the single-timescale SA with Markovian noise, where Vx in (13), in view of (10) with Q12 = 0, becomes Vx = Z \u221e 0 e t \u0012 \u2207\u00af h1(x\u2217)+ 1{b=1} 2 I \u0013 U11e t \u0012 \u2207\u00af h1(x\u2217)+ 1{b=1} 2 I \u0013T dt. This Vx is in line with the existing CLT result for the single-timescale SA with controlled Markovian noise (Delyon, 2000; Benveniste et al., 2012; Fort, 2015) under the same locally Lipschitz condition on h1(x, \u03be), as stated in Assumption (A2). The limiting covariance matrices Vx and Vy are related to the mean square error (MSE) of their corresponding iterative errors xn \u2212x\u2217and yn \u2212y\u2217. For large enough n, the diagonal entries of Vx are approximated by eT i Vxei \u2248eT i E[(xn \u2212x\u2217)(xn \u2212x\u2217)T ]ei/\u03b2n for all i \u2208{1, \u00b7 \u00b7 \u00b7 , d1}, where ei is the i-th canonical vector. Then, the MSE of the iterate error xn \u2212x\u2217 can be estimated as E[\u2225xn \u2212x\u2217\u22252] = Pd1 i=1 eT i E[(xn\u2212 x\u2217)(xn\u2212x\u2217)T ]ei \u2248\u03b2n Pd1 i=1 eT i Vxei. This implies that E[\u2225xn \u2212x\u2217\u22252] resembles the trace4 of Vx, and decreases at a rate of \u03b2n. Similar arguments also hold for E[\u2225yn \u2212y\u2217\u22252] and Vy. 3 APPLICATIONS 3.1 Performance Ordering in TTSA The limiting covariance matrices Vx, Vy described in (13) for nonlinear TTSA with Markovian noise inherently incorporate the properties of the underlying Markov chain completely in terms of matrices Ux and U22, as defined in (10) and (11). This raises an intuitive question: If we can control the stochastic input sequence {\u03ben}n\u22650, how does it influence the performance of the TTSA algorithm? This question was originally studied by Hu et al. (2022), which introduces the notion of efficiency ordering of Markov chains, a metric prevalent in the MCMC literature, in the context of SGD algorithms, and proves that the presence of \u2018better\u2019, more efficient sampling strategy leads to improved SGD performance. Broadening this concept, we show that such performance improvements are applicable to the 4Sum of diagonal entries of a matrix. \fJie Hu, Vishwaraj Doshi, Do Young Eun SGD [Hu et al. 2022] Asymptotic Covariance Vx, Vy Higher Variance Lower Variance Base Efficient TTSA: SGD variants Algo. for Bilevel, Minimax [This work] Higher Variance Lower Variance Base Efficient Sampling Strategy \ud835\udf09\ud835\udc5b\ud835\udc5b\u22650 Employed Algorithm Figure 1: Efficiency Ordering: From SGD to TTSA. general TTSA framework, beyond mere SGD algorithms, as depicted in Figure 1. To better understand this, let UZ(g) \u225clims\u2192\u221e1 sE[\u2206s\u2206T s ] be the sampling covariance matrix for a vector-valued function g : \u039e \u2192Rd and stochastic process {Zn}, where \u2206s =Ps n=1(g(Zn) \u2212E\u00b5[g]) and E\u00b5[g]=P i\u2208\u039e g(i)\u00b5i. Definition 3.1 (Efficiency Ordering, (Mira, 2001; Hu et al., 2022)). For two Markov chains {Wn} and {Zn} with identical stationary distribution \u00b5, we say {Zn} is more sampling-efficient than {Wn}, denoted as W \u2aaf Z, if and only if UW (g) \u2265L UZ(g) for any vectorvalued function g. 
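The sampling covariance U_Z(g) underlying Definition 3.1 can be estimated by brute force: simulate the chain, accumulate Delta_s = sum_n (g(Z_n) - E_mu[g]), and average Delta_s Delta_s^T / s over independent runs. The sketch below does this for a toy pair of two-state chains sharing the same stationary distribution (not the SRW/NBRW pair discussed next); the slower-mixing chain yields the larger covariance, illustrating the ordering.

```python
import numpy as np

def sampling_covariance(P, mu, g, s=2_000, trials=100, rng=None):
    # Monte Carlo estimate of U_Z(g) = lim_s (1/s) E[Delta_s Delta_s^T],
    # Delta_s = sum_{n=1}^s (g(Z_n) - E_mu[g]), for the chain with kernel P.
    rng = np.random.default_rng() if rng is None else rng
    N, d = P.shape[0], g.shape[1]
    g_centered = g - mu @ g                  # g(i) - E_mu[g]
    U = np.zeros((d, d))
    for _ in range(trials):
        z = rng.choice(N, p=mu)              # start in stationarity
        delta = np.zeros(d)
        for _ in range(s):
            z = rng.choice(N, p=P[z])
            delta += g_centered[z]
        U += np.outer(delta, delta) / s
    return U / trials

# Toy comparison: an i.i.d.-like chain vs. a lazier (slower-mixing) chain
# with the same uniform stationary distribution.
P = np.array([[0.5, 0.5], [0.5, 0.5]])
P_lazy = 0.5 * np.eye(2) + 0.5 * P
mu = np.array([0.5, 0.5])
g = np.array([[0.0], [1.0]])                 # scalar test function
print(sampling_covariance(P, mu, g, rng=np.random.default_rng(0)))       # roughly 0.25
print(sampling_covariance(P_lazy, mu, g, rng=np.random.default_rng(0)))  # roughly 0.75
```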
Examples of sampling strategies following Definition 3.1 include random and single shuffling paradigms (Ahn et al., 2020; Safran and Shamir, 2020), which are shown to be more sampling-efficient when compared to i.i.d. sampling. Another example, relevant in the context of token algorithms in distributed learning, is the so-called non-backtracking random walk (NBRW) (Alon et al., 2007; Lee et al., 2012; Ben-Hamou et al., 2018), which is more sampling-efficient than simple random walk (SRW). We point the reader to Hu et al. (2022, Section 4) for more detailed discussions, where more efficient sampling strategies employed in SGD algorithms lead to reduced asymptotic covariance of iterate errors. With two efficiency-ordered sampling strategies, we now extend the same performance ordering to TTSA, the proof of which can be found in Appendix A.3.1. Proposition 3.2. For the TTSA algorithm (2), given two different underlying Markov chains {Wn}n\u22650 and {Zn}n\u22650 that are efficiency ordered, i.e., W \u2aafZ, we have V(W ) x \u2265L V(Z) x and V(W ) y \u2265L V(Z) y . Proposition 3.2 enables us to expand the scope of Hu et al. (2022) by employing sampling-efficient strategies to a wider class of optimization problems within the TTSA framework. Specifically, our scope extends existing results as follows: (i) From vanilla SGD to its variants: The TTSA structure accommodates many SGD variants for finite-sum 102 103 104 105 Number of steps (n) 10 4 10 3 10 2 10 1 MSE xn x * 2 SRW NBRW i.i.d. sampling shuffling (a) MSE 102 103 104 105 Number of steps (n) 10 1 100 Rescaled MSE xn x * 2/ n SRW NBRW i.i.d. sampling shuffling (b) Rescaled MSE Figure 2: Comparison of the performance among different sampling strategies in momentum SGD. minimization, including the Polyak-Ruppert averaging (Ruppert, 1988; Polyak and Juditsky, 1992) and momentum SGD (Gadat et al., 2018; Li et al., 2022). Other variants of SGD in the TTSA framework, e.g., signSGD and normalized SGD, are provided in Xiao et al. (2023, Section 4.3) with detailed expressions. (ii) From finite-sum minimization to bilievel and minimax problems: Many algorithms within the TTSA framework can handle bilevel and minimax problems. For instance, Hong et al. (2023, Algorithm 1) effectively deals with both inner and outer objectives in bilevel optimization, while the stochastic gradient descent ascent algorithm (Lin et al., 2020, Algorithm 1) seeks saddle points in the minimax problem. From Proposition 3.2, all the above algorithms enjoy improved asymptotic performance when driven by more efficient samples. For instance, in the token algorithm setting (Hu et al., 2022; Triastcyn et al., 2022; Hendrikx, 2023; Even, 2023), a token can employ NBRW over SRW to solve various optimization problems with these TTSA algorithms. When random access of each data point is possible, Hu et al. (2022, Lemma 4.2) highlights that through a statespace augmentation, shuffling algorithms \u2013 conceptualized as Markov chains \u2013 outperform i.i.d. sampling, achieving zero sampling covariance. Using Proposition 3.2, we can show that this leads to zero limiting covariance Vx, Vy for all algorithms represented as TTSA. The superiority of shuffling techniques over i.i.d. sampling has indeed been studied for specific stochastic optimization settings, such as minimax optimization (Das et al., 2022; Cho and Yun, 2022) and SGD with momentum (Tran et al., 2021). 
However, Proposition 3.2 firmly establishes this at a much broader scope as described in (i) and (ii), such as bilevel optimization with shuffling methods, whose finite-time analysis remains an open problem. Simulations. We present numerical experiments for different sampling strategies employed in the momentum SGD algorithm to solve the L2-regularized binary classification problem using the dataset a9a (with 123 features) from LIBSVM (Chang and Lin, 2011). \fCLT for TTSA with Markovian Noise: Theory and Applications Specifically, to simulate the token algorithm in distributed learning, we employ NBRW and SRW as the stochastic input to the momentum SGD on the wikiVote graph (Leskovec and Krevl, 2014), comprising 889 nodes and 2914 edges.5 Each node on the wikiVote graph is assigned with one data point from the dataset a9a, thus 889 data points in total. We also assess the momentum SGD\u2019s performance under i.i.d. sampling and single shuffling using the same dataset of size 889. In Figure 2(a), we observe that NBRW has a smaller MSE than SRW across all time n, with a similar trend for single shuffling over i.i.d. sampling. Figure 2(b) demonstrates that the rescaled MSEs of NBRW, SRW and i.i.d. sampling approach some constants, while the curve for single shuffling still decreases in linear rate because eventually the limiting covariance matrix therein will be zero. We defer the detailed simulation settings and more simulation results to Appendix A.5. 3.2 Asymptotic Behavior of Nonlinear GTD Algorithms The CLT result not only allows comparison of limiting covariance matrices of two efficiency-ordered stochastic inputs in distributed learning, but also offers insights into an algorithm\u2019s asymptotic performance, as showcased in Table 1. This is particularly relevant in RL where the stochastic sequence {\u03ben} is generated by a given policy and thus uncontrollable. An important aspect in RL is policy evaluation in MDP with the primary goal of estimating the value function of a given policy, which is essential for further policy improvement (Sutton and Barto, 2018). In this part, we focus on a family of gradient-based TD learning (GTD) algorithms, which are instances of TTSA (Maei et al., 2009; Wang et al., 2021). We leverage Theorem 2.2 to derive the pioneering statistical properties of these algorithms when using nonlinear value function approximation and Markovian samples for policy evaluation. Tabular methods for estimating the value function, such as SARSA, have been widely used, but can be problematic when the state-action space is large (Sutton and Barto, 2018). TD learning with linear function approximation has been extensively studied (Srikant and Ying, 2019; Doan et al., 2019; Wang et al., 2020; Li et al., 2023a). In contrast to linear function approximation, nonlinear approaches, e.g. neural networks, are more practical choices known for their strong representation capabilities and eliminating the need for feature mapping (Wai et al., 2020; Wang et al., 2021). However, Tsitsiklis and Van Roy (1997) notes the potential divergence of TD learning with nonlinear function approximation. Addressing the divergence, Maei 5We incorporate both NBRW and SRW with importance reweighting to achieve a uniform target distribution. et al. (2009) introduces nonlinear GTD2 and TDC algorithms with almost sure convergence guarantees. 
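One way to see why Proposition 3.2 covers momentum SGD is to write the heavy-ball update in the TTSA template (2), with the parameter vector on the slow timescale and the momentum buffer on the fast one. The sketch below uses one such parameterization, h1(theta, m, xi) = -m and h2(theta, m, xi) = grad f_xi(theta) - m, so that lambda(theta) = grad f(theta); this is an illustrative casting and may differ in detail from the parameterizations in the works cited above. The toy quadratic and the cyclic sampler stand in for the a9a experiment.

```python
import numpy as np

def momentum_sgd_ttsa(grad_f, theta0, sampler, n_steps, a=0.7, b=0.9):
    # Heavy-ball momentum SGD in the TTSA template (2):
    # slow iterate x_n = theta_n with h1(theta, m, xi) = -m,
    # fast iterate y_n = m_n with h2(theta, m, xi) = grad_f(theta, xi) - m,
    # so lambda(theta) = grad f(theta) and hat h1(theta) = -grad f(theta).
    theta = np.asarray(theta0, float)
    m = np.zeros_like(theta)
    for n in range(1, n_steps + 1):
        xi = sampler(n)                              # Markovian or shuffled index
        beta, gamma = (n + 1.0) ** (-b), (n + 1.0) ** (-a)
        theta_new = theta - beta * m                 # x-update with h1
        m = m + gamma * (grad_f(theta, xi) - m)      # y-update with h2
        theta = theta_new
    return theta

# Usage sketch on a toy finite sum f_i(theta) = 0.5 * (theta - c_i)^2.
c = np.array([-1.0, 0.0, 3.0])
grad_f = lambda theta, i: theta - c[i]
sampler = lambda n: n % 3                            # cyclic (shuffling-style) passes
print(momentum_sgd_ttsa(grad_f, np.array([0.0]), sampler, 50_000))  # near mean(c) = 2/3
```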
These methods iterate over gradients of the meansquare projected Bellman error (MSPBE) in order to obtain the best estimate of the nonlinear value function that minimizes MSPBE (Maei et al., 2009; Xu and Liang, 2021; Wang et al., 2021). While non-asymptotic analyses of GTD2 and TDC algorithms have been established for both i.i.d. and Markovian settings with linear approximations (Karmakar and Bhatnagar, 2018; Dalal et al., 2018, 2020; Kaledin et al., 2020; Li et al., 2023a), results for the nonlinear function approximation remain scarce since MSEPBE becomes nonconvex and the two-timescale update rule is nonlinear. For asymptotic analysis, Karmakar and Bhatnagar (2018) studies the almost sure convergence of general TTSA and applies it to nonlinear TDC algorithm, extending from i.i.d. (Maei et al., 2009) to Markovian samples. This analysis can also be applied to nonlinear GTD2 algorithm. Only a few works (Xu and Liang, 2021; Wang et al., 2021) provide non-asymptotic analysis specifically for nonlinear TDC algorithm with Markovian samples and constant step sizes while the results cannot be extrapolated to nonlinear GTD2 algorithm. Therefore, a comprehensive analysis of these algorithms with Markovian samples under decreasing step sizes remains lacking in RL. We now summarize nonlinear GTD2 and TDC algorithms, followed by their asymptotic results in Proposition 3.3. An MDP is defined as a 5-tuple (S, A, P, r, \u03b1), where S and A are the finite state and action spaces, and P and r are transition kernel and reward function, with \u03b1 being a discount factor. A policy \u03c0 maps each state s\u2208S onto an action probability distribution \u03c0(\u00b7|s), with \u00b5\u03c0 being the corresponding stationary distribution. The Markovian samples {sn} then follow the transition probability P(s, s\u2032) = P a\u2208AP(s\u2032|s, a)\u03c0(a|s). The value function for policy \u03c0 from initial state s is W \u03c0(s)=E\u03c0[P\u221e n=0 \u03b1nrn|s0 = s], where rn \u225cr(sn, an, sn+1). The GTD2 and TDC algorithms estimate W \u03c0(s) via nonlinear functions Wx(s) and its feature function \u03d5x(s) = \u2207xWx(s) parameterized by x. For linear approximation Wx(s) = \u03d5(s)T x, \u03d5(s) is independent of x. However, with nonlinear Wx(s), \u03d5x(s) depends on x. Defining TD error as \u03b4n = rn + \u03b1Wxn(sn+1) \u2212Wxn(sn), the iterates (xn, yn) of the GTD2 and TDC algorithms admit an equilibrium (x\u2217, y\u2217), with x\u2217ensuring Esn\u223c\u00b5[\u03b4n(x\u2217)\u03d5x\u2217(sn)] = 0, and y\u2217= 0. Details of these algorithms and conditions for the following CLT results are in Appendix A.4.1. Proposition 3.3. For both nonlinear GTD2 and TDC \fJie Hu, Vishwaraj Doshi, Do Young Eun 102 103 104 105 106 107 108 Number of steps (n) 10 3 10 2 10 1 MSE (x2 n) GTD2 TDC Line nVx (a) MSE 0.8 0.6 0.4 0.2 0.0 0.2 0.4 0.6 0.8 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 Histogram for GTD2 GTD2 Fit Theory Histogram for TDC TDC Fit (b) Histograms of \u03b2\u22121/2 n xn Figure 3: Comparison of nonlinear GTD2 and TDC algorithms in the 5-state random walk task. algorithms under Markovian samples, we have lim n\u2192\u221exn = x\u2217 a.s. and lim n\u2192\u221eyn = 0 a.s. 1 \u221a\u03b2n (xn \u2212x\u2217) d \u2212 \u2212 \u2192N(0, Vx), 1 \u221a\u03b3n yn d \u2212 \u2212 \u2192N(0, Vy), where Vx, Vy are identical for both algorithms. The proof of Proposition 3.3 and the exact forms of Vx, Vy are in Appendix A.4.2. 
This proposition offers a state-of-the-art performance analysis of nonlinear GTD2 and TDC algorithms in RL, employing Markovian samples and general decreasing step sizes. While Doan (2021b); Zeng et al. (2021) provide finite-time bounds within the general TTSA framework, their applicability to nonlinear GTD2 and TDC algorithms is restricted by specific choice of the step sizes, as explained in Section 1.1. The usefulness of these finitetime results are further limited due to the lack of any definitive indication regarding the tightness of the bounds associated with these two algorithms. Moreover, empirical studies (Dann et al., 2014; Ghiassian et al., 2020) have not consistently favored either one of the two algorithms when compared across all tasks, leading to a lack of consensus regarding which one is the better performing overall. Proposition 3.3 clarifies that, in the long run, both GTD2 and TDC algorithms exhibit identical behaviors under the CLT scaling. Simulations. We consider a 5-state random walk task (Dann et al., 2014; Sutton and Barto, 2018) for nonlinear GTD2 and TDC algorithms. Each state can transit to the right or left next state with probability 0.5, with reward +0.5 if turning right or \u22120.5 otherwise. Let discount factor \u03b1 = 0.9, we consider the nonlinear value function Wx(s)=a(s)(e\u03bax\u22121) for a scalar parameter x, where a = [\u22122, \u22126, \u22123, \u22124, \u22125], \u03ba = 0.1. The ground truth W(s) = 0 for s \u2208[5] such that x\u2217= 0 for Wx(s) achieves the accurate approximation. Figure 3(a) illustrates the long-term performance of GTD2 and TDC algorithms. Starting from n = 107, they align with the line \u03b2nVx, with Vx being a scalar from Proposition 3.3. This reaffirms the relationship between MSE and CLT, as detailed in Section 2.3. Figure 3(b) displays a histogram of \u03b2\u22121/2 n xn, generated from 100 independent trials at n = 108 for both algorithms. We show that their experimental density curves are close to the theoretical Gaussian density with zero mean and variance Vx. We defer the detailed simulation settings, calculations related to Figure 3, and additional simulation results to Appendix A.5. 4" + }, + { + "url": "http://arxiv.org/abs/2311.11121v1", + "title": "Development of MKIDs in the Optical and Near-infrared Bands for SPIAKID", + "abstract": "SpectroPhotometric Imaging in Astronomy with Kinetic Inductance Detectors\n(SPIAKID) aims at designing, building, and deploying on the sky a\nspectrophotometric imager based on microwave kinetic inductance detectors\n(MKIDs) in the optical and near-infrared bands. MKIDs show a fast response and\nthe ability to resolve photon energy compared to the conventional\nCharge-coupled Devices (CCDs). In this paper, we present the design and\nsimulation of the MKID arrays for SPIAKID. The detectors consist of four arrays\nwith each array of 20,000 lumped-element pixels, and each array will be read\nwith 10 readout lines. %The array is designed to have resonances between 4-8GHz\nwith a frequency spacing of 2 MHz and a coupling quality factor (Qc) of about\n50000. The meander material of the resonators is trilayer TiN/Ti/TiN to have\nbetter uniformity of the critical temperature across the array. 
We also present\nthe measurement result for a test array with $30\\times30$ pixels which is a\nsubset of the designed 2000-pixel array to verify the design and fabrication.\nThe current measured best energy resolving power $R = E/\\Delta E$ is 2.4 at\n$\\lambda = 405~$nm and the current medium R is around 1.7. We have also\nobserved the response of the TiN/Ti/TiN is much smaller than expected.", + "authors": "Jie Hu, Paul Nicaise, Faouzi Boussaha, Jean-Marc Martin, Christine Chaumont, Alexine Marret, Florent Reix, Josiane Firminy, Thibaut Vacelet, Viet Dung Pham, Michel Piat, Elisabetta Caffau, Piercarlo Bonifacio", + "published": "2023-11-18", + "updated": "2023-11-18", + "primary_cat": "astro-ph.IM", + "cats": [ + "astro-ph.IM" + ], + "main_content": "Introduction SPIAKID is an ERC-funded project that aims to open the way to a new class of wide-range, wide-field, high-efficiency, and high-angular-resolution MKIDs-based spectro-imagers via a demonstrator instrument at NTT telescope in Chile. The primary objective of the project is to perform a detailed study of the stellar populations of at least one ultra-faint dwarf galaxy[1\u20134] (UFD) in the local group. Other science cases are the follow-up observations of sources of gravitational waves[5] and afterglows of gamma-ray bursts (GRBs)[6], characterization of the minor bodies of the solar system, and detection of exoplanet transits and exoplanet transit spectroscopy[7]. MKIDs show a great advantage[8, 9] over CCD cameras for their intrinsic energy resolution as well as the ability to record the arrival time of the photons. This means that for any object in the Field-of-View, one records the number of photons arrived over a given time interval and for any chosen wavelength bin, that is a spectrum, without the need to disperse the light with a prism or grating. The resolving power of such spectrum, E/\u2206E, is dictated by the performance of the detector. SPIAKID aims at a Field-of-View of 2\u2019 x 2\u2019 in the sky. This can be achieved with four MKID arrays of 20,000 pixels each on the focal plane. Due to budgetary restrictions, we shall equip the focal plane with four arrays, but we shall read only two arrays with 10 feed lines each. Each feedline will read out 2000 pixels. The wavelength range covered by our detectors will cover the optical and near-infrared (0.4 \u00b5m to 1.6 \u00b5m). An MKID pixel in the optical band and the near-infrared band is a superconducting resonator usually consisting of an interdigital capacitor and a meander line. Each pixel has a unique resonance frequency. All the pixels share the same meander design. Multiplexing is realized by tuning the capacitance by changing the finger length of the prototype interdigital capacitor. The resonance frequency spacing is usually on the order of 2 MHz. The meander is usually made of superconductors with higher normal resistivity, such as TiN[10\u201312], TiN/Ti/TiN[13], PtSi[14], Hafnium[15], Hafnium/Indium[16] and \u03b2-Tantalum[17], which is quite challenging to keep high resistivity and high film quality. TiN/Ti/TiN for better uniformity over the wafer, relative ease of fabrication, as well as high quality. Designing a 2000-pixel array is also not easy. First, as the capacitance becomes smaller, the length change in the finger of the interdigital capacitor becomes smaller, eventually becoming less than 1 \u00b5m, which is difficult for fabrication, especially with regular uv lithography. 
The second is frequency collision originating from the fabrication uncertainty, especially from the size of the meander. 2 \fIn this paper, we introduce the design of the MKID array for SPIAKIDs with an increasing gap in the capacitor to reduce the resonance frequency sensitivity to finger length as the resonance frequency increases. We will also present the measurement result of an MKIDs array with 30\u00d730 pixels, which is a subset of the designed 2000 pixels array. This resonance frequency ranges from 4-8 GHz, with a frequency spacing of about 4 MHz, which is chosen based on the limited internal quality factor of our current fabrication. The array is a key step in verifying the design and the fabrication procedure for the full array for SPIAKID. 2 MKIDs Design and Simulation The design goal of the MKID array for SPIAKID is to design an MKID array with resonance frequencies ranging from 4 to 8 GHz with a length variation in the finger of each capacitor greater than 1 \u00b5m and a resonance spacing of about 2 MHz, making it possible to fabricate the MKID array in ordinary lithography. Here, we gradually increase the gap between the fingers of the capacitor to reduce the capacitance sensitivity to the length of the finger. One of the designed MKIDs is shown in Fig.1. AC R(t) L(t) Fig. 1 One of the designed MKID pixels in the array with resonance around 6 GHz and the equivalent circuit of MKIDs We select the material of the meander to be TiN/Ti/TiN[13] mainly to improve the uniformity of the film across the wafer. The film of the meander is made of TiN/Ti/TiN with a critical temperature of 1.75 K and thickness to be 10/10/10 deposited on a sapphire substrate. The resistivity of the trilayer film is about 93 \u00b5\u2126\u00b7cm, corresponding to kinetic inductance[18] Lk \u224824.5 pH/\u25a1. The meander is a double-folded meander to reduce the crosstalk between the pixels[19]. The size of the meander is 36\u00d736 \u00b5m2, to accommodate the optics from the telescope, which includes a microlens set that is about 0.7 mm above the MKIDs array. The width of the meander line is 2.5 \u00b5m and the gap between the meander line is 0.5 \u00b5m. The distance between the adjacent 3 \fFig. 2 (A): Simulated Cc0 versus the number of the fingers in the capacitor that has been shorted. (B): Tuned Qc versus the resonance frequency. (C): The change of finger length \u2206l versus the resonance frequency. The inset shows the statistics of \u2206l for the 2000-pixel array. MKID pixels is 180 \u00b5m, which corresponds to 0.45\u201d on the sky based on our current optical design. The width of the fingers in the capacitor is fixed at 1 \u00b5m, while the spacing between them changes from 1 \u00b5m to 4.5 \u00b5m when the resonance frequency changes from 4 to 8 GHz. We couple the resonator to the feedline with a coupling bar. The coupling quality factor of the MKIDs scales with the coupling capacitor as follows[20] Qc = 2 Z0L0C2 c \u03c93 r (1) where Z0 is the characteristic impedance of the feedline, L0 is the total inductance in the resonator, Cc is the coupling capacitor to the feedline and \u03c9r = 2\u03c0fr is the angular frequency with fr the resonance frequency. 
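Reading the flattened Eq. (1) as Qc = 2 / (Z0 · L0 · Cc^2 · wr^3), the short sketch below evaluates Qc and inverts the relation to find the coupling capacitance needed for the target Qc of about 50,000. The feedline impedance and the total inductance L0 are placeholder values for illustration, not the actual design parameters.

```python
import numpy as np

Z0 = 50.0                       # feedline characteristic impedance [ohm] (assumed)

def qc(cc, l0, fr, z0=Z0):
    """Coupling quality factor from Eq. (1) for coupling capacitance cc [F],
    total inductance l0 [H], and resonance frequency fr [Hz]."""
    wr = 2 * np.pi * fr
    return 2.0 / (z0 * l0 * cc**2 * wr**3)

def cc_for_target(qc_target, l0, fr, z0=Z0):
    """Invert Eq. (1): coupling capacitance needed to hit a target Qc."""
    wr = 2 * np.pi * fr
    return np.sqrt(2.0 / (z0 * l0 * qc_target * wr**3))

l0 = 2.0e-9                     # assumed total resonator inductance [H]
for fr in (4e9, 6e9, 8e9):
    cc = cc_for_target(5e4, l0, fr)
    print(f"fr = {fr/1e9:.0f} GHz -> Cc ~ {cc*1e15:.2f} fF, check Qc = {qc(cc, l0, fr):.0f}")
```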
And Cc can be further expressed as a parallel connection of the coupling capacitor Ccc and the parasitic capacitor to the ground Cc0 as Cc = Ccc + Cc0 (2) Ccc can be adjusted by tuning the length of the coupling bar by simulating the resonator with different lengths of the coupling bar and fitting the S21 of the resonator with S21 = 1 \u2212 QL/Qc 1 + 2jQL\u03b4x (3) where QL is the quality factor of the resonator and \u03b4x = (f \u2212fr)/fr is the fraction of frequency shift to the resonance frequency fr. Cc0 corresponds to the coupling capacitance when Ccc = 0 and tends to change with resonance frequency. To tune the Qc to be around the desired value, which is 50,000 in our case, Cc0 is simulated with different capacitance tuned by the finger length in the interdigital capacitor as is shown in Fig. 2-(A). Cc0 tends to increase and 4 \f4K LNF-LNC0.3_14B -20dB 1K 0.1K 70K MKIDs NbTi Coax CuTi Coax 35dB -10dB RCDAT-8000-60 300K Laser LP405C1 15dB Pulse Generator TG5012A 30dB ZVA-183W-S+ AD(0.4-6.0) Oscilloscope HDO6034 DD-100 Nb shield Metglas 2714A -20dB SMA100B XMA 208220dB Double DC Block Fig. 3 Detailed measurement setup for the MKIDs in the ADR saturate, which means that Qc cannot be tuned further. To solve this problem, we adjust the gap width in the interdigital capacitor and the ground bar width, as shown in Fig. 1 to tune the Cc0. We further tune Qc by tuning Ccc, which can be obtained by sweeping the coupling bar length. Once Cc0 and Ccc are obtained, we interpolate the length of the coupling bar to obtain the desired Qc which is shown in Fig. 2-(B). Finally, we linearly interpolate each finger to obtain the array. The change in finger length \u2206l is shown in Fig. 2-(C), and the statistics of \u2206l are shown in the inset of Fig. 2-(C), which shows that most of \u2206l is larger than 1 \u00b5m. 3 MKIDs Characterization The MKIDs is measured in an adiabatic demagnetization refrigerator (ADR) at Laboratoire Astroparticule & Cosmologie (APC). The detailed measurement setup is shown in Fig. 3. The stray magnetic field is shielded by a niobium cylinder of 1.5 mm thickness and sheets of metglas 2714a around MKIDs. The MKIDs are readout by a standard homodyne mixing scheme. The input signal is generated by a signal generator attenuated 20 dB, 10 dB, and 20 dB on 4 K, 1 K, and 100 mK to reduce the thermal noise. The output signal from MKIDs is first amplified by an LNA on the 4K stage and amplified further by two room-temperature amplifiers, which is the main source for the readout noise The signal is down-converted to DC by an IQ mixer and then sampled by an oscilloscope. Two double DC blocks are placed between the 4K and 1K stages for the operation of the heat switch in the cryostat. The MKIDs array is illuminated by an optical fiber that is placed 35 mm above the pixels. The laser is modulated by a 250 Hz pulse from the pulse generator. The output power of the laser is estimated to be a few pW outside the cryostat, attenuated by a digital step attenuator. The pulse response of the MKID is sampled by an oscilloscope (HDO6034) at 100 MHz. The S21 is measured by replacing the IQ mixer with a VNA. It is the same measurement setup used in our previous publications[12, 21]. The Measured MKIDs array has 30 \u00d7 30 pixels, which is a subset of the designed 2000-pixel array. The MKIDs were fabricated by magneton sputtering in the clean room in Paris Observatory. The picture of the fabricated MKIDs is shown in Fig. 4-(B). The measured S21 of the array is shown in Fig. 4-(A). 
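As an illustration of how Eq. (3) is used, the sketch below generates a synthetic notch resonance and recovers fr, QL, and Qc (hence Qi) with a complex least-squares fit. The noise level, starting guesses, and the use of SciPy's least_squares are assumptions; real measured S21 would normally also need cable-delay and asymmetry corrections before fitting.

```python
import numpy as np
from scipy.optimize import least_squares

def s21(f, fr, ql, qc):
    """Ideal notch transmission of Eq. (3): S21 = 1 - (QL/Qc) / (1 + 2j*QL*dx)."""
    dx = (f - fr) / fr
    return 1.0 - (ql / qc) / (1.0 + 2j * ql * dx)

# Synthetic resonance roughly matching the pixel discussed later (fr ~ 5.8 GHz)
fr_true, qi_true, qc_true = 5.8e9, 53_000, 50_000
ql_true = 1.0 / (1.0 / qi_true + 1.0 / qc_true)
f = np.linspace(fr_true - 2e6, fr_true + 2e6, 801)
rng = np.random.default_rng(1)
data = s21(f, fr_true, ql_true, qc_true) + 0.002 * (rng.standard_normal(f.size)
                                                    + 1j * rng.standard_normal(f.size))

def residual(p):
    model = s21(f, p[0], p[1], p[2])
    return np.concatenate([(model - data).real, (model - data).imag])

fit = least_squares(residual, x0=[5.8e9 + 1e5, 2e4, 4e4], x_scale=[1e6, 1e4, 1e4])
fr_fit, ql_fit, qc_fit = fit.x
qi_fit = 1.0 / (1.0 / ql_fit - 1.0 / qc_fit)   # 1/Qi = 1/QL - 1/Qc
print(f"fr = {fr_fit/1e9:.6f} GHz, QL = {ql_fit:.0f}, Qc = {qc_fit:.0f}, Qi = {qi_fit:.0f}")
```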
The resonance frequency starts 5 \fFig. 4 Measured S21 for the MKIDs array and its statistics. (A): The measured S21 between 46 GHz. The inset shows the transmission between 5.78-5.86 GHz. (B): Photon of measured MKIDs chp. (C): The resonance frequency versus the resonance index. (D): The fitted Qi versus resonance frequency. (E): The statistics of the Qi from 4 GHz, which is in good agreement with the simulation. The resonance frequency versus the resonance index shown shows quite a good linearity, which indicates the cross-coupling between the pixels is acceptable[19], as is shown in Fig. 4-(C). We have fitted Qi with Eq. (3) in Fig. 4-(D) and it doesn\u2019t show significant frequency dependence. The yield of the array is around 75%. The median internal quality factor Qi is around 30, 000 and it doesn\u2019t show significant frequency dependence, as is shown in Fig. 4-(D). We show the single photon performance of a pixel with a resonance frequency of around 5.8 GHz in Fig. 5. The Qi of the pixel is around 53,000. Fig. 5-(A) shows the single photon phase response of the pixel at different bath temperatures (Tbath) that is averaged from the full width at half maximum (FWHM) in the pulse statistics and the inset shows the pulse maximum that increases about 50% from Tbath = 150 mK to Tbath = 300 mK. The noise spectrum at different temperatures is shown in Fig. 5-(B). The pulse statistics is shown in Fig. 5-(C), which shows the E/\u2206E to be around 2.1 @405 nm. The 1-\u03c3 width of the 1-photon peak and the 0-photon peak is 1.1 deg and 0.8 deg respectively, which indicates the noise from readout system is not significant. It can be seen from Fig. 5-(D) that energy resolving power increases a bit as Tbath increases, which can be attributed to the increasing signal-to-noise ratio (SNR) and a reduction in the noise of the two-level system[22, 23], as indicated in Fig. 5-(C). This phenomenon has also been observed in single-layer TiN MKIDs[12], but much 6 \fless significant. The E/\u2206E is estimated to be around 1.2-1.4 for the same pixel based on our measurement on another array with the same design. It should also be noted that the E/\u2206E we obtained is comparable with those published results[24], considering the volume of our meander V = 32.4 \u00b5m3. The response of the MKIDs is much smaller than what we expected. The expected response of the MKIDs is[25] \u03d5qp = \u03b1S2QL N0V \u2206\u00b7 \u03b7E \u2206 (4) where \u03b1 \u22481 is the fraction of the kinetic inductance. S2 = 2.73 is so-called the MattisBardeen factor[26] with f0 = 5.8 GHz, N0 is the single spin density on the Fermi level, \u03b7 is the pair-breaking efficiency, E = 3.06 eV is the photon energy of a 405 nm photon, and \u2206\u22481.76kBTc is the energy gap of the superconductor. If we assume \u03b7 = 0.6 and N0 = 6.0\u00d71010eV\u22121\u00b5m\u22123, which is the value for a single layer TiN[27], with QL \u224814000, the estimated \u03d5qp \u224830 deg, about 5 times higher than the value we have measured. In this case, there are two possible reasons. The first is the N0 for the TiN/Ti/TiN film could be much higher than the single layer TiN. The other possible reason is the pair-breaking efficiency \u03b7 is much smaller in the TiN/Ti/TiN film due to the different quasi-particle energy in different layers. Fig. 5 (A): Averaged pulse of single photon response at 405 nm at different Tbath. The inset shows the maximum of the pulse response versus Tbath. (B): Noise spectrum measured at different Tbath. 
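The estimate quoted above can be reproduced directly from Eq. (4) with the values given in the text; the short check below returns roughly 29-30 degrees, i.e. about five times the measured response.

```python
import numpy as np

# phi_qp = alpha * S2 * QL / (N0 * V * Delta) * eta * E / Delta   (Eq. 4)
kB = 8.617e-5                    # Boltzmann constant [eV/K]
alpha, S2, QL = 1.0, 2.73, 14_000
N0 = 6.0e10                      # single-spin density of states [eV^-1 um^-3]
V = 32.4                         # meander volume [um^3]
eta, E = 0.6, 3.06               # pair-breaking efficiency, 405 nm photon energy [eV]
Tc = 1.75                        # critical temperature [K]
Delta = 1.76 * kB * Tc           # superconducting gap [eV]

phi_qp = alpha * S2 * QL / (N0 * V * Delta) * eta * E / Delta   # [rad]
print(f"Delta = {Delta*1e6:.0f} ueV, expected phase response = {np.degrees(phi_qp):.1f} deg")
# ~29-30 deg, about 5x the measured value, as noted in the text.
```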
(C): The pulse statistics at Tbath = 250 mK. (D): Fitted energy resolution and SNR versus Tbath. We show the statistics of the energy-resolving power of the pixels with resonance frequency between 4-6 GHz measured at 250 mK. About 330 resonators have been 7 \fFig. 6 (A): Statistics of E/\u2206E for resonators from 4.3 GHz to 6 GHz, around 330 pixels are measured. (B-D):E/\u2206E versus Qi, f0 and Qi/Qc measured individually, about 70% of which show a single photon response. The median E/\u2206E is about 1.7. We do not observe significant dependence between E/\u2206E and the internal quality Qi as well as fr. E/\u2206E tends to increase when the ratio between Qi and Qc increases, which is shown in Fig. 6-(D), which is reasonable as when Qi > Qc, the readout noise in the system tends to be suppressed as the ratio of resonance circle increases. The main reason for this low E/\u2206E is that the response of the TiN/Ti/TiN film is unexpectedly low. We would like to reduce the size to 25x25 \u00b5m2 or even 15x15 \u00b5m2. The second is to optimize the film quality. Currently, the Tc of the TiN/Ti/TiN is around 1.75 K. We would further optimize the film to have a Tc = 1.2 \u22121.4 K. In this case, we hope we can get an improvement of E/\u2206E with a factor of 3-4. We stress, however, that even this low E/\u2206E is sufficient for the main science goal of SPIAKID, namely the characterization of the stellar populations of Ultra Faint Dwarf galaxies (UFDs). We reached this conclusion after analyzing synthetic fluxes computed from model stellar atmospheres. The main conclusion is that stellar parameters can be extracted from a SPIAKID spectrum even if E/\u2206E is as low as currently afforded by our detectors, provided it is known for any given wavelength. We used theoretical stellar fluxes, computed from one-dimensional model stellar atmospheres in hydrostatic equilibrium, computed with a resolving power R = E/\u2206E = 200. We used fluxes of two models with the same effective temperature and gravity, but different concentrations of elements heavier than He, and scaled the fluxes to the absolute magnitude predicted by theoretical isochrones of two different ages. Such a combination is what you can expect to find among the stellar populations of UFDs. We verified that the monochromatic magnitude difference between the two fluxes is about 8 \f0.2 magnitudes, both for the original fluxes and for those degraded to R=2.5. Hence it shall be, in principle, possible to derive stellar parameters from the observed SPIAKID spectra, by fitting theoretical spectra. This requires, of course, that the resolving power be known at any given wavelength. This information shall be provided by the SPIAKID calibration plan. 4" + } + ], + "Chunjing Xu": [ + { + "url": "http://arxiv.org/abs/0711.3594v1", + "title": "Clustering with Transitive Distance and K-Means Duality", + "abstract": "Recent spectral clustering methods are a propular and powerful technique for\ndata clustering. These methods need to solve the eigenproblem whose\ncomputational complexity is $O(n^3)$, where $n$ is the number of data samples.\nIn this paper, a non-eigenproblem based clustering method is proposed to deal\nwith the clustering problem. Its performance is comparable to the spectral\nclustering algorithms but it is more efficient with computational complexity\n$O(n^2)$. 
We show that with a transitive distance and an observed property,\ncalled K-means duality, our algorithm can be used to handle data sets with\ncomplex cluster shapes, multi-scale clusters, and noise. Moreover, no\nparameters except the number of clusters need to be set in our algorithm.", + "authors": "Chunjing Xu, Jianzhuang Liu, Xiaoou Tang", + "published": "2007-11-22", + "updated": "2007-11-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "Introduction Data clustering is an important technique in many applications such as data mining, image processing, pattern recognition, and computer vision. Much e\ufb00ort has been devoted to this research [12], [9], [15], [13], [8], [3], [18], [1]. A basic principle (assumption) that guides the design of a clustering algorithm is: Consistency: Data within the same cluster are closed to each other, while data belonging to di\ufb00erent clusters are relatively far away. According to this principle, the hierarchy approach [10] begins with a trivial clustering scheme where every sample is a cluster, and then iteratively \ufb01nds the closest (most similar) pairs of clusters and merges them into larger clusters. This technique totally depends on local structure of data, without optimizing a global function. An easily observed disadvantage of this approach is that it often fails when a data set consists of multi-scale clusters [18]. Besides the above consistency assumption, methods like the K-means and EM also assume that a data set has some kind of underlying structures (hyperellipsoid-shaped or Gaussian distribution) and thus any two clusters can be separated by hyperplanes. In this case, the commonly-used Euclidean distance is suitable for the clustering purpose. With the introduction of kernels, many recent methods like spectral clustering [13], [18] consider that clusters in a data set may have more complex shapes other than compact sample clouds. In this general case, kernel-based techniques are used to achieve a reasonable distance measure among the samples. In [13], the eigenvectors of the distance matrix play a key role in clustering. To overcome the problems such as multi-scale clusters in [13], Zelnik-manor and Perona proposed self-tuning spectral clustering, in which the local scale of the data and the structure of the eigenvectors of the distance matrix are considered [18]. Impressive results have been demonstrated by spectral clustering and it is 1 \fregarded as the most promising clustering technique [17]. However, most of the current kernel related clustering methods, including spectral clustering that is uni\ufb01ed to the kernel K-means framework in [5], need to solve the eigenproblem, su\ufb00ering from high computational cost when the data set is large. In this paper, we tackle the clustering problem where the clusters can be of complex shapes. By using a transitive distance measure and an observed property, called K-means duality, we show that if the consistency condition is satis\ufb01ed, the clusters of arbitrary shapes can be mapped to a new space where the clusters are more compact and easier to be clustered by the K-means algorithm. With comparable performance to the spectral algorithms, our algorithm does not need to solve the eigenproblem and is more e\ufb03cient with computational complexity O(n2) than the spectral algorithms whose complexities are O(n3), where n is the number of samples in a data set. The rest of this paper is structured as follows. 
In Section 2, we discuss the transitive distance measure through a graph model of a data set. In Section 3, the duality of the K-means algorithm is proposed and its application to our clustering algorithm is explained. Section 4 describes our algorithm and presents a scheme to reduce the computational complexity. Section 5 shows experimental results on some synthetic data sets and benchmark data sets, together with comparisons to the K-means algorithm and the spectral algorithms in [13] and [18]. The conclusions are given in Section 6. 2 Ultra-metric and Transitive Distance In this section, we \ufb01rst introduce the concept of ultra-metric and then de\ufb01ne one, called transitive distance, for our clustering algorithm. 2.1 Ultra-metric An ultra-metric D for a set of data samples V = {xi|i = 1, 2, \u00b7 \u00b7 \u00b7 , n} \u2282Rl is de\ufb01ned as follows: 1) D : V \u00d7 V \u2192R is a mapping, where R is the set of real numbers. 2) D(xi, xj) \u22650, 3) D(xi, xj) = 0 if and only if xi = xj, 4) D(xi, xj) = D(xj, xi), 5) D(xi, xj) \u2264max{D(xi, xk), D(xk, xj)} for any xi, xj, and xk in V . The last condition is called the ultra-metric inequality. The ultra-metric may seem strange at the \ufb01rst glance, but it appears naturally in many applications, such as in semantics [4] and phylogenetic tree analysis [14]. To have a better understanding of it, we next show how to obtain an ultra-metric from a traditional metric where the triangle inequality holds. In Fig. 1, the distance between samples xp and xq is larger than that between xp and xs from the usual viewpoint of the Euclidean metric. A more reasonable metric on the data set should give a closer relationship (thus smaller distance) between xp and xq than that between xp and xs since xp and xq lie in the same cluster but xp and xs do not. A common method to overcome this di\ufb03culty is to create a non-linear mapping \u03c6 : V \u2282Rl \u2192V \u2032 \u2282Rs, (1) such that the images of any two clusters in Rs can be split linearly. This method is called the kernel trick and is overwhelmingly used in recent clustering schemes. Usually the mapping that can reach this goal is hard to \ufb01nd. Besides, another problem arises when the size of the data set increases; these schemes usually depend on the solution to the eigenproblem, the time complexity of which is O(n3) generally. Can we have a method that can overcome the above two problems and still achieve the kernel e\ufb00ect? In Fig. 1(a), we observe that xp and xq are in the same cluster only because the other samples marked by a circle exist; otherwise it makes no sense to argue that xp and xq are closer than xp and xs. In other words, the samples marked by a circle contribute the information to support this observation. 2 \fFigure 1: (a) A two-moon data set used to demonstrate the transitive distance, where samples of one cluster are denoted by circles and samples of another cluster are denoted by dots. (b) Maps of transitive distance matrices with di\ufb00erent orders. Let us also call each sample a messenger. Take xu as an example. It brings some messsage from xp to xq and vice versa. The way that xp and xq are closer than the Euclidean distance between them can be formulated as D(xp, xq) \u2264max{d(xp, xu), d(xu, xq)}, (2) where d(\u00b7, \u00b7) is the Euclidean distance between two samples, and D(\u00b7, \u00b7) is the distance we are trying to \ufb01nd that can re\ufb02ect the true relationship between samples. 
In (2), xu builds a bridge between xp and xq in this formulation. When more and more messengers come in, we can de\ufb01ne a distance through k of these messengers. Let P = xu1xu2 \u00b7 \u00b7 \u00b7 xuk be a path with k vertices, where xu1 = xp and xuk = xq. A distance between xp and xq with P is de\ufb01ned as DP(xp, xq) = max xui xui+1 \u2208P 1\u2264i\u2264k\u22121 {d(xui, xui+1)}. (3) We show an example in Fig. 1(a), where a path P from xp to xq is given. The new distance between xp and xq through P equals d(xu, xv), which is smaller than the original distance d(xp, xq). For samples xp and xs, there are also paths between them, such as the path Q, which also result in new distances between them smaller than d(xp, xs). However, no matter how the path is chosen, the new distance between xp and xs is always larger than or equal to the smallest gap between the two clusters as follows. Given two samples in a data set, we can have many paths connecting them. Therefore we de\ufb01ne the new distance, called the transitive distance, between two samples as follows. De\ufb01nition 1. Given the Euclidean distance d(\u00b7, \u00b7), the derived transitive distance between samples xp, xq \u2208V with order k is de\ufb01ned as Dk(xp, xq) = min P\u2208Pk max e\u2208P {d(e)}, (4) where Pk is the set of paths connecting xp and xq, each such path is composed of at most k vertices, e def = xixj, and d(e) def = d(xi, xj). In Fig. 1(b), we show the maps of transitive distance matrices for the data set in Fig. 1(a) with orders from 1 to 6, where a larger intensity denotes a smaller transitive distance. In this data set, there 3 \fare 50 samples, and the samples in each cluster are consecutively labeled. From these maps, we can see that when k is larger, the ratios of the inter-cluster transitive distances to the intra-cluster transitive distances tend to be larger. In other words, if more messengers are involved, the obtained transitive distances better represent the relationship among the samples. When the order k = n, where n is the number of all the samples, we denote Dn with D for simplicity. The following proposition shows that D is an ultrametric. Proposition 1. The transitive distance D is an ultrametric on a given data set. The proof of Proposition 1 is simple and omitted here. So given a data set V and its distance matrix E, we can obtain another ultrametric distance matrix E\u2032 through De\ufb01nition 1. In [6], an O(n3) algorithm is given to derive E\u2032 from E. In Section 4, we propose an algorithm which is almost O(n2) to obtain E\u2032. It is worth mentioning that although we use d(\u00b7, \u00b7) to denote the Euclidean distance for convenience in the previous discussion, we can replace d(\u00b7, \u00b7) with any other traditional distance (metric) in De\ufb01nition 1 and still have Proposition 1. Therefore, in what follows, d(\u00b7, \u00b7) is used to denote any traditional distance. 2.2 Kernel Trick by the Transitive Distance In this section, we show that the derived ultra-metric well re\ufb02ects the relationship among data samples and a kernel mapping with a promising property can be obtained. First we introduce a lemma from [11] and [7]. Lemma 1. Every \ufb01nite ultrametric space consisting of n distinct points can be isometrically embedded into a n \u22121 dimensional Euclidean space. 
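A direct way to realize Definition 1 is to iterate a (min, max)-semiring product of the distance matrix: each product admits one additional messenger, and the iteration converges to the ultrametric D. The sketch below (plain NumPy/SciPy, O(n^3) per product) is only meant to make the definition concrete; the faster MST-based computation proposed in the paper appears as Algorithm 2 in Section 4.

```python
import numpy as np
from scipy.spatial.distance import squareform, pdist

def minmax_product(A, B):
    """(min, max)-semiring product: C[i, j] = min_u max(A[i, u], B[u, j])."""
    return np.min(np.maximum(A[:, :, None], B[None, :, :]), axis=1)

def transitive_distance(X, order=None):
    """Transitive distance of Definition 1 on Euclidean distances.
    order=None returns the full ultrametric D = D_n (minimax path distance)."""
    D = squareform(pdist(X))            # D_2: direct edges only
    order = len(X) if order is None else order
    E = D.copy()
    for _ in range(2, order):           # each product allows one more vertex on the path
        E_new = minmax_product(D, E)
        if np.allclose(E_new, E):
            break                       # converged to the ultrametric
        E = E_new
    return E

# Toy two-cluster example: intra-cluster transitive distances stay small even for
# points far apart within a cluster, while inter-cluster ones stay large.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal([3, 0], 0.1, (20, 2))])
D = transitive_distance(X)
print(D[:20, :20].max(), D[:20, 20:].min())   # small intra vs. large inter
```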
With Lemma 1, we have the mapping1 \u03c6 : (V \u2282Rl, D) \u2192(V \u2032 \u2282Rs, d\u2032), (5) where \u03c6(xi) = x\u2032 i \u2208V \u2032, s = n \u22121, and n is the number of points in a set V . We also have d\u2032(\u03c6(xi), \u03c6(xj)) = D(xi, xj), where d\u2032(\u00b7, \u00b7) is the Euclidean distance in Rs, i.e., the Euclidean distance between two points in V \u2032 equals its corresponding ultrametric distance in V . Before giving an important theorem, we de\ufb01ne the consistency stated in Section 1 precisely. De\ufb01nition 2. A labeling scheme {(xi, li)} of a data set V = {xi|i = 1, 2, \u00b7 \u00b7 \u00b7 , n}, where li is the cluster label of xi, is called consistent with some distance d(\u00b7, \u00b7) if the following condition holds: for any y / \u2208C and any partition C = C1 \u222aC2, we have d(C1, C2) < d(y, C), where C \u2282V is some cluster, y \u2208V , d(C1, C2) def = min xi\u2208C1 xj\u2208C2 d(xi, xj) is the distance between the two sets C1 and C2, and d(y, C) def = minx\u2208C d(y, x) is the distance between a point y and the set C. The consistency requres that the intra-cluster distance is strictly smaller than the inter-cluster distance. This might be too strict in some practical applications, but it helps us reveal the following desirable property for clustering. Theorem 1. If a labeling scheme of a data set V = {xi|i = 1, 2, \u00b7 \u00b7 \u00b7 , n}, is consistent with a distance d(\u00b7, \u00b7), then given the derived transitive distance D and the embedding \u03c6 : (V, D) \u2192(V \u2032, d\u2032), the convex hulls of the images of the clusters in V \u2032 do not intersect with each other. The proof of the theorem can be found in Appendix A. An example of the theorem is illustrated in Fig. 2. A data set V with 50 points in R2 is mapped (embedded) into R49, a much higher dimensional Euclidean space, where the convex hulls of the two clusters do not intersect. Moreover, the Euclidean distance between any two samples in V \u2032 is equal to the transitive distance between these two samples in V . The convex hulls of the two clusters intersect in R2 but do not in R49, meaning that they are linearly separable in a higher dimensional Euclidean space. We can see that the embedding \u03c6 is a desirable kernel mapping. 1We use d(\u00b7, \u00b7) to denote a traditional distance in V and d\u2032(\u00b7, \u00b7) the Euclidean distance in V \u2032. 4 \fFigure 2: Mapping a set of 50 data samples in V \u2282R2 to V \u2032 \u2282R49. Figure 3: (a) Clustering result obtained by the K-means algorithm on the original data set V . (b) Clustering result obtained by the K-means algorithm on Z derived from the distance matrix of V . Only one sample has di\ufb00erent labelings from the two results. Obviously, the clustering of V \u2032 is much easier than the clustering of V . It seems that the K-means algorithm can be used to perform the clustering of V \u2032 easily. Unfortunately, we only have the distance matrix E\u2032 = [d\u2032 ij] = [Dij] of V \u2032, instead of the coordinates of x\u2032 i \u2208V \u2032, which are necessary for the K-means algorithm. In Section 3, we explain how to circumvent this problem. 3 K-Means Duality Let E = [dij] be the distance matrix obtained from a data set V = {xi|i = 1, 2, \u00b7 \u00b7 \u00b7 , n}. From E, we can derive a new set Z = {zi|i = 1, 2, \u00b7 \u00b7 \u00b7 , n}, with zi \u2208Rn being the ith row of E. Then we have the following observation, called the duality of the K-means algorithm. 
Observation (K-means duality): The clustering result obtained by the K-means algorithm on Z is very similar to that obtained on V if the clusters in V are hyperellipsoid-shaped. We have this observation based on a large number of experiments on di\ufb00erent data sets. Most data sets were randomly generated with multi-Gaussian distributions. From more than 100 data sets where each set contains 200 samples, we compared the results obtained by the K-means alogrithms on original data sets V \u2019s and their corresponding sets Z\u2019s. As a whole, the sample labeling di\ufb00erence is only 0.7%. One example is shown in Fig. 3, in which only one sample is labeled di\ufb00erently by the two clustering methods. The matrix perturbation theory [16] can be used to explain this observation. We begin with an ideal case by supposing that the inter-cluster sample distances are much larger than the intra-cluster sample distances (obviously, the clustering on this kind of data sets is easy). In the ideal case, let the distance between any two samples in the same cluster be 0. If the samples are arranged in such a way that those 5 \fin the same cluster are indexed by successive integers, then the distance matrix will be such a matrix: \u02c6 E = \uf8eb \uf8ec \uf8ec \uf8ed E1 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 E2 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 Ek \uf8f6 \uf8f7 \uf8f7 \uf8f8 } n1 rows } n2 rows } nk rows (6) where Ei = 0, 1 \u2264i \u2264k, represents the distance matrix within the ith cluster, n1 + n2 + \u00b7 \u00b7 \u00b7 + nk = n, and k denotes the number of clusters. Let \u02c6 Z = {\u02c6 zi|i = 1, 2, \u00b7 \u00b7 \u00b7 , n} with \u02c6 zi being the ith row of \u02c6 E. Then in this ideal case, we have \u02c6 z1 = \u02c6 z2 = \u00b7 \u00b7 \u00b7 = \u02c6 zn1, \u02c6 zn1+1 = \u02c6 zn1+2 = \u00b7 \u00b7 \u00b7 = \u02c6 zn1+n2, \u00b7 \u00b7 \u00b7 , \u02c6 zn\u2212nk+1 = \u02c6 zn\u2212nk+2 = \u00b7 \u00b7 \u00b7 = \u02c6 zn. Therefore, if \u02c6 Z is considered as a data set to be clustered, the distance between any two samples in each cluster is still 0. On the other hand, for two samples in di\ufb00erent clusters, say, \u02c6 z1 and \u02c6 zn1+1, we have \u02c6 z1 = ( n1 z }| { 0, \u00b7 \u00b7 \u00b7 , 0, d1,n1+1, \u00b7 \u00b7 \u00b7 , d1,n1+n2, \u00b7 \u00b7 \u00b7 ), (7) \u02c6 zn1+1 = (dn1+1,1, \u00b7 \u00b7 \u00b7 , dn1+1,n1, 0, \u00b7 \u00b7 \u00b7 , 0 | {z } n2 , dn1+1,n1+n2+1, \u00b7 \u00b7 \u00b7 ), (8) and d(\u02c6 z1, \u02c6 zn1+1) \u2265 v u u t n1+n2 X j=n1+1 d2 1,j + n1 X j=1 d2 n1+1,j \u226b0. (9) Thus, the distance between any two samples in di\ufb00erent clusters is still large. The distance relationship in the original data set is preserved completely in this new data set \u02c6 Z. Obviously, the K-means algorithm on the original data set can give the same result as that on \u02c6 Z in this ideal case. In general cases, a perturbation P is added to \u02c6 E, i.e., E = \u02c6 E + P, where all the diagonal elements of P are zero. The matrix perturbation theory [16] indicates that the K-means clustering result on the data set Z that is derived from E is similar to that on \u02c6 Z if P is not dominant over \u02c6 E. Our experiments and the above analysis support the observation of the K-means duality. 
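The observation is easy to check empirically: cluster a multi-Gaussian data set V directly and then cluster the rows Z of its pairwise distance matrix, and compare the two labelings. The sketch below uses scikit-learn's KMeans and the adjusted Rand index as the agreement measure; these are implementation choices rather than anything prescribed above.

```python
import numpy as np
from scipy.spatial.distance import squareform, pdist
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

# Empirical check of the K-means duality on a 3-cluster Gaussian data set.
rng = np.random.default_rng(0)
centers = np.array([[0.2, 0.2], [0.8, 0.3], [0.5, 0.8]])
X = np.vstack([c + 0.04 * rng.standard_normal((70, 2)) for c in centers])  # V
Z = squareform(pdist(X))                    # z_i = i-th row of the distance matrix E

labels_V = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
labels_Z = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)
print("agreement (ARI):", adjusted_rand_score(labels_V, labels_Z))   # close to 1.0
```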
Now we are able to give a solution to the problem mentioned at the end of Section 2.2. From Theorem 1, we can map a data set V to V \u2032 \u2282Rn\u22121 where the clustering is easier if the clusters with the original distance are consistent in V . The problem we need to handle is that in Rn\u22121 we only have the distance matrix instead of the coordinates of the samples in V \u2032. From the analysis of the K-means duality in this section, we can perform the clustering based on the distance matrix by the K-means algorithm. Therefore, the main ingredients for a new clustering algorithm are already available. 4 A New Clustering Algorithm Given a data set V = {xi|i = 1, 2, \u00b7 \u00b7 \u00b7 , n}, our clustering algorithm is described as follows. In step 2), we need to compute the transitive distance with order n between any two samples in V , or equivalently, to \ufb01nd the transitive edge, which is de\ufb01ned below. De\ufb01nition 3. For a weighted complete graph G = (V, E) and any two vertices xp, xq \u2208V , the transitive edge for the pair xp and xq is an edge e = xuxv, such that e lies on a path connecting xp and xq and Dpq = D(xp, xq) = d(xu, xv). An example of a transitive edge is shown in Fig. 1(a). Because the number of paths between two vertices (samples) is exponential in the number of the samples, the brutal searching for the transitive distance between two samples is infeasible. It is necessary to design a faster algorithm to carry out this task. The following Theorem 2 is for this purpose. Without loss of generality, we assume that the weights of edges in G are distinct. This can be achieved by slight perturbations of the positions of the data samples. After this modi\ufb01cation, the clustering result of the data will not be changed if the perturbation are small enough. 6 \fAlgorithm 1 Clustering Based on the Transitive Distance and the K-means Duality 1) Construct a weighted complete graph G = (V, E) where E = [dij]n\u00d7n is the distance matrix containing the weights of all the edges and dij is the distance between samples xi and xj. 2) Compute the transitive distance matrix E\u2032 = [d\u2032 ij] = [Dij] based on G and De\ufb01nition 1, where Dij is the transitive distance with order n between samples xi and xj. 3) Perform clustering on the data set Z\u2032 = {z\u2032 i|i = 1, 2, \u00b7 \u00b7 \u00b7 , n} with z\u2032 i being the ith row of E\u2032 by the K-means algorithm and then assign the cluster label of z\u2032 i to xi, i = 1, 2, \u00b7 \u00b7 \u00b7 , n. Theorem 2. Given a weighted complete graph G = (V, E) with distinct weights, each transitive edge lies on the minimum spanning tree e G = (V, e E) of G. The proof of Theorem 2 can be found in Appendix B. This theorem suggests an e\ufb03cient algorithm to compute the transitive matrix E\u2032 = [d\u2032 ij]n\u00d7n which is shown in Algorithm 2. Next we analyze the computational complexity of this algorithm. Algorithm 2 Computing the transitive distance matrix E\u2032 = [d\u2032 ij]n\u00d7n 1) Build the minimum spanning tree e G = (V, e E) from G = (V, E). 2) Initialize a forest F \u2190e G. 3) Repeat 4) For each tree T \u2208F do 5) Cut the edge with the largest weight wT and partition T into T1 and T2. 6) For each pair (xi, xj), xi \u2208T1, xj \u2208T2 do 7) d\u2032 ij \u2190wT 8) End for 9) End for 10) Until each tree in F has only one vertex. Building the minimum spanning tree from a complete graph G needs time very close to O(n2) by the algorithm in [2]2. 
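The sketch below implements Algorithms 1-2 with SciPy's minimum spanning tree. Instead of cutting the largest MST edge top-down as in Algorithm 2, it merges MST edges bottom-up in increasing weight order and records, for every pair, the weight at which the two points first join the same tree; with distinct weights this yields the same E'. The toy two-moon data at the end is purely illustrative.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import squareform, pdist
from sklearn.cluster import KMeans

def transitive_distance_mst(X):
    """Transitive distance matrix E' via the MST (bottom-up equivalent of Algorithm 2)."""
    n = len(X)
    D = squareform(pdist(X))
    mst = minimum_spanning_tree(D).tocoo()
    edges = sorted(zip(mst.data, mst.row, mst.col))      # (weight, i, j), increasing weight
    comp = {i: [i] for i in range(n)}                    # current trees of the forest
    root = np.arange(n)
    E = np.zeros((n, n))
    for w, i, j in edges:
        ri, rj = root[i], root[j]
        for a in comp[ri]:                               # step 7 of Algorithm 2:
            for b in comp[rj]:                           # pairs split by this edge get weight w
                E[a, b] = E[b, a] = w
        for b in comp[rj]:
            root[b] = ri
        comp[ri].extend(comp[rj])                        # merge the two trees
        del comp[rj]
    return E

def cluster(X, k):
    """Algorithm 1: K-means on the rows of the transitive distance matrix."""
    E = transitive_distance_mst(X)
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(E)

# Two-moon-like data: plain K-means fails, transitive-distance clustering separates the moons.
t = np.linspace(0, np.pi, 60)
moons = np.vstack([np.c_[np.cos(t), np.sin(t)],
                   np.c_[1 - np.cos(t), 0.4 - np.sin(t)]])
moons += 0.03 * np.random.default_rng(0).standard_normal(moons.shape)
labels = cluster(moons, 2)
print(np.bincount(labels[:60]), np.bincount(labels[60:]))   # each moon gets one label
```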
When Algorithm 2 stops, total n non-trivial tree3 have been generated. The number of the edges in each non-trivial tree is not larger than n. Therefore, the total time taken by searching for the edge with the largest weight on each tree (step 5) in the algorithm is bounded by O(n2). Steps 6\u20138 are for \ufb01nding the values for the elements of E\u2032. Since each element of E\u2032 is visited only once, the total time consumed by steps 6\u20138 is O(n2). Thus the computational complexity of Algorithm 2 is about O(n2). 2The fastest algorithm [2] to obtain a minimum spanning tree needs O(e\u03b1(e, n)) time, where e is the number of edges and \u03b1(e, n) is the inverse of the Ackermann function. The function \u03b1 increases extremely slowly with e and n, and therefore in practical applications it can be considered as a constant not larger than 4. In our case, e = O(n2) for a complete graph, so the complexity for building a minimum spanning tree is about O(n2). 3A non-trivial tree is a tree with at least one edge. 7 \fFigure 4: (a) The minimum spanning tree and the clustering result by our algorithm. (b) The minimum spanning tree and the clustering result by the hierarchical clustering. The dashed lines are the cutting edges. The number of clusters is 3. Considering the time O(n2) for building the distance matrix E, and the fact that the complexity of the K-means algorithm4 is close to O(n2), we conclude that the computational complexity of Algorithm 1 is about O(n2). Although the minimum spanning tree is used to help clustering in both the hierarchical clustering and our algorithm, the motivations and e\ufb00ects are quite di\ufb00erent. In our case, the minimum spanning tree is for generating a kernel e\ufb00ect (to obtain the relationship among the samples in a high dimensional space according to Theorem 1), with which the K-means algorithm provides a global optimization function for clustering. Whereas in the hierarchical clustering, each iteration step only focuses on the local sample distributions. This di\ufb00erence leads to distinct algorithms in handling the data obtained from the minimum spanning tree. We carry out the K-means algorithm on the derived Z\u2032 according to the K-means duality, while the hierarchical clustering cuts c \u22121 largest edges from the minimum spanning tree, where c is the number of clusters. In Fig. 4, we show a data set clustered by the two approaches. The multi-scale data set makes the hierarchical clustering give an unreasonable result. 5 Experiments We have applied the proposed algorithm to a number of clustering problems to test its performance. The results are compared with those by the K-means algorithm, the NJW spectral clustering algorithm [13] and the self-tuning spectral clustering algorithm [18]. For each data set, the NJW algorithm needs manually tuning of the scale and the self-tuning algorithm needs to set the number of nearest neighbors. On the contrary, no parameters are required to set for our algorithm. In this comparisons, we show the best clustering results that are obtain by adjusting the parameters in the two spectral clustering algorithms. All the numbers of clusters are assumed to be known. 5.1 Synthetic Data Sets Eight synthetic data sets are used in the experiments. Bounded in a region (0, 1) \u00d7 (0, 1), these data sets are with complex cluster shapes, multi-scale clusters, and noise. The clustering results are shown in Fig. 5. 
Note that the results obtained by the K-means algorithm are not given because it is obvious that it cannot deal with these data sets. In Figs. 5(a)\u2013(c), all the three algorithms obtain the same results. Figs. 5(d)\u2013(f) and (g)\u2013(i) show three data sets on which the self-tuning algorithm gives di\ufb00erent results from the other two algorithms. 4The time complexity of the K-means algorithm is O(npq), where p and q are the number of iterations and the dimension of the data samples, respectively. The data set Z\u2032 in Algorithm 1 is in Rn and thus q = n. In practical applications, p can be considered as smaller than a \ufb01xed positive number. 8 \fFigure 5: Clustering results by our algorithm and the two spectral algorithms. (a)(b)(c) Results by the three algorithms. (d)(e)(f) Results by the NJW algorithm and ours. (g)(h)(i) Results by the selftuning algorithm. (j) Result by the NJW algorithm. (k) Result by the self-tuning algorithm and ours. (l)(m)(n) Results by our algorithm, the NJW algorithm, and the self-tuning algorithm, respectively. 9 \fFigure 6: The error rates of the four algorithms on the ten data sets constructed from the USPS database. The self-tuning algorithm fails to cluster the data sets no matter how we tune its parameter. Figs. 5(j) and (k) show two clustering results where the data set is with multi-scale clusters. The former is produced by the NJW algorithm and the latter by the self-tuning and our algorithms. To cluster the data set in Figs. 5(l)\u2013(n) is a challenging task, where two relatively tightly connected clusters are surrounded by uniformly distributed noise samples (the third cluster). Our algorithm obtains the more reasonable result (Fig. 5(l)) than the results by another two algorithms (Figs. 5(m) and (n)). From these samples, we can see that our algorithm performs similar to or better than the NJW and self-tuning spectral clustering algorithms. This statement applies to many other data sets we have tried, which are not shown here due to the limitation of space. 5.2 Data Sets from the USPS Database USPS database is an image database provided by the US Postal Service. There are 9298 handwriting digit images of size 16 \u00d7 16 from \u201c0\u201d to \u201c9\u201d in the database, from which we construct ten data sets from this database. Each set has 1000 images selected randomly with two, three, or four clusters. Each image is treated as a point in a 256-dimensional Euclidean space. The following \ufb01gure shows the error rates of the four algorithms on these sets. In this experiments, the parameters for the NJW and self-tuning algorithms are tuned carefully to obtain the smallest error rates. These results show that as a whole, our algorithm achieves the smallest error rate, and the K-means and self-tuning algorithms perform worst. 5.3 Iris and Ionosphere Data Sets We also test the algorithms on two commonly-used data sets, Iris and Ionosphere, in UCI machine learning database. Iris consists of 150 samples in 3 classes, each with 50 samples. Each sample has 4 features. Ionosphere contains 354 samples in 2 classes and each sample has 34 features. In Table 1 we show the error rates of the four algorithms clustering on these data sets. For the NJW and self-tuning algorithms, we have to adjust their parameters (\u03b4 and N)5 to obtain the smallest error rates, which are shown in the table. Our algorithm results in the smallest error rates among the four algorithms. 
5.4 Remarks From the experiments, we can see that compared with the K-means algorithm, our algorithm and the spectral algorithms can handle the clustering of a data set with complex cluster shapes. Compared 5We tried di\ufb00erent \u03b4 from 0.01 to 0.1 with step 0.001 and 0.1 to 4 with step 0.1, and di\ufb00erent N from 2 to 30 with step 1. 10 \fTable 1: Error rates of the four algorithms on Iris and Ionosphere data sets K-means NJW Self-tuning Ours Iris 0.11 0.09 (\u03b4 = 0.40) 0.15 (N = 5) 0.07 Ionosphere 0.29 0.27 (\u03b4 = 0.20) 0.30 (N = 6) 0.15 with the spectral algorithms, our algorithm has comparable or better performance and does not need to adjust any parameter. In the above experiments, since we have the ground truth for each data set, we can try di\ufb00erent parameters in the NJW and self-tuning algorithms so that they produce the best results. However, we do not know which parameters should be the best for unsupervised data clustering in many applications. Another advantage of our algorithm over the spectral algorithms is that its computational complexity is close to O(n2), while the spectral algorithms\u2019 complexities are O(n3). 6" + } + ], + "Jiazheng Xing": [ + { + "url": "http://arxiv.org/abs/2308.09346v1", + "title": "Boosting Few-shot Action Recognition with Graph-guided Hybrid Matching", + "abstract": "Class prototype construction and matching are core aspects of few-shot action\nrecognition. Previous methods mainly focus on designing spatiotemporal relation\nmodeling modules or complex temporal alignment algorithms. Despite the\npromising results, they ignored the value of class prototype construction and\nmatching, leading to unsatisfactory performance in recognizing similar\ncategories in every task. In this paper, we propose GgHM, a new framework with\nGraph-guided Hybrid Matching. Concretely, we learn task-oriented features by\nthe guidance of a graph neural network during class prototype construction,\noptimizing the intra- and inter-class feature correlation explicitly. Next, we\ndesign a hybrid matching strategy, combining frame-level and tuple-level\nmatching to classify videos with multivariate styles. We additionally propose a\nlearnable dense temporal modeling module to enhance the video feature temporal\nrepresentation to build a more solid foundation for the matching process. GgHM\nshows consistent improvements over other challenging baselines on several\nfew-shot datasets, demonstrating the effectiveness of our method. The code will\nbe publicly available at https://github.com/jiazheng-xing/GgHM.", + "authors": "Jiazheng Xing, Mengmeng Wang, Yudi Ruan, Bofan Chen, Yaowei Guo, Boyu Mu, Guang Dai, Jingdong Wang, Yong Liu", + "published": "2023-08-18", + "updated": "2023-08-18", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Compared with general action recognition, few-shot action recognition requires limited labeled samples to learn new categories quickly. It can avoid the massive, timeconsuming, and labor-consuming data annotation commonly associated with supervised tasks, making it more adaptable for industrial applications. According to this advantage, increasing attention has been directed toward the field of few-shot action recognition [4, 27, 30, 35, 44, 14, 23, 38]. However, since few-shot action recognition has *Equal Contribution. \u2020Corresponding author. 
Figure 1. (a): Similarity visualization between query and support videos with different methods on the 5-way 1-shot task of UCF101 [29]. A higher score indicates a greater degree of similarity. TRX [27] misclassifies the drumming as the jumping jack, and OTAM [4] misidentifies the high jump as the long jump. Our method identifies all categories of videos accurately. (b): Different types of class prototype construction. Previous works did not do any information interaction among different videos. HyRSM [35] operates an inter-relation function without leveraging label-informed supervision. Our method utilizes the graph network with label-informed supervision to learn the correlation between different videos. (c): Different types of class prototype matching. Frame-level matching [46, 4, 35] uses single individual frames for matching, while tuple-level [30, 38, 27] matching combines several frames into a tuple as the matching unit. Our method combines both to complement each other's shortcomings. limited learning material, learning well-generalized models is challenging. Current attempts to address the above problems [4, 46, 41, 27, 30, 38, 35] mainly adopt the metric-based framework and episode training to solve the difficulty of model migration on new categories. Empirically, we observed that previous approaches fail to effectively address the problem of misclassifying videos from similar categories. Taking the high jump and long jump as an example, some methods (e.g., OTAM [4]) easily confuse the two classes by assigning close prediction scores due to their similarity in scenes and sub-actions, as shown in Fig. 1(a). We have analyzed the main reasons from three perspectives. (i) Class prototype construction: task-oriented class features can optimize videos' intra- and inter-class correlation. As shown in Fig. 1(b), most previous work has yet to use the whole task's video features to extract relevant discriminative patterns. Although HyRSM [35] manipulates inter-relationship functions on different videos to get task-specific embeddings, it does not explicitly optimize intra- and inter-class correlations. (ii) Matching mechanisms: proper matching mechanisms need to be established to solve the confusion problem of similar videos. As shown in Fig. 1(c), almost all current work uses a simple class prototype matching mechanism. Some methods use a frame-level matching mechanism [46, 4, 35], which is suitable for spatial-related datasets [16, 29, 5], and the others use a tuple-level (multiple frames combined into a tuple) matching mechanism [30, 38, 27] that is appropriate for temporal-related datasets [13]. None of these previous methods can cope well with video tasks of variable types.
(iii) Feature modeling: a powerful and highly discriminative feature is first needed to distinguish similar classes. Most previous works model the temporal feature through hand-designed temporal alignment algorithms [46, 4] or simple temporal attention operations [35, 38], leading to a simplistic exploration of the temporal relationship without dissecting it into more detailed patch and channel temporal relations to analyze. Based on the above observations, we propose a novel method for few-shot action recognition, dubbed GgHM, a short for Graph-guided Hybrid Matching. Specifically, we apply a graph neural network (GNN) for constructing taskoriented features, as shown in Fig.1(b). It could interactively transfer information between video features in a task to enhance the prior knowledge of the unknown video. We utilize the ground truth of the constructed graph edges to explicitly learn the correlation of these video features to supervise the similarity score learning between the query and support videos. Second, as shown in Fig.1(c), we propose a hybrid prototype matching strategy that combines framelevel and tuple-level matching based on the bidirectional Hausdorff Distance. Although the Hausdorff metric framelevel matching can alleviate the strictly ordered constraints of acquiring better query-support correspondences, it fails to capture temporal order. As a result, it can be confused for actions with similar action scenes strongly dependent on temporal order, e.g., putting something in the box and taking something out of it. However, the construction of tuples strictly follows a chronological order, which can compensate for the frame-level matching problem. Fig.1(a) visualizes the predicted similarities between query and support videos with different methods on the 5-way 1-shot task of UCF101 [29]. Our method achieves more discriminative results for similar videos in each task compared to OTAM [4] and TRX [27]. Additionally, we design a learnable dense temporal modeling module to consolidate the representation foundation. It includes a temporal patch and temporal channel relation modeling block, and their combination allows for dense temporal modeling in both spatial and channel domains. Finally, extensive experiments on four widely-used datasets demonstrate the effectiveness of our method. In summary, we make the following contributions: \u2022 We apply a graph neural network to guide the taskoriented features learning during the class prototype construction, explicitly optimizing the intraand interclass correlation within video features. \u2022 We propose a hybrid class prototype matching strategy based on the frameand tuple-level prototype matching, giving rise to effectively coping with video tasks of multivariate styles. \u2022 We design a learnable dense temporal modeling module consisting of a temporal patch and temporal channel relation modeling block for dense temporal modeling in both spatial and channel domains. 2. Related Works 2.1. Few-shot Image Classification Few-shot image classification uses the episodic training paradigm, using a handful of labeled training samples from similar tasks to represent a large amount of labeled training samples. Recent years, research on few-shot image classification can be mainly classified into two categories: adaptation-based and metric-based methods. The adaptionbased approaches aim to find a network initialization that can be fine-tuned for unknown tasks using a small amount of labeled data, called gradient by gradient. 
The classical adaptation-based approaches are MAML [10], Reptile [25], and related deeper researches include [21, 32]. The metricbased approaches aim to learn a feature space and compare task features through different matching strategies, called learning to compare. The representative methods are Prototypical Networks [28], Matching Networks [31]. And there \fare many methods [40, 39, 8, 18] that aim to improve upon these approaches. Our method is inspired by them and belongs to the metric-based category. 2.2. Few-shot Video Action Recognition The core idea of few-shot action recognition is similar to that of few-shot image classification, but the former task is more complex than the latter owning to an additional temporal dimension. Due to high computational resources and long experimental time, adaptation-based methods( MetaUVFS [26]) have received little attention in few-shot action recognition. The existing research mainly applies metricbased learning, but with different focuses. Some methods focus on feature representation enhancement. For example, STRM [30] employs local and global enrichment modules for spatiotemporal modeling, HyRSM [35] uses the hybrid relation modeling to learn task-specific embeddings, and SloshNet [38] utilizes a feature fusion architecture search module to exploit the low-level spatial features and a long-term and short-term temporal modeling module to encode complementary global and local temporal representations. Other methods focus on class prototype matching strategies. For example, OTAM [4] proposes a temporal alignment module to calculate the distance value between the query video and the support set videos, TRX [27] matches each query sub-sequence with all sub-sequences in the support set, HyRSM [35] designs a bidirectional Mean Hausdorff metric to more flexibly find the correspondences between different videos. Additionally, TRPN [34], MORN [24] focus on combining visual and semantic features, and AMeFu-Net [11] centers on using depth information to assist learning. Unlike these previous methods, our method focuses on distinguishing videos from similar categories by optimizing intraand inter-class class correlation within video features during the prototype construction and building a hybrid prototype matching strategy to effectively handle video tasks of multivariate styles. 3. Method 3.1. Problem Formulation Few-shot learning is based on using a small number of labeled training samples from similar tasks as a proxy for many labeled training samples. For few-shot action recognition, it aims to classify an unlabeled query video into one of the N action categories in the support set with limited K samples per action class, which can be considered an Nway K-shot task. Like most previous studies, we adopt an episode training paradigm followed by [4, 35, 14, 17, 38], where episodes are randomly selected from extensive data collection. In each episode, we suppose that the set S consists of N \u00d7 K samples from N different action classes, and Sn k = {sn k1, sn k2, \u00b7 \u00b7 \u00b7 , sn kT } represents the k-th video in class n \u2208{1, \u00b7 \u00b7 \u00b7 , N} randomly sampled T frames. The query video denotes Q = {q1, q2, \u00b7 \u00b7 \u00b7 , qT } sampled T frames. 3.2. Architecture Overview Our overall architecture is illustrated in Fig.2. For the frame-selecting strategy, we follow previous work TSN [33], where the input video sequence is divided into T segments, and snippets are extracted from each segment. 
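The TSN-style frame selection mentioned above can be sketched as follows; the random-offset-per-segment rule during training and the segment-center rule at test time are the common TSN convention and are assumed here rather than taken from this paper.

```python
import numpy as np

def tsn_sample(num_frames, T=8, train=True, rng=None):
    """TSN-style sparse sampling: split the video into T equal segments and
    pick one frame per segment (random offset in training, center at test time)."""
    rng = rng or np.random.default_rng()
    edges = np.linspace(0, num_frames, T + 1)
    if train:
        return np.array([rng.integers(int(edges[i]),
                                      max(int(edges[i + 1]), int(edges[i]) + 1))
                         for i in range(T)])
    return ((edges[:-1] + edges[1:]) / 2).astype(int)

print(tsn_sample(120, T=8, train=False))   # e.g. [  7  22  37  52  67  82  97 112]
```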
For simplicity and convenience, we discuss the process of the 5-way 1-shot problem and consider that the query set Q contains a single video. In this way, the query video Q = {q_1, q_2, ···, q_T} and the class support set videos S^n = {s^n_1, s^n_2, ···, s^n_T} (S^n ∈ S = {S^1, S^2, ···, S^5}) pass through the feature extractor to obtain the query feature F_Q and the support features F_{S^n} (F_{S^n} ∈ F_S) in each episode. Next, we input F_S and F_Q into the proposed learnable dense temporal modeling module to obtain enhanced temporal features F̃_S and F̃_Q. We apply a mean pooling operation to F̃_S and F̃_Q along the temporal dimension to obtain the relation node features F̃^{avg}_S and F̃^{avg}_Q for the following graph network. Then, the relation node features are fed into the graph network with initial edge features for relation propagation. The updated edge features, together with the enhanced temporal features, generate the task-oriented features F^{task}_S and F^{task}_Q, and the loss L_graph is obtained through a graph metric. Finally, the task-oriented features are fed into the hybrid class prototype matching metric to get the class prediction ŷ_Q and the loss L_match. For better clarity and consistency with the algorithm procedure, we will first introduce our learnable dense temporal modeling module, followed by the graph-guided prototype construction, and finally the hybrid prototype matching strategy. Details are given in the subsequent subsections.

Figure 2. Overview of GgHM. For simplicity and convenience, we discuss the case of the 5-way 1-shot problem with a query set Q containing a single video. The support set video features F_S and the query video feature F_Q are obtained by the feature extractor. The enhanced temporal features F̃_S and F̃_Q are obtained by the learnable dense temporal modeling module. The task-oriented features F^{task}_S and F^{task}_Q are obtained by the graph-guided prototype construction module. ŷ_Q is the class prediction of the query video, and the losses L_match and L_graph are standard cross-entropy losses. ⊕ indicates element-wise weighted summation.

3.3. Learnable Dense Temporal Modeling Module (LDTM)

The action classification process relies heavily on temporal context information. Inspired by temporal modeling methods based on the attention mechanism [1, 43, 9, 37, 2], we design a learnable dense temporal modeling module, which consists of a temporal patch relation modeling block and a temporal channel relation modeling block, as shown in Fig.3. The two blocks are complementary, and their combination allows for dense temporal modeling in both the spatial and channel domains. Compared to PST [37], which uses a fixed patch shift strategy and a channel shift strategy, our learnable patch and channel temporal relation modeling enables the extraction of richer features.

Figure 3. The architecture of the learnable dense temporal modeling module. ⊕ denotes element-wise summation.

Patch Temporal Relation Modeling (PTRM). Given a video feature map output by the feature extractor F ∈ R^{N×T×C×H×W}, we first reshape it into a sequence F_seq1 ∈ R^{N×HW×C×T} and then feed it into the temporal MLP to get the hidden temporal feature H_T:

H_T = relu(W_{t1} F_seq1) W_{t2} + F_seq1    (1)

where W_{t1} and W_{t2} ∈ R^{T×T} are learnable weights for temporal information interaction across different video frames. Then, H_T, which carries rich video spatiotemporal information, is inserted into the original features F_seq1, so that a single-frame video feature contains semantic information from all video frames. The temporal patch relation modeling feature F_tp is obtained by:

F_tp[:, n, :, :] = F_seq1[:, n, :, :]  if n % gap = 0,  otherwise H_T[:, n, :, :]    (2)

where n is the patch index and gap is a positive integer that controls the frequency of the patch shift. After the learnable patch shift operation, the feature F_tp is reshaped into F*_tp ∈ R^{NT×HW×C} and spatial self-attention is applied. This collects the temporal information of different video frames sparsely within each frame, but sacrifices the original spatial information within every frame. To alleviate this problem, we take a weighted summation of the spatial-only and spatiotemporal attention results, given by:

F_tp = γ · SA_spa(F*_tp) + (1 − γ) · SA_spa(F*)    (3)

where SA_spa stands for the spatial attention operation, F* ∈ R^{NT×HW×C} is reshaped from F, and γ ∈ [0, 1] is a hyperparameter.

Channel Temporal Relation Modeling (CTRM). We first reshape F into F_seq2 ∈ R^{NHW×C×T}. It is then fed into a learnable channel shift operation to obtain the temporal channel relation modeling feature F_tc. Concretely, the learnable channel shift operation is a 1D channel-wise temporal convolution adopted to learn an independent kernel for each channel. Formally, it can be formulated as:

F_tc^{t,c} = Σ_i K_{c,i} F_seq2^{c,t+i}    (4)

where t and c denote the temporal and channel dimensions of the feature map, respectively, K_{c,i} indicates the temporal kernel weights of the c-th channel, F_seq2^{c,t+i} ∈ F_seq2 is the input c-th channel feature, and F_tc^{t,c} ∈ F_tc is the output c-th channel feature. After that, the final temporal channel relation modeling feature F_tc is obtained through spatial attention, and we take a weighted summation of F_tp and F_tc to obtain the final enhanced temporal features F̃ as follows:

F̃ = β · F_tp + (1 − β) · F_tc    (5)

where β ∈ [0, 1] is a hyperparameter.
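To make the patch-level temporal interaction of Eqs. (1)-(2) concrete, the sketch below implements a temporal MLP over frames and the learnable patch shift. The [N, HW, C, T] layout follows the text, while the module name and the approximation of the two T×T weight matrices with temporal linear layers are assumptions, and the spatial self-attention of Eq. (3) is omitted.

import torch
import torch.nn as nn

class PatchTemporalShift(nn.Module):
    # Sketch of Eqs. (1)-(2): a temporal MLP mixes information across the T frames
    # of every patch, and patches whose index is not a multiple of `gap` are
    # replaced by the temporally mixed feature (the learnable patch shift).
    def __init__(self, num_frames, gap=2):
        super().__init__()
        self.w1 = nn.Linear(num_frames, num_frames, bias=False)  # stands in for W_t1
        self.w2 = nn.Linear(num_frames, num_frames, bias=False)  # stands in for W_t2
        self.gap = gap

    def forward(self, f_seq1):                                   # f_seq1: [N, HW, C, T]
        h_t = self.w2(torch.relu(self.w1(f_seq1))) + f_seq1      # Eq. (1), residual included
        shift = (torch.arange(f_seq1.shape[1], device=f_seq1.device) % self.gap) != 0
        out = f_seq1.clone()
        out[:, shift] = h_t[:, shift]                            # Eq. (2): keep every gap-th patch
        return out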
In summary, PTRM aggregates temporal information for parts of patches while CTRM learns the temporal shift of channels. As a result, our LDTM could achieve sufficient temporal relation modeling in both the spatial and channel dimensions in a dense and learnable way. 3.4. Graph-guided Prototype Construction(GgPC) We design a graph-guided prototype construction module to enhance the priori knowledge of the unknown video and explicitly optimize the intraand inter-class correlation within video features. We draw inspiration from few-shot image classification methods based on graph neural networks [12, 15, 22, 6], which utilize graph networks to optimize intra-cluster similarity and inter-cluster dissimilarity and transform the image classification problems into node or edge classification problems. Different from this, directly feeding the video features (usually after the temporal pooling operation) into the graph network can lead to unsatisfactory results due to the loss of temporal information. Therefore, we only use graph networks as guidance to optimize features\u2019 intraand inter-class correlation. The overall framework of the proposed graph-guided prototype construction module is shown in Fig.4, and the overall algorithm is summarized in Algorithm.1. For simplicity and convenience, we discuss the process of the NSway 1-shot problem and consider that the query set Q contains NQ videos. This process can be divided into two stages: Graph neural network (GNN) propagation and taskoriented features obtaining. For GNN propagation, the temporally enhanced features e F after doing the Mean Pooling operation in the temporal dimension g Favg are used as node features V for graph network initialization. Edge features A represent the relationship between two nodes, i.e., the strength of intraand inter-class relationships, and their initialization depends on the labels. The propagation includes the node aggregation and edge aggregation process. After Relation Node Node Aggregation Relation Node Features Initial Edge Features Temporal Enhanced Features Mean Pooling Propagation Edge Aggregation Update Similarity Score Output MLP Task-oriented Features Metric Fusion Function Matrix Multiplication Graph-guided Class Prototype Construction Module Figure 4. The overall framework of the proposed graph-guided prototype construction model. Consider that the query set Q contains one video for simplicity and convenience. completing the graph propagation, we use a Select operation to extract the similarity score from the updated edge features in the last layer. Select means that the edge features related to each query video feature are selected from the output entire edge features, and a total of NQ new edge features are formed further. For task-oriented features obtaining, the details are shown in Algorithm.1 where fF NN is a feed-forward network, femb and ffuse are MLPs, and \u2297 indicates the matrix multiplication. Meanwhile, the Select process is summarized in Algorithm.2. For K-shot (K > 1) tasks, when constructing node features, we perform mean pooling on the features of support videos of the same category in the feature dimension, while keeping other aspects consistent with the 1-shot task. To sum up, the task-oriented features Ftask are obtained by fusing enhanced temporal features e F with features Fgraph guided by graph networks to preserve the temporality of features. 
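A minimal sketch of one propagation step of the graph described above is given below. The concrete NodeAggregation and EdgeAggregation operators are not specified in this excerpt, so an edge-weighted neighbor average followed by an MLP (for nodes) and a similarity MLP with a sigmoid (for edges) are used as stand-ins; likewise, initializing edges from labels (e.g., 1 for same-class support pairs, 0 otherwise) is only one plausible choice.

import torch
import torch.nn as nn

class GraphPropagationLayer(nn.Module):
    # One node/edge update over the episode graph: nodes are temporally pooled
    # video features (support + query), edges encode intra-/inter-class strength.
    def __init__(self, dim):
        super().__init__()
        self.node_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.edge_mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, v, a):
        # v: [M, C] node features, a: [M, M] edge strengths in [0, 1]
        neigh = (a / a.sum(-1, keepdim=True).clamp(min=1e-6)) @ v   # edge-weighted neighbor mean
        v = self.node_mlp(torch.cat([v, neigh], dim=-1))            # node aggregation
        diff = (v.unsqueeze(1) - v.unsqueeze(0)).abs()              # pairwise node differences
        a = torch.sigmoid(self.edge_mlp(diff)).squeeze(-1)          # edge aggregation
        return v, a

Stacking L such layers and reading the query-related entries of the final edge matrix corresponds to the Select step described above.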
Through the guidance of the GNN, every query video feature has its own task-specific support features, and the class correlation within video features is optimized explicitly.

Algorithm 1: The process of graph-guided prototype construction (GgPC)
  Us indicates the unsqueeze operation, R indicates the repeat operation.
  Input: F̃_S ∈ R^{N_S×T×C}, F̃_Q ∈ R^{N_Q×T×C}, F̃ = F̃_S ∪ F̃_Q ∈ R^{(N_S+N_Q)×T×C}
  Output: F^{task}_Q ∈ R^{N_Q×T×C}, F^{task}_S ∈ R^{N_Q×N_S×T×C}
  Initialize: F̃^{avg} = MeanPool(F̃, dim=1)
  /* GNN Propagation */
  Graph: G = (V, A; S ∪ Q), v^0_i = F̃^{avg}_i, a^0_{ij}, ∀ i, j ∈ S ∪ Q
  for l = 1, ···, L do
    for i = 1, ···, |V| do
      v^l_i = NodeAggregation(v^{l-1}_j, a^{l-1}_{ij})
    end
    for (i, j) = 1, ···, |A| do
      a^l_{ij} = EdgeAggregation(v^l_j, a^{l-1}_{ij})
    end
  end
  Similarity Score: M_siam = Select(a^L_{ij}[0]) ∈ R^{N_Q×(N_S+1)×(N_S+1)}
  /* Get Task-Oriented Features */
  Optimized Features:
    F^{node}_S = F̃^{avg}_S.Us(0).R(N_Q, 1, 1)
    F^{node} = Cat([F^{node}_S, F̃^{avg}_Q.Us(1)], dim=1)
    F^{graph} = f_FFN(M_siam ⊗ f_emb(F^{node}))
    F^{graph}_S = F^{graph}[:, :N_S, :].Us(1).R(1, T, 1, 1)
    F^{graph}_Q = F^{graph}[:, N_S:, :].Us(1).R(1, T, 1)
  Task-oriented Features:
    F^{hid}_S = F̃_S.Us(0).R(N_Q, 1, 1, 1)
    F^{task}_S = f_fuse(Cat([F^{hid}_S, F^{graph}_S], dim=2))
    F^{task}_Q = f_fuse(Cat([F̃_Q, F^{graph}_Q], dim=2))

3.5. Hybrid Prototype Matching Strategy (HPM)

Frame-level matching uses single individual frames, while tuple-level matching combines several frames into a tuple as the matching unit. HyRSM [35] applies the Hausdorff Distance metric as the prototype matching method, which can alleviate the strictly ordered constraints and acquire better query-support correspondences, but it fails to capture temporal order. This matching metric is easily confused by actions that share similar scenes but strongly depend on temporal order, e.g., pick up a glass of water versus put down a glass of water. To solve this problem, we design a hybrid prototype matching strategy that combines frame-level and tuple-level matching based on the bidirectional Hausdorff Distance. This approach effectively copes with video tasks of diverse styles. Given the task-oriented features F^{task}_S and F^{task}_Q, the m-th support video feature in class k and the p-th query video feature are denoted s^k_m ∈ R^{T×C} and q_p ∈ R^{T×C}, respectively. For single-frame matching, we apply a bidirectional Mean Hausdorff metric as follows:

D_frame = (1/T) [ Σ_{s^k_{m,i} ∈ s^k_m} min_{q_{p,j} ∈ q_p} || s^k_{m,i} − q_{p,j} || + Σ_{q_{p,j} ∈ q_p} min_{s^k_{m,i} ∈ s^k_m} || q_{p,j} − s^k_{m,i} || ]    (6)

where s^k_{m,i} represents the i-th frame feature of s^k_m, q_{p,j} indicates the j-th frame feature of q_p, and each has a total of T frames.
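Eq. (6) translates almost directly into code; the sketch below assumes Euclidean distances between frame features (the norm in Eq. (6) is not spelled out in this excerpt) and that both videos contain T frames.

import torch

def frame_mean_hausdorff(support_frames, query_frames):
    # Bidirectional Mean Hausdorff distance of Eq. (6).
    # support_frames, query_frames: [T, C] task-oriented frame features.
    d = torch.cdist(support_frames, query_frames)     # [T, T] pairwise frame distances
    s_to_q = d.min(dim=1).values.sum()                # each support frame to its closest query frame
    q_to_s = d.min(dim=0).values.sum()                # each query frame to its closest support frame
    return (s_to_q + q_to_s) / support_frames.shape[0]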
For tuple-level prototype matching, we combine two frames into one tuple and iterate through all combinations to get L = T(T−1)/2 tuples for T frames, given by:

ts^k_{m,i} = [ s^k_{m,i_1} + PE(i_1), s^k_{m,i_2} + PE(i_2) ],  1 ≤ i_1 ≤ i_2 ≤ T
tq_{p,j} = [ q_{p,j_1} + PE(j_1), q_{p,j_2} + PE(j_2) ],  1 ≤ j_1 ≤ j_2 ≤ T    (7)

where ts^k_{m,i}, tq_{p,j} ∈ R^{2C}, and each tuple follows the temporal order of the original frames. To this end, the Mean Hausdorff metric based on tuples can be formulated as:

D_tuple = (1/L) [ Σ_{ts^k_{m,i} ∈ ts^k_m} min_{tq_{p,j} ∈ tq_p} || ts^k_{m,i} − tq_{p,j} || + Σ_{tq_{p,j} ∈ tq_p} min_{ts^k_{m,i} ∈ ts^k_m} || tq_{p,j} − ts^k_{m,i} || ]    (8)

Finally, the hybrid matching metric can be formulated as:

D_hybrid = α · D_tuple + (1 − α) · D_frame    (9)

where α ∈ [0, 1] is a hyperparameter. In a word, our proposed hybrid prototype matching strategy combines the advantages of both frame- and tuple-level matching to cope well with video tasks of multivariate styles.

Algorithm 2: The process of the Select operation
  Input: a^L_{ij}[0] ∈ R^{(N_S+N_Q)×(N_S+N_Q)}
  Output: M_siam ∈ R^{N_Q×(N_S+1)×(N_S+1)}
  Similarity Score: M_siam = List()
  for n_Q = 1, ···, N_Q do
    m_siam = Zeros((N_S+1) × (N_S+1))
    m_siam[:N_S, :N_S] = a^L_{ij}[0][:N_S, :N_S]
    m_siam[:N_S, −1] = a^L_{ij}[0][:N_S, N_S+n_Q]
    m_siam[−1, :N_S] = a^L_{ij}[0][N_S+n_Q, :N_S]
    m_siam[−1, −1] = a^L_{ij}[0][N_S+n_Q, N_S+n_Q]
    M_siam.Append(m_siam)
  end
  M_siam = Stack(M_siam)

4. Experiments

4.1. Experimental Setup

Datasets. We evaluate the performance of our method on four few-shot datasets: Kinetics [5], HMDB51 [16], UCF101 [29], and SSv2 [13]. For Kinetics and SSv2, we use the splits provided by [4] and [47], where 100 classes were selected and divided into 64/12/24 action classes as the meta-training/meta-validation/meta-testing set.
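The tuple construction and hybrid fusion of Eqs. (7)-(9) above can be sketched as follows; the positional embedding pos_emb, the choice of Euclidean distance, and the default value of alpha are assumptions for illustration.

import itertools
import torch

def hybrid_hausdorff(support_frames, query_frames, pos_emb, alpha=0.5):
    # support_frames, query_frames: [T, C]; pos_emb: [T, C] additive temporal encoding.
    def make_tuples(frames):
        f = frames + pos_emb                                     # keep temporal order information
        pairs = [torch.cat([f[i], f[j]]) for i, j in
                 itertools.combinations(range(f.shape[0]), 2)]   # L = T(T-1)/2 pairs, Eq. (7)
        return torch.stack(pairs)                                # [L, 2C]

    def mean_hausdorff(a, b):                                    # bidirectional Mean Hausdorff
        d = torch.cdist(a, b)
        return (d.min(dim=1).values.sum() + d.min(dim=0).values.sum()) / a.shape[0]

    d_frame = mean_hausdorff(support_frames, query_frames)                            # Eq. (6)
    d_tuple = mean_hausdorff(make_tuples(support_frames), make_tuples(query_frames))  # Eq. (8)
    return alpha * d_tuple + (1 - alpha) * d_frame                                    # Eq. (9)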
Additionally, for UCF101 and HMDB51, we \fHMDB51 UCF101 SSv2 Kinetics Methods Reference Backbone 1-shot 5-shot 1-shot 5-shot 1-shot 5-shot 1-shot 5-shot MatchingNet [31] NeurIPS(16) ResNet-50 31.3 45.5 53.3 74.6 MAML [10] ICML(17) ResNet-50 30.9 41.9 54.2 75.3 ProtoNet [28] NeurIPS(17) C3D 54.2 68.4 74.0 89.6 33.6 43.0 64.5 77.9 TRN++ [45] ECCV(18) ResNet-50 38.6 48.9 68.4 82.0 CMN++ [46] ECCV(18) ResNet-50 34.4 43.8 57.3 76.0 TARN [3] BMVC(19) C3D 64.8 78.5 ARN [41] ECCV(20) C3D 45.5 60.6 66.3 83.1 63.7 82.4 OTAM [4] CVPR(20) ResNet-50 54.5 68.0 79.9 88.9 42.8 52.3 73.0 85.8 TTAN [19] ArXiv(21) ResNet-50 57.1 74.0 80.9 93.2 46.3 60.4 ITANet [42] IJCAI(21) ResNet-50 49.2 62.3 73.6 84.3 TRX [27] CVPR(21) ResNet-50 54.9* 75.6 81.0* 96.1 42.0 64.6 65.1* 85.9 TA2N [20] AAAI(22) ResNet-50 59.7 73.9 81.9 95.1 47.6 61.0 72.8 85.8 STRM [30] CVPR(22) ResNet-50 57.6* 77.3 82.7* 96.9 43.5* 66.0* 65.1* 86.7 MTFAN [36] CVPR(22) ResNet-50 59.0 74.6 84.8 95.1 45.7 60.4 74.6 87.4 HyRSM [35] CVPR(22) ResNet-50 60.3 76.0 83.9 94.7 51.5* 67.5* 73.7 86.1 HCL [44] ECCV(22) ResNet-50 59.1 76.3 82.5 93.9 47.3 64.9 73.7 85.8 Huang etal. [14] ECCV(22) ResNet-50 60.1 77.0 71.4 91.0 49.3 66.7 73.3 86.4 Nguyen etal. [23] ECCV(22) ResNet-50 59.6 76.9 84.9 95.9 43.8 61.7 74.3 87.4 SloshNet [38] AAAI(23) ResNet-50 59.4 77.5 86.0 97.1 46.5 68.3 70.4 87.0 GgHM ResNet-50 61.2 76.9 85.2 96.3 54.5 69.2 74.9 87.4 Table 1. State-of-the-art comparison on the 5-way k-shot benchmarks of HMDB51, UCF101, SSv2, Kinetics. The boldfacen and underline font indicate the highest and the second highest results. Note: * means our implementation. evaluate our method on the splits provided by [41]. Network Architectures. We utilize the ResNet-50 as the feature extractor with ImageNet pre-trained weights [7]. For LDTM, Wt1, Wt2 are two one-layer MLPs, and gap is set to 2. For GgPC, we apply one-layer GNN to obtain task-oriented features. More implementation details can be found in the appendix. Training and Inference. Followed by TSN [33], we uniformly sample 8 frames (T=8) of a video as the input augmented with some basic methods, e.g. random horizontal flipping, cropping, and color jit in training, while multicrops and multi-views in inference. For training, SSv2 were randomly sampled 100,000 training episodes, and the other datasets were randomly sampled 10,000 training episodes. Moreover, we used the Adam optimizer with the multi-step scheduler for our framework. For inference, we reported the average results over 10,000 tasks randomly selected from the test sets in all datasets. 4.2. Results As shown in Tab.1, our method GgHM achieves impressive results against the state-of-the-art methods in all datasets and few-shot settings. Our method especially achieves new state-of-the-art performance on Kinetics and SSv2 in all few-shot settings and HMDB in the 5-way 1-shot task, respectively. In other tasks, our method either achieves the second-highest result or achieves results that are very close to the SOTA. Our method performs impressively without any preference for datasets or the few-shot settings. In contrast, some methods perform unsatisfactorily in the 1-shot task (e.g., TRX [27], STRM [30], SloshNet [38]) or particular datasets (e.g., Nguyen etal. [23] on SSv2, MTFAN [36] on SSv2, Huang etal. [14] on UCF101). 
In addition, compared to our baseline HyRSM [35], which also utilizes the Hausdorff Distance metric as the class prototype matching strategy and focuses on building the task-oriented feature, the effect of our method is significantly improved. Specifically, compared to HyRSM, our method brings 0.9%, 1.3%, 3.0%, and 0.3% performance improvements in the 1-shot task of HMDB51, UCF101, SSv2, and Kinetics, respectively. In the 5-shot task, our method outperforms HyRSM significantly, bringing 0.3%, 1.6%, 2.7%, and 0.7% gain on HMDB51, UCF101, SSv2, and Kinetics, respectively. 4.3. Ablation Study Impact of the proposed components. To validate the contributions of each module (i.e. LDTM, GgPC, HPM) in our method, we experiment under 5-way 1-shot and 5-way 5-shot settings on the SSv2 dataset. Our baseline method only utilizes the frame-level bidirectional Mean Hausdorff metric as the prototype matching strategy without any extra modules. As shown in Tab. 2, we observe that each component is effective. Specifically, compared to the baseline, the HPM module can bring 0.6% and 0.7% accuracy improvement on 1-shot and 5-shot tasks, the GgPC module \fLDTM GgPC HPM 1-shot 5-shot 44.6 56.0 45.2 56.7 49.0 61.5 51.8 64.9 50.1 63.4 52.2 65.8 53.9 68.7 54.5 69.2 Table 2. The impact of proposed modules on SSv2 in the 5-way 1-shot and 5-way 5-shot settings. can bring 4.4% and 5.5% performance improvement on two tasks, and the LDTM module can bring 7.2% and 8.9% performance gain on two tasks. Additionally, stacking modules can enhance performance, indicating the complementarity between components. Combining all modules can get the best results, bringing 9.9% and 13.2% performance improvement on 1-shot and 5-shot tasks over the baseline. Impact of temporal modeling integration. To explore the impact of each temporal modeling module in LDTM and demonstrate their effectiveness, we experiment on the 5-way 1-shot and 5-way 5-shot tasks of SSV2 to ablate our proposed temporal relation modeling blocks. The PTRM block includes spatial attention, which indicates doing SelfAttention only on the spatial dimension. As shown in Tab.3, the CTRM block brings about a 1.0% and 1.9% accuracy improvement on the 1-shot and 5-shot tasks over the baseline. Moreover, the PTRM block obtains 1.5% and 2.3% gain on the 1-shot and 5-shot tasks over the baseline. The integration of these two blocks results in 2.9% and 3.7% gain on two tasks, respectively. Analysis of building the task-oriented features. To demonstrate the necessity of constructing task-specific features and compare the efficacy of various methods for constructing them, we conduct experiments on the 5-way 1-shot task of Kinetics and SSv2. Building task-oriented features can be divided into two categories: unsupervised and supervised. The critical difference between them is whether label information is used directly to constrain the construction of features. The Self-Attention method(HyRSM [35]) means that the task features (the set of support and query video features) do self-attention without using the label information to supervise. In contrast, our GNN method directly applies label information to do supervision, which Spatial Attention PTRM CTRM 1-shot 5-shot 51.6 65.5 52.6 67.4 53.1 67.8 54.5 69.2 Table 3. The impact of temporal modeling blocks integration on SSv2 in the 5-way 1-shot and 5-way 5-shot settings. Method Type Kinetics SSv2 None 72.9 52.2 Self-Attention unsupervised 74.1 53.7 GNN supervised 74.6 54.0 GNN(Transduction) supervised 74.9 54.5 Table 4. 
Analysis of building the task-oriented features on Kinetics and SSv2 in the 5-way 1-shot setting.

Table 5. Comparisons of different prototype matching strategies on Kinetics and SSv2 in the 5-way 1-shot setting.
Metric                  Kinetics   SSv2
Frame-level matching    74.3       53.9
Tuple-level matching    74.1       54.2
Hybrid matching         74.9       54.5

Table 6. The impact of the varying fusion parameter α of hybrid prototype matching on Kinetics and SSv2 in the 5-way 1-shot setting.
Param α     0      0.2    0.4    0.6    0.8    1.0
Kinetics    74.3   74.6   74.9   74.5   74.3   74.1
SSv2        53.9   54.1   54.2   54.5   54.3   54.2

can explicitly optimize the video features' intra- and inter-class correlation. As shown in Tab.4, the Self-Attention method can bring 1.2% and 1.5% gain on Kinetics and SSv2 over the baseline each, which can demonstrate the necessity of building task-oriented features. Moreover, our GNN method (each query feature owns a graph) can bring 1.7% and 1.8% gain over the baseline on two datasets, respectively, showing the advantage of the supervised method. Moreover, our GNN method with transduction (all query features in the same graph) brings a 2.0% and 2.3% accuracy improvement on two datasets.

Figure 5. Visualization of the updated edge features output by the GNN, with examples from Kinetics and SSv2. GT stands for the ground truth and ACA represents the accuracy calculation area. A higher score indicates a greater degree of similarity. We can use the features in the accuracy calculation area directly to obtain task recognition results.

Comparisons of different prototype matching strategies. To analyze different prototype matching strategies, we experiment on the 5-way 1-shot task of Kinetics and SSv2 with different prototype matching methods to evaluate the
effectiveness of our hybrid matching strategy. All the methods are based on the bidirectional Mean Hausdorff metric, and the experiment results are shown in Tab.5. Our hybrid matching strategy brings a 0.6% and 0.6% accuracy improvement on the two datasets over the frame-level matching strategy. Meanwhile, it obtains 0.8% and 0.3% gains on the two datasets over the tuple-level matching strategy, respectively. Impact of the varying fusion parameter of hybrid prototype matching. Tab.6 shows the impact of the varying fusion parameter α in hybrid prototype matching. As part of our experiments, we perform the 5-way 1-shot task on Kinetics and SSv2. The parameter α denotes the weight assigned to the frame- and tuple-level matching in the final fusion. From the results, the optimal values of parameter α are 0.4 for Kinetics and 0.6 for SSv2. Visualization of the updated edge features output by GNN. As shown in Fig.5, we visualize two examples of the updated edge features output by the GNN and the ground truth on Kinetics and SSv2 in the 5-way 1-shot setting. The edge features' values can be seen as similarity scores between two video features. From the visualization, the GNN guidance can well optimize the video features' inter- and intra-class correlation, and the updated edge features are very close to the similarity matrix corresponding to the ground truth. Meanwhile, the intermediate recognition results of the GNN obtained from the edge features in the accuracy calculation area can also achieve high accuracy.

Figure 6. Similarity visualization between query and support videos with different methods (OTAM, TRX, Ours) on the 5-way 1-shot task of Kinetics, SSv2, HMDB51, and UCF101. A higher score indicates a greater degree of similarity.

Similarity visualization. Fig.6 visualizes the predicted similarities between query and support videos with different methods on the 5-way 1-shot task of Kinetics, SSv2, HMDB51, and UCF101.
Our method achieves more discriminative results for similar videos in each task compared to OTAM [4] and TRX [27]. The results presented here demonstrate the effectiveness of our method in distinguishing videos from similar categories, as it has significantly improved both the prediction accuracy and intra-/inter-class correlation within video features. 5." + }, + { + "url": "http://arxiv.org/abs/2308.01532v1", + "title": "Multimodal Adaptation of CLIP for Few-Shot Action Recognition", + "abstract": "Applying large-scale pre-trained visual models like CLIP to few-shot action\nrecognition tasks can benefit performance and efficiency. Utilizing the\n\"pre-training, fine-tuning\" paradigm makes it possible to avoid training a\nnetwork from scratch, which can be time-consuming and resource-intensive.\nHowever, this method has two drawbacks. First, limited labeled samples for\nfew-shot action recognition necessitate minimizing the number of tunable\nparameters to mitigate over-fitting, also leading to inadequate fine-tuning\nthat increases resource consumption and may disrupt the generalized\nrepresentation of models. Second, the video's extra-temporal dimension\nchallenges few-shot recognition's effective temporal modeling, while\npre-trained visual models are usually image models. This paper proposes a novel\nmethod called Multimodal Adaptation of CLIP (MA-CLIP) to address these issues.\nIt adapts CLIP for few-shot action recognition by adding lightweight adapters,\nwhich can minimize the number of learnable parameters and enable the model to\ntransfer across different tasks quickly. The adapters we design can combine\ninformation from video-text multimodal sources for task-oriented spatiotemporal\nmodeling, which is fast, efficient, and has low training costs. Additionally,\nbased on the attention mechanism, we design a text-guided prototype\nconstruction module that can fully utilize video-text information to enhance\nthe representation of video prototypes. Our MA-CLIP is plug-and-play, which can\nbe used in any different few-shot action recognition temporal alignment metric.", + "authors": "Jiazheng Xing, Mengmeng Wang, Xiaojun Hou, Guang Dai, Jingdong Wang, Yong Liu", + "published": "2023-08-03", + "updated": "2023-08-03", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Few-shot action recognition aims to quickly learn new action categories using limited labeled samples. Compared with general action recognition, the main distinction of fewshot action recognition lies in the extremely small amount of labeled data in each task and the variety of task types. *Corresponding authors. 20 30 40 50 60 70 80 90 Number of Tunable Parameters (M) 35 40 45 50 55 60 Accuracy (%) OTAM RN50 HyRSM RN50 TRX RN50 STRM RN50 CLIP-FSAR RN50 CLIP-FSAR ViT-B/16 MA-CLIP ViT-B/32 MA-CLIP ViT-B/16 Model Performance Comparison Figure 1. Performance comparison of different few-shot action recognition methods under the 5-way 1-shot settings on the SSv2Small dataset, including our MA-CLIP, OTAM [3], TRX [33], STRM [38], HyRSM [46], and CLIP-FSAR [44]. Bubble or star size indicates the recognition accuracy. Our MA-CLIP achieves the highest recognition accuracy with the least number of tunable parameters. Therefore, few-shot action recognition requires models to possess the ability to quickly transfer between different tasks, making this work extremely difficult. 
Previous methods [3, 61, 56, 33, 38, 49, 46, 22, 14] mainly used metricbased framework and episode training to solve the transfer to new classes. However, despite the short training time for each task, it requires a large amount of training on similar tasks to enable the model to have strong generalization capabilities across various tasks. Therefore, relying solely on the above solutions still requires the model to spend much time training on different datasets, which somewhat hinders its application in industry. With the development of computer vision, more and more large foundation visual models [34, 54, 43, 39, 15, 19] have emerged. Their key to these models is providing an excellent pre-trained model, which can be fine-tuned for arXiv:2308.01532v1 [cs.CV] 3 Aug 2023 \fdownstream tasks to provide strong transferability. By utilizing large foundation models such as CLIP for downstream tasks such as action recognition [42, 30], segmentation [35, 28, 50], object detection [10, 58], et al., the \u201cpre-training, fine-tuning\u201d paradigm leverages the power of robust pre-trained models, thus eliminating the need to train a network from scratch and obtaining impressive performance. Due to the powerful generalization ability of the CLIP pre-trained model, applying it to few-shot action recognition tasks can significantly reduce the number of similar training tasks during the fine-tuning stage to save training time. Furthermore, CLIP is a multimodal model, and for few-shot action recognition tasks with limited visual samples, introducing the additional textual features can serve as a powerful aid. CLIP-FSAR [44] follows this approach using CLIP [34] and has achieved good results. However, this method has at least two drawbacks. First, each task has limited trainable labeled samples for few-shot action recognition, so it is necessary to minimize the number of training parameters as much as possible to avoid the overfitting phenomenon. Insufficient complete fine-tuning will increase the consumption of computational resources and time and may disrupt the good generalized representation of the foundation models. Second, large foundation models are mostly image pre-trained models, while videos have an extra-temporal dimension compared to images. One of the challenges in few-shot recognition is how to perform temporal modeling effectively. We applied CLIP to perform zero-shot action recognition and found that while it performed well on spatial datasets, its performance was not ideal on temporal datasets, highlighting the importance of temporal modeling. CLIP-FSAR only uses an additional temporal module to extend the image model, which cannot fully integrate temporal information in videos. To overcome the above drawbacks, we followed a new approach called parameter-efficient fine-tuning, i.e., PEFT, that efficiently utilizes large foundation models. PEFT was initially applied in natural language processing (NLP) [12, 21, 55, 13] and has made remarkable progress in computer vision (CV) [1, 17, 16, 51, 47] in recent years. The core idea is to keep the large pre-trained foundation model frozen to achieve robust performance while only fine-tuning a small number of extra parameters. This idea is very well-suited for few-shot action recognition, which can minimize the number of learnable parameters and enable the model to possess the ability to transfer across different tasks quickly. 
In addition, our task involves video understanding, while most large foundation models are based on images lacking temporal understanding. To address this issue, adding a small number of trainable parameters for temporal modeling in the large foundation model proves effective, such as AIM [51] and Vita-CLIP [47]. Based on these findings, we propose a novel method for few-shot action recognition, dubbed MA-CLIP, a shot for Multimodal Adaptation of CLIP. Specifically, we adopt the idea of PEFT and choose CLIP [34] as our baseline due to its multimodal capability. We freeze CLIP\u2019s pretrained image and text encoders during fine-tuning and add some lightweight adapters [12, 51] and tunable parameters. As CLIP is a foundation model for image-text pairs, the adapters we design can combine the bi-modal information of the videos (spatiotemporal information) and texts (semantic information) for task-oriented modeling. Meanwhile, we design a text-guided prototype construction module based on the attention mechanism to fully utilize the video-text multimodal information and enhance the representation of video class prototypes. Finally, our MA-CLIP is plug-and-play and can be used in any different few-shot action recognition temporal alignment metric, i.e., video matcher. Extensive experiments unequivocally demonstrate that our method attains exceptional performance while employing the fewest tunable parameters, as depicted in Fig.1. In summary, we make the following contributions: \u2022 We propose a novel method to adapt CLIP for fewshot action recognition by adding lightweight adapters and relatively few tunable parameters. The adapters we designed can combine information from video-text multimodal sources for task-oriented modeling, which is fast, efficient, and has low training costs. \u2022 Based on the attention mechanism, we design a textguided prototype construction module that can fully utilize video-text information to further enhance the representation of video prototypes. \u2022 Our plug-and-play method can be used in any different few-shot action recognition temporal alignment metric. Experiments demonstrate that our method performs excellently using any metric in various task settings. \u2022 Extensive experiments on five widely used datasets have shown that our method can achieve outstanding performance with minor trainable parameters. 2. Related Works 2.1. Few-shot Learning Few-shot learning leverages the episodic training paradigm, wherein a limited number of labeled training samples from a large number of related tasks are utilized to effectively represent a substantial volume of labeled training samples. In recent years, research on few-shot learning can be mainly classified into adaptation-based and metricbased methods. The former [8, 31, 26] aims to find a network initialization that can be fine-tuned for unknown tasks using limited labeled data, called gradient by gradient. The \flatter [36, 40, 53, 52, 6, 23] aims to acquire knowledge of feature space and compare task features using various matching strategies, referred to as learning to compare. 2.2. Few-shot Action Recognition The core concept of few-shot action recognition is akin to few-shot learning, but including the temporal dimension amplifies the problem\u2019s difficulty. Despite the potential benefits of adaptation-based methods (e.g., MetaUVFS [32]), these approaches have received limited attention in few-shot action recognition due to their high computational requirements and extensive experimental time. 
Instead, existing research predominantly emphasizes metricbased learning approaches with varying focuses. On the one hand, some methods focus on class prototype matching strategies. TRX [33] matches each query sub-sequence with all sub-sequences in the support set, facilitating correspondences between different videos. OTAM [3] introduces a temporal alignment module to calculate the distance value between query and support set videos. On the other hand, some approaches aim to enhance feature representation. For instance, SloshNet [49] leverages a feature fusion architecture search module to exploit low-level spatial features, combining it with long-term and shortterm temporal modeling modules to encode complementary global and local temporal representations. STRM [38] adopts local and global enrichment modules for spatiotemporal modeling, while HyRSM [46] utilizes hybrid relation modeling to learn task-specific embeddings. With the development of large foundation visual models, how to apply them in downstream tasks is receiving increasing attention. CLIP-FSAR [44] makes attempts using the CLIP pre-trained model and designs a video-text contrastive objective and a prototype modulation, achieving good results. However, completely fine-tuning the visual encoder would increase computational costs and risk catastrophic forgetting. Additionally, CLIP is an image pre-trained model that CLIP-FSAR does not extend the visual encoder for temporal modeling. Our approach will address the problems encountered in the CLIP-FSAR mentioned above. 2.3. Parameter-efficient Fine-tuning (PEFT) for Vision Models With the development of an increasing number of largescale visual foundational models [34, 54, 43, 39, 15, 19], more and more attention is focused on parameter-efficient fine-tuning, i.e., PEFT. PEFT, initially employed in natural language processing (NLP) [12, 21, 55, 13], has exhibited impressive advancements in computer vision (CV) in recent times. The fundamental concept revolves around preserving the immutability of the extensive pre-trained models to ensure consistent and reliable performance, focusing solely on refining a limited set of additional parameters. The application of PEFT in computer vision can be broadly categorized into two main approaches: Adapter-based and Prompt-tuning-based. The design of the Adapter originated from [12]. It adds two adapters with residual structures in each transformer layer to fine-tune the model. During the fine-tuning process, the parameters of the original transformer are frozen, and only the parameters of the adapter layers are learned. Inspired by this, AIM [51] applied Adapter technology in action recognition. In each ViT [7] (Vision Transformer) block, AIM designed three adapters for spatial, temporal, and joint adaptation, achieving excellent results. Prompt-tuning refers to the flexible adjustment of prompts, which significantly impacts the final performance of the model. The pioneering use of prompt-tuning in the visual domain was by VPT [16]. It introduced learnable prompts within ViT while freezing the other training parameters in the network and achieved impressive results in downstream tasks related to image processing. Inspired by this, Vita-CLIP [47] designed prompt-tuning specifically for videos, which proposed the video summary tokens, frame-level prompts, and video-level prompts, achieving impressive results. Due to Adapter\u2019s simplicity and AIM\u2019s success in action recognition, we choose the Adapter-based method as our PEFT method. 3. 
Method 3.1. Problem Formulation In the case of few-shot action recognition, the goal is to categorize an unlabeled query video into one of M action categories in the support set, with only limited K samples allotted per action class. This can be considered an M-way K-shot task. Comparable to prior research studies, we follow the episode training framework outlined by [3, 61, 56, 33, 38, 49, 46, 22, 14], where episodes are chosen randomly from a vast pool of collected data. In each episode, we assume that the set S comprises M \u00d7 K samples originating from M different action classes. Additionally, Sm k = {sm k1, sm k2, \u00b7 \u00b7 \u00b7 , sm kT } denotes the k-th video in class m \u2208{1, \u00b7 \u00b7 \u00b7 , M} randomly sampled with T frames. Finally, the query video represents Q = {q1, q2, \u00b7 \u00b7 \u00b7 , qT } sampled with T frames. 3.2. Architecture Overview We choose CLIP [34] as the pre-trained foundation model, a dual-encoder structure composed of visual and text encoders. CLIP can simultaneously encode input images and texts and map them into the same vector space. It can perform cross-modal reasoning and achieve mutual conversion between images and texts. In few-shot action recognition, since there are limited labeled video samples in each task, enhancing the semanticity of videos to a great extent can be achieved by mining the semantic information of label texts and associating them with corresponding video \ffeatures. CLIP has been pre-trained on 400 million webcrawled image-text pairs, making the model highly generalizable. We choose the ViT [7] architecture in CLIP as our visual encoder. In addition, to align with textual descriptions during pre-training, input texts are usually utilized with prompt templates (the selection method of prompt templates is detailed in Sec.4.1.2). To minimize the number of trainable parameters in the model as much as possible, allowing the model to possess the ability to transfer across different tasks rapidly, we froze the pre-trained image and text encoders during fine-tuning and added some learnable lightweight adapters. We present our overall architecture in Fig.2. For the frame-selecting strategy, we employ the approach previously used in TSN [41], which involves dividing the input video sequence into T segments and extracting snippets from each segment. We will focus on a specific scenario for simplicity and convenience: the 5-way 1-shot problem and the query set Q with a single video. In this way, the query video Q = {q1, q2, \u00b7 \u00b7 \u00b7 , qT } and the class support set videos Sm = {sm 1 , sm 2 , \u00b7 \u00b7 \u00b7 , sm T } \u0000Sm \u2208S = \b S1, S2, \u00b7 \u00b7 \u00b7 , S5\t\u0001 pass through the visual encoder (TMA) to obtain the query feature FQ and the support features Fm S (Fm S \u2208FS) in each episode. Similarly, the text descriptions Cm\u0000Cm \u2208C = \b C1, C2, \u00b7 \u00b7 \u00b7 , C5\t \u0001 pass through the text encoder to obtain text features Fm T \u0000Fm T \u2208FT \u0001 . Then we apply global average pooling operation to the features FS and FQ to obtain features Favg S and Favg Q . The Kullback-Leibler divergence losses LS2T and LQ2T are obtained by the cosine similarity metric between Favg S , Favg Q , and the text feature FT , which adapts CLIP to the few-shot action recognition task. Meanwhile, the probability distribution pQ2T is obtained using the cosine similarity metric. 
Then, features FS and FQ are passed through a text-guided prototype construction module (TPCM) with weight sharing to obtain the final features before the prototype matching process, denoted by f FS and f FQ. Finally, the enhanced features are fed into the prototype matching metric to obtain the probability distribution pQ2S and loss LQ2S. 3.3. Task-oriented Multimodal Adaptation (TMA) To minimize the number of tunable parameters as much as possible to avoid the overfitting phenomenon and fully leverage the spatiotemporal information in the videos and the semantic information in the texts, we propose a new method to adapt image pre-trained models for few-shot action recognition by adding lightweight adapters. The adapters we design can combine the bi-modal information of the videos and texts for task-oriented modeling. We choose ViT [7] as our visual encoder. Specifically, consider a video clip V \u2208RT \u00d7H\u00d7W \u00d73, where H, W represent the spatial size and T represents the number of frames. Each frame t \u2208{1 \u00b7 \u00b7 \u00b7 T} is divided into N non-overlapping square patches {xt,i}N i=1 \u2208RP 2\u00d73 of size P \u00d7 P, with the total number of patches being N = HW/P 2. Then the patches {xt,i}N i=1 \u2208RP 2\u00d73 are then projected into the path embeddings xt,p \u2208RN\u00d7D through a linear projection E \u2208R3P 2\u00d7D. An additional learnable [class] token xcls \u2208RD to the embedded patch sequence xt,p is presented for each frame as x(0) t = [xcls; xt,p] \u2208R(N+1)\u00d7D. The final per-frame token sequence fed into the ViT blocks is given by: z(0) t = x(0) t + epos (1) where epos \u2208R(N+1)\u00d7D represents the spatial position encoding. As shown in Fig.3(b), each ViT block consists of several components, including a multiheaded self-attention (MSA) mechanism, a multilayer perceptron (MLP) layer, the layer normalization (LN), and skip connections. Formally, the computation of a ViT block can be formulated as: z\u2032(l) t = z(l\u22121) t + MSA \u0010 LN \u0010 z(l\u22121) t \u0011\u0011 (2) z(l) t = z\u2032(l) t + MLP \u0010 LN \u0010 z\u2032(l) t \u0011\u0011 (3) where z(l\u22121) t and z(l) t represent per-frame input and the output of the l-th ViT block, respectively. And the video level representation at the l-th layer can be represented as z(l) = h z(l) 0 \u00b7 \u00b7 \u00b7 z(l) t \u00b7 \u00b7 \u00b7 z(l) T i . Inspired by the vision parameter-efficient fine-tuning techniques [1, 17, 16, 51, 47], we obey their ideas that keep the large pre-trained foundation model frozen to achieve robust performance while only fine-tuning a small number of extra parameters. Due to Adapter\u2019s [12] simplicity and AIM\u2019s [51] success in action recognition, we propose a task-oriented multimodal adaptation based on Adapter, which can be divided into three parts: temporal adaptation, multimodal adaptation, and joint adaptation. As shown in Fig.3(a), Adapter has a straightforward structure that includes two fully connected layers (FC), an activation layer, and a residual connection. The first FC layer maps the input to a lower dimension, while the second FC layer maps the input back to its original dimension. The support and query set branches\u2019 network structures are represented in Fig.3(c) and Fig.3(d), respectively. Since the label information of the support set data is known while that of the query set is unknown in each task, their network structures differ accordingly. 
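The bottleneck adapter described above (Fig.3(a)) is small enough to write out directly. This is a sketch rather than the exact implementation: the 0.25 bottleneck ratio is the value used in Sec. 4.1.2, and the flag for dropping the internal skip connection anticipates the temporal adaptation discussed next.

import torch.nn as nn

class Adapter(nn.Module):
    # FC down -> GELU -> FC up, with an optional residual connection.
    def __init__(self, dim, ratio=0.25, skip=True):
        super().__init__()
        hidden = int(dim * ratio)
        self.down = nn.Linear(dim, hidden)
        self.act = nn.GELU()
        self.up = nn.Linear(hidden, dim)
        self.skip = skip

    def forward(self, x):
        out = self.up(self.act(self.down(x)))
        return x + out if self.skip else out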
Moreover, inspired by AIM [51], we reuse the pre-trained self-attention layer in the image model for temporal and multimodal adaptation to minimize the number of trainable parameters. By changing the dimensions of the input, the self-attention layer can be used in different ways. In what follows, we will introduce three types of adaptation respectively. \fquery video class support set videos Adapters ViT Block Adapters ViT Block ... Task-oriented Multimodal Adaptation (TMA) Adapters ViT Block Adapters ViT Block ... Task-oriented Multimodal Adaptation (TMA) Share Class Label Run Long jump Sit Shoot gun Pull up Text Encoder Temporal Alignment Metric Text-guided Prototype Construction Module (TPCM) GAP GAP Text-guided Prototype Construction Module (TPCM) Metric Frozen Tuned Metric Cosine Similarity GAP Global Average Pooling Metric Prototype Construction Prototype Construction Prototype Matching Visual Encoder Visual Encoder Figure 2. Overview of MA-CLIP. We will focus on a specific scenario for simplicity and convenience: the 5-way 1-shot problem and the query set Q with a single video. The support set video features FS and query video feature FQ are obtained by the visual encoder(TMA). Similarly, text features FT are obtained using a text encoder. The text-guided prototype construction module (TPCM) generates the final features before the prototype matching process, denoted by f FS and f FQ. The probability distribution pQ2T is obtained using cosine similarity metric, and pQ2S is calculated using prototype matching metric. The loss LQ2S is the standard Cross-Entropy loss and LS2T , LQ2T are Kullback-Leibler divergence (KL) loss. 3.3.1 Temporal Adaptation Since videos have an additional temporal dimension compared to images, temporal modeling is crucial for video tasks. Based on this, we design temporal adaptation for temporal modeling. Compared to AIM, we only use the [class] token xcls as the input for temporal modeling, greatly reducing the computational costs. Specifically, for the lth layer given the input video [class] token embedding x(l\u22121) cls \u2208 RT \u00d71\u00d7D, we reshape it into x(l\u22121) T A \u2208R1\u00d7T \u00d7D. Then we feed x(l\u22121) T A into temporal adaptation to learn the temporal relationships between multiple frames, given by: x(l) T A = x(l\u22121) T A + Adapter \u0010 T-MSA \u0010 LN \u0010 x(l\u22121) T A \u0011\u0011\u0011 (4) where x(l\u22121) T A and x(l) T A denotes the temporal adaptation input and output of the lth transformer block. Self-attention operates on the temporal dimension T to explore the temporal relationships between multiple frames. Inspired by AIM [51], the Adapter structure maintains the same configuration as illustrated in Fig.3(a). However, the skip connection is removed to prevent the influence of temporal adaptation during the initial training phase. 3.3.2 Multimodal Adaptation After the temporal adaptation, we aim to integrate spatiotemporal information with text semantic information to perform multimodal adaptation to achieve task-oriented feature enhancement. Specifically, we feed the text description corresponding to the video Cm \u2208C into the text encoder to get text features Fm T \u0000Fm T \u2208FT \u0001 , which the text encoder is frozen to avoid the extra computation cost and catastrophic forgetting phenomenon. 
To facilitate the fusion of multimodal data, we have processed the text features Fm T \u2208R1\u00d7D\u2032 as follows: FMA T = Repeat (FCtext (Fm T )) (5) where FCtext \u2208RD\u2032\u00d7D aims to align text features with video features in the feature dimension, and the FCtext weights are shared across all layers of the visual transformer. The Repeat operation duplicates text features T times to obtain FMA T \u2208RT \u00d71\u00d7D. For the support set branch, given the temporal adapted features x(l) T A \u2208 RT \u00d71\u00d7D, the input video features z(l\u22121) \u2208RT \u00d7(N+1)\u00d7D and the text features FMA T \u2208RT \u00d71\u00d7D, we concatenate these features together along the spatial dimension to obtain the feature z(l\u22121) MA-S = h z(l\u22121); x(l) T A; FMA T i \u2208RT \u00d7(N+3)\u00d7D, where N denotes the total number of patches. However, \fthe corresponding text labels for the videos are unknown for the query set branch, so we can only concatenate the input video features z(l\u22121) and temporal adapted features x(l) T A to obtain z(l\u22121) ST A-Q = h z(l\u22121); x(l) T A i \u2208RT \u00d7(N+2)\u00d7D. For the support set branch, we feed z(l\u22121) MA-S into multimodal adaptation to integrate spatiotemporal information with text semantic information as shown in Fig.3(c), written by: z(l) MA-S = z(l\u22121) MA-S + Adapter \u0010 M-MSA \u0010 LN \u0010 z(l\u22121) MA-S \u0011\u0011\u0011 (6) where z(l\u22121) MA-S and z(l) MA-S denotes the multimodal adaptation input and output of the lth transformer block. Similarly, we feed z(l\u22121) MA\u2212Q into spatiotemporal adaptation to explore spatiotemporal relationships for the query set branch as shown in Fig.3(d), given by: z(l) ST A-Q = z(l\u22121) ST A-Q+Adapter \u0010 ST-MSA \u0010 LN \u0010 z(l\u22121) ST A-Q \u0011\u0011\u0011 (7) where z(l\u22121) ST A-Q and z(l) ST A-Q denote the spatiotemporal adaptation input and output of the lth transformer block. The Adapter\u2019s structure is the same as shown in Fig.3(b). The multimodal adaptation and spatiotemporal adaptation processes share weight parameters, allowing query and support samples to be in the same feature space. Due to the variation of videos within the same category in different tasks, the fusion of textual semantic information for that category has achieved task-oriented feature enhancement. 3.3.3 Joint Adaptation Temporal adaptation and multimodal adaptation each have their roles, which can combine information from video-text multimodal sources for task-oriented modeling. Lastly, we introduce joint adaptation, in which an Adapter is parallel to the MLP layer to tune the final representations jointly. Specifically, to ensure the consistency of each layer of the transformer block in the spatial dimension, we perform the Select operation on z(l) MA-S and z(l) ST A-Q, taking the first N + 1 features in the spatial dimension of them. Joint adaptation can be computed as follows: z(l) = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 z(l) MA-S + MLP \u0010 LN \u0010 z(l) MA-S \u0011\u0011 + r \u00b7 Adapter \u0010 LN \u0010 z(l) MA-S \u0011\u0011 if i = 0 z(l) ST A-Q + MLP \u0010 LN \u0010 z(l) ST A-Q \u0011\u0011 + r \u00b7 Adapter \u0010 LN \u0010 z(l) ST A-Q \u0011\u0011 if i = 1 (8) where i = 0 refers to the support set branch and i = 1 refers to the query set branch. In this context, r is a scaling factor that regulates the influence of the Adapter\u2019s output weight. 
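Putting the three adaptations together, the sketch below outlines the support-set branch of one adapted ViT block (Eqs. (4), (6) and (8)). The frozen CLIP attention and MLP are stood in by freshly built modules, the scaling factor r = 0.5 is an arbitrary placeholder, and whether each adapter keeps its internal skip connection is an assumption except for the temporal one, which the text says drops it.

import torch
import torch.nn as nn

class AdapterSketch(nn.Module):
    def __init__(self, dim, ratio=0.25, skip=True):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, int(dim * ratio)), nn.GELU(),
                                 nn.Linear(int(dim * ratio), dim))
        self.skip = skip
    def forward(self, x):
        return x + self.net(x) if self.skip else self.net(x)

class AdaptedSupportBlock(nn.Module):
    # Temporal adaptation on [class] tokens, multimodal adaptation over
    # patch + [class] + text tokens, then joint adaptation; the same (frozen)
    # attention layer is reused for T-MSA and M-MSA, as described above.
    def __init__(self, dim, heads, r=0.5):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)  # frozen pre-trained MSA
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.ln_t, self.ln_m, self.ln_j = nn.LayerNorm(dim), nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.t_adapter = AdapterSketch(dim, skip=False)   # no internal skip (Sec. 3.3.1)
        self.m_adapter = AdapterSketch(dim)
        self.j_adapter = AdapterSketch(dim, skip=False)   # parallel adapter, assumed without skip
        self.r = r
        for p in list(self.attn.parameters()) + list(self.mlp.parameters()):
            p.requires_grad = False                       # only the adapters are tuned

    def forward(self, z, text_tok):
        # z: [T, N+1, D] per-frame tokens ([class] token first); text_tok: [T, 1, D]
        x = z[:, :1].transpose(0, 1)                      # [1, T, D]: attend over time
        h = self.ln_t(x)
        x = x + self.t_adapter(self.attn(h, h, h)[0])     # Eq. (4), temporal adaptation
        z_ma = torch.cat([z, x.transpose(0, 1), text_tok], dim=1)   # [T, N+3, D]
        h = self.ln_m(z_ma)
        z_ma = z_ma + self.m_adapter(self.attn(h, h, h)[0])         # Eq. (6), multimodal adaptation
        z_sel = z_ma[:, : z.shape[1]]                               # Select: keep the first N+1 tokens
        h = self.ln_j(z_sel)
        return z_sel + self.mlp(h) + self.r * self.j_adapter(h)     # Eq. (8), joint adaptation

The query-set branch would be identical except that no text token is concatenated, turning M-MSA into the ST-MSA variant described above.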
Figure 3. (a) shows the structure of the Adapter, and (b) shows the structure of a standard ViT block. (c) and (d) illustrate how we adapt the standard ViT block for the support and query set videos. Note that T-MSA, M-MSA, and ST-MSA share weights but are applied to different inputs.

3.4. Text-guided Prototype Construction Module (TPCM)

In few-shot action recognition, the quality of class prototype construction directly affects the results of class prototype matching: the better the video class prototypes are constructed, the higher the recognition accuracy, and vice versa. However, many existing methods [3, 61, 56, 33, 38, 49, 46, 22, 14] only use a limited number of videos from each category to construct class prototypes, making it difficult to distinguish similar classes in each task. Recently, the success of multimodal methods in action recognition [42, 30, 18, 27, 47] has demonstrated that it is possible to understand and represent the semantic information contained in a video more accurately by jointly modeling the video data and relevant textual information. Therefore, based on the attention mechanism, we design a text-guided prototype construction module that can fully utilize video-text information to enhance the representation of video prototypes and optimize the intra-class and inter-class correlations of videos. For the support set branch, given the adapted support set features F_S^m ∈ F_S and the corresponding text features F_T^m ∈ F_T, we apply cross-attention to them to obtain the enhanced features F̃_S^m ∈ F̃_S. Specifically, the query-key-value triplet q_S^m, k_S^m, v_S^m is obtained as:

q_S^m = F_S^m + Repeat(F_T^m)    (9)
k_S^m = v_S^m = Concat([F_S^m; F_T^m])    (10)

where F_S^m ∈ R^{T×D′}, F_T^m ∈ R^{1×D′}, q_S^m ∈ R^{T×D′}, k_S^m = v_S^m ∈ R^{(T+1)×D′}, and Repeat copies F_T^m T times. Then, we apply multi-head attention (MHA) and a feed-forward network to obtain the enhanced support video feature F̃_S^m ∈ R^{T×D′}, as shown in Fig.4(a), given by:

F̄_S^m = q_S^m + MHA(q_S^m, k_S^m, v_S^m)    (11)
F̃_S^m = F̄_S^m + FFN(F̄_S^m)    (12)

where MHA consists of layer normalization and a multi-head attention layer, and FFN consists of layer normalization and an MLP layer. Similarly, we perform the same operation on the query set videos to explore the temporal relationships, as shown in Fig.4(b). The difference is that q_Q^m = k_Q^m = v_Q^m = F_Q^m ∈ R^{T×D′}, since the query videos do not have corresponding textual features. Note that the support and query set branches share the parameter weights of all modules to reduce computation costs while ensuring that query and support samples are in the same feature space. 3.5.
Metric Loss and Predictions The existing few-shot action recognition works [3, 61, 56, 33, 38, 49, 46, 22, 14], typically based solely on visual information, classify a query video by comparing the temporal-aligned distances between the query video and the support set prototypes. With the advent of text information and visual-language pre-trained model CLIP, text features can now be utilized to classify query videos. This means that query videos can be classified by matching not only with the prototypes of the support set (visual branch) but also with the corresponding text features of the support set (text branch), as shown in Fig.2. For the visual branch, given the support prototype enhanced feature f Fm S \u2208f FS and the query enhanced feature f Fq \u2208f FQ, the distance Dq,Sm can be calculated as: Dq,Sm = M \u0010 f Fq, f Fm S \u0011 (13) where M denotes the temporal alignment metric, and Dq,Sm \u2208Dq,S. Based on the distances Dq,S, we can obtain the probability distribution over support classes pQ2S Text Features Video Features C Multi-head Attention FFN Video Features Multi-head Attention FFN Support Set Branch Query Set Branch (a) (b) q k v q k v Repeat Concat C Figure 4. (a) and (b) respectively show the structure of the TPCM module for the support set and query set branch. \u2295denotes element-wise summation. and use a standard cross-entropy loss LQ2S to optimize the model parameters. For the text branch, given the adapted support set prototype feature Fm S \u2208FS, adapted query feature Fq \u2208FQ, and corresponding text feature Fm T \u2208FT , we apply global average pooling on temporal dimension to the features Fm S and Fq to obtain Fm-avg S and Favg q . To bring the pairwise representations of videos and labels closer to each other, we define symmetric similarities between the two modalities using cosine distances in the similarity calculation module, given by: s (Fm-avg S , Fm T ) = \u27e8Fm-avg S , Fm T \u27e9 \u2225Fm-avg S \u2225\u2225Fm T \u2225 (14) s \u0000Favg q , Fm T \u0001 = Favg q , Fm T \u000b \r \rFavg q \r \r \u2225Fm T \u2225 (15) where s (Fm-avg S , Fm T ) \u2208s (Favg S , FT ) and s \u0000Favg q , Fm T \u0001 \u2208 s \u0000Favg Q , FT \u0001 . Based on the cosine similarities s (Favg S , FT ) and s \u0000Favg Q , FT \u0001 , we can obtain the softmax-normalized video-to-text similarity scores pS2T and pQ2T . Inspired by ActionCLIP [42], we define the KullbackLeibler (KL) divergence as the video-text contrastive loss LS2T and LQ2T . By optimizing contrastive loss, the CLIP model can be adapted to our downstream task. Finally, we integrate the losses of both the visual and textual branches, given by: L = \u03b1 \u00b7 1 2 (LS2T + LQ2T ) + (1 \u2212\u03b1) \u00b7 LQ2S (16) Similarly, we also combine the query set video prediction distributions from both the visual and text branches, written as: p = \u03b1 \u00b7 pQ2T + (1 \u2212\u03b1) \u00b7 pQ2S (17) where \u03b1 \u2208[0, 1] is an adjustable hyperparameter. \f4. Experiments 4.1. Experimental Setup 4.1.1 Datasets Our method\u2019s performance is assessed on five datasets that can be classified into two categories: 1) spatialrelated datasets, including Kinetics [4], HMDB51 [20], and UCF101 [37]. 2) temporal-related datasets, including SSv2Full [9] and SSv2-Small [9]. For spatial-related datasets, the recognition of actions primarily relies on background information, with temporal information playing a minor role. 
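Returning to Sec. 3.5, the following schematic shows how the two branches are fused (Eqs. 13-17). The temporal-alignment distance of Eq. (13) is treated as an input, and the KL-based video-text contrastive terms borrowed from ActionCLIP are reduced here to a cross-entropy surrogate with an assumed temperature, so this illustrates only the alpha-weighting, not the exact losses; it also assumes one prototype per class, ordered consistently with the class texts.

```python
import torch
import torch.nn.functional as F

def fused_loss_and_prediction(dist_q2s, sim_q2t, sim_s2t, labels, alpha=0.5, tau=0.07):
    """dist_q2s: (Q, N) query-to-prototype distances (Eq. 13, smaller = closer);
    sim_q2t: (Q, N) and sim_s2t: (N, N) cosine similarities of Eqs. (14)-(15);
    labels: (Q,) ground-truth class indices of the query videos."""
    p_q2s = F.softmax(-dist_q2s, dim=-1)                 # visual-branch distribution
    p_q2t = F.softmax(sim_q2t / tau, dim=-1)             # text-branch distribution
    loss_q2s = F.cross_entropy(-dist_q2s, labels)
    loss_q2t = F.cross_entropy(sim_q2t / tau, labels)
    targets_s = torch.arange(sim_s2t.size(0), device=sim_s2t.device)
    loss_s2t = F.cross_entropy(sim_s2t / tau, targets_s) # prototype m should match text m
    loss = alpha * 0.5 * (loss_s2t + loss_q2t) + (1 - alpha) * loss_q2s   # Eq. (16)
    pred = alpha * p_q2t + (1 - alpha) * p_q2s                            # Eq. (17)
    return loss, pred
```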
On the other hand, the situation is precisely the opposite for temporal-related datasets, where the key to action recognition lies in temporal modeling. Referring to the previous setups [3, 62, 61] on Kinetics, SSv2-Full, SSv2-Small, we select 100 classes and divide them into 64/12/24 action classes as training/validation/testing classes. For UCF101 and HMDB51, we evaluate our method on the splits provided by [56]. 4.1.2 Network Architectures We choose CLIP [34] as our pre-trained foundation model for efficient fine-tuning, where the visual encoder is ViTB/32 [7] or ViT-B/16 [7], while the text encoder is a 12layer, 512-wide transformer with eight attention heads. However, due to the previous works [3, 61, 56, 33, 38, 49, 46, 22, 14] that used ResNet-50 [11] pre-trained on ImageNet [5] as the backbone, we provided a version of utilizing pre-trained CLIP ResNet50 without the TMA module as our visual encoder. Meanwhile, we set the bottleneck ratio of Adapters to 0.25 in the TMA module, the same as AIM [51]. For the prompt templates of the text encoder, we follow the same approach as ActionCLIP [42]. In training, a prompt template is randomly selected from 18 candidate templates for each video. However, the vector is obtained during inference by utilizing all 18 prompt templates as inputs and taking their average. For the temporal alignment metric M, we choose OTAM [3] as our baseline metric. 4.1.3 Training and Inference Following TSN [41], we uniformly select 8 frames (T=8) of a video as the input augmented with some fundamental techniques, such as random horizontal flipping, cropping, and color jitter in training, while center crop in inference. For training, SSv2-Full and SSv2-Small randomly sample 100,000 training episodes, and the other datasets randomly sample 10,000 training episodes. Meanwhile, we freeze the pre-trained foundation model and only fine-tune lightweight adapters during the training process if the visual encoder is ViT-B/32 or ViT-B/16. If the visual encoder is ResNet50, we only freeze the text encoder and fully fine-tune the visual encoder. Moreover, our framework uses the Adam optimizer with the multi-step scheduler. As for inference, the average results of 10,000 tasks randomly sampled from the test sets in all datasets are reported in our experiments. 4.2. Results 4.2.1 Results on Spatial-related Datasets For spatial-related datasets, the recognition of actions primarily relies on background information, with temporal modeling playing a minor role. CLIP is the large foundation image pre-trained model that mainly relies on background information to recognize images. Therefore, finetuning CLIP on spatial-related datasets will result in a significant improvement in few-shot action recognition. Our approach reports results using three different visual encoders. The CLIP-RN50 model has a fully fine-tuned visual encoder since it does not have an Adapter structure. On the other hand, the two ViT-B models only fine-tune lightweight adapter modules during the training process. As shown in Tab.1, even our CLIP-RN50 model significantly improves accuracy in any task setting compared to excellent methods (such as TRX [33], STRM [38], HyRSM [46], SloshNet [49], MoLo [45], et al.) that use ImageNet pretraining. Compared to CLIP-FSAR [44], which uses the same CLIP pre-training and temporal alignment metric, our MA-CLIP achieves better results in multiple datasets and task settings. 
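As a side note on the training setup described above (ViT-B encoders), one possible way to realise the "freeze the foundation model, tune only the lightweight adapters" recipe is sketched below; the parameter-name filter and the learning-rate/milestone values are placeholders, since the text only specifies the Adam optimizer with a multi-step scheduler.

```python
import torch

def freeze_all_but_adapters(model: torch.nn.Module) -> int:
    """Freeze pre-trained CLIP weights; leave only adapter parameters trainable."""
    tunable = 0
    for name, param in model.named_parameters():
        param.requires_grad = "adapter" in name.lower()   # assumed naming convention
        if param.requires_grad:
            tunable += param.numel()
    return tunable  # number of tunable parameters, as reported in the ablation tables

# Optimizer / schedule as described above (values are placeholders, not from the text):
# optim = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)
# sched = torch.optim.lr_scheduler.MultiStepLR(optim, milestones=[30, 60], gamma=0.1)
```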
Specifically, compared to CLIP-FSAR using the same ViT-B/16 as the visual encoder, our method brings 6.3%, 0.9% performance improvements in the 1-shot task of of HMDB51 and Kinetics, and 0.2%, 0.6% gains in the 5-shot task of HMDB51 and Kinetics, respectively. 4.2.2 Results on Temporal-related Datasets For temporal-related datasets, the key to action recognition is temporal information. The performance improvement from CLIP\u2019s pre-trained weights is less significant than those for spatial-related datasets. However, our model still shows excellent results due to its remarkable capacity in temporal modeling. We report three model results using different visual encoders as shown in Tab.2. Compared to the baseline OTAM [3], our MA-CLIP using CLIP-RN50 as the visual encoder can bring 16.1%, 16.0% performance improvements in the 1-shot task, and 9.8%, 11.3% accuracy gains in the 5-shot task of SSv2-Small and SSv2-Full, respectively. Meanwhile, our CLIP-RN50 model achieves the best performance in the 1-shot task compared to all the methods using ResNet-50 as the visual encoder in all temporal-related datasets. Compared to CLIP-FSAR [44], which uses the same CLIP pre-training and temporal alignment metric, our method has a significant performance improvement. Specifically, compared to CLIP-FSAR with the same highest configuration (ViT-B/16), our MA-CLIP brings 4.5%, 2.7% accuracy improvements in the 1-shot \fTable 1. State-of-the-art comparison on the 5-way k-shot benchmarks of the spatial-related benchmarks including HMDB51, SSv2, and Kinetics. The boldfacen and underline font indicate the highest and the second highest results. Note: * means our implementation. For Fine-tuning, \u201cFull\u201d indicates the full fine-tuning of the visual encoder, and \u201cPEFT\u201d indicates the parameter-efficient fine-tuning of the visual encoder. HMDB51 UCF101 Kinetics Method Reference Pre-training Fine-tuning 1-shot 5-shot 1-shot 5-shot 1-shot 5-shot MatchingNet [40] NeurIPS(16) INet-RN50 Full 53.3 74.6 MAML [8] ICML(17) INet-RN50 Full 54.2 75.3 ProtoNet [36] NeurIPS(17) Full 54.2 68.4 74.0 89.6 64.5 77.9 TRN++ [60] ECCV(18) INet-RN50 Full 68.4 82.0 CMN++ [61] ECCV(18) INet-RN50 Full 57.3 76.0 TARN [2] BMVC(19) Full 64.8 78.5 ARN [56] ECCV(20) Full 45.5 60.6 66.3 83.1 63.7 82.4 OTAM [3] CVPR(20) INet-RN50 Full 54.5 68.0 79.9 88.9 73.0 85.8 TTAN [24] ArXiv(21) INet-RN50 Full 57.1 74.0 80.9 93.2 ITANet [57] IJCAI(21) INet-RN50 Full 73.6 84.3 TRX [33] CVPR(21) INet-RN50 Full 54.9* 75.6 81.0* 96.1 65.1* 85.9 TA2N [25] AAAI(22) INet-RN50 Full 59.7 73.9 81.9 95.1 72.8 85.8 STRM [38] CVPR(22) INet-RN50 Full 57.6* 77.3 82.7* 96.9 65.1* 86.7 MTFAN [48] CVPR(22) INet-RN50 Full 59.0 74.6 84.8 95.1 74.6 87.4 HyRSM [46] CVPR(22) INet-RN50 Full 60.3 76.0 83.9 94.7 73.7 86.1 HCL [59] ECCV(22) INet-RN50 Full 59.1 76.3 82.5 93.9 73.7 85.8 Huang etal. [14] ECCV(22) INet-RN50 Full 60.1 77.0 71.4 91.0 73.3 86.4 Nguyen etal. [29] ECCV(22) INet-RN50 Full 59.6 76.9 84.9 95.9 74.3 87.4 SloshNet [49] AAAI(23) INet-RN50 Full 59.4 77.5 86.0 97.1 70.4 87.0 MoLo (OTAM) [45] CVPR(23) INet-RN50 Full 59.8 76.1 85.4 95.1 73.8 85.1 CLIP-FSAR [44] ArXiv(23) CLIP-RN50 Full 69.4 80.7 92.4 97.0 90.1 92.0 CLIP-FSAR [44] ArXiv(23) CLIP-ViT-B/16 Full 77.1 87.7 97.0 99.1 94.8 95.4 MA-CLIP CLIP-RN50 Full 73.3 82.1 91.8 96.6 92.8 93.0 MA-CLIP CLIP-ViT-B/32 PEFT 77.3 83.9 92.7 97.2 93.5 94.3 MA-CLIP CLIP-ViT-B/16 PEFT 83.4 87.9 96.5 98.6 95.7 96.0 task, and 1.2%, 0.2% accuracy gains in the 5-shot task of SSv2-Small and SSv2-Full, respectively. 
For the SSv2Small datasets, even our ViT-B/32 model can perform better than CLIP-FSAR\u2019s ViT-B/16 model. 4.3. Ablation Study 4.3.1 Impact of The Proposed Components To validate the contributions of each module (i.e. TMA, TPCM) in our method, we experiment under 5-way 1-shot settings on the SSv2-Small and SSv2-Full datasets. Our multimodal baseline method chooses CLIP-ViT-B/32 as our visual encoder and freezes all the learnable weights without extra modules. As shown in Tab.3, we observe each component is effective. Specifically, compared to the baseline, the TMA module can bring 13.5% and 16.3% accuracy improvements on SSv2-Small and SSv2-Full, and the TPCM module can bring 16.9% and 19.6% on two datasets. Combining all modules can get the best results, bringing 27.7% and 31.7% accuracy gains on SSv2-Small and SSv2-Full over the baseline. 4.3.2 Effectiveness of The Adaptation Components To demonstrate the effectiveness of our proposed adaptation in TMA, we compare our method to two baselines. We choose CLIP-ViT-B/32 as our visual encoder. The first baseline is a frozen space-only model without any adaptation, freeing all the trainable parameters of the visual and text encoder but not including the TPCM module. Compared to the first baseline, the second baseline fully finetuned the visual encoder without any adaptation. As shown in Tab.4, the fine-tuned visual-only model can bring 12.4% performance improvement over the first baseline but the number of tunable parameters increases from 3.15M to 90.99M. Our method aims to add a few tunable parameters in a fully frozen visual model without compromising the pre-trained weights to achieve better performance than the fully fine-tuned model. In Tab.4, after multimodal adaptation, the frozen model achieves comparable performance with the full fine-tuned visual-only model (54.0 vs. 53.9) with less than one-tenth of the parameter count of the latter (7.94M vs. 90.99M). After adding temporal and joint adaptation, they bring 1.9% and 0.6% performance improve\fTable 2. State-of-the-art comparison on the 5-way k-shot benchmarks of the temporal-related benchmarks including SSv2-Small, and SSv2-Full. The boldfacen and underline font indicate the highest and the second highest results. Note: * means our implementation. For Fine-tuning, Full indicates the full fine-tuning of the visual encoder, and PEFT indicates the parameter-efficient fine-tuning of the visual encoder. SSv2-Small SSv2-Full Method Reference Pre-training Fine-tuning 1-shot 5-shot 1-shot 5-shot MatchingNet [40] NeurIPS(16) INet-RN50 Full 31.3 45.5 MAML [8] ICML(17) INet-RN50 Full 30.9 41.9 TRN++ [60] ECCV(18) INet-RN50 Full 38.6 48.9 CMN++ [61] ECCV(18) INet-RN50 Full 34.4 43.8 36.2 48.8 OTAM [3] CVPR(20) INet-RN50 Full 36.4 48.0 42.8 52.3 TTAN [24] ArXiv(21) INet-RN50 Full 46.3 60.4 ITANet [57] IJCAI(21) INet-RN50 Full 39.8 53.7 49.2 62.3 TRX [33] CVPR(21) INet-RN50 Full 36.0* 56.7* 42.0* 64.6 TA2N [25] AAAI(22) INet-RN50 Full 47.6 61.0 STRM [38] CVPR(22) INet-RN50 Full 37.1* 55.3* 43.1* 68.1 MTFAN [48] CVPR(22) INet-RN50 Full 45.7 60.4 HyRSM [46] CVPR(22) INet-RN50 Full 40.6 56.1 54.3 69.0 HCL [59] ECCV(22) INet-RN50 Full 38.7 55.4 47.3 64.9 Huang etal. [14] ECCV(22) INet-RN50 Full 38.9 61.6 49.3 66.7 Nguyen etal. 
[29] ECCV(22) INet-RN50 Full 43.8 61.1 SloshNet [49] AAAI(23) INet-RN50 Full 46.5 68.3 MoLo (OTAM) [45] CVPR(23) INet-RN50 Full 41.9 56.2 55.0 69.6 CLIP-FSAR [44] ArXiv(23) CLIP-RN50 Full 52.1 55.8 58.7 62.8 CLIP-FSAR [44] ArXiv(23) CLIP-ViT-B/16 Full 54.6 61.8 62.1 72.1 MA-CLIP CLIP-RN50 Full 52.5 57.8 58.8 63.6 MA-CLIP CLIP-ViT-B/32 PEFT 56.5 62.3 61.9 64.5 MA-CLIP CLIP-ViT-B/16 PEFT 59.1 64.5 63.3 72.3 Table 3. The impact of proposed modules on SSv2-Small and SSv2-Full in the 5-way 1-shot task. The visual encoder is ViTB/32. TMA TPCM SSv2-Small SSv2-Full 28.8 30.2 42.3 46.5 45.7 49.8 56.5 61.9 ments, respectively. Our final model brings a 2.6% accuracy improvement compared to the fine-tuned visual-only model, but the number of tunable parameters is only one-fifth. 4.3.3 Comparison Between Multimodal Adaptation and Spatiotemporal Adaptation To compare multimodal and spatiotemporal adaptation fairly, we conduct experiments on the 5-way 1-shot task of SSv2-Small and SSv2-Full. As shown in Sec.3.3.2, the difference between multimodal and spatiotemporal adaptation lies in whether or not to add text features to do self-attention with spatiotemporal features for support videos. As shown in Tab.5, using multimodal adaptation instead of spatiotemporal adaptation results in 0.6% and 0.7% performance improvements on the SSv2-Small and SSv2-Full datasets, respectively. The experimental results reveal that enhancing the semantic representation of visual features by introducing textual features is effective in Adapter. 4.3.4 Comparison of Different Prototype Construction Methods To demonstrate the effectiveness of our proposed module and compare the efficacy of various methods for prototype construction, we conduct the experiments on the 5-way 1shot task of SSv2-Small. We choose CLIP-ViT-B/32 as our visual encoder, and the transformer includes the multi-head self-attention and a feed-forward network. The first baseline unimodal transformer indicates the features FS and FQ doing self-attention on the temporal dimension. The difference between the second (CLIP-FSAR[44]) and first baseline is that the text features FT are stacked along the temporal dimension before performing self-attention on the support features FS. We set all the layers of the transformer to be one. As shown in Tab.6, our TPCM module brings 8.6% and 1.8% performance improvements compared to the unimodal transformer and multimodal transformer on SSv2Small, respectively. Based on the experimental results, our TPCM module demonstrates a higher level of efficacy in effectively leveraging textual information as guidance to integrate visual and textual features. This integration leads \fTable 4. Effectiveness of the Adapter components on SSv2-Small in the 5-way 1-shot task. The visual encoder is ViT-B/32. Method Param (M) Tunable Param (M) Acc Frozen 154.43 3.15 42.3 Fine-tuned visual-only 154.43 90.99 53.9 Frozen + multimodal adaptation 159.21 7.94 54.0 + temporal adaptation 166.31 15.04 55.9 + joint adaptation 169.81 18.54 56.5 Table 5. Effectiveness comparison between multimodal adaptation and spatial adaptation on SSv2-Small and SSv2-Full in the 5-way 1-shot task. The visual encoder is ViT-B/32. Method Dataset Acc Spatiotemporal Adaptation SSv2-Small 55.9 Multimodal Adaptation SSv2-Small 56.5 Spatiotemporal Adaptation SSv2-Full 61.2 Multimodal Adaptation SSv2-Full 61.9 to the attainment of more robust class prototype representations. Table 6. Comparison of different prototype construction methods on SSv2-Small in the 5-way 1-shot task. 
The transformer includes the multi-head self-attention and a feed-forward network. The visual encoder is ViT-B/32. Method Visual Encoder Acc Unimodal Transformer ViT-B/32 47.9 Multimodal Transformer ViT-B/32 54.7 TPCM ViT-B/32 56.5 4.3.5 Method Effectiveness on Different Temporal Alignment Metrics We conduct the experiments using different temporal alignment metrics on the 5-way 1-shot task of Kinetics and SSv2-Small to demonstrate that our model is plug-andplay. We choose CLIP-ViT-B/32 as our visual encoder. We adopt three different temporal alignment metrics, including OTAM [3], Bi-MHM [46], and TRX [33]. As displayed in Tab.7, our method can adapt to any temporal alignment metric, and the final accuracies are closely correlated to the metric\u2019s performance. Moreover, irrespective of the temporal alignment metric employed, our MA-CLIP consistently achieves the most outstanding performance comparing the baselines, which serves as compelling evidence for the superiority of our model. 4.3.6 Unimodal Model vs. Multimodal Model We also compare the performance between the unimodal model and multimodal model, as well as the impacts of different pre-training. We experiment with different pretraining and model modalities on the 5-way 1-shot task of Kinetics and SSv2-Small. We conducted experiments on multiple temporally aligned metrics. We provided two baselines for each metric: an ImageNet [5] pre-trained unimodal model and a CLIP pre-trained unimodal model. We choose ViT-B/32 as our visual encoder, and all baseline models\u2019 visual encoders are fully fine-tuned. As shown in Tab.7, using a CLIP pre-trained single-tower model can lead to performance improvements compared to ImageNet pretrained model, but these improvements are still relatively limited. However, when using our proposed MA-CLIP multimodal model, there is a significant improvement in performance on two datasets. Specifically, our MA-CLIP consistently achieves a minimum accuracy improvement of 15% over the unimodal model utilizing ImageNet pre-training and a minimum performance improvement of 10% over the unimodal model using CLIP pre-training on two datasets. These results, on the one hand, demonstrate the importance of text information for few-shot action recognition tasks and, on the other hand, proves the effectiveness of our approach. 4.3.7 Full Fine-tuning vs. Adaptation In Tab.8, we conduct experiments on the 5-way 1-shot task of SSv2-Small to make a fair comparison between full finetuning and adaptation, which indicates the TMA module we proposed here. We choose ViT/B-32 as our visual encoder. As shown in Tab.8, our adaptation method can bring 2.6% and 2.8% accuracy improvements on SSv2-Small and SSv2-Full over the full fine-tuning model, respectively. Our adaptation method implements multimodal fusion and temporal modeling, while the full fine-tuning method does not achieve this. However, our method has only one-fifth (18.54M vs. 90.99M) of tunable parameters compared to the full fine-tuning method, requires 1.6G (11.9G vs 13.5G) less memory usage, and takes 0.4 (3.0H vs. 3.4H) hours less time to train for 10,000 tasks on a single RTX3090. The experimental results demonstrate that our MA-CLIP is fast, efficient, and has low training costs. 4.3.8 Comparison of Different Methods for The Number of Training Tasks. 
To demonstrate the significance of applying large-scale foundation pre-trained models in few-shot action recognition, significantly reducing the number of training tasks and dramatically improving recognition accuracy. We conduct experiments on SSv2-Small, Kinetics in the 5-way 1-shot task to compare the number of training tasks and accuracy among different methods. The visual encoder is ViT-B/32 \fTable 7. Method effectiveness on different temporal alignment metrics on SSv2-Small and Kinetics in the 5-way 1-shot task. And effectiveness comparison between the unimodal model and the multimodal model. The visual encoder is ViT-B/32. Temporal Alignment Metric Model Modality Pre-training Kinetics SSv2-Small OTAM [3] Unimodal INet-ViT-B/32 75.8 38.2 OTAM [3] Unimodal CLIP-ViT-B/32 83.7 44.8 MA-CLIP(OTAM) Multimodal CLIP-ViT-B/32 93.5 56.5 Bi-MHM [46] Unimodal INet-ViT-B/32 75.2 39.5 Bi-MHM [46] Unimodal CLIP-ViT-B/32 83.2 45.5 MA-CLIP(Bi-MHM) Multimodal CLIP-ViT-B/32 93.2 56.9 TRX [33] Unimodal INet-ViT-B/32 67.2 37.3 TRX [33] Unimodal CLIP-ViT-B/32 82.8 42.7 MA-CLIP(TRX) Multimodal CLIP-ViT-B/32 92.8 52.4 Table 8. Effectiveness comparison between full fine-tuning and adaptation on SSv2-Small and SSv2-Full in the 5-way 1-shot task. The visual encoder is ViT-B/32. \u201dMemory(G)\u201d refers to the amount of video memory usage, and \u201dTime(H)\u201d indicates the time required to train 10,000 tasks, measured in hours on a single RTX3090. Method Dataset Tunable Param (M) Memory(G) Time(H) Acc Full fine-tuning SSv2-Small 90.99 13.5 3.4 53.9 Adaptation SSv2-Small 18.54 11.9 3.0 56.5 Full fine-tuning SSv2-Full 90.99 13.5 3.4 59.1 Adaptation SSv2-Full 18.54 11.9 3.0 61.9 Table 9. Comparison of different methods for the number of training tasks on SSv2-Small, Kinetics in the 5-way 1-shot task. The visual encoder is ViT-B/32. MA-CLIP\u2019s temporal alignment metric is OTAM [3]. Method Dataset Pre-training Num of training tasks Acc OTAM [3] SSv2-Small INet-ViT-B/32 80000 38.2 HYRSM [46] SSv2-Small INet-ViT-B/32 75000 40.4 TRX [33] SSv2-Small INet-ViT-B/32 80000 37.3 MA-CLIP SSv2-Small CLIP-ViT-B/32 20000 56.5 OTAM [3] Kinetics INet-ViT-B/32 10000 83.7 HYRSM [46] Kinetics INet-ViT-B/32 10000 83.2 TRX [33] Kinetics INet-ViT-B/32 10000 82.8 MA-CLIP Kinetics CLIP-ViT-B/32 1000 93.5 and MA-CLIP\u2019s temporal alignment metric is OTAM. As shown in Tab. 9, using the ViT/B-32 model, our MA-CLIP achieves at least a 15% improvement in accuracy compared to other methods that use ImageNet pre-training, while the number of training tasks is only one-fourth of theirs on SSv2-Small. Similarly, on Kinetics, our MA-CLIP achieves at least a 10% improvement in accuracy while the number of training tasks is only one-tenth of other methods. Based on the above results, applying large-scale foundation models to few-shot recognition is necessary. 4.3.9 Attention Visualization of MA-CLIP Fig.5 shows the attention visualization of our MA-CLIP on SSv2-Small in the 5-way 1-shot setting. Corresponding to the original RGB images (left), the attention maps of the unimodal full fine-tuning model using CLIP pre-trained weights (middle), which we have mentioned in Sec.4.3.6 are compared to the attention maps with our MA-CLIP (right). As shown in Fig.5, the attention maps generated by MA-CLIP focus more on action-related objects and reduce attention to the background and unrelated objects. These observations provide empirical evidence of the effectiveness of our MA-CLIP in enhancing semantic and spatiotemporal representation. 5." 
+ }, + { + "url": "http://arxiv.org/abs/2301.07944v2", + "title": "Revisiting the Spatial and Temporal Modeling for Few-shot Action Recognition", + "abstract": "Spatial and temporal modeling is one of the most core aspects of few-shot\naction recognition. Most previous works mainly focus on long-term temporal\nrelation modeling based on high-level spatial representations, without\nconsidering the crucial low-level spatial features and short-term temporal\nrelations. Actually, the former feature could bring rich local semantic\ninformation, and the latter feature could represent motion characteristics of\nadjacent frames, respectively. In this paper, we propose SloshNet, a new\nframework that revisits the spatial and temporal modeling for few-shot action\nrecognition in a finer manner. First, to exploit the low-level spatial\nfeatures, we design a feature fusion architecture search module to\nautomatically search for the best combination of the low-level and high-level\nspatial features. Next, inspired by the recent transformer, we introduce a\nlong-term temporal modeling module to model the global temporal relations based\non the extracted spatial appearance features. Meanwhile, we design another\nshort-term temporal modeling module to encode the motion characteristics\nbetween adjacent frame representations. After that, the final predictions can\nbe obtained by feeding the embedded rich spatial-temporal features to a common\nframe-level class prototype matcher. We extensively validate the proposed\nSloshNet on four few-shot action recognition datasets, including\nSomething-Something V2, Kinetics, UCF101, and HMDB51. It achieves favorable\nresults against state-of-the-art methods in all datasets.", + "authors": "Jiazheng Xing, Mengmeng Wang, Yong Liu, Boyu Mu", + "published": "2023-01-19", + "updated": "2023-04-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "main_content": "Introduction With the development of deep learning, a large amount of excellent work has emerged in the \ufb01eld of action recognition (Li et al. 2022a; Liu et al. 2022b; Wang et al. 2022; Feichtenhofer et al. 2019; Wang et al. 2018). Most studies use large amounts of labeled data to perform video understanding or classi\ufb01cation tasks to learn video representations. Such approaches are unsatisfactory in industrial applications because of the massive time-consuming and laborconsuming data annotation. On the contrary, the core assumption of few-shot learning is using only a handful of labeled training samples from numerous similar tasks as a surrogate for large quantities of labeled training samples. *Co-corresponding author. \u2020Corresponding author. Copyright \u00a9 2023, Association for the Advancement of Arti\ufb01cial Intelligence (www.aaai.org). All rights reserved. riding or walking \u00a0with horse high jump taking a ring out of the box (a) (b) \u00a0Riding unicycle Video Frames Time Time TRX Ours: SloshNet Figure 1: (a): Some examples in few-shot action recognition (b): Visualization of the attention map from the recent work TRX (Perrett et al. 2021) and our proposed SloshNet. Therefore, the attention on few-shot learning methods is increasing daily. The task of few-shot action recognition aims to classify an unlabeled query video into one of the action categories in the support set (usually \ufb01ve categories) with limited samples per action class. Inspired by few-shot image recognition (Finn, Abbeel, and Levine 2017; Doersch, Gupta, and Zisserman 2020; Elsken et al. 
2020; Ma et al. 2020), existing few-shot video action recognition methods mainly focus on comparing the similarity of different videos in the feature space for recognition. However, videos have an extra temporal dimension compared to images, so that it is insuf\ufb01cient to represent the whole video as a single feature vector. Therefore, the spatial-temporal feature modeling becomes one of the core problems of few-shot action recognition. Specially, spatial feature aims to express spatial semantic information for every single frame. In some cases, a video could be recognized with only a single frame like the example of the \ufb01rst row in Fig. 1(a). Current approaches (Bishay, Zoumpourlis, and Patras 2019; Kumar and Narang 2021; Li et al. 2022b) arXiv:2301.07944v2 [cs.CV] 8 Apr 2023 \fusually extract the spatial features through a TSN (Wang et al. 2016) model. However, they usually consider the highlevel spatial features as default but ignore the evenly crucial low-level spatial features, which contain rich texture information. Fusing the low-level spatial features with the high-level ones could compensate and even highlight the low-level semantic features. For the temporal features, we classify them into two categories, long-term and short-term temporal features. Long-term temporal features present the relationship between spatial appearance features of different timestamps, which has also been a hot topic in previous works. For instance, the action of \u201chigh jump\u201d in Fig. 1(a) is easily mistaken for \u201crunning\u201d if the feature of jumping into the mat in the last frame is not integrated into all the previous features. Existing methods (Zhu and Yang 2018; Cao et al. 2020; Perrett et al. 2021) model the long-term temporal features mainly through hand-designed temporal alignment algorithms during the class prototype construction process, which aims to obtain better global features for comparison. On the other hand, short-term temporal features represent the motion characteristics of adjacent frames, i.e., focus on the local temporal relation modeling. For example, in Fig. 1(a), without the short-term temporal information, it is hard to classify whether the action is \u201ctaking the ring out of the box\u201d or \u201cputting the ring in the box\u201d. Nevertheless, we have observed that the short-term temporal modeling remains unexplored for the few-shot action recognition task. The critical insight of our work is to provide powerful spatial-temporal features, making it possible to realize effective recognition with a common frame-level class prototype matcher for few-shot action recognition. To this end, we propose a novel method for few-shot action recognition, dubbed SloshNet, a short for Spatial, long-term temporal and shortterm temporal features integrated Network. Speci\ufb01cally, to exploit the low-level spatial features, we \ufb01rst design a feature fusion architecture search module (FFAS) to automatically search for the best fusion structure of the low-level and high-level spatial features in different scenarios. Low-level features focus more on texture and structural information, while high-level features focus more on the semantic information, and their combination can enhance the representation of spatial features. Furthermore, based on the extracted spatial appearance features, we introduce a long-term temporal modeling module (LTMM) to model the global temporal relations. 
Meanwhile, we design another short-term temporal modeling module (STMM) to encode the motion characteristics between the adjacent frame representations and explore the optimal integration of long-term and short-term temporal features. For class prototype matcher, we follow a frame-level method TRX (Perrett et al. 2021), using an attention mechanism to match each query sub-sequence with all sub-sequences in the support set and aggregates this evidence. Fig. 1(b) shows the learned attentions of our SloshNet with TRX, where the attention learned by our method is highly concentrated and more correlated with the action subject, demonstrating the effectiveness of the spatial-temporal modeling of our SloshNet. The main contributions of our work can be summarized as follows: \u2022 We propose a simple and effective network named SloshNet for few-shot action recognition, which integrates spatial, long-term temporal and short-term temporal features. \u2022 We design a feature fusion architecture search module (FFAS) to automatically search for the best combination of the low-level and high-level spatial features. \u2022 We introduce a long-term temporal modeling module (LTMM) and design a short-term temporal modeling module (STMM) based on the attention mechanism to encode complementary global and local temporal representations. \u2022 The extensive experiments on four widely-used datasets (Something-Something V2, SSV2 (Goyal et al. 2017), Kinetics (Carreira and Zisserman 2017), UCF101 (Soomro, Zamir, and Shah 2012), and HMDB51 (Kuehne et al. 2011)) demonstrate the effectiveness of our methods. Related Works Few-Shot Image Classi\ufb01cation The core problem of few-shot image classi\ufb01cation is to obtain satisfactory prediction results based on a handful of training samples. Unlike the standard training methods in deep learning, few-shot image classi\ufb01cation uses the episodic training paradigm, making a handful of labeled training samples from numerous similar tasks as a surrogate for many labeled training samples. Existing mainstream methods of few-shot classi\ufb01cation can mainly be classi\ufb01ed as adaptation-based and metric-based. The adaptation-based approaches aim to \ufb01nd a network initialization that can be \ufb01ne-tuned for unknown tasks using few data, called gradient by gradient. The evidence of adaptation-based approaches can be clearly seen in the cases of MAML (Finn, Abbeel, and Levine 2017) and Reptile (Nichol and Schulman 2018). The metric-based approaches aim to \ufb01nd a \ufb01xed feature representation in which the target task can be embedded and classi\ufb01ed. The effectiveness of this kind of approach has been exempli\ufb01ed in Prototypical Networks (Snell, Swersky, and Zemel 2017) and Matching Networks (Vinyals et al. 2016). In addition, CrossTransformer (Doersch, Gupta, and Zisserman 2020) aligns the query and support set based on co-occurrences of image patches that combine metric-based features with task-speci\ufb01c adaptations. Few-Shot Video Action Recognition Inspired by few-shot image classi\ufb01cation, MetaUVFS (Patravali et al. 2021) apply the adaptation-based method and design an action-appearance aligned meta-adaptation module to model spatial-temporal relations of actions over unsupervised hard-mined episodes. However, the adaptationbased method requires high computational resources and long experimental time, so it is less commonly used in few-shot action recognition compared to the metric-based method. 
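For readers unfamiliar with the metric-based paradigm referred to here, a minimal prototypical-network-style episode classifier is sketched below: class prototypes are support-feature means and a query is scored by (negative) distance to each prototype. This is generic background only, not SloshNet's matcher, which instead follows TRX's sub-sequence attention.

```python
import torch

def episode_logits(support_feats, support_labels, query_feats, n_way):
    """support_feats: (N*K, D); support_labels: (N*K,) in [0, n_way); query_feats: (Q, D)."""
    prototypes = torch.stack([support_feats[support_labels == c].mean(dim=0)
                              for c in range(n_way)])   # (N, D) class prototypes
    dists = torch.cdist(query_feats, prototypes)         # (Q, N) Euclidean distances
    return -dists                                        # closer prototype -> higher score
```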
In this \ufb01eld, scholars have developed different subdivisional concerns about the metric-based approach. Some of metric-based approaches (Zhu and Yang 2018; Cao et al. 2020; Li et al. 2022b) focus on hand-designed temporal \falignment algorithms during the class prototype construction process. Simultaneously, TRPN (Wang et al. 2021) focuses on combining visual and semantic features to increase the uniqueness between similar action classes, and another work (Liu et al. 2022a) focuses on frame sampling strategies to avoid omitting critical action information in temporal and spatial dimensions. Moreover, unlike the above methods of prototype matching at the video level, some methods (Perrett et al. 2021; Thatipelli et al. 2022) inspired by CrossTransformer match each query sub-sequence with all sub-sequences in the support set, which can match actions at different speeds and temporal shifts. Our method focuses on modeling spatial-temporal relations based on a handful of labeled data. We can obtain good predictions by feeding rich spatial-temporal features to a common frame-level class prototype matcher like TRX (Perrett et al. 2021). Method Fig. 2 illustrates our overall few-shot action recognition framework. The query video Q and the class support set videos Sk passed through the feature extractor, and store the output features of each layer in a feature bank. The features from the feature bank are input into the feature fusion architecture search module (FFAS) to obtain the spatial fusion feature FQ SP , FSk SP . Next, we do the weighted summation of the fused feature and the original last layer feature from the feature bank with a learnable parameter \u03b3 to obtain the enhanced spatial feature. Followed previous works (Yang et al. 2020; Zhu et al. 2021b; Thatipelli et al. 2022), we model the temporal relation after the acquisition of spatial features to obtain better spatial-temporal integration features. Therefore, the obtained spatial features will be passed through a long-term temporal modeling module (LTMM) and a short-term temporal modeling module (STMM) to model long-term and short-term temporal characteristics FQ LT , FSk LT , FQ ST , FSk ST in parallel. Then do the fusion with another learnable parameter, which adaptively fuses the two kinds of temporal features. Finally, the class prediction b yQ of the query video Q and loss L are obtained by a frame-level prototype matcher. Details are shown in the subsequent subsections. Problem Formulation The few-shot action recognition is considered an N-way, K-shot task. It assigns an unlabeled query video to one of the N classes in the support set, each containing K-labeled videos that were not seen during the training process. We follow an episode training paradigm in line with most previous works (Zhu and Yang 2018; Cao et al. 2020; Perrett et al. 2021), where episodes are randomly drawn from an extensive data collection, and each episode is seen as a task. In each task, we let Q = {q1, q2, \u00b7 \u00b7 \u00b7 , ql} denote a query video randomly sampled l frames, and Sk m = \b sk m1, sk m2, \u00b7 \u00b7 \u00b7 , sk ml \t represents the mth video in class k \u03f5 K randomly sampled l frames. FFAS: Feature Fusion Architecture Search Module The low-level features extracted in the earlier layers of the feature extractor focus more on the structure and texture information, while the high-level features extracted in the last layers focus more on the semantic information. The fusion of them helps improve the spatial representations. 
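Before the FFAS details that follow, a minimal sketch of the feature integration just outlined: the FFAS output is blended with the last-layer backbone feature through the learnable gamma, and the LTMM/STMM outputs are blended through a second learnable weight. FFAS, LTMM and STMM are treated as black boxes here, and the exact parameterisation of the learnable blending weights is an assumption.

```python
import torch
import torch.nn as nn

class SpatialTemporalFusion(nn.Module):
    """Learnable blending of spatial and temporal features, following the framework overview."""
    def __init__(self, ffas, ltmm, stmm):
        super().__init__()
        self.ffas, self.ltmm, self.stmm = ffas, ltmm, stmm   # assumed sub-modules
        self.gamma = nn.Parameter(torch.tensor(0.5))          # weight for the spatial fusion
        self.beta = nn.Parameter(torch.tensor(0.5))           # weight for the temporal fusion

    def forward(self, feature_bank):
        # feature_bank: list of per-layer backbone features; the last entry is the top-level feature.
        f_fused = self.ffas(feature_bank)
        f_sp = self.gamma * f_fused + (1.0 - self.gamma) * feature_bank[-1]  # enhanced spatial feature
        f_lt, f_st = self.ltmm(f_sp), self.stmm(f_sp)                         # long- / short-term branches
        return self.beta * f_lt + (1.0 - self.beta) * f_st                    # fused spatial-temporal feature
```

The fused features would then be handed to the frame-level prototype matcher exactly as described above.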
Inspired by (Liu, Simonyan, and Yang 2018; Ghiasi, Lin, and Le 2019), we design a feature fusion architecture search module (FFAS). Our goal is to fuse features from different layers output by the feature extractor with an auto-search fusion module, which enables us to \ufb01nd the best combination of the low-level and high-level spatial characteristics in different scenarios. Speci\ufb01cally, we give the features of each layer (total L layers) F = {F1, \u00b7 \u00b7 \u00b7 , Fi, \u00b7 \u00b7 \u00b7 , FL} (Fi \u2208 RNT \u00d7Ci\u00d7Hi\u00d7Wi) where N, T, C, H, W are the batch size, time, spatial, height, and width, respectively. To facilitate the subsequent fusion process, we align each layer feature\u2019s spatial and channel dimension to the last layer feature, i.e.: Fi \u2208RNT \u00d7 Ci\u00d7 Hi\u00d7 Wi \u2192RNT \u00d7 C\u00d7 H\u00d7 W as follows: Fi = Modulealign (Fi) (1) where Modulealign here is a 3 \u00d7 3 convolution layer. After feature alignment, each layer feature will be updated with the fusion of all previous layers\u2019 features as: Fj = X i k. Moreover, during network training, Cij is computed via pseudo-inverse, which implicitly assumes the full-rankness of Ai, Aj. We refer readers to the empirical validation in Sec. 4.2. We defer the proof of Prop. 2 to Supp. Material. In fact, a similar claim has been formulated and proven in [52] (see Sec. 3.1 therein), but in the context of map refinement via promoting cycle consistency. Though being technically similar, the theoretical argument of [52] and that of ours have fundamentally different implications. More specifically, the former justifies a testtime optimization algorithm, which is used to promote cycle consistency of maps among a fixed test shape collection. While the latter suggests that spectral cycle consistency has been ensured and further leveraged to enhance the universal feature extractor (independent of the test data) during training in any DFM framework following the generic pipeline presented in Sec. 3.1. 4. Two-branch Deep Functional Maps It has long been recognized, both theoretically and empirically, that optimizing purely in the spectral domain is not sufficient. As a toy example, a trivial solution attaining global optima can be constructed as follows: Suppose that we have learned a feature extractor F\u0398, which returns the respective eigenbasis transformed by a universal A0. That is, Gi = \u03a6iA0, \u2200i, which implies Ai = \u03a6\u2020 iGi = A0, \u2200i. Then we have Cij \u2261Ik, which exactly satisfies Edesc(Cij) = Ereg(Cij) = 0, \u2200i, j. However, it probably induces poor point-wise maps. In fact, in [39] the authors have proposed to use an ICPlike technique to encourage the estimated functional maps to be induced by some point-wise maps. In [36], the authors propose a spectral upsampling method for map refinement, which essentially converts maps back and forth between spectral and spatial domains. Moreover, the following lemma from [36] sheds light on the necessity of taking both spectral and spatial representations into consideration. Lemma 1 Given a pair of shapes S1, S2 each having nonrepeating Laplacian eigenvalues, which are the same. A point-wise map T : S1 \u2192S2 is an isometry if and only if the corresponding functional map C in the complete Laplacian basis is both diagonal and orthonormal. 
The above lemma suggests that apart from promoting the structural properties of functional maps, it is also critical to enforce them to be associated with certain point-wise maps, or termed as properness of functional maps in [42]. Finally, we remark that some recent DFM advances also promote the properness of the resulting spectral maps. For instance, AttentiveFMaps [29] follows the spirit of ZoomOut [36] and explicitly performs a conversion between spectral and spatial map representations across different dimensions of eigenbasis; UDMSM [9] constructs explicitly a universal shape in the feature space, and enforce the spectral map estimation to be consistent with the spatial maps induced via the universal shape. 4.1. Two-branch Map Estimation In this part, we leverage our observation made in Prop. 2 and propose a novel, simple yet effective design of unsupervised deep functional maps, which introduces a new branch that independently estimates maps from spatial perspective. Our key insight is that, once cycle consistency is valid and Ai is of full row rank, Ai can be seen as a functional map from a universal latent shape, S0, to Si. This perspective has been explored in several prior works [52, 24, 25], we provide the following details to be self-contained. The above assumption implies Cij = AjA\u2020 i. Then Cij can be interpreted as a functional map composition from Si to S0, followed by a map from S0 to Sj. On the other hand, one can align the spectral embeddings of Sj to that of Si by simply transforming the former by Cij. Indeed, we convert Cij into the point-wise map by the nearest neighbor searching between the rows of \u03a6jCij and that of \u03a6i. From this point of view, denoting the virtual spectral embedding of the latent shape by \u03a60, \u03a6iAi can be then treated as the spectral embedding of Si aligned to that of S0. Therefore, given a pair of shapes Si, Sj, since we have aligned their eigenbasis to the canonical frame defined by the virtual spectral embedding \u03a60, we can align the spectral embedding of Si to \u03a60 by computing \u03a6iAi. Once all the spectral embeddings are aligned to the canonical embedding domain regarding \u03a60, we can compute the soft point-wise map between Si and Sj by nearest neighbor searching between the rows of \u03a6iAi and those of \u03a6jAj. Based on the above derivation, given the learned features projected in the spectral domain, Ai, Aj, and a pair of indices p \u2208[1..ni], q \u2208[1..nj], we can compute point-wise maps. Firstly we compute residual: \\l a bel {eqn : delta} \\delta _{qp} = \\Vert \\B _i[p]\\A _i \\B _j[q]\\A _j \\Vert _2, (2) where \u03a6i[p] denotes the p-th row of \u03a6i, and similarly we define \u03a6j[q]. The soft point-wise map \u03a0 \u2208Rnj\u00d7ni is then given by: \\la be l {eqn:pi} \\ Pi (q, p) = \\frac {\\exp (-{\\alpha \\delta _{qp}})}{\\sum _{p'} \\exp (-\\alpha \\delta _{qp'})}. (3) Note that by construction, each row of \u03a0 is non-negative and sums up to 1, forming a probability distribution. The parameter \u03b1 controls the entropy of each distribution \u2013 the smaller/larger \u03b1 is, the fuzzier/sharper the distribution is. \fFigure 3. We train our two-branch DFM and a vanilla singlebranch version on DT4D-H and monitor spectrally and spatially cycle consistency along the training. Instead of manually tuning the optimal \u03b1, we propose a learning scheme that dynamically controls \u03b1 over training, which is inspired by curriculum learning [5]. 
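A compact sketch of Eqs. (2)-(3): both spectral embeddings are aligned to the canonical frame via their A matrices, pairwise residuals are computed, and a row-stochastic soft map Pi is obtained with a softmax of sharpness alpha. Variable names are illustrative, not taken from the released code.

```python
import torch

def soft_pointwise_map(phi_i, A_i, phi_j, A_j, alpha=1.0):
    """phi_i: (n_i, k) truncated eigenbasis; A_i: (k, d) projected learned features.
    Returns the soft point-wise map Pi of shape (n_j, n_i)."""
    emb_i = phi_i @ A_i            # (n_i, d): spectral embedding of S_i aligned to the latent frame
    emb_j = phi_j @ A_j            # (n_j, d)
    delta = torch.cdist(emb_j, emb_i)            # Eq. (2): delta[q, p] = ||phi_i[p] A_i - phi_j[q] A_j||_2
    return torch.softmax(-alpha * delta, dim=1)  # Eq. (3): each row is a distribution over S_i vertices
```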
We defer the respective details to Sec. 4.3. We convert the soft point-wise map to a functional map by \\ l ab el {eqn:cc} \\C _2 = \\B _j^{\\dagger } \\Pi \\B _i. (4) In the end, we enforce C2 to be consistent with C1, the intermediate output from the FMreg layer. To summarize, thanks to the spectral cycle consistency we identify in Sec. 3.2, we are allowed to construct a spectral latent shape and induce spatial maps via it. By enforcing the spatial estimations to be consistent with the spectral ones, we obtain a spatially and spectrally consistent deep functional maps framework. 4.2. Network Design As shown in Fig. 2, our two-branch network is built upon a standard DFM framework. In the following, we denote by C1 and C2 the estimated functional maps from the original FMreg layer and our novel branch, respectively. Specifically, we use DiffusionNet [47] as our feature extractor. And WKS [4] descriptors are fed into it as initialization of learned features. We borrow the FMreg layer from [12]. It takes both Edesc(C) and commutativity with the Laplace-Beltrami operator into consideration, where the latter is given as: L_ { \\ mbox { lap} } = \\big \\Vert \\C _{1} {\\Lambda }_1 {\\Lambda }_2 \\C _{1} \\big \\Vert ^2, (5) where \u039b1 and \u039b2 are diagonal matrices of the LaplaceBeltrami eigenvalues on the two shapes. The estimation of C2 has been described in detail in Sec. 4.1. In the end, we formulate the training loss as: \\lab el { eqn : lo s s} \\ mat h cal {L}(\\C _1, \\C _2) = \\| \\C _{1}^T \\C _{1} \\matr {I}\\|^2 + \\Vert \\C _1 \\C _2\\Vert ^2, (6) where the first term promotes the orthogonality of C1, while the second term promotes the consistency between the functional maps estimated from different branches. Finally, we remark that by combining the FMreg layer and L(C1, C2), we have incorporated every factor in Lemma 1 into our design. Conceptual Validation In this part, we train a network on DT4D-H dataset (see Sec. 5.1 for details) with our two-branch network, and a single-branch variant without our spatial map estimation branch. We monitor and plot the following quantities along training: (1) Average spectral cycle consistency over sampled triplets, i.e., 1 M P (i,j,k) \u2225CkiCjkCij \u2212I\u22252/\u2225I\u22252; (2) Average spatial cycle consistency over sampled triplets, i.e., the mean Euclidean deviation from composed maps Tki \u25e6Tjk \u25e6Tij to identity map on Si. Here n = 80 is the number of training shapes, and M = 1000 is the number of sampled triplets. The behavior of the blue curves after 4500 iterations verifies our argument that spectrally cycle consistency does not imply spatially cycle consistency. On the other hand, by introducing our two-branch design, the discrepancy is well compensated and evidently better cycle consistencies in both spatial and spectral domains are achieved. 4.3. Updating Scheme of \u03b1 in Eqn. (3) The soft point-map conversion (Eqn. 3) has been applied in several prior works [32, 29], which all set \u03b1 to be a manually selected constant. Ideally, we expect \u03a0 in Eqn. (3) to be close to a permutation matrix, i.e., each row forms a binary vector. This seems to suggest a preference for a large \u03b1. Unfortunately, it would severely hinder network training, since the learned features and maps are of low quality in the early stage. On the other hand, a small \u03b1 can alleviate such difficulty but falls short of fully pushing functional maps to be proper. As demonstrated in Sec. 
5.5, neither small nor large \u03b1 produces satisfying results. Based upon the above analysis, we propose a novel updating scheme, which is inspired by curriculum learning [5]. Namely, we initiate a small \u03b1 at the beginning of network training and increase it by a constant step size for every fixed number of epochs. As shown in Tab. 1, 2, 3, our scheme does not rely on hyperparameter tuning but also achieves state-of-the-art results. 4.4. Implementation Details We implement our network with PyTorch [41]. We use four DiffusionNet blocks [47] as feature backbone and borrow the functional map block with Laplacian regularizer from [12]. The dimension of the Laplace-Beltrami eigenbasis is set to 50. WKS [4] descriptors are used as the input signal to our network. The dimensions of the input and the output descriptors are both set to 128. During training, the value of the learning rate is set to 2e-4 with ADAM optimizer. In all experiments, we train our method for 10,000 iterations with a batch size of 1. Following the learning \fstrategy in Sec. 4.3, we initialize \u03b1 to 1 and increase it by 5 per epoch. As indicated in Eqn. (6), We weigh equally the orthogonality loss with respect to C1 and the residual between C1 and C2. More implementation details are provided in the Supp. Material. 5. Experimental Results In this section, we conduct an extensive set of experiments of non-rigid shape matching on various datasets including humanoids and animals. We test on both nearisometric and non-isometric shape pairs. Our method is compared to a set of competitive baselines including axiomatic, supervised, weakly-supervised, and unsupervised learning methods. We emphasize that in this section, all the maps from the learning-based pipelines are directly inferred from the trained models, without any post-processing procedure. We evaluate the matching results in terms of a mean geodesic error on shapes normalized to unit area. Finally, our point-wise maps are all inferred by converting the output functional maps, as all the other DFM frameworks. 5.1. Datasets FAUST r: The remeshed version [43] of FAUST dataset[6] contains 100 human shapes. Following [45], it is split into 80/20 for training and testing. SCAPE r: The remeshed version [43] of SCAPE dataset[2] contains 71 human shapes. Following [45], it is split into 51/20 for training and testing. SHREC19 r: The remeshed version of SHREC19 dataset [35] collects 44 human shapes from 11 independent datasets with distinctive poses and styles. We abandon shape 40 due to its partiality, we test on 407 pairs among the rest 43 shapes, which come with ground-truth. DT4D-H [34]: The remeshed subset of the large scale animation dataset DeformingThings4D [31]. In particular, DT4D-H includes 10 categories of humanoid shapes undergoing significant pose and style variances, forming a challenging benchmark. SMAL r: The remeshed SMAL dataset [57] contains 49 animal shapes with 8 species. We follow the setting from [29], which splits 29 (5 species) and 20 (3 species) shapes for training and testing. TOSCA r: The remeshed TOSCA dataset [7] contains multiple shape categories. We choose 4 animal categories, including cat, dog, horse, and wolf to verify the generalization performance of networks trained on SMAL r. Note that we only infer the intra-category maps, due to the absence of ground-truth inter-category maps. We refer readers to Supp. Material for visualizations illustrating the variability of the above datasets. 5.2. 
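Putting Secs. 4.1-4.3 together: the soft map is converted back to a functional map (Eq. 4), the training loss combines the orthogonality of C1 with the consistency between the two branches (Eq. 6, with the minus signs restored from the description of its two terms), and alpha follows the stepwise schedule stated above (initialised to 1, increased by 5 per epoch). This is a hedged re-statement of the equations, not the authors' code.

```python
import torch

def fmap_from_softmap(Pi, phi_i, phi_j):
    """Eq. (4): C2 = pinv(phi_j) @ Pi @ phi_i, with Pi of shape (n_j, n_i)."""
    return torch.linalg.pinv(phi_j) @ Pi @ phi_i

def two_branch_loss(C1, C2):
    """Eq. (6): squared Frobenius norms promoting orthogonality of C1 and agreement of the branches."""
    I = torch.eye(C1.shape[0], device=C1.device, dtype=C1.dtype)
    return ((C1.T @ C1 - I) ** 2).sum() + ((C1 - C2) ** 2).sum()

def alpha_schedule(epoch, alpha0=1.0, step=5.0):
    """Curriculum-style sharpening of the soft map: alpha grows linearly with the epoch."""
    return alpha0 + step * epoch
```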
Near-isometric Shape Matching In this part, we perform comparisons with an array of non-rigid shape matching methods: (1) Axiomatic methods including ZoomOut [36], BCICP [43], IsoMuSh [17], Smooth Shells [13], CZO [25]; (2) Supervised learning methods including TransMatch [51], GeomFMaps [12], and supervised version of AttentiveFMaps [29]; (3) Unsupervised learning methods including NeuroMorph [14], SyNoRiM [16], Deep Shell [15], AttentiveFMaps [29], UDMSM [9], DUO-FM [11]. For all learning-based methods, we train models on FAUST r and SCAPE r respectively. Tab. 1 reports results on both standard tests and more challenging generalizations. We observe a trade-off between the two tasks, methods that performs the best in the former (e.g., supervised AttentiveFMaps and UDMSM) tend to overfit, and therefore suffer poor generalization (especially to SREHC19 r). Meanwhile, our default setting, denoted by Ours, achieves reasonable performance in the standard tests but also outperforms the external baselines in 3 out of 4 generalization tests. Especially, in generalizing to SHREC19 r, Ours outperforms the external baselines by a large margin, resulting in 41% (3.8 vs. 6.4) and 46% (4.5 vs. 8.4) error reduction upon the second best. We highlight that post-processing with cycle consistency generally depends on the initialized map quality and the size of the test set (e.g., \u22653 shapes). In contrast, we leverage cycle consistency to improve the feature extractor during training. We also report the results from post-processing techniques based on cycle consistency [17, 25, 16] in Tab. 1. They are significantly outperformed by our method, which is inferred per-pair and without any post-processing. We further augment the dimension of functional maps in network training to 80 (same as UDMSM), which is beneficial to near-isometric matching [29]. It is evident that Ours (80 dim) achieves on-par performance with the regarding state-of-the-art methods in standard tests. On the other hand, due to the significant variability between SHREC19 r and the training sets (see Supp. Material), augmenting dimension leads to worse generalization than before (5.5 vs. 3.8, 5.8 vs. 4.5). Nevertheless, even in this case, our method outperforms the external baselines in all generalization tests by a notable margin. 5.3. Non-isometric Shape Matching We also train our network on non-isometric datasets, SMAL r and DT4D-H, and compare it with the stateof-the-art baselines including DeepShells [15], AttentiveFMaps [29], UDMSM [9] and DUO-FM [11]. SMAL r: We follow the split and input descriptors from [29] (more details are provided in the Supp. Material). Tab. 2 reports results on SMAL r, our method \fTable 1. Mean geodesic errors (\u00d7100) on FAUST r, SCAPE r, and SHREC19 r. The best and the second best are highlighted. Train FAUST r SCAPE r Method Test FAUST r SCAPE r SHREC19 r SCAPE r FAUST r SHREC19 r ZM[36] 6.1 \\ \\ 7.5 \\ \\ BCICP[43] 6.4 \\ \\ 11.0 \\ \\ IsoMuSh[17] 4.4 \\ \\ 5.6 \\ \\ Smooth Shell[13] 2.5 \\ \\ 4.7 \\ \\ CZO[25] 2.2 \\ \\ 2.5 \\ \\ TransMatch[51] 2.7 33.6 21.0 18.3 18.6 38.8 GeomFMaps[12] 2.6 3.3 9.9 3.0 3.0 12.2 AttentiveFMaps[29] supervised 1.4 2.2 9.4 1.7 1.8 12.2 NeuroMorph[14] 8.5 28.5 26.3 29.9 18.2 27.6 SyNoRiM[16] 7.9 21.7 25.5 9.5 24.6 26.8 Deep Shell[15] 1.7 5.4 27.4 2.5 2.7 23.4 AttentiveFMaps[29] 1.9 2.6 6.4 2.2 2.2 9.9 UDMSM[9] 1.5 7.3 21.5 2.0 8.6 30.7 DUO-FM[11] 2.5 4.2 6.4 2.7 2.8 8.4 Ours unsupervised 2.3 2.6 3.8 2.4 2.5 4.5 Ours (80 dim) 1.7 2.6 5.5 2.2 2.0 5.8 Table 2. 
Mean geodesic errors (\u00d7100) on SMAL r. The best and the second best are highlighted correspondingly. Train SMAL r Method Test SMAL r TOSCA r DeepShell[15] 29.3 8.7 GeomFMaps[12] 7.6 24.5 AttentiveFMaps[29] 5.4 20.9 UDMSM[9] 24.6 21.7 DUO-FM[11] 32.8 15.3 Ours 5.4 7.9 achieves the best performance, which is on-par with AttentiveFMaps [29]. To evaluate generalization performance, we use the trained models to directly infer intra-category maps within TOSCA r. It turns out that AttentiveFMaps and GeomFMaps both suffer from significant performance drops (\u00d73.8 and \u00d73.2 larger geodesic error). It is also worth noting that DeepShells achieves the second-best generalization score in the relatively simpler task. However, it fails dramatically regarding the base task. In contrast, our method achieves the best balance between learning in difficult non-isometric pairs and generalizing to relatively easy near-isometric pairs. DT4D-H: We follow the train/test (198/95) split of [29], but ignore the categories mousey and ortiz in both train and test, due to the lack of inter-category map labels regarding them, resulting a split of 168/80. We emphasize that we conduct training and test in a category-agnostic manner, i.e., no class label is used, and the training pairs can consist of shapes from arbitrary two categories. This is significantly different from [29], in which training pairs are selected according to clustering information. Obviously, our setting is more practical, but also more challenging. For completeness, we report results under the setting of [29] in Supp. Material Figure 4. Qualitative evaluation of spatial cycle consistency of different methods. Even composed along a path of 8 highly deformed shapes, our resulting map remains close to identity, while all the baselines fail significantly. and our method outperforms the baselines in both intraand inter-category evaluation by a notable margin. Tab. 3 reports results on DT4D-H, in which we preserve 80 shapes for test and train networks with 168 and 80 shapes, respectively. Note that we report mean geodesic errors over all possible test shape pairs, which may undergo significant distortions (see, e.g., Fig. 1). Our method obtains a 67.2%(7.7vs.22.4) geodesic error reduction with respect to the second-best baseline. On top of that, we also test the generalization of the trained model on near-isometric benchmarks \u2013 our method also generalizes the best in generalization to FAUST r and SCAPE r. The same pattern is observed when the training set is reduced by more than half. Remarkably, our network trained on the reduced set still outperforms all the baselines trained on the full set. Overall, we attribute our performance on matching challenging non-isometric shapes (Tab. 3) and generalizing to unseen shapes (Tab. 1) to our effort to promote both spec\fTable 3. Mean geodesic errors (\u00d7100) on DT4D-H. The best and the second best are highlighted correspondingly. Train DT4D-H (168) DT4D-H (80) Method Test DT4D-H FAUST r SCAPE r DT4D-H FAUST r SCAPE r DeepShell[15] 27.0 4.9 6.5 29.3 4.7 7.0 AttentiveFMaps[29] 25.7 3.4 6.4 28.9 2.7 6.3 UDMSM[9] 46.8 43.3 47.9 49.7 42.5 40.0 DUO-FM[11] 22.4 10.0 12.2 24.7 8.0 9.2 Ours 7.7 3.1 6.1 9.0 2.6 6.2 Table 4. Mean geodesic errors (\u00d7100) of SURFMNet and our variant trained on 4 datasets Method SURFMNet SURFMNet + Ours FAUST r 6.0 3.5 SCAPE r 6.8 3.4 SMAL r 20.4 13.3 DT4D-H 18.3 15.0 tral and spatial cycle consistency. 
Especially, the isometry assumption is likely violated in the former case, thus cycle consistency, as a generic prior, plays an important role of regularizing maps. As an illustration, we present a qualitative evaluation on the point-wise cycle consistency in Fig. 4. We sample 8 shapes from the test set of DT4D-H (one from each category) and compose the maps along the path (S1 \u2192S2 \u2192 \u00b7 \u00b7 \u00b7 \u2192S8 \u2192S1) with respect to different approaches. It is evident that due to the significant distortion undergoing among the shapes, all but our method fail to preserve cycle consistency in this demanding test, while our composing map approximates the identity map on S1. It also aligns nicely with the quantitative results reported in Tab. 3. 5.4. Integration with SURFMNet [45] Our two-branch design can be easily incorporated into any existing DFM framework following the general design outlined in Sec. 3.1. To demonstrate this, we modify the SURFMNet [45], one of the earliest approaches of unsupervised DFM, by adding our new branch. Tab. 4 shows the matching accuracy on the four main benchmarks. It is evident that in every case, incorporating our design leads to significant error reduction ranging from 18% to 50% . Especially, in the near-isometric cases, we obtain 41.6% and 50% error reduction respectively. Note that the absolute scores, 0.035, 0.034, are reasonable even compared to the state-of-the-art results reported in Tab. 1. 5.5. Ablation Study In this section, we present a set of ablation studies consisting of two parts. The first part verifies the rationality of our method, and the second part demonstrates the robustness of our method. We conduct all experiments on SMAL r dataset [57]. First of all, instead of using the updating scheme in Sec. 4.3, we test the performance of our pipeline using two fixed values of \u03b1 in Eqn. (3): \u03b1 = 1 and \u03b1 = 50. Compared to our proposed model, the two variants yield a noticeable performance drop. Especially in the case \u03b1 = 50, the network fails to deliver reasonable matching results. We believe it is because a large \u03b1 amplifies the noise of maps learned at the early training stage. Then we justify our two-branch network design. Removing spatial branch amounts to training a standard singlebranch DFM. To remove the spectral branch, we remove the FMreg layer and instead use our new branch to compute point-wise maps and convert them to functional maps. In the end, we modify the training loss so that it covers descriptor preservation, commutativity with the Laplace-Beltrami operator, and orthogonality (the latter two compensate the removed FMreg layer). The accuracy drop reported in the third and fourth row of Tab. 6 clearly suggests the necessity of our two-branch design. In our experiments, we always use the full resolution meshes (\u223c5k vertices) and compute in Eqn. (2) with all of the 128 descriptors. We anticipate that efficiency can become an issue when the input mesh resolution is high, and/or we would like to increase the size of learned descriptors. 
Therefore, we test the robustness of our pipeline with respect to down-sampling, which is commonly used in functional maps-based frameworks [36, 29]: 1) We downsample 3000 vertices on each shape via furthest point sampling; 2) In order to down-sample the feature dimension, we operate as the following during training: given a A1, A2, we perform SVD on A1, i.e., A1 = U1\u03a31V T 1 , then we set \u02c6 A1 = A1 \u02c6 V1, and set \u02c6 A2 = A2 \u02c6 V1, where \u02c6 V1 is the first m columns of V1. We set m = 30 by replacing Ai with \u02c6 Ai in Eqn. (2). The results in the bottom two rows show that the above operation has a relatively minor effect on the performance, proving the robustness of our method. 6." + } + ], + "Piercarlo Bonifacio": [ + { + "url": "http://arxiv.org/abs/1009.1848v2", + "title": "Cu I resonance lines in turn-off stars of NGC 6752 and NGC 6397. Effects of granulation from CO5BOLD models", + "abstract": "Context. Copper is an element whose interesting evolution with metallicity is\nnot fully understood. Observations of copper abundances rely on a very limited\nnumber of lines, the strongest are the Cu I lines of Mult. 1 at 324.7 nm and\n327.3 nm which can be measured even at extremely low metallicities. Aims. We\ninvestigate the quality of these lines as abundance indicators. Method. We\nmeasure these lines in two turn-off (TO) stars in the Globular Cluster NGC 6752\nand two TO stars in the Globular Cluster NGC 6397 and derive abundances with 3D\nhydrodynamical model atmospheres computed with the CO5BOLD code. These\nabundances are compared to the Cu abundances measured in giant stars of the\nsame clusters, using the lines of Mult. 2 at 510.5 nm and 578.2 nm. Results.\nThe abundances derived from the lines of Mult. 1 in TO stars differ from the\nabundances of giants of the same clusters. This is true both using CO5BOLD\nmodels and using traditional 1D model atmospheres. The LTE 3D corrections for\nTO stars are large, while they are small for giant stars. Conclusions. The Cu I\nresonance lines of Mult. 1 are not reliable abundance indicators. It is likely\nthat departures from LTE should be taken into account to properly describe\nthese lines, although it is not clear if these alone can account for the\nobservations. An investigation of these departures is indeed encouraged for\nboth dwarfs and giants. Our recommendation to those interested in the study of\nthe evolution of copper abundances is to rely on the measurements in giants,\nbased on the lines of Mult. 2. We caution, however, that NLTE studies may imply\na revision in all the Cu abundances, both in dwarfs and giants.", + "authors": "Piercarlo Bonifacio, Elisabetta Caffau, Hans-G\u00fcnter Ludwig", + "published": "2010-09-09", + "updated": "2010-09-19", + "primary_cat": "astro-ph.SR", + "cats": [ + "astro-ph.SR" + ], + "main_content": "Introduction There is no wide consensus on the nucleosynthetic origin of copper, and the complex picture drawn by the observations has no straightforward interpretation. Multiple channels can contribute to the production of this element. According to Bisterzo et al. (2004) there are \ufb01ve such channels: explosive nucleosynthesis, either in Type II supernovae (SNII) or in Type Ia supernovae (SNIa), slow neutron capture (s\u2212process), either weak (i.e. taking place in massive stars in conditions of hydrostatic equilibrium during He and C burning) or main (i.e. occurring in the inter-shell region of low-mass asymptotic giant branch stars) and the weak sr-process. 
The latter occurs in massive stars in the C-burning shell when neutron densities reach very high values, intermediate between typical s-process neutron densities (10^9-10^11 cm^-3; Despain 1980) and r-process neutron densities (10^20-10^30 cm^-3; Kratz et al. 2007). The contribution of the s-process, both weak and main, to the solar system Cu abundance is estimated by Travaglio et al. (2004a) to be 27%. Explosive nucleosynthesis in SNII can account for 5% to 10% of the solar system Cu (Bisterzo et al. 2004). [Footnote: Based on observations made with the ESO Very Large Telescope at Paranal Observatory, Chile (Programmes 71.D-0155, 75.D-0807, 76.B-0133).] The contribution of SNIa is probably less well known, however, as pointed out by McWilliam & Smecker-Hane (2005), the available SNIa yields of Cu are rather low (Travaglio et al. 2004b; Thielemann et al. 1986). Bisterzo et al. (2004) claim that the bulk of cosmic Cu has indeed been produced by the weak sr-process. Observations of copper abundances in Galactic stars show a decrease in the Cu/Fe ratio at low metallicities. This was first suggested by Cohen (1980), on the basis of the measurements in giant stars of several Globular Clusters of different metallicities (Cohen 1978, 1979, 1980). It was not until the comprehensive study of Sneden et al. (1991) that this trend was clearly defined in a robust way, resting on measurements in a large sample of stars. Recent studies of field (Mishenina et al. 2002; Bihain et al. 2004) and Globular Cluster stars (Shetrone et al. 2001, 2003; Simmerer et al. 2003; Yong et al. 2005) have confirmed this trend (see Fig. 1 of Bisterzo et al. 2004). Somewhat at odds with these general results are the observations of the Globular Cluster ω Cen (Cunha et al. 2002; Pancino et al. 2002). Even though this cluster shows a sizeable spread in metal abundances (−2.20 ≤ [Fe/H] ≤ −0.70, Johnson et al. 2008), the Cu/Fe abundance ratio is nearly constant, with no discernible trend. Observations in Local Group galaxies (Shetrone et al. 2001, 2003) show that metal-poor populations display low Cu/Fe ratios, similar to what is observed in Galactic stars of comparable metallicity. However, McWilliam & Smecker-Hane (2005) noted that the metal-rich population of the Sgr dSph displays considerably lower Cu/Fe ratios than Galactic stars of comparable metallicity. This result is confirmed by the measurements of Sbordone et al. (2007), who also include stars of the Globular Cluster Terzan 7, associated to the Sgr dSph. [Fig. 1. Cu i 324.7 nm line in the programme stars. The spectra are displaced vertically by 0.4 units, with respect to each other, for display purposes.] The majority of the Cu measurements are based on the Cu i lines of Mult. 2, sometimes one line of Mult. 7 is used. The exceptions are the measurements of Bihain et al. (2004) and Cohen et al. (2008), who use the resonance lines of Mult. 1, and Prochaska et al. (2000) who, to our knowledge, are the only ones who have made use of the strongest line of Mult. 6 in the near infrared. While for stars of metallicity above -1.0 one may have a choice of several lines to use, when going to metal-poor stars, for instance below -1.5, the only usable Cu abundance indicators are the two strongest lines of Cu i Mult. 2 in giant stars and the resonance lines of Mult.
1 in both dwarfs and giants. The advantage of the lines of Mult. 1 is that they are very strong, at high metallicity they are strongly saturated and therefore not ideal for abundance work, but, they remain measurable down to 1 we refer to the multiplet designation of Moore (1945) Fig. 2. Cu i 327.3 nm line in the programme stars. The spectra are displaced vertically by 0.4 units, with respect to each other, for display purposes. an extremely low metallicity. Bihain et al. (2004) have been able to measure the 327.3nm line in the extremely metal-poor dwarf G64-12 ([Fe/H]\u223c\u22123). Observationally the main disadvantage of Mult. 1 is that it lies in the UV, fairly near to the atmospheric cut-o\ufb00. A very e\ufb03cient UV spectrograph, like UVES and a large telescope, like VLT, may circumvent this problem. There are many spectra suitable for the measurement of the Cu i lines of Mult. 1 in stars of di\ufb00erent metallicities in the ESO archive2. The main purpose of our investigation is to assess the quality of the Cu i lines of Mult. 1 as abundance indicators. Our strategy is to compare for the \ufb01rst time Cu abundances in main sequence and giants of the same cluster, because Cu is not expected to be easily destroyed or created. This test will indicate the reliability of our modelling. Globular Clusters NGC 6397 and NGC 6752 span an interesting range in metallicity \u20132.0 to \u20131.5, which is relevant for a large fraction of the observations in \ufb01eld stars. 2 http://archive.eso.org 1004 \fBonifacio et al.: Copper resonance lines Table 1. Atmospheric parameters of the programme stars. Star Te\ufb00 log g [Fe/H] \u03be K [cgs] dex km s\u22121 Cl* NGC 6752 GVS 4428 6226 4.28 -1.52 0.70 Cl* NGC 6752 GVS 200613 6226 4.28 -1.56 0.70 Cl* NGC 6397 ALA 1406 6345 4.10 -2.05 1.32 Cl* NGC 6397 ALA 228 6274 4.10 -2.05 1.32 Cl* NGC 6397 ALA 2111 6207 4.10 -2.01 1.32 HD 218502 6296 4.13 -1.85 1.00 2. Observations and equivalent width measurements The spectra analysed here are the same as in Pasquini et al. (2004) and Pasquini et al. (2007) and described in the above papers. They were obtained with the UVES spectrograph (Dekker et al. 2000) at the ESO VLT-Kueyen 8.2m telescope. We here use the blue arm spectra, which are centred at 346 nm. Both clusters were observed with a 1\u2032\u2032 slit and a 2 \u00d7 2 on-chip binning, which yields a resolution of about 40 000. The reduced spectra were downloaded from the ESO archive, thanks to the improved strategies for optimal extraction (Ballester et al. 2006), the S/N ratios are greatly improved compared to what was previously available. The equivalent widths (EWs) of the two Cu i lines of Mult. 1 were measured with the IRAF task splot and are provided in the on-line Table 6. In addition to the four cluster stars we analyse the \ufb01eld star HD 218502 as a reference. Its atmospheric parameters are close to those of the cluster stars. For this star we work with the data used by Pasquini et al. (2004) as well as with the data observed in 2005, in the course of ESO programme 76.B-0133 (see Smiljanic et al. 2008). For this star we used six spectra: two with 1. \u2032\u20320 and 2 \u00d7 2 binning, two with 1. \u2032\u20320 and 1 \u00d7 1 binning, two with 1. \u2032\u20322 and 1 \u00d7 1 binning. Each pair of spectra was coadded, the equivalent widths were measured on the coadded spectrum and then the three equivalent widths were averaged. The spectra of all the \ufb01ve stars analysed here are shown in Fig. 1 and 2. 
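Since the analysis rests on equivalent widths measured on coadded spectra, a small illustration of that step may be useful. The sketch below averages a few normalised spectra and derives the EW of a single Cu i line from a Gaussian fit; in the paper the EWs were measured interactively with the IRAF task splot, so this is only a hedged stand-in, and the wavelength window, line depth and noise level are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def coadd(fluxes):
    """Average several already rebinned, continuum-normalised spectra."""
    return np.mean(np.vstack(fluxes), axis=0)

def gaussian_absorption(wl, depth, centre, sigma):
    return 1.0 - depth * np.exp(-0.5 * ((wl - centre) / sigma) ** 2)

def equivalent_width(wl, flux, centre_guess):
    """EW in the same units as wl, from a Gaussian fit to a normalised line."""
    p0 = [0.3, centre_guess, 0.01]                       # depth, centre, sigma
    popt, _ = curve_fit(gaussian_absorption, wl, flux, p0=p0)
    depth, _, sigma = popt
    return depth * sigma * np.sqrt(2.0 * np.pi)          # area of the Gaussian

# toy usage around the 327.3 nm Cu i line (wavelengths in nm)
wl = np.linspace(327.2, 327.5, 300)
noisy = [gaussian_absorption(wl, 0.25, 327.396, 0.008)
         + np.random.normal(0.0, 0.02, wl.size) for _ in range(3)]
flux = coadd(noisy)
print(equivalent_width(wl, flux, 327.396) * 1000.0, "pm")  # nm -> pm
```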
One of the goals of the present analysis is to compare the Cu abundances in the TO stars with those measured in giant stars of the same cluster. For NGC 6752 we can rely on the recent analysis by Yong et al. (2005), who analysed 38 giants in this cluster, making use of high-resolution high-S/N ratio UVES spectra. The atomic data used by Yong et al. (2005) are the same as those here used. The analysis is based on 1D ATLAS models and LTE spectrum synthesis. That the ATLAS models employed by Yong et al. (2005) use the approximate overshooting option in ATLAS, while those we use do not, brings about only minor di\ufb00erences. Thus the measurements of Yong et al. (2005) are directly comparable to our own. The measurements of Yong et al. (2005) are, however, based only on the strongest line of Mult. 2. Because we aim to compare the abundance derived from the lines of Mult. 1 and Mult. 2 we retrieved UVES reduced spectra of one of the stars of Yong et al. (2005) from the ESO archive: Cl* NGC 6752 YGN 30. We used three spectra of 1800 s obtained with the dichroic # 1, the blue arm spectrum was centred at 346 nm and the red arm spectrum at 580 nm. The slit was set at 1. \u2032\u20320 in the blue arm and 0. \u2032\u20327 in the red arm; the CCD binning was 1 \u00d7 1 for both arms. The corresponding resolution is \u223c45 000 for the blue arm and \u223c60 000 in the red arm. For the cluster NGC 6397, though, we were unable to \ufb01nd any recent analysis that included the measurement of Cu. In fact the only measurement of Cu in this cluster which we could \ufb01nd is due to Gratton (1982). In order to make the Gratton (1982) measurements directly comparable to our own we used the published EWs of the Cu i lines of Mult. 2 and derive the abundances with our models, spectrum synthesis codes, and atomic data. 3. Cu abundances 3.1. Atomic data To determine the Cu abundances for the TO stars we used the Cu i resonance lines of Mult. 1 at 324.7 nm and 327.3 nm. The log g f values were taken from Bielski (1975) and the hyper\ufb01ne structure and isotopic shifts for the 63Cu and 65Cu isotopes from Kurucz (1999). We used the same sources for the two lines of Mult. 2 at 510.5 nm and 587.2 nm that we used for the giant stars. The line list used for the computations is given on-line in Table 5. The line at 327.3 nm is free from blends in metal-poor TO stars and the continuum is usually easily determined. The stronger 324.7 nm line, though, lies in a more complex spectral region. The only truly blending feature is a weak OH line (324.7615nm), but, the line is on the red wing of a complex blend, mainly of iron lines, of which several have poor log g f values. The continuum is more di\ufb03cult to determine for this line, given the larger line crowding in this region. We experimented with di\ufb00erent choices for the Van der Waals broadening of the lines, the ABO theory (Anstee & O\u2019Mara 1995; Barklem & O\u2019Mara 1997; Barklem et al. 1998a,b) and the WIDTH approximation (Kurucz 1993a, 2005; Castelli 2005, see also Ryan 1998). For the transitions under consideration the WIDTH approximation and the ABO theory yield almost identical values. 3.2. Atmospheric parameters The adopted atmospheric parameters for our programme stars are given in Table 1 and were taken from Pasquini et al. (2004) and Pasquini et al. (2007). For the giant star Cl* NGC 6752 YGN 30 we adopted the atmospheric parameters of Yong et al. (2005). For the two giants in NGC 6397 we adopted the atmospheric parameters of Gratton (1982). 
For the reader\u2019s convenience the atmospheric parameters of the giant stars are provided here on-line in Table 7. 3.3. Model atmospheres and spectrum synthesis. For each star we computed a 1D model atmosphere using version 9 of the ATLAS code (Kurucz 1993a, 2005) under Linux (Sbordone et al. 2004; Sbordone 2005). We used the opacity distribution functions described by Castelli & Kurucz (2003) and microturbulent velocity 1 km s\u22121, the mixing-length parameter, \u03b1MLT, was set to 1.25, and the overshooting switched o\ufb00. This model atmosphere was used as input to the SYNTHE code (Kurucz 1993b, 2005), with di\ufb00erent Cu abundances, to compute a curve-of-growth for each line. The Cu abundances were derived by interpolating in these curves of growth. The corresponding abundances are given in the second column of Table 2, the \u03c3 is the variance of the abundances of the two lines. The abundances for the individual lines can be found on-line in Table 6. The use of three dimensional hydrodynamical simulations to describe stellar atmospheres (hereafter 3D models) has led to the important notion that the outer layers present steeper temperature gradients than predicted by traditional 1D static model atmospheres (Asplund et al. 1999; Asplund 2005; 1005 \fBonifacio et al.: Copper resonance lines 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 A(Cu) -2.4 -2.2 -2.0 -1.8 -1.6 -1.4 Log(EW/\u03bb) Fig. 3. Curves of growth (COG) for the Cu i 327.3 nm transition. The dotted line to the left is the COG for the 3D model d3t63g40m20n01, the three solid lines to the right are those for the corresponding 1DLHD model for three values of the microturbulent velocity: 0.5, 1.0 and 1.5 km s\u22121, bottom to top. The dashed line is the 3D COG shifted arbitrarily by +0.58 dex along the x-axis. This highlights that the shape of the 3D COG di\ufb00ers from that of the corresponding 1D COGs and therefore the 3D correction depends on the EW of the transition. Gonz\u00b4 alez Hern\u00b4 andez et al. 2008) and that this e\ufb00ect is considerably more pronounced for metal-poor stars. In addition to the di\ufb00erent mean temperature pro\ufb01le, the 3D models di\ufb00er from traditional 1D models because they account for the horizontal temperature \ufb02uctuations. Both e\ufb00ects may or may not be important, depending on the line formation properties of the transition under consideration. In order to investigate these e\ufb00ects for the Cu i lines we used several 3D models computed with the code CO5BOLD (Freytag et al. 2002, 2003; Wedemeyer et al. 2004). The characteristics of the 3D models employed in this study are given in Table 3. The line formation computations for the 3D models were performed with the Linfor3D code3. For each 3D model we used also two reference 1D models: the \u27e83D\u27e9and the 1DLHD, which we de\ufb01ne below. The \u27e83D\u27e9models are computed on-the-\ufb02y by Linfor3D by averaging the 3D model over surfaces of equal Rosseland optical depth and time. The \u27e83D\u27e9model has, by construction, the mean temperature structure of the CO5BOLD model, therefore the difference in abundance A(3D)-A(\u27e83D\u27e9), allows us to single out the e\ufb00ects caused by temperature \ufb02uctuations (see Ca\ufb00au & Ludwig 2007). The 1DLHD model is a 1D, plane parallel, LTE, static, model atmosphere and employs the same micro-physics and opacity as the CO5BOLD models; it is computed with the LHD code. 
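The 1D abundance determination described earlier in this section reduces to inverting a curve of growth: SYNTHE provides the EW of the line for a set of trial Cu abundances, and the abundance corresponding to the measured EW is found by interpolation. A minimal sketch of this inversion is given below; the abundance-EW grid is an invented placeholder standing in for the actual SYNTHE output for the line in question.

```python
import numpy as np

def abundance_from_cog(a_grid, ew_grid, ew_observed):
    """Invert a curve of growth: interpolate A(Cu) at the measured EW.

    a_grid      : trial abundances used for the synthetic spectra (dex)
    ew_grid     : corresponding synthetic equivalent widths (pm)
    ew_observed : measured equivalent width (pm)
    The interpolation is done in log EW, where the curve is smoother.
    """
    order = np.argsort(ew_grid)
    return np.interp(np.log10(ew_observed),
                     np.log10(np.asarray(ew_grid)[order]),
                     np.asarray(a_grid)[order])

# placeholder curve of growth for one Cu i line (values are illustrative only)
a_grid = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ew_grid = [0.4, 1.2, 3.0, 5.5, 7.6, 9.0]
print(abundance_from_cog(a_grid, ew_grid, 4.2))   # ~1.7-1.8 dex
```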
These models are our models of choice to de\ufb01ne the \u201c3D correction\u201d as A(3D) A(1DLHD), where A is the abundance of any given element. More details on the LHD models may be found in Ca\ufb00au & Ludwig (2007) and Ca\ufb00au et al. (2010). In any given Linfor3D run we made computations also for the \u27e83D\u27e9model and for a 1DLHD model, with the same Te\ufb00, log g, and metallicity as the 3D model. The computation of a 3D model is still very time consuming, even on modern computers (several months), it would be impractical to compute a speci\ufb01c 3D model for any set of our 3 http://www.aip.de/\u223cmst/Linfor3D/linfor 3D manual.pdf Table 2. Copper abundances for the programme stars. Star A(Cu) \u03c3 A(Cu) \u03c3 1D 3D Cl* NGC 6752 GVS 4428 3.23 0.08 2.56 0.16 Cl* NGC 6752 GVS 200613 3.01 0.05 2.23 0.07 Cl* NGC 6397 ALA 1406 1.33 0.03 0.74 0.05 Cl* NGC 6397 ALA 228 1.30 0.03 0.73 0.05 Cl* NGC 6397 ALA 2111 1.19 0.02 0.60 0.02 HD 218502 1.52 0.09 0.95 0.04 Table 4. Mean copper abundances for the two clusters. Star A(Cu) \u03c3 A(Cu) \u03c3 1D 3D NGC 6752 dwarfs 3.04 0.07 2.28 0.12 NGC 6752 giants 2.03 0.05 1.98 0.05 NGC 6397 dwarfs 1.25 0.05 0.63 0.04 NGC 6397 giants 1.40 0.17 1.30 0.17 atmospheric parameters. Our strategy is therefore the following: we perform an abundance analysis with ATLAS model atmospheres computed for the desired set of atmospheric parameters; we use a grid of 3D models with atmospheric parameters that bracket the desired ones and compute the relevant 3D corrections by linear or bi-linear interpolation in the grid, as appropriate; the 3D abundance is obtained by applying the interpolated 3D correction to the 1D abundance. The line formation computations were performed using SYNTHE for the ATLAS models and Linfor3D for all other models. As a consistency check we used Linfor3D with an ATLAS model as input and veri\ufb01ed that the line pro\ufb01les and EWs are consistent with the results derived from SYNTHE+ATLAS. The di\ufb00erence between the two line formation codes amounts to a few hundredths of dex in terms of abundance, a quantity that is irrelevant with respect to the size of the 3D corrections under consideration. For three out of the six TO stars under study, the Cu i lines are strong (EW> 4.0 pm), therefore they are surely in the saturation regime. Their 3D correction depends on the adopted microturbulence in the adopted reference 1D atmosphere. To take this into account, di\ufb00erent 1DLHD curves of growth were computed with microturbulent velocities of 0.5, 1.0 and 1.5 km s\u22121, to allow us interpolation to any desired value of \u03be. For weaker lines the microturbulent velocity does not play a fundamental rule, so that the 3D correction is mostly insensitive on the choice of this parameter. We \ufb01nd that the 3D curve of growth is not a simple translation of a 1D curve of growth, but has a distinct shape. An example to illustrate the e\ufb00ect is shown in Fig. 3. An immediate consequence is that for all lines we considered, the 3D correction depends on the EW, even for the weaker lines. We note that the dependence of the 3D correction on the microturbulence implies that the abundance obtained by applying the correction to an abundance derived from a 1D model depends on the adopted microturbulence. One of the reasons to prefer the use of 3D models is to avoid the use of this parameter. 
The correct way to treat this is, not to use the 3D correction, but to derive the abundance by an interpolation in a set of 3D curves of growth, or to use suitable \ufb01tting functions, as done for lithium by Sbordone et al. (2010). But this requires the use of a larger set of 3D models, which bracket the e\ufb00ective temperatures of the studied stars. For the purpose of the present exploratory investigation we believe 1006 \fBonifacio et al.: Copper resonance lines Table 3. CO5BOLD models employed in the study. Model Te\ufb00 log g [M/H] Nt time tc Resolution Box Size K s s Mm3 d3t50g25mm10n01 4990 2.5 \u22121.0 20 475990 1411.9 160 \u00d7 160 \u00d7 200 573.2 \u00d7 573.2 \u00d7 245.4 d3t50g25mm20n01 5020 2.5 \u22122.0 20 403990 1388.3 160 \u00d7 160 \u00d7 200 584.0 \u00d7 584.0 \u00d7 245.4 d3t63g40mm10n01 6260 4.0 \u22121.0 20 43800 12.2 140 \u00d7 140 \u00d7 150 26.0 \u00d7 26.0 \u00d7 12.8 d3t63g40mm20n01 6280 4.0 \u22122.0 16 27600 49.0 140 \u00d7 140 \u00d7 150 26.1 \u00d7 26.1 \u00d7 12.8 d3t63g45mm10n01 6240 4.5 \u22121.0 20 24960 16.0 140 \u00d7 140 \u00d7 150 7.0 \u00d7 7.0 \u00d7 4.0 d3t63g45mm20n01 6320 4.5 \u22122.0 19 9120 15.9 140 \u00d7 140 \u00d7 150 7.0 \u00d7 7.0 \u00d7 4.0 the approach of using 3D corrections is adequate, especially because the conclusion of the study is that 3D-LTE abundances are unreliable. Finally we point out that it may be that current 3D models do not correctly capture turbulence at small scales (Ste\ufb00en et al. 2009). If this were indeed the case all abundances derived from saturated lines are doubtful, whether they are derived applying a 3D correction or directly derived from the 3D curves of growth. We therefore computed the 3D correction for each of the measured EWs for each of the relevant 3D models. In general the 3D correction will also depend on the treatment of convection in the 1D reference model, hence on the adopted \u03b1MLT. The lines under consideration do not form in the deepest layers, which are the most a\ufb00ected by the choice of \u03b1MLT, thus they are insensitive to it. All 1DLHD models employed have \u03b1MLT= 1.0. The computed A(3D) \u2013 A(1DLHD) corrections, as well as the A(3D) \u2013 A(\u27e83D\u27e9) corrections are given on-line in Table 8. The 3D abundances provided in Table 2 are obtained by applying to the 1D abundances the 3D corrections in Table 6, which were obtained by interpolating the corrections in Table 8. The six stars under study have very similar e\ufb00ective temperatures. All are within roughly 100 K of the e\ufb00ective temperatures of the 3D models listed in Table 3 (Te\ufb00\u223c6300 K). Therefore it is not necessary to include more 3D models and perform an interpolation in Te\ufb00. The metallicities and gravities of the models in Table 3 bracket the metallicities and gravities in Table 1. We used a bi-linear interpolation in metallicity and gravity for the two stars of NGC 6752 and for HD 218502. The three stars of NGC 6397, though have a metallicity of almost \u20132.0, therefore a linear interpolation in surface gravity was su\ufb03cient. The computation of 3D models for typical giant stars is much more time consuming than for F-type and cooler dwarfs. The radiative relaxation time in the surface layers of warm giants becomes signi\ufb01cantly shorter than the dynamical time scale, which makes it computationally expensive to properly capture the time evolution of the system. At present we have only two fully relaxed models of metal-poor giants. 
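To make the interpolation scheme described above concrete, the following sketch bi-linearly interpolates A(3D) - A(1DLHD) corrections over a small (log g, metallicity) grid bracketing a star and applies the result to a 1D abundance. The grid values are invented placeholders standing in for the entries of the on-line Table 8, and scipy is used only for convenience; this is a sketch of the procedure, not the code used for the paper.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# correction grid: rows are log g = 4.0, 4.5; columns are [M/H] = -2.0, -1.0
# (values are placeholders, not the corrections actually tabulated in Table 8)
logg_nodes = np.array([4.0, 4.5])
feh_nodes = np.array([-2.0, -1.0])
corrections = np.array([[-0.75, -0.60],
                        [-0.60, -0.45]])

interp = RegularGridInterpolator((logg_nodes, feh_nodes), corrections)

def abundance_3d(a_1d, logg, feh):
    """Apply a bi-linearly interpolated 3D correction to a 1D LTE abundance."""
    return a_1d + float(interp([[logg, feh]])[0])

# e.g. a TO star with log g = 4.28 and [Fe/H] = -1.52 analysed in 1D
print(abundance_3d(3.23, 4.28, -1.52))
```

For the saturated lines the correction also depends on the measured EW and on the microturbulence adopted in the 1D reference model, so in practice an additional interpolation along those axes is required, as discussed above.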
The parameters are given in Table 3 and the metallicities are \u20131.0 and \u20132.0. The surface gravity is larger than that of the majority of the giant stars analysed in either cluster. Only the faintest stars analysed by Yong et al. (2005) have parameters Te\ufb00and log g close to those of our giant models. In spite of this we believe that we can use the 3D corrections derived from this model, as a \ufb01rst order approximation as representative of the corrections in giant stars of both clusters. This is possible because the 3D corrections are rather small, especially when compared to those of dwarf stars. This is partly because giant 3D models do not show a pronounced over-cooling, compared to 1D models, as dwarfs display. This conclusion is also based on the examination of several snapshots 0.0 0.4 0.8 1.2 dEW/dlog \u03c4 (pm) -6 -4 -2 0 log\u03c4\u03bb 0.0 2.0 4.0 Fig. 4. Contribution functions of the EW at disc-centre, de\ufb01ned in a way that their integral over log \u03c4\u03bb gives the EW (Magain 1986), for the Cu i 324.7 nm line and the model d3t63g40mm20n01 for two di\ufb00erent values of Cu abundance. In the upper panel A(Cu)=0.2, in the lower panel A(Cu)=1.7. The solid lines refer to the 3D model, the dashed lines to the corresponding 1DLHD model. -6 -4 -2 0 log \u03c4\u03bb 0.0 0.2 0.4 0.6 0.8 1.0 dEW/dlog \u03c4 (pm) Fig. 5. Contribution functions for the Cu i 510.5nm line and the model d3t50g25mm20n01.The solid line refers to the 3D model, the dashed line to the corresponding 1DLHD model. of not fully relaxed giants of di\ufb00erent atmospheric parameters, which we are in the process of computing. The 3D correction for the giant models is \u20130.1 dex for both the examined Cu i lines of Mult. 2 for the model of metallicity \u20132.0 and 0.0 dex (actually +0.01) for the model of metallicity \u20131.0. We apply a correction of \u20130.1 to the abundances of giant stars in NGC 6397 and \u20130.05 to those of NGC 6752. 1007 \fBonifacio et al.: Copper resonance lines 4. Results 4.1. HD 218502 The analysis of the reference star HD 218502 shows that the abundances derived from the two Cu i resonance lines (Table 6) agree well, both with 1D and 3D models. This gives us con\ufb01dence in the reliability of the atomic data used. It also suggests that with good quality data, the EWs of both lines can be satisfactorily measured in spite of the complexity of the spectral region, especially for the 324.7 nm line. Our 1D abundance (A(Cu)= 1.52 \u00b1 0.09) agrees, within errors with what reported by Bihain et al. (2004, A(Cu)= 1.70\u00b10.17). We note a small difference in the e\ufb00ective temperature adopted in the two analyses (about 100 K) and a di\ufb00erence in the data: our UVES spectra are of considerably higher quality than the CASPEC spectra used by Bihain et al. (2004), who measured only the 327.3nm line. This agreement is expected because the atomic data are the same in the two analyses and the 1D model atmospheres used are similar, the main di\ufb00erence being the overshooting. This check suggests that we may reasonably assume that our Cu abundances should be consistent with those of Bihain et al. (2004), which are based on the UV resonance lines. An inspection of Fig. 6 of Bihain et al. (2004) suggests that these measurements substantially agree with those of Mishenina et al. (2002), which are essentially based on the measurements of the lines of Mult. 2. 
Yet one should take into account the large error bars that derive from the relatively poor S/N ratio that is achievable in the UV range. The 3D correction for the lines of Mult.1 is large and the 3D abundance in this star is well below all the measurements in giant stars of similar metallicity. An application of 3D corrections to all measurements of Bihain et al. (2004) would probably break the agreement with the measurements of Mishenina et al. (2002). We have inspected the available red spectra of HD 218502, to see if any of the lines of Mult. 2 could be detected, but this was not the case. 4.2. TO stars in NGC 6752 and NGC 6397 For the cluster stars there is also a good consistency between the abundances derived from the two resonance lines for any given line, in spite of the much lower S/N ratios in the cluster stars. This suggests that there is no major inconsistency in the EW measurements. The weighted mean4 of the abundances of the clusters, reported in Table 4, displays an error in the mean that is reasonably small. We believe that the mean abundances for the two clusters obtained from the dwarf stars are indeed representative of the Cu i abundance derived from the lines of Mult. 1. Considering that each spectrum of a TO star in these clusters amounts to about 10 hours of integration with UVES, it is unlikely that in the near future better data or data for a larger number of stars will be available, although it would clearly be desirable. 4.3. Star Cl* NGC 6752 YGN 30 In this star, like in the other giant stars observed by Yong et al. (2005), both the lines of Mult. 1 and of Mult. 2 are measurable, which provides for a consistency check. We did not measure the 4 The error on the weighted mean has been taken to be the largest between r\u0012P 1 \u03c32 i \u0013\u22121 and r\u0012P 1 \u03c32 i \u0013\u22121 \u00d7 1 (n\u22121) P (xi\u2212)2 \u03c32 i , where xi are the data points \u03c3i are the associated errors, \u27e8x\u27e9is the mean value and n is the number of data point (see e.g., Agekan 1972, pages 144\u2013150). line at 324.7 nm since the region is extremely crowded in these cool giants, but the line at 327.3nm is clean and unblended. For Mult. 2 we only measured the 510.5nm line, since the other line is not present in the spectrum, because it falls in the gap between the two CCDs. The two lines (324.7nm and 510.5nm ) provide inconsistent results, the line of Mult. 1 provides an abundance that is 0.54 dex higher than that of the line of Mult. 2 (see online Table 6). The abundance we derive from the line of Mult. 2 substantiallly agrees with the measure of Yong et al. (2005), our abundance is 0.13 dex smaller. Of this di\ufb00erence 0.05 dex are because we use ATLAS models without overshooting, while Yong et al. (2005) use models with overshooting, the remaining di\ufb00erence should be attributed to a di\ufb00erence in the measured EW and possibly to the di\ufb00erent spectrum synthesis codes used (MOOG by Yong et al. 2005 and SYNTHE by us). The 3D corrections for both lines of Mult. 1 and Mult. 2 are small and agree to within a few hundredths of dex. The abundance from the two lines cannot be brought into agreement by using 3D models. 4.4. Giant stars in NGC 6397 The error in the mean of the two giants is 0.18 dex, which is essentially identical to the estimate of Gratton (1982) of 0.2 dex on the copper abundance. Even making use of 1D models the abundance of the giant stars is considerably higher than in the dwarf stars. 5. 
Effects of atmospheric parameters We intend to quantify the e\ufb00ect of changing atmospheric parameters on the derived abundances. The abundances derived from the Cu i resonance lines are fairly sensitive, for a neutral species, to the adopted surface gravity. For dwarf stars a change of \u00b10.25 dex in log g induces a change of \u22130.1 dex in abundance. For the giant stars they are only slightly less sensitive, \u22130.06 dex. On the other hand for giant stars the dependence on gravity of the abundances derived from the lines of Mult. 2 is very weak \u22130.01 dex. An inspection of on-line Table 8 allows us to estimate the e\ufb00ect of surface gravity on the 3D corrections for the lines of Mult. 1. By increasing the gravity by 0.5 dex, the 3D correction increases by 0.1 to 0.2 dex, depending on how saturated the line is. Since the 3D correction is negative, this means that it decreases in absolute value. The opposite trends with surface gravity on the 1D abundance and 3D correction imply that the two e\ufb00ects tend to cancel and the overall sensitivity of the 3D abundance on surface gravity is small. The dependence of abundances on e\ufb00ective temperatures for the lines of Mult. 1 is similar for dwarfs and giants, and is about \u00b10.2 dex for a change of \u00b1100 K in e\ufb00ective temperature. To evaluate the dependence of the 3D corrections on the e\ufb00ective temperature we used four models, extracted from the CIFIST grid (Ludwig et al. 2009). All four models have e\ufb00ective temperature around 5900 K, their metallicities are \u20131.0 and \u20132.0 and their log g 4.0 and 4.5. The result is that for a decrease of 300 K the 3D correction increases by 0.2 dex. Again the variation of the 3D correction goes in the opposite direction with respect to the variation in the 1D abundance. Combining the results we conclude that a decrease of 300 K in e\ufb00ective temperature results in a decrease by 0.4 dex in copper abundance (3D). For the giants stars we also estimated the variation of the abundances derived from the lines of Mult. 2, which amounts to about \u00b10.1 dex for a variation of \u00b1100 K in e\ufb00ective temperature. 1008 \fBonifacio et al.: Copper resonance lines 2 4 6 8 T [10 3 K] <3D> CO 5BOLD 1D LHD Teff = 6300 log g = 4.0 [M/H] = -2 6 bin model -6 -4 -2 0 2 4 6 8 T [10 3 K] <3D> CO 5BOLD 1D LHD Teff = 6300 log g = 4.0 [M/H] = -2 12 bin model Fig. 6. Comparison between the temperature structures of two models with Te\ufb00= 6300, log g= 4.0 and metallicity \u20132.0, computed with 6 and 12 opacity di\ufb00erence. The di\ufb00erence is not as large as it is at lower metallicities (see Fig. 1 of Behara et al. 2009). 6. Hydrodynamical models and spectrum synthesis. Clearly the large 3D corrections derived for the Cu i resonance lines are driven by the fact that in the outer layers the hydrodynamical models are considerably cooler than the corresponding 1D models, the so-called \u201covercooling\u201d. This shifts the ionisation equilibrium towards neutral copper, and as a consequence the line contribution function shows a strong peak in these layers, contrary to the 1D model (se Fig. 4). From a physical point of view this is expected, simply because the hydrodynamical model is observed to transport \ufb02ux through convection even in layers where the corresponding 1D model is formally stable against convection (overshooting). One should however ask to which extent the computed overcooling depends on the assumptions made. 
Bonifacio (2010) pointed out the di\ufb00erence in the overcooling for metal-poor giants computed with CO5BOLD and that computed by Collet et al. (2007). Behara et al. (2009) pointed out that for extremely metal-poor dwarfs the overcooling is considerably less in CO5BOLD models computed with 12 opacity bins than in models computed with 6 opacity bins, like the ones used here. For the time being we have relatively few CO5BOLD models computed with 12 opacity bins, among which we have one with parameters identical to one used in the present investigation: Te\ufb00= 6300, log g= 4.0 and metallicity = \u20132.0. In Fig. 6 we show Fig. 7. Comparison between the curves of growth for the 327.3 nm line computed for the two models shown in Fig. 6. Solid symbols correspond to the 6 bin model and open symbols to the 12 bin model. the mean temperature structures of the two models. Obviously the di\ufb00erence is smaller than what is displayed by the models 1 dex more metal-poor shown by Behara et al. (2009). The qualitative conclusion is con\ufb01rmed by comparison of the curves of growth. In Fig. 7 the curve of growth for the 327.3 nm line used in the present investigation is compared to the one computed from the corresponding 12 bin model. As can be appreciated from the plot, for a given equivalent width the 12 bin model will yield a Cu abundance that is higher by approximately 0.1 dex, thus correspondingly decreasing the 3D correction. Another matter of concern is that the current version of Linfor3D treats scattering as true absorption. That is to say, although scattering processes such as Rayleigh scattering o\ufb00hydrogen atoms are taken into account as opacity sources, in the solution of the transfer equation the source function is set equal to the local Planck function, without a term depending on the mean radiation intensity (S \u03bd = B\u03bd). To which extent can this approximation a\ufb00ect our computations, especially in the near ultraviolet, where scattering processes are a non-negligible source of opacity ? We assessed this by using 1D models and 1D spectrum synthesis. We used a slightly modi\ufb01ed version of the SPECTRV code in the SYNTHE suite so that scattering is treated as true absorption when the card SCATTERING OFF is set in the input model (see e.g. Castelli 1988). We computed line pro\ufb01les both with SCATTERING ON and SCATTERING OFF for the model at Te\ufb00= 6296 K log g= 4.0 and metallicity \u20132.0 (rlevant to HD 218502) and for the model Te\ufb00= 4943 log g= 2.42 and metallicity \u20131.5 (relevant to Cl* NGC 6752 YGN 30). It turns out that in both cases the di\ufb00erence is irrelevant to our analysis (0.6% in the continuum and 0.03% in the residual intensity for the dwarf model and 14% in the continuum and 2% in the residual intensity). The e\ufb00ect of treating scattering as true absorption is very similar on the continuum and in the lines, thus implying a small e\ufb00ect on the residual intensity and equivalent width. Recently Hayek et al. (2010) have introduced a proper treatment of scattering in their magneto-hydrodynamical simulation 1009 \fBonifacio et al.: Copper resonance lines code, BIFROST. For the Sun and solar-type stars they do not \ufb01nd a signi\ufb01cant impact of continuum scattering on the temperature structure. We thus believe that although the impact of a proper treatment of scattering needs to be investigated in metal-poor dwarfs, it seems unlikely that the results presented here will be seriously challenged by this. 7. 
Discussion If we consider the abundances in Table 4 at face value we are led to the inescapable conclusion that the Cu abundances in dwarfs and giants do not agree. Even though the abundances are almost compatible, within errors, at least in the 1D case, this systematic di\ufb00erence should not be overlooked. The behaviour is different between the two clusters: in NGC 6752 the dwarfs provide a higher abundance than the giants, while in the case of NGC 6397, they provide a lower value than the giants; this both using 1D and 3D models. For NGC 6752 the di\ufb00erence in 1D abundances is of one order of magnitude (1 dex), but this is reduced to only 0.3 dex if we look at the 3D abundances, which are almost compatible with the errors. The situation is reversed in NGC 6397, in 1D the abundances of dwarfs is only 0.15 dex lower than that in giants, while in 3D, the diferece is 0.9 dex. This behaviour may be understood in terms of the di\ufb00erent line formation properties in dwarfs and giants and how they change with di\ufb00erent Cu abundance. In Fig. 4 we show, as an example, the contribution functions of the EW at disc-centre for one of our models for a dwarf star, for two di\ufb00erent Cu abundances. In the top panel A(Cu)=0.2 and the line is on the linear part of the curve of growth, in the bottom panel, A(Cu)=1.7 and the line is saturated. In both cases the 3D contribution function is very di\ufb00erent from the 1D one and is peaked in the outer layers of the atmosphere. The formation of the lines of Mult. 2 in the atmospheres of giants is instead much less a\ufb00ected by 3D e\ufb00ects, as depicted in Fig. 5. The contribution functions of the lines of Mult. 1 in the giant model are morphologically similar to what is shown in Fig. 5, con\ufb01rming the weak overcooling present in this model. While the above arguments explain the behaviour of the 3D corrections, they have no bearing in the abundance di\ufb00erence that we \ufb01nd between dwarfs and giants. In these situations one has always to consider two possible alternatives: i) the di\ufb00erence is true and has an astrophysical origin; ii) the di\ufb00erence arises from shortcomings in the analysis. In our opinion hypothesis i) must be discarded. It seems extremely far fetched to devise a physical mechanism by which Cu should be overabundant in dwarf stars, like in NGC 6752, while the reverse is true in NGC 6397. Where giants have a higher Cu abundance, one could imagine to explain such a scenario either by invoking di\ufb00usion in TO stars, or Cu production in giant stars, or perhaps even a combination of both. However, an examination of Table 4 shows that the di\ufb00erences that need explanation are far too large to be created by di\ufb00usion, and the Cu production would also have to be highly e\ufb03cient. In addition note that the Cu abundances in the large sample of Yong et al. (2005) are extremely uniform, which speaks against Cu production. Thus we are left with the conclusion that the abundance determinations in either dwarfs or giants, or both, are wrong. Let us start by examining the 3D abundances in dwarf stars. Contribution functions like those shown in Fig. 4 must give rise to concerns about the LTE approximation used in our computations. The outer and less dense layers of the atmosphere, which contribute mostly to the line EW, are those in which the photon mean free path is longest and deviations from LTE may be expected. 
The situation is even more extreme in a 3D atmosphere where photons from a hot up-draft may transfer horizontally and overionise a neighbouring cool down-draft. A morphologically similar situation is indeed observed for the Li i doublet in metalpoor stars (Asplund et al. 2003; Cayrel et al. 2007). In LTE the contribution function displays a double peak, with a substantial contribution from outer cool layers, in NLTE this peak is entirely suppressed by overionisation. The nearly exact cancellation between 3D and NLTE correction that takes place for the Li i doublet, and results in 3D-NLTE abundances in very close agreement with 1D-LTE abundances, should not be taken as a general rule. Nevertheless we believe that the Cu i lines of Mult. 1 cannot be described by 3D-LTE computations, but NLTE e\ufb00ects should be properly accounted for. This leads to the question of how reliable the LTE approximation is for the 1D computations. That even in 1D the abundances in dwarfs and giants di\ufb00er by a factor of the order of 2, surely prompts to see if for NLTE computations the two sets of abundances may be brought into agreement. Bihain et al. (2004) tried to \ufb01t the Cu i lines of Mult. 1 in the solar spectrum, but were unable to reproduce the core of the lines. This was attributed mainly to the presence of a chromosphere that in\ufb02uences the cores of strong lines. Deviations from LTE, however, could be another, possibly concomitant, cause for the failure to reproduce the line core in LTE. For Pop II dwarf stars chromospheric effects should not be strong, in view of their old age, even if chromospheres were present. Let us \ufb01nally consider the Cu abundances in giant stars. Our limited computations suggest that they should not su\ufb00er large 3D e\ufb00ects. The test we conducted for the giant star in NGC 6752 suggests that the LTE synthesis does not allow us to reproduce satisfactorily the lines of Mult. 1 and of Mult. 2 with the same Cu abundance. Indeed the discrepancy is quite large (0.5 dex); signi\ufb01cant deviations from LTE for either or both sets of lines could be responsible for this. Which should be further investigated in order to produce reliable Cu abundances. The evolution of copper with metallicity, essentially based on the measurements of Mult. 2 in giant stars, shows a rather sharp drop in [Cu/Fe] around [Fe/H]=\u20131.5. This means that it takes place around A(Cu)=2.0. In the curve of growth for the 510.5 nm line of Mult. 2 for our giant model, this is roughly the abundance for which the line begins to enter in the saturation regime. If the Cu i lines of Mult. 2 su\ufb00er deviations from LTE it is likely that these depend on the line strength and may show a rather sharp change just when the line enters a saturation regime. This behaviour is observed, for instance, for the sodium D lines (see Fig. 6 of Andrievsky et al. 2007). These considerations render NLTE computations for Cu very desirable. Unfortunately, to our knowledge, up to now no such computations have been published, nor does a Cu model atom exist. Among the possible causes for the discrepancy in abundances between giants and dwarfs one may also consider errors in the atmospheric parameters. One may conclude that this cannot be the case by noticing the discrepancy between the Cu abundance derived from Mult. 1 and Mult. 2 in star Cl* NGC 6752 YGN 30, for which lines of both multiplets are measured. 
Given that the response of both multiplets to a change in e\ufb00ective temperature is similar, the con\ufb02icting results cannot be resolved by changing the e\ufb00ective temperature. Of course the discrepancy between the two multiplets can be resolved in 1D by invoking a higher microtrubulence, but it would be necessary to raise it by 1 km s\u22121. This increase would then cause a strong trend between iron abundances and equivalent widths. Furthermore this would not allow us to solve the discrepancy in the 3D analysis. Indeed while the 3D correction for Mult. 2 would not change sig1010 \fBonifacio et al.: Copper resonance lines ni\ufb01cantly with this increase in microturbulence, those of Mult. 1 would increase by about 0.4 dex, thus breaking the agreement between the two multiplets forced in 1D. Formally one can certainly \ufb01nd a value of the microturbulence that forces the 1D abundance plus 3D correction of the two mutliplets to be equal, while leaving a discrepancy in the 1D abundances. This however would again cause an abundance spread among lines of other elements (e.g. iron) and can hardly be invoked as a solution of the problem. Although at the moment we do not have enough 3D models for giant stars to perform a full 3D analysis, as done by Sbordone et al. (2010) for lithium, we believe that our results indicate that this analysis will provide discrepant abundances from the two multiplets. Let us further consider if changes in the atmospheric parameters of giant or dwarf stars in either cluster may allow us to reconcile their copper abundances. As discussed in Sect. 5 a decrease in e\ufb00ective temperature of 300 K in dwarf stars implies a decrease by about 0.4 dex. Therefore for the dwarf stars in NGC 6752 a decrease in e\ufb00ective temperature by about 225 K while keeping the temperature of giants constant, would reconcile the copper abundances of the to sets of stars. While not implausible, this change would certainly cause a mismatch in the abundances of other elements between giants and dwarfs, most notably iron, which would become less abundant in dwarfs than in giants. While one could argue that atmospheric phenomena such as di\ufb00usion may alter abundances of dwarf stars, it seems then contrived to invoke a di\ufb00erent behaviour between copper and iron. But let us now turn to the other cluster, NGC 6397. Here the situation is reversed, the dwarfs display a lower abundance than giants. However here one would need to invoke an increase in e\ufb00ective temperatures of the dwarf stars by over 500 K, placing them at Te\ufb00around 6700 K and the cluster turn-o\ufb00at about 6900 K. While these exceedingly high temperatures may be appealing (one would immediately solve the cosmological lithium problem!) they appear impossible to reconcile with the colours of the cluster and theoretical isochrones. Although the precise value of the atmospheric parameters assigned to the stars certainly plays a role in the di\ufb00erence in copper abundances between dwarfs and giants in the two clusters, we may dismiss the hypothesis that it may be cancelled by a suitable choice of parameters. 8." + }, + { + "url": "http://arxiv.org/abs/0704.2342v1", + "title": "Variations in the lithium abundances of turn off stars in the globular cluster 47 Tuc", + "abstract": "aims: Our aim is to determine Li abundances in TO stars of the Globular\nCluster 47 Tuc and test theories about Li variations among TO stars. 
method: We\nmake use of high resolution (R~ 43000), high signal-to-noise ratio (S/N=50--70)\nspectra of 4 turn off (TO) stars obtained with the UVES spectrograph at the\n8.2m VLT Kueyen telescope. results: The four stars observed, span the range\n1.6<~A(Li)} <~ 2.14, providing a mean A(Li) = 1.84 with a standard deviation of\n0.25 dex. When coupled with data of other two TO stars of the cluster,\navailable in the literature, the full range in Li abundances observed in this\ncluster is 1.6<~A(Li)<~ 2.3. The variation in A(Li) is at least 0.6 dex (0.7\ndex considering also the data available in the literature) and the scatter is\nsix times larger than what expected from the observational error. We claim that\nthese variations are real. A(Li) seems to be anti-correlated with A(Na) exactly\nas observed in NGC 6752. No systematic error in our analysis could produce such\nan anti-correlation. conclusions: Na production through p captures on 22Ne at\ntemperatures in excess of 3x10^7 K and the contemporary Li destruction could\nresult in this anti-correlation. However such nuclear processing cannot have\ntaken place in the stars themselves, which do not reach such high temperatures,\neven at their centre. This points towards the processing in a previous\ngeneration of stars. The low N/O ratios in the observed stars and the apparent\nlack of correlation between N an Li abundances, place a strong constraint on\nthe properties of this previous generation. Our results indicate a different\nbehaviour among the Globular Clusters so far studied as far as the abundance\npatterns are concerned.", + "authors": "Piercarlo Bonifacio, Luca Pasquini, Paolo Molaro, Eugenio Carretta, Patrick Fran\u00e7ois, Raffaele G. Gratton, Gael James, Luca Sbordone, Fran\u00e7ois Spite, Manuela Zoccali", + "published": "2007-04-18", + "updated": "2007-04-18", + "primary_cat": "astro-ph", + "cats": [ + "astro-ph" + ], + "main_content": "Introduction The Globular Cluster (GC) 47 Tuc is a good example of the metal-rich end population of these very old objects (age in the range 11 to 14 Gyr, Gratton et al. 2003). In view of its brightness it is one of the best studied GCs and with the advent of the UVES spectrograph at the ESO-VLT it has become possible to obtain abundances of individual stars on the Main Sequence, Send o\ufb00print requests to: P. Bonifacio \u22c6Based on observations made with the ESO VLT-Kueyen telescope at the Paranal Observatory, Chile, in the course of the ESO-Large program 165.L-0263 Correspondence to: P. Bonifacio with an accuracy previously possible only for Halo \ufb01eld stars which are several magnitudes brighter. It is one of the main targets of the ESO Large Programme 165.L-0263 (P.I. R.G. Gratton). The chemical composition of turn o\ufb00(TO) and subgiant (SG) stars from our UVES data is given in another paper of this series (Carretta et al. 2004), which provided a metallicity [Fe/H]=\u20130.64 for the TO stars of this cluster. It has been known for almost thirty years that 47 Tuc exhibits a bimodal distribution of CN band strengths, suggesting a bimodal distribution of N abundances, among giant stars (Norris & Freeman 1979) and also among TO stars (Briley et al. 1994). Thus abundance inhomogeneities among stars in this cluster are expected. \f2 Bonifacio et al.: Lithium in 47 Tuc In this paper we examine the Li abundances in the TO stars. 
The only previous investigation of Li in this cluster was performed by Pasquini & Molaro (1997) who could detect the Li line in two TO stars, using spectra obtained with EMMI on the 3.5m ESO NTT telescope. In the present investigation we shall make use of the equivalent widths measured by Pasquini & Molaro (1997). Old metal-poor warm dwarfs in the Galactic Halo show a rather uniform lithium abundance, whatever their metallicity or effective temperature. This was discovered by Spite & Spite (1982) and is usually called the Spite Plateau. This behaviour of lithium is unique among chemical elements, all of which show a decreasing abundance with decreasing metallicity; lithium is the only element to display a plateau. The most obvious interpretation is still the one put forward by Spite & Spite (1982) that the lithium observed in the Spite Plateau is simply the "primordial" lithium, that is the lithium that has been produced during the big bang, together with the other light nuclei, D, 3He and 4He. The abundance of the nuclei produced in this way depends on the baryon to photon ratio (η = n_b/n_γ) which cannot be deduced from first principles, but has to be somehow measured. The WMAP satellite has provided this ratio with an accuracy of the order of 4% (Spergel et al. 2003, 2006), the precise value being η = (6.11 ± 0.22) x 10^-10. When inserted in standard big bang nucleosynthesis computations (SBBN) this value implies A(Li) = 2.64. Current estimates of the level of the Spite Plateau range from 2.1 (Bonifacio et al. 2007) to 2.3 (Bonifacio et al. 2002; Meléndez & Ramírez 2004). There is thus tension between the observed lithium abundances in stars and the predictions of SBBN, when the baryonic density derived from WMAP is adopted. The most obvious ways to reconcile these results are either to look for new physics at the time of nucleosynthesis or to find mechanism(s) which have depleted Li uniformly from the primordial value to what is currently observed in Halo stars. In order to test the theories which predict Li depletion, GCs are, in principle, an ideal target: the stars have the same age and chemical composition, at variance with what happens with field stars. Effects of Li depletion could be obscured by other concurrent metallicity or age effects. The Li abundance in GCs is therefore of great importance; we may try to detect some of the features which are predicted by models, such as a mild scatter in Li abundances, above what is expected from observational errors, or the existence of "outliers", i.e. heavily depleted stars. Our observations of the GC NGC 6397 (Bonifacio et al. 2002) show that all the observed stars in this metal poor cluster share the same abundance and there are no "outliers". There is very little room for intrinsic scatter; this does not rule out any of the depletion models, but does place a very strong constraint to be fulfilled. Therefore the currently available observations seem to argue against any Li depletion in NGC 6397. Our result has recently been challenged by Korn et al. (2006) who have claimed to have found a difference in lithium content between the TO stars and subgiant stars at an effective temperature around 5800 K. According to their interpretation these observations are in agreement with the diffusive models of Richard et al. (2005).
It should however be kept in mind that the result of Korn et al. (2006) depends on their adopted temperature scale and that an increase of only 100 K of the temperatures of the TO stars, as suggested by the cluster photometry, would erase this di\ufb00erence. Note that such an increase in the Te\ufb00would at the same time erase the claimed \u201cdi\ufb00usion signatures\u201d also in Fe, Ca and Ti. The question is therefore still not settled. At variance, the GC NGC 6752, which is about a factor of 4 more metal rich than NGC 6397, displays a strong variation (up to 0.4 dex ) in Li abundances among TO stars (Pasquini et al. 2005). These variations, however, do not appear to be random, but are anti-correlated with the variations of sodium and nitrogen, and correlated with the variations of oxygen, in the same stars. Such variations cannot be produced by di\ufb00usion mechanisms, since the e\ufb00ect of di\ufb00usion would be similar for lithium and sodium. Neither can they arise from mixing occurring in the stars themselves, since the base of the convection zone in such stars attains a temperature of 1.5 MK, which is very far from the region of lithium burning (Piau 2005, private communication). Pasquini et al. (2005) suggest that the most likely source of such anomalies are intermediate mass asymptotic giant branch (IM-AGB) stars which have polluted either the material out of which the TO stars were formed, or their atmospheres. The nucleosynthetic signatures of IM-AGB stars are in qualitative agreement with the observed patterns, although quantitatively, none of the current models is capable of fully explaining the observations. We refer the reader to the papers of Fenner et al. (2004); Ventura & D\u2019Antona (2005); D\u2019Antona et al. (2006) and Ventura & D\u2019Antona (2006) where some of the problems of IM-AGB models in reproducing the observed abundance variations in GCs are discussed, as well as possible solutions. From a di\ufb00erent perspective Prantzos & Charbonnel (2006) examined the constraints on the cluster\u2019s Initial Mass Function in the case the polluters are IM-AGB stars or the winds of massive stars. Their main conclusion is that if IM-AGB stars were the main polluters the current mass of the clusters should be dominated by stellar remnants. Since this does not appear to be the case, they consider the winds of massive stars as a more attractive hypothesis. Following up on this idea, Decressin et al. (2007) investigated the possibility that winds of rotating massive stars are the polluters causing the observed abundance variations. We shall later discuss these results in the light of our \ufb01ndings. In this paper we explore the Li content of 47 Tuc, at the metal rich end of the metallicity range span by halo GCs. In spite of its relatively high metallicity 47 Tuc is very old (11.2 \fBonifacio et al.: Lithium in 47 Tuc 3 Table 1. Log of the observations star # \u03b1 \u03b4 date UT texp seeing J2000 d/m/y h:m:s s arcsec 952 00:21:39.14 -72:02:53.73 28/10/2001 00:37:18 6000 1. \u2032\u20326 952 28/10/2001 02:18:28 6000 1. \u2032\u20323 952 28/10/2001 06:14:43 2700 0. \u2032\u20328 975 00:20:52.72 -71:58:04.16 07/09/2000 06:28:57 5400 1. \u2032\u20327 975 07/09/2000 08:00:57 5400 1. \u2032\u20327 975 08/09/2000 05:42:13 3600 1. \u2032\u20325 975 08/09/2000 06:43:42 3600 2. \u2032\u20324 1012 00:21:26.27 -72:00:38.73 25/10/2001 03:31:42 6000 0. \u2032\u20326 1012 25/10/2001 05:12:35 6000 0. \u2032\u20328 1081 00:21:03.82 -72:06:57.74 29/08/2001 08:30:57 4500 0. 
\u2032\u20327 1081 30/08/2001 07:26:16 4500 0. \u2032\u20325 1081 30/08/2001 08:42:39 3600 0. \u2032\u20327 Gyr, Gratton et al. 2003) and the Galactic production should not have greatly enhanced its lithium content. 2. Observations and data reduction Our spectra were collected at ESO-Paranal with the UVES spectrograph (Dekker et al. 2000) at the Kueyen 8.2m telescope in the course of three runs, covering two years. The log of the observations is given in Table 1, the DIMM seeing was noted. The data were reduced using the UVES context within MIDAS. Di\ufb00erent spectra of the same star were coadded reaching S/N ratios in the range of 50 to 70 per pixel. The coadded spectra of the Li doublet for each star, together with the best \ufb01tting synthetic pro\ufb01le are shown in Fig.1. 3. Lithium abundances The equivalent widths of the Li doublet were measured by \ufb01tting synthetic spectra, as done by Bonifacio et al. (2002) and errors estimated through Monte Carlo simulations. The chemical composition and atmospheric parameters of these stars have been studied by Carretta et al. (2004). Both with respect to colours and Balmer line pro\ufb01les the four stars studied here are twins and share the same e\ufb00ective temperature and surface gravity. We adopt here the parameters of Carretta et al. (2004), namely Te\ufb00= 5832 K, log g = 4.05 (c.g.s. units) and a microturbulent velocity of 1.07 kms\u22121 for all the stars. The iron abundance measured for these stars by Carretta et al. (2004) is [Fe/H]= \u22120.64. We used the ATLAS code (Kurucz 1993; Kurucz 2005) to compute a model atmosphere, using the Opacity Distribution Function (ODF) of Castelli & Kurucz (2003) with [M/H]= \u22120.5, \u03be = 1 kms\u22121and \u03b1 elements enhanced by 0.4 dex. The procedure for Li abundance determination was the same as in Bonifacio et al. (2002): we iteratively computed synthetic spectra using the SYNTHE code (Kurucz 1993; Kurucz 2005) until the equivalent width of the synthetic spectrum matched the measured equivalent width. The abundances are given together with the equivalent widths in Table 2. For star # 952, for which abundances are not given in Carretta et al. (2004), we measured the Na abundance from the 616.1 nm, 818.3 nm and 819.4 nm lines, and derived log(Na/H)+12=A(Na)=5.61, taking into account the NLTE corrections of Gratton et al. (1999). The only two other TO stars of this cluster for which Li measures exist are the stars BHB 5 and BHB 7 (where BHB stands for Briley et al. 1994, who provide coordinates and \ufb01nding charts for these stars) that have been observed by Pasquini & Molaro (1997) with EMMI on the ESO 3.5m NTT telescope at a resolution R\u223c18000. These two stars have colours (see Table 2) which place them in the same position in the colour-magnitude diagram as the stars observed with UVES. We therefore decided to use the equivalent widths measured by Pasquini & Molaro (1997) and the same model atmosphere used for the stars observed with UVES to derive the abundances provided in Table 2. 4. Discussion 4.1. Are the Li variations real ? Although still very limited, the data suggest that there is a real scatter in the lithium abundances of this metal-rich cluster. The mean Li abundance is 1.84 with a standard deviation of 0.25 dex. We do not perform any correction for NLTE effects or standard depletion as done by Bonifacio et al. (2002) \f4 Bonifacio et al.: Lithium in 47 Tuc Fig. 1. 
Li doublet for the program stars, the best \ufb01tting synthetic pro\ufb01le is shown as a thin line. and Bonifacio (2002), since all the stars have the same Te\ufb00and these would be the same for all the stars and would have no e\ufb00ect on the dispersion in Li abundances. A Monte Carlo simulation of 1000 ensembles of 4 \u201dobservations\u201d with the errors reported in Table 2, as done by Bonifacio et al. (2002) and Bonifacio (2002), provides a mean dispersion of 0.063 dex with a standard deviation of 0.032 dex, we can claim that dispersion in excess of the expected measurement error is detected at the Fig. 2. Na abundances versus Li abundances for the four stars measured by us. Table 2. Equivalent widths and Li abundances for TO stars in 47 Tuc. Errors take into account only the uncertainty on equivalent widths, the e\ufb00ects of uncertainties in Te\ufb00are neglected. star # V B-V EW \u03c3MC S/N A(Li) \u03c3Li mag mag pm pm (1) (2) (3) (4) (5) (6) (7) (8) measures from our VLT-UVES data 952 17.36 0.557 2.77 0.20 78 1.95 0.04 975 17.33 0.597 1.41 0.30 54 1.58 0.11 1012 17.36 0.581 1.69 0.21 71 1.68 0.07 1081 17.37 0.587 3.91 0.24 47 2.14 0.04 measures of Pasquini & Molaro (1997) from NTT-EMMI data BHB 7 17.38 0.57 5.30 0.80 2.30 0.08 BHB 5 17.35 0.59 5.60 1.10 2.33 0.12 6 \u03c3 level. Since the errors in Table 2 arise only from errors in the equivalent widths one could argue that a real di\ufb00erence in the temperature of the stars could justify the scatter observed. A random scatter of 100 K in the e\ufb00ective temperatures would imply changes of the order of 0.08 dex in the derived Li abundances. If we increase all the errors in Li abundances by this amount and perform another Monte Carlo simulation the mean dispersion is 0.141 dex with a standard deviation of 0.063 dex. At this point the detection of extra scatter is marginal at only 1.8 \u03c3, yet still present. In order for the observed dispersion to be entirely consistent with the observational errors, we would have to increase the scatter in Te\ufb00up to 250 K. Given the sim\fBonifacio et al.: Lithium in 47 Tuc 5 Fig. 3. Na abundances versus N abundances for the four dwarf stars (opens symbols) and for the subgiant stars (Carretta et al. 2004; Carretta et al. 2005). ilarity of the spectra of the di\ufb00erent stars there is little support for the existence of such a spread in the e\ufb00ective temperatures of our stars. Furthermore, such a scatter in temperatures should also produce some scatter in derived iron abundances, which is not observed (Carretta et al. 2004). We believe it is simplest to accept that the stars observed by us do show real variations in the Li abundances. Clearly the observation of a larger sample of stars is required to \ufb01rmly establish the reality of Li variations. We have repeatedly tried in the last four years to obtain time at the VLT to observe further TO stars in this cluster, but have been unsuccessful. Let us assume, for the sake of discussion, that the spread in Li abundances is entirely due to observational errors. In this case it is legitimate to average the values in Table 2 to obtain the Li content in 47 Tuc. The mean A(Li) = 1.84 is 0.5 dex lower than the lithium content in NGC 6397. One is thus lead to the inescapable conclusion that Li has been depleted in this cluster. Under these assumptions the depletion would have been fairly uniform. 
We believe it is less contrived to assume that Li has been depleted in a non-uniform way and that this non-uniformity gives rise to the scatter in Li abundances. Finally one should not discard the information provided by the two stars observed by Pasquini & Molaro (1997), both of which show a higher A(Li) than any of the stars observed with UVES, and indeed to a level comparable to that observed in NGC 6397. This considerably strengthens the claim that there are indeed real variations in Li abundance among TO stars in 47 Tuc. Fig. 4. Li abundances versus N abundances for the four dwarf stars and for the subgiant stars (Carretta et al. 2004; Carretta et al. 2005). Only upper limits are available for the subgiants stars. 4.2. The Li-Na anti-correlation The Li abundances in 47 Tuc appear to be anti-correlated with the Na abundances; in our view this is a fact which further supports the reality of the abundance variations. A plot of A(Li) versus A(Na) is shown in Fig. 2. Kendall\u2019s \u03c4 test provide a probability of 82% that this anti-correlation is real. It is a bit low to make a strong claim, but with only 4 points it is di\ufb03cult to expect more de\ufb01nite results. If the spread in Li abundances were due to incorrect temperatures it could not possibly create such an anti-correlation, since any change in Te\ufb00produces changes of equal sign and comparable magnitude on both Li and Na. There is no way to create the anti-correlation by an incorrect choice of atmospheric parameters. If we accept the hypothesis that the \u201cunpolluted\u201d Na abundance of the cluster is provided by the stars with the highest Li abundance then it should be [Na/Fe]=\u20130.36, or perhaps even lower, if the stars BHB 5 and BHB 7 follow the trend traced by the other four stars. We examined again the EMMI spectra of the two stars observed by Pasquini & Molaro (1997) to see if it were possible to measure the Na abundance from those spectra. The only usable Na lines were those of the 615.4\u2013616.0nm doublet, which are weak in these warm TO stars. Given the low S/N of the spectra, the lines cannot be reliably used and only a very high (not signi\ufb01cant) upper limit can be obtained. \f6 Bonifacio et al.: Lithium in 47 Tuc Fig. 5. Li abundances as a function of heliocentric radial velocities The four stars observed by us are shown as \ufb01lled circles, while the two stars observed by Pasquini & Molaro (1997) are shown as open circles. 4.3. The need for \u201cpollution\u201d When considering \ufb01eld stars, the usual interpretation of relatively cool and metal-rich stars found below the Spite plateau is that Li is depleted in these stars due to a deeper convection zone and/or atomic di\ufb00usion (Michaud et al. 1984; Vauclair & Charbonnel 1995, 1998; Salaris & Weiss 2001; Richard, Michaud, & Richer 2002; Richard et al. 2002, 2005). In the case of 47 Tuc the observation of low Li abundances is accompanied by the \ufb01nding of the Li-Na anti-correlation, which implies that some of the polluting material has been processed at temperatures in excess of 3 \u00d7 107 K so that Li has been destroyed and Na created by p captures on 22Ne. These temperatures, are high enough for extensive burning of oxygen through the CNO cycle, which can well explain the NaO anti-correlation present in this cluster (Carretta et al. 2004). These temperatures are however too high to be found within TO cluster stars, which, even at the centre should not exceed a temperature of 2 \u00d7 107 K. 
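As a concrete illustration of the scatter test of Sect. 4.1 and the Li-Na discussion of Sect. 4.2 (a sketch of ours, not the code used for the paper), the Monte Carlo comparison can be reproduced in a few lines, taking only the A(Li) values and equivalent-width errors of the four UVES stars from Table 2:

```python
# A sketch (ours, not the authors' code) of the Monte Carlo test described
# in Sect. 4.1: 1000 synthetic ensembles of 4 "observations" scattered only
# by the equivalent-width errors of Table 2, compared with the observed
# dispersion of the A(Li) values.
import numpy as np

rng = np.random.default_rng(0)

a_li   = np.array([1.95, 1.58, 1.68, 2.14])  # A(Li) of the four UVES stars (Table 2)
sig_li = np.array([0.04, 0.11, 0.07, 0.04])  # errors from the equivalent widths only

observed = a_li.std(ddof=1)  # ~0.25 dex

# Null hypothesis: a single true abundance, scatter from measurement errors alone.
fake   = rng.normal(loc=a_li.mean(), scale=sig_li, size=(1000, 4))
mc_std = fake.std(axis=1, ddof=1)

print(f"observed dispersion      : {observed:.2f} dex")
print(f"expected from errors only: {mc_std.mean():.3f} +/- {mc_std.std():.3f} dex")
print(f"excess scatter           : {(observed - mc_std.mean()) / mc_std.std():.1f} sigma")

# Adding a further 0.08 dex per star (the effect of a hypothetical 100 K Teff
# scatter) to the errors and repeating the simulation brings the excess down
# to the marginal ~1.8 sigma quoted in Sect. 4.1. The Li-Na anti-correlation
# itself can be tested with scipy.stats.kendalltau once the A(Na) values of
# Carretta et al. (2004) are supplied alongside A(Na) = 5.61 for star 952.
```

With these inputs the expected dispersion should come out close to the 0.063 +/- 0.032 dex quoted in Sect. 4.1, roughly 6 sigma below the observed 0.25 dex.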
Therefore the Na-O anti-correlation requires processing in a previous generation of stars and subsequent non-uniform pollution of the ISM. This view has to be consistent with the [N/O] ratio in the TO stars in this cluster, which has a mean value of -0.85 dex (Carretta et al. 2005), that is about 0.5 dex lower than what observed in \ufb01eld stars of the same metallicity (Israelian et al. 2004). Also the 12C/13C ratios (> 10, Carretta et al. 2005) argues against CNO cycling of material in the TO stars of 47 Tuc. This poses serious problems to explain the observed Li-Na anti-correlation. The polluting material probed by the abundances in the TO stars of 47 Tuc, must have experienced high temperatures in order to produce the LiNa anti-correlation, however we do not observe signature of extensive O burning, N production or 13C production. A further complication in this picture is the possible production of Li, e.g. via Cameron-Fowler mechanism (Cameron & Fowler 1971) or otherwise. Any production of Li, however, would tend to erase the Li-Na anti-correlation, it is therefore likely that there is no, or very little, Li production. In Fig. 3 we show [Na/Fe] as a function of [N/Fe] for both the dwarf and subgiant stars observed in 47 Tuc. There is a hint of a correlation, albeit with a very large scatter. We inspected all the spectra of the subgiant stars observed in this cluster with UVES, but could not convincingly detect the Li doublet in any of them. Considering the quality of the spectra we set an upper limit on the equivalent width of the Li doublet in these stars of 1 pm, this implies an upper limit A(Li)< 0.34. In Fig. 4 we show the Li abundances, or upper limits, as a function of [N/Fe], contrary to what observed in the cluster NGC 6752 (Pasquini et al. 2005), Li and N appear to be totally uncorrelated. We stress that some caution must be exerted in interpreting this data since too few stars have been observed. It should be noted that according to the measures of Briley et al. (1994), BHB 5 is CN-weak CH-strong, while BHB 7 is CN-strong CH-weak (see Fig. 7 of Briley et al. 1994). If we interpret the CN band strength in terms of N abundance, this would be further evidence of what is hinted at by Fig. 4: that Li abundance does not seem to be correlated to N abundance. It would clearly be of great interest to re-observe stars BHB 5 and BHB 7 with an 8m class telescope in order to perform a complete chemical analysis and improve their Li abundances. 4.4. Neutron capture elements: absence of an AGB signature It is interesting to look also at the abundances of the n\u2212capture elements in these stars, which have been measured by James et al. (2004). In this respect the cluster seems extremely homogeneous and the abundances are consistent with those observed in \ufb01eld stars of the same metallicity, with the exception of Sr, for which both the [Sr/Fe] and the [Sr/Ba] ratios are slightly higher than in \ufb01eld stars of the same metallicity. This situation makes it unlikely that these stars have been formed out of, or polluted by, material heavily enriched by s\u2212 process elements, as may be expected in the ejecta of AGB stars which have undergone thermal pulses. It should be here noted that a recent investigation of the chemical composition of AGB stars in this cluster (Wylie et al. 2006) has claimed a true scatter in the ratios of n\u2212capture elements. 
Since the stars currently observed on the AGB in 47 Tuc are of too low mass to have undergone the third dredgeup one should conclude that this inhomogeneity is intrinsic and not due to the self\u2013pollution. To further support this point of view Wylie et al. (2006) point out the large scatter in sodium abundances in their stars (+0.3 < \u223c[Na/Fe] < \u223c+1.0) and claim \fBonifacio et al.: Lithium in 47 Tuc 7 that this, and the variations in n\u2212capture elements could be explained by the presence of at least two separate stellar populations. This suggestion is intriguing, however the results of Wylie et al. (2006) for n\u2212capture elements are at odds with those of James et al. (2004) for the stars examined in the present paper and with those of Alves-Brito et al. (2005) for a sample of 5 giants stars in this cluster. Further investigation would be desirable to rule out the possibility of systematic di\ufb00erences among the di\ufb00erent analysis. Note that the Na enhancements observed by Wylie et al. (2006) are all larger than what observed in our TO stars. 4.5. Do all GCs evolve in a similar way ? It appears clear that GCs exhibit a considerable diversity in their abundance inhomogeneities. NGC 6397 displays no Li\u2013 Na anticorreation and a marked enhancement of N. NGC 6752 displays a well de\ufb01ned Li\u2013Na anti-correlation, accompanied by a rather large enhancement of nitrogen, compared to \ufb01eld stars, and a [N/O] ratio which ranges between 0.6 and 1.8, over two orders of magnitude larger than what observed in \ufb01eld stars of the same metallicity. 47 Tuc displays a Li\u2013Na anti-correlation, however N does not appear to be enhanced and subsolar values of [N/O] are found. Recently Bekki et al. (2007) have proposed a scenario by which GCs are formed at high redshift in dwarf galaxies, embedded in dark matter subhalos and the polluters are mainly the \ufb01eld IM-AGB stars of the host galaxy, which is subsequently tidally disrupted. Such a scenario has several appealing aspects, among which, in our view, the most interesting is that it may accomodate quite naturally di\ufb00erences in selfenrichment histories of GCs, which may be traced to the di\ufb00erent properties (masses, dynamics, age...) of the, now disrupted, host galaxies and dark matter subhalos. Currently such models are unable to explain the Na-O anti-correlations, and, by inference, they should likewise be unable to explain the Li-Na anti-correlation. However, more generally, the abundance pattern in the GC hosting dwarf galaxies may result from complex histories, which may allow to explain the variety of abundance patterns observed in GCs. The scenario of Bekki et al. (2007) is supported by the dynamical simulations of Gnedin & Prieto (2006), which suggest that GCs may form in giant molecular clouds within high redshift galaxies. By computing the orbits of such clusters in a Milky Way-sized galaxy Gnedin & Prieto (2006) conclude that all clusters found at distances larger than 10 kpc from the Galactic center were indeed formed in satellite galaxies, which have now been tidally disrupted. At this point it is perhaps worth to mention the signi\ufb01cative di\ufb00erence in HB morphology between NGC 6752 and 47 Tuc, the former beeing characterized by an extended blue tail. 
Of course this di\ufb00erence could be simply due to the di\ufb00erent metallicity of the two clusters, however the long blue tail in the HB of NGC 6752 could also be linked to stars polluted by He-enriched matter, according to the scenario suggested by D\u2019Antona & Caloi (2004). 4.6. Are massive stars the polluters ? The lack of nitrogen enhancement seems di\ufb03cult to reproduce using AGB stars as polluters. A distinct possibility is that the polluters are instead massive stars, an alternative which is favoured by Prantzos & Charbonnel (2006). These authors suggest that it is the wind of massive stars which is retained in the cluster, while the SN ejecta are lost, due to their higher speed of ejection. Meynet et al. (2006) present the results for models of 60 M\u2299star of very low metallicity (Z/Z\u2299= 10\u22128 and Z/Z\u2299= 10\u22125), both with and without rotation. According to these computations the wind of such a star, can provide large 12C/13C ratios, however very small N/O ratios, as observed in this cluster, only if the star has a low enough rotational velocity, so that rotational mixing is unimportant. If instead also the SN ejecta are mixed to the wind, due to the large production of O, the N/O ratio is considerably lowered, whatever the rotational velocity. It seems however likely that the fast S/N ejecta are lost to the cluster and do not contribute to its chemical evolution, at variance to what we expect from the relatively slow-moving winds. The investigation of Meynet et al. (2006) has been extended by Decressin et al. (2007), who computed also models for 20, 40, 60 and 120 M\u2299, for a metallicity of Z = 0.0005, which corresponds, roughly, to [Fe/H]=\u20131.5, adequate, e.g. for NGC 6752. These rotating models seem to produce winds with a composition apt to reproduce the C,N, O and Na variations in NGC 6752. We cannot apply directly these models to 47 Tuc, which is considerably more metal-rich. We note however that the winds of mass 60 and 120 M\u2299, according to Decressin et al. (2007), display low N/O ratios only up to the end of central Hburning, after this phase N/O is always greater than the solar ratio, which is \u223c0.1, while the observed N/O ratio in 47 Tuc is lower than solar. The 20 M\u2299model never provides wind with N/O > 1 and the 40 M\u2299does so only after the appearance of the He-burning products at the surface. Decressin et al. (2007) do not provide the fraction of 13C in the wind, so that we cannot tell if at any of these phases a low N/O is accompanied by high 12C/13C. Even if it were so, one would have to admit that the pollution was made only during such phases (or by stars of such masses, e.g. masses of 20 M\u2299or lower). Decressin et al. (2007) assume that the massive stars winds are essentially Li free and invoke a dilution of the wind with about 30% of pristine gas, in order to reproduce the lowest Li abundance observed in NGC 6752, assuming the pristine lithium was what derived from the SBBN predictions, and the baryon to photon ratio provided by WMAP. In the case of 47 Tuc, since some of the observed values of Li are even lower than in NGC 6752, these should have been formed almost exclusively out of the winds, with an addition of at most 9% of pristine material. 
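The dilution numbers quoted above follow from simple bookkeeping: if, as Decressin et al. (2007) assume, the massive-star wind is essentially Li free while the pristine gas carries the SBBN + WMAP abundance A(Li) = 2.64, then only the pristine fraction f contributes lithium and A(Li)_mix = 2.64 + log10(f). A minimal sketch under these assumptions (our own illustration, not taken from the paper):

```python
# Back-of-the-envelope dilution bookkeeping (our illustration, not the
# authors' model): a Li-free wind mixed with a fraction f of pristine gas
# carrying A(Li) = 2.64 ends up with A(Li)_mix = 2.64 + log10(f).
import math

A_PRISTINE = 2.64  # assumed primordial value (SBBN with the WMAP baryon density)

def pristine_fraction(a_li_mix: float) -> float:
    """Pristine-gas fraction needed to reach the observed A(Li)."""
    return 10.0 ** (a_li_mix - A_PRISTINE)

def mixed_abundance(f_pristine: float) -> float:
    """A(Li) of a Li-free wind diluted with a fraction f_pristine of pristine gas."""
    return A_PRISTINE + math.log10(f_pristine)

print(pristine_fraction(1.58))  # ~0.09: the "at most 9%" quoted for 47 Tuc
print(mixed_abundance(0.30))    # ~2.12: the ~30% dilution invoked for NGC 6752
```

On this scale the lowest 47 Tuc value (A(Li) = 1.58, Table 2) indeed allows at most ~9% pristine material, while a ~30% admixture corresponds to A(Li) of about 2.1.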
It is certainly true that at the temperature necessary for sodium production the lithium should be completely destroyed; therefore, to be consistent with the observed Li abundances, the processed material must in any case have been diluted with material in which Li has been preserved. Such pollution may provide the observed Li-Na anti-correlation; however, the abundances of N and 13C should then follow this pattern. 4.7. A kinematical signature of pollution? An interesting feature emerges if we plot Li abundances as a function of radial velocity (see Fig. 5): there is a mild hint (the probability of a correlation between radial velocity and Li abundance is ~91%) that the most Li-rich stars have a radial velocity different from that of the less Li-rich ones. This may point towards a kinematic distinction between the more polluted and the less polluted stars; however, given the limited size of the sample, it is premature to claim that this is a real feature. Nevertheless, it clearly calls for the observation of a larger sample of stars. 5." + } + ] + }, + "edge_feat": {} + } +} \ No newline at end of file