diff --git "a/abs_29K_G/test_abstract_long_2405.02844v1.json" "b/abs_29K_G/test_abstract_long_2405.02844v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.02844v1.json" @@ -0,0 +1,418 @@ +{ + "url": "http://arxiv.org/abs/2405.02844v1", + "title": "SMCD: High Realism Motion Style Transfer via Mamba-based Diffusion", + "abstract": "Motion style transfer is a significant research direction in multimedia\napplications. It enables the rapid switching of different styles of the same\nmotion for virtual digital humans, thus vastly increasing the diversity and\nrealism of movements. It is widely applied in multimedia scenarios such as\nmovies, games, and the Metaverse. However, most of the current work in this\nfield adopts the GAN, which may lead to instability and convergence issues,\nmaking the final generated motion sequence somewhat chaotic and unable to\nreflect a highly realistic and natural style. To address these problems, we\nconsider style motion as a condition and propose the Style Motion Conditioned\nDiffusion (SMCD) framework for the first time, which can more comprehensively\nlearn the style features of motion. Moreover, we apply Mamba model for the\nfirst time in the motion style transfer field, introducing the Motion Style\nMamba (MSM) module to handle longer motion sequences. Thirdly, aiming at the\nSMCD framework, we propose Diffusion-based Content Consistency Loss and Content\nConsistency Loss to assist the overall framework's training. Finally, we\nconduct extensive experiments. The results reveal that our method surpasses\nstate-of-the-art methods in both qualitative and quantitative comparisons,\ncapable of generating more realistic motion sequences.", + "authors": "Ziyun Qian, Zeyu Xiao, Zhenyi Wu, Dingkang Yang, Mingcheng Li, Shunli Wang, Shuaibing Wang, Dongliang Kou, Lihua Zhang", + "published": "2024-05-05", + "updated": "2024-05-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Mamba", + "gt": "Motion style transfer is a significant research direction in multimedia\napplications. It enables the rapid switching of different styles of the same\nmotion for virtual digital humans, thus vastly increasing the diversity and\nrealism of movements. It is widely applied in multimedia scenarios such as\nmovies, games, and the Metaverse. However, most of the current work in this\nfield adopts the GAN, which may lead to instability and convergence issues,\nmaking the final generated motion sequence somewhat chaotic and unable to\nreflect a highly realistic and natural style. To address these problems, we\nconsider style motion as a condition and propose the Style Motion Conditioned\nDiffusion (SMCD) framework for the first time, which can more comprehensively\nlearn the style features of motion. Moreover, we apply Mamba model for the\nfirst time in the motion style transfer field, introducing the Motion Style\nMamba (MSM) module to handle longer motion sequences. Thirdly, aiming at the\nSMCD framework, we propose Diffusion-based Content Consistency Loss and Content\nConsistency Loss to assist the overall framework's training. Finally, we\nconduct extensive experiments. The results reveal that our method surpasses\nstate-of-the-art methods in both qualitative and quantitative comparisons,\ncapable of generating more realistic motion sequences.", + "main_content": "INTRODUCTION Motion style transfer is a significant research direction in multimedia applications. 
The objective is to transpose the style of a style reference onto a content motion while conserving the motion content. The generated motion can thus possess features from both the content and the style motion, enabling swift switching between different styles for the same motion of a digital human, as depicted in Figure 1. Employing this technology can dramatically enrich and heighten the realism of digital human motion, and it is being broadly adopted in multimedia contexts such as movies, games, and the Metaverse. Traditional methods for motion style transfer [1, 12, 25] mainly adopt a generation framework based on GAN [7]. However, GAN training is known to suffer from instability and convergence issues, leading to difficulties in generating high-fidelity, natural motion sequences. In contrast, the diffusion framework tends to be more stable during training and is typically easier to converge. Therefore, to address the aforementioned problems, we adopt the diffusion model as our generative framework and consider style motion sequences as a diffusion condition for the first time. Consequently, we propose the Style Motion Conditioned Diffusion (SMCD) framework. This framework is capable of learning motion detail features and style variations more comprehensively, generating motions with both content and style motion characteristics, thereby achieving more realistic and natural motion style transfer. However, after proposing the SMCD framework, we discover that it fails to effectively extract the temporal information of the motion sequences, leading to the generation of disordered motion. To address this problem, we draw inspiration from the Mamba [8] model and propose the Motion Style Mamba (MSM) module. The MSM module effectively captures sequence temporal information by utilizing the Selection Mechanism, preserving long-term temporal dependencies within a motion sequence. We are the first to introduce the Mamba [8] model to the field of motion style transfer. Additionally, since we propose a new framework for motion style transfer, suitable loss functions to aid in training are currently lacking. In light of this, we specially design the Diffusion-based Content Consistency Loss and the Diffusion-based Style Consistency Loss, tailored to the characteristics of our proposed SMCD framework. These loss functions constrain the content and style of the generated motions and lead to better results. In the experiment section, we carry out extensive comparative tests against other methods. Visual effects and quantitative indicators show that the motions generated by the proposed SMCD framework possess higher naturalness and realism. Furthermore, it maintains the original motion style while generating various motions, such as walking, running, and jumping. In summary, the main contributions of this paper can be summarized as follows: • We propose a new motion style transfer framework, SMCD, for the first time, considering style motion sequences as conditions for diffusion to generate motions. • We are the first to utilize the Mamba model [8] in the field of motion style transfer and propose the MSM module. This module is designed to better extract the temporal information of motion sequences, thereby maintaining long-term dependencies in the time sequence of motion sequences.
• Due to the lack of loss functions that fully adapt to our SMCD framework, we propose the Diffusion-based Content Consistency Loss and Diffusion-based Style Consistency Loss to assist in training for the first time, enabling the model to achieve improved results. • We conduct extensive experiments to evaluate our framework. The results indicate that our proposed SMCD framework surpasses state-of-the-art methods in terms of both visual effects and quantitative indicators. 2 RELATED WORKS Motion Style Transfer. Motion style transfer is a significant research area in multimedia applications. Early methods [3, 29] rely on handcrafted feature extraction to design different motion styles. These approaches, however, are inefficient and incapable of quickly generating large-scale stylized motions. Later, some methods [18, 34] attempt to employ machine learning for motion style transfer. However, these methods typically require a paired dataset for training, meaning they need a human avatar to perform the same motion in different styles, such as running in both a happy and a sad state, with nearly identical steps. Such an intricate process limits the creation of large-scale paired motion datasets. In recent years, several methods [1, 4, 12, 25] borrow techniques from image style transfer, utilizing deep learning architectures for digital human motion style transfer. These methods do not require paired training datasets and achieve sound motion style transfer effects. However, most adopt a Generative Adversarial Network (GAN) [7] based generation framework. GAN [7] training is known to suffer from instability and convergence issues, which results in difficulties in generating realistic, high-fidelity motion sequences. To resolve these problems, we propose a diffusion-based motion style transfer framework. Furthermore, we are the first to consider style motion as a condition within diffusion, allowing a more comprehensive learning of the content and style features within a motion sequence. This results in a more realistic, more natural motion style transfer. Diffusion Generative Models. Diffusion consists of a forward process and a reverse process, forming a Markovian architecture that reverses predetermined noise using neural networks and learns the underlying distribution of the data. Researchers highly favor the diffusion model for its excellent performance in various research areas, such as image generation [22, 24, 30], video generation [9], reinforcement learning [13], 3D shape generation [45], and more, benefiting from the advances in learning-based technologies [35–41]. Compared to GANs [7] and VAEs [15], the diffusion model exhibits promising quality not only in image tasks but also in motion generation. The work [43] is the first text-based motion diffusion model that achieves body part-level control using fine-grained instructions. Tevet et al. [26] introduce a motion diffusion model that operates on raw motion data and learns the relationship between motion and input conditions. The method [44] presents a retrieval-augmented motion diffusion model, leveraging additional knowledge from retrieved samples for motion synthesis. The research [33], in contrast to traditional diffusion models, devises a spatial-temporal Transformer-based architecture as the core decoder, diverging from the conventional U-Net backbone, to introduce diffusion into human motion prediction. Kim et al.
[14] combine an improved DDPM [19] with classifier-free guidance [11], integrating diffusion-based generative models into the motion domain. The method [28] utilizes a Transformer-based diffusion model, coupled with Jukebox, to provide motion generation and editing suitable for dance. The effort [5] employs a 1D U-Net with cross-modal transformers to learn a denoising function, synthesizing long-duration motions based on contextual information such as music and text. Flaborea et al. [6] focus on the multimodal generation capability of diffusion models and the improved mode coverage of diffusive techniques, applying them to detect video anomalies. However, among the numerous diffusion-based frameworks, no existing work incorporates style motion as a condition and applies it to motion style transfer. 3 METHODOLOGY Pose Representation. We categorize the motion sequences input to the Style Motion Conditioned Diffusion (SMCD) framework into two types based on function. The first type, the content motion sequence m_c ∈ R^{4J×N}, has N poses, and each pose m_{c,i} has 4J dimensions, i.e., m_c = {m_{c,i}}_{i=1}^{N}. Similarly, the second type, the style motion sequence n_s ∈ R^{3J×T}, also has N poses, and each pose n_{s,i} has 3J dimensions, i.e., n_s = {n_{s,i}}_{i=1}^{N}. The content motion sequence m_c is represented using joint rotations and carries a source style c ∈ S. In contrast, the style of the style motion sequence n_s can be inferred from the relative movement of its joints, so it is represented using joint positions and carries a target style s ∈ S. Here, S denotes the collection of all styles, and J = 21 is the number of joints in the human skeleton. The objective of the SMCD framework is to generate a motion sequence that simultaneously possesses the content characteristics of m_c and the style features of n_s, thereby achieving motion style transfer. 3.1 Style Motion Conditioned Diffusion Framework. The majority of current motion style transfer methods [2, 12, 25] adopt a generative framework based on GAN [7]. However, during training, GAN is prone to instability and convergence issues, often resulting in disorganized, chaotic motion sequences that struggle to embody a realistic, natural motion style. In contrast, the diffusion framework tends to be more stable during training and is typically easier to converge. Therefore, to address these problems, we adopt a diffusion model as our generative framework. To ensure that the diffusion framework can learn the details of motion characteristics and style variations more comprehensively, we consider the style motion sequence n_s as the condition C ∈ R^{d×N} for diffusion. Consequently, we propose the Style Motion Conditioned Diffusion (SMCD) framework, achieving more realistic and higher-fidelity motion style transfer. We utilize the definition of diffusion delineated in DDPM [10], considering the forward diffusion process as a Markov noising process.
By perpetually infusing Gaussian noise into the motion sequence m_0 ∈ R^{d×N}, we disrupt the motion sequence and obtain {m_t}_{t=0}^{T}, i.e., the full motion sequence at each noising step t, where m_0 ∈ R^{d×N} is drawn from the data distribution. This forward noising process can be defined as follows: q(m_t | m_0) ∼ N(√(ᾱ_t) m_0, (1 − ᾱ_t) I), (1) where the ᾱ_t ∈ (0, 1) are monotonically decreasing constants; as ᾱ_T approaches 0, m_T approaches N(0, I). We set the number of timesteps to T = 1000.
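As a concrete reference for Eq. (1), the snippet below sketches the forward noising step in PyTorch (the framework named in Section 4.1). The tensor layout (batch, 4J, N) with J = 21, the linear beta schedule, and the function name forward_noise are illustrative assumptions; the paper itself only fixes T = 1000.

```python
import torch

T = 1000                                       # diffusion timesteps, as set in the paper
betas = torch.linspace(1e-4, 2e-2, T)          # assumed linear beta schedule (not specified in the text)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)  # \bar{alpha}_t: monotonically decreasing towards 0

def forward_noise(m0, t, alpha_bar):
    """Sample m_t ~ q(m_t | m_0) = N(sqrt(abar_t) m_0, (1 - abar_t) I), as in Eq. (1)."""
    # m0: clean motion of shape (batch, 4*J, N) with J = 21; t: (batch,) integer noising steps
    a = alpha_bar[t].view(-1, 1, 1)
    return a.sqrt() * m0 + (1.0 - a).sqrt() * torch.randn_like(m0)
```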
3.2 Motion Style Mamba Architecture. Upon introducing the SMCD framework, we observe that it exhibits suboptimal performance in extracting temporal information from motion sequences, resulting in somewhat chaotic outcomes. Drawing inspiration from the Mamba model proposed by Gu et al. [8], we propose the Motion Style Mamba (MSM) module to address this issue. This module employs a Selection Mechanism to more effectively capture the temporal dynamics of motion sequences, thereby preserving the long-term dependencies within the sequence and enhancing the efficacy of motion style transfer. To the best of our knowledge, we are the first to introduce the Mamba model for motion style transfer. Figure 2: (Left) Overview of the Style Motion Conditioned Diffusion (SMCD) framework. The model takes as input a content motion sequence m_c with N poses at a noising step t, the step t itself, and a style motion sequence n_s treated as the condition C. The Motion Style Mamba (MSM) module predicts the stylized motion m_0 in each sampling step. (Right) Sampling with the MSM. Given n_s as the condition C, we sample random noise m_T with the dimensions of the desired motion, then iterate from T = 1000 down to 1; in each step t, the MSM predicts the stylized motion m_0 and diffuses it back to m_{t−1}. The MSM module primarily embeds independent temporal information into the motion sequences. Before the motion sequences enter the MSM module, the inputs and the timestep are processed as follows: Seq_T = PE(concat(MLP(T), Linear(n_s), Linear(m_c))), (2) where the timestep T is projected through a multi-layer perceptron (MLP) comprising two linear layers followed by an activation layer, mapping it into a continuous vector space and forming a latent vector that the MSM module can manipulate. n_s ∈ R^{3J×T} denotes the style motion sequence and m_c ∈ R^{4J×N} denotes the content motion sequence. After each passes through a linear layer, the two components are concatenated to form an augmented motion sequence; after positional encoding, this sequence becomes Seq_T, which serves as the input to the MSM module. Within the MSM module, the Mamba Block [8] plays the pivotal role of mapping temporal information, via the timestep T, onto both the content motion sequence and the style motion sequence while modulating the significance of the temporal information. Inside the Mamba Block, Seq_T first passes through a residual structure equipped with an InstanceNorm (IN) layer, followed by feature extraction via Causal Conv1D [31]. The Causal Conv1D ensures that each output value depends only on its preceding input values. Moreover, the Selective Scan constitutes the core component of the Mamba Block, enabling the model to selectively update its internal state based on the current characteristics of the input data. This further refines the focus on temporal information, facilitating the capture of the temporal dependencies within the motion sequence. Utilizing the Selective Scan allows a high degree of temporal alignment between the content motion and the style motion, thereby avoiding the rigidity that may arise from asymmetrical motion sequences in the final output. The structure of the Mamba Block can be delineated by the following formulas: Mamba_s^0 = LN(Seq_T), (3) Mamba_s^i = LN(IN(Mamba_s^{i−1})) + IN(Φ(μ(IN(Mamba_s^{i−1})))), (4) Mamba_res = LN(Seq_T) + Mamba_s^N, (5) where LN is a linear layer, IN is an Instance Normalization layer, Φ is the Selective Scan module, and μ denotes the Causal Conv1D layer [31]. Mamba_s^i denotes the Mamba Block output at the i-th iteration of the cyclic process; in particular, Mamba_s^0 denotes the input presented to the Mamba Block, and Mamba_res represents the output of the residual network that incorporates the Mamba Block as a constitutive element.
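To make the structure of Eqs. (3)-(5) concrete, the following is a schematic, simplified stand-in for one Mamba-style block: a linear/normalization stage, causal Conv1D feature extraction, and a data-dependent (selective) recurrence wrapped in a residual connection. It is neither the authors' implementation nor the optimized official Mamba kernel; the hidden dimension, kernel size, and gating form are assumptions.

```python
import torch
import torch.nn as nn

class MambaStyleBlock(nn.Module):
    """Schematic stand-in for the Mamba Block of Eqs. (3)-(5)."""
    def __init__(self, dim, kernel=4):
        super().__init__()
        self.norm = nn.InstanceNorm1d(dim)                                    # IN layer
        self.conv = nn.Conv1d(dim, dim, kernel, padding=kernel - 1, groups=dim)  # causal after trimming
        self.to_gate = nn.Linear(dim, dim)                                    # input-dependent "selection"
        self.to_update = nn.Linear(dim, dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, x):                                  # x: (batch, length, dim), e.g. Seq_T
        b, l, d = x.shape
        h = self.norm(x.transpose(1, 2))                   # InstanceNorm over channels
        h = self.conv(h)[..., :l].transpose(1, 2)          # causal Conv1D feature extraction
        gate = torch.sigmoid(self.to_gate(h))              # per-step, data-dependent retention
        upd = self.to_update(h)
        state = torch.zeros(b, d, device=x.device)
        outs = []
        for t in range(l):                                 # selective scan written as a plain loop
            state = gate[:, t] * state + (1.0 - gate[:, t]) * upd[:, t]
            outs.append(state)
        return x + self.out(torch.stack(outs, dim=1))      # residual connection, as in Eq. (4)
```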
After the Mamba Block completes this integration, the temporal information and motion sequences are consolidated and fed into a Multi-Head Attention (MHA) mechanism, followed by a residual network augmented with a Position-wise Feed-Forward Network, which further enhances the efficacy of the style transfer process: σ = IN(LN(Mamba_res)) + MHA(LN(Mamba_res)), (6) where σ refers to the output of the residual network that integrates the MHA. The final output of the MSM module, M_MSM, can be written as: M_MSM = FFN(σ) + IN(σ), (7) where FFN denotes the Position-wise Feed-Forward Network. Figure 3: Architecture of the Motion Style Mamba (MSM) module. 3.3 Training Objectives. Our objective is to synthesize a motion sequence of length N that embodies both the characteristics of the content motion and those of the style motion under the condition C given by the style motion sequence n_s ∈ R^{3J×T}. We model the distribution p(m_0 | C) as the reversed diffusion process of iteratively denoising m_T. To better handle lengthy motion sequences and enhance computational efficiency, we propose the Motion Style Mamba (MSM) module. After the noisy motion m_t, the noising step t, and the motion condition C are fed into the MSM module, we directly predict the original motion sequence m̂_0, i.e., m̂_0 = MSM(m_t, t, C) = MSM(m_t, t, n_s), without having to predict the noise ε_t as in [10] (see Figure 2, right). Furthermore, we introduce the simple loss proposed by Ho et al. [10] to encourage the predicted motion sequence m̂_0 to be as consistent as possible with the original motion sequence m_0: L_simple = E_{m_0, t∼[1,T]} [ ∥m_0 − MSM(m_t, t, n_s)∥_2^2 ]. (8) Additionally, in light of the unique characteristics of the style motion conditioned diffusion framework proposed in this paper, we specially design the Diffusion-based Content Consistency Loss (Eq. 9) and the Diffusion-based Style Consistency Loss (Eq. 10). Diffusion-based Content Consistency Loss. When the input content motion sequence m_c and style motion sequence n_s share the same style (c = s), the generated motion should ideally closely resemble the content motion m_c, regardless of the content of the style motion n_s. Since existing loss functions do not fully fit our SMCD framework, and taking the above observation into account, we propose the Diffusion-based Content Consistency Loss under the style motion conditioned diffusion framework for the first time, aiming to constrain the motion content. In each iteration, two motion sequences with the same style are randomly selected from the dataset M to serve as the style motion and the content motion, respectively. The Diffusion-based Content Consistency Loss is then computed as: L_dcc = E_{m_c, n_c∼M} ∥MSM(m_c, t, n_c) − m_c∥_1. (9)
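A minimal sketch of how the consistency objectives could be computed is given below, covering Eq. (9) and its style-consistency counterpart introduced next (Eq. (10)). The pair-sampling helpers and the msm call signature are assumed interfaces, not the released code.

```python
import torch.nn.functional as F

def consistency_losses(msm, dataset, t):
    """Sketch of Eqs. (9)-(10): L1 penalties on same-style / same-content pairs.
    The paper only states that matching pairs are drawn randomly from the dataset M each iteration."""
    m_c, n_c = dataset.sample_same_style_pair()      # content motion + style motion sharing one style
    n_c2, n_s = dataset.sample_same_content_pair()   # two style motions sharing the same content
    # The formulas pass the content input together with step t; whether it is first
    # noised to step t is not spelled out in the text, so it is left as-is here.
    l_dcc = F.l1_loss(msm(m_c, t, n_c), m_c)         # generated motion should reproduce m_c
    l_dsc = F.l1_loss(msm(n_c2, t, n_s), n_s)        # generated motion should reproduce n_s
    return l_dcc, l_dsc
```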
Two fundamental differences exist between our loss function and the Content Consistency Loss proposed by Aberman et al. [2]: (1) our loss function is diffusion-based, and the timestep t controls the forward noising process applied to the motion; (2) the style motion in our loss function acts as a condition for diffusion, aligning more closely with the overall framework of this paper. Diffusion-based Style Consistency Loss. Following the same line of thinking as the Diffusion-based Content Consistency Loss, we also propose the Diffusion-based Style Consistency Loss for the first time. In each iteration, we randomly select two motion sequences with the same content from the dataset M to serve as the style motion and the content motion, respectively. The generated motion should then be close to the style motion n_s. We calculate the Diffusion-based Style Consistency Loss using the following formula: L_dsc = E_{n_c, n_s∼M} ∥MSM(n_c, t, n_s) − n_s∥_1. (10) Geometric losses. Geometric losses are frequently adopted in motion generation [20, 23, 27, 28] to enhance the physical realism of the motion, prompting the model to generate more naturally coherent motions. We employ three standard geometric losses, which constrain (1) positions, (2) foot contact, and (3) velocities: L_pos = (1/N) Σ_{i=1}^{N} ∥FK(m_0^i) − FK(m̂_0^i)∥_2^2, (11) L_foot = (1/(N−1)) Σ_{i=1}^{N−1} ∥(FK(m̂_0^{i+1}) − FK(m̂_0^i)) · f_i∥_2^2, (12) L_vel = (1/(N−1)) Σ_{i=1}^{N−1} ∥(m_0^{i+1} − m_0^i) − (m̂_0^{i+1} − m̂_0^i)∥_2^2, (13) where FK(·) is the forward kinematic function that converts joint angles into joint positions, and the superscript i denotes the motion frame index. f_i ∈ {0, 1}^J is the binary foot contact mask for frame i, indicating whether each foot is in contact with the ground. It is set according to the binary ground-truth data and mitigates foot sliding by penalizing foot velocity in frames where contact occurs. Our total training loss is the combination of the above six losses: L_total = L_simple + L_dcc + L_dsc + L_pos + L_vel + L_foot. (14) 4 EXPERIMENT In this section, we conduct extensive experiments comparing the method presented in this paper with state-of-the-art methods in terms of visual effects and quantitative metrics. We then test the effectiveness of the SMCD framework in transferring unseen styles to assess the model's generalizability in practical applications. Finally, we conduct extensive ablation experiments to validate the effectiveness of each component within the SMCD framework. 4.1 Implementation Details. We train and test on the Xia dataset [34]. The motion clips in this dataset cover 8 motion styles and 5 motion contents. We downsample the original 120 fps motion data to 60 fps and obtain approximately 1500 motion sequences in total. Our framework is implemented in PyTorch and trained on an NVIDIA A800 with a batch size of 512, using the AdamW optimizer [17]. Each training run takes about 10 hours.
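Under the settings above, a training step might be wired together as sketched below. The msm model, the dataset object, and the helpers forward_noise, alpha_bar, and consistency_losses refer to the earlier sketches and are assumptions; the learning rate is likewise assumed, since the paper only names the AdamW optimizer, the batch size, and the hardware.

```python
import torch
import torch.nn.functional as F

# Hypothetical wiring of the Section 3.3 objective with the Section 4.1 settings.
optimizer = torch.optim.AdamW(msm.parameters(), lr=1e-4)        # AdamW per the paper; lr is an assumption

def training_step(m0, n_s):
    # t drawn uniformly over the diffusion steps (Eq. (8) writes t ~ [1, T]; 0-indexed here)
    t = torch.randint(0, 1000, (m0.shape[0],), device=m0.device)
    m_t = forward_noise(m0, t, alpha_bar)                       # forward process, Eq. (1)
    m0_hat = msm(m_t, t, n_s)                                   # MSM predicts m_0 directly
    loss = F.mse_loss(m0_hat, m0)                               # L_simple, Eq. (8)
    loss = loss + sum(consistency_losses(msm, dataset, t))      # L_dcc + L_dsc, Eqs. (9)-(10)
    # The geometric terms L_pos, L_foot, L_vel (Eqs. (11)-(13)) would be added here via forward kinematics.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```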
4.2 Visual Effect Comparison. We qualitatively compare the visual effects of motion style transfer from three aspects: style expressiveness, content preservation, and motion realism. This comparison involves our proposed SMCD framework, the method proposed by Aberman et al. [1], and Style-ERD [25]. Due to the scarcity of open-source work in the field of motion style transfer, our comparison is limited to these two methods. The content motion and style motion adopted in the experiments originate from the dataset proposed by Xia et al. [34]. Under ideal circumstances, the model should be capable of transferring the style of the style motion to the content motion while preserving the content of the content motion. Hence, the generated motion sequence should embody the characteristics of both the content and the style motion. As seen in Figure 4, we conduct three sets of motion style transfers. The results show that the motions generated by our SMCD framework reflect the style more realistically while retaining the original content, demonstrating higher style expressiveness and content preservation. In contrast, the frameworks [1] and [25] struggle to transfer the motion style effectively. Regarding motion realism, motions generated by our SMCD framework are more realistic, whereas the other two methods exhibit flaws at the ankles, shoulders, and other areas, as highlighted by red boxes in Figure 4. Figure 4: A comparative visual representation of the SMCD framework against the methods proposed by Aberman et al. [1] and Style-ERD [25] (rows: old walk into neutral style; proud walk into sexy style; strutting run into old style). Red boxes denote flaws in the generated motions. 4.3 Quantitative Evaluation. Inspired by MoDi [21], we adopt the following metrics to evaluate our framework quantitatively (a sketch of the FID computation follows the list): • FID (Fréchet Inception Distance): measures the difference between the distribution of generated motions and that of real motions in a latent space. The lower the FID score, the smaller the distribution gap and the higher the quality of the generated motion. • KID (Kernel Inception Distance): similar to FID, it uses convolutional features when computing the distance between feature statistics. Compared with FID, KID is more sensitive to the local structure and details of generated motions. A lower KID score indicates higher quality. • Diversity: evaluates the degree of diversity of the generated movements. The higher the value, the more diverse the generated movements, indicating better generation outcomes.
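Since the paper does not specify its FID implementation or the feature extractor, the following is the standard Fréchet-distance computation on feature statistics, assuming motion features have already been extracted by some pretrained encoder.

```python
import numpy as np
from scipy import linalg

def fid(feats_real, feats_gen):
    """Fréchet distance between Gaussian fits of real and generated motion features.
    feats_*: (num_samples, feat_dim) arrays from an assumed pretrained motion feature extractor."""
    mu_r, mu_g = feats_real.mean(0), feats_gen.mean(0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):          # numerical noise can yield tiny imaginary parts
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))
```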
We conduct quantitative comparison experiments on the Xia dataset [34]; the results are shown in Table 1. The quantitative comparison results on the BFA dataset [2] can be found in the supplementary material. Due to the limited availability of publicly accessible datasets for motion style transfer, we compare only on these two mainstream datasets. Table 1 reveals that our proposed SMCD framework surpasses the baselines [2, 25] on most metrics, achieving the best overall results. This success stems from our SMCD framework and MSM module, which excel in learning content and style motion features and fusing them effectively. At the same time, these elements maintain the long-term temporal dependencies within the motion sequence, leading to the generation of more realistic motion sequences.
Table 1: Quantitative comparison with state-of-the-art methods on the Xia dataset [34]. The best scores are emphasized in bold.
Method | FID↓ | KID↓ | Diversity↑
Aberman et al. [2] | 19.405 | 0.953 | 2.639
Style-ERD [25] | 17.682 | 0.869 | 2.595
Ours | 16.676 | 0.768 | 2.602
4.4 Generalizability. Our model is capable of extracting styles from any given motion clip. However, in practical multimedia applications, motion style transfer models are likely to encounter style categories outside the training dataset. In such cases, whether the model can transfer an unseen style determines its generalization ability and usability. To compare the generalizability of our proposed SMCD framework with other methods, we train the model on the Xia dataset [34] with all motions labeled angry removed, and then test on data that includes angry style motions. The results, shown in Figure 5, illustrate that when faced with the unseen style angry, our SMCD framework can still capture its characteristics and achieves better motion style transfer than [1] and [25]. The other two methods exhibit flaws when transferring unseen styles, as indicated by the red boxes in Figure 5. The generalizability comparison indicates that our framework is more generalizable and practical; its ability to perform effectively across various multimedia fields, such as movies, games, and the Metaverse, distinguishes it from the other methods. 4.5 Ablation Studies. To verify the necessity of each component in our model, we conduct extensive ablation experiments, removing the MSM module and the loss functions L_simple, L_dcc, and L_dsc in turn, retraining the model each time, and then validating with the same evaluation metrics as in the quantitative evaluation. As shown in Table 2, the removal of any one component significantly degrades all evaluation metrics of the SMCD framework, with the most noticeable drop in motion style transfer performance occurring when the MSM module is removed. In addition, we present motions generated by the models trained after these removals, as illustrated in Figure 6. It can be observed that the motions have many flaws and do not effectively reflect the target style. The ablation results thus affirm the effectiveness of each component in our SMCD framework; they all play integral roles and are indispensable.
Table 2: Ablation experiments on various components of the SMCD framework. The best scores are highlighted in bold.
Setting | FID↓ | KID↓ | Diversity↑
Ours w/o L_simple | 17.546 | 0.831 | 2.158
Ours w/o L_dcc | 22.410 | 1.168 | 2.473
Ours w/o L_dsc | 20.294 | 1.030 | 1.931
Ours w/o MSM | 23.330 | 1.458 | 1.433
Ours | 16.676 | 0.768 | 2.602
To further compare the motion style transfer performance of our proposed MSM module with other modules, we replace the MSM module with four alternatives: STGCN [42], Transformer Encoder [32], iTransformer [16], and Mamba [8], and retrain the framework for comparative experiments. We use the same evaluation metrics as mentioned above to assess performance. As shown in Table 3, our MSM module outperforms all the other modules on all quantitative evaluation metrics, fully demonstrating its superiority in achieving a better motion style transfer effect. We hypothesize that this success is due to the MSM module's superior ability to capture the temporal information and stylization characteristics of motion sequences, thereby effectively transferring styles while maintaining the long-term dependencies within the sequence. Due to space limitations, more ablation results are provided in the supplementary material.
Table 3: Comparison results between the MSM module and other modules. The best scores are highlighted in bold.
Module | FID↓ | KID↓ | Diversity↑
STGCN [42] | 21.119 | 1.021 | 2.269
Transformer [32] | 18.977 | 0.952 | 2.080
iTransformer [16] | 19.177 | 0.862 | 2.392
Mamba [8] | 20.962 | 0.925 | 2.579
MSM (Ours) | 16.676 | 0.768 | 2.602
Figure 5: Illustration of unseen styles. Models are trained on the dataset [34] without the angry style and then tested conventionally to evaluate their generalizability on an unseen style (rows: neutral run into angry style; angry walk into childlike style). Red boxes highlight flaws in the generated motions. Figure 6: Motions generated by the models trained after removing L_dcc and L_dsc, compared with Ours (Full) (rows: angry walk into neutral style; sexy walk into neutral style). Red boxes highlight flaws in the generated motions. 4.6 User Study. In addition to the qualitative and quantitative comparisons, we conduct a user study to perceptually evaluate the realism, style expressiveness, and content preservation of our style transfer results. As detailed below, we recruit 50 volunteers to respond to a questionnaire consisting of three types of questions. Realism. In this part, we assess the realism of the generated motions. Motions depicting the same type of content and style (such as a depressed walk) are presented to the volunteers. The motions originate from four different sources: (1) our original Xia dataset [34], (2) results generated by the method [2], (3) results generated by Style-ERD [25], and (4) results generated by our framework. Note that (2), (3), and (4) are all generated using similar inputs. Participants are asked, \"Which motion above looks more like actual walking?\" and must choose one of the four motion sources. Table 4 presents the realism ratios for each source. 85.2% of our results are judged as realistic, closely approaching the proportion for the real Xia dataset [34] and significantly higher than the method [2] with 15.1% and Style-ERD [25] with 28.7%.
Table 4: The user study for realism ratios.
Xia dataset [34] | Aberman et al. [2] | Style-ERD [25] | Ours
88.9% | 15.1% | 28.7% | 85.2%
Content Preservation and Style Transfer. This part compares our style transfer results with those generated by Aberman et al. [2] and Style-ERD [25] regarding content preservation and style transfer.
Volunteers are presented with a content input, a style input, and the motion style transfer results of the three models. They are first asked to choose which model's motion content is closer to the input content, and then which model's motion style is closer to the input style. The results of this part of the user study are shown in Table 5. The findings indicate that our method achieves the best content preservation and style transfer outcomes: 64.8% and 72.3% of the volunteers perceive our method's motion content and style, respectively, as closer to the input content and style. In contrast, the proportions for the other two methods [1] [25] are significantly lower than ours.
Table 5: The user study for content preservation and style transfer.
Evaluation Metrics | Aberman et al. [2] | Style-ERD [25] | Ours
Content Preservation | 20.7% | 14.5% | 64.8%
Style Transfer | 10.9% | 16.8% | 72.3%", + "additional_graph_info": { + "graph": [ + [ + "Ziyun Qian", + "Zeyu Xiao" + ], + [ + "Ziyun Qian", + "Dingkang Yang" + ], + [ + "Ziyun Qian", + "Shunli Wang" + ], + [ + "Zeyu Xiao", + "Zhiwei Xiong" + ], + [ + "Zeyu Xiao", + "Ruisheng Gao" + ], + [ + "Zeyu Xiao", + "Jiawang Bai" + ], + [ + "Dingkang Yang", + "Lihua Zhang" + ], + [ + "Dingkang Yang", + "Yuzheng Wang" + ], + [ + "Dingkang Yang", + "Zhaoyu Chen" + ], + [ + "Shunli Wang", + "Dingkang Yang" + ], + [ + "Shunli Wang", + "Lihua Zhang" + ], + [ + "Shunli Wang", + "Qing Yu" + ] + ], + "node_feat": { + "Ziyun Qian": [ + { + "url": "http://arxiv.org/abs/2405.02844v1", + "title": "SMCD: High Realism Motion Style Transfer via Mamba-based Diffusion", + "abstract": "Motion style transfer is a significant research direction in multimedia\napplications. It enables the rapid switching of different styles of the same\nmotion for virtual digital humans, thus vastly increasing the diversity and\nrealism of movements. It is widely applied in multimedia scenarios such as\nmovies, games, and the Metaverse. However, most of the current work in this\nfield adopts the GAN, which may lead to instability and convergence issues,\nmaking the final generated motion sequence somewhat chaotic and unable to\nreflect a highly realistic and natural style. To address these problems, we\nconsider style motion as a condition and propose the Style Motion Conditioned\nDiffusion (SMCD) framework for the first time, which can more comprehensively\nlearn the style features of motion. Moreover, we apply the Mamba model for the\nfirst time in the motion style transfer field, introducing the Motion Style\nMamba (MSM) module to handle longer motion sequences. Thirdly, aiming at the\nSMCD framework, we propose Diffusion-based Content Consistency Loss and Diffusion-based Style\nConsistency Loss to assist the overall framework's training. Finally, we\nconduct extensive experiments. The results reveal that our method surpasses\nstate-of-the-art methods in both qualitative and quantitative comparisons,\ncapable of generating more realistic motion sequences.", + "authors": "Ziyun Qian, Zeyu Xiao, Zhenyi Wu, Dingkang Yang, Mingcheng Li, Shunli Wang, Shuaibing Wang, Dongliang Kou, Lihua Zhang", + "published": "2024-05-05", + "updated": "2024-05-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "1 INTRODUCTION Motion style transfer is a significant research direction in multimedia applications.
The objective is to transpose the style from the style reference onto the content motion while conserving the motion content. As such, the generated motion can possess features from both the content and style motion, thus enabling the swift switching between different styles for a digital humanoid\u2019s identical motion, as depicted in Figure 1. Employing this technology can dramatically enrich and heighten the realism of digital human motion. It is being broadly adapted into various multimedia contexts such as movies, games, the Metaverse and so on. Traditional methods for motion style transfer [1, 12, 25] mainly adopt a generation framework based on GAN [7]. However, GAN training is known to suffer from instability and convergence issues, arXiv:2405.02844v1 [cs.CV] 5 May 2024 \fPreprint, 2024, Conference Paper Ziyun Qian, et al leading to difficulties in generating high-fidelity, natural motion sequences. On the contrary, the diffusion framework process during training tends to be more stable and is typically easier to converge. Therefore, to address the aforementioned problems, we adopt the diffusion model as our generative framework and consider style motion sequences a diffusion condition for the first time. Consequently, we propose the Style Motion Conditioned Diffusion (SMCD) Framework. This framework is capable of learning motion detail features and style variations more comprehensively, generating motions with content and style motion characteristics, thereby achieving more realistic and natural motion style transfer. However, upon the proposition of the SMCD framework, we discover it failed to effectively extract the temporal information of the motion sequences, leading to the generation of disordered motion. To address this problem, we are inspired by the Mamba [8] model and thus propose the Motion Style Mamba (MSM) module. The MSM module effectively captures sequence temporal information utilizing the Selection Mechanism, preserving long-term temporal dependencies within a motion sequence. We are the first researchers to introduce the Mamba [8] model to the field of motion style transfer. Additionally, since we propose a new framework for motion style transfer, suitable loss functions to aid in training are currently lacking. In light of this, we specially design the Diffusion-based Content Consistency Loss and Diffusion-based Style Consistency Loss, tailoring them to the characteristics of our proposed SMCD Framework. These loss functions are utilized to constrain the content and style of the generated motions, and achieve better results. In the experiment section, we carry out extensive comparative tests using other methods. Visual effects and quantifiable indicators show that the motions generated by the proposed SMCD framework possess higher naturality and realism. Furthermore, it maintains the original motion style while generating various motions, such as walking, running, and jumping. In summary, the main contributions of this paper can be summarized as follows: \u2022 We propose a new motion style transfer framework, SMCD, for the first time, considering style motion sequences as conditions for diffusion to generate motions. \u2022 We first utilize the Mamba model [8] in the field of motion style transfer, and propose the MSM module. This module is designed to extract the temporal information of motion sequences better, thereby maintaining long-term dependencies in the time sequence of motion sequences. 
\u2022 Due to the lack of loss functions that fully adapt to our SMCD framework, we propose the Diffusion-based Content Consistency Loss and Diffusion-based Style Consistency Loss to assist in training for the first time, enabling the model to achieve improved results. \u2022 We conduct extensive experiments to evaluate our framework. The results indicate that our proposed SMCD framework surpasses the effects of state-of-the-art methods in terms of visual effects and quantitative indicators. 2 RELATED WORKS Motion Style transfer. Motion style transfer is a significant research area in multimedia applications. Early methods [3, 29] utilize handcrafted feature extraction to design different motion styles. These approaches, however, are inefficient and incapable of quickly generating large-scale stylized motions. Later, some methods [18, 34] attempt to employ machine learning for motion style transfer. However, these methods typically require a paired dataset for training, meaning they need a human avatar to perform the same motion using different styles, such as running in both a happy and a sad state, with nearly similar steps. Such an intricate process limited the creation of large-scale paired motion datasets. In recent years, specific methods [1, 4, 12, 25] borrow techniques from image style transfer, utilizing deep learning structures for digital human motion style transfer. These methods do not require paired training datasets and achieve sound motion style transfer effects. However, most adopt a Generative Adversarial Network (GAN) [7] based generation framework. GAN [7] training is known to suffer from instability and convergence issues, which results in difficulties in generating realistic, high-fidelity motion sequences. To resolve these problems, we propose a diffusion-based motion style transfer framework. Furthermore, we are the first to consider style motion as a condition within diffusion, allowing a more comprehensive learning of content and style features within a motion sequence. This results in a more realistic, more natural motion style transfer. Diffusion Generative Models. Diffusion consists of both a forward process and a reverse process, forming a Markovian architecture that reverses predetermined noise using neural networks and learns the underlying distribution of data. The researchers highly favor the diffusion model for its excellent performance in various research areas, such as image generation [22, 24, 30], video generation [9], reinforcement learning [13], 3D shape generation [45], and more, benefiting from the advances in learning-based technologies [35\u201341]. Compared to GANs [7] and VAEs [15], the diffusion model exhibits promising quality not only in image tasks but also in motion generation. The work [43] is the first text-based motion diffusion model that achieves body part-level control using fine-grained instructions. Tevet et al. [26] introduce a motion diffusion model, operating on raw motion data, and learn the relationship between motion and input conditions. The method [44] presents a retrievalaugmented motion diffusion model, leveraging additional knowledge from retrieved samples for motion synthesis. The research [33], in contrast to traditional diffusion models, devised a spatialtemporal transformer-based architecture as the core decoder, diverging from the conventional Unet backbone, to introduce diffusion into human motion prediction. Kim et al. 
[14] combine improved DDPM [19] and Classifier-free guidance [11] integrating diffusionbased generative models into the motion domain. The method [28] utilizes a Transformer-based diffusion model, couples with the Jukebox, to provide motion generation and editing suitable for dance. The effort [5] employs a 1D U-Net with cross-modal transformers to learn a denoising function, synthesizing long-duration motions based on contextual information such as music and text. Flaborea et al. [6] focus on the multimodal generation capability of diffusion models and the improved mode-coverage capabilities of diffusive techniques, applying them to detect video anomalies. However, \fSMCD: High Realism Motion Style Transfer via Mamba-based Diffusion Preprint, 2024, Conference Paper among the numerous diffusion-based frameworks, no work currently incorporates style motion as a condition and applies it to motion style transfer. 3 METHODOLOGY Pose Representation. We categorize the motion sequence input into the Style Motion Conditioned Diffusion (SMCD) framework into two types based on function. The first type, content motion sequence mc \u2208\ud835\udc454\ud835\udc3d\u00d7\ud835\udc41, has \ud835\udc41poses, each pose mci has 4\ud835\udc3ddimensions, i.e., mc = {\ud835\udc8eci}\ud835\udc41 \ud835\udc56=1. Similarly, the second type, style motion sequence \ud835\udc8fs \u2208R3\ud835\udc3d\u00d7\ud835\udc47, also has \ud835\udc41poses, each pose nsi has 3\ud835\udc3ddimensions, i.e., ns = {nsi}\ud835\udc41 \ud835\udc56=1. The content motion sequence \ud835\udc8ec can be represented using joint rotations with a source style c \u2208S. In contrast, the style motion sequence \ud835\udc8fs can be inferred from the relative motion of joint rotations to infer style, hence represented using joint rotations, with a target style s \u2208S. Here, S denotes the collection of all styles, \ud835\udc3d= 21 signifies the number of joints in the human skeleton. The objective of the SMCD framework is to generate a motion sequence that simultaneously possess the content characteristics of mc and the style features of ns, hence achieving motion style transfer. 3.1 Style Motion Conditioned Diffusion Framework A majority of current motion style transfer methodologies [2, 12, 25] predominantly adopt a generative framework based on GAN [7]. However, during training, GAN is prone to instability and convergence issues, often resulting in disorganized, chaotic motion sequences that struggle to embody a realistic, high-level natural motion style. On the contrary, the diffusion framework process during training tends to be more stable and is typically easier to converge. Therefore, to address the highlighted problems, we adopt a diffusion model as our generative framework. To ensure that the diffusion framework can learn the details of motion characteristics and style variations more comprehensively, we innovatively consider the style motion sequence ns as the condition C \u2208R\ud835\udc51\u00d7\ud835\udc41 for diffusion. Consequently, we propose the Style Motion Conditioned Diffusion (SMCD) Framework, achieving a more realistic and high-fidelity motion style transfer. We utilize the definition of diffusion delineated in DDPM [10], considering the forward diffusion process as a Markov noising process. 
By perpetually infusing Gaussian noise into the motion sequence m0 \u2208R\ud835\udc51\u00d7\ud835\udc41, we disrupt the motion sequence, thus obtaining {mt}T t=0, i.e., the full motion sequence at noising step t, where the m0 \u2208R\ud835\udc51\u00d7\ud835\udc41is drawn from the data distribution. This forward noising process can be defined as follows: \ud835\udc5e(mt | m0) \u223cN \u0010\u221a\u00af \ud835\udefc\ud835\udc61m0, (1 \u2212\u00af \ud835\udefc\ud835\udc61) I \u0011 , (1) where \u00af \ud835\udefc\ud835\udc61\u2208(0, 1) are monotonic decreasing constants, when approximating to 0, we can approximate mT \u223cN (0, \ud835\udc3c). We set timesteps T = 1000. 3.2 Motion Style Mamba Architecture Upon introducing the SMCD framework, the observation shows that the framework exhibited suboptimal performance in extracting temporal information from motion sequences, resulting in somewhat chaotic outcomes. Drawing inspiration from the Mamba model proposed by Gu et al. in reference [8], we propose the Motion Style Mamba (MSM) module to address this issue. This module employs a Selection Mechanism to more effectively capture the temporal dynamics of motion sequences, thereby preserving the long-term dependencies within the sequence and enhancing the efficacy of motion style transfer. To the best of our knowledge, we are the first to introduce the Mamba model for motion style transfer. The Motion Style Mamba (MSM) module primarily embeds independent temporal information into motion sequences. Prior to the input of motion sequences into the MSM module, it is requisite to subject the input motion sequences and temporal steps to the following processing procedures: Seq \ud835\udc47= \ud835\udc43\ud835\udc38\u0000concat \u0000\ud835\udc40\ud835\udc3f\ud835\udc43(\ud835\udc47), Linear \u0000\ud835\udc5b\ud835\udc60\u0001 , Linear \u0000\ud835\udc5a\ud835\udc50\u0001\u0001\u0001 , (2) where the temporal step size denotes as T, undergoes a projection through a multi-layer perceptron (MLP) comprising two linear layers succeeded by an activation layer, thereby mapping it into a continuous vector space. This process results in forming a latent vector that is amenable to manipulation by the Motion Style Mamba (MSM) module. \ud835\udc8f\ud835\udc60\u2208R3\ud835\udc3d\u00d7\ud835\udc47denotes to style motion sequence, \ud835\udc8e\ud835\udc84\u2208R4\ud835\udc3d\u00d7\ud835\udc41denotes to content motion sequence. Once processed through a linear layer, the two components are concatenated to form an augmented motion sequence. Upon undergoing positional encoding, this sequence is transformed into SeqT, which serves as the input for the Motion Style Mamba (MSM) module. Within the MSM module, the Mamba Block [8] undertakes the pivotal role of mapping temporal information via the temporal step size T onto both the content motion sequence and the style motion sequence while modulating the significance of the temporal information. Inside the Mamba Block, SeqT initially passes through a residual structure equips with an InstanceNorm (IN) layer, followed by feature extraction via Causal Conv1D [31]. The Causal Conv1D ensures that the value of each output is contingent solely upon its preceding input values. Moreover, the Selection Scan constitutes the core component of the Mamba Block, enabling the model to selectively update its internal state based on the current characteristics of the input data. 
This further refines to focus on temporal information, facilitating the capture of the temporal dependencies within the motion sequence. Utilizing the Selection Scan allows for a high degree of temporal alignment between the content motion and style motion, thereby circumventing the rigidity that may arise from asymmetrical motion sequences in the final output. The following formula can delineate the structure of the Mamba Block: \ud835\udc40\ud835\udc4e\ud835\udc5a\ud835\udc4f\ud835\udc4e0 \ud835\udc60= LN \u0000Seq\ud835\udc47 \u0001 , (3) \ud835\udc40\ud835\udc4e\ud835\udc5a\ud835\udc4f\ud835\udc4e\ud835\udc56 \ud835\udc60= LN \u0010 IN \u0010 \ud835\udc40\ud835\udc4e\ud835\udc5a\ud835\udc4f\ud835\udc4e\ud835\udc56\u22121 \ud835\udc60 \u0011\u0011 + IN \u0010 \u03a6 \u0010 \ud835\udf07 \u0010 IN \u0010 \ud835\udc40\ud835\udc4e\ud835\udc5a\ud835\udc4f\ud835\udc4e\ud835\udc56\u22121 \ud835\udc60 \u0011\u0011\u0011\u0011 , (4) \ud835\udc40\ud835\udc4e\ud835\udc5a\ud835\udc4f\ud835\udc4eres = LN \u0000Seq\ud835\udc47 \u0001 + \ud835\udc40\ud835\udc4e\ud835\udc5a\ud835\udc4f\ud835\udc4e\ud835\udc41 \ud835\udc60, (5) \fPreprint, 2024, Conference Paper Ziyun Qian, et al Linear Linear T MSM Style Motion \u2026 \u2026 Seq MLP \u2026 PE \ud835\udc8f\ud835\udc8f\ud835\udc94\ud835\udc94\ud835\udfcf\ud835\udfcf \ud835\udc8f\ud835\udc8f\ud835\udc94\ud835\udc94\ud835\udfd0\ud835\udfd0 \ud835\udc8f\ud835\udc8f\ud835\udc94\ud835\udc94\ud835\udc8f\ud835\udc8f \u0ddd \ud835\udc8e\ud835\udc8e\ud835\udfce\ud835\udfce \ud835\udfcf\ud835\udfcf \u0ddd \ud835\udc8e\ud835\udc8e\ud835\udfce\ud835\udfce \ud835\udfd0\ud835\udfd0 \u0ddd \ud835\udc8e\ud835\udc8e\ud835\udfce\ud835\udfce \ud835\udc8f\ud835\udc8f ... Content Motion \ud835\udc8e\ud835\udc8e\ud835\udc84\ud835\udc84\ud835\udfcf\ud835\udfcf \u2026 \ud835\udc8e\ud835\udc8e\ud835\udc84\ud835\udc84\ud835\udfd0\ud835\udfd0 \ud835\udc8e\ud835\udc8e\ud835\udc84\ud835\udc84\ud835\udc8f\ud835\udc8f ... ... Predicted Motion MSM Noisy Motion T Style Motion Diffuse 0 \u2192T-1 Style Motion T 1 MSM MSM Style Motion 1 \u0ddd \ud835\udc8e\ud835\udc8e\ud835\udfce\ud835\udfce Diffuse 0 \u21921 ... ... ... ... \u0ddd \ud835\udc8e\ud835\udc8e\ud835\udfce\ud835\udfce \u0ddd \ud835\udc8e\ud835\udc8e\ud835\udfce\ud835\udfce Figure 2: (Left) Overview of the Style Motion Conditioned Diffusion (SMCD) framework. The model inputs a content motion sequence mc with N poses in a noising step \ud835\udc61, as well as \ud835\udc61itself, and a style motion sequence \ud835\udc8fs considered as condition C. The Motion Style Mamba (MSM) module predicts the stylized motion m0 in each sampling step. (Right) Sampling MSM. Given the \ud835\udc8fs as condition C, we sample random noise mT at the dimensions of the desired motion, then iterate from T=1000 to 1. In each step \ud835\udc61, MSM predicts stylized motion m0 and diffuses it back to mT-1. where LN is the linear layer, IN is an Instance Normalization layer. \u03a6 is Selective Scan module, \ud835\udf07denotes to Causal Conv1D layer [31], \ud835\udc40\ud835\udc4e\ud835\udc5a\ud835\udc4f\ud835\udc4e\ud835\udc56 \ud835\udc60denotes the Mamba Block corresponding to the ith iteration of the cyclic process. Especially, \ud835\udc40\ud835\udc4e\ud835\udc5a\ud835\udc4f\ud835\udc4e0 \ud835\udc60denotes the input presented to the Mamba Block. \ud835\udc40\ud835\udc4e\ud835\udc5a\ud835\udc4f\ud835\udc4eres represents the output from the residual network that incorporates the Mamba Block as a constitutive element. 
After the Mamba Block structure facilitates the integration, the temporal information and motion sequences are consolidated and fed into a Multi-Head Attention (MHA) mechanism. This is further followed by the passage through a residual network augmented with a Position-wise Feed-Forward Network, which enhances the efficacy of the style transfer process. \ud835\udf0e= IN (LN ( Mamba res )) + \ud835\udc40\ud835\udc3b\ud835\udc34(LN ( Mamba res )) , (6) where \ud835\udf0erefers to the output of the residual network that includes the integration of MHA. The ultimate output of the MSM module \ud835\udc40\ud835\udc40\ud835\udc46\ud835\udc40can be articulated through the following equation: \ud835\udc40\ud835\udc40\ud835\udc46\ud835\udc40= \ud835\udc39\ud835\udc39\ud835\udc41(\ud835\udf0e) + IN(\ud835\udf0e), (7) where \ud835\udc39\ud835\udc39\ud835\udc41denotes the Position-wise Feed-Forward Network. 3.3 Training Objectives Our objective is to synthesize a motion sequence of length N that embodies both the characteristics of content motion and style motion under the given condition c in style motion sequence ns \u2208 \ud835\udc453\ud835\udc3d\u00d7\ud835\udc47. We model distribution \ud835\udc5d( m0 | C) as the reversed diffusion Mamba Block * N Input Linear Linear Wise position FFN Instance Norm Instance Norm MSM Block Linear MHA K Q V Linear Causal Conv1D Selective Scan Linear Instance Norm predicted motion Figure 3: Architecture of Motion Style Mamba (MSM) Module. process of iteratively cleaning mT. To better handle lengthy motion sequences and enhance computational efficiency, we propose the Motion Style Mamba (MSM) module. After noise mt, noising step t, and motion condition C are fed into the MSM module, we can directly predict the original motion sequence b m0, i.e., b m0 = MSM ( mt, t, C) = MSM ( mt, t, ns), without having to predict noise \ud835\udf16\ud835\udc61as the research [10] (see Figure 2 right). \fSMCD: High Realism Motion Style Transfer via Mamba-based Diffusion Preprint, 2024, Conference Paper Furthermore, we introduce the simple loss proposed by Ho et al. [10] to encourage the predicted motion sequence b m0 to be as consistent as possible with the original motion sequence m0: Lsimple = \ud835\udc38m0,\ud835\udc61\u223c[1,\ud835\udc47] h \u2225m0 \u2212MSM (mt, t, ns)\u22252 2 i . (8) Additionally, in light of the unique characteristics of the style motion conditioned diffusion framework proposed in this paper, we specially designe the Diffusion-based Content Consistency Loss (Eq.9) and Diffusion-based Style Consistency Loss (Eq.10). Diffusion-based Content Consistency Loss. When the inputted content motion sequence mc and style motion sequence ns share the same style (c=s), it would undoubtedly be ideal for the resulting generated motion to closely resemble content motion mc, regardless of the content of style motion ns. Due to the lack of loss functions that fully adapt to our SMCD framework, taking the above observation into account, we propose the Diffusion-based Content Consistency Loss under the style motion conditioned diffusion framework for the first time, aiming to constrain the motion content. In each iteration, two motion sequences with the same content are randomly selected from the dataset M to serve as the style motion and content motion, respectively. Subsequently, the Diffusion-based Content Consistency Loss is computed using the following formula: Ldcc = Emc,nc\u223cM \u2225\ud835\udc40\ud835\udc46\ud835\udc40(mc, t, nc) \u2212mc\u22251 . 
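One possible reading of Eqs. (6)-(7) in code form is given below: the Mamba output is sent through Multi-Head Attention and a position-wise feed-forward network, each wrapped in a residual connection with instance normalization. The FFN width and the head count are assumptions, and a final linear layer mapping back to the pose dimension is implied but omitted.

```python
import torch
import torch.nn as nn

class MSMHeadSketch(nn.Module):
    """Eq. (6): sigma = IN(LN(Mamba_res)) + MHA(LN(Mamba_res));
    Eq. (7): M_MSM  = FFN(sigma) + IN(sigma)."""

    def __init__(self, dim: int, n_heads: int = 8):
        super().__init__()
        self.proj = nn.Linear(dim, dim)                                   # LN
        self.norm = nn.InstanceNorm1d(dim)                                # IN
        self.mha = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))

    def _in(self, x: torch.Tensor) -> torch.Tensor:
        # apply InstanceNorm over the channel dimension of a (B, L, C) tensor
        return self.norm(x.transpose(1, 2)).transpose(1, 2)

    def forward(self, mamba_res: torch.Tensor) -> torch.Tensor:           # (B, L, C)
        h = self.proj(mamba_res)
        sigma = self._in(h) + self.mha(h, h, h, need_weights=False)[0]    # Eq. (6)
        return self.ffn(sigma) + self._in(sigma)                          # Eq. (7)
```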
(9) Two fundamental differences exist between our loss function and the Content Consistency Loss proposed by Aberman et al. [2] : (1) Our loss function is diffusion-based, and the timestep t can control the forward noising process based on motion. (2) The style motion in our loss function acts as a condition for diffusion, aligning more closely with the overall framework of this paper. Diffusion-based Style Consistency Loss. Following the same line of thinking as the Diffusion-based Content Consistency Loss, we also propose the Diffusion-based Style Consistency Loss for the first time. In each iteration, we randomly select two motion sequences with the same style from the dataset M as the style motion and content motion, respectively. The motion generated should be closer to the style motion ns. We calculate the Diffusionbased Style Consistency Loss using the following formula: Ldsc = Enc,ns\u223cM \u2225\ud835\udc40\ud835\udc46\ud835\udc40(nc, t, ns) \u2212ns\u22251 . (10) Geometric losses. Geometric losses are also frequently adopted in motion generation [20, 23, 27, 28] to enhance the physical realism of the motion, prompting the model to generate more naturally coherent motions. We employ three expected geometric losses, which control (1) positions, (2) foot contact, and (3) velocities. Lpos = 1 \ud835\udc41 \ud835\udc41 \u2211\ufe01 \ud835\udc56=1 \r \r \r\ud835\udc39\ud835\udc3e \u0010 mi 0 \u0011 \u2212\ud835\udc39\ud835\udc3e \u0010 b mi 0 \u0011\r \r \r 2 2 , (11) Lfoot = 1 \ud835\udc41\u22121 \ud835\udc41\u22121 \u2211\ufe01 \ud835\udc56=1 \r \r \r \u0010 \ud835\udc39\ud835\udc3e \u0010 mi+1 0 \u0011 \u2212\ud835\udc39\ud835\udc3e \u0010 b mi 0 \u0011\u0011 \u00b7 \ud835\udc53\ud835\udc56 \r \r \r 2 2 , (12) Lvel = 1 \ud835\udc41\u22121 \ud835\udc41\u22121 \u2211\ufe01 \ud835\udc56=1 \r \r \r \u0010 mi+1 0 \u2212mi 0 \u0011 \u2212 \u0010 b mi+1 0 \u2212b mi 0 \u0011\r \r \r 2 2\u2032 (13) where \ud835\udc39\ud835\udc3e(\u00b7) is the forward kinematic function that converts joint angles into joint positions, and the \ud835\udc56superscript denotes the motion frame index. \ud835\udc53\ud835\udc56\u2208{0, 1}\ud835\udc3dis the binary foot contact mask for each frame \ud835\udc56, indicating whether the foot is in contact with the ground. It is set according to the binary ground truth data and mitigates foot sliding by offsetting the velocity when contact occurs. Our total training loss function is a combination of the above six losses: Ltotal = Lsimple + Ldcc + Ldsc + Lpos + Lvel + Lfoot . (14) 4 EXPERIMENT In this section, we conduct extensive experiments comparing the method presented in this paper with state-of-the-art methods in terms of visual effects and quantitative metrics. Subsequently, we also test the effectiveness of the SMCD framework in performing motion style transfer to unseen style to assess the model\u2019s generalizability in practical applications. Ultimately, we conduct extensive ablation experiments to validate the effectiveness of each component within the SMCD framework. 4.1 Implementation Details We train and test based on the Xia dataset [34]. This dataset\u2019s Motion clips include 8 motion styles and 5 motion contents. We reduce the original 120fps motion data to 60fps and obtain approximately 1500 motion sequences in total. Our framework is implemented in PyTorch and trains on an NVIDIA A800, with a batch size of 512, using the AdamW optimizer [17]. The training process takes about 10 hours each time. 
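The losses in Eqs. (9)-(14) above can be summarized in a short sketch. The way the noising step t enters MSM for the two consistency terms follows the formulas literally; the forward-kinematics function `fk` and the layout of the foot-contact mask are assumptions about data conventions not fully specified here.

```python
import torch
import torch.nn.functional as F

def l_dcc(msm, m_c, n_c, t):
    """Eq. (9): for a same-content pair (m_c, n_c), the prediction should stay close
    to the content motion (L1 distance)."""
    return (msm(m_c, t, n_c) - m_c).abs().mean()

def l_dsc(msm, n_c, n_s, t):
    """Eq. (10): for a same-style pair (n_c, n_s), the prediction should stay close
    to the style motion (L1 distance)."""
    return (msm(n_c, t, n_s) - n_s).abs().mean()

def geometric_losses(m0, m0_hat, fk, foot_mask):
    """Eqs. (11)-(13). fk maps joint angles to positions (B, N, J, 3); foot_mask is the
    binary per-frame, per-joint contact mask (B, N, J)."""
    p, p_hat = fk(m0), fk(m0_hat)
    l_pos = F.mse_loss(p_hat, p)                                                # Eq. (11)
    vel_hat = p_hat[:, 1:] - p_hat[:, :-1]
    # Eq. (12), as commonly implemented: penalize predicted foot velocity on contact frames.
    l_foot = ((vel_hat * foot_mask[:, :-1, :, None]) ** 2).mean()
    l_vel = F.mse_loss(m0_hat[:, 1:] - m0_hat[:, :-1], m0[:, 1:] - m0[:, :-1])  # Eq. (13)
    return l_pos, l_foot, l_vel

def total_loss(l_simple, ldcc, ldsc, l_pos, l_vel, l_foot):
    """Eq. (14): unweighted sum of the six terms."""
    return l_simple + ldcc + ldsc + l_pos + l_vel + l_foot
```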
4.2 Visual Effect Comparison We qualitatively compare the visual effects in motion style transfer from three aspects: style expressiveness, content preservation, and motion realism. This comparison involves our proposed SMCD framework, the method proposed by Aberman et al. [1] and StyleERD [25]. Due to the scarcity of open-source papers in the field of motion style transfer, our comparison is limited to the two methods mentioned above. The content motion and style motion adopted in the experiments originate from the dataset proposed by Xia et al. [34] Under ideal circumstances, the model should be capable of transferring the style of the style motion to the content motion while preserving the content of the content motion. Hence, the generated motion sequence should embody content and style motion characteristics. As seen in Figure 4, we conduct three sets of motion style transfers. The results show that the motions generated by our SMCD framework can more realistically reflect the style while retaining the original content, demonstrating higher style expressiveness and content preservation. On the other hand, the frameworks [1] and [25] struggle to transfer the motion style effectively. Regarding motion realism, motions generated by our SMCD framework are more realistic. In contrast, the other two methods exhibit flaws at the ankles, shoulders, and other areas, as highlighted in red boxes in Figure 4. \fPreprint, 2024, Conference Paper Ziyun Qian, et al Input style Input content Aberman et al. Style-ERD Ours Old walk into neutral style Proud walk into sexy style Strutting run into old style \u4e0d\u7528\u586b \u4e0d\u7528\u586b \u4e0d\u7528\u586b Figure 4: A comparative visual representation of the SMCD framework with the methods proposed by Aberman et al. [1] and Style-ERD [25]. The image depicts the flaws in the generated motions, denoted by red boxes. 4.3 Quantitative Evaluation Inspired by MoDi [21], we adopt the following metrics to evaluate our framework quantitatively: \u2022 FID (Fr\u00e9chet Inception Distance): This metric measures the difference between the distribution of motions generated in the latent space and real motions to evaluate the quality of generated motions. The lower the FID score, the smaller the distribution difference between the generated and real motions, indicating a higher quality of the motion generated. \u2022 KID (Kernel Inception Distance): Similar to FID, it utilizes convolution to extract motion features when calculating the distance between feature statistical data. Compared with FID, the KID score is more sensitive to the local structure and details of generated motions. A lower KID score indicates a higher quality of the generated motion. \u2022 Diversity: Evaluate the degree of diversity of the generated movements. The higher the value, the more diverse the movements generated, indicating better generation outcomes. We conduct quantitative comparison experiments on the Xia dataset [34], as demonstrated by the results in Table 1. The quantitative comparison results on the BFA dataset [2] can be seen in the supplementary material. Due to the limited availability of publicly accessible datasets in motion style transfer, we only compare these two mainstream datasets. Table 1 reveals that our proposed SMCD framework surpasses the baseline [2, 25] on most metrics, achieving optimal results. This success stems from our SMCD framework and MSM module, which excel in learning content and style motion features and fusing them effectively. 
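As a reference for the FID metric described above, the standard computation is the Frechet distance between Gaussians fitted to real and generated feature distributions; the sketch below assumes the features have already been extracted by a pretrained motion encoder (the choice of encoder follows MoDi and is not shown here).

```python
import numpy as np
from scipy import linalg

def frechet_distance(feat_real: np.ndarray, feat_gen: np.ndarray) -> float:
    """FID between two feature sets (rows are samples):
    ||mu_r - mu_g||^2 + Tr(C_r + C_g - 2 (C_r C_g)^{1/2})."""
    mu_r, mu_g = feat_real.mean(axis=0), feat_gen.mean(axis=0)
    cov_r = np.cov(feat_real, rowvar=False)
    cov_g = np.cov(feat_gen, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_g, disp=False)
    if np.iscomplexobj(covmean):          # small imaginary parts can appear numerically
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))
```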
At the same time, these elements \fSMCD: High Realism Motion Style Transfer via Mamba-based Diffusion Preprint, 2024, Conference Paper Table 1: A quantitative comparison with State-of-the-art methods on the Xia dataset [34]. The best scores are emphasized in bold. Method FID\u2193 KID\u2193 Diversity\u2191 Aberman et al. [2] 19.405 0.953 2.639 Style-ERD [25] 17.682 0.869 2.595 Ours 16.676 0.768 2.602 maintain the long-term dependencies in temporal sequence within the motion sequence, leading to the generation of more realistic motion sequences. 4.4 Generalizability Our model is capable of extracting styles from any given motion clip. However, in practical applications within the multimedia field, motion style transfer models will likely encounter style categories outside the training dataset. At times like this, whether the model can transfer styles from unseen styles determines its generalization and usability. To compare the generalizability of our proposed SMCD framework with other methods, we train the model on the Xia dataset [34], which does not include angry label motions. Then, we conduct tests on a dataset that included angry style motions. The results, as shown in Figure 5, illustrate that when faced with an unseen motion style angry, our SMCD framework can still learn its characteristics. Our framework achieve better motion style transfer effects than [1] and [25]. The other two methods that participate in the comparison exhibited flaws when transferring unseen styles, as indicated by the red boxes in Figure 5. The results of the generalizability comparison indicate that our framework is more generalizable and practical. Its ability to perform more effectively in various multimedia fields, such as movies, games, and the Metaverse, distinguishes it from other methods. 4.5 Ablation Studies In order to verify the necessity of each component in our model, we conduct extensive ablation experiments, removing the MSM module, the loss functions Lsimple , Ldcc, Ldsc respectively to train the model, and then utilize the same evaluation metrics as quantitative evaluation for validation. As shown in Table 2, the removal of any one component significantly degrades all evaluation metrics of the SMCD framework, with the most noticeable drop in performance for motion style transfer when the MSM module is removed. In addition, we also present the motion effect diagram generated by the model after removal, as illustrated in Figure 6. It can be observed that the motion has many flaws, and it does not effectively reflect the style of the motion. The results of the ablation experiment also affirm the effectiveness of each component in our SMCD framework; they all play integral roles and are indispensable. To further compare the motion style transfer performance of our proposed MSM module with other modules, we substitute the MSM module for four modules: STGCN [42], Transformer Encoder [32], iTransformer [16], and Mamba [8], and retrain the framework for comparative experiments. We leverage the same evaluation metrics Table 2: Ablation experiments on various components of the SMCD framework. The best scores are highlighted in bold. Setting FID\u2193 KID\u2193 Diversity\u2191 Ours w/o Lsimple 17.546 0.831 2.158 Ours w/o Ldcc 22.410 1.168 2.473 Ours w/o Ldsc 20.294 1.030 1.931 Ours w/o MSM 23.330 1.458 1.433 Ours 16.676 0.768 2.602 Table 3: Comparison results between the MSM module and other modules. The best scores are highlighted in bold. 
Module FID\u2193 KID\u2193 Diversity\u2191 STGCN [42] 21.119 1.021 2.269 Transformer [32] 18.977 0.952 2.080 iTransformer [16] 19.177 0.862 2.392 Mamba [8] 20.962 0.925 2.579 MSM(Ours) 16.676 0.768 2.602 as mentioned above to assess the performance. As shown in Table 3, our MSM module outperform all other modules on all quantitative evaluation metrics, fully demonstrating its superiority in achieving a better motion style transfer effect. We hypothesize that this success is due to the MSM module\u2019s superior ability to capture the temporal information and stylization characteristics of motion sequences, thereby effectively transferring styles while maintaining the long-term dependencies within the sequence. Due to space limitations, more ablation experiment results will be demonstrated in the supplementary materials. 4.6 User study In addition to the qualitative and quantitative comparisons, we conduct a user study to perceptually evaluate the realism, style expressiveness, and content preservation of our style transfer results. As detailed below, we recruite 50 volunteers to respond to a questionnaire consisting of three types of questions. In this part, we assess the realism of the generated motions. Two motions depicting the same type of content and style (such as a depressed walk) are presented to the volunteers. The motions originated from three different sources: (1) our original Xia dataset [34], (2) results generated by method [2], (3) results generated by StyleERD [25], and (4) results generated by our framework. Note that (2), (3), and (4) are all generated using similar inputs. Participants are asked, \"Which motion above looks more like actual walking?\" and must choose one of the four motion sources. Table 4 presents the realism ratios for each method in generating motions. It is easy to find out that 85.2% of our results are judged as realistic, closely resembling the proportion in the real Xia dataset [34]. Notably, this ratio is significantly higher than method [2] with 15.1% and Style-ERD [25] with 28.7%. Content Preservation and Style Transfer. This part compares our style transfer results with those generated by Aberman et al. [2] and Style-ERD [25] regarding content preservation and style \fPreprint, 2024, Conference Paper Ziyun Qian, et al Input unseen style Input content Aberman et al. Style-ERD Ours Neutral run Angry style Neutral run into angry style Neutral run into angry style Neutral run into angry style Childlike style Angry walk Angry walk into childlike style Angry walk into childlike style Angry walk into childlike style Figure 5: Illustration of Unseen Styles. Training on datasets [34] without the angry style, then testing conventionally to evaluate their generalizability when dealing with an unseen style. Red boxes highlight flaws in the generated motions. Input style Input content Ours \ud835\udc98\ud835\udc98/\ud835\udc90\ud835\udc90\u2112\ud835\udc85\ud835\udc85\ud835\udc85\ud835\udc85\ud835\udc85\ud835\udc85 Ours \ud835\udc98\ud835\udc98/\ud835\udc90\ud835\udc90\u2112\ud835\udc85\ud835\udc85\ud835\udc85\ud835\udc85\ud835\udc85\ud835\udc85 Ours (Full) Angry walk Neutral style Angry walk into neutral style Angry walk into neutral style Angry walk into neutral style Neutral style Sexy walk Sexy walk into neutral style Sexy walk into neutral style Sexy walk into neutral style Figure 6: The motion generated by the model trained post-removal of Ldcc and Ldsc. Red boxes highlight flaws in the generated motions. Table 4: The user study for realism ratios. 
Xia dataset [34] Aberman et al. [2] Style-ERD [25] Ours 88.9% 15.1% 28.7% 85.2% transfer. Volunteers are presented with a content input, a style input, and the results of motion style transfer from three models. They are initially asked to choose which model\u2019s motion content is closer to the input content, followed by selecting which model\u2019s motion style is closer to the input style. The results of the user study are shown in Table 5. The findings indicate that our method achieve the best content preservation and style transfer outcomes. 64.8% and 72.3% of the volunteers perceive that our method\u2019s motion content/style is closer to the input content/style. In contrast, the proportions for the other two methods [1] [25] were significantly lower than ours Table 5: The user study for content preservation and style transfer. Evaluation Metrics Aberman et al. [2] Style-ERD [25] Ours Content Preservation 20.7% 14.5% 64.8% Style Transfer 10.9% 16.8% 72.3% 5" + } + ], + "Zeyu Xiao": [ + { + "url": "http://arxiv.org/abs/2305.18994v1", + "title": "Toward Real-World Light Field Super-Resolution", + "abstract": "Deep learning has opened up new possibilities for light field\nsuper-resolution (SR), but existing methods trained on synthetic datasets with\nsimple degradations (e.g., bicubic downsampling) suffer from poor performance\nwhen applied to complex real-world scenarios. To address this problem, we\nintroduce LytroZoom, the first real-world light field SR dataset capturing\npaired low- and high-resolution light fields of diverse indoor and outdoor\nscenes using a Lytro ILLUM camera. Additionally, we propose the Omni-Frequency\nProjection Network (OFPNet), which decomposes the omni-frequency components and\niteratively enhances them through frequency projection operations to address\nspatially variant degradation processes present in all frequency components.\nExperiments demonstrate that models trained on LytroZoom outperform those\ntrained on synthetic datasets and are generalizable to diverse content and\ndevices. Quantitative and qualitative evaluations verify the superiority of\nOFPNet. We believe this work will inspire future research in real-world light\nfield SR.", + "authors": "Zeyu Xiao, Ruisheng Gao, Yutong Liu, Yueyi Zhang, Zhiwei Xiong", + "published": "2023-05-30", + "updated": "2023-05-30", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "eess.IV" + ], + "main_content": "Introduction The light field imaging technique enables the capture of the light rays not only at different locations but also from different directions [4]. The limited spatial resolution caused by the essential spatial-angular trade-off restricts the capability of light field in practical applications, such as post-capture refocusing [47, 66], disparity estimation [60,63,80], and seeing through occlusions [30,65,79]. Attempting to recover a high-resolution (HR) light field from its low-resolution (LR) observation, light field superresolution (SR) has emerged as a significant task in both academia and industry. Benefitting from the rapid development of deep learning techniques, convolutional neural network (CNN) based and vision Transformers based methods have demonstrated promising performance for light field SR [10, 19, 29, 39, 40, 42, 46, 58, 62\u201364, 67, 70, 72\u201374, 77, 78]. They outperform traditional non-learning-based methods [2,10,52] with appreciable improvements. 
However, existing deep methods remain limited because they are trained on simulated light field datasets which assume simple and uniform degradations (e.g., bicubic downsampling) due to the natural difficulty of collecting LR-HR light field pairs. Degradations in real applications are much more complicated and such simulated degradations usually deviate from real ones. This degradation mismatch makes 1 arXiv:2305.18994v1 [cs.CV] 30 May 2023 \f(a) (b) Subject Lytro ILLUM Tripod Focus Adjustor Focus Adjustor \u00d7 4 \u00d7 2 GT Ground-truth GT \u00d7 \ud835\udfd2 (b) (c) Figure 2. (a) The capturing system of the LytroZoom dataset. (b) One example of pixel-wise aligned pairs from LytroZoom-P. (c) One example of pixel-wise aligned pairs from LytroZoom-O. existing light field SR methods unpractical in real-world scenarios [8, 9]. Fig. 1 shows the light field SR results of a real-world light field captured by a Lytro ILLUM camera. We utilize the advanced IINet [63] to train several light field SR models using the mixed simulated dataset with bicubic degradation (IINet + BI) and light field pairs with authentic distortions in our LytroZoom (IINet + LytroZoomP/LytroZoom-O). The results clearly show that, the IINet trained on the simulated dataset (Fig. 1(b)) is less effective in super-resolving on a real-world light field (e.g., a light field captured by Lytro ILLUM). To remedy the above mentioned problem, it is highly desired that we have a light field SR dataset of paired LR-HR pairs more consistent with real-world degradations. However, collecting such a real-world light field SR dataset is non-trivial since the ground-truth HR light fields are tough to obtain. Inspired by [6], in which a real-world single image SR dataset is built upon the intrinsic resolution and field-of-view (FoV) degradation in realistic imaging systems, we capture images of the same scene (i.e., indoor and outdoor scenes) using a Lytro ILLUM camera (Fig. 2(a)) with different adjusted focal lengths. LR and HR light field pairs at different scales (e.g., \u00d72 and \u00d74) can be collected by adjusting the focal length. We utilize the alignment algorithm proposed in [5] to rectify distortions caused by different FoVs, including spatial misalignment, intensity variation, and color mismatching. Scenes that could not be rectified are eliminated from the dataset. As a result, we collect LytroZoom, a dataset of 94 aligned pairs featuring city scenes printed on postcards (LytroZoom-P), as well as 63 aligned pairs captured with outdoor scenes (LytroZoom-O), as illustrated in Fig. 2(b) and Fig. 2(c). Featuring scenes with spatial diversity and fine-grained details printed on postcards, and depth variations in outdoor scenes, LytroZoom presents a benchmark dataset for light field SR algorithms in real-world scenarios. As can be seen in Fig. 1(c)-(d), IINet trained on LytroZoom-P and finetuned on LytroZoom-O delivers much better results than the one trained on the simulated data. Compared with those simulated datasets, the degradation process in LytroZoom is much more complicated. In particular, real-world degradations exists in all frequency components and are spatially variant [15, 16, 49]. This motivates us to design a network which can consider omni-frequency information and enhance cross-frequency representations. In this paper, we propose the Omni-Frequency Projection Network (OFPNet) to efficiently solve the real-world light field SR problem. 
Specifically, we first decompose the input LR light field into high-frequency, middle-frequency and low-frequency components. Then, in order to improve the corresponding frequency representations, we employ multiple interactive branches that include frequency projection (FP) operations. Cores of the FP operation are iterative upsampling and downsampling layers to learn non-linear relationships between LR and HR frequency components for super-resolving a real-world light field. As shown in Fig. 1(e), our OFPNet can generate better results. This work makes three key contributions. (1) We collect LytroZoom, the first real-world paired LR-HR light field SR dataset, which overcomes the limitations of synthetic light field SR datasets and provides a new benchmark for training and evaluating real-world light field SR methods. (2) We demonstrate that light field SR models trained on LytroZoom perform better on real-world light fields than those trained on synthetic datasets. (3) We propose OFPNet, a novel baseline network for real-world light field SR, achieving superior results compared to existing methods. 2. Related work Single image SR. Single image SR methods have traditionally been trained on manually synthesized LR images using pre-defined downsampling kernels, such as bicubic interpolation [11,14,17,26,31,34,35,38,41,43,45,48,54,82,83]. However, these methods are not directly applicable to realworld images due to their more complex degradation kernels. Blind SR methods have recently emerged as a solution, which can be categorized into explicit degradation modeling [3, 18, 25, 32, 57, 76], capture training pairs [5, 6, 59, 69, 81] and generate training pairs [27, 44, 75, 84]. Inspired by the success of single image SR, we propose LytroZoom, the first real-world paired LR-HR light field SR dataset, and a novel OFPNet for real-world light field SR. The view-dependent degradations in LytroZoom are implicitly modeled by the OFPNet, providing an effective solution for real-world light field SR. Light field SR. Traditional non-learning methods depend on geometric [37, 52] and mathematical [2] modeling of the 4D light field structure to super-resolve the reference 2 \fLytro ILLUM Light field taken at 250mm Light field taken at 60mm Adjusting focal length Aligned LR light field HR light field Center-cropped HR light field Center-cropped LR light field Crop Crop Position and color alignment Figure 3. The processing pipeline of LytroZoom. Here we show how to obtain a \u00d74 LR-HR real-world light field pair from the postcard. view through projection and optimization techniques. Deep methods now dominate light field SR due to their promising performance. As a pioneering work along this line, Yoon et al. [73] propose the first light field SR network LFCNN by reusing the SRCNN [14] architecture with multiple channels. After that, several methods have been designed to exploit across-view redundancy in the light field, either explicitly [10, 29, 62, 78] or implicitly [46, 63, 64, 72, 74]. Transformer-based methods have recently demonstrated the effectiveness in light field SR [39, 40, 58]. Recently, Cheng et al. [9] propose a zero-shot learning framework to solve the domain shift problem in light field SR methods. This work can be regarded as a step towards light field SR in real-world scenarios, but real degradations in light field SR are not approached. In this paper, we collect the first real-world paired LR-HR light field SR dataset. Light field SR datasets. 
Several popular datasets, including EPFL [51], HCInew [23], HCIold [68], INRIA [33], STFgantry [55], and STFlytro [50] are widely used for training and evaluating light field SR methods. Recently, Sheng et al. [53] collect UrbanLF, a comprehensive light field dataset containing complex urban scenes for the task of light field semantic segmentation. This dataset has the potential to be extended to light field SR as well. In all these datasets, the LR light fields are synthesized by the bicubic downsampling operation. The light field SR models trained on the simulated pairs may exhibit poor performance when applied to real LR light fields where the degradations deviate from the simulated ones [61]. In this paper, we collect the first real-world paired LR-HR light field SR dataset using a Lytro ILLUM camera. We capture light fields at multiple focal lengths, providing a general benchmark for real-world light field SR. 3. Capturing the LytroZoom Dataset Our goal is to collect a real-world light field SR dataset consisting of paired LR-HR light fields. This is challenging, as it requires accurately aligned LR-HR sequences of the same scene. To address this challenge, we use the Lytro ILLUM camera, which can directly display the actual focal lengths during shooting. We capture city scenes printed on postcards and outdoor static objects as subjects. We define a light field captured at 250 mm focal length as the HR ground-truth and the ones captured at 120mm and 60mm focal lengths as the \u00d72 and \u00d74 LR observations [5, 6]. We standardize our data collection process by shooting postcards at the lowest ISO setting and maintaining a constant white balance and aperture size. For outdoor scenes, we adjust the ISO value to ensure optimal exposure and minimize noise. The camera is fixed on a tripod for stabilization. Without post-processing procedures, such as color correction and histogram equalization, we decode the captured raw light field data using the light field toolbox [12,13]. This results in 15 \u00d7 15 \u00d7 625 \u00d7 434 light field pairs. However, spatial misalignment, intensity variation, and color mismatching may exist in different views due to uncontrollable changes during lens zooming. Therefore, we use the alignment algorithm [5] view-by-view to rectify LR light fields iteratively to preserve view-dependent degradations. We then center-crop light field pairs to mitigate lens distortion and the vignetting effect. Please refer to Fig. 3 for the entire pipeline. After careful shooting, decoding, rectifying, cropping, and selection, we collect a dataset with 94 city scenes printed on postcards and 63 outdoor static scenes. We provide pixel-wise aligned light field pairs (i.e., groundtruth, \u00d72 and \u00d74 LR observations) with the resolution of 5 \u00d7 5 \u00d7 456 \u00d7 320 (LytroZoom-P) and 5 \u00d7 5 \u00d7 608 \u00d7 416 (LytroZoom-O). Fig. 2 shows samples from the dataset. We randomly partition LytroZoom-P into 63 scenes for training, 17 for validation, and the remaining 15 scenes for testing. Similarly, we use 55 scenes for training and the remaining 10 scenes for testing in LytroZoom-O. 4. Omni-Frequency Projection Network In Sec. 3, we have collected LytroZoom, containing diverse contents captured by a Lytro ILLUM camera. Therefore we have access to the LR-HR pairs for training light 3 \fConv S=4 Conv S=2 Conv S=1 Frequency Projection Down-Proj. 
Frequency Projection Frequency Projection C Frequency Projection C Frequency Projection Frequency Projection Reconstructor Residual connection \u2112\ud835\udc3f\ud835\udc45 \u2112\ud835\udc46\ud835\udc45 \u0de8 \u2131\ud835\udc59 \u0de8 \u2131 \ud835\udc5a \u0de8 \u2131\u210e Upsampling Calculating residue C Concatenate Addition \u210e \u2131\u210e 0 \u2131 \ud835\udc5a 0 \u2131\ud835\udc59 0 \u2131\u210e 1 \u2131\u210e 2 \u2131 \ud835\udc5a 1 Freq. up-proj. Freq. down-proj. Figure 4. Architecture of our proposed OFPNet, which consists of three main components: (a) the omni-frequency decomposition which employs three convolutional layers to decompose the input real LR light field LLR in frequency domain, (b) the frequency projection to enhance texture details across different frequency components. (c) a reconstrutor aims at aggregating the omni-frequency components for the HR light field reconstruction. The reconstructed LSR is obtained by adding the predicted residual to an LR light field. field SR networks. However, existing light field SR methods do not entirely account for the characteristics of realdegraded light fields. To address this, we propose OFPNet for real-world light field SR, which can model real degradations in real-world light field SR. 4.1. Overview Following [62, 64, 67, 72, 74, 78], we super-resolve the Y channel images, leaving Cb and Cr channel images being bicubic upscaled for light field SR. Without considering the channel dimension and given an LR light field LLR \u2208RU\u00d7V \u00d7H\u00d7W with few details and textures, we aim at generating an HR light field LSR \u2208RU\u00d7V \u00d7H\u00d7W with more details, which should be close to the ground-truth LGT \u2208RU\u00d7V \u00d7H\u00d7W . U and V represent angular dimensions, and H and W represent spatial dimensions. Inspired by recent progress in real-world single image SR, the degradation exists in all frequency components [15, 28, 36, 49, 85], we propose OFPNet, in which omni-frequency components are considered for real-world light field SR. In OFPNet, we first decompose the input LR light field LLR into high-frequency, middle-frequency, and low-frequency components, i.e., Fh, Fm, and Fl. Then the FP operations are utilized on three frequency branches to enhance frequency representations. These branches are interacted to enhance the cross-frequency representations. After we obtain the enhanced frequency features \u02dc Fh, \u02dc Fm, and \u02dc Fl, we feed them to the reconstructor to generate LSR. 4.2. Omni-Frequency Decomposition To obtain the informative omni-frequency representation for real-world light field SR, we first decompose LLR into different frequency components. We utilize the learnable spatial downsampling operations to decompose frequency components in the feature domain, which is in a spirit similar to the octave convolution [1,7]. As shown in Fig. 4, we first downsample LLR by a convolution layer with stride=4 to get the corresponding lowfrequency component Fl. Then we obtain the middle frequency component Fm by removing Fl from the corresponding original feature, which is downsampled with a stride = 2 convolutional layer. Similarly, to get the high frequency component Fh, we remove the downsampled feature with stride=2 from the feature extracted with stride=1 convolutional layer. 
The whole process can be denoted as Fl = conv2 \u0000conv2(LLR) \u0001 , Fm = conv2(LLR) \u2212[conv2 \u0000conv2(LLR) \u0001 ] \u21912, Fh = conv(LLR) \u2212[conv2(LLR)] \u21912, (1) where conv(\u00b7) denotes the convolution layer without downsampling and conv2(\u00b7) denotes the convolution layer with stride = 2. [\u00b7] \u2191r means the bilinear upsampling operation with the factor=r. 4.3. Frequency Projection The extracted frequency components face the inevitable information loss problem caused by irreversible convolutional layers. Inspired by the back-projection operation that produces an HR feature map and iteratively refines it through multiple upsampling and downsampling layers to learn nonlinear relationships between LR and HR images [20\u201322,24], we introduce the FP operation to enhance frequency feature representations, making up for the information lost. As shown in Fig. 4, the FP operation consists of the frequency up-projection unit (FUPU) and the frequency downprojection unit (FDPU), in which nonlinear relationships between LR and HR frequency features can be exploited iteratively. We first project the extracted frequency feature Fn\u22121 to corresponding HR representation U n\u22121 based on a frequency scale-up block U n\u22121 = Up(Fn\u22121), (2) 4 \fwhere Up(\u00b7) denotes the frequency scale-up block. It first fuses multi-view information progressively using the residual blocks, in which the inter-view correlations can be exploited, then upsamples the fused feature by bilinear interpolation followed by a 1\u00d71 convolutional layer [42]. Please refer to the supplementary document for the detailed structure. m denotes the number of FUPU. Then we project the HR representation back to LR one and obtain the corresponding residuals en\u22121 between the back-projected representation and original LR input en\u22121 = Down(U n\u22121) \u2212Fn\u22121, (3) where Down(\u00b7) denotes the frequency scale-down block. It first reduces the resolution of U n\u22121 to the original one via a 4\u00d74 convolutional layer with stride=2, followed by fusing multi-view information progressively. Finally, we back-project the residual to the HR representation and eliminate the corresponding super-resolved representation errors to obtain the final output of FUPU U n = Up(en\u22121) + U n\u22121. (4) The procedure for FDPU is similar to FUPU. FDPU aims to obtain refined LR frequency representations by projecting the previously updated HR frequency representation. Please see the supplementary document for more details. We can enhance the representations of different frequency components thanks to the FP operations. In practice, however, the high-frequency component is relatively tricky to enhance [15,69]. We, therefore, propose to enhance such challenging frequency components in a coarse-to-fine manner. Specifically, we encourage the interaction between different frequency components and progressively utilize the enhanced lower frequency representations to help the enhancement of higher frequency components by concatenating them together. The final enhanced frequency features can be denoted as \u02dc Fl = FP1(Fh), \u02dc Fm = FP2(conv([FP1(Fm), \u02dc Fl])), \u02dc Fh = FP3(conv([FP2(FP1(Fh)), \u02dc Fm])), (5) where FPn(\u00b7) denotes the n-th FP operation, and conv(\u00b7) here aims at reducing the channel dimensions. 4.4. Reconstructor We feed the enhanced frequency features to the reconstructor to generate the super-resolved results. 
We first concatenate \u02dc Fl, \u02dc Fm, \u02dc Fh along the channel dimension, followed by a convolutional layer to reduce the channel number. The concatenated feature is further fed to the Feature Blending Module (FBM) and the Upsampling Module [42]. Note that we remove the pixel shuffling operation because the LR-HR pairs have the same spatial resolution. Bangkok2 Monaco2 Santorini2 Bicycle2 Chair1 Florence6 NewYork4 Seattle2 Box2 Flower3 Istanbul2 Paris1 Seattle6 Box7 Playground2 London1 Prague1 Sydney6 Building4 Stair3 London4 Provence4 Venice5 Building10 Wall2 Figure 5. Thumbnails of 15 test scenes from LytroZoom-P (first three rows) and 10 test scenes from LytroZoom-O (last two rows). We show the ground-truth central view images here. The L1-norm loss function is employed to minimize the pixel-wise distance between the generated HR light field LSR and the ground-truth LGT . L(LGT , LSR) = \u2225LGT \u2212LSR\u2225. (6) 5. Experiments 5.1. Experimental Settings Inference settings. PSNR and SSIM (the higher, the better) are adopted to evaluate the reconstruction accuracy. Following [62,64,67,72,74,78], the light field SR results are evaluated using PSNR and SSIM indices on the Y channel in the YCbCr space. We evaluate the performance of different networks under the settings of \u00d72 and \u00d74 light field SR. To compare the angular consistency of the reconstructed HR results, the epipolar plane images (EPIs) are visualized for quantitative comparison in this paper. Selected baseline methods. We select three representative and advanced light field SR networks, i.e., InterNet [64], DPT [58], and IINet [42], as our main baselines. Note that, we find that ATO [29] and DistgSSR [63] cannot converge on LytroZoom, so we do not include them in the comparison. We follow the same experimental settings reported in their paper and retrain these networks based on their publicly available codes. Note that, we do not compare methods that require large memory consumptions during the training stage (e.g., LFT [39]). We also exclude single image SR methods because previous work [62, 64, 67, 72, 74, 78] has proved that these methods cannot generate results with high fidelity and good angular consistency. Implementation and training details of OFPNet. In our implementation, the channel number is set to 32 unless oth5 \fFlorence6 LR Ground-truth InterNet-BI InterNet-Ours DPT-BI DPT-Ours IINet-BI IINet-Ours Figure 6. Visual comparisons (\u00d74 SR) of different models (trained on the BI and LytroZoom-P datasets) on the LytroZoom-P testset. Table 1. Average PSNR (dB) and SSIM results on the LytroZoom-P testset by different methods. \u201cBI\u201d indicates the method is trained on bicubic-downsampled datasets, and \u201cLZ-P\u201d indicates the method is trained on LytroZoom-P. Metric Scale InterNet DPT IINet OFPNet BI LZ-P BI LZ-P BI LZ-P PSNR \u00d72 31.55 38.78 31.55 38.65 31.88 38.78 38.89 \u00d74 26.12 29.60 25.63 29.43 26.04 29.82 30.11 SSIM \u00d72 0.9138 0.9764 0.9145 0.9754 0.9170 0.9771 0.9779 \u00d74 0.7685 0.8626 0.7569 0.8560 0.7621 0.8706 0.8786 Table 2. Average PSNR (dB) and SSIM results on LytroZoom-O testset by different methods (pretrained on LytroZoom-P and fine-tuned on LytroZoom-O). Metric Scale InterNet DPT IINet OFPNet PSNR \u00d72 30.15 30.08 30.28 30.79 \u00d74 27.88 27.95 28.22 28.91 SSIM \u00d72 0.8738 0.8603 0.8737 0.8863 \u00d74 0.7627 0.7620 0.7770 0.7991 erwise specified. We utilize the Adam optimizer with parameters \u03b21 = 0.9 and \u03b22 = 0.999. 
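Pulling together Eq. (1), the frequency projection of Eqs. (2)-(4) and the reconstructor described above, a compact PyTorch-style sketch follows. The `up`/`down` blocks, the FBM and the Upsampling Module are stubbed (the paper reuses IINet components for them), per-view channel handling is simplified, and the bilinear resampling before concatenation is an assumption since the frequency components live at different spatial scales.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OmniFrequencyDecomposition(nn.Module):
    """Eq. (1): low / middle / high frequency components via strided convolutions."""
    def __init__(self, in_ch: int, ch: int = 32):
        super().__init__()
        self.conv_s1 = nn.Conv2d(in_ch, ch, 3, stride=1, padding=1)
        self.conv_s2 = nn.Conv2d(in_ch, ch, 3, stride=2, padding=1)
        self.conv_s2b = nn.Conv2d(ch, ch, 3, stride=2, padding=1)

    def forward(self, lr):  # lr: (B, in_ch, H, W), light-field views folded into channels
        f_s1, f_s2 = self.conv_s1(lr), self.conv_s2(lr)
        f_l = self.conv_s2b(f_s2)                                       # conv2(conv2(L_LR))
        f_m = f_s2 - F.interpolate(f_l, scale_factor=2, mode="bilinear", align_corners=False)
        f_h = f_s1 - F.interpolate(f_s2, scale_factor=2, mode="bilinear", align_corners=False)
        return f_l, f_m, f_h

class FrequencyUpProjection(nn.Module):
    """One FUPU pass, Eqs. (2)-(4); `up` / `down` stand in for the frequency scale-up /
    scale-down blocks (residual view fusion plus resampling)."""
    def __init__(self, up: nn.Module, down: nn.Module):
        super().__init__()
        self.up, self.down = up, down

    def forward(self, f_lr):
        u = self.up(f_lr)             # Eq. (2): project the LR frequency feature up
        e = self.down(u) - f_lr       # Eq. (3): residual between back-projection and input
        return self.up(e) + u         # Eq. (4): back-project the residual and correct

def reconstruct(f_l, f_m, f_h, lr_y, fuse, fbm, upsampler):
    """Reconstructor: concatenate the enhanced components, reduce channels, refine, and add
    the LR input back as a residual (no pixel shuffle, since LR and HR share resolution)."""
    size = f_h.shape[-2:]
    f_l = F.interpolate(f_l, size=size, mode="bilinear", align_corners=False)
    f_m = F.interpolate(f_m, size=size, mode="bilinear", align_corners=False)
    x = fuse(torch.cat([f_l, f_m, f_h], dim=1))   # e.g. fuse = nn.Conv2d(3 * ch, ch, 1)
    return upsampler(fbm(x)) + lr_y
```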
Each mini-batch consists of 2 samples with 72 \u00d7 72 patches for \u00d72 light field SR and 4 samples with 64 \u00d7 64 patches for \u00d74 light field SR. We first train OFPNet on the LytroZoom-P dataset and then fine-tune the pretrained OFPNet on LytroZoom-O. The initial learning rate is set to 1e \u22124, and we reduce it by a factor of 0.5 every 2,000 epochs until 8,000 epochs during the training stage. During the fine-tuning stage, the initial learning rate is also set to 1e \u22124, and we reduce it by a factor of 0.5 every 1,000 epochs until 5,000 epochs. OFPNet is trained and fine-tuned on two NVIDIA GTX 1080Ti GPUs. 5.2. Simulated Datasets v.s. LytroZoom To demonstrate the advantages of the LytroZoom dataset, we conduct experiments to compare the performance of light field SR models trained on simulated datasets and the LytroZoom-P dataset. We employ the mixed light field datasets [64] with the angular resolution of 5\u00d75 to generate simulated \u00d72 and \u00d74 light field pairs with the bicubic degradation (BI). We train InterNet [64], IINet [42], and DPT [58] on BI and LytroZoom-P for each of the two scaling factors (\u00d72 and \u00d74). Since the resolution of the LR-HR pair is the same in LytroZoom-P, we upsample the LR inputs of BI bicubicly and feed the pre-upsampled inputs to the light field SR networks. We then attach the de-subpixel layer [56] at the beginning of each network to achieve an efficient inference. We train three networks on BI and LytroZoom-P, respectively, and calculate the average PSNR and SSIM values on the LytroZoom-P testset. As shown in Table 1, models trained on our LytroZoom-P dataset obtain significantly better performance than those trained on the BI dataset for both scaling factors. Specifically, the results of the models trained on the BI dataset are even worse than the LR observations. The reason is apparent: the networks trained with simulated LR-HR light field pairs inevitably get disastrous results when facing complex degradation in real-world scenes. In Fig. 6, we visualize the super-resolved central view images obtained by different models. As can be seen, results generated by models trained on the BI dataset tend to have blurring edges with obvious artifacts. On the contrary, models trained on LytroZoom-P generate clearer results. 5.3. Baseline Methods v.s. OFPNet We compare our proposed OFPNet with three selected baseline methods. All models are trained/fine-tuned and tested on LytroZoom-P and LytroZoom-O. LytroZoom-P. As is shown in Table 1, our OFPNet earns the highest PSNR and SSIM values. For example, one can see that OFPNet surpasses IINet [42], the state-of-theart light field SR method, by 0.29dB/0.0080 on \u00d74 SR in terms of PSNR/SSIM. Fig. 7 shows the super-resolved central view images for qualitative comparison. In terms of visual quality, OFPNet beats previous methods, providing fine details without introducing unappealing artifacts in general. For example, as seen in Fig. 7, baseline methods can hardly restore the external shape of the buildings in Bangkok2. In contrast, our proposed OFPNet can generate results with vivid details and patterns. LytroZoom-O. As is shown in Table 2, on the LytroZoomO testset, our OFPNet also earns the highest PSNR and SSIM values. Fig. 8 shows the super-resolved central view images from LytroZoom-O for qualitative comparison. OFPNet can generate results with fine details. 5.4. 
Generalization Tests Our LytroZoom-trained light field SR models exhibit robust generalization capabilities, both in terms of content and device. Specifically, our LytroZoom-P-trained models perform well on a scene captured at 200 mm focal length, 6 \fSeattle6 LR Ground-truth InterNet DPT IINet OFPNet Bangkok2 LR Ground-truth InterNet DPT IINet OFPNet Figure 7. Visual comparisons (central views) of different models (trained on LytroZoom-P) on the LytroZoom-P testset. Top: \u00d72 SR. Bottom: \u00d74 SR. Please zoom in for better visualization and best viewed on the screen. Wall2 LR Ground-truth InterNet DPT IINet OFPNet Stair3 LR Ground-truth InterNet DPT IINet OFPNet Figure 8. Visual comparisons (central views) of different models (fine-tuned on LytroZoom-O) on the LytroZoom-O testset. Top: \u00d72 SR. Bottom: \u00d74 SR. Please zoom in for better visualization and best viewed on the screen. as demonstrated in Fig. 9, despite being trained on indoor scenes printed on postcards. For the device generalization, as shown in Fig. 10, the light field SR models trained on the dataset captured by a Lytro ILLUM camera can be readily applied to different devices such as Gantry (we superresolve the input light field directly from the scene Lego7 \fOutdoor scene captured at 200 mm Input DPT-BI DPT-LZ-P IINet-BI IINet-LZ-P OFPNet Figure 9. Visual comparisons (central views) of different models on real-world light field scenes (\u00d72 SR). LZ-P is short for LytroZoom-P. Lego-Knights Input InterNet DPT IINet OFPNet Figure 10. Visual comparisons (central views) of different models on real-world light field scene captured by Gantry. Table 3. Analysis of the OFPNet on \u00d74 SR on LytroZoom-P. Frequency decomposition Frequency Projection PSNR Fl Fm Fh Interactions FP operation % % \u2713 29.87 % \u2713 \u2713 29.98 \u2713 \u2713 \u2713 30.11 % % 29.60 \u2713 % 29.86 % \u2713 29.99 \u2713 \u2713 30.11 Knights in STFgantry [55]). 5.5. Model Analysis Investigation of the frequency decomposition. We investigate different extracted frequency components in OFPNet. We have added several residual blocks after the extracted frequency features while removing the corresponding components in Table 3 to ensure the parameters remain unchanged. As shown in Table 3, our results demonstrate that incorporating additional frequency components in OFPNet leads to improved performance in terms of PSNR, with gains of 0.24 dB and 0.13 dB when considering higher frequency components. These results suggest that the omnifrequency components play a crucial role in real-world light field SR. Further details and analysis can be found in the supplementary document. Investigation of frequency projection. When we simultaneously remove the interactions between frequency components and the FP operations (we replace the FP operations with residual blocks of the same parameters), we only get a result of 29.60dB in terms of PSNR. When we add interaction and FP operations, the results improve by 0.26dB and 0.39dB, respectively, indicating the importance of these two designs. The best result can be obtained using these two designs simultaneously (30.11dB) on LytroZoom-P. 6. Discussion Despite the encouraging performance as shown above, the LytroZoom dataset still has certain limitations. (1) The use of a single data collection device, specifically a Lytro ILLUM camera, limits the generalizability of models trained on LytroZoom to other cameras. 
While these models can perform well on the Gantry dataset, there may still be a domain shift when applied to light fields captured by cameras with different baselines, resulting in artifacts (as seen in Fig. 10). To address this limitation, future work can expand the LytroZoom dataset to include light fields captured by other types of cameras, such as camera arrays and Gantry. (2) Minor distortions that cannot be rectified. The registration step [5] could alleviate the distortions caused by different FoVs, yet minor misalignment and luminance/color differences exist between the LR-HR light fields. It is likely due to these minor distortions that cause the non-convergence of ATO [29] and DistgSSR [63] during the training stage [71]. We will investigate new training strategies. (3) Inadequate benchmark experiments. We will extend LytroZoom with more scaling factors and conduct experiments on scale-arbitrary real-world light field SR. 7." + }, + { + "url": "http://arxiv.org/abs/2305.13620v1", + "title": "A Dive into SAM Prior in Image Restoration", + "abstract": "The goal of image restoration (IR), a fundamental issue in computer vision,\nis to restore a high-quality (HQ) image from its degraded low-quality (LQ)\nobservation. Multiple HQ solutions may correspond to an LQ input in this poorly\nposed problem, creating an ambiguous solution space. This motivates the\ninvestigation and incorporation of prior knowledge in order to effectively\nconstrain the solution space and enhance the quality of the restored images. In\nspite of the pervasive use of hand-crafted and learned priors in IR, limited\nattention has been paid to the incorporation of knowledge from large-scale\nfoundation models. In this paper, we for the first time leverage the prior\nknowledge of the state-of-the-art segment anything model (SAM) to boost the\nperformance of existing IR networks in an parameter-efficient tuning manner. In\nparticular, the choice of SAM is based on its robustness to image degradations,\nsuch that HQ semantic masks can be extracted from it. In order to leverage\nsemantic priors and enhance restoration quality, we propose a lightweight SAM\nprior tuning (SPT) unit. This plug-and-play component allows us to effectively\nintegrate semantic priors into existing IR networks, resulting in significant\nimprovements in restoration quality. As the only trainable module in our\nmethod, the SPT unit has the potential to improve both efficiency and\nscalability. We demonstrate the effectiveness of the proposed method in\nenhancing a variety of methods across multiple tasks, such as image\nsuper-resolution and color image denoising.", + "authors": "Zeyu Xiao, Jiawang Bai, Zhihe Lu, Zhiwei Xiong", + "published": "2023-05-23", + "updated": "2023-05-23", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Image restoration (IR) is a fundamental problem in computer vision that aims to recover high-quality (HQ) images from their degraded low-quality (LQ) observations caused by various degradations, such as blur, noise and compression artifacts. The IR tasks encompass image superresolution (SR), image denoising, dehazing, JPEG deblocking, etc. However, due to the nature of the degradation pro*Equal contribution Set14: Barbara Ground-truth ART-Ours Input ART SAM map Ground-truth LR input SAM map Ground-truth Noisy input Figure 1. Illustration of SAM\u2019s robustness on low-quality images (e.g. low-resolution and noisy images). 
It shows SAM can segment objects correctly given low-quality images. This observation motivates us to leverage the semantic priors extracted from SAM, a large-scale foundation model, to enhance image restoration performance. Examples are from Set5 Bird and McMaster 0007, respectively. cess, it is an ill-posed problem in practice, leading to multiple HQ solutions corresponding to an LQ input, posing significant challenges for accurate IR. Numerous image priors [33, 43, 86, 87, 129] have been proposed to regularize the solution space of latent clear images in IR tasks. For instance, the self-similarity prior [26, 30,35,46,91] produces visually pleasing results in image SR task. Total variation [87], wavelet-domain processing [24], and BM3D [16] are proposed for the image denoising task by assuming the prior distribution to be smoothness, low rank and self-similarity. For image dehazing, assumptions are made on atmospheric light, transmission maps, or clear images [27,43]. While these task-specific image priors have demonstrated superior performance for IR methods, they are frequently based on observations of specific image properties that may not always reflect the inherent image properties. In addition, the design and selection of task-specific image priors rely on manual and empirical efforts, and the corresponding IR models require intricate optimization. Recently, it has been increasingly popular to adopt deep models to construct more general priors for IR tasks. For instance, the seminal work on deep image prior (DIP) [96] has shown that a randomly initialized convolutional neural network (CNN) can implicitly capture texture-level image priors, which can be utilized for IR. SinGAN [88] demonstrates that a randomly initialized generative adver1 arXiv:2305.13620v1 [cs.CV] 23 May 2023 \fsarial network (GAN) model can capture rich patch statistics after being trained on a single image. Furthermore, a GAN generator trained on a large dataset of natural images can be used as a generic image prior, referred to as a deep generative prior (DGP) [83]. The mentioned methods have shown remarkable performance in IR and image manipulation tasks. In particular, the CNN and GAN models used in these works are either trained from scratch on a single image or pre-trained on an external dataset. In this paper, we focus on examining whether foundation models pre-trained on extremely large-scale datasets, such as those containing billions of samples, with strong transfer capabilities can provide richer and more helpful priors for IR tasks. To this end, we take the first step towards leveraging the semantic-aware prior extracted from a powerful foundation model for segmentation, segment anything model (SAM) [55], which has been trained on a massive dataset called SA-1B containing 1 billion masks and 11 million images. Our motivation for using SAM as a semantic prior for IR tasks stems from its remarkable robustness on degraded images, including those that are with low-resolution and noise, as illustrated in Figure 1. Specifically, we obtain semantic masks of a degraded image by feeding it to the pre-trained SAM, which is referred to as the SAM prior in this paper. Our method utilizes semantic masks acquired from SAM to enhance the performance of existing IR methods through integration with a lightweight SAM prior tuning (SPT) unit. This integration of high-level semantic information with intermediate features leads to superior restoration results. 
Specifically, the SPT unit acts as a plug-and-play component by selectively transferring semantic priors to enhance the low-level features and spatial structures of the input LQ image. To better exploit the potential of the semantic priors obtained from SAM, we propose a parameter-efficient tuning scheme to update the SPT units. The SPT unit consists of a small number of learnable parameters and can be easily integrated into existing IR methods. Our proposed method efficiently integrates semantic priors with existing intermediate features of various CNN-based and Transformerbased IR methods, yielding significant performance improvements over the baselines on benchmark datasets for a range of IR tasks, including image SR and color image denoising. With the success of the SPT unit in IR tasks, we hope that our work can encourage further studies on incorporating semantic priors into other deep learning-based models. Overall, our contributions can be summarized as follows: (1) This paper introduces a novel approach to enhance the performance of IR methods by leveraging the prior knowledge obtained from the state-of-the-art foundation model for segmentation, SAM. This is the first time such a large-scale pre-trained prior has been used in the context of IR, and we demonstrate that it can be highly effective in improving the restoration quality. (2) In order to incorporate the semantic priors obtained from SAM, we propose a lightweight SPT unit that can be easily integrated into existing IR methods as a plug-andplay component. By designing the SPT unit as the only trainable module, we achieve both efficiency and scalability, in contrast to full fine-tuning pipeline which can be computationally expensive and time-consuming. (3) We comprehensively evaluate the effectiveness of our proposed SPT unit as a plug-in for enhancing existing IR methods, including both CNN-based and Transformerbased methods, on various tasks such as image SR and color image denoising. Experimental results demonstrate that our method consistently outperforms existing state-ofthe-art methods, highlighting its superiority and generalizability. 2. Related Work 2.1. Image Restoration Compared to traditional model-based IR methods [36, 44, 79, 94, 95, 108], learning-based methods, particularly those based on CNNs, have shown impressive performance and gained increasing popularity. These deep models learn mappings between LQ and HQ images from large-scale paired datasets. Since the pioneering work of SRCNN [22] (for image SR), DnCNN [118] (for image denoising), and ARCNN [21] (for JPEG compression artifact reduction), a large number of CNN-based models have been proposed to improve model representation ability through more elaborate neural network architecture designs, such as residual blocks [7, 52, 117], dense blocks [102, 125, 126], and others [12, 15, 19, 31, 32, 37, 48, 50, 53, 57, 59\u201362, 65\u201367, 84, 92, 98\u2013100, 104, 105, 120, 121, 124]. Some of these models have also incorporated attention mechanisms inside the CNN framework, such as channel attention [17, 80, 123], non-local attention [69, 77], and adaptive patch aggregation [128]. Recently, due to the limited ability of CNNs to model long-range dependencies, researchers have started to explore the use of pure self-attention modules for IR [8,14, 64,72,106,112,115]. In contrast to existing IR methods, our method does not introduce any novel architecture. 
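For reference, extracting the semantic masks that serve as the SAM prior can be done with the publicly released `segment_anything` package roughly as follows; the ViT-H checkpoint name matches the public SAM release, while the way the per-object masks are aggregated into the prior tensor fed to the SPT units is our assumption rather than a detail stated here.

```python
import numpy as np
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

def sam_prior(lq_image: np.ndarray, checkpoint: str = "sam_vit_h_4b8939.pth") -> np.ndarray:
    """Run SAM's automatic mask generator on a degraded (LQ) image and stack the binary
    masks into a semantic-prior array of shape (num_masks, H, W)."""
    sam = sam_model_registry["vit_h"](checkpoint=checkpoint)
    generator = SamAutomaticMaskGenerator(sam)
    masks = generator.generate(lq_image)  # lq_image: HxWx3 uint8 RGB array
    # Aggregation into a fixed-channel prior (e.g. keeping the K largest masks or packing
    # them one-hot) is a design choice left open in this sketch.
    return np.stack([m["segmentation"] for m in masks], axis=0).astype(np.float32)
```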
Instead, we aim to enhance the performance of existing methods by leveraging the prior generated from a large pre-trained model, such as SAM [55], in a tuning manner, refining and polishing the existing intermediate features through the proposed lightweight SPT unit. 2.2. Hand-Crafted Image Priors Image priors that describe various statistics of natural images have been widely developed and adopted in IR and image editing. For different IR tasks, priors are also de2 \fsigned specifically based on the characteristics of the imaging and degradation models. In the image super-resolution (SR) task, the self-similarity prior is able to produce visually pleasing results without extensive training on external databases since a natural image tend to recur within and across scales of the same image [26, 30, 35, 46, 91]. The heavy-tailed gradient prior [89], sparse kernel prior [28], l0 gradient prior [109], normalized sparsity prior [56] and dark channel prior [82] are proposed to solve the image deblurring task. While these traditional hand-crafted priors frequently capture specific statistics and serve specific purposes, there is a growing interest in finding more general priors that capture richer image statistics via deep learning models. In this paper, we present a parameter-efficient tuning scheme to leverage the prior knowledge from SAM for the task of IR. To the best of our knowledge, our work is the first to introduce the use of SAM for the task of image restoration, demonstrating the potential of leveraging pretrained semantic priors for improving IR methods. 2.3. Learned Image Priors Convolutional neural networks (CNNs) [18,23,78] have been proposed to capture useful priors by learning mappings between LQ and HQ images from external training data. Recent research has shown that deep CNNs can implicitly capture image statistics, making them effective priors for restoring corrupted images. DIP [96] and single image generative adversarial networks (SinGAN) [88] have demonstrated the effectiveness of these deep priors in IR tasks, but their applicability may be limited due to their reliance on image-specific statistics. While other deep priors such as deep denoiser prior [2, 119], TNRD [13], and LCM [3] have been developed for IR tasks, our focus is not on competing with them. Instead, we aim to study and exploit the integration of knowledge from large-scale foundation models (e.g., SAM [55]) for IR. To the best of our knowledge, this is the first attempt to leverage the prior knowledge from SAM for IR tasks. By introducing the prior generated from SAM in a tuning manner, we aim to further improve the performance of existing IR methods without proposing any new architecture. Our approach complements existing deep priors and provides a promising direction for future research in the field of IR. 2.4. Large-Scale Foundation Models In the era of big data, large-scale foundation models become important components of artificial intelligence. The recent development of large models mainly benefits from the advanced training schemes (e.g., self-supervised training [20, 39, 51]) and scalable network architectures (e.g., Transformer [25,97]). The early works such as BERT [51] and RoBERTa [71] utilize masked language modeling to obtain powerful pre-trained models on various natural language processing (NLP) tasks. Most recently, ChatGPT and GPT-4 [81] developed by OpenAI demonstrates remarkable capabilities on a variety of domains, and even shows sparks of artificial general intelligence [5]. 
In computer vision, to leverage large-scale image data in a selfsupervised manner, contrastive learning [10,11] and masked image modeling [40, 107] have been explored, which provide rich pre-trained knowledge for downstream tasks. As a representative work, CLIP [85] learns visual representations from the supervision of natural language using 400 million image-text pairs, showing an impressive transferable ability. Besides, recent works such as IPT [9] and DegAE [110] demonstrate foundation models pre-trained on the large-scale data can improve the performance of lowlevel vision tasks. Recently, Meta AI Research released a foundation model namely SAM [55] for open-world image segmentation. Due to its great potential, an important future direction is to use SAM to aid the downstream tasks [73]. In this paper, we explore how to improve IR performance with the semantic prior knowledge from SAM. 2.5. Parameter-Efficient Fine-tuning To introduce additional knowledge from a new dataset or domain into the well-trained models, early works usually fine-tune the whole model parameters [34, 41, 42]. However, this scheme requires a large amount of computational resources and time. As an alternative, parameter-efficient fine-tuning [58,63,70] is firstly proposed in NLP to exploit pre-trained large language model. It has also been extensively studied for image classification tasks. For instance, SpotTune [38] studies different fine-tuned layers, TinyTL [6] only learns the bias modules, and side-tuning [116] trains a lightweight network and uses summation to fuse it with the pre-trained network. Regarding vision and language models, e.g., CLIP [85], parameter-efficient tuning [111, 127] is also leveraged for the performance enhancement on downstream tasks. Some recent methods such as Adapter [45] and VPT [49] are developed for Transformerbased architectures, which insert a small number of learnable parameters inside each Transformer layer. Different from these works, we study the parameter-efficient finetuning for IR with the purpose of introducing the semantic prior knowledge. 3. Preliminary 3.1. Network Definition For an LQ input image ILQ \u2208RH\u00d7W \u00d7Cin, an IR network IRNet(\u00b7) can generate an HQ image \u02c6 IHQ \u2208 RrH\u00d7rW \u00d7Cout \u02c6 IHQ = IRNet(ILQ), (1) 3 \fSAM Local-Global Temporal Interactions Shallow Feature Extraction Deep Feature Extraction Building Block \ud835\udc351 Building Group Building Group Building Group \u2026 Conv Block SPT Unit Building Group Building Group Building Group \u2026 Conv Block SAM Prior Tuning Unit SPT Unit Reconstructor \u2026 Input image \ud835\udc3c\ud835\udc3f\ud835\udc44 Restored image \u1218 \ud835\udc3c\ud835\udc3b\ud835\udc44 SAM map Encoder \ud835\udefc \ud835\udefc \ud835\udc351 Building Block \ud835\udc352 prior \ud835\udcab \ud835\udc3c\ud835\udc3f\ud835\udc44 C Conv ReLU Conv Figure 2. Illustration of our proposed method. In comparison to traditional IR methods that typically employ a shallow feature extractor followed by a deep feature extractor with multiple building blocks and a reconstructor, we present a novel method that efficiently improves network performance by leveraging prior knowledge obtained from SAM [55]. Our proposed method involves integrating semantic masks obtained from SAM into SPT units, which combine the semantic priors with intermediate features of existing IR methods. As the SPT unit is the only trainable module, our approach is both efficient and scalable compared to full fine-tuning scheme. 
Incorporating the SAM prior into our SPT unit allows for effective exploitation of prior knowledge from the large-scale foundation model and improved restoration quality. which should be close to the ground-truth image IGT . H, W, Cin, Cout, and r are the image height, width, input channel, output channel, and the scale factor for image super-resolution, respectively. As shown in Figure 2, an IR network consists of three main components: shallow feature extraction, deep feature extraction, and the reconstruction part. Without loss of generality, we leverage a convolution layer as shallow feature extraction to get the low-level feature F0 \u2208RH\u00d7W \u00d7C F0 = Enc(ILQ), (2) where C denotes the feature number, and Enc(\u00b7) denotes the convolution layer, serving for the shallow feature extraction. Then, the shallow feature is processed by the deep feature extraction module, which is composed of N1 building blocks, obtaining the extracted deep feature FDF \u2208 RH\u00d7W \u00d7C. The above procedure can be formulated as FDF = BN1(. . . (B2(B1(F0)) . . . ), (3) where Bi(\u00b7) is the i-th building block. Finally, we can get the HQ output image through the reconstruction part \u02c6 IHQ = Rec(FDF ), (4) where Rec(\u00b7) denotes the reconstruction part. In terms of the composition of the reconstruction module, it varies depending on the specific IR task. In the case of image superresolution, a sub-pixel convolution layer [90] with a factor of r is used to upsample the deep feature FDF to match the size of the high-resolution output. This is followed by a convolution layer both before and after the upsampling module to aggregate the features. On the other hand, for the tasks such as image denoising, the reconstruction module only consists of a single convolution layer that adjusts the channel dimension of FDF from C to Cout. The LQ input is then added to the convolution output to produce the final output. This residual learning approach can help accelerate the convergence of the network during training. 3.2. Segment Anything Model In recent years, there has been a growing interest in foundational models pre-trained on large-scale datasets due to their ability to generalize to various downstream tasks. One such example is the recently released SAM by Meta AI Research [55]. By incorporating a single user prompt, SAM can accurately segment any object in any image or video without the need for additional training, which is commonly referred to as the zero-shot transfer in the computer vision community. According to [55], SAM\u2019s impressive capabilities are derived from a vision foundation model that has been trained on an extensive SA-1B dataset comprising over 11 million images and one billion masks. The emergence of SAM has undoubtedly demonstrated strong generalization across various images and objects, opening up new possibilities and avenues for intelligent image analysis and understanding. 
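For readers who prefer code, the decomposition in Eqs. (1)-(4) can be summarized by the schematic PyTorch sketch below; the building block and reconstructor are simplified placeholders, not the actual IMDN/ART/CAT architectures used later.
```python
# Schematic sketch of the generic IR backbone assumed by Eqs. (1)-(4).
import torch
import torch.nn as nn

class BuildingBlock(nn.Module):          # stand-in for one block B_i (residual form)
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)

class IRNet(nn.Module):
    def __init__(self, c_in=3, c_out=3, channels=64, n_blocks=8, scale=4):
        super().__init__()
        self.enc = nn.Conv2d(c_in, channels, 3, padding=1)      # Eq. (2): F0 = Enc(I_LQ)
        self.blocks = nn.ModuleList(
            [BuildingBlock(channels) for _ in range(n_blocks)])  # Eq. (3): B_1 ... B_N1
        self.rec = nn.Sequential(                                # Eq. (4): SR reconstructor
            nn.Conv2d(channels, c_out * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale))

    def forward(self, lq):
        feat = self.enc(lq)
        for block in self.blocks:
            feat = block(feat)                                   # deep feature F_DF
        return self.rec(feat)                                    # Eq. (1): I_HQ = IRNet(I_LQ)
```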
Given an image I \u2208 RH\u00d7W \u00d7Cin, SAM can generate a segmentation mask tensor MSAM \u2208RH\u00d7W \u00d7Nc MSAM = SAM(I), (5) 4 \fDeep Feature Extraction Building Block \ud835\udc351 Building Block \ud835\udc352 Enc \ud835\udc39 1 \ud835\udc39 2 \u2026 \ud835\udc39 \ud835\udc411 \ud835\udc40\ud835\udc46\ud835\udc34\ud835\udc40 C \ud835\udc39 \ud835\udc56 \ud835\udc60\ud835\udc5d\ud835\udc61 \ud835\udc53 \ud835\udc53 \ud835\udcab\ud835\udc56+1 \ud835\udcab\ud835\udc56 \ud835\udc53 SPT Unit SPT Unit SPT Unit \ud835\udc40\ud835\udc46\ud835\udc34\ud835\udc40 \u2026 \ud835\udc39 \ud835\udc56 \ud835\udc39 \ud835\udc56 \u2032 \ud835\udc39 1 \ud835\udc60\ud835\udc5d\ud835\udc61 \ud835\udefc \ud835\udc39 1 \ud835\udc5b \ud835\udc39 2 \ud835\udc60\ud835\udc5d\ud835\udc61 \ud835\udefc \ud835\udc39 2 \ud835\udc5b \ud835\udc39 \ud835\udc411 \ud835\udc60\ud835\udc5d\ud835\udc61 \ud835\udefc \ud835\udc39 \ud835\udc411 \ud835\udc5b \u2026 Reconstructor \u1218 \ud835\udc3c\ud835\udc3b\ud835\udc44 \ud835\udc53 Conv ReLU Conv SPT Unit Convolution Conv ReLU Activation operation C Concatenation Multiplication Sum \ud835\udc53 \ud835\udc3c\ud835\udc3f\ud835\udc44 Shallow Feature Extraction \ud835\udc39 \ud835\udc411 \ud835\udc39 1 \ud835\udc39 2 Figure 3. Illustration of the SPT unit and the efficient tuning scheme. The SPT unit takes in the semantic map MSAM extracted from SAM, the deep feature Fi extracted from the i-th building block, and the SAM prior representation P as input. It then outputs a new feature map F spt i , which incorporates the correlation between Fi and P. To efficiently incorporate this new feature map, it is added to the original feature map Fi with a weighting factor of \u03b1. The tuned feature maps are then fed into the subsequent building blocks of the network. where Nc denotes the number of masks. SAM has shown robustness in segmenting low-quality images and producing relatively accurate semantic masks. Therefore, we propose to leverage these semantic masks as priors for IR. By utilizing the rich semantic information in the maps, the IR networks are able to restore more HQ details in the reconstructed images. We prompt SAM with an 8 \u00d7 8 regular grid of foreground points for each degraded image, resulting in less than 64 masks in most cases. We fix the number of masks fed into the image restoration networks as 64 by adopting the zero-padding when the masks are insufficient and truncation when the number of masks is larger than 64. We also discuss more choices of the number of masks in our experimental part. 4. Method 4.1. SAM Prior Tuning Unit SAM [55] has shown to have promising segmentation capabilities in various scenarios and is robust to various image degradations. We utilize the extracted semantic map MSAM from SAM as the prior to provide diverse and rich information to improve the performance of existing IR methods. We first concatenate the LQ input image ILQ and the semantic map MSAM extracted from SAM along the channel dimension. Then, the concatenated feature is fed to two convolution layers with a ReLU activation operation between them (denoted as f(\u00b7)), resulting in the SAM prior representation P \u2208RH\u00d7W \u00d7C P = f([ILQ, MSAM]). (6) To provide a concrete example of how the SPT unit works, we use the feature Fi extracted from the i-th building block without loss of generality. 
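A minimal sketch of this preprocessing and of the prior head f(·) in Eq. (6) is given below (channel widths and the 64-mask budget mirror the coarse 8×8-grid setting; the exact layer sizes are illustrative); the per-block SPT computation itself is walked through in the next paragraph.
```python
# Sketch: pad/truncate the SAM mask stack to a fixed count and form P = f([I_LQ, M_SAM]).
import torch
import torch.nn as nn

def fix_mask_count(masks: torch.Tensor, num_masks: int = 64) -> torch.Tensor:
    """masks: (N, H, W) for one image -> (num_masks, H, W) via zero-padding/truncation."""
    n, h, w = masks.shape
    if n >= num_masks:
        return masks[:num_masks]
    pad = torch.zeros(num_masks - n, h, w, dtype=masks.dtype, device=masks.device)
    return torch.cat([masks, pad], dim=0)

class SAMPriorHead(nn.Module):
    """f(.) in Eq. (6): two convolutions with a ReLU in between."""
    def __init__(self, c_img=3, num_masks=64, channels=64):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(c_img + num_masks, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, lq_image, sam_masks):
        # expects batched tensors: (B, 3, H, W) and (B, num_masks, H, W)
        return self.f(torch.cat([lq_image, sam_masks], dim=1))   # SAM prior representation P
```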
As shown in Figure 3, we first concatenate F \u2032 i with MSAM and feed the concatenated feature to f(\u00b7) to generate the enhanced feature representation F \u2032 i . Next, F \u2032 i and P are separately fed into the feature branch and the SAM prior branch, respectively. Each branch consists of two convolution layers with ReLU activation in between. The output features of both branches are multiplied to obtain the correlation, and skip connections are added to both branches to enhance the representation ability of the entire SPT unit. These procedures can be formulated as Pi+1 = f(Pi) + Pi, F spt i = f(F \u2032 i ) \u2217f(P) + F \u2032 i . (7) By inserting the SPT unit into N1 building blocks of existing IR networks as a plug-and-play unit, a new network structure is formed, which can utilize the semantic information from the SAM prior to improve IR performance. 4.2. Efficient Tuning Scheme In order to reduce the computational cost during the training stage, we introduce a parameter-efficient tuning 5 \fscheme that leverages pre-trained IR networks. Instead of training an IR network from scratch or re-training an existing one, we only update the trainable parameters. This not only reduces the computational cost but also enhances the overall efficiency of our method. To incorporate the new feature map F spt i processed by the SPT unit, we add it to the original feature map Fi using a weighting factor of \u03b1 F n i = Fi + \u03b1F spt i , = Fi + \u03b1\u03d5\u0398(Fi), (8) where \u03d5\u0398(\u00b7) is the SPT unit \u03d5 with tunable parameters \u0398 to the pre-trained IR networks to transform the pre-trained features to new ones. The incorporation of the enhanced feature map into the original feature map is a straightforward yet powerful operation that allows subsequent building blocks to exploit the semantic information from the SAM prior. By replacing the original feature map with the enhanced one, our proposed approach achieves improved restoration quality without significant computational costs. In contrast to retraining an entirely new network, our method builds upon existing pre-trained IR networks and only updates the parameters of the SPT units. This parameter-efficient approach significantly reduces the computational burden and makes it a cost-effective solution for improving the performance of existing IR networks. 5. Experiments 5.1. Experimental Settings Data and Evaluation. We conduct experiments on two typical IR tasks: image SR and color image denoising. For image SR, we use DIV2K [93] and Flickr2K [68] as training data, while Set5 [4], Set14 [113], B100 [75], Urban100 [46], and Manga109 [76] are used as test data. As for color image denoising, we follow the same training data as ART [114], which includes DIV2K, Flickr2K, BSD500 [1], and WED [74]. We evaluate our proposed method using BSD68 [75], Kodak24 [29], McMaster [122], and Urban100 as test data for color image denoising. The performance is evaluated in terms of PSNR and SSIM [103] values on the Y channel of images transformed to the YCbCr space for image SR, and on the RGB channel for color image denoising. Selected baseline methods. To evaluate the effectiveness of our proposed method, we conduct experiments using several representative methods in two IR tasks. For image SR, we select three representative methods: IMDN [47], a typical light-weight CNN-based SR method, as well as the state-of-the-art vision Transformer-based methods ART [114] and CAT [14]. Training Settings. 
Data augmentation is performed on the training data through horizontal flip and random rotation of 90\u25e6, 180\u25e6, and 270\u25e6. Besides, we crop the original images into 64\u00d764 patches as the basic training inputs for image SR and 128\u00d7128 patches for image denoising. We add the SPT units after each buliding block, and the batch size is set to 4. We choose ADAM [54] to optimize the networks with \u03b21 = 0.9, \u03b22 = 0.999, and zero weight decay. The initial learning rate is set as 1\u00d710\u22124. We fine-tune the parameters of ART, CAT, and IMDN until convergence, and we adjust the learning rate to half every 5,000 iterations. Experiments are conducted with a single NVIDIA 3090 GPU. 5.2. Quantitative and Qualitative Comparisons We evaluate the effectiveness of our proposed method by comparing representative baseline methods and their SPT unit tuned versions on the tasks of image SR and color image denoising. Image super-resolution. Table 1 presents a quantitative comparison between methods trained with and without the SPT unit on benchmark datasets for image SR. The results show that the existing image super-resolution methods finetuned with SPT units outperform the corresponding baselines by a significant margin. For example, in the \u00d74 superresolution of Urban100 dataset, ART fine-tuned with our proposed method achieves 28.1717dB (PSNR), while the same baseline network only achieves 27.7747dB (PSNR). The weighted average values in the table demonstrate that our method effectively utilizes the SAM prior, leading to further performance improvements in existing SR methods. Figure 4 illustrates visual comparisons of SR results obtained by the baseline methods and their tuned versions. We observe that the existing SR methods tend to generate realistic detailed textures but with visual aliasing and artifacts. For example, in the first example of Figure 4, ART produces blurry details of the tablecloths. On the other hand, ART tuned with our proposed method reconstructs sharp and natural details. This indicates that our method effectively employs semantic priors to capture the characteristics of each category, leading to more natural and realistic textures. This observation is consistent with the approach presented in [101]. Color image denoising. Table 2 presents quantitative comparisons for color image denoising. The results show that ART fine-tuned with SPT units outperforms the original ART by a significant margin on three different levels of noise. For instance, in the \u03c3 = 25 color image denoising task, ART fine-tuned with our proposed method achieves an average PSNR of 32.7844dB, which is 0.0642dB higher than the same baseline network. As shown in Figure 5, the color image denoising results of ART fine-tuned with our method exhibit better visual quality than the original ART. The images restored by our method have more details and fewer blocking artifacts, leading to sharper edges and more explicit textures. These results demonstrate that our method 6 \fTable 1. Quantitative comparison of baseline methods and their SPT unit-tuned variants in terms of PSNR (dB, \u2191) for the image SR task. 
Method Scale Set5 Set14 B100 Urban100 Manga109 Average PSNR \u2206 PSNR \u2206 PSNR \u2206 PSNR \u2206 PSNR \u2206 PSNR \u2206 ART \u00d72 38.5631 34.5924 32.5768 34.3001 40.2425 35.8269 ART \u00d72 38.5741 +0.0109 34.6315 +0.0391 32.5983 +0.0215 34.3712 +0.0710 40.2888 +0.0463 35.8724 +0.0454 ART \u00d73 35.0736 31.0183 29.5056 30.1037 35.3889 31.7925 ART \u00d73 35.0919 +0.0182 31.0598 +0.0415 29.5362 +0.0305 30.2219 +0.1182 35.4513 +0.0624 31.8607 +0.0682 ART \u00d74 33.0448 29.1585 27.9668 27.7747 32.3081 29.4792 ART \u00d74 33.1113 +0.0665 29.2475 +0.0890 28.0154 +0.0486 28.1717 +0.3970 32.5648 +0.2568 29.7052 +0.2260 CAT \u00d72 38.5079 34.7776 32.5853 34.2577 40.1030 35.7773 CAT \u00d72 38.5230 +0.0151 34.8017 +0.0241 32.5954 +0.0101 34.2786 +0.0209 40.1584 +0.0554 35.8064 +0.0291 CAT \u00d73 35.0550 31.0433 29.5194 30.1184 35.3838 31.8003 CAT \u00d73 35.0730 +0.0180 31.0629 +0.0196 29.5286 +0.0092 30.1441 +0.0256 35.4002 +0.0163 31.8175 +0.0172 CAT \u00d74 33.0769 29.1779 27.9871 27.8861 32.3891 29.5476 CAT \u00d74 33.1106 +0.0337 29.1995 +0.0216 28.0093 +0.0223 27.8930 +0.0069 32.4817 +0.0926 29.5887 +0.0411 IMDN \u00d72 37.9105 33.5949 32.1535 32.1351 38.7899 34.5026 IMDN \u00d72 37.8891 -0.0215 33.6793 +0.0844 32.1711 +0.0176 32.2199 +0.0848 38.9840 +0.1940 34.6015 +0.0989 IMDN \u00d73 34.3233 30.3066 29.0732 28.1488 33.5833 30.4228 IMDN \u00d73 34.3869 +0.0636 30.3067 +0.0001 29.1087 +0.0355 28.3076 +0.1588 33.8483 +0.2651 30.5711 +0.1483 IMDN \u00d74 32.1867 28.5724 27.5439 26.0318 30.4370 28.1590 IMDN \u00d74 32.2018 +0.0151 28.6088 +0.0364 27.5814 +0.0374 26.2896 +0.2578 30.7284 +0.2913 28.3476 +0.1886 Table 2. Quantitative comparison of baseline methods and their SPT unit-tuned variants in terms of PSNR (dB, \u2191) for the color image denoising task. Method \u03c3 value BSD68 Kodak24 McMaster Urban100 Average PSNR \u2206 PSNR \u2206 PSNR \u2206 PSNR \u2206 PSNR \u2206 ART 15 34.4599 35.3871 35.6765 35.2938 35.0672 ART 15 34.4615 +0.0016 35.3921 +0.0050 35.6813 +0.0049 35.2999 +0.0062 35.0717 +0.0044 ART 25 31.8372 32.9526 33.4057 33.1415 32.7202 ART 25 31.9233 +0.0862 33.0058 +0.0532 33.4359 +0.0302 33.1994 +0.0579 32.7844 +0.0642 ART 50 28.6349 29.8659 30.3100 30.1926 29.6609 ART 50 28.6369 +0.0020 29.8674 +0.0015 30.3127 +0.0027 30.2001 +0.0075 29.6656 +0.0046 can effectively leverage semantic priors to improve the performance of existing color image denoising methods. 5.3. Ablation Study For the ablation study, we use the dataset DIV2K [93] and Flickr2K [68] to train ART on the \u00d74 image SR task. The results are evaluated on the dataset of Manga109. The effectiveness of the SPT Unit. To evaluate the effectiveness of the proposed SPT unit, we design several variants as follows: (1) SPT-Fi: we feed MSAM directly to f(\u00b7) without concatenating it with Fi; (2) SPT-Pi: we remove the extracted SAM prior representation Pi from the SPT unit; (3) SPT-cat: we concatenate MSAM, Fi, and Pi and feed the concatenated tensor to f(\u00b7), generating F spt i . The corresponding results are shown in Table 3, where it can be observed that although these variants achieve some performance improvements, they are far less effective than our designed SPT unit. This indicates that our SPT unit is simple yet effective, and can better utilize the semantic prior information from the SAM mask for image SR. We Table 3. The effectiveness of the SPT unit in different variants and different positions. BNi here denotes the insertion of the SPT units into the building blocks B1 to BNi. 
SPT variants SPT locations Method PSNR Block PSNR Block PSNR SPT-F \u2032 i 32.4694+0.1613 B1 32.3222+0.0141 B4 32.4266+0.1185 SPT-Pi 32.4519+0.1438 B2 32.3188+0.0107 B5 32.4607+0.1526 SPT-cat 32.4194+0.1113 B3 32.4149+0.1068 B6 32.5648+0.2568 also analyze the effect of inserting the SPT unit at different positions on the final performance. Table 3 shows the results. It can be observed that as the number of SPT units inserted increases, the final performance gradually improves, and the more units inserted, the more significant the improvement. For example, when we only insert the SPT unit in the first building block, we only achieve a 0.0141dB improvement. However, when we insert the SPT unit in all building blocks, we achieve a significant improvement of up to 0.2568dB. 7 \fSet14: Barbara Ground-truth ART-Ours Input ART Set14: Zebra Ground-truth ART-Ours Input ART Urban100: Img004 Ground-truth CAT-Ours Input CAT Urban100: Img085 Ground-truth CAT-Ours Input CAT Manga109: GakuenNoise Ground-truth IMDN-Ours Input IMDN Manga109: YumeiroCooking Ground-truth IMDN-Ours Input IMDN Figure 4. Visual comparisons on \u00d74 image super-resolution. We show the results of extracted SAM masks, input LQ images, the groundtruth HQ images, the baseline methods, and the baseline methods trained with our proposed method. Set14: Barbara Ground-truth ART-Ours Input ART Set14: Zebra Ground-truth ART-Ours Input ART McMaster: 0002 Ground-truth ART-Ours Input ART Kodak24: Kodim03 Ground-truth ART-Ours Input ART Figure 5. Visual comparisons on color image denoising (\u03c3 = 50). We show the results of extracted SAM masks, input LQ images, the ground-truth HQ images, ART, and ART trained with our proposed method. Table 4. Impact of different \u03b1 values. \u03b1 PSNR \u03b1 PSNR \u03b1 = 0.5 32.4332+0.1316 \u03b1 = 1 32.5648+0.2568 \u03b1 = 1.5 32.4653+0.0995 \u03b1 = 2 32.4025+0.1623 The effectiveness of the efficient tuning scheme. We first conduct an analysis of the impact of different \u03b1 values on the results. We select several typical \u03b1 values (i.e., 0.5, 1.0, 1.5, and 2.0) and compare their effects, as shown in Table 4. Table 5. Comparison of different tuning schemes. Scheme PSNR # Iterations Ours 32.5648+0.2568 \u223c8,000 Full fine-tuning 32.5640+0.2538 \u223c15,000 From the results in Table 1, it can be observed that the best performance is achieved when \u03b1 = 1.0. When \u03b1 is too large or too small, the weight tuning of the SPT unit cannot be balanced well, leading to sub-optimal performance. We also compare our tuning method with full-parameter tun8 \fTable 6. Effectiveness of the extracted SAM masks SAM mask/representation PSNR Coarse 32.5648+0.2568 Medium 32.5709+0.2628 Fine 32.5737+0.2656 Set14: Barbara Ground-truth ART-Ours Set14: Zebra Ground-truth ART-Ours McMaster: 0002 Ground-truth ART-Ours Input ART Kodak24: Kodim03 Ground-truth ART-Ours Input ART B100: 62096 Ground-truth ART-Ours Input ART Figure 6. A failure case. The use of extracted SAM masks as semantic priors in our method can introduce unrealistic fine-grained structures and texture characteristics, resulting in artifacts that deviate significantly from the real one. ing. As shown in Table 5, our tuning method can improve the performance of the ART network faster and better than the latter. This is because we base our method on the pretrained and frozen ART parameters and focused on updating the tuning-related parameters, which enables efficient updates on a small number of parameters. 
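To make the tuned forward pass concrete, the following sketch (with assumed channel sizes, not the released implementation) instantiates one SPT unit following Eqs. (7)-(8) and the frozen-backbone tuning setup discussed above; α = 1 matches the best setting in Table 4.
```python
# Sketch of one SPT unit (Eq. (7)) and the alpha-weighted injection of Eq. (8).
# Only SPT parameters are trained; the pre-trained IR backbone stays frozen.
import torch
import torch.nn as nn

def conv_relu_conv(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                         nn.ReLU(inplace=True),
                         nn.Conv2d(c_out, c_out, 3, padding=1))

class SPTUnit(nn.Module):
    def __init__(self, channels=64, num_masks=64, alpha=1.0):
        super().__init__()
        self.alpha = alpha
        self.enhance = conv_relu_conv(channels + num_masks, channels)  # F_i' from [F_i, M_SAM]
        self.feat_branch = conv_relu_conv(channels, channels)          # feature branch
        self.prior_branch = conv_relu_conv(channels, channels)         # SAM prior branch

    def forward(self, f_i, sam_masks, p_i):
        f_enh = self.enhance(torch.cat([f_i, sam_masks], dim=1))
        p_feat = self.prior_branch(p_i)
        p_next = p_feat + p_i                                          # P_{i+1} = f(P_i) + P_i
        f_spt = self.feat_branch(f_enh) * p_feat + f_enh               # Eq. (7)
        f_tuned = f_i + self.alpha * f_spt                             # Eq. (8), fed to block i+1
        return f_tuned, p_next

# Parameter-efficient tuning (hypothetical wrapper names): freeze the baseline,
# optimize only the SPT parameters, e.g.
#   for p in baseline.parameters(): p.requires_grad = False
#   optimizer = torch.optim.Adam(spt_parameters, lr=1e-4, betas=(0.9, 0.999))
```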
The effect of the granularity of SAM masks. We adjust the density of the regular grid used to prompt SAM and obtain different groups of masks, Usually, a denser grid results in a larger number of masks containing more fine-grained objects. Specifically, we prompt SAM using 8\u00d78, 16\u00d716, and 24\u00d724 grids, which are denoted as Coarse, Medium, and Fine, respectively. For these three cases, we fix the number of masks fed into the image restoration networks as 64, 128, and 256, respectively, using padding or truncation. In terms of the network architecture, we only adjust the number of the input channel of the first convolutional layer. Table 6 shows the impact of the granularity of SAM masks on the final results. It can be observed that using more masks can improve the performance of ART, which indicates that leveraging more fine-grained semantic information is more helpful and further confirms the effectiveness of the SAM prior. 5.4. Limitations This section presents the limitation of our method that arises from the use of extracted SAM masks as semantic priors. Despite the performance improvement on SR, they may also generate unrealistic fine-grained structures and textures that do not exist in the ground-truth image. For example, in the sailboat shown in Figure 6, the SAM masks indicate a semantic mask of the sail area, resulting in a grid-like structure that is not present in the ground-truth image. While this structure may appear visually pleasing to humans, it deviates significantly from the actual image and can be considered as artifacts. To address this limitation, future work could explore more effective methods for incorporating semantic priors into IR tasks. This could be achieved by investigating different ways to introduce semantic priors into existing methods to improve the fidelity of the generated image. 6." + } + ], + "Dingkang Yang": [ + { + "url": "http://arxiv.org/abs/2403.05963v1", + "title": "Robust Emotion Recognition in Context Debiasing", + "abstract": "Context-aware emotion recognition (CAER) has recently boosted the practical\napplications of affective computing techniques in unconstrained environments.\nMainstream CAER methods invariably extract ensemble representations from\ndiverse contexts and subject-centred characteristics to perceive the target\nperson's emotional state. Despite advancements, the biggest challenge remains\ndue to context bias interference. The harmful bias forces the models to rely on\nspurious correlations between background contexts and emotion labels in\nlikelihood estimation, causing severe performance bottlenecks and confounding\nvaluable context priors. In this paper, we propose a counterfactual emotion\ninference (CLEF) framework to address the above issue. Specifically, we first\nformulate a generalized causal graph to decouple the causal relationships among\nthe variables in CAER. Following the causal graph, CLEF introduces a\nnon-invasive context branch to capture the adverse direct effect caused by the\ncontext bias. During the inference, we eliminate the direct context effect from\nthe total causal effect by comparing factual and counterfactual outcomes,\nresulting in bias mitigation and robust prediction. 
As a model-agnostic\nframework, CLEF can be readily integrated into existing methods, bringing\nconsistent performance gains.", + "authors": "Dingkang Yang, Kun Yang, Mingcheng Li, Shunli Wang, Shuaibing Wang, Lihua Zhang", + "published": "2024-03-09", + "updated": "2024-03-09", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "main_content": "Introduction \u201cContext is the key to understanding, but it can also be the key to misunderstanding.\u201d \u2013Jonathan Lockwood Huie As the spiritual grammar of human life, emotions play an essential role in social communication and intelligent automation [21]. Accurately recognizing subjects\u2019 emotional \u00a7Corresponding author. Project lead. Anticipation Confidence Excitement Anticipation Happiness GT\uff1a GT\uff1a GT\uff1a Training samples for clustering GT\uff1aDisconnection Disquietment CAER model inference Similar context Similar contexts CAER model + CLEF (ours) Prediction Prediction Anticipation Happiness Confidence Disconnection Disquietment Waves seaside Testing sample for prediction (a) (b) Figure 1. Illustration of the context bias in the CAER task. GT stands for the Ground Truth. Context-specific semantics easily yield spurious shortcuts with emotion labels during training to confound the model [32], giving erroneous results. Conversely, our CLEF effectively corrects biased predictions. states from resource-efficient visual content has been extensively explored in various fields, including online education [15], driving monitoring [59], and human-computer interaction [1]. Conventional works have focused on extracting emotion-related information from subject attributes, such as facial expressions [9], body postures [3], acoustic behaviors [26], or multimodal combinations [22, 31, 54, 55, 57, 60]. Despite considerable advances in subject-oriented efforts, their performance suffers from severe bottlenecks in uncontrolled environments. As shown in Figure 1a, physical representations of subjects in wild-collected images are usually indistinguishable (e.g., ambiguous faces) due to natural occlusions that fail to provide usable emotional signals. Inspired by psychological research [2], context-aware emotion recognition (CAER) [18] has been proposed to seek additional affective semantics from situational contexts. The contexts [19] are typically considered to include out-of-subject factors, such as background objects, place attributes, scene elements, and dynamic interactions of surrounding agents. These rich contextual stimuli promisingly 1 arXiv:2403.05963v1 [cs.CV] 9 Mar 2024 \f( ) s f \uf0d7 ( ) c f \uf0d7 Subject Feature Context Feature GNN Accuracy (%) CAER-Net 73.47 Results on the CAER-S dataset 10 0 20 30 40 50 60 70 80 67.26 77.21 Training sample Subject Training Subject branch Prediction Training sample CAER model (Ensemble branches) Prediction Training sample Context branch Prediction Vanilla Training Context Training Sadness Suffering Sympathy GT\uff1a Engagement Happiness Peace Pleasure Surprise GT\uff1a Testing sample Engagement Happiness Peace Pleasure Surprise Inference : : : (a) (b) Disapproval Engagement Esteem Suffering e6boj6smji3dnyouuj Sadness Suffering Sympathy Engagement Suffering Figure 2. We conduct toy experiments to show the effects of context semantics. The indirect effect of the good context prior follows ensemble branches, narrowing the emotion candidate space. The bad direct effect follows the context branch, causing pure bias. 
provide complementary emotion clues for accurate recognition. Most existing methods perform emotion inference by extracting ensemble representations from subjects and contexts using sophisticated structures [24, 32, 33, 53, 56, 65]or customized mechanisms [5, 7, 11, 14, 25, 42]. Nevertheless, a recent study [58] found that CAER models tend to rely on spurious correlations caused by a context bias rather than beneficial ensemble representations. An intuitive illustration is displayed in Figure 1. We first randomly choose some training samples on the EMOTIC dataset [19] and perform unsupervised clustering. From Figure 1a, samples containing seaside-related contexts form compact feature clusters, confirming the semantic similarity in the feature space. These samples have positive emotion categories, while negative emotions are nonexistent in similar contexts. In this case, the model [32] is easily misled to capture spurious dependencies between context-specific semantics and emotion labels. In the testing phase from Figure 1b, oriented to the sample with similar contexts but negative emotion categories, the model is confounded by the harmful context bias to infer completely wrong emotional states. A straightforward solution is to conduct a randomized controlled trial by collecting images with all emotion annotations in all contexts. This manner is viewed as an approximate intervention for biased training. However, the current CAER debiasing effort [58] is sub-optimal since the predefined intervention fails to decouple good and bad context semantics. We argue that context semantics consists of the good prior and the bad bias. The toy experiments are performed to verify this insight. Specifically, we train on the EMOTIC dataset separately using the subject branch, the ensemble branches, and the context branch of a CAER baseline [18] in Figure 2a. Recognized subjects in samples during context training are masked to capture the direct context effect. Observing the testing results in Figure 2b, the context prior in ensemble learning as the valuable indirect effect helps the model filter out unnecessary candidates (i.e., removing the \u201cDisapproval\u201d and \u201cEsteem\u201d categories) compared to the subject branch. Conversely, the harmful bias as the direct context effect in the context branch builds a misleading mapping between dim contexts and negative emotions during training, causing biased predictions. To disentangle the two effects in context semantics and achieve more appropriate context debiasing, we propose a unified counterfactual emotion inference (CLEF) framework from a causality perspective. CLEF focuses on assisting existing CAER methods to mitigate the context bias and breakthrough performance bottlenecks in a model-agnostic manner rather than beating them. Specifically, we first formulate a generalized causal graph to investigate causal relationships among variables in the CAER task. Along the causal graph, CLEF estimates the direct context effect caused by the harmful bias through a non-intrusive context branch during the training phase. Meanwhile, the valuable indirect effect of the context prior in ensemble representations of subjects and contexts is calculated following the vanilla CAER model. In the inference phase, we subtract the direct context effect from the total causal effect by depicting a counterfactual scenario to exclude bias interference. 
This scenario is described as follows: Counterfactual CAER: What would the prediction be, if the model only sees the confounded context and does not perform inference via vanilla ensemble representations? Intuitively, ensemble representations in the counterfactual outcome are blocked in the no-treatment condition. As such, the model performs biased emotion estimation relying only on spurious correlations caused by the pure context bias, which results similarly to the predictions of the context branch in Figure 2b. By comparing factual and counterfactual outcomes, CLEF empowers the model to make unbiased predictions using the debiased causal effect. The main contributions are summarized as follows: \u2022 We are the first to embrace counterfactual thinking to investigate causal effects in the CAER task and reveal that the context bias as the adverse direct causal effect misleads the models to produce spurious prediction shortcuts. \u2022 We devise CLEF, a model-agnostic CAER debiasing framework that facilitates existing methods to capture valuable causal relationships and mitigate the harmful bias in context semantics through counterfactual inference. CLEF can be readily adapted to state-of-the-art (SOTA) methods with different structures, bringing consistent and significant performance gains. \u2022 Extensive experiments are conducted on several largescale CAER datasets. Comprehensive analyses show the broad applicability and effectiveness of our framework. 2 \f2. Related Work Context-Aware Emotion Recognition. Benefiting from advances in deep learning algorithms [6, 27\u201329, 46\u201348, 50\u2013 52, 61\u201363], traditional emotion recognition typically infers emotional states from subject-oriented attributes, such as facial expressions [9, 23], body postures [3, 59], and acoustic behaviours [26, 31]. However, these efforts are potentially vulnerable in practical applications since subject characteristics in uncontrolled environments are usually indistinguishable, leading to severe performance deterioration. Recently, a pioneering work [18] inspired by psychological research [2] has advocated extracting complementary emotional clues from rich contexts, called context-aware emotion recognition (CAER). Kosti et al. [19] begin by utilizing a two-stream convolutional neural network (CNN) to capture effective semantics from subject-related regions and global contexts of complete images. The implementation is similar to the ensemble branch training in Figure 2a. After that, most CAER methods [5, 7, 11, 14, 20, 24, 25, 32, 33, 53, 56, 64, 65] follow an ensemble learning pattern: i) extracting unimodal/multimodal features from subject attributes; ii) learning emotionally relevant features from created contexts based on different definitions; and iii) producing ensemble representations for emotion predictions via fusion mechanisms. For instance, Yang et al. [56] discretize the context into scenes, agent dynamics, and agent-object interactions, using customized components to learn complementary contextual information. Despite achievements, they invariably suffer from performance bottlenecks due to spurious correlations caused by the context bias. Causal Inference. Causal inference [12] is first extensively used in economics [45] and psychology [10] as a scientific theory that seeks causal relationships among variables. The investigation of event causality generally follows two directions: intervention and counterfactuals. 
Intervention [36] aims to actively manipulate the probability distributions of variables to obtain unbiased estimations or discover confounder effects. Counterfactuals [37] typically utilize distinct treatment conditions to imagine outcomes that are contrary to factual determinations, empowering systems to reason and think like humans. In recent years, several learning-based approaches have attempted to introduce causal inference in diverse fields to pursue desired model effects and exclude the toxicity of spurious shortcuts, including scene graph generation [44], visual dialogue [34, 40], image recognition [4, 30, 49], and adversarial learning [16, 17]. The CAER debiasing effort [58] most relevant to our work utilizes a predefined dictionary to approximate interventions and adopts memory-query operations to mitigate the bias dilemma. Nevertheless, the predefined-level intervention fails to capture pure bias effects in the context semantics, causing a sub-optimal solution. Inspired by [34], we remove the adverse context effect by empowering models with the debiasing ability of twice-thinking through counterfactual causality, which is fundamentally different in design philosophy and methodology. Figure 3. (a) Examples of a causal graph where nodes represent variables and arrows represent causal effects. (b) Examples of counterfactual notations. (c) The proposed CAER causal graph. 3. Preliminaries Before starting, we first introduce the concepts and notations related to causal inference to facilitate a better understanding of our framework and philosophy. Causal graph is a highly generalized analytical tool to reveal causal dependencies among variables. It usually follows the structural causal model [39], defined as a directed acyclic graph G = {V, E}, where V stands for a set of variables and E implies the corresponding causal effects. A causal graph example with three variables is intuitively displayed in Figure 3a. Here, we represent a random variable as a capital letter (e.g., P), and denote its observed value as a lowercase letter (e.g., p). The causality from cause P to effect Q is reflected in two parts: the direct effect follows the causal link P \u2192Q, and the indirect effect follows the link P \u2192M \u2192Q through the mediator variable M. Counterfactual inference endows the models with the ability to depict counterfactual outcomes in factual observations through different treatment conditions [37]. In the factual outcome, the value of Q would be formalized under the conditions that P is set to p and M is set to m: \\begin{split} Q_{p,m} = Q(P=p, M=m), \\\\ m = M_p = M(P=p). \\end{split} (1) Counterfactual outcomes can be obtained by exerting distinct treatments on the value of P. As shown in Figure 3b, when P is set to p\u2217 and the descendant M is changed accordingly, we have Q_{p^*, M_{p^*}} = Q(P = p\u2217, M_{p^*} = M(P = p\u2217)). Similarly, Q_{p, M_{p^*}} reflects the counterfactual situation where P = p and M is set to the value when P = p\u2217. Causal effects reveal the difference between two corresponding outcomes when the value of the reference variable changes. Let P = p denote the treated condition and P = p\u2217 represent the invisible counterfactual condition.
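As a toy illustration of these notations (the structural equations and numbers below are invented for exposition and are not taken from the paper), consider a simple linear model:
```python
# Toy factual vs. counterfactual outcomes under a hand-written structural model
# with links P -> M -> Q and P -> Q.
def M(p):                  # mediator as a function of the treatment
    return 2.0 * p

def Q(p, m):               # outcome depends on the treatment directly and via M
    return 3.0 * p + 1.5 * m

p, p_star = 1.0, 0.0       # treated value vs. no-treatment reference value

factual        = Q(p, M(p))            # Q_{p, M_p}          = 6.0
counterfactual = Q(p, M(p_star))       # Q_{p, M_{p*}}       = 3.0 (keep P = p, set M as if P = p*)
reference      = Q(p_star, M(p_star))  # Q_{p*, M_{p*}}      = 0.0

# Differences between these outcomes are exactly the effects defined next:
# TE  = factual - reference        = 6.0   (Eq. (2) below)
# NDE = counterfactual - reference = 3.0   (Eq. (3) below)
# TIE = factual - counterfactual   = 3.0   (Eq. (4) below)
```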
Figure 4. High-level overview of the proposed CLEF framework implementation. In addition to the vanilla CAER model, we introduce an additional context branch in a non-intrusive manner to capture the pure context bias as the direct context effect. By comparing factual and counterfactual outcomes, our framework effectively mitigates the interference of the harmful bias and achieves debiased emotion inference. According to causal theory [38], the Total Effect (TE) of treatment P = p on Q is formulated by comparing the two hypothetical outcomes: \\text{TE} = Q_{p, M_{p}} - Q_{p^*, M_{p^*}}. (2) TE can be disentangled into the Natural Direct Effect (NDE) and the Total Indirect Effect (TIE) [12]. NDE reflects the effect of P = p on Q following the direct link P \u2192Q, excluding the indirect effect along the link P \u2192M \u2192Q because M is set to the value it would take when P had been p\u2217. It reveals the response of Q when P converts from p to p\u2217: \\text{NDE} = Q_{p, M_{p^*}} - Q_{p^*, M_{p^*}}. (3) In this case, TIE is calculated by directly subtracting NDE from TE, which is employed to measure the unbiased prediction results in our framework: \\text{TIE} = \\text{TE} - \\text{NDE} = Q_{p, M_{p}} - Q_{p, M_{p^*}}. (4) 4. The proposed CLEF Framework 4.1. Cause-Effect Look at CAER As shown in Figure 3c, there are five variables in the proposed CAER causal graph, including input images X, subject features S, context features C, ensemble representations E, and emotion predictions Y. Note that our causal graph has broad applicability and generality since it follows most CAER modelling paradigms. Link X \u2192C \u2192Y reflects the shortcut between the original inputs X and the model predictions Y through the harmful bias in the context features C. The adverse direct effect of the mediator C is obtained via a non-invasive branch of context modelling, which captures spurious correlations between context-specific semantics and emotion labels. Taking Figure 2b as an example, the context branch learns the undesired mapping between dim contexts and negative emotions during training. Link C \u2190X \u2192S portrays the total context and subject representations extracted from X via the corresponding encoders in vanilla CAER models. Based on design differences in distinct methods, C and S may come from a single feature or an aggregation of multiple sub-features. For instance, S is obtained from global body attributes and joint face-pose information in models [18] and [32], respectively. Link C/S \u2192E \u2192Y captures the indirect causal effect of C and S on the model predictions Y through the ensemble representations E. The mediator E is obtained depending on the feature integration mechanisms of different vanilla methods, such as feature concatenation [18] or attention fusion [20]. In particular, C provides the valuable context prior along the good causal link C \u2192E \u2192Y, which gives favorable estimations of potential emotional states when the subjects\u2019 characteristics are indistinguishable. 4.2.
Counterfactual Inference Our design philosophy is to mitigate the interference of the harmful context bias on model predictions by excluding the biased direct effect along the link X \u2192C \u2192Y. Following the notations on causal effects in Section 3, the causality in the factual scenarios is formulated as follows: Y_{c,e}(X) = Y(C=c, E_{c,s} = E(C=c, S=s)|X). (5) Yc,e(X) reflects confounded emotion predictions because it suffers from the detrimental direct effect of C, i.e., the pure context bias. To disentangle distinct causal effects in the context semantics, we calculate the Total Effect (TE) of C = c and S = s, which is expressed as follows: \\text{TE} = Y_{c,e}(X) - Y_{c^*,e^*}(X). (6) Here, c\u2217 and e\u2217 represent the non-treatment conditions for observed values of C and E, where c and s leading to e are not given. Immediately, we estimate the Natural Direct Effect (NDE) for the harmful bias in context semantics: \\text{NDE} = Y_{c,e^*}(X) - Y_{c^*,e^*}(X). (7) Yc,e\u2217(X) describes a counterfactual outcome where C is set to c and E would be imagined to be e\u2217 when C had been c\u2217 and S had been s\u2217. The causal notation is expressed as: Y_{c,e^*}(X) = Y(C=c, E_{c^*,s^*} = E(C=c^*, S=s^*)|X). (8) Since the indirect causal effect of ensemble representations E on the link X \u2192C/S \u2192E \u2192Y is blocked, the model can only perform biased predictions by relying on the direct context effect in the link X \u2192C \u2192Y that causes spurious correlations. To exclude the explicitly captured context bias in NDE, we subtract NDE from TE to estimate Total Indirect Effect (TIE): \\text{TIE} = Y_{c,e}(X) - Y_{c,e^*}(X). (9) We employ the reliable TIE as the unbiased prediction in the inference phase. 4.3. Implementation Instantiation Framework Structure. From Figure 4, CLEF\u2019s predictions consist of two parts: the prediction Yc(X) = NC(c|x) of the additional context branch (i.e., X \u2192C \u2192Y) and Ye(X) = NC,S(c, s|x) of the vanilla CAER model (i.e., X \u2192C/S \u2192E \u2192Y). The context branch is instantiated as a simple neural network NC(\u00b7) (e.g., ResNet [13]) to receive context images with masked recognized subjects. The masking operation forces the network to focus on pure context semantics for estimating the direct effect. For a given input x, its corresponding context image Ix is expressed as: I_{x} = \\begin{cases} x(i, j) & \\text{if } x(i, j) \\notin \\text{bbox}_{\\text{subject}}, \\\\ 0 & \\text{otherwise}, \\end{cases} (10) where bboxsubject means the bounding box of the subject. NC,S(\u00b7) denotes any CAER model based on their specific mechanisms to learn ensemble representations e from c and s for prediction. Subsequently, a pragmatic fusion strategy \u03d5(\u00b7) is introduced to obtain the final score Yc,e(X): Y_{c,e}(X) = \\phi(Y_{c}(X), Y_{e}(X)) = \\log \\sigma(Y_{c}(X) + Y_{e}(X)), (11) where \u03c3 is the sigmoid activation. Training Procedure. As a universal framework, we take the multi-class classification task in Figure 4 as an example to adopt the cross-entropy loss CE(\u00b7) as the optimization objective. The task-specific losses for Yc,e(X) and Yc,e\u2217(X) are as follows: \\mathcal{L}_{task} = \\mathcal{CE}(Y_{c,e}(X), y) + \\mathcal{CE}(Y_{c,e^*}(X), y), (12) where y means the ground truth.
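A minimal sketch of Eqs. (10)-(12) follows; `caer_model`, `context_net`, and `y_e_star` are assumed placeholder names for the vanilla CAER model, the additional context branch, and the learned no-treatment logits introduced in the next paragraph, and the single-label cross-entropy form is used only for illustration.
```python
# Minimal sketch (assumed names, not the authors' code) of Eqs. (10)-(12).
import torch
import torch.nn.functional as F

def mask_subject(image: torch.Tensor, bbox) -> torch.Tensor:
    """Eq. (10): zero out the recognized subject region (x1, y1, x2, y2)."""
    ctx = image.clone()
    x1, y1, x2, y2 = bbox
    ctx[..., y1:y2, x1:x2] = 0.0
    return ctx

def fuse(y_c: torch.Tensor, y_e: torch.Tensor) -> torch.Tensor:
    """Eq. (11): phi(Y_c, Y_e) = log sigmoid(Y_c + Y_e)."""
    return F.logsigmoid(y_c + y_e)

def training_losses(image, bbox, label, caer_model, context_net, y_e_star):
    y_e = caer_model(image)                        # ensemble-branch logits
    y_c = context_net(mask_subject(image, bbox))   # direct-context logits
    factual = fuse(y_c, y_e)                       # Y_{c,e}(X)
    counterfactual = fuse(y_c, y_e_star)           # Y_{c,e*}(X); y_e_star is a learned placeholder
    task = F.cross_entropy(factual, label) + F.cross_entropy(counterfactual, label)  # Eq. (12)
    return task, factual, counterfactual           # at inference: TIE = factual - counterfactual (Eq. (9))
```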
Since neural models cannot handle no-treatment conditions where the inputs are void, we devise a trainable parameter initialized by the uniform distribution in practice to represent the imagined Ye\u2217(X), which is shared by all samples. The design intuition is that the uniform distribution ensures a safe estimation for NDE, which is justified in subsequent ablation studies. To avoid inappropriate Ye\u2217(X) that potentially causes TIE to be dominated by TE or NDE, we employ the Kullback-Leibler divergence KL(\u00b7) to regularize the difference between Yc,e\u2217(X) and Yc,e(X) to estimate Ye\u2217(X): \\mathcal{L}_{kl} = \\mathcal{KL}(Y_{c,e^*}(X), Y_{c,e}(X)). (13) The final loss is expressed as: \\mathcal{L}_{fin} = \\sum_{(c,s,y) \\in \\mathcal{D}} \\mathcal{L}_{task} + \\mathcal{L}_{kl}. (14) Inference Procedure. According to Eq. (9), the debiased prediction is performed as follows: \\text{TIE} = \\phi(Y_{c}(X), Y_{e}(X)) - \\phi(Y_{c}(X), Y_{e^*}(X)). (15) 5. Experiments 5.1. Datasets and Evaluation Metrics Experiments are conducted on two large-scale image-based CAER datasets, including EMOTIC [19] and CAER-S [20]. EMOTIC is the first benchmark to support emotion recognition in real-world contexts, which has 23,571 images of 34,320 annotated subjects. All samples are collected from non-controlled environments to provide rich context resources. Each recognized subject is annotated with 26 discrete emotion categories and body bounding box information. The dataset is partitioned into 70% samples for training, 10% samples for validation, and 20% samples for testing. CAER-S consists of 70k static images extracted Table 1. Quantitative results of CLEF-based methods for each emotion category on the EMOTIC dataset. We report the average precision of each category to provide comprehensive comparison experiments. The improved results are marked in bold.
Category EMOT-Net [19] EMOT-Net + CLEF CAER-Net [20] CAER-Net + CLEF GNN-CNN [65] GNN-CNN + CLEF CD-Net [53] CD-Net + CLEF EmotiCon [32] EmotiCon + CLEF Affection 26.47 35.28 22.36 28.62 47.52 61.84 28.44 35.51 38.55 43.72 Anger 11.24 11.76 12.88 14.01 11.27 16.37 12.12 14.6 14.69 17.09 Annoyance 15.26 17.46 14.42 12.85 12.33 11.08 19.71 16.94 24.68 25.40 Anticipation 57.31 94.29 52.85 82.27 63.20 93.25 57.65 89.05 60.73 92.24 Aversion 7.44 13.14 3.26 10.23 6.81 10.30 9.94 16.83 11.33 15.51 Confidence 80.33 74.48 72.68 73.18 74.83 69.02 69.26 73.11 68.12 65.90 Disapproval 16.14 19.73 15.37 17.04 12.64 15.16 22.78 27.45 18.55 21.47 Disconnection 20.64 30.66 22.01 24.76 23.17 28.35 27.55 31.70 28.73 33.31 Disquietment 19.57 19.73 10.84 13.47 17.66 20.11 21.04 23.37 22.14 24.56 Doubt/Confusion 31.88 19.81 26.07 22.15 19.67 16.57 24.23 19.55 38.43 32.87 Embarrassment 3.05 6.53 1.88 5.31 1.58 4.08 4.50 7.24 10.31 12.98 Engagement 86.69 97.39 73.71 90.46 87.31 92.88 85.32 94.38 86.23 92.75 Esteem 17.86 22.30 15.38 17.91 12.05 18.69 18.66 23.01 25.75 29.13 Excitement 78.05 73.36 70.42 63.01 72.68 65.21 70.07 60.42 80.75 72.64 Fatigue 8.87 10.34 6.29 8.66 12.93 17.67 11.56 14.67 19.35 22.34 Fear 15.70 8.46 7.47 10.12 6.15 10.34 10.38 11.23 16.99 18.71 Happiness 58.92 77.89 53.73 72.37 72.90 81.79 68.46 84.24 80.45 87.06 Pain 9.46 13.97 8.16 10.32 8.22 11.94 13.82 16.44 14.68 15.45 Peace 22.35 23.23 19.55 20.05 30.68 31.56 28.18 26.05 35.72 35.96 Pleasure 46.72 45.92 34.12 34.46 48.37 51.73 47.64 50.92 67.31 68.42 Sadness 18.69 27.19 17.75 23.06 23.90 33.28 32.99 37.43 40.26 45.25 Sensitivity 9.05 7.84 6.94 8.12 4.74 5.14 7.21 10.70 13.94 15.07 Suffering 17.67 18.05 14.85 15.63 23.71 25.60 35.19 30.85 48.05 43.16 Surprise 22.38 12.27 17.46 14.70 8.44 6.01 7.42 7.21 19.60 20.18 Sympathy 15.23 30.15 14.89 15.53 19.45 25.13 10.33 13.66 16.74 20.64 Yearning 9.22 12.13 4.84 5.16 9.86 13.64 6.24 8.63 15.08 17.39 mAP (%) 27.93 31.67 23.85 27.44 28.16 32.18 28.87 32.51 35.28 38.05 from video clips. These images record 7 emotional states of different subjects in various context scenarios from 79 TV shows. The data samples are randomly divided into training, validation, and testing sets in the ratio of 7:1:2. We utilize the standard mean Average Precision (mAP) and classification accuracy to evaluate the results on the EMOTIC and CAER-S datasets, respectively. 5.2. Model Zoo We evaluate the effectiveness of the proposed CLEF using five representative methods, which have completely different network structures and contextual modelling paradigms. Concretely, EMOT-Net [18] is a two-stream classical CNN model where one stream extracts human features from body regions, and the other captures global context semantics. CAER-Net [20] extracts subject attributes from faces and uses the images after hiding faces as background contexts. GNN-CNN [65] utilizes the graph neural network (GNN) to integrate emotion-related objects in contexts and distills subject information with a VGG-16 [43]. CD-Net [53] designs a tube-transformer to perform fine-grained interactions from facial, bodily, and contextual features. EmotiCon [32] employs attention and depth maps to model context representations. Subject-relevant features are extracted from facial expressions and body postures. 5.3. Implementation Details We use a ResNet-152 [13] pre-trained on the Places365 [66] dataset to parameterize the non-invasive context branch in CLEF. 
The output of the last linear layer is replaced to produce task-specific numbers of neurons for predictions. Rich scene attributes in Places365 provide proper semantics for distilling the context bias. In addition to the annotated EMOTIC, we employ the Faster R-CNN [41] to detect bounding boxes of recognized subjects in CAER-S. Immediately, the context images Ix are obtained by masking the target subjects in samples based on the corresponding bounding boxes. For a fair comparison, the five selected CAER methods are reproduced via the PyTorch toolbox [35] following their reported training settings, including the optimizer, loss function, learning rate strategy, etc. All models are implemented on NVIDIA Tesla V100 GPUs. 5.4. Comparison with State-of-the-art Methods We compare the five CLEF-based methods with existing SOTA models, including HLCR [7], TEKG [5], RRLA [24], VRD [14], SIB-Net [25], MCA [56], and GRERN [11]. Quantitative Results on the EMOTIC. Table 1 shows the Average Precision (AP) of the vanilla methods and their counterparts in the CLEF framework for each emotion category. We have the following critical observations. i) CLEF significantly improves the performance of all models in most categories. For instance, CLEF yields average gains of 8.33% and 6.52% on the AP scores for \u201cAffection\u201d and \u201cSadness\u201d, reflecting positivity and negativity, respectively. ii) Our framework favorably improves several categories heavily confounded by the harmful context bias due to uneven distributions of emotional states across distinct 6 \fAnger Disgust Fear Happy Neutral Sad Surpise 40 60 80 100 Accuracy (%) Emotion Category CAER-Net CAER-Net + CLEF EMOT-Net EMOT-Net + CLEF GNN-CNN GNN-CNN + CLEF CD-Net CD-Net + CLEF EmotiCon EmotiCon + CLEF Figure 5. Emotion classification accuracy (%) for each category of different CLEF-based methods on the CAER-S dataset. Table 2. Quantitative results of different models and CLEF-based methods on the EMOTIC dataset. \u2191represents the improvement of the CLEF-based version over the vanilla method. Methods mAP (%) HLCR [7] 30.02 TEKG [5] 31.36 RRLA [24] 32.41 VRD [14] 35.16 SIB-Net [25] 35.41 MCA [56] 37.73 EMOT-Net [19] 27.93 EMOT-Net + CLEF 31.67 (\u21913.74) CAER-Net [20] 23.85 CAER-Net + CLEF 27.44 (\u21913.59) GNN-CNN [65] 28.16 GNN-CNN + CLEF 32.18 (\u21914.02) CD-Net [53] 28.87 CD-Net + CLEF 32.51 (\u21913.64) EmotiCon [32] 35.28 EmotiCon + CLEF 38.05 (\u21912.77) contexts. For example, the CLEF-based models improve the AP scores for \u201cEngagement\u201d and \u201cHappiness\u201d categories to 90.46%\u223c97.39% and 72.37%\u223c87.06%, outperforming the results in the vanilla baselines by large margins. Table 2 presents the comparison results with existing models regarding the mean AP (mAP) scores. i) Thanks to CLEF\u2019s bias exclusion, the mAP scores of EMOT-Net, CAER-Net, GNN-CNN, CD-Net, and EmotiCon are consistently increased by 3.74%, 3.59%, 4.02%, 3.64%, and 2.77%, respectively. Among them, the most noticeable improvement in GNN-CNN is because the vanilla model more easily captures spurious context-emotion correlations based on fine-grained context element exploration [65], leading to the better debiasing effect with CLEF. ii) Compared to SIB-Net and MCA with complex module stacking [56] and massive parameters [25], the CLEF-based EmotiCon achieves the best performance with the mAP score of 38.05% through efficient counterfactual inference. Quantitative Results on the CAER-S. 
Table 3 provides the evaluation results on the CAER-S dataset. i) Evidently, CLEF consistently improves different baselines by decoupling and excluding the prediction bias of emotional states in the TV show contexts. Concretely, the overall accuracies of EMOT-Net, CAER-Net, GNN-CNN, CD-Net, and EmotiCon are improved by 2.52%, 2.39%, 2.32%, 3.08%, Table 3. Quantitative results of different models and CLEF-based methods on the CAER-S dataset. Methods Accuracy (%) Fine-tuned VGGNet [43] 64.85 Fine-tuned ResNet [13] 68.46 SIB-Net [25] 74.56 MCA [56] 79.57 GRERN [11] 81.31 RRLA [24] 84.82 VRD [14] 90.49 EMOT-Net [19] 74.51 EMOT-Net + CLEF 77.03 (\u21912.52) CAER-Net [20] 73.47 CAER-Net + CLEF 75.86 (\u21912.39) GNN-CNN [65] 77.21 GNN-CNN + CLEF 79.53 (\u21912.32) CD-Net [53] 85.33 CD-Net + CLEF 88.41 (\u21913.08) EmotiCon [32] 88.65 EmotiCon + CLEF 90.62 (\u21911.97) and 1.97%, respectively. ii) The gains of our framework on the CAER-S are slightly weaker than those on the EMOTIC. A reasonable explanation is that the EMOTIC contains richer context semantics than the CAER-S, such as scene elements and agent dynamics [19]. As a result, CLEF more accurately estimates the adverse context effect and favorably removes its interference. iii) Also, we find in Figure 5 that the classification accuracies of most emotion categories across the five methods are improved appropriately. 5.5. Ablation Studies In Table 4, we select the SOTA CD-Net and EmotiCon to perform thorough ablation studies on both datasets to evaluate the importance of all designs in CLEF. Necessity of Framework Structure. i) When removing CAER models from CLEF, the significant performance deterioration suggests that the indirect causal effect in ensemble representations provides valuable emotion semantics. ii) When the additional context branch (ACB) is excluded, CLEF degrades to a debiased pattern that is not context-conditional, treated as TE. TE\u2019s gains are inferior to TIE\u2019s since it reduces the general bias over the whole dataset rather than the specific context bias. iii) Also, we find that the KL(\u00b7) regularization is indispensable for estimating the proper Ye\u2217(X) and improving debiasing gains. Rationality of Context Modelling. i) We observe that per7 \fConfidence Embarrassment Engagement Excitement Fatigue Happiness Peace Disapproval Disconnection Doubt/Confusion Embarrassment Fatigue Happiness Sensitivity Doubt/Confusion Embarrassment Esteem Excitement Fatigue Happiness Peace Pleasure y Anticipation Disquietment Embarrassment Engagement Peace Sensitivity Disconnection Disquietment Doubt/Confusion Engagement Fatigue Yearning Anticipation Confidence Disquietment Doubt/Confusion Engagement Fatigue JC score: 0.38 JC score: 0.50 JC score: 0.30 JC score: 0.63 Affection Happiness Peace Sympathy Disconnection Disquietment Doubt/Confusion Engagement Disconnection Disquietment Doubt/Confusion Engagement Testing Image Ground Truth Vanilla Method w/ CLEF (a) EMOTIC Dataset Anticipation Confidence Disapproval Disconnection Embarrassment Anticipation Doubt/Confusion Engagement Suffering Anticipation Doubt/Confusion Engagement Pain Suffering Sensitivity Confidence Excitement Sensitivity Yearning Confidence Excitement Sensitivity Yearning (b) (c) Neutral Anger Anger Testing Image Ground Truth Vanilla Method w/ CLEF (d) CAER-S Dataset Sad Happy Happy Happy Disgust Disgust (e) (f) Figure 6. Qualitative results of the vanilla and CLEF-based CD-Net [53] on the EMOTIC and CAER-S datasets. 
Three testing sample images on each dataset are randomly selected. Incorrectly predicted categories are marked in red. Table 4. Ablation study results on the EMOTIC and CAER-S datasets. \u201cACB\u201d means the additional context branch. \u201cw/\u201d and \u201cw/o\u201d are short for the with and without, respectively. Setting EMOTIC [19] CAER-S [20] CD-Net EmotiCon CD-Net EmotiCon Vanilla Method 28.87 35.28 85.33 88.65 Necessity of Framework Structure + CLEF 32.51 38.05 88.41 90.62 w/o CAER Model 19.64 19.64 62.87 62.87 w/o ACB 28.55 35.43 85.54 88.28 w/o KL(\u00b7) Regularization 32.26 37.44 88.09 90.36 Rationality of Context Modelling w/o Masking Operation 31.38 36.95 87.68 89.85 w/ ImageNet Pre-training 30.74 36.62 87.35 89.27 w/ ResNet-50 [13] 31.45 37.54 87.83 90.04 w/ VGG-16 [43] 29.93 36.48 86.76 89.39 Effectiveness of No-treatment Assumption w/ Average Feature Embedding 27.85 33.18 83.06 85.67 w/ Random Feature Embedding 24.61 28.77 76.43 78.25 forming the masking operation on target subjects in input images of ACB is essential for ensuring reliable capture of the context-oriented adverse direct effect. ii) When the ResNet-152 pre-trained on Places365 [66] is replaced with the one pre-trained on ImageNet [8] in ACB, the gain drops prove that scene-level semantics are more expressive than object-level semantics in reflecting the context bias. This makes sense since scene attributes usually contain diverse object concepts. iii) Moreover, the improvements from CLEF gradually increase as more advanced pre-training backbones are used, which shows that our framework does not rely on a specific selection of instantiated networks. Effectiveness of No-treatment Assumption. We provide two alternatives regarding the no-treatment condition assumption, where random and average feature embeddings are obtained by the random initialization and the prior distribution of the training set, respectively. The worse-thanbaseline results imply that our uniform distribution assumption ensures a safe estimation of the biased context effect. Debiasing Ability Comparison. A gain comparison between our CLEF and the previous CAER debiasing effort CCIM on both datasets is presented in Table 5. Intuitively, Table 5. Debiasing comparison results of CCIM [58] and the proposed CLEF on the EMOTIC and CAER-S datasets. Dataset EMOT-Net [19] CAER-Net [20] Vanilla w/ CCIM w/ CLEF Vanilla w/ CCIM w/ CLEF EMOTIC 27.93 30.88 31.67 23.85 26.51 27.44 CAER-S 74.51 75.82 77.03 73.47 74.81 75.86 our framework consistently outperforms CCIM [58] in both methods. The reasonable reason is that CCIM fails to capture the pure context bias due to over-reliance on the predefined context confounders, causing sub-optimal solutions. In contrast, CLEF decouples the good context prior and the bad context effect, enabling robust debiased predictions. 5.6. Qualitative Evaluation Figure 6 shows the performance of vanilla CD-Net before and after counterfactual debiasing via CLEF. Intuitively, our framework effectively corrects the misjudgments of the vanilla method for emotional states in diverse contexts. Taking Figure 6a as an example, CLEF eliminates spurious correlations between vegetation-related contexts and positive emotions, giving negative categories aligned with ground truths. Moreover, the CLEF-based CD-Net in Figure 6e excludes misleading clues about negative emotions provided by dim contexts and achieves an unbiased prediction. 6." 
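As a rough, non-authoritative sketch of the counterfactual inference compared above (TIE versus TE), one common way such debiasing is realized at test time is to subtract the prediction of a context-only branch (here standing in for the ACB) from the factual ensemble prediction; the exact CLEF formulation may differ, and `alpha` is a hypothetical trade-off scalar rather than a value from the paper.

```python
import torch

def debiased_logits(ensemble_logits: torch.Tensor,
                    context_only_logits: torch.Tensor,
                    alpha: float = 1.0) -> torch.Tensor:
    """Illustrative counterfactual debiasing: remove the direct effect captured
    by a context-only branch from the factual ensemble prediction."""
    return ensemble_logits - alpha * context_only_logits

# Usage sketch: both tensors have shape (batch, num_emotion_categories)
# y_debiased = debiased_logits(y_factual, y_context_branch)
```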
+ }, + { + "url": "http://arxiv.org/abs/2307.13933v2", + "title": "AIDE: A Vision-Driven Multi-View, Multi-Modal, Multi-Tasking Dataset for Assistive Driving Perception", + "abstract": "Driver distraction has become a significant cause of severe traffic accidents\nover the past decade. Despite the growing development of vision-driven driver\nmonitoring systems, the lack of comprehensive perception datasets restricts\nroad safety and traffic security. In this paper, we present an AssIstive\nDriving pErception dataset (AIDE) that considers context information both\ninside and outside the vehicle in naturalistic scenarios. AIDE facilitates\nholistic driver monitoring through three distinctive characteristics, including\nmulti-view settings of driver and scene, multi-modal annotations of face, body,\nposture, and gesture, and four pragmatic task designs for driving\nunderstanding. To thoroughly explore AIDE, we provide experimental benchmarks\non three kinds of baseline frameworks via extensive methods. Moreover, two\nfusion strategies are introduced to give new insights into learning effective\nmulti-stream/modal representations. We also systematically investigate the\nimportance and rationality of the key components in AIDE and benchmarks. The\nproject link is https://github.com/ydk122024/AIDE.", + "authors": "Dingkang Yang, Shuai Huang, Zhi Xu, Zhenpeng Li, Shunli Wang, Mingcheng Li, Yuzheng Wang, Yang Liu, Kun Yang, Zhaoyu Chen, Yan Wang, Jing Liu, Peixuan Zhang, Peng Zhai, Lihua Zhang", + "published": "2023-07-26", + "updated": "2023-08-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Driving safety has been a significant concern over the past decade [12, 34], especially during the transition of automated driving technology from level 2 to 3 [26]. According to the World Health Organization [58], there are approximately 1.35 million road traffic deaths worldwide each year. More alarmingly, nearly one-fifth of road accidents are caused by driver distraction that manifests in behavior [53] or emotion [42]. As a result, active monitoring of the driver\u2019s state and intention has become an indispensable component in significantly improving road safety via Driver Monitoring Systems (DMS). Currently, vision is the most cost-effective and richest source [69] of perception information, facilitating the rapid development of DMS [15, 35]. Most commercial DMS rely on vehicle measures such as steering or lateral control to assess drivers [15]. In contrast, the scientific communities [20, 33, 37, 54, 59, 98] focus on developing the next-generation vision-driven DMS to detect potential distractions and alert drivers to improve driving attention. Although DMS-related datasets [1, 16, 28, 29, 31, 42, 44, 53, 59, 64, 73, 94] offer promising prospects for enhancing driving comfort and eliminating safety hazards [54], two serious shortcomings among them restrict the progress and application in practical driving scenarios. We first illustrate a comprehensive comparison of mainstream vision-driven assistive driving perception datasets in Table 1. Specifically, previous datasets [1, 20, 37, 53, 59, 73, 94, 97, 98] mainly concern the in-vehicle view to observe driver-centered endogenous representations, such as anomaly detection [37], drowsiness prediction [20, 98], and distraction recognition [1, 73, 94]. However, the equally important exogenous scene factors that cause driver distraction are usually ignored. 
The driver\u2019s state inside the vehicle is frequently closely correlated with the traffic scene outside the vehicle [61, 93]. For instance, the reason for an angry driver to look around is most likely due to a traffic jam or malicious overtaking [38]. Meanwhile, most smoking or talking behaviors occur in smooth traffic conditions. A holistic understanding of driver performance, vehicle condition, and scene context is imperative and promising for achieving more effective assistive driving perception. Another shortcoming is that most existing datasets [16, 29, 37, 53, 59, 64] focus on identifying driver behavior characteristics while neglecting to evaluate their emotional states. Driver emotion plays an essential role in complex driving dynamics as it inevitably affects driver behavior and road safety [41]. Many researchers [3, 63] have indicated that drivers with peaceful emotions tend to maintain the best driving performance (i.e., normal driving). Conversely, negative emotional states (e.g., weariness) are more likely to induce distractions and secondary behaviors (e.g., dozing off) [30]. Despite initial progress in driving emotion understanding works [13, 31, 42, 44], these inadequate efforts only consider facial expressions and ignore the valuable clues provided by the body posture and scene context [86, 87, 88, 89, 90, 91]. Most importantly, there are no comprehensive datasets that simultaneously consider the complementary perception information among driver behavior, emotion, and traffic context, which potentially limits the improvement of the next-generation DMS. Motivated by the above observations, we propose an AssIstive Driving pErception dataset (AIDE) to facilitate further research on the vision-driven DMS. AIDE captures rich information inside and outside the vehicle from several drivers in realistic driving conditions. As shown in Figure 1, we assign AIDE three significant characteristics. (i) Multi-view: four distinct camera views provide an expansive perception perspective, including three out-of-vehicle views to observe the traffic scene context and an in-vehicle view to record the driver\u2019s state. (ii) Multi-modal: diverse data annotations from the driver support comprehensive perception features, including face, body, posture, and gesture information. (iii) Multi-task: four pragmatic driving understanding tasks guarantee holistic assistive perception, including driver-centered behavior and emotion recognition, traffic context, and vehicle condition recognition. To systematically evaluate the challenges brought by AIDE, we implement three types of baseline frameworks using representative and impressive methods, which involve classical, resource-efficient, and state-of-the-art (SOTA) backbone models. Diverse benchmarking frameworks provide sufficient insights to specify suitable network architectures for real-world driving perception. For multi-stream/modal inputs, we design adaptive and crossattention fusion modules to learn effectively shared representations. Additionally, numerous ablation studies are performed to thoroughly demonstrate the effectiveness of key components and the importance of AIDE. 2. Related Work 2.1. Vision-driven Driver Monitoring Datasets Vision-driven driver monitoring aims to observe features from driver-related areas to identify potential distractions through various assistive driving perception tasks. According to [59], existing datasets can be categorized as follows. Hands-focused Datasets. 
Hand poses are an important basis for evaluating human-vehicle interaction in driving scenarios, as hands off the steering wheel are closely related to \fTable 1. Comparison of public vision-driven assistive driving perception datasets. The following symbols are used in the table. DBR: driver behavior recognition; DER: driver emotion recognition; TCR: traffic context recognition; VCR: vehicle condition recognition; H: the hours of videos; K/M: the number of images/frames; \u2217: the number of video clips; N/A: information not clarified by the authors. Dataset Views Classes Size Recording Conditions Scenarios Resolution Multimodal Annotations DBR DER TCR VCR Usage SEU [97] 1 4 80 Car Induced 640 \u00d7 480 \u2013 \" \u2013 \u2013 \u2013 Driver postures Tran et al. [73] 1 10 35K Simulator Induced 640 \u00d7 480 \u2013 \" \u2013 \u2013 \u2013 Safe driving, Distraction Zhang et al. [94] 2 9 60H Simulator Induced 640 \u00d7 360 \" \" \u2013 \u2013 \u2013 Normal driving, Distraction StateFarm [1] 1 10 22K Car Induced 640 \u00d7 480 \u2013 \" \u2013 \u2013 \u2013 Normal driving, Distraction AUC-DD [16] 1 10 14K Car Naturalistic 1920 \u00d7 1080 \u2013 \" \u2013 \u2013 \u2013 Driver postures, Distraction LoLi [64] 1 10 52K Car Naturalistic 640 \u00d7 480 \" \" \u2013 \u2013 \u2013 Driver monitoring, Distraction Brain4Cars [27] 2 5 2M Car Naturalistic N/A \" \" \u2013 \u2013 \u2013 Driving maneuver anticipation Drive&Act [53] 6 83 9.6M Car Induced 1280 \u00d7 1024 \" \" \u2013 \u2013 \u2013 Autonomous driving, Distraction DMD [59] 3 93 41H Simulator, Car Induced 1920 \u00d7 1080 \" \" \u2013 \u2013 \u2013 Distraction, Drowsiness DAD [37] 2 24 2.1M Simulator Induced 224 \u00d7 171 \" \" \u2013 \u2013 \u2013 Driver anomaly detection DriPE [21] 1 \u2013 10K Car Naturalistic N/A \u2013 \u2013 \u2013 \u2013 \u2013 Driver pose estimation LBW [33] 2 \u2013 123K Car Naturalistic N/A \u2013 \u2013 \u2013 \u2013 \u2013 Driver gaze estimation MDAD [28] 2 16 3200\u2217 Car Naturalistic 640 \u00d7 480 \" \" \u2013 \u2013 \u2013 Driver monitoring, Distraction 3MDAD [29] 2 16 574K Car Naturalistic 640 \u00d7 480 \" \" \u2013 \u2013 \u2013 Driver monitoring, Distraction DEFE [42] 1 12 164\u2217 Simulator Induced 1920 \u00d7 1080 \u2013 \u2013 \" \u2013 \u2013 Driver emotion understanding DEFE+ [44] 1 10 240\u2217 Simulator Induced 640 \u00d7 480 \" \u2013 \" \u2013 \u2013 Driver emotion understanding Du et al. [13] 1 5 894\u2217 Simulator Induced 1920 \u00d7 1080 \" \u2013 \" \u2013 \u2013 Driver emotion understanding, Biometric signal detection KMU-FED [31] 1 6 1.1K Car Naturalistic 1600 \u00d7 1200 \u2013 \u2013 \" \u2013 \u2013 Driver emotion understanding MDCS [55] 2 4 112H Car Naturalistic 1280 \u00d7 720 \" \u2013 \" \u2013 \u2013 Driver emotion understanding AIDE (ours) 4 20 521.64K Car Naturalistic 1920 \u00d7 1080 \" \" \" \" \" Driver monitoring, Distraction, Driver emotion understanding, Driving context understanding many secondary behaviors (e.g., smoking). These datasets generally provide annotated bounding boxes for the hands, including CVRR-HANDS 3D [56], VIVA-Hands [10], and DriverMHG [36]. Furthermore, Ohn-bar et al. [57] collect a dataset of hand activity and posture images under different illumination settings to identify the driver\u2019s state. Face-focused Datasets. The face and head provide valuable clues to observe the driver\u2019s degree of drowsiness and distraction [67]. 
There are several efforts that offer eye-tracking annotations to estimate the direction of the driver\u2019s gaze and position of attention, such as DrivFace [11], DADA [18], and LBW [33]. Some multimodal datasets [59, 94] utilize facial information as a complementary perceptual stream. Moreover, DriveAHead [66] and DD-Pose [62] focus on fine-grained head analysis through pose annotations of yaw, pitch, and roll angles. Body-focused Datasets. Observing the driver\u2019s body actions via the in-vehicle view has become a widely adopted monitoring paradigm. These perceptual patterns from the driver\u2019s body contain diverse resources such as keypoints [21], RGB [73], infrared [64], and depth information [37]. This technical route is first led by the StateFarm [1] competition dataset, which contains behavioral categories of safe driving and distractions. Since then, numerous databases have been proposed to progressively enrich body-based monitoring methods. These include AUC-DD [16], Loli [64], MDAD [28], 3MDAD [29], and DriPE [21]. More recently, some compounding efforts have considered extracting additional information, such as vehicle interiors [53], objects [59], and optical flow [94]. We show a specification comparison with the relevant assistive driving perception datasets for the proposed AIDE. As shown in Table 1, previous datasets either deal with specific perception tasks or only focus on driver-related characteristics. In contrast, AIDE considers the rich context clues inside and outside the vehicle and supports the collaborative perception of driver behavior, emotion, traffic context, and vehicle condition. AIDE is more multi-purpose, diverse, and holistic for assistive driving perception. 2.2. Driving-aware Network Architectures DMS-oriented models usually adopt network structures that are convenient to deploy on-road vehicles. With advances in deep learning techniques [5, 6, 7, 8, 14, 32, 40, 45, 47, 48, 49, 70, 75, 76, 77, 78, 79, 80, 82, 83, 84, 92, 100], most approaches that accompany datasets prioritize implementing classical models. These widely accepted network architectures include AlexNet [39], GoogleNet [71], VGG [68], and ResNet [23] families. Meanwhile, lightweight models with resource-efficient advantages are also favored enough, such as MobileNet [25, 65] and ShuffleNet [51, 96]. 3D-CNN models such as C3D [72], I3D [4], and 3D-ResNet [22] have been implemented to capture spatio-temporal features in video-based data. Several tailored structures have also been presented to suit specific data patterns [52, 94]. We fully exploit the classical, lightweight, and SOTA baselines to implement extensive experiments across various learning paradigms. The diverse combinations of models for different input streams provide valuable insights into the appropriate structure selection. 2.3. Driving-aware Fusion Strategies Various fusion strategies are proposed to meet multistream/modal input requirements in driving perception. The mainstream fusion patterns are divided into data-level, feature-level, and decision-level. For example, Ortega et \fFront View Inside View Interior Scene (a) (b) Figure 2. Camera setup for AIDE in the real vehicle scenario. The setup involves (a) exterior and (b) interior camera layouts. al. [59] perform a data-level fusion of infrared and depth frames based on pixel-wise correlation to achieve better perception performance than unimodality. The common feature-level fusion is based on feature summation or concatenation [81]. Moreover, Kopukl et al. 
[37] train a separate model for each view from the driver and then achieve decision-level fusion based on similarity scores. Here, we introduce two fusion modules at the feature level to learn effective representations among multiple feature streams. 3. The AIDE Dataset 3.1. Data Collection Specification To tackle the lack of perceptually comprehensive driver monitoring benchmarks, we collect the AIDE dataset under the consecutive manual driving mode, which is essential for the transition of automated vehicles from level 2 to 3 [26]. Camera Setup. The driving environment and camera layout are shown in Figure 2. Specifically, the experimental vehicle is used on real roads to capture rich information about the interior and exterior of the vehicle. The primary data source is four Axis cameras with 1920\u00d71080 resolution. The frame rate is 15 frames per second, and the dynamic range is 120 dB. Concretely, a camera is mounted in front of the vehicle\u2019s each side mirror to produce a left and right view capturing the traffic context. Meanwhile, the front view camera is mounted in the dashboard\u2019s centre to observe the front scene. For the inside view, we record the driver\u2019s natural reactions from the side in a non-intrusive way, with a clear perspective of the face, body, and hands interacting with the steering wheel. The four connected cameras are synchronized via the Precision Timing Protocol. Collection Programme. Naturalistic driving data is collected from several drivers with different driving styles and habits to ensure the authenticity of AIDE. Unlike previous efforts [28, 29, 53, 59] to force subjects to perform specific tasks/training to induce distraction, our data is derived from the most realistic driving performance of drivers who are not informed in advance. The guideline aims to bridge the driving reaction gap between the experimental domain and Figure 3. The percentage of samples in each category for the four driving perception tasks. the realistic monitoring domain. In this case, each participant\u2019s driving operation is conducted at different times on different days to contain diverse driving scenarios. From Figure 1, these scenario factors include distinct light intensities, weather conditions, and traffic contexts, increasing the challenge and diversity of AIDE. 3.2. Data Stream Recording and Annotation Recorded Data Streams. Our AIDE has various information types to provide rich data resources for different downstream tasks, including face, body, and traffic context (i.e., out-of-vehicle views) video data, and keypoint information. As the duration of the different driving reactions varies, the raw video data from the four views are first synchronously processed into 3-second short video clips using the Moviepy Library. The processing facilitates the AIDE-based monitoring system to satisfy realtime responses within a fixed span. For the inside view of Figure 1(b), the face detector MTCNN [95] is utilized to capture the driver\u2019s facial bounding box. Meanwhile, the pose estimator AlphaPose [17] is employed to obtain drivercentred information, including the body bounding box, 2D skeleton posture (26 keypoints), and gesture (42 keypoints). We eliminate clips with missing results based on the above detection to ensure data integrity. An additional operation in the retained clips is applied to fill missing joints using interpolation of adjacent frames. Task Determination. Four pragmatic assistive driving tasks are proposed to facilitate holistic perception. 
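Before the task definitions continue below, the missing-joint filling mentioned above (interpolation over adjacent frames) can be sketched as follows; the (T, K, 2) array layout and the use of NaN to mark missing detections are illustrative assumptions, not the dataset's actual storage format.

```python
import numpy as np

def fill_missing_joints(keypoints: np.ndarray) -> np.ndarray:
    """Linearly interpolate missing joints across time.
    keypoints: (T, K, 2) array of per-frame 2D joints; missing entries are NaN."""
    filled = keypoints.copy()
    t = np.arange(filled.shape[0])
    for k in range(filled.shape[1]):
        for c in range(filled.shape[2]):
            series = filled[:, k, c]          # view into `filled`
            missing = np.isnan(series)
            if missing.all():
                continue                      # joint never detected in this clip
            series[missing] = np.interp(t[missing], t[~missing], series[~missing])
    return filled
```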
Endogenous Driver Behavior and Emotion Recognition (DBR, DER) are adopted because these two tasks intuitively reflect distraction/inattention [37, 42]. Exogenously, Traffic Context Recognition (TCR) is considered since the scene context provides valuable evidence for understanding driver intention [61]. Also, we establish Vehicle Condition Recognition (VCR) as the driver\u2019s state usually accompanies a transition in vehicle control [38]. These complementary tasks \fall benefit from the rich data resources from AIDE. Label Assignment. The dataset annotation involves 12 professional data engineers with bespoke training. The annotation is performed blindly and independently, and we utilize the majority voting rule to determine the final labels. To adequately represent real driving situations, the behavior categories consist of one safe normal driving and six secondary activities that frequently cause traffic accidents. For emotions, five categories that occur frequently and tend to induce distractions in drivers are considered. Meanwhile, six research experts in human-vehicle interaction are asked to rate three traffic context categories and five vehicle condition categories. Figure 1(c) displays each category from the different tasks and provides a corresponding illustration. Data Statistic. Eventually, we obtained 2898 data samples with 521.64K frames. Each sample consists of 3-second video clips from four views, where the duration shares a specific label from each perception task. The inside clips contain the estimated bounding boxes and keypoints on each frame. AIDE is randomly divided into training (65%), validation (15%), and testing (20%) sets without considering held-out subjects due to the naturalistic nature of data imbalance. A stratified sampling is applied to ensure that each set contains samples from all categories for different tasks. Figure 3 shows the percentage of samples in each category for each task. Ethics Statement. All our materials adhere to ethical standards for responsible research practice. Each participant signed a GDPR* informed consent which allows the dataset to be publicly available for research purposes. 4. Assistive Driving Perception Framework 4.1. Model Zoo To thoroughly explore AIDE, we introduce three types of baseline frameworks to cover most driving perception modeling paradigms via extensive methods. As Figure 4 shows, our frameworks accommodate all available streams, including video information of the face, body, and scene, as well as keypoints of gesture and posture. 2D Pattern. Classical 2D ConvNets such as ResNet [23] and VGG [68] have significantly succeeded in image-based recognition. Here, we reuse them with minimal change. For processing a clip, the hidden features of sampled frames are extracted simultaneously and then aggregated by a 1D convolutional layer. For the skeleton keypoints, we design Multi-Layer Perceptrons (MLPs) with GeLU [24] activation to perform feature extraction. Meanwhile, a Spatial Embedding (SE) is also added to provide location information. 2D + Timing Pattern. This pattern aims to introduce an additional sequence model after 2D ConvNets to learn temporal representations. As a result, a Transformer Encoder * https://gdpr-info.eu/ Face Body Gesture Posture Posture Scene Scene Candidate Sub-networks Driver Emotion Driver Behavior Traffic Context Vehicle Condition Feature Fusion Module Figure 4. Our assistive driving perception framework pipeline. 
(TransE) [74] is employed to refine the hidden features among sampled frames and then aggregated by a temporal convolutional layer. Furthermore, we augment a Temporal Embedding (TE) for the MLPs to maintain the temporal dynamics of the gesture and posture modalities. 3D Pattern. The 3D network structures directly model hierarchical representations by capturing spatio-temporal information. We consider various impressive models, including 3D-ResNet [22], C3D [72], I3D [4], SlowFast [19], and TimeSFormer [2]. Furthermore, the 3D versions of lightweight networks such as MobileNet-V1/V2 [25, 65] and ShuffleNet-V1/V2 [96, 51], which are resourceefficient for DMS, are also considered. In this case, we introduce the remarkable ST-GCN [85] to process the skeleton sequences via multi-level spatio-temporal graphs. 4.2. Feature Fusion and Learning Strategies How to effectively fuse the multi-stream/modal features extracted by the above candidate networks is crucial for diverse perception tasks. To this end, we propose two sophisticated feature-level fusion modules to learn valuable shared representations among multiple features. Adaptive Fusion Module. Modality heterogeneity leads to distinct features contributing differently to the final prediction. The adaptive fusion module aims to assign dynamic weights to target features Fta \u2208{hf, hb, hg, hp, hs} from the face, body, gesture, posture, and scene based on their importance. Specifically, we design one shared query vector q \u2208Rd\u00d71 to obtain the attention values \u03c8ta as follows: \u03c8ta = qT \u00b7 tanh(Wta \u00b7 Fta + bta), (1) where Wta \u2208Rd\u00d7d and bta \u2208Rd\u00d71 are learnable parameters. Immediately, the attention values \u03c8ta are normalized with the softmax function to obtain the final weights: \u03b3ta = exp(\u03c8ta) P ta\u2208{f,b,g,p,s} exp(\u03c8ta). (2) The process provides optimal fusion weights for each feature to highlight the powerful features while suppressing the \fTable 2. Comparison results of baseline models in three distinct patterns on the AIDE for four tasks. In each pattern, the best results are marked in bold, and the second-best results are marked underlined. The following abbreviations are used. Res: ResNet [23]; MLP: multi-layer perception; SE: spatial embedding; TE: temporal embedding; TransE: transformer encoder [74]; PP: pre-training on the Places365 [99] dataset; CG: coarse-grained. 
Pattern Backbone DER DBR TCR VCR ID Face Body Gesture Posture Scene CG-Acc CG-F1 Acc F1 CG-Acc CG-F1 Acc F1 Acc F1 Acc F1 2D Res18 [23] Res34 MLP+SE MLP+SE PP-Res18 [99] 71.08 67.54 69.05 63.06 74.84 74.92 63.87 59.52 88.01 86.63 78.16 77.27 (1) Res18 Res34 MLP+SE MLP+SE Res34 73.23 70.47 71.26 68.71 75.37 75.58 65.35 63.29 83.74 81.28 77.12 75.23 (2) Res34 Res50 MLP+SE MLP+SE Res50 72.62 68.75 69.68 64.83 73.01 72.75 59.77 54.64 80.13 74.47 71.26 69.53 (3) VGG13 [68] VGG16 MLP+SE MLP+SE VGG16 73.15 70.25 70.72 67.11 74.71 74.61 63.65 58.12 82.77 80.42 77.94 76.29 (4) VGG16 VGG19 MLP+SE MLP+SE VGG19 71.23 67.79 69.31 64.67 72.66 72.73 62.34 57.33 83.58 80.67 75.13 73.96 (5) 2D + Timing Res18+TransE Res34+TransE MLP+TE MLP+TE PP-Res18+TransE 73.28 71.29 70.83 67.14 76.44 76.86 67.32 64.45 90.54 89.66 79.97 77.94 (6) Res18+TransE Res34+TransE MLP+TE MLP+TE Res34+TransE 75.37 74.68 72.65 70.96 76.35 76.77 67.08 64.11 86.63 84.87 78.46 76.51 (7) Res34+TransE Res50+TransE MLP+TE MLP+TE Res50+TransE 72.89 69.06 70.24 65.65 74.28 74.32 63.54 59.91 82.57 77.29 73.69 72.26 (8) VGG13+TransE VGG16+TransE MLP+TE MLP+TE VGG16+TransE 74.55 73.45 71.12 69.58 76.37 76.81 67.15 64.27 85.13 83.34 78.58 76.77 (9) VGG16+TransE VGG19+TransE MLP+TE MLP+TE VGG19+TransE 72.57 68.39 69.46 64.75 73.71 73.48 65.48 61.71 85.74 83.95 77.91 76.05 (10) 3D MobileNet-V1 [25] MobileNet-V1 ST-GCN ST-GCN MobileNet-V1 74.71 73.47 72.23 69.61 75.04 75.26 64.20 61.48 88.34 86.95 77.83 75.69 (11) MobileNet-V2 [65] MobileNet-V2 ST-GCN ST-GCN MobileNet-V2 70.27 66.54 68.47 62.58 70.28 69.98 61.74 54.74 86.54 82.38 78.66 76.78 (12) ShuffleNet-V1 [96] ShuffleNet-V1 ST-GCN ST-GCN ShuffleNet-V1 75.21 74.44 72.41 70.82 76.19 76.36 68.97 67.13 90.64 89.98 80.79 79.66 (13) ShuffleNet-V2 [51] ShuffleNet-V2 ST-GCN ST-GCN ShuffleNet-V2 74.38 73.42 70.94 69.53 73.56 73.78 64.04 61.75 89.33 87.54 78.98 77.52 (14) 3D-Res18 [22] 3D-Res34 ST-GCN ST-GCN 3D-Res34 73.07 70.23 70.11 65.15 78.16 78.35 66.52 64.57 88.51 87.26 81.12 79.71 (15) 3D-Res34 3D-Res50 ST-GCN ST-GCN 3D-Res50 70.61 67.10 69.13 62.95 71.26 71.01 63.05 57.97 87.82 84.86 79.31 76.87 (16) C3D [72] C3D ST-GCN ST-GCN C3D 66.35 62.04 63.05 57.06 73.57 73.64 63.95 60.36 85.41 80.44 77.01 74.84 (17) I3D [4] I3D ST-GCN ST-GCN I3D 71.43 68.05 70.94 65.99 74.38 74.36 66.17 61.35 87.68 84.78 79.81 78.66 (18) SlowFast [19] SlowFast ST-GCN ST-GCN SlowFast 75.17 74.24 72.38 70.77 75.53 75.73 61.58 59.41 86.86 84.66 78.33 76.66 (19) TimeSFormer [2] TimeSFormer ST-GCN ST-GCN TimeSFormer 76.52 74.92 74.87 72.56 73.73 73.91 65.18 63.24 92.12 91.81 78.81 76.91 (20) Table 3. Configuration for input streams. C: channels; F: frames; H: height; W: width; K: keypoint number; P: human number. Stream Modality Configuration Face RGB 3 (C)\u00d716 (F)\u00d764 (H)\u00d764 (W) Body RGB 3 (C)\u00d716 (F)\u00d7112 (H)\u00d7112 (W) Gesture Skeleton Keypoint 3 (C)\u00d716 (F)\u00d742 (K)\u00d71 (P) Posture Skeleton Keypoint 3 (C)\u00d716 (F)\u00d726 (K)\u00d71 (P) Scene RGB 3 (C)\u00d764 (F)\u00d7224 (H)\u00d7224 (W) weaker ones. The final representation Zfin \u2208Rd is obtained by the weighted summation: Zfin = X ta\u2208{f,b,g,p,s} \u03b3ta \u2299Fta. (3) Cross-attention Fusion Module. The core idea of this module is to learn pragmatic representations via finegrained information interaction. We utilize cross-attention to achieve potential adaption from the concatenated source feature Fso = [hf, hb, hg, hp, hs] \u2208R5d to the target features Fta to reinforce each target feature effectively. 
Inspired by the self-attention [74], we embed Fta into a space denoted as Qta = BN (Fta) WQta, while embedding Fso into two spaces denoted as Gso = BN (Fso) WGso and Uso = BN (Fso) WUso, respectively. WQta \u2208Rd\u00d7d, {WGso, WUso} \u2208R5d\u00d75d are embedding weights and BN means the batch normalization. Formally, the crossattention feature interaction is expressed as follows: Fso\u2192ta = softmax(QtaGT so)Uso \u2208Rd. (4) Subsequently, the forward computation is expressed as: Zta = BN(Fta) + Fso\u2192ta, (5) Zta = f\u03b4(Fta) + Zta, (6) where f\u03b4(\u00b7) is the feed-forward layers parametrized by \u03b4, and Zta \u2208{Zf, Zb, Zg, Zp, Zs} \u2208Rd. The reinforced target features Zta are concatenated to get the final representation Zfin \u2208Rd via dense layers. Finally, four fully connected layers with the task-specific number of neurons are introduced after Zfin. Learning Strategies. The standard cross-entropy losses are adopted as Lk task = \u22121 n Pn i=1 yk i \u00b7log\u02c6 yk i for the four classification tasks, where yk i is the ground truth of the k-th task and n is the number of samples in a batch. The total loss is computed as Ltotal = P4 k=1 \u03bbkLk task, where \u03bbk is the trade-off weight. To seek a suitable balance among multiple tasks, we introduce the dynamic weight average [46] to adaptively update the weight \u03bbk of each task at each epoch. 5. Experiments 5.1. Data Processing The input streams are selected from uniform temporal position sampling in synchronized video clips and skeleton sequences, resulting in every 16-frame sample for face, body, gesture, and posture data. To learn the scene semantics efficiently, we merge the sampled clips from the four whole views to produce each 64-frame scene data. Each sample is flipped horizontally and vertically with a 50% random probability for data augmentation. For the left-righthand keypoints, we create a link between joints #94 and #115 to form an overall gesture topology for processing by a single ST-GCN [85]. The detailed input configurations for the different streams in each sample are shown in Table 3. 5.2. Implementation Details Experimental Setup. The whole framework is built on the PyTorch-GPU [60] using four Nvidia Tesla V100 GPUs. The AdamW [50] optimizer is adopted for network optimization with an initial learning rate of 1e-3 and a weight \fTable 4. Experimental results for different streams/modalities. Only weighted F1 scores are reported due to similar results to Acc. Stream/Modality DER DBR TCR VCR Face Body Gesture Posture Scene F1 F1 F1 F1 \" 66.41 51.07 48.51 41.69 \" 63.93 62.38 55.47 50.01 \" 52.21 57.97 50.74 58.26 \" 65.52 63.15 55.28 47.32 \" 49.75 45.68 86.33 75.84 \" \" 67.34 62.93 59.05 52.97 \" \" \" 67.88 65.42 65.18 64.40 \" \" \" \" 70.27 66.84 73.63 67.54 \" \" \" \" \" 70.82 67.13 89.98 79.66 decay of 1e-4. For a fair comparison, the uniform batch size and epoch across models are set to 16 and 30, respectively. The output dimension d of all models is converted to 128 by minor structural adjustments. In practice, all the hyper-parameters are determined via the validation set. Our cross-attention fusion module is the default fusion strategy. Evaluation Metric. We measure recognition performance by classification accuracy (Acc) and weighted F1 score (F1). Considering the demand for practicality [38] in DMS, we provide three-category evaluations of polar emotions and two-category evaluations of abnormal behaviors in the main comparison. 
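Stepping back to the feature-fusion strategies of Section 4.2, the sketch below gives minimal PyTorch modules for the adaptive fusion (Eqs. (1)-(3)) and cross-attention fusion (Eqs. (4)-(6)). It simplifies some details that the equations leave per-stream (e.g., a single shared projection instead of stream-specific weights) and is not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveFusion(nn.Module):
    """Eqs. (1)-(3): score each stream with a shared query, softmax-normalize,
    and return the weighted sum of the five d-dim stream features."""
    def __init__(self, d: int = 128):
        super().__init__()
        self.q = nn.Parameter(torch.randn(d, 1))
        self.proj = nn.Linear(d, d)   # simplification: one W shared across streams

    def forward(self, feats):                     # feats: list of 5 tensors, each (B, d)
        stacked = torch.stack(feats, dim=1)       # (B, 5, d)
        scores = torch.tanh(self.proj(stacked)) @ self.q   # (B, 5, 1)
        weights = F.softmax(scores, dim=1)
        return (weights * stacked).sum(dim=1)     # (B, d)

class CrossAttentionFusion(nn.Module):
    """Eqs. (4)-(6): reinforce each target feature with the concatenated source
    feature via single-head cross-attention, then fuse the reinforced features."""
    def __init__(self, d: int = 128, n_streams: int = 5):
        super().__init__()
        self.wq = nn.Linear(d, d, bias=False)
        self.wg = nn.Linear(n_streams * d, n_streams * d, bias=False)
        self.wu = nn.Linear(n_streams * d, n_streams * d, bias=False)
        self.bn_t = nn.BatchNorm1d(d)
        self.bn_s = nn.BatchNorm1d(n_streams * d)
        self.ffn = nn.Sequential(nn.Linear(d, d), nn.GELU(), nn.Linear(d, d))
        self.out = nn.Linear(n_streams * d, d)

    def forward(self, feats):                     # feats: list of 5 tensors, each (B, d)
        f_so = torch.cat(feats, dim=-1)           # concatenated source feature (B, 5d)
        g = self.wg(self.bn_s(f_so))
        u = self.wu(self.bn_s(f_so))
        fused = []
        for f_ta in feats:
            q = self.wq(self.bn_t(f_ta))                               # (B, d)
            attn = F.softmax(q.unsqueeze(2) * g.unsqueeze(1), dim=-1)  # softmax(Q G^T), (B, d, 5d)
            f_so_to_ta = (attn @ u.unsqueeze(2)).squeeze(2)            # Eq. (4), (B, d)
            z_ta = self.bn_t(f_ta) + f_so_to_ta                        # Eq. (5)
            z_ta = self.ffn(f_ta) + z_ta                               # Eq. (6)
            fused.append(z_ta)
        return self.out(torch.cat(fused, dim=-1))                      # final Z_fin, (B, d)

# Usage sketch: feats = [face, body, gesture, posture, scene], each (B, 128)
# z_fin = CrossAttentionFusion()(feats)
```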
Please refer to the supplementary for the new taxonomy. The corresponding metrics are the coarsegrained accuracy (CG-Acc) and the F1 score (CG-F1). 5.3. Experimental Results and Analyses Main Performance Comparison. As shown in Table 2, we comprehensively report the comparison results of different baseline models combined in the three learning patterns. The following are some key observations. (i) The overall performance (Acc/F1) of the DER, DBR, TCR, and VCR tasks approaches only around 72%, 67%, 89%, and 79%, respectively, which still leaves considerable improvement room. (ii) The results in 3D and 2D + Timing patterns are generally better than those in 2D for all four tasks, demonstrating that considering temporal information can help improve perception performance. This makes sense as sequential modeling captures the rich dynamical clues among frames. For instance, the TransE-based Experiment (9) shows a significant gain of 3.50% and 6.15% in Acc and F1 on the DBR task compared to its 2D version (4). (iii) In the 3D pattern, resource-efficient model combinations can also achieve competitive or even better results compared to dense structures, as in Experiments (11, 13). This finding inspires researchers to consider the performance-efficiency trade-off when selecting suitable DMS models. (iv) Experiments (1, 6) reveal that the rich scene semantics in the Places365 dataset [99] facilitates capturing valuable context prototypes from the pre-trained backbone, leading to better performance on the TCR and VCR tasks. Importance of Distinct Streams/Modalities. To investiTable 5. Experimental results for different perception tasks. \u201c2DT\u201d means \u201c2D + Timing\u201d pattern. \u201cw/o\u201d stands for the without. Config Pattern DER DBR TCR VCR Acc F1 Acc F1 Acc F1 Acc F1 Full Tasks 2D 71.26 68.71 65.35 63.29 83.74 81.28 77.12 75.23 2DT 70.83 67.14 67.32 64.45 90.54 89.66 79.97 77.94 3D 74.87 72.56 65.18 63.24 92.12 91.81 78.81 76.91 w/o DER 2D 63.13 60.96 84.55 81.79 77.07 75.16 2DT 65.08 62.72 90.20 89.27 79.86 77.85 3D 63.47 61.35 91.86 90.74 78.85 76.94 w/o DBR 2D 70.29 67.44 80.92 78.66 74.58 72.92 2DT 68.03 64.58 87.22 86.51 77.51 75.67 3D 72.54 69.62 89.61 89.37 76.42 74.55 w/o TCR 2D 71.23 68.67 64.42 62.36 76.72 74.60 2DT 70.95 67.22 65.18 62.33 77.54 75.46 3D 74.61 72.28 65.15 63.19 78.02 76.15 w/o VCR 2D 71.43 69.17 63.24 63.15 83.65 81.14 2DT 70.79 67.02 66.11 63.04 91.23 90.28 3D 74.57 72.18 64.76 62.75 92.04 91.75 gate the impact of distinct streams/modalities, we conduct experiments using the performance-balanced combination (13) with increasing inputs. Table 4 shows the following interesting findings. (i) For isolated inputs, the scene stream provides the most beneficial visual clues for determining traffic context and vehicle condition. The body and posture modalities are more competitive on the DER and DBR tasks, indicating that bodily expressions can convey critical intent information. The observation is consistent with psychological research [9, 89]. (ii) With the progressive increase in information channels, various driver-based characteristics contribute to emotion and behavior understanding. (iii) The body and posture streams bring meaningful gains of 10.54% and 8.45% to the TCR task compared to the preceding one, showing that driver attributes are potentially related to the traffic context. For example, drivers usually change their gait during traffic jam to perform irrelevant operations [43]. 
(iv) The gesture modality promisingly improves the VCR task\u2019s result by 11.43% compared to the preceding one. A reasonable interpretation is that vehicle states highly correlate with specific hand motions, e.g., the two hands generally cross when the vehicle is turning. Necessity of Different Perception Tasks. In Table 5, we select the Experiments (2, 6, 20) to verify the necessity of different perception tasks in the three patterns. Each task is removed separately to observe the performance variation of the other tasks. We have the following insights. (i) When all four tasks are present simultaneously, the best overall results are achieved across different patterns, confirming that these tasks can synergistically achieve holistic perception. (ii) The interaction between the DER and DBR tasks is more significant, implying a solid mapping between driver-based representations. For instance, negative emotional states (e.g., anxiety) are more likely to induce secondary behaviors (e.g., looking around) and cause accidents [30]. (iii) The DBR task offers valuable average gains of 2.88%/2.74% and 2.46%/2.31% for the TCR and VCR tasks regarding Acc/F1, respectively, indicating a beneficial \f(a) Driver emotion recognition (b) Driver behavior recognition (c) Traffic context recognition (d) Vehicle condition recognition Figure 5. Confusion matrices for the best model performance from the four tasks. Table 6. Experimental results for multiple views and different fusion strategies. \u201cw/o\u201d stands for the without. Config DER DBR TCR VCR Acc Acc Acc Acc Full Framework 70.11 66.52 88.51 81.12 Effectiveness of Multiple Views w/o Inside View 68.08 64.41 88.54 80.64 w/o Front View 69.85 65.67 76.80 76.72 w/o Left View 70.11 66.48 84.39 71.43 w/o Right View 70.06 66.55 85.26 72.55 Impact of Different Fusion Strategies Adaptive Fusion Module (ours) 70.20 65.36 88.57 80.34 Feature Summation 66.85 64.53 85.19 77.56 Feature Concatenation 68.33 64.79 87.05 78.02 correlation between the driver\u2019s state inside the vehicle and the traffic scene outside. Effectiveness of Multiple Views. From Table 6 (top), we employ the Experiment (15) to evaluate the effectiveness of multiple views. (i) We find that the DER and DBR tasks benefit mainly from the inside view, as the interior scene provides necessary recognition clues, such as driver-related information and vehicle internals. The inside view brings gains (Acc) of 2.03% and 2.11% for driver emotion and behavior understanding, respectively. (ii) The three outof-vehicle views provide indispensable contributions to the TCR and VCR tasks, as they contain perceptually critical traffic context semantics. (iii) The multi-view setting of AIDE achieves an overall better performance across tasks via complementary information sources. Impact of Fusion Strategies. We explore the impact of different fusion strategies in Table 6 (bottom). (i) Our adaptive fusion achieves a noteworthy performance compared to the default cross-attention fusion, indicating that both fusion paradigms are superior and usable. (ii) Feature summation and concatenation may introduce redundant information leading to poor results and sub-optimal solutions. Analysis of Confusion Matrices. For the different classification perception tasks, Figure 5 shows the confusion matrices under the best results in each task to analyze the performance of each class. 
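For reference, per-task confusion matrices of this kind (analyzed point by point below) can be computed directly from predicted and ground-truth labels; a minimal sketch assuming scikit-learn, which is not necessarily the tooling used by the authors:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def normalized_confusion(y_true, y_pred, num_classes):
    """Row-normalized confusion matrix: entry (i, j) is the fraction of
    class-i samples that were predicted as class j."""
    cm = confusion_matrix(y_true, y_pred, labels=np.arange(num_classes))
    return cm / np.clip(cm.sum(axis=1, keepdims=True), 1, None)

# e.g., for the five-class DER task: normalized_confusion(labels, preds, 5)
```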
(i) Due to the interference of the long-tail distribution (Figure 3), some head classes are usually confused with other classes, such as \u201cpeace\u201d from the DER task in Figure 5(a) and \u201cforward moving\u201d from the VCR task in Figure 5(d). Moreover, the sparse tail samples lead to inadequate learning of class-specific representations, such as \u201cdozing off\u201d from the DBR task in Figure 5(b). These phenomena are inevitable because the driver remains safely driving for long periods of time in most naturalistic scenarios. (ii) In Figure 5(c), \u201ctraffic jam\u201d creates evident confusion with the other classes. The possible reason is that the rich information from distinct out-of-vehicle views unintentionally exaggerates the scene context clues. 6." + }, + { + "url": "http://arxiv.org/abs/2303.11921v2", + "title": "Context De-confounded Emotion Recognition", + "abstract": "Context-Aware Emotion Recognition (CAER) is a crucial and challenging task\nthat aims to perceive the emotional states of the target person with contextual\ninformation. Recent approaches invariably focus on designing sophisticated\narchitectures or mechanisms to extract seemingly meaningful representations\nfrom subjects and contexts. However, a long-overlooked issue is that a context\nbias in existing datasets leads to a significantly unbalanced distribution of\nemotional states among different context scenarios. Concretely, the harmful\nbias is a confounder that misleads existing models to learn spurious\ncorrelations based on conventional likelihood estimation, significantly\nlimiting the models' performance. To tackle the issue, this paper provides a\ncausality-based perspective to disentangle the models from the impact of such\nbias, and formulate the causalities among variables in the CAER task via a\ntailored causal graph. Then, we propose a Contextual Causal Intervention Module\n(CCIM) based on the backdoor adjustment to de-confound the confounder and\nexploit the true causal effect for model training. CCIM is plug-in and\nmodel-agnostic, which improves diverse state-of-the-art approaches by\nconsiderable margins. Extensive experiments on three benchmark datasets\ndemonstrate the effectiveness of our CCIM and the significance of causal\ninsight.", + "authors": "Dingkang Yang, Zhaoyu Chen, Yuzheng Wang, Shunli Wang, Mingcheng Li, Siao Liu, Xiao Zhao, Shuai Huang, Zhiyan Dong, Peng Zhai, Lihua Zhang", + "published": "2023-03-21", + "updated": "2023-03-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction As an essential technology for understanding human intentions, emotion recognition has attracted significant attention in various fields such as human-computer interaction [1], medical monitoring [28], and education [40]. Previous works have focused on extracting multimodal emotion cues from human subjects, including facial expressions [9, 10, 49], acoustic behaviors [2, 50, 52], and body \u00a7Corresponding Author. Engagement Engagement Excitement Happiness Pleasure Affection Happiness Pleasure Disapproval Disconnection Disquietment Doubt/Confusion Engagement Sadness Testing phase GT\uff1a GT\uff1a GT\uff1a GT\uff1a GT\uff1a Training phase Prediction Kosti et al. Affection Anticipation Engagement Excitement Pleasure Kosti et al. + CCIM(ours) Similar Context Disapproval Disconnection Disquietment Doubt/Confusion Engagement Sadness Grass, trees, outdoors,etc Figure 1. Illustration of the context bias in the CAER task. GT means the ground truth. 
Most images contain similar contexts in the training data with positive emotion categories. In this case, the model learns the spurious correlation between specific contexts and emotion categories and gives wrong results. Thanks to CCIM, the simple baseline [19] achieves more accurate predictions. postures [25, 53], benefiting from advances in deep learning algorithms [6, 7, 21, 26, 27, 43, 44, 46, 47, 54, 55, 59]. Despite the impressive improvements achieved by subject-centered approaches, their performance is limited by natural and unconstrained environments. Several examples in Figure 1 (left) show typical situations on a visual level. Instead of well-designed visual contents, multimodal representations of subjects in wild-collected images are usually indistinguishable (e.g., ambiguous faces or gestures), which forces us to exploit complementary factors around the subject that potentially reflect emotions. Inspired by psychological studies [3], recent works [19, 22, 23, 29, 56] have suggested that contextual information contributes effective emotion cues for Context-Aware Emotion Recognition (CAER). The contexts are considered to include the place category, the place attributes, the objects, or the actions of others around the subject [20]. The majority of such research typically follows a common pipeline: (1) Obtaining the unimodal/multimodal representations of the recognized subject; (2) Building diverse contexts and extracting emotion-related representations; (3) Designing fusion strategies to combine these features for emotion label predictions. Although existing methods have improved modestly through complex module stacking [12, 23, 51] and tricks [16, 29], they invariably suffer from a context bias of the datasets, which has long been overlooked. Recalling the process of generating CAER datasets, different annotators were asked to label each image according to what they subjectively thought people in the images with diverse contexts were feeling [20]. This protocol makes the preference of annotators inevitably affect the distribution of emotion categories across contexts, thereby leading to the context bias. Figure 1 illustrates how such bias confounds the predictions. Intriguingly, most of the images in the training data contain vegetated scenes with positive emotion categories, while negative emotions in similar contexts are almost nonexistent. Therefore, the baseline [19] is potentially misled into learning spurious dependencies between context-specific features and label semantics. When given test images with similar contexts but negative emotion categories, the model inevitably infers the wrong emotional states. More intriguingly, a toy experiment is performed to verify the strong bias in CAER datasets. Figure 2. We show a toy experiment on the EMOTIC [20] and CAER-S [22] datasets for scene categories of angry and happy emotions. More scene categories with normalized zero conditional entropy reveal a strong presence of the context bias.
This test aims to observe how well emotions correlate with contexts (e.g., scene categories). Specifically, we employ the ResNet-152 [15] pre-trained on Places365 [58] to predict scene categories from images with three common emotion categories (i.e., \u201canger\u201d, \u201chappy\u201d, and \u201cfear\u201d) across two datasets. The top 200 most frequent scenes from each emotion category are selected, and the normalized conditional entropy of each scene category across the positive and negative set of a specific emotion is computed [30]. While analyzing correlations between scene contexts and emotion categories in Figure 2 (e.g., \u201canger\u201d and \u201chappy\u201d), we find that more scene categories with the zero conditional entropy are most likely to suggest the significant context bias in the datasets, as it shows the presence of these scenes only in the positive or negative set of emotions. Concretely, for the EMOTIC dataset [20], about 40% of scene categories for anger have zero conditional entropy while about 45% of categories for happy (i.e., happiness) have zero conditional entropy. As an intuitive example, most party-related scene contexts are present in the samples with the happy category and almost non-existent in the negative categories. These observations confirm the severe context bias in CAER datasets, leading to distribution gaps in emotion categories across contexts and uneven visual representations. Motivated by the above observation, we attempt to embrace causal inference [31] to reveal the culprit that poisons the CAER models, rather than focusing on beating them. As a revolutionary scientific paradigm that facilitates models toward unbiased prediction, the most important challenge in applying classical causal inference to the modern CAER task is how to reasonably depict true causal effects and identify the task-specific dataset bias. To this end, this paper attempts to address the challenge and rescue the bias-ridden models by drawing on human instincts, i.e., looking for the causality behind any association. Specifically, we present a causality-based bias mitigation strategy. We first formulate the procedure of the CAER task via a proposed causal graph. In this case, the harmful context bias in datasets is essentially an unintended confounder that misleads the models to learn the spurious correlation between similar contexts and specific emotion semantics. From Figure 3, we disentangle the causalities among the input images X, subject features S, context features C, confounder Z, and predictions Y . Then, we propose a simple yet effective Contextual Causal Intervention Module (CCIM) to achieve context-deconfounded training and use the do-calculus P(Y |do(X)) to calculate the true causal effect, which is fundamentally different from the conventional likelihood P(Y |X). CCIM is plug-in and model-agnostic, with the backdoor adjustment [14] to de-confound the confounder and eliminate the impact of the context bias. We comprehensively evaluate the effectiveness and superiority of CCIM on three standard and biased CAER datasets. Numerous experiments and analyses demonstrate that CCIM can significantly and consistently improve existing baselines, achieving a new state-of-the-art (SOTA). 
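Before the contribution list, the conditional-entropy measurement used in the toy experiment above can be sketched as follows; this is our reading of the described protocol, and the function and variable names are illustrative only.

```python
import numpy as np
from collections import Counter

def scene_conditional_entropy(pos_scenes, neg_scenes):
    """For each scene category, compute the normalized conditional entropy of the
    positive/negative split of one emotion given that scene.
    pos_scenes / neg_scenes: predicted scene labels for images in the positive /
    negative set of the emotion. Returns {scene: entropy in [0, 1]};
    zero means the scene occurs only in one of the two sets."""
    pos, neg = Counter(pos_scenes), Counter(neg_scenes)
    entropies = {}
    for scene in set(pos) | set(neg):
        p = pos[scene] / (pos[scene] + neg[scene])
        h = 0.0
        for q in (p, 1.0 - p):
            if q > 0:
                h -= q * np.log2(q)
        entropies[scene] = h  # already normalized: the maximum, log2(2), equals 1
    return entropies
```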
The main contributions can be summarized as follows: \u2022 To our best knowledge, we are the first to investigate the adverse context bias of the datasets in the CAER task from the causal inference perspective and identify that such bias is a confounder, which misleads the models to learn the spurious correlation. \u2022 We propose CCIM, a plug-in contextual causal intervention module, which could be inserted into most CAER models to remove the side effect caused by the 2 \fconfounder and facilitate a fair contribution of diverse contexts to emotion understanding. \u2022 Extensive experiments on three standard CAER datasets show that the proposed CCIM can facilitate existing models to achieve unbiased predictions. 2. Related Work Context-Aware Emotion Recognition. As a promising task, Context-Aware Emotion Recognition (CAER) not only draws on human subject-centered approaches [4, 49, 52] to perceive emotion via the face or body, but also considers the emotion cues provided by background contexts in a joint and boosting manner. Existing CAER models invariably extract multiple representations from these two sources and then perform feature fusion to make the final prediction [12,22\u201324,29,37,51,56]. For instance, Kosti et al. [19] establish the EMOTIC dataset and propose a baseline Convolutional Neural Network (CNN) model that combines the body region and the whole image as the context. Hoang et al. [16] propose an extra reasoning module to exploit the images, categories, and bounding boxes of adjacent objects in background contexts to achieve visual relationship detection. For a deep exploration of scene context, Li et al. [23] present a body-object attention module to estimate the contributions of background objects and a body-part attention module to recalibrate the channel-wise body feature responses. Although the aforementioned approaches achieve impressive improvements by exploring diverse contextual information, they all neglect the limitation on model performance caused by the context bias of the datasets. Instead of focusing on beating the latest SOTA, we identify the bias as a harmful confounder from a causal inference perspective and significantly improve the existing models with the proposed CCIM. Causal Inference. Causal inference is an analytical tool that aims to infer the dynamics of events under changing conditions (e.g., different treatments or external interventions) [31], which has been extensively studied in economics, statistics, and psychology [11, 41]. Without loss of generality, causal inference follows two main ways: structured causal model [32] and potential outcome framework [38], which assist in revealing the causality rather than the superficial association among variables. Benefiting from the great potential of the causal tool to provide unbiased estimation solutions, it has been gradually applied to various computer tasks, such as computer vision [5, 34, 39,42,45] and natural language processing [17,35,57]. Inspired by visual commonsense learning [45], to our best knowledge, this is the first investigation of the confounding effect through causal inference in the CAER task while exploiting causal intervention to interpret and address the confounding bias from contexts. X Z C S Y X Z C S Y Do-operator X: Input Images S: Subject Features C: Context Features (a) (b) Z: Confounder Y: Predictions Figure 3. Illustration of our CAER causal graph. (a) The conventional likelihood P(Y |X). (b) The causal intervention P(Y |do(X)). 3. Methodology 3.1. 
Causal View at CAER Task Firstly, we formulate a tailored causal graph to summarize the CAER framework. In particular, we follow the same graphical notation as the structured causal model [32] due to its intuitiveness and interpretability. It is a directed acyclic graph G = {N, E} that can be paired with data to produce quantitative causal estimates. The nodes N denote variables and the links E denote direct causal effects. As shown in Figure 3, there are five variables involved in the CAER causal graph, which are the input images X, subject features S, context features C, confounder Z, and predictions Y. Note that our causal graph is applicable to a variety of CAER methods, since it is highly general, imposing no constraints on the detailed implementations. The details of the causal relationships are described below. Z → X. Different subjects are recorded in various contexts to produce the images X. On the one hand, the annotators make subjective and biased guesses about subjects' emotional states and give their annotations [18, 20], e.g., subjects are usually blindly assigned positive emotions in vegetation-covered contexts. On the other hand, the data nature leads to an unbalanced representation of emotions in the real world [13]. That is, it is much easier to collect positive emotions in contexts with comfortable atmospheres than negative ones. The context bias caused by the above situations is treated as the harmful confounder Z, which establishes spurious connections between similar contexts and specific emotion semantics. For the input images X, Z determines the biased content that is recorded, i.e., Z → X. Z → C → Y. C represents the total context representation obtained by contextual feature extractors. C may come from the aggregation of diverse context features, depending on the method. The causal path Z → C represents the detrimental Z confounding the model to learn unreliable emotion-related context semantics of C. In this case, the impure C further affects the predictions Y of the emotion labels, which is reflected via the link C → Y. Although Z potentially provides priors from the training data for better estimation when the subjects' features are ambiguous, it misleads the model to capture spurious "context-emotion" mappings during training, resulting in biased predictions. X → C → Y & X → S → Y. S represents the total subject representation obtained by subject feature extractors. Depending on the method, S may come from the face, the body, or the integration of their features. In the CAER causal graph, we can see that the desired effect of X on Y follows from two causal paths: X → C → Y and X → S → Y. These two causal paths reflect that the CAER model estimates Y based on the context features C and subject features S extracted from the input images X. In practice, C and S are usually integrated to make the final prediction jointly, e.g., via feature concatenation [29]. According to the causal theory [31], the confounder Z is the common cause of the input images X and the corresponding predictions Y. The positive effects of context and subject features providing valuable semantics follow the causal paths X → C/S → Y, which we aim to achieve. Unfortunately, the confounder Z causes the negative effect of misleading the model to focus on spurious correlations instead of pure causal relationships. This adverse effect follows the backdoor causal path X ← Z → C → Y. 3.2.
Causal Intervention via Backdoor Adjustment In Figure 3(a), existing CAER methods rely on the likelihood P(Y|X). This process is formulated by the Bayes rule: P(\bm{Y}|\bm{X}) = \sum_{\bm{z}} P(\bm{Y}|\bm{X}, \bm{S}=f_{s}(\bm{X}), \bm{C}=f_{c}(\bm{X},\bm{z}))\, P(\bm{z}|\bm{X}), (1) where f_{s}(\cdot) and f_{c}(\cdot) are two generalized encoding functions that obtain the total S and C, respectively. The confounder Z introduces the observational bias via P(z|X). To address the confounding effect brought by Z and make the model rely on the pure X to estimate Y, an intuitive idea is to intervene on X and force each context semantics to contribute to the emotion prediction fairly. The process can be viewed as conducting a randomized controlled experiment by collecting images of subjects with any emotion in any context. However, this intervention is impossible due to the infinite number of images that combine various subjects and contexts in the real world. To solve this, we stratify Z based on the backdoor adjustment [31] to achieve the causal intervention P(Y|do(X)) and block the backdoor path between X and Y, where the do-calculus is an effective approximation for the imaginative intervention [14]. Specifically, we seek the effect of stratified contexts and then estimate the average causal effect by computing a weighted average based on the proportion of samples containing different context prototypes in the training data. In Figure 3(b), the causal path from Z to X is cut off, and the model will approximate the causal intervention P(Y|do(X)) rather than the spurious association P(Y|X). By applying the Bayes rule on the new graph, Eq. (1) with the intervention is formulated as: P(\bm{Y}|do(\bm{X})) = \sum_{\bm{z}} P(\bm{Y}|\bm{X}, \bm{S}=f_{s}(\bm{X}), \bm{C}=f_{c}(\bm{X},\bm{z}))\, P(\bm{z}). (2)
Figure 4. (a) The generation process of the confounder dictionary Z. (b) A general pipeline for the context-deconfounded training. The red dotted box shows the core component that achieves the powerful approximation to causal intervention: our CCIM.
As z is no longer affected by X, the intervention intentionally forces X to incorporate every z fairly into the predictions of Y, subject to the proportion of each z in the whole. 3.3. Context-Deconfounded Training with CCIM To implement the theoretical and imaginative intervention in Eq. (2), we propose a Contextual Causal Intervention Module (CCIM) to achieve the context-deconfounded training for the models. In the general pipeline of the CAER task illustrated in Figure 4(b), CCIM is inserted in a plug-in manner after the original integrated feature of existing methods. Then, the output of CCIM is used for prediction after passing the final task-specific classifier. The implementation of CCIM is described below. Confounder Dictionary. Since the number of contexts is large in the real world and there is no ground-truth contextual information in the training set, we approximate it as a stratified confounder dictionary Z = [z_1, z_2, \ldots, z_N], where N is a hyperparameter representing the size, and each z_i \in \mathbb{R}^{d} represents a context prototype.
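To make the difference between Eq. (1) and Eq. (2) concrete, the following toy illustration compares the two weightings over the confounder strata; every probability table below is invented purely for illustration and is not taken from the paper.

```python
# Toy contrast of Eq. (1) vs. Eq. (2) with two context strata z0, z1.
p_z = {"z0": 0.5, "z1": 0.5}              # prior proportions of the context prototypes, P(z)
p_z_given_x = {"z0": 0.9, "z1": 0.1}      # biased exposure of this image to contexts, P(z|X)
p_y_given_xz = {"z0": 0.8, "z1": 0.3}     # P(Y | X, S=f_s(X), C=f_c(X, z))

likelihood = sum(p_y_given_xz[z] * p_z_given_x[z] for z in p_z)   # Eq. (1): P(Y|X)   = 0.75
intervention = sum(p_y_given_xz[z] * p_z[z] for z in p_z)         # Eq. (2): P(Y|do(X)) = 0.55
print(likelihood, intervention)  # the do-operator removes the bias injected through P(z|X)
```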
As shown in Figure 4(a), we first mask the target subject in each training image based on the subject's bounding box to generate the context image set I. Subsequently, the image set I is fed to the pre-trained backbone network \phi(\cdot) to obtain the context feature set M = \{m_k \in \mathbb{R}^{d}\}_{k=1}^{N_m}, where N_m is the number of training samples. To compute the context prototypes, we use K-Means++ with principal component analysis to learn Z so that each z_i represents a form of context cluster. Each z_i is set to the average feature of the corresponding cluster in K-Means++, i.e., z_i = \frac{1}{N_i}\sum_{j=1}^{N_i} m_j^{i}, where N_i is the number of context features in the i-th cluster. Instantiation of the Proposed CCIM. Since the calculation of P(Y|do(X)) requires multiple forward passes over all z, the computational overhead is expensive. To reduce the computational cost, we apply the Normalized Weighted Geometric Mean (NWGM) [48] to approximate the above expectation at the feature level as: P(\bm{Y}|do(\bm{X})) \approx P(\bm{Y}|\bm{X}, \bm{S}=f_{s}(\bm{X}), \bm{C}=\sum_{\bm{z}} f_{c}(\bm{X},\bm{z}) P(\bm{z})). (3)
Table 1. Average precision (%) of different methods for each emotion category on the EMOTIC dataset. *: results from the original reports. †: results from implementation. The footnotes * and † of Tables 2 and 3 follow the same interpretation.
Category | EMOT-Net [19] | EMOT-Net + CCIM | GCN-CNN [56] | GCN-CNN + CCIM | CAER-Net [22] | CAER-Net + CCIM | RRLA [23] | VRD [16] | EmotiCon [29] | EmotiCon + CCIM
Affection 26.47 34.87 47.52 36.18 22.36 23.08 37.93 44.48 38.55 40.77
Anger 11.24 13.05 11.27 12.53 12.88 12.99 13.73 30.71 14.69 15.48
Annoyance 15.26 18.04 12.33 13.73 14.42 15.28 20.87 26.47 24.68 24.47
Anticipation 57.31 94.19 63.2 92.32 52.85 90.03 61.08 59.89 60.73 95.15
Aversion 7.44 13.41 6.81 15.41 3.26 12.96 9.61 12.43 11.33 19.38
Confidence 80.33 74.9 74.83 75.01 72.68 73.24 80.08 79.24 68.12 75.81
Disapproval 16.14 19.87 12.64 14.45 15.37 16.38 21.54 24.54 18.55 23.65
Disconnection 20.64 27.72 23.17 30.52 22.01 23.39 28.32 34.24 28.73 31.93
Disquietment 19.57 19.12 17.66 20.85 10.84 18.1 22.57 24.23 22.14 26.84
Doubt/Confusion 31.88 19.35 19.67 20.43 26.07 17.66 33.5 25.42 38.43 34.28
Embarrassment 3.05 6.23 1.58 9.21 1.88 5.86 4.16 4.26 10.31 16.73
Engagement 86.69 88.93 87.31 96.88 73.71 70.04 88.12 88.71 86.23 97.41
Esteem 17.86 21.69 12.05 22.72 15.38 16.67 20.5 17.99 25.75 27.44
Excitement 78.05 73.81 72.68 73.21 70.42 71.08 80.11 74.21 80.75 81.59
Fatigue 8.87 9.96 12.93 12.66 6.29 9.73 17.51 22.62 19.35 15.53
Fear 15.7 9.04 6.15 10.31 7.47 6.61 15.56 13.92 16.99 15.37
Happiness 58.92 78.09 72.9 75.64 53.73 62.34 76.01 83.02 80.45 83.55
Pain 9.46 14.71 8.22 15.36 8.16 9.43 14.56 16.68 14.68 17.76
Peace 22.35 22.79 30.68 23.88 19.55 20.21 26.76 28.91 35.72 38.94
Pleasure 46.72 46.59 48.37 45.52 34.12 35.37 55.64 55.47 67.31 64.57
Sadness 18.69 17.47 23.9 22.08 17.75 13.24 30.8 42.87 40.26 45.63
Sensitivity 9.05 7.91 4.74 8.02 6.94 4.74 9.59 15.89 13.94 17.04
Suffering 17.67 15.35 23.71 18.45 14.85 11.89 30.7 46.23 48.05 21.52
Surprise 22.38 13.12 8.44 13.93 17.46 11.7 17.92 16.27 19.6 26.81
Sympathy 15.23 32.6 19.45 33.95 14.89 28.59 15.26 15.37 16.74 47.6
Yearning 9.22 10.08 9.86 11.58 4.84 8.61 10.11 10.04 15.08 12.25
mAP 27.93† 30.88† (↑2.95) 28.16† 31.72† (↑3.56) 23.85† 26.51† (↑2.66) 32.41* 35.16* 35.28† 39.13† (↑3.85)
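A minimal sketch of the dictionary generation in Figure 4(a) is given below. It assumes the masked context images and a Places365-pretrained backbone that returns pooled d-dimensional features are already available; the function name, the PCA dimension, and other details are our assumptions rather than the paper's exact implementation.

```python
import numpy as np
import torch
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

@torch.no_grad()
def build_confounder_dictionary(context_images, backbone, n_prototypes=256, pca_dim=128):
    """Masked context images -> feature set M -> confounder dictionary Z and prior P(z_i).

    context_images: tensor (Nm, 3, H, W) with the target subject masked out.
    backbone: feature extractor (e.g., ResNet-152 pretrained on Places365) assumed to
    return (Nm, d) features from its last pooling layer.
    """
    feats = backbone(context_images).cpu().numpy()             # feature set M, shape (Nm, d)
    reduced = PCA(n_components=pca_dim).fit_transform(feats)   # PCA only for the clustering step
    labels = KMeans(n_clusters=n_prototypes, init="k-means++", n_init=10).fit_predict(reduced)
    Z = np.zeros((n_prototypes, feats.shape[1]), dtype=np.float32)
    prior = np.zeros(n_prototypes, dtype=np.float32)
    for i in range(n_prototypes):
        members = feats[labels == i]
        Z[i] = members.mean(axis=0)            # z_i = (1 / N_i) * sum_j m_j^i
        prior[i] = len(members) / len(feats)   # P(z_i) = N_i / N_m
    return torch.from_numpy(Z), torch.from_numpy(prior)
```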
Inspired by [45], we parameterize a network model to approximate the conditional probability of Eq. (3) as follows: P(\bm{Y}|do(\bm{X})) = \bm{W}_{h}\bm{h} + \bm{W}_{g}\mathbb{E}_{\bm{z}}[g(\bm{z})], (4) where \bm{W}_{h} \in \mathbb{R}^{d_m \times d_h} and \bm{W}_{g} \in \mathbb{R}^{d_m \times d} are the learnable parameters, and \bm{h} = \phi(\bm{s}, \bm{c}) \in \mathbb{R}^{d_h \times 1}. \phi(\cdot) is a fusion strategy (e.g., concatenation) that integrates s and c into the joint representation h. Note that the above approximation is reasonable, because the effect on Y comes from S, C, and the confounder Z. Immediately, we approximate \mathbb{E}_{\bm{z}}[g(\bm{z})] as a weighted integration of all context prototypes: \mathbb{E}_{\bm{z}}[g(\bm{z})] = \sum_{i=1}^{N} \lambda_{i} \bm{z}_{i} P(\bm{z}_{i}), (5) where \lambda_i is a weight coefficient that measures the importance of each z_i after interacting with the original feature h, and P(z_i) = N_i / N_m. In practice, we provide two implementations of \lambda_i, based on dot-product attention and additive attention: \text{Dot Product}: \lambda_{i} = \mathrm{softmax}\!\left(\frac{(\bm{W}_{q}\bm{h})^{T}(\bm{W}_{k}\bm{z}_{i})}{\sqrt{d}}\right), (6) \text{Additive}: \lambda_{i} = \mathrm{softmax}\!\left(\bm{W}_{t}^{T} \cdot \mathrm{Tanh}(\bm{W}_{q}\bm{h} + \bm{W}_{k}\bm{z}_{i})\right), (7) where \bm{W}_{t} \in \mathbb{R}^{d_n \times 1}, \bm{W}_{q} \in \mathbb{R}^{d_n \times d_h}, and \bm{W}_{k} \in \mathbb{R}^{d_n \times d} are mapping matrices. 4. Experiments 4.1. Datasets and Evaluation Metrics Datasets. Our experiments are conducted on three standard datasets for the CAER task, namely the EMOTIC [20], CAER-S [22], and GroupWalk [29] datasets. EMOTIC contains 23,571 images of 34,320 annotated subjects in uncontrolled environments. The annotation of these images contains the bounding boxes of the target subjects' body regions and 26 discrete emotion categories. The standard partitioning of the dataset is 70% training set, 10% validation set, and 20% testing set. CAER-S includes 70k static images extracted from video clips of 79 TV shows to predict emotional states. These images are randomly split into training (70%), validation (10%), and testing (20%) images. These images are annotated with 7 emotion categories: Anger, Disgust, Fear, Happy, Sad, Surprise, and Neutral. GroupWalk consists of 45 videos that were captured using stationary cameras in 8 real-world settings. The annotations consist of the following discrete labels: Angry, Happy, Neutral, and Sad. The dataset is split into an 85% training set and a 15% testing set. Evaluation Metrics. Following [19, 29], we utilize the mean Average Precision (mAP) to evaluate the results on EMOTIC and GroupWalk. For CAER-S, the standard classification accuracy is used for evaluation. 4.2. Model Zoo Limited by the fact that most methods are not open source, we select four representative models to evaluate the effectiveness of CCIM, which have different network structures and contextual exploration mechanisms.
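The following is a minimal PyTorch sketch of the dot-product variant of CCIM (Eqs. (4)-(6)); the module and argument names are ours, and the layer shapes only follow the hyper-parameters reported later in Sec. 4.3 by assumption. The returned feature is fed to the task-specific classifier of the host model.

```python
import torch
import torch.nn as nn

class CCIM(nn.Module):
    """Contextual Causal Intervention Module, dot-product attention variant (Eqs. (4)-(6)).

    Z:     confounder dictionary, tensor (N, d) of context prototypes.
    prior: P(z_i) = N_i / N_m, tensor (N,).
    """
    def __init__(self, Z, prior, dim_h, dim_m=128, dim_n=256):
        super().__init__()
        self.register_buffer("Z", Z)            # (N, d)
        self.register_buffer("prior", prior)    # (N,)
        d = Z.shape[1]
        self.W_h = nn.Linear(dim_h, dim_m, bias=False)   # W_h h
        self.W_g = nn.Linear(d, dim_m, bias=False)       # W_g E_z[g(z)]
        self.W_q = nn.Linear(dim_h, dim_n, bias=False)
        self.W_k = nn.Linear(d, dim_n, bias=False)

    def forward(self, h):                        # h: (B, dim_h), the joint feature phi(s, c)
        q = self.W_q(h)                          # (B, dim_n)
        k = self.W_k(self.Z)                     # (N, dim_n)
        attn = torch.softmax(q @ k.t() / self.Z.shape[1] ** 0.5, dim=-1)  # lambda_i, Eq. (6)
        ez = (attn * self.prior) @ self.Z        # sum_i lambda_i z_i P(z_i), Eq. (5)
        return self.W_h(h) + self.W_g(ez)        # Eq. (4); pass this to the final classifier

# Toy shapes: 256 prototypes of dimension 2048, joint feature of dimension 256.
ccim = CCIM(torch.randn(256, 2048), torch.full((256,), 1 / 256), dim_h=256)
print(ccim(torch.randn(4, 256)).shape)  # torch.Size([4, 128])
```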
EMOT-Net [19] is a baseline Convolutional Neural Network (CNN) model with two branches. Its distinct branches capture foreground body features and background contextual information, respectively. GCN-CNN [56] utilizes different context elements to construct an affective graph and infers the affective relationships with a Graph Convolutional Network (GCN). CAER-Net [22] is a two-stream CNN model followed by an adaptive fusion module to reason emotions. The method focuses on the context of the entire image after hiding the face and on the emotion cues provided by the facial region. EmotiCon [29] introduces three context-aware streams. Besides the subject-centered multimodal extraction branch, they propose to use visual attention and depth maps to learn the scene and socio-dynamic contexts separately. For EMOT-Net, we re-implement the model following the available code. Meanwhile, we reproduce the results on the three datasets based on the details reported in the SOTA methods above (i.e., GCN-CNN, CAER-Net, and EmotiCon).
Table 2. Average precision (%) of different methods for each emotion category on the GroupWalk dataset.
Category | EMOT-Net [19] | EMOT-Net + CCIM | GCN-CNN [56] | GCN-CNN + CCIM | CAER-Net [22] | CAER-Net + CCIM | EmotiCon [29] | EmotiCon + CCIM
Angry 57.65 62.41 51.92 54.07 45.18 50.43 68.85 75.93
Happy 71.32 75.68 63.37 70.25 56.59 60.71 72.31 79.15
Neutral 43.1 41.03 40.26 39.49 39.32 37.84 50.34 48.66
Sad 61.24 63.84 58.15 61.85 52.96 54.06 70.8 73.48
mAP 58.33† 60.74† (↑2.41) 53.43† 56.42† (↑2.99) 48.51† 50.76† (↑2.25) 65.58† 69.31† (↑3.73)
Table 3. Emotion classification accuracy (%) of different methods on the CAER-S dataset.
CAER-Net [22] 73.47† | CAER-Net + CCIM 74.81† (↑1.34) | EMOT-Net [19] 74.51† | EMOT-Net + CCIM 75.82† (↑1.31) | GCN-CNN [56] 77.21† | GCN-CNN + CCIM 78.66† (↑1.45) | EmotiCon [29] 88.65† | EmotiCon + CCIM 91.17† (↑2.52) | SIB-Net [24] 74.56* | GRERN [12] 81.31* | RRLA [23] 84.82* | VRD [16] 90.49*
4.3. Implementation Details Confounder Setup. Firstly, except for the annotated EMOTIC, we utilize the pre-trained Faster R-CNN [36] to detect the bounding box of the target subject for each training sample on both CAER-S and GroupWalk. After that, the context images are generated by masking the target subjects on the training samples based on the bounding boxes. Then, we use the ResNet-152 [15] pre-trained on the Places365 [58] dataset to extract the context feature set M. Each context feature m is extracted from the last pooling layer, and the hidden dimension d is 2048. The rich scene context semantics in Places365 facilitate obtaining better context prototypes from the pre-trained backbone. On EMOTIC, CAER-S, and GroupWalk, the default size N (i.e., the number of clusters) of Z is 256, 128, and 256, respectively. Training Details. The CCIM and the reproduced methods are implemented on the PyTorch platform [33]. All models are trained on four Nvidia Tesla V100 GPUs. For a fair comparison, the training settings (e.g., loss function, batch size, learning rate strategy, etc.) of these models are consistent with the details reported in their original papers. For the implementation of our CCIM, the hidden dimensions d_m and d_n are set to 128 and 256, respectively.
The output dimension d_h of the joint feature h in the different methods is 256 (EMOT-Net), 1024 (GCN-CNN), 128 (CAER-Net), and 78 (EmotiCon). 4.4. Comparison with State-of-the-art Methods We comprehensively compare the CCIM-based models with recent SOTA methods, including RRLA [23], VRD [16], SIB-Net [24], and GRERN [12]. The default setting uses the dot-product attention of Eq. (6). Results on the EMOTIC Dataset. In Table 1, we observe that CCIM significantly improves existing models and achieves the new SOTA. Specifically, the CCIM-based EMOT-Net, GCN-CNN, CAER-Net, and EmotiCon improve the mAP scores by 2.95%, 3.56%, 2.66%, and 3.85%, respectively, outperforming the vanilla methods by large margins. In this case, these CCIM-based methods achieve competitive or better performance than the recent models RRLA and VRD. We also find that CCIM greatly improves the AP scores for some categories heavily persecuted by the confounder. For instance, CCIM helps raise the results of "Anticipation" and "Sympathy" in these CAER methods by 29%~37% and 14%~29%, respectively. Due to the adverse bias effect, the performance of most models is usually poor on infrequent categories, such as "Aversion" (AP scores of about 3%~12%) and "Embarrassment" (AP scores of about 1%~10%). Thanks to CCIM, the AP scores in these two categories are achieved at about 12%~19% and 5%~16%. Results on the GroupWalk Dataset. As shown in Table 2, our CCIM effectively improves the performance of EMOT-Net, GCN-CNN, CAER-Net, and EmotiCon on the GroupWalk dataset. The mAP scores for these models are increased by 2.41%, 2.99%, 2.25%, and 3.73%, respectively. Results on the CAER-S Dataset. The accuracy of different methods on the CAER-S dataset is reported in Table 3. The performance of EMOT-Net, GCN-CNN, and CAER-Net is consistently increased by CCIM, making each context prototype contribute fairly to the emotion classification results. These models are improved by 1.31%, 1.45%, and 1.34%, respectively. Moreover, the CCIM-based EmotiCon achieves a significant gain of 2.52% and outperforms all SOTA methods with an accuracy of 91.17%.
Figure 5. Emotion classification accuracy (%) for each category of different methods on the CAER-S dataset.
Discussion from the Causal Perspective. (i) Compared to CAER-S (average gain of 1.66% across models), the performance improvements on EMOTIC (average gain of 3.26%) and GroupWalk (average gain of 2.85%) are more significant. The potential reason is that the samples in these two datasets come from uncontrolled real-world scenarios that contain various context prototypes, such as rich scene information and agent interactions. In this case, CCIM can more effectively eliminate the spurious correlations caused by the adequately extracted confounder and provide sufficient gains. (ii) Furthermore, CCIM provides better gains for methods that model context semantics in a fine-grained way. For instance, EmotiCon (average gain of 3.37% across datasets) with two contextual feature streams significantly outperforms EMOT-Net (average gain of 2.22%) with only one stream. We argue that the essence of fine-grained modeling is the potential context stratification within the sample from the perspective of backdoor adjustment.
Fortunately, CCIM can better refine this stratification effect and make the models focus on contextual causal intervention across samples to measure the true causal effect. (iii) According to Tables 1 and 2 and Figure 5, while the causal intervention brings gains for most emotions across datasets, the performance of some categories shows slight improvements or even deteriorations. A reasonable explanation is that the few samples and insignificant confounding effects of these categories result in over-intervention. However, the minor sacrifice is tolerable compared to the overall superiority of our CCIM. 4.5. Ablation Studies We conduct thorough ablation studies in Table 4 to evaluate the implementation of the causal intervention. To explore the effectiveness of CCIM when combined with methods that model context semantics at different granularities, we choose the baseline EMOT-Net and the SOTA EmotiCon.
Table 4. Ablation study results on all three datasets. w/ and w/o are short for with and without, respectively.
ID Setting | EMOTIC mAP (%) | CAER-S Accuracy (%) | GroupWalk mAP (%)
(1) EMOT-Net + CCIM 30.88 75.82 60.74
(2) EmotiCon + CCIM 39.13 91.17 69.31
(3) (1) w/ Random Z 26.56 73.36 57.45
(4) (2) w/ Random Z 35.12 87.34 65.62
(5) (1) w/ ImageNet Pre-training 28.72 74.75 58.96
(6) (2) w/ ImageNet Pre-training 37.48 90.46 68.28
(7) (1) w/ ResNet-50 29.53 75.34 59.92
(8) (2) w/ ResNet-50 38.86 90.41 68.85
(9) (1) w/ VGG-16 28.78 74.95 59.47
(10) (2) w/ VGG-16 37.93 89.82 68.11
(11) (1) w/ Additive Attention 30.79 75.64 60.85
(12) (2) w/ Additive Attention 39.16 91.08 69.26
(13) (1) w/o λi 30.05 75.21 59.83
(14) (2) w/o λi 38.53 89.67 68.75
(15) (1) w/o P(zi) 30.63 75.59 59.94
(16) (2) w/o P(zi) 39.05 90.06 69.15
(17) (1) w/o Masking Strategy 29.86 74.84 59.22
(18) (2) w/o Masking Strategy 38.06 90.57 67.79
Figure 6. Ablation study results for the size N of the confounder dictionary Z on three datasets. (a), (b), and (c) are from the EMOTIC, CAER-S, and GroupWalk datasets, respectively.
Rationality of Confounder Dictionary Z. We first provide a random dictionary of the same size to replace the tailored confounder dictionary Z, i.e., it is initialized by randomization rather than by the average context features. Experimental results (3, 4) show that the random dictionary would significantly hurt the performance, proving the validity of our context prototypes. Moreover, we use the ResNet-152 pre-trained on ImageNet [8] to replace the default settings (1, 2) for extracting context features. The decreased results (5, 6) suggest that context prototypes based on scene semantics are more conducive to approximating the confounder than those based on object semantics. It is reasonable as scenes usually include objects, e.g., in Figure 1, "grass" is the child of the confounder "vegetated scenes". Robustness of Pre-trained Backbones.
The experiments (7, 8, 9, 10) in Table 4 show that the gain from CCIM increases as more advanced pre-trained backbone networks are used, which indicates that our CCIM is not dependent on a well-chosen pre-trained backbone \phi(\cdot). Effectiveness of Components of E_z[g(z)]. First, we report the results of experiments (11, 12) using the additive attention weight \lambda_i in Eq. (7). The competitive performance demonstrates that both attention paradigms are meaningful and usable. Furthermore, we evaluate the effectiveness of the weighted integration by separately removing the weights \lambda_i and the prior probabilities P(z_i) in E_z[g(z)]. The decreased results (13, 14, 15, 16) suggest that depicting the importance and proportion of each confounder is indispensable for achieving effective causal intervention. Effect of Confounder Size. To justify the size N of the confounder Z, we set N to 64, 128, 256, 512, and 1024 on all datasets separately to perform experiments. The results in Figure 6 show that selecting the suitable size N for a dataset containing varying degrees of the harmful bias can well help the models perform de-confounded training. Necessity of Masking Strategy. The masking strategy aims to mask the recognized subject to learn prototype representations using pure background contexts. Note that other subjects are considered as background to provide the socio-dynamic context. A gain degradation is observed in experiments (17, 18) when the target subject regions are not masked. It is unavoidable because the target subject-based feature attributes would impair the context-based confounder dictionary Z along the undesirable link S → Z, affecting the causal intervention performance. 4.6. Qualitative Results Difference Between P(Y|X) and P(Y|do(X)). To visually show the difference between the models approximating P(Y|X) and P(Y|do(X)), we visualize the distribution of context features learned by EMOT-Net and EmotiCon on testing samples of GroupWalk. These sample images contain four real-world contexts, i.e., park, market, hospital, and station.
Figure 7. Visualization results of the vanilla and CCIM-based EMOT-Net and EmotiCon models on the GroupWalk dataset.
Figure 8. Qualitative results of the vanilla and CCIM-based EMOT-Net on three datasets.
Figure 7 shows the following observations. In the vanilla models, features with the same emotion categories usually cluster within similar context clusters (e.g., context features of the hospital with the sad category are closer), implying that the biased models rely on context-specific semantics to infer emotions lopsidedly. Conversely, in the CCIM-based models, context-specific features form clusters containing diverse emotion categories. The phenomenon suggests that the causal intervention promotes models to fairly
incorporate each context prototype semantics when predicting emotions, alleviating the effect of harmful context bias. Case Study of Causal Intervention. In Figure 8, we select two representative examples from each dataset to show the performance of the model before and after the intervention. For instance, in the first row, the vanilla baseline is misled to predict entirely wrong results because the subjects in the dim scenes are mostly annotated with negative emotions. Thanks to causal intervention, CCIM corrects the bias in the model\u2019s prediction. Furthermore, in the fifth row, CCIM disentangles the spurious correlation between the context (\u201chospital entrance\u201d) and the emotion semantics (\u201csad\u201d), improving the model\u2019s performance. 5." + } + ], + "Shunli Wang": [ + { + "url": "http://arxiv.org/abs/2309.11718v1", + "title": "CPR-Coach: Recognizing Composite Error Actions based on Single-class Training", + "abstract": "The fine-grained medical action analysis task has received considerable\nattention from pattern recognition communities recently, but it faces the\nproblems of data and algorithm shortage. Cardiopulmonary Resuscitation (CPR) is\nan essential skill in emergency treatment. Currently, the assessment of CPR\nskills mainly depends on dummies and trainers, leading to high training costs\nand low efficiency. For the first time, this paper constructs a vision-based\nsystem to complete error action recognition and skill assessment in CPR.\nSpecifically, we define 13 types of single-error actions and 74 types of\ncomposite error actions during external cardiac compression and then develop a\nvideo dataset named CPR-Coach. By taking the CPR-Coach as a benchmark, this\npaper thoroughly investigates and compares the performance of existing action\nrecognition models based on different data modalities. To solve the unavoidable\nSingle-class Training & Multi-class Testing problem, we propose a\nhumancognition-inspired framework named ImagineNet to improve the model's\nmultierror recognition performance under restricted supervision. Extensive\nexperiments verify the effectiveness of the framework. We hope this work could\nadvance research toward fine-grained medical action analysis and skill\nassessment. The CPR-Coach dataset and the code of ImagineNet are publicly\navailable on Github.", + "authors": "Shunli Wang, Qing Yu, Shuaibing Wang, Dingkang Yang, Liuzhen Su, Xiao Zhao, Haopeng Kuang, Peixuan Zhang, Peng Zhai, Lihua Zhang", + "published": "2023-09-21", + "updated": "2023-09-21", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "I.5.4" + ], + "main_content": "Introduction Although many human action recognition algorithms [1, 2, 3, 4, 5, 6, 7, 8, 9] in daily life scenarios have been proposed, the high professionalism and data shortage seriously hinder the development of fine-grained medical action analysis technology [10, 11]. This paper takes Cardiopulmonary Resuscitation (CPR) as the research example, which is a critical life-saving technique for cardiac and respiratory arrest. CPR aims to restore the patient\u2019s spontaneous breathing and circulation. According to the American Heart Association (AHA) 1, 87.7% of cardiac arrest occurs in families and public places. Rescuers must conduct CPR within 4 minutes to improve the survival rate of the patient. Highquality and standard CPR is the core of effective treatment, while improper actions will reduce the treatment effectiveness. 
Traditional CPR skill assessment usually requires the participation of an examiner and a dummy equipped with force sensors, in which the examiner scores the rescuer's body movements, and the force sensors evaluate the compression frequency and strength. The cost of this hybrid evaluation method is too high for large-scale training system deployment [12, 13]. In this paper, we build an intelligent system that automatically identifies wrong actions in CPR during skill training, thus significantly reducing the assessment cost and improving training efficiency. As far as we know, there is no clear definition of the specific error types of CPR actions, and no research has explored vision-based CPR skill assessment. To fill the research gap, we first identify 13 types of common error actions (shown in Figure 2(a)) under the guidance of the latest version of the AHA Guidelines for CPR and ECC [14] (https://www.heart.org/) and professional emergency treatment doctors. A visual system is constructed to capture videos of the rescue process, as shown in Figure 1(a).
Figure 1: (a) shows the multi-view capture system. (b) illustrates the structure of the CPR-Coach dataset and the function of the ImagineNet. Each colored mark represents an error action class.
Based on the above settings, we create a dataset named CPR-Coach, which consists of two parts: Set-1 that contains single-class actions, and Set-2 that contains composite error actions. Figure 1(b) graphically depicts the structure of the dataset through colored marks. Note that the square box denotes the determined single-class actions in the Training Set, while the irregular box denotes the uncertain composite error classes in the Testing Set. Existing action recognition frameworks [1, 2, 3, 4, 5, 6, 7, 8, 9] have been able to handle the single-class action recognition task. We can directly migrate these models to CPR-Coach Set-1 to evaluate the fine-grained error recognition performance. However, these models cannot meet the actual application requirements of the CPR test. In an actual CPR skill assessment, rescuers are likely to make multiple mistakes simultaneously, and a qualified coach is supposed to point out all mistakes exactly. If the number of single errors is 13, the total number of composite errors can reach a frightening 8191 ($\sum_{n=1}^{13} C_{13}^{n} = 2^{13} - 1$). It is impossible to conduct exhaustive data collection to cover all these error combinations. In other words, unlike most classical machine learning tasks, we cannot make the label space of the training set consistent with that of the test set. To solve this dilemma, let us re-think how a real coach works. Such a coach has certainly not seen all the wrong action combinations, but he can still give the correct judgment according to single-error action knowledge. This is because human beings have extremely strong knowledge reasoning and generalization abilities [15].
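The combinatorial count quoted above follows from the binomial identity and can be checked in one line:

```python
from math import comb
# sum over all non-empty subsets of the 13 single errors = 2^13 - 1 = 8191 composite classes
assert sum(comb(13, n) for n in range(1, 14)) == 2 ** 13 - 1 == 8191
```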
Inspired by this, this paper proposes a concise framework named ImagineNet to properly handle the intractable Single-class Training & Multi-class Testing problem. The function of the ImagineNet is shown in Figure 1(b). The essence of the ImagineNet is a human-inspired feature combination training strategy. As its name implies, it can Imagine composite error features based on restricted single-class error actions and achieves high performance on the unseen composite error recognition task. By regarding Set-1 as the training set and Set-2 as the testing set, we can examine the ImagineNet, which plays the role of the Coach. Sufficient experimental results confirm the effectiveness of the framework. The main contributions of this paper are as follows: • To the best of our knowledge, we propose the first dataset, named CPR-Coach, for the visual CPR assessment task, which supports fine-grained action recognition and composite error recognition tasks. • Taking the CPR-Coach dataset as a benchmark, we extensively explore and compare existing action recognition models based on different modality information. • We propose a human-cognition-inspired framework named ImagineNet, which significantly improves the composite error recognition performance under restricted supervision. 2. Related Work 2.1. Human Action Recognition Video-based Human Action Recognition (HAR) is one of the representative tasks of video understanding. With the prosperity and development of deep learning methods, more and more action recognition frameworks [1, 2, 3, 4, 5, 6, 7, 8, 9, 16, 17, 18, 19, 20] have been proposed. Despite the success of previous frameworks on some public HAR benchmarks [21, 22, 23, 24, 25], the fine-grained recognition performance of these frameworks still remains unexplored [10], such as in the sports and medical fields. Fortunately, we have now seen the seeds of these specialized studies. Benefiting from the availability of sports videos, Vakanski et al. [26], Xu et al. [27], and Shao et al. [28] proposed three fine-grained action recognition datasets in sports, respectively. In the medical field, some research on surgical workflow recognition and analysis has been proposed [29, 30, 31]. These works mainly focus on video analysis of laparoscopy and cataract surgeries. Nevertheless, there are few studies on fine-grained error action recognition in the medical field, which could save a lot of resources during medical skill training and assessment. To fill this gap, this paper proposes the first dataset, named CPR-Coach, for CPR skill training and assessment. The CPR-Coach dataset contains indistinguishable errors and complex composite error classes, putting forward higher requirements for action recognition models. 2.2. Action Quality Assessment Action Quality Assessment (AQA) aims to identify and score specific skilled actions. Currently, research on AQA mainly focuses on sports [32, 33, 34, 35, 36, 37, 38] and the medical field [11, 12, 39, 40, 41, 42, 43, 44, 45, 46]. Wang et al. [10] found that publicly available datasets and algorithms in sports are more numerous than those in the medical field, which is mainly caused by the high professionalism of medical data acquisition.
Existing studies on medical AQA could be divided into three categories: surgical skill evaluation [13, 39, 40, 41, 47] under the Objective Structured Assessment of Technical Skill (OSATS) system [48], operating skill identification based on Da Vinci surgical systems [11, 42, 43, 49, 50], and skill assessment in laparoscopic surgery [12, 44, 45, 46, 51, 52]. All these benchmarks are listed in Table 2. These studies only determine a level of Expert/Medium/Novice to rate medical actions and do not conduct detailed analysis. Users of these systems do not know where they need improvement, so the usage scenarios of these systems are very limited. This paper proposes the first fine-grained error recognition dataset in CPR and defines the basic form of the composite error action assessment problem. Note that the CPR test focuses more on specific errors and is not suitable for judging through scores and rated classes. Therefore, we extend the concept of AQA to CPR in this work. 2.3. Multi-Label Learning Algorithms Different from traditional classification tasks, multi-label learning faces the challenge of exponential growth in the size of the class label space [53, 54]. Existing solutions are mainly divided into two categories: converting the multi-label problem into multiple independent binary classification problems [55, 56, 57], or improving the algorithm to adapt to multi-label data [58, 59, 60]. Although the Single-class Training & Multi-class Testing task to be solved in this paper is also a multi-label classification problem, the training set contains only single-class samples, so it puts forward higher requirements for the model. The proposed ImagineNet follows an algorithm transformation strategy and thoroughly improves the recognition performance through feature-combining strategies.
Figure 2: Structure of the CPR-Coach. (a) Set-1 consists of a Correct class and 13 types of single-error actions. (b) Set-2 consists of 74 composite error actions (59 paired-, 10 triple-, and 5 quadruple-composite errors). For clarity, different marks with different colors are adopted to represent the 14 single classes. This marking method is the same elsewhere.
3. CPR-Coach Dataset As shown in Figure 2, the proposed CPR-Coach dataset is divided into two parts: Set-1 that contains 1 type of correct action and 13 types of single-error actions, and Set-2 that contains 74 types of composite error actions. Considering the exponential growth of the total number of composite error actions (8191 classes for 13 single-error actions), this paper mainly focuses on paired combinations and several common multi-error combinations. Based on the filtering strategy in Figure 4, we remove 19 impossible combinations from the 78 pairs ($C_{13}^{2} = 78$) and finally obtain 59 paired-composite error actions. All deleted combinations have been confirmed by emergency doctors. In addition, 10 triple errors and 5 quadruple errors are selected by these professional doctors based on actual experience. All these 15 multi-composite error actions are listed in Figure 3 in detail. Finally, we built a label space containing 74 combination errors. Data Collection.
We build a video capture system with four high-resolution cameras to record the rescue process, as shown in Figure 1(a).
Figure 4: The selection strategy of the composite error actions (candidate single-error classes: Overlap Hands, Clenching Hands, Single Hand, Bending Arms, Tilting Arms, Jump Pressing, Squatting, Standing, Wrong Position, Insufficient Pressing, Slow Frequency, Excessive Pressing, Random Position Pressing). In this case, Overlap Hands is selected as the primary class, and two impossible co-occurrence combinations are deleted.
In order to ensure the diversity of the dataset, we recruited 12 volunteers to participate in data collection. Multiple participants enrich the visual feature diversity of the proposed dataset. Three volunteers were assigned to Set-1, and the other nine were assigned to Set-2. For the single-class actions in Set-1, the number of performing times is 40. For composite error actions in Set-2, the number of performing times is 8. Therefore, the number of videos in each error category is consistent, which provides a fair comparison in experiments. All actions are carried out under the guidance of professional doctors to ensure the quality of each external cardiac compression action.
Table 1: Comparison with existing medical action analysis datasets.
Dataset | #Actions | Modality | #Videos | #Views | Available
FLS-ASU [45] 1 RGB 28 2 ✘
Sharma et al. [61] 2 RGB 33 1 ✘
Bettadapura et al. [62] 3 RGB 64 2 ✘
Zia et al. [13] 2 RGB 104 1 ✘
Zhang et al. [12] 1 RGB 546 1 ✘
Chen et al. [63] 3 RGB 720 2 ✘
MISTIC-SL [42] 4 RGB+Kinematics 49 1 ✘
JIGSAWS [11] 3 RGB+Kinematics 103 1 ✔
CPR-Coach (Ours) 14+74 RGB+Flow+Pose 4,544 4 ✔
Table 2: Summary of statistics of the CPR-Coach dataset.
Item | Data
Perspectives 4
FPS 25
Video Resolution 4096×2160 (4K)
Number of Participants 12
Classes of Single-class Actions 1+13=14
Classes of Composite Error Actions 59+10+5=74
Frames (RGB) 2,217,756
Frames (RGB+Flow) 6,644,596
Videos 4,544
Avg. Len. of Videos 19.52s
Storage Size 449GB
Dataset Statistics. Table 1 compares the proposed CPR-Coach dataset with existing medical action analysis datasets. The CPR-Coach dataset has surpassed existing datasets in terms of data scale, action granularity, and modal complexity. Table 2 summarizes the statistics of the CPR-Coach dataset. It contains around 4.5K videos and 2.2M frames in total. The storage size of the entire dataset is 449GB.
The CPR-Coach also provides optical flow images generated by the TV-L1 algorithm [64] and 2D skeletons of the rescuer obtained by AlphaPose [65]. Figure 5 shows the three types of modality information from four perspectives: RGB frames, optical flow, and 2D poses.
Figure 5: The CPR-Coach dataset contains three types of modality information on four views.
Supported Tasks. As the first multi-perspective dataset to explore fine-grained composite actions in medical scenarios, the CPR-Coach can support multiple studies. Firstly, we can evaluate existing HAR models on the fine-grained error recognition task on Set-1. Secondly, by taking Set-1 as the training set and Set-2 as the testing set, we can explore the composite error action recognition task under constrained supervision. Thirdly, the influence of combining different perspectives and modalities on the algorithm can be explored. The following experiments follow these ideas. Ethics Issues. Studies in this paper only involve pure medical skill training and assessment. All video data were collected from volunteers with their knowledge and consent. Each participant signed a GDPR informed consent form which allows the dataset to be publicly available for research purposes. 4. ImagineNet Humans have inherent learning and reasoning strengths. Although a coach has not seen all possible combinations (8191 classes for 13 single-error actions in our settings), they can accurately determine the composite errors of the rescuer based on simple single-error cases. Inspired by this, we propose the ImagineNet, which can effectively handle such issues, as shown in Figure 1(b). Figure 6(a) shows the main idea of the proposed human-cognition-inspired framework ImagineNet. With restricted single-class supervision, the Imagine process can freely combine features to improve the multi-label recognition performance.
Figure 6: (a) and (b) demonstrate the main idea and the specific network architecture of the proposed ImagineNet, respectively. Two error actions, Overlap Hands and Bending Arms, are selected for visualization. The ImagineNet simulates the thinking and judgment process of a real experienced coach concisely. The knowledge base only includes single-class actions, while real applications will encounter unseen composite errors.
Taking the classic Temporal Segment Network (TSN) [6] as the basic network, the detailed architecture of ImagineNet is shown in Figure 6(b).
The ImagineNet is divided into three stages: feature extraction, feature fusion, and loss computing. Firstly, two video samples (V_1, C_1) and (V_2, C_2) are selected from Set-1 in the feature extraction phase. Note that the two videos V_1 = \{I_i\}_{i=1}^{N_1} and V_2 = \{I_i\}_{i=1}^{N_2} come from different classes, i.e., C_1 \neq C_2, C \in \{1, \cdots, 13\}. N_1 and N_2 represent the total numbers of frames of the two videos, respectively. I_i denotes the i-th frame in the video. The TSN model selects T clips from the raw videos for feature extraction. After spatial average pooling, the video features X_1 \in \mathbb{R}^{T \times D} and X_2 \in \mathbb{R}^{T \times D} are obtained, where D denotes the dimension of the feature. Secondly, in the feature fusion stage, the two different features are fused to generate X_{12} \in \mathbb{R}^{T \times D}. This process is also expressed as X_1 \oplus X_2. We regard this feature fusion process as the Imagine process. As illustrated in Figure 7(a&b&c), this paper provides three feature fusion schemes to realize the imagination process: Fully-Connected Layer based fusion (FC), Self-Attention based fusion (SA), and Cross-Attention based fusion (CA).
Figure 7: Three feature fusion schemes proposed in this paper. Note that only two inputs are displayed for clarity.
Finally, in the loss computing stage, the Binary Cross Entropy (BCE) loss is adopted to measure the divergence between the predicted scores and the Ground-Truth (GT) labels. Note that the GT labels are in the form of multi-hot encoding. 4.1. Fusion Mechanisms of the ImagineNet Subfigures in Figure 7(a&b&c) demonstrate the three different feature fusion mechanisms: ImagineNet-FC, ImagineNet-SA, and ImagineNet-CA, respectively. The formula representation is omitted in these figures for clarity. Two thick lines with different colors are adopted to represent the two video features. ImagineNet-FC. As shown in Figure 7(a), the video features X_1 and X_2 are fused through the feature addition mechanism. Then a two-layer fully connected neural network maps the fused feature X_{12} into the predicted scores of the 14 classes. This process is formulated as S_{FC} = \mathcal{F}_{FC}(X_1 \oplus X_2; \theta_{FC}), (1) where \mathcal{F}_{FC}(\cdot) denotes the neural network, and the plus sign \oplus represents the feature aggregation strategy, which will be described in detail later. \theta_{FC} represents the trainable parameters of \mathcal{F}_{FC}(\cdot). The BCE loss function is selected for the network optimization: \theta^{*}_{FC} = \arg\min_{\theta_{FC}} \mathrm{BCE}(S_{FC}, GT), (2) where GT = \mathrm{onehot}(C_1) \cup \mathrm{onehot}(C_2) denotes the composite label in multi-hot encoding form. All parameters are omitted in the subsequent statements for clarity. ImagineNet-SA. The ImagineNet-SA adds a self-attention module on top of the ImagineNet-FC, as shown in Figure 7(b). The motivation is to equip the ImagineNet with a stronger feature extraction and fusion capability to improve its generalization and reasoning ability. The process is expressed as S_{SA} = \mathcal{F}_{FC}(\mathcal{F}_{SA}(X_1 \oplus X_2)), (3) where \mathcal{F}_{SA}(\cdot) includes the self-attention and feed forward stages, and \mathcal{F}_{FC}(\cdot) is the same as in Eq. (1).
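As an illustration of Eqs. (1)-(2), a minimal sketch of one ImagineNet-FC training step is given below before turning to the attention-based variants. The 2048-512-14 layer sizes follow Figure 7(a); treating the TSN features as precomputed inputs and the use of the random weighted aggregation described later in Sec. 4.2 are our assumptions, not the exact released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 14  # "Correct" + 13 single-error classes

class ImagineHeadFC(nn.Module):
    """Sketch of the ImagineNet-FC head: aggregate two single-class features, then classify."""
    def __init__(self, dim=2048, hidden=512, num_classes=NUM_CLASSES):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, num_classes))

    def forward(self, x1, x2, lam):
        x12 = lam * x1 + (1.0 - lam) * x2      # "Imagine" a composite feature (weighted aggregation)
        return self.net(x12.mean(dim=1))       # average over the T clips, then predict scores

def training_step(head, x1, c1, x2, c2):
    """x1, x2: (B, T, D) clip features from the TSN backbone; c1 != c2 are the class indices."""
    lam = torch.rand(x1.size(0), 1, 1)         # lambda ~ U(0, 1)
    logits = head(x1, x2, lam)
    gt = torch.zeros(x1.size(0), NUM_CLASSES)
    gt.scatter_(1, c1.unsqueeze(1), 1.0)       # GT = onehot(C1) U onehot(C2), multi-hot form
    gt.scatter_(1, c2.unsqueeze(1), 1.0)
    return F.binary_cross_entropy_with_logits(logits, gt)   # Eq. (2), BCE loss

# Toy check with random tensors standing in for TSN features (T = 8 clips, D = 2048).
head = ImagineHeadFC()
loss = training_step(head, torch.randn(4, 8, 2048), torch.tensor([1, 2, 3, 4]),
                     torch.randn(4, 8, 2048), torch.tensor([5, 6, 7, 8]))
print(loss.item())
```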
By substituting X12 for X1 \u2295X2, the self-attention mechanism is expressed as X \u2032 SA = LN \u0014 X12 + softmax \u0012X12XT 12 \u221a D \u0013 X12 \u0015 , (4) and the feed forward layer XSA = LN[X \u2032 SA + FF F N(X \u2032 SA)]. (5) Note that D represents the dimension of video features and D = 2048 in TSN [6]. LN[\u00b7] denotes the LayerNorm operation. For clarity, the LayerNorm operation and residual links are omitted in Figure 7(b&c). ImagineNet-CA. The structure of ImagineNet-CA is shown in Figure ??(e). The main difference between ImagineNet-SA and ImagineNet-CA lies in the feature fusion strategy. Consistent with the above, the computing process is expressed as SCA = FF C(FCA(X1, X2)), (6) where FCA(\u00b7, \u00b7) includes a cross-attention module and a feed forward layer. The cross-attention mechanism integrates two video features from different classes: X \u2032 CA = LN \u0014 X1 + softmax \u0012X1XT 2 \u221a D \u0013 X2 \u0015 , (7) 13 \fOverlap Hands Bending Arms & + = \u25cf\u25cf\u25cf \ud835\udc17\ud835\udc171 \ud835\udc17\ud835\udc172 \ud835\udc17\ud835\udc171 + \ud835\udc17\ud835\udc172 \ud835\udc17\ud835\udc171 \ud835\udc17\ud835\udc172 \ud835\udf06\ud835\udf06\ud835\udc17\ud835\udc171 + 1 \u2212\ud835\udf06\ud835\udf06\ud835\udc17\ud835\udc172 , \ud835\udf06\ud835\udf06~\ud835\udc48\ud835\udc48(0, 1) Figure 8: Visualization of the vanilla additive mechanism and the proposed weighted feature summation mechanism. and the feed forward layer XCA = LN[X \u2032 CA + FF F N(X \u2032 CA)]. (8) After defining three fusion mechanisms, we can instantiate three ImagineNets and compare their performance. Three feature fusion mechanisms mentioned above are frameworks for implementing ImagineNet, while feature aggregation is a local operation denoted as \u2295. Effective feature aggregation methods can make full use of limited samples in Set-1, thus improving the generalization performance under the setting of Single-class Training & Multi-class Testing . 4.2. Feature Aggregation Strategy The simplest way to instantiate \u2295in ImagineNet-FC and -SA models is taking the summation of two features. To increase the diversity of the aggregation process, we propose a random weighted summation mechanism based on the vanilla version. As shown in Figure 8, the aggregated feature is expressed as X12 = \u03bbX1 + (1 \u2212\u03bb)X2, \u03bb \u223cU(0, 1), (9) where \u03bb is a weight sampled from a uniform distribution U(0, 1). This mechanism is able to enrich feature combinations. Consequently, the ImagineNet can Imagine various combined situations given specific error actions. The effectiveness of this concise technique is verified in ablation studies. As representatives 14 \fof feature aggregation methods, CBP [66] and BLOCK [67] are selected for comparison. Weighted summation, CBP, and BLOCK are denoted as Agg-1, Agg-2, and Agg-3, respectively. 4.3. Inference of the ImagineNet Figure 6(b) only demonstrates the training process of the ImagineNet. It can be found that ImagineNet requires two video features X1 and X2 as inputs during training. However, there is only one input video feature of the composite error action during inference. To resolve this mismatch issue, this paper directly adopts the replication method to fill the input. Although the cross-attention in ImagineNet-CA degenerates into the self-attention in ImagineNet-SA during inference, different training process leads to different recognition performance. 
The two models are still comparable, and the experimental results confirm this analysis. 5. Experiments 5.1. Action Recognition on CPR-Coach Set-1 Compared with traditional HAR datasets, the CPR-Coach focuses on distinguishing subtle errors in CPR. In Figure 2, it is difficult to find the nuances of these actions. CPR-Coach puts forward higher requirements for the action recognition models. Therefore, we take Set-1 of the CPR-Coach as a benchmark and conduct single-error recognition experiments on existing HAR models. 60% of Set-1 is used for training and 40% for testing. Table 3 summarizes the detailed settings and Top-1&3 accuracy of the models. Figure 9 visualizes some features generated by these models through the t-SNE algorithm [71]. Implementation Details. Default configurations in original papers of TSN [6], Two-Stream [70], TSM [8],TPN [68], TRN [16], I3D [7], C3D [18], TIN [17], SlowFast [9], TimeSFormer [19], ST-GCN [20] and PoseC3D [69] are adopted in this study. All models are trained for 50 epochs through the SGD optimizer, except the SlowFast [9] with the Cosine Annealing optimizer for 256 epochs and 15 \fTable 3: Single-class recognition performance of existing HAR models on CPR-Coach Set-1. The first and second accuracy in each column are highlighted in bold and underlined, respectively. Model Backbone Config Epoch Modality Pre-training CE Loss BCE Loss Multi-Margin Loss Top-1 Top-3 Top-1 Top-3 Top-1 Top-3 TSN [6] ResNet-50 1x1x8 50 RGB \u2717 0.8879 0.9940 0.8829 0.9960 0.8502 0.9901 ResNet-50 1x1x8 50 RGB Kinetics-400 0.9067 0.9921 0.8919 0.9940 0.8690 0.9901 ResNet-50 1x1x8 50 Flow \u2717 0.7907 0.9603 0.8304 0.9851 0.7073 0.9355 TSM [8] ResNet-50 1x1x8 50 RGB \u2717 0.9067 0.9901 0.9325 0.9950 0.8433 0.9881 TRN [16] ResNet-50 1x1x8 50 RGB \u2717 0.7827 0.9633 0.7421 0.9435 0.7431 0.9663 I3D [7] ResNet-50 32x2x1 50 RGB \u2717 0.9692 0.9960 0.9117 0.9940 0.8591 0.9861 TPN [68] ResNet-50 8x8x1 50 RGB \u2717 0.9802 0.9960 0.9087 0.9980 0.8720 0.9901 C3D [18] C3D 16x1x1 50 RGB Sports1M 0.9722 0.9931 0.9702 0.9931 0.8621 0.9802 TIN [17] ResNet-50 1x1x8 50 RGB \u2717 0.8800 0.9901 0.7192 0.9335 0.8393 0.9861 SlowFast [9] ResNet-50 4x16x1 256 RGB \u2717 0.8695 0.9734 0.8719 0.9781 0.8625 0.9688 TimeSFormer [19] ViT 8x32x1 50 RGB \u2717 0.8879 0.9921 0.8998 0.9940 0.8462 0.9762 ST-GCN [20] ST-GCN 1x1x300 50 Pose \u2717 0.9246 0.9970 0.9187 0.9881 0.9196 0.9970 PoseC3D [69] ResNet3D-50 1x1x300 240 Pose \u2717 0.9208 0.9922 0.9035 0.9715 0.8837 0.9606 Two-Stream [70] TSN+TSN Flow Late-Fusion 50 RGB+Flow \u2717 0.9533 0.9891 0.9479 0.9825 0.9296 0.9802 TSN+ST-GCN Late-Fusion 50 RGB+Pose \u2717 0.9782 0.9962 0.9608 0.9941 0.9692 0.9960 Table 4: Composite error action recognition performance on Set-2 by direct migration. Only the results of four models in RGB and pose modality are reported due to the limited space. Significant performance degradation can be observed compared to the results in Table 3. 
Model Config Modality Pre-training CE Loss BCE Loss Multi-Margin Loss mAP mmit mAP mAP mmit mAP mAP mmit mAP TSN [6] 1x1x8 RGB Kinetics-400 0.5598 0.6143 0.4627 0.5629 0.4838 0.5579 TPN [68] 8x8x1 RGB \u2717 0.6250 0.7016 0.5201 0.6102 0.5457 0.6247 TSM [8] 1x1x8 RGB \u2717 0.5662 0.6618 0.5721 0.6688 0.5470 0.6255 ST-GCN [20] 1x1x300 Pose \u2717 0.5776 0.6692 0.5868 0.6865 0.5874 0.6719 16 \fTSN I3D TRN TSM TPN Correct OverlapHands ClenchingHands SingleHand BendingArms TiltingArms JumpPressing Squatting Standing WrongPosition InsufficientPressing SlowFrequency ExcessivePressing RandomPositionPressing ST-GCN Figure 9: Visualization of the action features through t-SNE. The red box in the legend highlights four confusing classes. We use red circles to highlight these four classes of scatters in figures to compare the performance of these networks more clearly. the PoseC3D [69] for 240 epochs. The network input size is 224\u00d7224, while the coordinates of 2D poses remain unchanged at 4096\u00d72160. All models are built on Pytorch and implemented on a system with an Intel Xeon E5-2698 V4@2.20GHz CPU and an NVIDIA Tesla V100 GPU. Cross Entropy (CE), BCE, and Multi-Margin losses are adopted to compare the performance comprehensively. Performance Analysis. Results in Table 3 suggest that these models can effectively handle single error classification tasks. Different networks and different loss function combinations will affect the final classification performance. Better results can be achieved under the CE loss setting, which is mainly caused by the strong assumption that the CE loss function has mutual exclusion between classes. In all tested models, the Two-Stream framework can achieve stable 17 \fperformance under different loss functions. Results on Two-Stream framework show that the fusion of different information modes contributes to performance improvement. In Figure 9, the scatters of four confusing classes are very close in TSN, TPN, and TSM, while the I3D and ST-GCN that pay more attention to temporal information can handle these situations well. In addition, results show that the latest PoseC3D [69] did not outperform the early ST-GCN model. This may be mainly caused by two reasons: on the one hand, the complexity of PoseC3D is much higher than that of ST-GCN, resulting in overfitting on the single error recognition dataset; On the other hand, the PoseC3D stacks 2D keypoints to form a 3D heatmap volume for action recognition. However, because external cardiac compression is a repetitive and cyclical action, stacking 2D keypoints will destroy circulation. Therefore, the performance of PoseC3D is inferior to the ST-GCN. In subsequent experiments and analysis, ST-GCN is selected as the backbone network to ensure simplicity and reproducibility, rather than the PoseC3D. Next, we will explore composite error performance on these models. 5.2. Composite Error Action Recognition on Set-2 Taking Set-1 as the training set and Set-2 as the testing set, we can simulate the real CPR assessment. A naive approach is directly migrating the pre-trained model in single-class task to the composite error recognition task. Table 4 summarizes the performance of four selected models. All three losses cannot handle the huge gap between the two tasks. The sharp decline in performance indicates that the new task has exceeded the representation capability of original models. 
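As a reference point, the direct-migration baseline evaluated in Table 4 can be sketched as follows; reusing the frozen single-error classifier and reading its per-class scores as multi-label confidences is an illustrative assumption (the sigmoid readout in particular), not the authors' exact evaluation code.

```python
import torch
import torch.nn as nn


def direct_migration_scores(backbone: nn.Module, clips: torch.Tensor) -> torch.Tensor:
    """Reuse a Set-1 single-error classifier on Set-2 composite clips.

    clips: (N, ...) inputs accepted by the backbone.
    Returns per-class confidences in [0, 1] used for mAP evaluation.
    """
    backbone.eval()
    with torch.no_grad():
        logits = backbone(clips)          # (N, C) single-error logits
        # Sigmoid treats each class independently, matching the
        # multi-hot labels of composite error actions.
        return torch.sigmoid(logits)


# Toy stand-in backbone; in the paper this role is played by TSN/TPN/TSM/ST-GCN.
toy_backbone = nn.Linear(2048, 13)
scores = direct_migration_scores(toy_backbone, torch.randn(8, 2048))
multi_hot_labels = torch.randint(0, 2, (8, 13)).float()  # composite-error ground truth
```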
It should be noted that the core contribution of this paper is not to create a novel HAR model but to build a better composite error detector through existing models. Results in Table 3 show that SOTA algorithms have not demonstrated impressive performance in the single-error recognition task. Therefore, we adopt classic models such as TSN [6], TSM [8], and ST-GCN [20] to instantiate ImagineNets for ensuring the reproducibility and stability, instead of those sophisticated methods such as TimeSFormer and PoseC3D. Next, the 18 \fTable 5: Performance comparison between direct migration and ImagineNet-FC. All model settings are consistent with Table 4. Model mAP \u2206 mmit mAP \u2206 TSN [6] 0.5598 \u2014 0.6143 \u2014 w/ ImagineNet-FC 0.6259 \u21916.61% 0.6893 \u21918.50% TPN [68] 0.6250 \u2014 0.7016 \u2014 w/ ImagineNet-FC 0.7094 \u21918.44% 0.7620 \u21916.04% TSM [8] 0.5662 \u2014 0.6618 \u2014 w/ ImagineNet-FC 0.7053 \u219113.91% 0.7566 \u21919.48% ST-GCN [20] 0.5776 \u2014 0.6692 \u2014 w/ ImagineNet-FC 0.6404 \u21916.28% 0.7115 \u21914.23% deployment details and results of the ImagineNet will be introduced. Implementation Details. All ImagineNet models are trained for 60 epochs through the SGD optimizer. The learning rate is set to 0.001 initially and attenuated by 0.1 at 20 and 40-th epochs. The temporal length T is set to 8. Only the models trained with CE loss are explored. Evaluation Metrics. The mAP and mmit mAP metrics are adopted in this paper to evaluate the composite error action recognition performance. The mAP refers to the macro mAP in [72], which denotes the average of the mean average precision for each class mAP = PC i=1 APi C . (10) The mmit mAP refers to the micro mAP in [72], which denotes the mean average precision over all videos mmit mAP = PN j=1 APj N . (11) Note that APi denotes the average precision over the i-th class, while APj denotes the average precision for the j-th sample. In the CPR-Coach dataset, numbers of samples in each classes are relatively balanced, so it can be found that the value of mmit mAP is generally higher than that of mmit mAP. Quantitative Analysis. Table 5 compares the ImagineNet-FC model with the vanilla migration method. Through the Imagine mechanism, the ImagineNetFC significantly improves the composite error recognition performance under 19 \fTable 6: Performance and FLOPs comparison of the proposed three ImagineNet models and their variants based on the TSN. Model Variants GFLOPs mAP mmit mAP ImagineNet-FC FC 0.001 0.6259 0.6893 ImagineNet-SA SA 0.068 0.6426 0.7049 SAx2 0.136 0.6450 0.7131 SAx3 0.203 0.6436 0.7086 w/o PosEmb 0.068 0.6305 0.6906 ImagineNet-CA CA 0.068 0.6307 0.6933 CA+SA 0.136 0.6347 0.7005 CA+SAx2 0.203 0.6335 0.7046 w/o PosEmb 0.068 0.6281 0.6953 restricted supervision, regardless of the input modality. In particular, the ImagineNet-FC brings 13.91% mAP and 9.48% mmit mAP improvement on TSM. The performance and computational complexity of ImagineNet-SA, -CA, and their variants based on the TSN model are summarized in Table 6. Same settings are also adopted to the TSM model, and the results are listed in Table 7. The results reveal that the ImagineNet-SA outperforms the other two models, while the CA mechanism does not improve performance as well as SA. More layers and computational complexity will lead to overfitting. The Positional Embedding module is essential in ImagineNets because chronological information is indispensable for distinguishing these fine-grained error actions. 
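A minimal sketch of the two metrics reported in these tables, assuming multi-hot ground truth and per-class confidence scores and relying on scikit-learn's average-precision routine; Eq. (10) corresponds to the class-averaged ("macro") variant and Eq. (11) to the sample-averaged ("samples") variant.

```python
import numpy as np
from sklearn.metrics import average_precision_score


def map_metrics(y_true: np.ndarray, y_score: np.ndarray):
    """y_true: (N, C) multi-hot labels, y_score: (N, C) confidences.

    Returns (mAP, mmit_mAP): mAP averages AP over classes (Eq. 10),
    mmit mAP averages AP over samples (Eq. 11).
    """
    m_ap = average_precision_score(y_true, y_score, average="macro")
    mmit_map = average_precision_score(y_true, y_score, average="samples")
    return m_ap, mmit_map


# Toy usage with 4 samples and 3 classes.
y_true = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0], [0, 0, 1]])
y_score = np.random.rand(4, 3)
print(map_metrics(y_true, y_score))
```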
In Figure 12, we explore the relationship between the number of error combinations and the final performance on Set-2. The mmit mAP of ImagineNet-FC gradually decreases as the number of composite errors increases, which is consistent with our intuition that more complex error combinations imply higher task difficulty. In previous experiments, we only adopted single-modal backbone networks. In order to explore the upper bound of the composite error recognition performance, we additionally adopte the MMNet [73] as the multi-mode backbone to extract video features. The performance gain is listed in Table 8. Results reveal that the video backbone model with stronger representation ability is more able to benefit from the proposed ImagineNet, which also demonstrates 20 \fTable 7: Performance and FLOPs comparison of the proposed three ImagineNet models and their variants based on the TSM. Model Variants GFLOPs mAP mmit mAP ImagineNet-FC FC 0.001 0.7053 0.7566 ImagineNet-SA SA 0.068 0.7011 0.7630 SAx2 0.136 0.7007 0.7656 SAx3 0.203 0.6995 0.7572 w/o PosEmb 0.068 0.6822 0.7593 ImagineNet-CA CA 0.068 0.6752 0.7346 CA+SA 0.136 0.6766 0.7406 CA+SAx2 0.203 0.6728 0.7377 w/o PosEmb 0.068 0.6725 0.7339 Table 8: Performance testing based on MMNet, which has stronger representation ability in video feature extraction. Backbone Variants mAP \u2206 mmit mAP \u2206 MMNet Direct Migration 0.6527 \u2014 0.7085 \u2014 MMNet w/ ImagineNet-FC 0.7385 \u21918.58% 0.7716 \u21916.31% w/ ImagineNet-SA 0.7449 \u21919.22% 0.7839 \u21917.54% w/ ImagineNet-CA 0.7401 \u21918.74% 0.7696 \u21916.11% the effectiveness of the ImagineNet. Qualitative Analysis. To explore how the proposed ImagineNet impacts the network, we visualize and compare the features generated by TSN, TSM, TPN and their ImagineNet-FC variant models on Set-2 in Figure 10. Macroscopically, features obtained by the direct migration method are messy, while the ImagineNet can help the network reduce intra-class distance and expand interclass distance. These improvements on t-SNE feature maps correspond to the performance gains in Table 5. The enhancement of feature clustering confirms the effectiveness of the proposed ImagineNet. 5.3. Combination of Perspectives As shown in Figure 1(a) and Figure 5, the proposed video capture system includes four views. It is not practical to use all perspectives in actual deployment, which will cause too much redundant computation. Four-perspective settings 21 \f(a) TSM (b) TSM w/ ImagineNet-FC (c) TSN (d) TSN w/ ImagineNet-FC (e) TPN (f) TPN w/ ImagineNet-FC Figure 10: Feature visualization comparison via t-SNE on Set-2. Black auxiliary lines are marked for clarity. 22 \f1 2 3 4 [1,2] [1,3] [1,4] [2,3] [2,4] [3,4] [1,2,3][2,3,4][1,3,4][1,2,4] [1,2,3,4] Perspective Combination 0.625 0.650 0.675 0.700 0.725 0.750 0.775 Average Precision Performance of TSN w/ ImagineNet-FC mAP mmit_mAP 1 2 3 4 [1,2] [1,3] [1,4] [2,3] [2,4] [3,4] [1,2,3][2,3,4][1,3,4][1,2,4] [1,2,3,4] Perspective Combination 0.55 0.60 0.65 0.70 0.75 Average Precision Performance of ST-GCN w/ ImagineNet-FC mAP mmit_mAP Figure 11: Performance of combining different perspectives. Different numbers of views are grouped by black dividing lines. can help us discover the best combination and achieve the optimal performancecomputation trade-off. Consistent with the paradigm in Table 4, we evaluate the performance of the ImagineNet-FC on all different perspectives combinations. Results are shown in Figure 11. 
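A minimal sketch of such a sweep over view subsets; fusing views by averaging their per-view score arrays is an illustrative assumption, since the paper only reports the per-combination results.

```python
from itertools import combinations

import numpy as np


def evaluate_view_combinations(view_scores: dict, eval_fn):
    """view_scores: {view_id: (N, C) score array from that camera view}.

    Fuses every non-empty subset of views by averaging their scores and
    evaluates the fused prediction, mirroring the sweep in Figure 11.
    """
    views = sorted(view_scores)
    results = {}
    for k in range(1, len(views) + 1):
        for combo in combinations(views, k):
            fused = np.mean([view_scores[v] for v in combo], axis=0)
            results[combo] = eval_fn(fused)
    return results


# Toy usage with random scores and a dummy metric in place of mAP.
scores = {v: np.random.rand(8, 13) for v in (1, 2, 3, 4)}
print(evaluate_view_combinations(scores, eval_fn=lambda s: float(s.mean())))
```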
Overall, the performance increases with combing more perspectives. Perspective #3 provides more valuable information, while #4 is the opposite. This discovery is of great value for subsequent system optimization and actual deployment. 5.4. Ablation Studies Ablation studies are conducted to explore the effectiveness of feature aggregation strategies. Table 9 summarizes the results of ImagineNet-FC and its variants based on TSN, TSM, and ST-GCN. Performance of the random weighted summation mechanism surpasses the vanilla method and other two bilinear pooling aggregation methods both in RGB and pose modes. This reveals that the proposed mechanism can generate richer feature combinations concisely and effectively, thus enabling ImagineNet to achieve better generalization performance on unseen error combinations. 23 \fTable 9: Ablation studies on three feature aggregation strategies. Model Agg-1 Agg-2 Agg-3 mAP mmit mAP TSN [6] \u2013 \u2013 \u2013 0.5598 0.6143 w/ ImagineNet-FC \u2718 \u2718 \u2718 0.6198 0.6738 \u2714 \u2718 \u2718 0.6259 0.6893 \u2718 \u2714 \u2718 0.6019 0.6775 \u2718 \u2718 \u2714 0.6033 0.6725 TSM [8] \u2013 \u2013 \u2013 0.5662 0.6618 w/ ImagineNet-FC \u2718 \u2718 \u2718 0.6871 0.7353 \u2714 \u2718 \u2718 0.7053 0.7566 \u2718 \u2714 \u2718 0.6434 0.7308 \u2718 \u2718 \u2714 0.6569 0.7219 ST-GCN [20] \u2013 \u2013 \u2013 0.5776 0.6692 w/ ImagineNet-FC \u2718 \u2718 \u2718 0.6374 0.7089 \u2714 \u2718 \u2718 0.6404 0.7115 \u2718 \u2714 \u2718 0.5783 0.6877 \u2718 \u2718 \u2714 0.6159 0.6864 74 All Err. 59 Paired Err. 10 Triple Err. 5 Quadruple Err. Subsets of Set-2 0.55 0.60 0.65 0.70 mmit mAP TSN w/ ImagineNet-FC STGCN w/ ImagineNet-FC Figure 12: mmit mAP Performance on different subsets of Set-2. 5.5. Cross Modality Studies In previous experiment settings, inputs of the ImaginNet belong to different categories but the same modality. The structure of ImagineNet inherently supports multi-modal data fusion. Taking TSN, TSM and ST-GCN as basic models, Table 10 and Table 11 compares the ImagineNet-CA with the Two-Stream fusion method and two bilinear pooling fusion methods under cross modality settings. The latency of these fusion models is reported by averaging 1000 running times, while basic models are excluded. Results show that the ImagineNet-CA surpasses the other three multimodal fusion methods. Although BLOCK performs similarly to ImagineNet-CA, its latency is nearly 7.8\u00d7 longer, which is mainly caused by the complex approximate outer product computation. The Two-Stream fusion model can reduce latency but has poor performance. 24 \fTable 10: Cross modality studies on RGB and Pose information based on TSN and ST-GCN. Model Modality Latency (ms)\u2193 mAP mmit mAP TSN [6] RGB \u2014 0.5598 0.6143 ST-GCN [20] Pose \u2014 0.5776 0.6692 Two-Stream [70] RGB+Pose 0.1426 0.5915 0.6823 CBP [66] RGB+Pose 0.3032 0.7066 0.7460 BLOCK [67] RGB+Pose 1.254 0.7094 0.7597 w/ ImagineNet-CA RGB+Pose 0.1612 0.7133 0.7641 Table 11: Cross modality studies on RGB and Pose information based on TSM and ST-GCN. Model Modality Latency (ms)\u2193 mAP mmit mAP TSM [8] RGB \u2014 0.5662 0.6618 ST-GCN [20] Pose \u2014 0.5776 0.6692 Two-Stream [70] RGB+Pose 0.1501 0.6003 0.6815 CBP [66] RGB+Pose 0.3043 0.7089 0.7506 BLOCK [67] RGB+Pose 1.294 0.7107 0.7675 w/ ImagineNet-CA RGB+Pose 0.1642 0.7110 0.7515 6. Limitation and Discussion As the first study on fine-grained error action recognition and AQA in CPR training, this work inevitably has some limitations. 
The diversity and complexity of the CPR-Coach dataset remains to be improved. Standard CPR [14] consists of several stages (e.g., electric defibrillation, artificial respiration), while only the external cardiac compression is studied due to the time and scale limitation. Nevertheless, the CPR-Coach has reached 449GB and 2.2M frames. In the future, we will continue to cooperate with the training center of the hospital to enrich the CPR-Coach dataset. There is still huge potential exploration space for complex and multi-stage medical action analysis. 7." + }, + { + "url": "http://arxiv.org/abs/2201.03746v1", + "title": "TSA-Net: Tube Self-Attention Network for Action Quality Assessment", + "abstract": "In recent years, assessing action quality from videos has attracted growing\nattention in computer vision community and human computer interaction. Most\nexisting approaches usually tackle this problem by directly migrating the model\nfrom action recognition tasks, which ignores the intrinsic differences within\nthe feature map such as foreground and background information. To address this\nissue, we propose a Tube Self-Attention Network (TSA-Net) for action quality\nassessment (AQA). Specifically, we introduce a single object tracker into AQA\nand propose the Tube Self-Attention Module (TSA), which can efficiently\ngenerate rich spatio-temporal contextual information by adopting sparse feature\ninteractions. The TSA module is embedded in existing video networks to form\nTSA-Net. Overall, our TSA-Net is with the following merits: 1) High\ncomputational efficiency, 2) High flexibility, and 3) The state-of-the art\nperformance. Extensive experiments are conducted on popular action quality\nassessment datasets including AQA-7 and MTL-AQA. Besides, a dataset named Fall\nRecognition in Figure Skating (FR-FS) is proposed to explore the basic action\nassessment in the figure skating scene.", + "authors": "Shunli Wang, Dingkang Yang, Peng Zhai, Chixiao Chen, Lihua Zhang", + "published": "2022-01-11", + "updated": "2022-01-11", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.MM", + "I.5.4; I.2.10" + ], + "main_content": "INTRODUCTION In addition to identifying human action categories in videos, it is also crucial to evaluate the quality of specific actions, which means that the machine needs to understand not only what has been performed but also how well a particular action is performed. Action quality assessment (AQA) aims to evaluate how well a specific action is performed, which has become an emerging and attractive research topic in computer vision community. Assessing the quality of actions has great potential value for various real-world applications such as analysis of sports skills [22, 25, 27, 33, 40], surgical maneuver training [20, 45, 46] and many others [4, 5]. In recent years, many methods [22, 25, 33, 40] directly applied the network of human action recognition (HAR) such as C3D [34] and I3D [1] to AQA tasks. Although these methods have achieved arXiv:2201.03746v1 [cs.CV] 11 Jan 2022 \fconsiderable performance, they still face many challenges, and their performances and efficiency are indeed limited. Firstly, the huge gap between HAR and AQA should be emphasized. Models in HAR require distinguishing subtle differences between different actions, while models in AQA require evaluating a specific action\u2019s advantages and disadvantages. 
Therefore, the performances of existing methods are inherently limited because of the undifferentiated feature extraction of video content, which leads to the pollution of body features. It is not appropriate to apply the framework in HAR directly to AQA without any modification. Secondly, existing methods cannot perform feature aggregation efficiently. The receptive field of convolution operation is limited, resulting in the loss of long-range dependencies. RNN has the inherent property of storing hidden states, which makes it challenging to be paralleled. An effective and efficient feature aggregation mechanism is desired in AQA tasks. To solve all challenges above, we propose Tube Self-Attention (TSA) module, an efficient feature aggregation strategy based on tube mechanism and self-attention mechanism, shown in Figure 1. The basic idea of the TSA module is straightforward and intuitive: considering that AQA models require rich temporal contextual information and do not require irrelevant spatial contextual information, we combine the tube mechanism and self-attention mechanism to aggregate action features sparsely to achieve better performance with minimum computational cost. For example, during a diving competition, the athletes\u2019 postures are supposed to raise most attentions, instead of distractors such as the audience and advertisements in the background. The merits of the TSA module are three-fold: (1)High efficiency, the tube mechanism makes the network only focus on a subset of the feature map, reducing a large amount of computational complexity compared with Non-local module. (2)Effectiveness, the self-attention mechanism is adopted in TSA module to aggregate the features in the spatio-temporal tube (ST-Tube), which preserves the contextual information in the time dimension and weakens the influence of redundant spatial information. (3)Flexibility, consistent with Non-local module, TSA module can be used in a plug-and-play fashion, which can be embedded in any video network with various input sizes. Based on TSA module, we proposed Tube Self-Attention Network (TSA-Net) for AQA. Existing visual object tracking (VOT) framework is firstly adopted to generate tracking boxes. Then the ST-Tube is obtained through feature selection. The self-attention mechanism is performed in ST-Tube for efficient feature aggregation. Our method is tested on the existing AQA-7 [24] and MTL-AQA [25] datasets. Sufficient experimental exploration, including performance analysis and computational cost analysis, is also conducted. In addition, a dataset named Fall Recognition in Figure Skating (FRFS) is proposed to recognize falls in figure skating. Experimental results show that our proposed TSA-Net can achieve state-of-theart results in three datasets. Extensive comparative results verify the efficiency and effectiveness of TSA-Net. The main contributions of our work are as follows: \u2022 We exploit a simple but efficient sparse feature aggregation strategy named Tube Self-Attention (TSA) module to generate representations with rich contextual information for action based on tracking results generated by the VOT tracker. \u2022 We propose an effective and efficient action quality assessment framework named TSA-Net based on TSA module, with adding little computational cost compared with Non-local module. \u2022 Our approach outperforms state-of-the-arts on the challenging MTL-AQA and AQA-7 datasets and a new proposed dataset named FR-FS. 
Extensive experiments show that our method has the ability to capture long-range contextual information, which may not be performed by previous methods. 2 RELATED WORKS Action Quality Assessment. Most of the existing AQA methods focus on two fields: sports video analysis [22, 25, 27, 33, 40] and surgical maneuver assessment [20, 45, 46]. AQA works focus on sports can be roughly divided into two categories: pose-based methods and non-pose methods. Pose-based methods [22, 27, 37] take pose estimation results as input to extract features and generate the final scores. Because of the atypical body posture in motion scene, the performance of pose-based methods are suboptimal. Non-pose methods exploit DNNs such as C3D and I3D to extract features directly from the raw video and then predict the final score. For example, Self-Attentive LSTM [40], MUSDL [33], C3D-AVGMTL [25], and C3D-LSTM [23] share similar network structures, but their difference lies in the feature extraction and feature aggregation method. Although these methods have achieved significant results, the enormous computational cost of feature extraction and aggregation module limits AQA models\u2019 development. Different from the aforementioned AQA methods, our proposed TSA module can perform feature extraction and aggregation efficiently. Self-Attention Mechanism. Self-attention mechanism [36] was firstly applied on the machine translation task in neural language processing (NLP) as the key part of Transformer. After that, researchers put forward a series of transformer-based models including BERT [3], GPT [28], and GPT-2 [29]. These models tremendously impacted various NLP tasks such as machine translation, question answering system, and text generation. Owing to the excellent performance, some researchers introduce self-attention mechanism into many CV tasks including image classification [10, 12, 21], semantic segmentation [2, 42, 44] and object detection [6, 11, 43]. Specifically, inspired by Non-local[39] module, Huang et al. [13] proposed criss-cross attention module and CCNet to avoid dense contextual information in semantic segmentation task. Inspired by these methods, our proposed TSA-Net adopts self-attention mechanism for feature aggregation. Video Action Recognition Video action recognition is a fundamental task in computer vision. With the rise of deep convolutional neural networks (CNNs) in object recognition and detection, some researchers have designed many deep neural networks for video tasks. Two-stream networks [1, 9, 32] take static images and dynamic optical flow as input and fuse the information of appearance and short-term motions. 3D convolutional networks [14, 15, 34] utilize 3D kernels to extract features form raw videos directly. In order to meet the needs in real applications, many works [8, 18, 35] \fMixed_4f MaxPool3d_5a_2x2 Mixed_5b Mixed_5c Conv3d_1a_7x7 MaxPool3d_2a_3x3 Conv3d_2b_1x1 Conv3d_2c_3x3 MaxPool3d_3a_3x3 Mixed_3b Mixed_3c MaxPool3d_4a_3x3 Mixed_4b Mixed_4c Mixed_4d Mixed_4e (2) I3D-Stage1 (4) I3D-Stage2 VOT Tracker Tracking Results (3) Tube Self-Attention Module SiamMask ST-Tube Generation Tube Self-Attention Operation \u2026 input video Tracking Bboxs (1) Tracking Stage N clips L frames \u2026 (5) Network Head Nclip X Reg USDL CLS MLP_block Temporal Pooling GT Score BCE KLD MSE Figure 2: Overview of the proposed TSA-Net for action quality assessment. TSA-Net consists of five steps: (1) Tracking. VOT tracker is adopted to generate tracking results B. (2) Feature extraction-s1. 
The input video is divided into N clips and the feature extraction is performed by I3D-Stage1 to generate X. (3) Feature aggregation. ST-Tube is generated given \ud835\udc35and X, and then the TSA mechanism is used to complete the feature aggregation, results in X\u2032. (4) Feature extraction-s2. Aggregated feature X\u2032 is passed to I3D-Stage2 to generate H\u2032. (5) Network head. The final scores are generated by MLP_block. TSA-Net is trained with different losses according to different tasks. focus on the efficient designing of networks recently. The proposed TSA-Net take I3D [1] network as the backbone. 3 APPROACH 3.1 Overview The network architecture is given in Figure 2. Given an input video with \ud835\udc3fframes \ud835\udc49= {\ud835\udc39\ud835\udc59}\ud835\udc3f \ud835\udc59=1, SiamMask[38] is used as the single object tracker to obtain the tracking results \ud835\udc35= {\ud835\udc4f\ud835\udc59}\ud835\udc3f \ud835\udc59=1, where \ud835\udc4f\ud835\udc59= {(\ud835\udc65\ud835\udc59 \ud835\udc5d,\ud835\udc66\ud835\udc59 \ud835\udc5d)}4 \ud835\udc5d=1 represents the tracking box of the \ud835\udc59-th frame. In feature extraction stage, \ud835\udc49is firstly divided into \ud835\udc41clips where each clip contains M consecutive frames. All clips are further sent into the first stage of Inflated 3D ConvNets (I3D) [1], resulting in \ud835\udc41features as X = {x\ud835\udc5b}\ud835\udc41 \ud835\udc5b=1, x\ud835\udc5b\u2208R\ud835\udc47\u00d7\ud835\udc3b\u00d7\ud835\udc4a\u00d7\ud835\udc36. Since the temporal length of x\ud835\udc5bis \ud835\udc47, we have x\ud835\udc5b= {x\ud835\udc5b,\ud835\udc61}\ud835\udc47 \ud835\udc61=1, x\ud835\udc5b,\ud835\udc61\u2208R\ud835\udc3b\u00d7\ud835\udc4a\u00d7\ud835\udc36. In feature aggregation stage, the TSA module takes tracking boxes \ud835\udc35and video feature X as input to perform feature aggregation, resulting in video feature X\u2032 = {x\u2032 \ud835\udc5b}\ud835\udc41 \ud835\udc5b=1 with rich spatiotemporal contextual information. Since the TSA module does not change the size of the input feature map, x\ud835\udc5band x\u2032 \ud835\udc5bhave the same size, i.e., x\u2032 \ud835\udc5b\u2208R\ud835\udc47\u00d7\ud835\udc3b\u00d7\ud835\udc4a\u00d7\ud835\udc36. This property enables TSA modules to be stacked in multiple layers to generate features with richer contextual information. The aggregated feature X\u2032 is further sent to the second stage of I3D to complete feature extraction, resulting in H = {h\ud835\udc5b}\ud835\udc41 \ud835\udc5b=1. H is the representation of the whole video or athlete\u2019s performance. In prediction stage (i.e., network head), average pooling operation is adopted to fuse H along clip dimension, i.e., h = 1 \ud835\udc41 \u00cd\ud835\udc41 \ud835\udc5b=1 \u210e\ud835\udc5b, h \u2208R\ud835\udc47\u00d7\ud835\udc3b\u00d7\ud835\udc4a\u00d7\ud835\udc36. h is further fed into the MLP_Block and finally used for the prediction of different tasks according to different datasets. 3.2 Tube Self-Attention Module The fundamental difference between TSA module and Non-local module is that TSA module can filter the features of participating in self-attention operation in time and space according to the tracking boxes information. The TSA mechanism has the ability to ignore noisy background information which will interfere with the final result of action quality assessment. 
This operation makes the network pay more attention to the features containing athletes\u2019 information and eliminate irrelevant background information interference. The tube self-attention mechanism can also be called \"local Nonlocal\". The first \"local\" refers to the ST-Tube, while \"Non-local\" refers to the response between features calculated by self-attention operation. So the TSA module is able to achieve more effective feature aggregation on the premise of saving computing resources. TSA module consists of two steps: (1) spatio-temporal tube generation, and (2) tube self-attention operation. Step 1: spatio-temporal tube generation. Intuitively, after obtaining tracking information \ud835\udc35and feature map X of the whole video, all features in the ST-Tube can be selected directly. Unfortunately, owing to the existence of two temporal pooling operations in I3D-stage1, the corresponding relationship between tracking boxes and feature maps is not 1 : 1 but \ud835\udc5a\ud835\udc4e\ud835\udc5b\ud835\udc66: 1. Besides, all tracking boxes generated by SiamMask are skew, which complicates the generation of ST-Tube. To solve these problems, we propose an alignment method which is shown in Figure 3. Since I3D-Stage1 contains two temporal pooling operations, the corresponding relationship between bounding boxes and feature map is 4:1, i.e., {\ud835\udc4f\ud835\udc59,\ud835\udc4f\ud835\udc59+1,\ud835\udc4f\ud835\udc59+2,\ud835\udc4f\ud835\udc59+3} is correspond to x\ud835\udc50,\ud835\udc61. All tracking boxes should be converted into mask first, and then used to generate ST-Tube. \f0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 1 1 1 0 1 1 1 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 union = {index of 1 } Figure 3: The generation process of spatio-temporal tube. All boxes {\ud835\udc4f\ud835\udc59,\ud835\udc4f\ud835\udc59+1,\ud835\udc4f\ud835\udc59+2,\ud835\udc4f\ud835\udc59+3} are scaled to the same size as the feature map x\ud835\udc50,\ud835\udc61, and then the separate masks are generated. All masks are aggregated into the final mask \ud835\udc40\ud835\udc59\u2192(\ud835\udc59+3) \ud835\udc50,\ud835\udc61 through Union operation. We denote the mask of\ud835\udc4f\ud835\udc59correspond to x\ud835\udc50,\ud835\udc61as \ud835\udc40\ud835\udc59 \ud835\udc50,\ud835\udc61\u2208{0, 1}\ud835\udc3b\u00d7\ud835\udc4a. The generation process of \ud835\udc40\ud835\udc59 \ud835\udc50,\ud835\udc61is as follows: \ud835\udc40\ud835\udc59 \ud835\udc50,\ud835\udc61(\ud835\udc56, \ud835\udc57) = \u001a1,\ud835\udc46(\ud835\udc4f\ud835\udc59, (\ud835\udc56, \ud835\udc57)) \u2a7e\ud835\udf0f 0,\ud835\udc46(\ud835\udc4f\ud835\udc59, (\ud835\udc56, \ud835\udc57)) < \ud835\udf0f (1) Where \ud835\udc46(\u00b7, \u00b7) function calculates the proportion of the feature grid at (\ud835\udc56, \ud835\udc57) covered by \ud835\udc4f\ud835\udc59. If the proportion is higher than threshold \ud835\udf0f, the feature located at (\ud835\udc56, \ud835\udc57) will be selected, otherwise it will be discarded. The proportion of each feature grid covered by box ranges from 0 to 1, so we directly took the intermediate value of \ud835\udf0f= 0.5 in all experiments of this paper. 
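One way to implement the coverage test of Eq. (1) is to rasterize the (possibly skew) box with a point-in-polygon check on a super-sampled grid; the following sketch makes that assumption explicit, and the union over the four boxes attached to one feature map follows the aggregation illustrated in Figure 3 and formalized next. Resolutions and grid sizes are illustrative defaults.

```python
import numpy as np
from matplotlib.path import Path


def box_mask(box_xy, H=14, W=14, img_h=224, img_w=224, tau=0.5, samples=4):
    """Approximate Eq. (1): mark feature cells whose coverage by the
    (possibly skew) tracking box exceeds tau.

    box_xy: (4, 2) corner points (x, y) in image coordinates.
    """
    poly = Path(np.asarray(box_xy, dtype=float))
    cell_h, cell_w = img_h / H, img_w / W
    offs = (np.arange(samples) + 0.5) / samples  # sub-cell sample offsets
    mask = np.zeros((H, W), dtype=bool)
    for i in range(H):
        for j in range(W):
            ys = (i + offs[:, None]) * cell_h
            xs = (j + offs[None, :]) * cell_w
            pts = np.stack([np.broadcast_to(xs, (samples, samples)).ravel(),
                            np.broadcast_to(ys, (samples, samples)).ravel()], axis=1)
            # Covered proportion S(b, (i, j)) estimated by point-in-polygon tests.
            mask[i, j] = poly.contains_points(pts).mean() >= tau
    return mask


def tube_mask(boxes, **kw):
    """Union of the masks of the consecutive boxes mapped to one feature map."""
    return np.any([box_mask(b, **kw) for b in boxes], axis=0)
```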
Four masks are further assembled into \ud835\udc40\ud835\udc59\u2192(\ud835\udc59+3) \ud835\udc50,\ud835\udc61 \u2208{0, 1}\ud835\udc3b\u00d7\ud835\udc4a through element-wise OR operation: \ud835\udc40\ud835\udc59\u2192(\ud835\udc59+3) \ud835\udc50,\ud835\udc61 = \ud835\udc48\ud835\udc5b\ud835\udc56\ud835\udc5c\ud835\udc5b(\ud835\udc40\ud835\udc59 \ud835\udc50,\ud835\udc61, \ud835\udc40\ud835\udc59+1 \ud835\udc50,\ud835\udc61, \ud835\udc40\ud835\udc59+2 \ud835\udc50,\ud835\udc61, \ud835\udc40\ud835\udc59+3 \ud835\udc50,\ud835\udc61) (2) This mask contains all location information of the features participating in self-attention operation. For the convenience of the following description, \ud835\udc40\ud835\udc59\u2192(\ud835\udc59+3) \ud835\udc50,\ud835\udc61 is transformed into the position set of all selected features: \u2126\ud835\udc50,\ud835\udc61= n (\ud835\udc56, \ud835\udc57)|\ud835\udc40\ud835\udc59\u2192(\ud835\udc59+3) \ud835\udc50,\ud835\udc61 (\ud835\udc56, \ud835\udc57) = 1 o (3) Where \u2126\ud835\udc50,\ud835\udc61is the basic component of ST-Tube and \f \f\u2126\ud835\udc50,\ud835\udc61 \f \f denotes the number of selected features of x\ud835\udc50,\ud835\udc61. Step 2: tube self-attention operation After obtaining X and \u2126\ud835\udc50,\ud835\udc61, the self-attention mechanism is performed to aggregate all features located in ST-Tube, as shown in Figure 4. The formation of the TSA mechanism adopted in this paper is consistent with [39]: y\ud835\udc5d= 1 \ud835\udc36(x) \u2211\ufe01 \u2200\ud835\udc50 \u2211\ufe01 \u2200\ud835\udc61 \u2211\ufe01 \u2200(\ud835\udc56,\ud835\udc57) \u2208\u2126\ud835\udc50,\ud835\udc61 \ud835\udc53\u0000x\ud835\udc5d, x\ud835\udc50,\ud835\udc61(\ud835\udc56, \ud835\udc57))\ud835\udc54(x\ud835\udc50,\ud835\udc61(\ud835\udc56, \ud835\udc57)\u0001 (4) Where \ud835\udc5ddenotes the index of an output position whose response is to be computed. (\ud835\udc50,\ud835\udc61,\ud835\udc56, \ud835\udc57) is the input index that enumerates all positions in ST-Tube. Output feature map y and input feature map Figure 4: Calculation process of the TSA module. \"\u2295\" denotes matrix multiplication, and \"\u2297\" denotes element-wise sum. Owing to the existence of tube mechanism, only the features inside the ST-Tube can be selected and participate in the calculation of self-attention. x have the same size. \ud835\udc53(\u00b7, \u00b7) denotes the pairwise function, and \ud835\udc54(\u00b7) denotes the unary function. The response is normalized by \ud835\udc36(x) = \u00cd \ud835\udc50 \u00cd \ud835\udc61 \f \f\u2126\ud835\udc50,\ud835\udc61 \f \f. To reduce the computational complexity, the dot product similarity function is adopted: \ud835\udc53\u0000x\ud835\udc5d, x\ud835\udc50,\ud835\udc61(\ud835\udc56, \ud835\udc57)\u0001 = \ud835\udf03(x\ud835\udc5d)\ud835\udc47\ud835\udf19(x\ud835\udc50,\ud835\udc61(\ud835\udc56, \ud835\udc57)) (5) Where both \ud835\udf03(\u00b7) and \ud835\udf19(\u00b7) are channel reduction transformations. Finally, the residual link is added to obtain the final X\u2032: x\u2032 \ud835\udc5d= \ud835\udc4a\ud835\udc67y\ud835\udc5d+ x\ud835\udc5d (6) Where \ud835\udc4a\ud835\udc67y\ud835\udc5ddenotes an embedding of y\ud835\udc5d. Note that x\u2032 \ud835\udc5dhas the same size with x\ud835\udc5d, so TSA module can be inserted into any position in deep convolutional neural networks. For the trade-off between computational cost and performance, all TSA modules are placed after Mixed_4e. 
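A minimal PyTorch sketch of the tube self-attention operation of Eqs. (4)-(6), restricting both queries and keys to the positions in the ST-Tube as implied by the complexity analysis that follows; channel sizes and the per-sample loop are illustrative simplifications rather than the authors' implementation.

```python
import torch
import torch.nn as nn


class TubeSelfAttention(nn.Module):
    """Sketch of the TSA operation: self-attention restricted to tube positions."""

    def __init__(self, channels=1024, reduced=512):
        super().__init__()
        self.theta = nn.Conv3d(channels, reduced, 1)   # theta(.) embedding
        self.phi = nn.Conv3d(channels, reduced, 1)     # phi(.) embedding
        self.g = nn.Conv3d(channels, reduced, 1)       # g(.) embedding
        self.w_z = nn.Conv3d(reduced, channels, 1)     # W_z in Eq. (6)

    def forward(self, x, tube_mask):
        # x: (N, C, T, H, W) clip features; tube_mask: (N, T, H, W) booleans.
        n, c, t, h, w = x.shape
        theta = self.theta(x).flatten(2).transpose(1, 2)  # (N, THW, C')
        phi = self.phi(x).flatten(2)                      # (N, C', THW)
        g = self.g(x).flatten(2).transpose(1, 2)          # (N, THW, C')
        keep = tube_mask.flatten(1)                       # (N, THW)
        out = x.clone()
        for b in range(n):
            idx = keep[b].nonzero(as_tuple=True)[0]       # positions in Omega
            if idx.numel() == 0:
                continue
            q, k, v = theta[b, idx], phi[b][:, idx], g[b, idx]
            attn = (q @ k) / idx.numel()                  # dot product / C(x)
            y = attn @ v                                  # aggregated responses
            y_full = torch.zeros_like(g[b])
            y_full[idx] = y
            z = self.w_z(y_full.transpose(0, 1).reshape(1, -1, t, h, w))
            # Residual link of Eq. (6); the update is kept inside the tube.
            out[b] = x[b] + z[0] * tube_mask[b].unsqueeze(0)
        return out
```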
Thus, \ud835\udc47= 4 and \ud835\udc3b= \ud835\udc4a= 14. Compared with the Non-local operation, the TSA module greatly reduces the computational complexity in time and space from \ud835\udc42((\ud835\udc41\u00d7\ud835\udc47\u00d7 \ud835\udc3b\u00d7\ud835\udc4a) \u00d7 (\ud835\udc41\u00d7\ud835\udc47\u00d7 \ud835\udc3b\u00d7\ud835\udc4a)) (7) to \ud835\udc42 \u2211\ufe01 \ud835\udc50 \u2211\ufe01 \ud835\udc61 \f \f\u2126\ud835\udc50,\ud835\udc61 \f \f ! \u00d7 \u2211\ufe01 \ud835\udc50 \u2211\ufe01 \ud835\udc61 \f \f\u2126\ud835\udc50,\ud835\udc61 \f \f !! (8) Note that the computational cost of TSA can only be measured after forwarding propagation because \u2126\ud835\udc50,\ud835\udc61is generated from \ud835\udc35. 3.3 Network Head and Training To verify the effectiveness of the TSA module, we extend the network head to support multiple tasks, including classification, regression, and score distribution prediction. All tasks can be achieved by changing the output size of MLP_block and the definition of the loss function. The implementation details of these three tasks are as follows: \fMTL-AQA #02-32 AQA7sync.10m #082 AQA7-snow. #056 AQA7-gym. #023 GT:91.20 Pr :90.50 GT:34.76 Pr :37.62 GT:54.76 Pr :53.29 GT:87.09 Pr :86.73 Figure 5: The tracking results and predicted scores of four cases from four datasets. Four manually annotated initial frames are coloured in yellow, and the subsequent boxes generated by SiamMask are coloured in green. The predicted scores of TSA-Net and GT scores are shown on the right. More visualization cases can be found in supplementary materials. Classification. When dealing with classification tasks, the output dimension of MLP_block is determined by the number of categories. Binary Cross-Entropy loss (BCELoss) is adopted. Regression. When dealing with regression tasks, the output dimension of MLP_block is set to 1. Mean Square Error loss (MSELoss) is adopted. Score distribution prediction. Tang et al. [33] proposed an uncertainty-aware score distribution learning (USDL) approach and its multi-path version MUSDL for AQA tasks. Although experiment results in [33] proved the superiority of MUSDL compared with USDL, a multi-path strategy will lead to a significant increase in computational cost. However, the TSA module can generate features with rich contextual information by adopting a self-attention mechanism in ST-Tube with less computational complexity. To verify the effectiveness of the TSA module, we embed the TSA module into the USDL model. The loss function is defined as Kullback-Leibler (KL) divergence of predicted score distribution and ground-truth (GT) score distribution: \ud835\udc3e\ud835\udc3f \b \ud835\udc5d\ud835\udc50\u2225\ud835\udc60\ud835\udc5d\ud835\udc5f\ud835\udc52 \t = \ud835\udc5a \u2211\ufe01 \ud835\udc56=1 \ud835\udc5d(\ud835\udc50\ud835\udc56)\ud835\udc59\ud835\udc5c\ud835\udc54\ud835\udc5d(\ud835\udc50\ud835\udc56) \ud835\udc60\ud835\udc5d\ud835\udc5f\ud835\udc52(\ud835\udc50\ud835\udc56) (9) Where \ud835\udc60\ud835\udc5d\ud835\udc5f\ud835\udc52is generated by MLP_block, and \ud835\udc5d\ud835\udc50is generated by GT score. Note that for dataset with difficulty degree (DD), \ud835\udc60= \ud835\udc37\ud835\udc37\u00d7 \ud835\udc60\ud835\udc5d\ud835\udc5f\ud835\udc52is used as the final predicted score. 4 EXPERIMENTS We carry out comprehensive experiments on AQA-7 [24], MTLAQA [25], and FR-FS datasets to evaluate the proposed method. Experimental results demonstrate that TSA-Net achieves state-ofthe-art performance on these datasets. 
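For reference, the score-distribution head and loss described above can be sketched as follows; reading the final score as the expectation over score bins follows the USDL formulation and is an assumption here, as is the number of bins.

```python
import torch
import torch.nn.functional as F


def usdl_kl_loss(pred_logits: torch.Tensor, gt_dist: torch.Tensor) -> torch.Tensor:
    """KL{p_c || s_pre} of Eq. (9): gt_dist holds ground-truth score
    distributions, pred_logits are MLP_block outputs before softmax."""
    log_pred = F.log_softmax(pred_logits, dim=-1)
    return F.kl_div(log_pred, gt_dist, reduction="batchmean")


def final_score(pred_dist: torch.Tensor, score_bins: torch.Tensor, dd=None) -> torch.Tensor:
    """Read out s_pre as the expectation over score bins (USDL-style) and
    apply the difficulty degree when available: s = DD * s_pre."""
    s_pre = (pred_dist * score_bins).sum(dim=-1)
    return s_pre if dd is None else dd * s_pre


logits = torch.randn(4, 101)                     # number of bins is illustrative
gt = torch.softmax(torch.randn(4, 101), dim=-1)  # toy ground-truth distributions
loss = usdl_kl_loss(logits, gt)
scores = final_score(torch.softmax(logits, -1), torch.linspace(0, 100, 101))
```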
In the following subsections, we first introduce two public datasets and a new dataset named Fall Detection in Figure Skating (FD-FS) proposed by us. After that, a series of experiments and computational complexity analysis are performed on AQA-7 and MTL-AQA datasets. Finally, the detection results on FD-FS are reported, and the network prediction results are analyzed visually and qualitatively. 4.1 Datasets and Evaluation Metrics AQA-7 [24]. The AQA-7 dataset comprising samples from seven actions. It contains 1189 videos, in which 803 videos are used for training and 303 videos used for testing. To ensure the comparability with other models, we delete trampoline category because of its long time. MTL-AQA [25]. The MTL-AQA dataset is currently the largest dataset for AQA tasks. There are 1412 diving samples collected from 16 different events in MTL-AQA. Furthermore, MTL-AQA provides detailed scoring of each referee, diving difficulty degree, and live commentary. We followed the evaluation protocol suggested in [25], so that there are 1059 samples used for training and 353 used for testing. FR-FS (Fall Recognition in Figure Skating). Although some methods have been proposed [27, 40] to evaluate figure skating skills, they are only based on long-term videos which last nearly 3 minutes. These coarse-grained methods will lead to the inundation of detailed information in a long time scale. However, these details are crucial and indispensable for AQA tasks. To address this issue, we propose a dataset named FR-FS to recognize falls in figure skating sports. We plan to start from the most basic fault recognition and gradually build a more delicate granularity figure skating AQA system. The FR-FS dataset contains 417 videos collected from FIV [40] and Pingchang 2018 Winter Olympic Games. FR-FS contains the critical movements of the athlete\u2019s take-off, rotation, and landing. Among them, 276 are smooth landing videos, and 141 are fall videos. To test the generalization performance of our proposed model, we randomly select 50% of the videos from the fall and landing videos as the training set and the testing set. \fTable 1: Comparison with state-of-the-arts on AQA-7 Dataset. Method Diving Gym Vault Skiing Snowboard Sync. 3m Sync. 10m Avg. Corr. Pose+DCT [27] 0.5300 ST-GCN [41] 0.3286 0.577 0.1681 0.1234 0.6600 0.6483 0.4433 C3D-LSTM [23] 0.6047 0.5636 0.4593 0.5029 0.7912 0.6927 0.6165 C3D-SVR [23] 0.7902 0.6824 0.5209 0.4006 0.5937 0.9120 0.6937 JRG [22] 0.7630 0.7358 0.6006 0.5405 0.9013 0.9254 0.7849 USDL [33] 0.8099 0.757 0.6538 0.7109 0.9166 0.8878 0.8102 NL-Net 0.8296 0.7938 0.6698 0.6856 0.9459 0.9294 0.8418 TSA-Net (Ours) 0.8379 0.8004 0.6657 0.6962 0.9493 0.9334 0.8476 Table 2: Study on different settings of the number of TSA module. Method Diving Gym Vault Skiing Snowboard Sync. 3m Sync. 10m Avg. Corr. TSA-Net 0.8379 0.8004 0.6657 0.6962 0.9493 0.9334 0.8476 TSAx2-Net 0.8380 0.7815 0.6849 0.7254 0.9483 0.9423 0.8526 TSAx3-Net 0.8520 0.8014 0.6437 0.6619 0.9331 0.9249 0.8352 Table 3: Comparisons of computational complexity and performance on AQA-7. GFLOPs is adopted to measure the computational cost. Method NL-Net TSA-Net Comp. Dec. Corr. Imp. Diving 2.2G 0.864G -60.72% \u21910.0083 Gym Vault 2.2G 0.849G -61.43% \u21910.0066 Skiing 2.2G 0.283G -87.13% \u21930.0041 Snowboard 2.2G 0.265G -87.97% \u21910.0106 Sync. 3m 2.2G 0.952G -56.74% \u21910.0034 Sync. 10m 2.2G 0.919G -58.24% \u21910.0040 Average 2.2G 0.689G -68.70% \u21910.0058 Evaluation Protocols. 
Spearman\u2019s rank correlation is adopted as the performance metric to measure the divergence between the GT score and the predicted score. The Spearman\u2019s rank correlation is defined as follows: \ud835\udf0c= \u00cd(\ud835\udc5d\ud835\udc56\u2212\u00af \ud835\udc5d)(\ud835\udc5e\ud835\udc56\u2212\u00af \ud835\udc5e) \u221a\ufe01\u00cd(\ud835\udc5d\ud835\udc56\u2212\u00af \ud835\udc5d)2 \u00cd(\ud835\udc5e\ud835\udc56\u2212\u00af \ud835\udc5e)2 (10) Where \ud835\udc5dand \ud835\udc5erepresent the ranking of GT and predicted score series, respectively. Fisher\u2019s z-value [23] is used to measure the average performance across multiple actions. 4.2 Implementation Details Our proposed methods were built on the Pytorch toolbox [26] and implemented on a system with the Intel (R) Xeon (R) CPU E52698 V4 @ 2.20GHz. All models are trained on a single NVIDIA Tesla V100 GPU. Faster-RCNN[30] pretrained on MS-COCO[19] is adopted to detect the athletes in all initial frames. All videos are normalized to \ud835\udc3f= 103 frames. For all experiments, the I3D[1] pretrained on Kinetics [16] is utilized as the feature extractor. All videos are select from high-quality sports broadcast videos, and Table 4: Comparison with state-of-the-arts on MTL-AQA. Method Avg. Corr. Pose+DCT [27] 0.2682 C3D-SVR [23] 0.7716 C3D-LSTM [23] 0.8489 C3D-AVG-STL [25] 0.8960 C3D-AVG-MTL [25] 0.9044 MUSDL [33] 0.9273 NL-Net 0.9422 TSA-Net 0.9393 the athletes\u2019 movements are apparent. Therefore, we argue that the performance of TSA-Net is not sensitive to the choice of the tracker. SiamMask[38] was chosen only for high-speed and tight boxes. Each training mini-batch contains 4 samples. Adam [17] optimizer was adopted for network optimization with initial learning rate 1e-4, momentum 0.9, and weight decay 1e-5. Considering the complexity differences between datasets, we adopt different experimental settings. In AQA-7 and MTL-AQA datasets, all videos are divided into 10 clips consistent with [33]. Random horizontal flipping and timing offset are performed on videos in training phase. Training epoch is set to 100. All video score normalization are consistent with USDL [33]. In FR-FS dataset, all videos are divided into 7 segments to prevent overfitting. Training epoch is set to 20. 4.3 Results on AQA-7 Dataset The TSA module and the Non-local module are embedded after Mixed_4e of I3D to create TSA-Net and NL-Net. Experimental results in Table 1 show that TSA-Net achieves 0.8476 on Avg. Corr. , which is higher than 0.8102 of USDL. TSA-Net outperforms USDL in all categories except the snowboard. This is mainly caused by \f AQA-7-gym_vault #001 AQA-7-diving #017 missing distractors wrong f=23 f=32 f=38 f=40 f=53 f=60 f=62 f=100 f=1 f=42 f=50 f=60 f=70 f=74 f=78 f=100 Figure 6: Alphapose [7] is selected as the pose estimator. The estimation results of two sports videos are visualized. the size issue: the small size of the target leads to the small size of the ST-Tube, resulting in invalid feature enhancement (AQA-7 snow. #056 in Figure 5). Note that the TSA module is used in a plugand-play fashion, comparative experiments in Table 1 can also be regarded as ablation studies. Therefore, we didn\u2019t set up a separate part of ablation in this paper. The effect of different number of TSA module. Inspired by the multi-layer attention mechanism in Transformer [36], we stack multiple TSA modules and test these variants on AQA-7. 
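A minimal sketch of this stacking, reusing the TubeSelfAttention sketch given after Eq. (6); because every TSA block preserves the feature-map size, the same tube mask can be passed to each block.

```python
import torch.nn as nn


class StackedTSA(nn.Module):
    """Chain N_stack TSA blocks; the feature-map size is unchanged at each stage."""

    def __init__(self, n_stack=2, channels=1024):
        super().__init__()
        # TubeSelfAttention refers to the sketch given after Eq. (6) above.
        self.blocks = nn.ModuleList(
            [TubeSelfAttention(channels) for _ in range(n_stack)]
        )

    def forward(self, x, tube_mask):
        for blk in self.blocks:
            x = blk(x, tube_mask)  # the same ST-Tube mask is reused per block
        return x
```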
Experimental results in Table 2 show that TSA-Net achieves the best performance when \ud835\udc41\ud835\udc60\ud835\udc61\ud835\udc4e\ud835\udc50\ud835\udc58= 2. Benefit from the feature aggregation operations conducted by two subsequent TSA modules, the network can capture richer contextual features compared with USDL. When \ud835\udc41\ud835\udc60\ud835\udc61\ud835\udc4e\ud835\udc50\ud835\udc58= 3, the performance of the model becomes worse, which may be caused by overfitting. Computational cost analysis. Computational cost comparison results are shown in Table 3. Note that only the calculation of TSA module or Non-local module is counted, not the whole network. Compared with NL-Net, TSA-Net can reduce the computation by 68.7% on average and bring 0.0058 AVG. Corr. improvement. This is attributed to the tube mechanism adopted in TSA module, which can avoid dense attention calculation and improve performance simultaneously. Among all categories in AQA-7, the TSA module saves up to 87% of the computational complexity on skiing and snowboard. Such a large reduction is caused by the small size of the ST-Tube. However, the small ST-Tube will hinder the network from completing effective feature aggregation and ultimately affect the final performance. This conclusion is consistent with the analysis of the results in Table 1. 4.4 Results on MTL-AQA Dataset As shown in Table 4, the TSA-Net and NL-Net is compared with existing methods. Regression network head and MSELoss are adopted in two networks. Experimental results show that both TSA-Net and NL-Net can achieve state-of-the-art performance and the NL-Net is better. The performance fluctuation of TSA-Net is mainly caused by different data distribution between two datasets. Videos in MTLAQA have higher resolution (640x360 to 320x240) and broader field Table 5: Comparisons of computational complexity and performance between NL-Net and the variants of TSA-Net on MTL-AQA. Method Sp. Corr.\u2191 MSE\u2193 FLOPs\u2193 NL-Net 0.9422 47.83 2.2G TSA-Net 0.9393 37.90 1.012G TSAx2-Net 0.9412 46.51 2.025G TSAx3-Net 0.9403 47.77 3.037G Table 6: Recognition accuracy on FR-FS. Method Acc. Plain-Net 94.23 TSA-Net 98.56 of view, which leads to smaller ST-Tubes in TSA-Net and affects the performance. It should be emphasized that this impact is feeble. TSA-Net saves half of the computational cost and achieves almost the same performance as NL-Net. This proves the effectiveness and efficiency of the TSA-Net, which is not contradictory to the final conclusion. Studys on the the stack number of TSA modules and computational cost. As shown in Tabel 5, three parallel experiments are conducted with only the number of TSA module changed just as the experiments on AQA-7. If NL-Net is excluded, the best Sp. Corr. is achieved when \ud835\udc41\ud835\udc50\ud835\udc59\ud835\udc56\ud835\udc5d= 2 (i.e., TSAx2-Net), while TSA-Net with only one TSA module achieves minimum MSE and computational cost simultaneously. This phenomenon is mainly caused by low computational complexity of TSA-Net. The sparse feature interaction characteristics of TSA module achieve more efficient feature enhancement and have the ability to avoid overfitting. Although the performance of TSA-Net can be improved by increasing the number of TSA modules, it will increase computational cost. To achieve the balance between computational cost and performance, we only take \ud835\udc41\ud835\udc60\ud835\udc61\ud835\udc4e\ud835\udc50\ud835\udc58= 1 in all subsequent experiments. 
4.5 Results on FR-FS Dataset In FR-FS dataset, we focus on the performance improvement that the TSA module can achieve. Therefore, Plain-Net and TSA-Net are implemented, respectively. The former does not adopt any feature enhancement mechanism, while the latter is equipped with a TSA module. As shown in Table 6, TSA-Net outperforms Plain-Net by 4.33%, which proves the effectiveness of TSA module. Visualization of Temporal Evolution. A case study is also conducted to further explore the performance of TSA-Net. Two representative videos are selected and the prediction results of each video clip are visualized in Figure 7, . All clip scores are obtained by deleting the temporal pooling operation in Plain-Net and TSA-Net. In the failure case #308-1, both Plain-Net and TSA-Net can detect that the athlete falls in the fourth chip which highlighted in red, but only TSA-Net gets the correct result in the end (0.9673 for Plain-Net and 0.2523 for TSA-Net). The TSA mechanism forces the features in ST-Tube interact with each other in the way of self-attention, \fPlain-Net #308-1 TSA-Net Plain-Net #241-3 TSA-Net 1.0000 0.9977 0.9997 0.9308 0.9060 0.0224 0.0000 0.0045 0.0730 0.9994 0.0000 0.9976 0.0104 0.9948 0.9673 0.2523 0.9997 0.9980 0.9999 0.1411 0.9997 1.0000 0.9998 0.8345 0.9998 0.9936 0.9998 0.9969 1.0000 0.9933 0.9997 0.9855 Prediction Prediction clip=1 clip=2 clip=3 clip=4 clip=5 clip=6 clip=7 GT = 0 GT = 1 Figure 7: Case study with qualitative results on FR-FS. The failure case #308-1 is above the timeline, while the successful case #241-3 is below the timeline. which makes TSA-Net regard the standing up and adjusting actions after falling as errors in clip 5 to 7. It seems that TSA-Net is too strict in fall recognition, but the analysis in the successful case #241-3 has overturned this view. Two models get similar results, except for the second clip (colored in blue), which contains the take-off and rotation phase. Plain-Net has great uncertainty for the stationarity of take-off phase, while TSA-Net can get high confidence results. Based on visual analysis and quantitative analysis, it can be concluded that the TSA module is able to perform feature aggregation effectively and obtain more reasonable and stable prediction results. 4.6 Analysis and Visualization Reasons for choosing tracking boxes over pose estimation. In sports scenes, high-speed movements of the human body will lead to a series of challenges such as motion blur and self-occlusion, which eventually result in failure cases in pose estimation. Results in Figure 6 show that Alphapose [7] cannot handle these situations properly. The missing of human posture and background audience posture interference will seriously affect the evaluation results. Previous studies in FineGym [31] have come to the same results as ours. Based on these observations, we conclude that methods based on pose estimation are not suitable for AQA in sports scenes. Missing boxes and wrong poses will significantly limit the performance of the AQA model. Therefore, we naturally introduce the VOT tracker into AQA tasks. The proposed TSA-Net achieves significant improvement in AQA-7 and MTL-AQA compared to pose-based methods such as Pose+DCT [27] and ST-GCN [41] as shown in Table 1 and 4. These comparisons show that the TSA mechanism is superior to the posture-based mechanism in capturing key dynamic characteristics of human motion. Visualization on MTL-AQA and AQA-7. Four cases are visualized in Figure 5. 
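The clip-wise curves in Figure 7 are described as obtained by removing the temporal pooling; a minimal sketch of that readout follows, where the sigmoid output and the linear head are illustrative assumptions for the binary FR-FS task.

```python
import torch
import torch.nn as nn


def clip_and_video_scores(clip_embeddings: torch.Tensor, mlp_block: nn.Module):
    """clip_embeddings: (N_clip, D) pooled per-clip features h_n.

    The video-level score averages the clips before MLP_block, as in the
    paper; the per-clip scores simply skip that temporal average."""
    per_clip = torch.sigmoid(mlp_block(clip_embeddings)).squeeze(-1)                 # (N_clip,)
    video = torch.sigmoid(mlp_block(clip_embeddings.mean(0, keepdim=True))).squeeze()
    return per_clip, video


mlp = nn.Linear(1024, 1)  # illustrative head; FR-FS videos are split into 7 clips
per_clip, video = clip_and_video_scores(torch.randn(7, 1024), mlp)
```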
The tracking results generated by SiamMask are very stable and accurate. The final predicted scores are very close to the GT score since the TSA module is adopted. Interestingly, as shown in Figure 5, the VOT tracker can handle various complex situations, such as the disappearance of athletes (#02-32), drastic changes in scale (#056) and synchronous diving (#082). These results show that the tracking strategy perfectly meets the requirements of AQA tasks and verify the effectiveness of the TSA module. 5" + } + ], + "Zhiwei Xiong": [ + { + "url": "http://arxiv.org/abs/2309.02861v1", + "title": "Image Aesthetics Assessment via Learnable Queries", + "abstract": "Image aesthetics assessment (IAA) aims to estimate the aesthetics of images.\nDepending on the content of an image, diverse criteria need to be selected to\nassess its aesthetics. Existing works utilize pre-trained vision backbones\nbased on content knowledge to learn image aesthetics. However, training those\nbackbones is time-consuming and suffers from attention dispersion. Inspired by\nlearnable queries in vision-language alignment, we propose the Image Aesthetics\nAssessment via Learnable Queries (IAA-LQ) approach. It adapts learnable queries\nto extract aesthetic features from pre-trained image features obtained from a\nfrozen image encoder. Extensive experiments on real-world data demonstrate the\nadvantages of IAA-LQ, beating the best state-of-the-art method by 2.2% and 2.1%\nin terms of SRCC and PLCC, respectively.", + "authors": "Zhiwei Xiong, Yunfan Zhang, Zhiqi Shen, Peiran Ren, Han Yu", + "published": "2023-09-06", + "updated": "2023-09-06", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "INTRODUCTION Image aesthetics assessment (IAA) is a computer vision task that aims to evaluate the aesthetic quality of images. Such a capability is beneficial for downstream applications including image recommendation, enhancement, retrieval, and generation [1]. Due to the inherent subjectivity and ambiguity associated with image aesthetics, the ground truth of image aesthetics is usually determined by the opinions of different reviewers in the form of the mean opinion score (MOS) or the distribution of opinion scores (DOS). Depending on the content of the image, there can be different emphases involved in aesthetics assessment. Early works proposed to split the images into different semantic groups and extract different sets of aesthetic features [2, 3, 4]. The features can be handcrafted under the guidance of photography rules [2], or extracted by deep neural networks [3, 4]. However, there may be special cases where This research is supported, in part, by the National Research Foundation, Singapore and DSO National Laboratories under the AI Singapore Programme (AISG Award No: AISG2-RP-2020-019); Alibaba Group through Alibaba Innovative Research (AIR) Program and Alibaba-NTU Singapore Joint Research Institute (JRI) (Alibaba-NTU-AIR2019B1), Nanyang Technological University, Singapore; and the RIE 2020 Advanced Manufacturing and Engineering (AME) Programmatic Fund (No. A20G8b0102), Singapore. an image does not belong to any predefined semantic group. Explicitly splitting these images based on semantics can result in the relationships among different semantic groups being overlooked. Therefore, later works [5, 6, 7] attempted to implicitly extract aesthetic features from pre-trained semantic backbones. 
Since fine-tuning the entire backbone is computationally expensive [5, 7] and might lead to attention dispersion [6], the backbone is only used as a feature extractor. However, since the vision backbones are pre-trained primarily for image classification [5, 7] or scene recognition [6], they lack knowledge regarding the aesthetic attributes (e.g., composition) that are less related to semantics. Moreover, some of these works [5, 7] require input images in full resolution and additional feature extraction stages in advance, which lack efficiency and practical applicability. More recently, with the prevalence of pre-trained large vision-language models, works that utilize such models together with specially designed prompts for IAA are starting to emerge [8, 9]. The vision-language models used are either pre-trained solely on general image-text pairs [8], or further pre-trained on aesthetic image-text pairs [9]. Either way, their pre-trained models are not limited to aesthetic-related semantic patterns but also cover relatively abstract knowledge related to image aesthetics. However, they use relatively simple single prompts (e.g., \u201cgood image\u201d) or prompt pairs (e.g., \u201cgood photo\u201d and \u201cbad photo\u201d) to extract aesthetic-related knowledge, which cannot deal with complex IAA tasks. Inspired by BLIP-2 [10], a vision-language pre-training model that uses learnable queries to align vision and language pre-trained features extracted from frozen unimodal models, we propose the Image Aesthetics Assessment via Learnable Queries (IAA-LQ) approach. It trains learnable queries to extract aesthetic features from the pre-trained vision backbones. With a flexible quantity, the learnable queries can extract the most aesthetics-related image patterns from the frozen vision backbone. Extensive experiments on real-world data demonstrate the advantages of IAA-LQ, beating the best state-ofthe-art method by 2.2% and 2.1% in terms of Spearman\u2019s rank correlation coefficient (SRCC) and Pearson linear correlation coefficient (PLCC), respectively. arXiv:2309.02861v1 [cs.CV] 6 Sep 2023 \f2. THE PROPOSED IAA-LQ APPROACH The proposed IAA-LQ approach is shown in Figure 1. It consists of a frozen image encoder to extract pre-trained image features, a set of learnable queries, a querying transformer for selfand cross-attention, and a prediction header for IAA. 2.1. Encoding an Image A given image can be expressed as (In, dn), where In is the n-th image in an IAA dataset, and dn = {dk n}K k=1 is its ground truth aesthetic DOS which satisfies PK k=1 dk n = 1. Note the aesthetic MOS sn can be derived from dn as: sn = K X k=1 (k \u00b7 dk n). (1) In our design, we adopt pre-trained vision transformers (ViT) as the image encoder. It splits the input image into Np patch tokens and adds a [CLS] token as the first token. Therefore, the extracted pre-trained image embeddings en v of image In can be expressed as: en v = E\u03b8v(In), (2) where E\u03b8v(\u00b7) is the ViT parameterized by frozen \u03b8v, and en v \u2208 R(1+Np)\u00d7Hv with Hv denoting the hidden size of the ViT. Image Encoder Self Attention Cross Attention Feed Forward \u2026 \u2026 Feed Forward Learnable Queries Query Embeddings DOS Prediction Average x N Softmax Fig. 1. The design of IAA-LQ. It learns embeddings for learnable queries through a querying transformer, where pretrained image features extracted with a frozen image encoder are inserted once in every two transformer blocks for crossattention. 
The learned query embeddings are averaged and passed through a feed-forward layer and Softmax to output the predicted aesthetic DOS. 2.2. Learnable Queries & Querying Transformer Suppose we use M learnable queries with hidden size Hq to extract the aesthetic features from the pre-trained image features. The pre-trained image embeddings are inserted once every two transformer blocks into the querying transformer for cross-attention with the queries. In each block, the queries first interact with each other through self-attention, and then possibly conduct cross-attention with the image embeddings. Finally, they are passed through a feed-forward layer to output query embeddings. Formally, for queries q \u2208RM\u00d7Hq and pre-trained image embeddings en v, the output query embeddings en q with aesthetic knowledge extracted from the image is expressed as: en q = E\u03b8q(q|en v), (3) where E\u03b8q(\u00b7) is the querying transformer parameterized by \u03b8q, and en q \u2208RM\u00d7Hq. 2.3. Prediction Header for IAA After the output query embeddings en q are obtained, we take the average of the query embeddings to obtain a compact aesthetic embedding: en a = \u00af en q , (4) where en a \u2208RHq. Based on the aesthetic embedding, a prediction header for IAA is appended. It is a feed-forward layer with an input size of Hq and an output size of K followed by a Softmax layer. Together, the predicted K-scale aesthetic DOS can be computed as: \u02c6 dn = E\u03b8p(en a). (5) E\u03b8p(\u00b7) is the prediction header parameterized by \u03b8p. Following [11], we adopt Earth Mover\u2019s Distance (EMD) loss to optimize the predicted DOS towards the ground truth: L(dn, \u02c6 dn) = v u u t 1 K K X k=1 |CDFdn(k) \u2212CDF\u02c6 dn(k)|2. (6) CDFdn and CDF\u02c6 dn are cumulative density functions for the ground truth DOS and predicted DOS, respectively. The overall objective is to minimize the loss over the whole dataset: min q,\u03b8q,\u03b8p ( N X n=1 L(dn, \u02c6 dn) ) . (7) 3. EXPERIMENTAL EVALUATION 3.1. Experiment Settings The dataset adopted for our experimental evaluation is the benchmark IAA dataset, the AVA dataset [12]. It contains over 250,000 images, each of which received 78 to 549 aesthetic scores (average of 210) on a scale of 1 to 10 (i.e., K = \f10), where 1 and 10 denote the lowest and highest aesthetics, respectively. The ground truth DOS could be derived by computing the ratio of reviewers under each score level to the total number of reviewers. The same train-test split in [5, 9] is adopted for our experiments, where 235,574 and 19,928 images are allocated for training and testing, respectively. We also include the generic testing set of the PARA dataset [13] to evaluate the generalization ability and attribute-level performance of IAA-LQ. It contains 3,000 images with overall and attribute-level aesthetic annotations. Following [10], the pre-trained image embeddings are retrieved from the second last layer of the pre-trained ViT. The image resolution is set to 224 \u00d7 224 with a patch size of 14 \u00d7 14, yielding 257 image embeddings (i.e., Np = 256). We set the query hidden size Hq to 768, initialize the QFormer with BERTbase [19], and randomly initialize the crossattention layers. We train the Q-Former with a batch size of 128 for 10 epochs with Adam optimizer. The learning rate is set to 3 \u00d7 10\u22125 initially, and multiplied by 0.1 every two epochs. 
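To make the prediction header and training objective above concrete, the following is a minimal PyTorch sketch of the K-scale DOS head (Eq. 5), the EMD loss (Eq. 6), and the DOS-to-MOS conversion (Eq. 1). The tensor shapes, placeholder inputs, and function names are illustrative assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn as nn

K = 10      # number of score levels (1..10 on AVA)
H_q = 768   # query hidden size, following the setting reported above

# Prediction header (Eq. 5): one feed-forward layer followed by Softmax over K score levels
head = nn.Sequential(nn.Linear(H_q, K), nn.Softmax(dim=-1))

def emd_loss(d_gt: torch.Tensor, d_pred: torch.Tensor) -> torch.Tensor:
    """EMD loss of Eq. (6) between ground-truth and predicted DOS tensors of shape (B, K)."""
    cdf_gt = torch.cumsum(d_gt, dim=-1)
    cdf_pred = torch.cumsum(d_pred, dim=-1)
    return torch.sqrt(((cdf_gt - cdf_pred) ** 2).mean(dim=-1)).mean()

def dos_to_mos(d: torch.Tensor) -> torch.Tensor:
    """Derive the mean opinion score from a DOS as in Eq. (1)."""
    scores = torch.arange(1, K + 1, dtype=d.dtype, device=d.device)
    return (d * scores).sum(dim=-1)

# Toy usage: e_a stands in for the averaged query embeddings of Eq. (4)
e_a = torch.randn(4, H_q)
d_pred = head(e_a)                                # predicted DOS, shape (4, K)
d_gt = torch.softmax(torch.randn(4, K), dim=-1)   # placeholder ground-truth DOS
loss = emd_loss(d_gt, d_pred)
mos = dos_to_mos(d_pred)
```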
To evaluate the performance of the comparison approaches, we report Spearman\u2019s rank correlation coefficient (SRCC) and Pearson linear correlation coefficient (PLCC) between the predicted and ground truth aesthetic MOSs. 3.2. Comparison Results Table 1 shows the performances of IAA-LQ compared with 9 state-of-the-art (SOTA) methods under the AVA dataset. It can be observed that IAA-LQ achieves the best performance, exceeding the best-performing SOTA method VILA-R [9] by 2.2% and 2.1% in terms of SRCC and PLCC, respectively. To obtain this model, we employ Horizontal Flipping (HF) with p = 0.5 on training images, use ViT-G/14 from EVACLIP [20] as the frozen vision backbone which outputs pretrained image embeddings with Hv = 1408, and M = 2 learnable queries to extract the aesthetic features. Similar to VILA-R, IAA-LQ takes input images with a low resolution rather than the full resolution as required by previous works [5, 17]. It demonstrates that IAA-LQ can effectively learn from the most critical aesthetics-related image clues. Method SRCC PLCC NIMA [11] 0.612 0.636 AFDC + SPP [14] 0.649 0.671 GPF-CNN [15] 0.671 0.682 MaxViT [16] 0.708 0.745 MUSIQ [17] 0.726 0.738 MLSP [5] 0.756 0.757 TANet [6] 0.758 0.765 GAT\u00d73-GATP [18] 0.762 0.764 VILA-R [9] 0.774 0.774 IAA-LQ 0.791 0.790 Table 1. Experiment results on the AVA dataset. Padding Augmentation SRCC PLCC True None 0.782 0.783 True HF 0.784 0.785 True RC 0.781 0.781 True HF + RC 0.780 0.780 False None 0.790 0.790 False HF 0.791 0.790 False RC 0.784 0.785 False HF + RC 0.785 0.784 Table 2. Performance of IAA-LQ with different padding and augmentation strategies. HF denotes Horizontal Flipping with p = 0.5. RC denotes resizing to 272 \u00d7 272 followed by Random Cropping of 224 \u00d7 224. M 32 1 2 3 4 SRCC 0.768 0.788 0.791 0.790 0.788 PLCC 0.766 0.788 0.790 0.789 0.788 Table 3. Performance of IAA-LQ with different numbers of learnable queries. M denotes the number of queries. Vision Backbone Embeddings SRCC PLCC ViTCLIP CLS 0.735 0.735 ViTCLIP CLS+P 0.626 0.628 ViTCLIP LQ 0.774 0.775 ViTEVA-CLIP CLS 0.743 0.743 ViTEVA-CLIP CLS+P 0.713 0.712 ViTEVA-CLIP LQ 0.791 0.790 Table 4. Performance of IAA-LQ with different vision backbones and different embeddings for the aesthetic prediction. CLS denotes the embedding of the [CLS] token. CLS+P denotes the average of the embedding of the [CLS] token and patch embeddings. LQ denotes the average of the learned query embeddings. 3.3. Ablation Studies Effects of image padding and augmentations: Since image aesthetics is empirically found to be sensitive to image paddings and augmentations, we investigate their effects in IAA-LQ. The two most common augmentations used in IAA, Horizontal Flipping (HF) and Random Cropping (RC), are considered. As shown in Table 2, RC deteriorates our model performance, which could be due to the content and composition changes that are highly related to image aesthetics as described in photographic rules such as the Rule of Thirds. On the other hand, HF brings slight performance improvement because it generally does not affect image aesthetics, while bringing in more samples to train on. Effects of number of queries: The number of learnable queries M is a crucial hyperparameter in IAA-LQ as it directly affects the richness of the aesthetic features extracted. \f6.985 (7.091) 6.983 (7.055) 7.106 (7.12) 5.196 (5.195) 4.874 (4.873) 5.064 (5.065) 3.044 (3.046) 3.231 (3.203) 3.373 (3.474) Fig. 2. Examples of the IAA-LQ MOS prediction results. 
Images from the top row to the bottom row are example images with relatively high, moderate, and relatively low ground truth MOSs. The blue and (green) numbers beneath each image are its predicted and (ground truth) MOSs, respectively. As shown in Table 3, we start exploring the optimal M by setting it to 32, the same as in BLIP-2 [10]. However, it can be observed that with 32 queries, IAA-LQ stops improving at very early stages and easily overfits. Therefore, we then attempt to set M = 1 and increase it gradually. Experiment results demonstrate that IAA-LQ achieves the best performance when only two queries are used. It demonstrates its ability of using a small number of queries to capture adequate aesthetic features for IAA. Effectiveness of the learnable queries: The frozen vision backbone determines the pre-trained image patterns that can be used for IAA. In our experiments, we consider ViT-L/14 from CLIP [21] and ViT-G/14 from EVA-CLIP [20], which output image embeddings with Hv = 1024 and Hv = 1408, respectively. To demonstrate the effectiveness of the learnable queries, we compare the performances of learning image aesthetics from the pre-trained [CLS] embedding, from the average of pre-trained [CLS] and patch embeddings, and from the query embeddings of the learnable queries. As shown in Table 4, the outstanding performance of the learnable queries demonstrates the effectiveness of the design of the IAA-LQ learnable queries and querying transformer in extracting relevant aesthetic features from pre-trained image features. 3.4. Model Interpretation In Figure 2, we show some examples of the aesthetic MOS prediction results using the proposed IAA-LQ approach. The Attribute SRCC PLCC Aesthetics 0.701 0.739 Composition 0.702 0.737 Content 0.686 0.730 Quality 0.684 0.725 DOF 0.667 0.711 Light 0.642 0.694 Color 0.619 0.667 Table 5. Attribute-level performance of IAA-LQ. example images are retrieved from the testing set of AVA. It can be observed that IAA-LQ can estimate image aesthetics for various image contents, such as landscapes, objects, and portraits. However, it does suffer from increased prediction errors when the test image has relatively extreme aesthetic quality (either too high or too low). It can be attributed to the data distribution of the AVA dataset, where the majority of the images have aesthetic MOS around the borderline (i.e., 5). This can be improved by training IAA-LQ on more images with relatively extreme aesthetics MOSs. To evaluate the generalization capability of IAA-LQ and its ability to indicate fine-grained aesthetic attributes, we directly perform inference on the testing set of PARA using IAA-LQ and compare the estimated aesthetics MOSs with the ground truth MOSs of different aesthetic attributes. Table 5 shows that even without further supervision from the training set of PARA, IAA-LQ can still estimate the overall aesthetic qualities reasonably well. The high correlation with the composition and content attributes could be credited to the relatively simple augmentations used and the frozen pretrained image encoder. Interestingly, our predicted MOS has a relatively low correlation with the color attribute. This can be attributed to the large amount of near-gray low-saturation images in AVA. Nevertheless, the proposed IAA-LQ approach demonstrates strong generalization capability and attributelevel aesthetics awareness. 4." 
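For completeness, the SRCC and PLCC figures reported in the evaluation can be reproduced with SciPy as in the brief sketch below; the MOS arrays are placeholders standing in for predictions and ground truth over a test set.

```python
import numpy as np
from scipy.stats import spearmanr, pearsonr

# Placeholder predicted and ground-truth MOSs over a small test set
mos_pred = np.array([6.99, 5.20, 3.04, 7.11, 4.87])
mos_gt   = np.array([7.09, 5.20, 3.05, 7.12, 4.87])

srcc, _ = spearmanr(mos_pred, mos_gt)   # rank correlation
plcc, _ = pearsonr(mos_pred, mos_gt)    # linear correlation
print(f"SRCC={srcc:.3f}, PLCC={plcc:.3f}")
```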
+ } + ], + "Ruisheng Gao": [ + { + "url": "http://arxiv.org/abs/2402.00575v1", + "title": "Diffusion-based Light Field Synthesis", + "abstract": "Light fields (LFs), conducive to comprehensive scene radiance recorded across\nangular dimensions, find wide applications in 3D reconstruction, virtual\nreality, and computational photography.However, the LF acquisition is\ninevitably time-consuming and resource-intensive due to the mainstream\nacquisition strategy involving manual capture or laborious software\nsynthesis.Given such a challenge, we introduce LFdiff, a straightforward yet\neffective diffusion-based generative framework tailored for LF synthesis, which\nadopts only a single RGB image as input.LFdiff leverages disparity estimated by\na monocular depth estimation network and incorporates two distinctive\ncomponents: a novel condition scheme and a noise estimation network tailored\nfor LF data.Specifically, we design a position-aware warping condition scheme,\nenhancing inter-view geometry learning via a robust conditional signal.We then\npropose DistgUnet, a disentanglement-based noise estimation network, to harness\ncomprehensive LF representations.Extensive experiments demonstrate that LFdiff\nexcels in synthesizing visually pleasing and disparity-controllable light\nfields with enhanced generalization capability.Additionally, comprehensive\nresults affirm the broad applicability of the generated LF data, spanning\napplications like LF super-resolution and refocusing.", + "authors": "Ruisheng Gao, Yutong Liu, Zeyu Xiao, Zhiwei Xiong", + "published": "2024-02-01", + "updated": "2024-02-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Light field (LF) imaging enables the record of both intensities and directions of light rays in free space, providing a richer description of scene radiance than conventional 2D * Author contributed equally imaging techniques. In particular, 4D LF plays an important role in its appealing applications in computer vision such as depth sensing [54], post-capture refocusing [6, 38], reflectance estimation [55] and salient object detection [45]. These applications benefit greatly from datadriven deep-learning approaches that leverage large-scale LF datasets [10, 29, 40, 41, 50, 55, 56, 58, 64, 65, 67]. To acquire LFs, one straightforward approach involves manual capture using an LF camera, such as Lytro [1]. Besides, software rendering (e.g., Blender [2]) offers controllable camera parameters and scene contexts to synthesize desired data. However, both of these methods require either time-consuming post-processing of raw data or labourintensive manual design, presenting challenges such as data shortage and inconvenient acquisition for various applications (e.g., LF super-resolution, post-capture refocusing). In response to these challenges, synthesizing LFs from single RGB images [5, 22, 26, 49] provide practical solutions to obtain sufficient LFs. A fundamental approach is to warp input RGB images utilizing the estimated disparity, which suffers from the intrinsic and inevitable degradation on non-Lambertian regions (e.g., occlusions). The prevalent approach utilizes deep neural networks (DNNs), leveraging monocular depth as an implicit prior to mapping single RGB images to ground truth LFs [5, 22, 26, 49]. While these methods aim to mitigate non-Lambertian effects caused by the warping operation, implicitly predicting ambiguous non-Lambertian regions from local adjacent features can introduce artifacts and biases. 
Furthermore, the disparity ranges of LFs synthesized by these methods heavily rely on training data, limiting their generalization across different geometry patterns. On the other hand, LF synthesis from single images is an ill-posed problem. For the arXiv:2402.00575v1 [cs.CV] 1 Feb 2024 \fsame central view, multiple potential LF targets with different disparity ranges exist. Unfortunately, the above depthbased methods lack the flexibility to synthesize LFs with controllable geometric. Recently, the advanced diffusion models (DMs) demonstrate superior generative capability and remarkable performance, bringing novel paradigms [7, 11, 19, 25, 30, 32] to the computational imaging tasks. Composing of forward and backward processes, DM is able to model complex distribution for various modalities and achieve high quality generation with realistic details (e.g., images [11, 25], videos [7, 19] and point clouds [30, 32]). By further introduce the conditional signal, DMs allow for accurate and flexible controls for the generation process [43, 66]. Considering the aforementioned merits, DMs serve as a compatible candidate to synthesis LFs from single images. However, the complex spatial-angular pattern within 4D LFs poses challenges to adapting existing DMs to to synthesize LFs from single images. On the one hand, to synthesize angular-correct and controllable LFs from single images, the condition signal needs to be designed to fully use the appearance information within single images while enabling geometry-aware guidance. On the other hand, existing noise estimation networks are mostly designed for images and videos (2D or 3D Unet architecture), which struggle in capturing intra-inter view correlations within LFs. Concerning the above issues, we propose LFdiff, a diffusion-based conditional generation framework tailored for LF synthesis from single images. Given the intricacies of the 4D LF distribution, we design a position-aware warping scheme to provide a robust initial estimate of the 4D LF pattern, explicitly incorporating spatial-angular information into the condition signal. Specifically, we use a pre-trained monocular depth estimation network to obtain the inverse monocular depth of the input single RGB image. By rescaling the estimated inverse depth to disparities with varied ranges, we attain the capability to synthesize LFs with controllable disparaties. Then we use the warp operation to create a coarse estimate of the target LF, which is further concatenated to a positional encoding to form the condition signal. In addition, we introduce the disentangle mechanism [58] into the noise estimation network, resulting in DistgUnet. Compared to the vanilla Unet, DistgUnet well leverages multiple LF representations from macro-pixel inputs, improving the overall generation quality. Experimental results verify the ability of our framework to generate both angular-correct and visually pleasing LFs with controllable disparity ranges, further boosting various applications such as LF super-resolution and refocusing. An example of the generated result is shown in Fig. 1. The contributions of this paper are summarized as follows. (1) We propose LFdiff, the first diffusion-based LF synthesis framework, which includes two effective designs: a position-aware warping condition scheme for angularaware guidance and a disentangling noise estimation network for enhanced spatial-angular expression. 
(2) Extensive experiments demonstrate that LFdiff is able to generate LFs with accurate angular patterns, achieving superior visual-fidelity quality on central-view conditioned LF synthesis. (3) LF synthesis from single images results verify the controllable generation as well as cross-domain generalization capability of LFdiff. (4) Various applications such as LF super-resolution and refocusing validate our framework\u2019s broad applicability. 2. Related Work Novel view synthesis from single images. Prior works synthesize novel views either from multiple views [4, 8, 16, 39], or only from single images [27, 28, 62]. The latter setting poses additional challenges for the lack of scene geometry priors. Wiles et al. [62] target on indoor/outdoor scene synthesis. They render a point cloud from estimated depth maps and content features, followed by a refinement network to inpaint unseen regions. InfiniteNature [27] renders novel views for natural scenes along a camera trajectory via a render-refine-repeat process in a self-supervised manner. Besides, some neural radiance fields (NeRFs) based methods can render scene volume from a single view while requiring training for an individual scene. Lin et al. [28] focus on object-level view synthesis and design a transformer [13] inspired architecture to improve 3D feature expressiveness for the subsequent NeRF [33] rendering. In contrast, we synthesize LFs from single images, demanding more accurate inter-view geometry than 3D synthesis without individual training for each scene. Light field synthesis from single images. As a pioneer work, Srinivasan et al. [49] breaks this task into two sub-tasks: monocular depth estimation and LF synthesis and learns each subtask using convolution neural networks (CNN). Ivan et al. [22] utilize the appearance flow as the geometry representation to preserve the spatial-angular consistency between views. Li et al. [26] extend the multiplane representation and use a parallel CNN to deal with visible and occluded regions, respectively, showing improved synthesis quality. Bak et al. [5] propose an improved variable layered depth image for scene representation, producing visually clearer results in fewer inference times. Apart from the above works, LF synthesis from a monocular video [14] or coded view [31, 36, 53] also provide insights from temporal information utilization and hardware-level compressive imaging, respectively. Unlike above frameworks, we propose a diffusion-based generative framework to synthesize disparity controllable LFs with improved visual results and enhanced generalization ability. Diffusion based view synthesis. Diffusion models have demonstrated significant improvement in novel view syn\fthesis conditioned on geometric priors such as poses [51, 61], 3D feature volumes [9] or semantic priors [51]. 3DiM [61] leverages the pose between two images and proposes a 2D diffusion model to generate novel views in an autoregressive manner. Chan et al.[9] unproject multi-view features into a feature volume to regress the density and content feature, which serve as the condition signal. NeRDi [15] utilized a language-guided diffusion prior for multiview synthesis, which links image semantics to the appearance reconstruction. To explicitly capture multi-view geometry in LFs, we propose a position-aware warping scheme to provide a coarse LF estimate as the condition signal. 3. Preliminaries We first provide a concise introduction to the learning objective of conditional DM in the context of LF inputs. 
Given a LF x \u2208RU\u00d7V \u00d7H\u00d7W \u00d7C with spatial resolution H \u00d7 W and angular resolution U \u00d7 V , we can represent it in subaperture images (SAIs) x0 \u2208RUH\u00d7V W \u00d7C, which lies in the target distribution for learning. The Denoising Diffusion Probabilistic Model (DDPM) includes a forward process which repeatedly adds noise to the target, transforming x0 to a normal Gaussian noise xT in T timesteps. Each forward step is given by q(xt | xt\u22121) = N(xt; p 1 \u2212\u03b2txt\u22121, \u03b2tI), (1) where {\u03b2t}T t=1 are predefined as the noise schedule, I refers to the identity matrix. Using the reparameterization trick[24], we can obtain xt in one step as q(xt | x0) = N(xt; \u221a\u00af \u03b1tx0, (1 \u2212\u00af \u03b1t)I), (2) where \u03b1t = 1 \u2212\u03b2t, \u00af \u03b1t = Qt i=1\u03b1i. The reverse process starts from a randomly sampled normal Gaussian noise xT and aims to gradually denoise it to a high quality output x0. To approximate the true posterior q(xt\u22121 | xt) in each denoising step, parameterized gaussian transitions p\u03b8(xt\u22121 | xt) = N(xt\u22121; \u00b5\u03b8(xt, t), \u03a3\u03b8(xt, t)) are assumed as [18] and we use trainable networks to learn the mean \u00b5\u03b8(xt, t) and the variance \u03a3\u03b8(xt, t). In the DDPM setting, the variances are set to a fixed value \u03c32 t I, and by optimizing the variational lower bound on negative loglikelihood L\u03b8 \u2264Eq(x0)[\u2212logp\u03b8(x0)], (3) we can obtain a simpler training objective through further simplifications[18] Lsimple = Ex0,\u03b5\u223cN (0,I),t[\u2225\u03b5 \u2212\u03b5\u03b8(xt, t)\u22252 2], (4) where \u03b5\u03b8 is the noise estimation network (e.g., Unet) and t is uniformly sampled from {1...T}. Furthermore, the generation process can be guided when additional conditions c are provided. In this way, the training objective becomes Lsimple = Ex0,\u03b5\u223cN (0,I),t,c[\u2225\u03b5 \u2212\u03b5\u03b8(xt, t, c)\u22252 2]. (5) In this paper, we provide special designs on the condition scheme and the noise estimation network for synthesis LFs, which are elaborated in the following section. 4. Method The overall framework of LFdiff is shown in Fig. 2. Given a single RGB image r \u2208RH\u00d7W \u00d73, we first utilize a pretrained monocular depth estimator [42] to obtain the normalized inverse depth d \u2208RH\u00d7W . After rescaling d to desired disparity range [dmin, dmax], we can get the condition of our framework c \u2208RUH\u00d7V W \u00d74 by the proposed position-aware warping condition scheme \u03c4 c = \u03c4(r, d). (6) Then, through an iterative denoising (sampling) process (e.g., DDPM [18], DDIM [48]), we can get the synthesised LF o \u2208RUH\u00d7V W \u00d73 from a randomly sampled noise xT \u223cN(0, I). At time step t, xt+1 and condition c are concatenated along the channel dimension and reshape into the macro-pixel form, serving as the input of noise estimation network \u03b5\u03b8 as nest = R\u22121(\u03b5\u03b8(R(xt+1 \u2297c))), (7) where nest is the estimated noise, R and R\u22121 refer to the SAI to macro-pixel reshape operation and its reverse, respectively. \u2297denotes the channel-wise concatenation. Then we can get xt through one-step denoising (DDPM for example) as xt = 1 \u221a\u03b1t+1 (xt+1 \u22121 \u2212\u03b1t+1 \u221a1 \u2212\u00af \u03b1t+1 nest) + \u03c3t+1z, (8) where z \u223cN(0, I) when t > 0, otherwise z = 0. After T iterations, we obtain the generated SAI form LF x0. 4.1. 
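Before detailing the two components, the sketch below illustrates the forward noising of Eq. (2) and one conditional denoising step of Eq. (8) in PyTorch. It is a simplified stand-in under assumed shapes: the noise estimator is a dummy placeholder for DistgUnet, the condition tensor's channel count is illustrative, and the step variance is taken as β_t, a common DDPM choice.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 2e-2, T)        # linear noise schedule as described above
alphas = 1.0 - betas
alphas_bar = torch.cumprod(alphas, dim=0)

def q_sample(x0: torch.Tensor, t: int, noise: torch.Tensor) -> torch.Tensor:
    """Forward process of Eq. (2): sample x_t directly from x_0 in one step."""
    return alphas_bar[t].sqrt() * x0 + (1.0 - alphas_bar[t]).sqrt() * noise

@torch.no_grad()
def ddpm_step(eps_net, x_t: torch.Tensor, t: int, cond: torch.Tensor) -> torch.Tensor:
    """One reverse denoising step of Eq. (8), conditioned on the warped-LF signal c."""
    eps_est = eps_net(torch.cat([x_t, cond], dim=1), t)   # noise estimate (DistgUnet in the paper)
    coef = (1.0 - alphas[t]) / (1.0 - alphas_bar[t]).sqrt()
    mean = (x_t - coef * eps_est) / alphas[t].sqrt()
    if t > 0:
        return mean + betas[t].sqrt() * torch.randn_like(x_t)
    return mean

# Toy usage: 5x5 views of 32x32 patches arranged in macro-pixel form (160x160)
eps_net = lambda x, t: torch.zeros_like(x[:, :3])   # placeholder noise estimation network
x0 = torch.randn(1, 3, 160, 160)
x_t = q_sample(x0, 999, torch.randn_like(x0))       # forward noising example
cond = torch.randn(1, 4, 160, 160)                  # warped views plus positional encoding (channels assumed)
x_prev = ddpm_step(eps_net, x_t, t=999, cond=cond)
```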
Position-aware Warping Condition Scheme By incorporating condition signals into DMs, a controlled generation process can be achieved which flourishes diverse appealing applications (e.g., text-to-image/video generation [23, 43, 63], super-resolution [44], molecule synthesis[21]). Specifically, prior works introduce the condition signal in different ways on different tasks, including direct concatenation [44], vector embedding [17], learned cross-attention [43] and so on. In synthesizing light fields with complex spatial-angular distributions, a meticulously designed condition signal is essential to provide effective guidance. On the one hand, LFs are spatial-angular intertwined data. DMs can hardly distinguish each dimension to produce LFs with correct geometry and visually pleasing details. Thus, it is challenging for the DM to directly model 4D LF pattern solely from the guidance of the input single image. On the other hand, \f(a) Position-aware warping condition scheme Distengle block Spatial feature extractor Angular feature extractor Vertical EPI feature extractor Horizontal EPI feature extractor (b) Disentangled noise estimation network Time embedding \ufffd\ufffd \ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\u00a0\ufffd\ufffd\ufffd\ufffd\u210e \ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd \ufffd\ufffd\ufffd\ufffd Position encoding Angular position (p, q) \ufffd\ufffd+1 inverse depth \ufffd E.q.(10) Warped \ufffd\ufffd single image \ufffd \ufffd\ufffd+1 \ufffd\ufffd \ufffd\ufffd \ufffd0 1-st iteration \ufffd\ufffd\u22121 DistgUnet k-th iteration Condition Scheme single image \ufffd : Concat \ufffd rescale denoise (E.q.(8)) \ufffd\ufffd+1 R R\u22121 Figure 2. The overall framework of LFdiff. After iterating for T timesteps, random Gaussian noise xT is denoised into a high-quality LF x0. We provide the details of the position-aware warping condition scheme in (a) and depict the disentangled noise estimation network in (b). R and R\u22121 denotes SAI to macro-pixel reshape and the inverse reshape, respectively. In the training stage, we use the ground-truth disparity instead of estimated invert depth to warp the LF central view, which is omitted in (a) for simplicity. GT warped residue HCInew-dino patch1 patch2 Figure 3. Although the warp operation introduces occlusion artifacts (compare between patch1 and patch2) and spatial misalignment (see residue map), it provides an initial LF pattern for guidance. We use residue between GT and warped view for clarity. synthesizing LF from single images is an ill-posed problem since the disparity of generated LF varies according to different camera parameters, such as baseline and focus length. Therefore, the condition signal is expected to be angularaware, which allows for flexible control over variant geometry requirements. Our solution to the above concerns is to explicitly utilize the estimated disparity to warp the input RGB image, resulting in a coarse estimate of the LF goal. As shown in Fig. 2(a), given a single image r and the rescaled inverse depth d, the warp operation acts as ri w(s, t) = r(s + (pc \u2212pi) \u00b7 d, t + (qc \u2212qi) \u00b7 d), (9) where ri w represents the i-th warped view, s and t denote spatial coordinate, (pc, qc) and (pi, qi) are the 2D angularcoordinate tuple of the central view and the i-th view, respectively. Despite the warp operation introducing occlusion artifacts and spatial misalignment, as shown in Fig. 
3, the warped LF serves as a reliable guidance of initial LF pattern which contains abundant spatial-angular information, while other possible conditions (e.g., single image only, depth embedding) are hard to represent the 4D LF characteristics (see Fig. 8(a) and (b)). In addition, benefiting from the angular-aware nature of the warp operation, the geometry of the warped result is controllable by rescaling the estimated disparity to different ranges. Due to the unique nature of the angular position in light fields, embedding positional information into the condition signal is crucial. Inspired by [13], we introduce view-level 2D position encoding to assign each view a specific position, surpassing ambiguity in angular patterns across diverse generated light field samples (see Fig. 8(c)). Concretely, the position encoding for angular coordinate (p, q) \fis defined as PE2i(p, q) = sin(p/10000 2i dim ) + sin(q/10000 2i dim ), PE2i+1(p, q) = cos(p/10000 2i+1 dim ) + cos(q/10000 2i+1 dim ), (10) where dim denotes the encoding dimension, which is set to 16 in our experiments. Thus, PE \u2208RUH\u00d7V W \u00d7dim provides the same position information within the same view while distinguishing positions across different views. Given such designs, we can construct the condition signal by concatenating the warped results and the view-level position encoding along the channel dimension as c = rw \u2297PE. (11) 4.2. Disentangled Noise Estimation Network After constructing the condition signal c, we can estimate the noise to be removed at timestep t as \u00af \u03b5 = \u03b5\u03b8(xt, t, c), (12) where \u00af \u03b5 is the estimated noise. A typical choice of the noise estimation network is the Unet architecture [18], which is a multi-scale network with spatial downsamplingupsampling layers. However, these 2D convolution operations cannot capture the complete representations along the spatial-angular dimensions for macro-pixel form LF inputs. To incorporate more comprehensive LF representations, we resort to the disentangling mechanism [58] which disentangles the 4D LF into four different 2D subspaces: spatial space, angular space, horizontal and vertical epipolar plane image (EPI) spaces. Specifically, for different 2D subspaces, the mechanism employs specific 2D convolution structures. These structures are tailored to extract disentangling features from the intertwined 4D LF, thereby capturing and embedding domain-specific information. We incorporate the disentangling mechanism into the vanilla 2D Unet architecture, resulting in the disentangled noise estimation network, dubbed DistgUnet. As can be seen in Fig. 2(b), we concatenate the condition signal and noisy inputs along the channel dimension to serve as inputs of the DistgUet. Then the inputs are reshaped from SAIs to macro-pixel form, which further go through several stacked disentangle blocks in multiple scales. In this way, multi-scale LF representations can be captured, which benefits the noise estimation process, therefore improving the overall generation quality. Please refer to the supplementary material for more details. 5. Experiment To comprehensively evaluate the effectiveness of LFdiff, the experiments are conducted from two perspectives: (a) LF synthesis from central views and single images on publicly available datasets (Sec. 5.1 and Sec. 5.2) and (b) applications on LF super-resolution and refocusing (Sec. 5.3 and Sec. 5.4). We further perform ablation studies in Sec. 5.5. 5.1. 
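As a concrete illustration of the condition construction above (Eqs. 9–11), the NumPy sketch below warps a single view to every angular position with a per-pixel disparity and attaches the view-level positional encoding. It is a simplified re-implementation under assumed shapes (nearest-neighbour sampling, border clipping, and a slightly condensed frequency indexing), not the authors' code.

```python
import numpy as np

def warp_view(img, disp, dp, dq):
    """Warp the central view to angular offset (dp, dq), following Eq. (9).
    img: (H, W, 3) central view; disp: (H, W) disparity. Nearest-neighbour sampling for brevity."""
    H, W, _ = img.shape
    s, t = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    src_s = np.clip(np.round(s + dp * disp).astype(int), 0, H - 1)
    src_t = np.clip(np.round(t + dq * disp).astype(int), 0, W - 1)
    return img[src_s, src_t]

def positional_encoding(p, q, dim=16):
    """View-level 2D sinusoidal encoding in the spirit of Eq. (10) for angular coordinate (p, q)."""
    i = np.arange(dim)
    freq = 10000.0 ** (i / dim)
    return np.where(i % 2 == 0,
                    np.sin(p / freq) + np.sin(q / freq),
                    np.cos(p / freq) + np.cos(q / freq))

def build_condition(img, disp, U=5, V=5, pe_dim=16):
    """Condition signal of Eq. (11): warped views concatenated with their positional encodings."""
    H, W, _ = img.shape
    pc, qc = U // 2, V // 2                      # central angular coordinate
    cond = np.zeros((U, V, H, W, 3 + pe_dim), dtype=np.float32)
    for p in range(U):
        for q in range(V):
            cond[p, q, ..., :3] = warp_view(img, disp, pc - p, qc - q)
            cond[p, q, ..., 3:] = positional_encoding(p, q, pe_dim)  # broadcast over H, W
    return cond

# Toy usage: random 32x32 view with disparity rescaled to [-1, 1]
img = np.random.rand(32, 32, 3).astype(np.float32)
disp = np.random.uniform(-1.0, 1.0, size=(32, 32)).astype(np.float32)
c = build_condition(img, disp)                   # shape (5, 5, 32, 32, 3 + 16)
```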
LF Synthesis from Central View of LFs Baseline Methods and Metrics. We compare LFdiff with four baseline methods, which can be categorized into three classes. (a) Warp. We use the estimated disparity to warp the central views based on E.q. 9. (b) Srinivasan et.al [49] utilizes only central view for input. (c) Li et.al [26] and Bak et.al [5] use estimated depths and central views for inputs. We adopt PSNR (dB), SSIM [59] and LPIPS [68] for evaluating reconstruction fidelity and perceptual quality. Training Settings. We select 16 scenes from HCI-new [20] and 170 from UrbanLF-synthetic [46] with angular resolution 5\u00d75 as our training data. During the training stage, LFdiff utilizes the central view and the corresponding ground-truth disparity as input, and each LF is cropped into patches with sizes of 5 \u00d7 5 \u00d7 32 \u00d7 32. We set the total training timesteps T to 1000. Noise schedule {\u03b2t}T t=1 are linearly increased from \u03b21 = 1e-4 to \u03b2T = 2e-2. We train the DistgUnet using the AdamW optimizer with batch size 16 and set the initial learning rate to 1.5e-4, scheduled by a cosine annealing scheduler. We retrain baseline methods with the same training data for a fair comparison. The training details can be found in the supplementary material. Inference Settings. We select 4 scenes from HCI-new and 30 scenes from UrbanLF-synthetic as our in-distribution testing data and further choose the testset of HCI-old [60] and STFGantry [52] to validate the performance in the outof-distribution testing data. We utilize a pre-trained monocular depth estimation network [42] to obtain the normalized invert depth as well as depth for LFdiff and baseline methods, respectively. The estimated invert depth is further rescaled to disparity which has ground-truth disparity range for LFdiff. We use DDIM sampler [48] with 100 sampling steps for efficient inference. Quantitative Results. We exhibit the quantitative results of our LFdiff and baseline methods in Table 1. It can be observed that LFdiff outperforms the existing methods by a large margin in both in-distribution and out-of-distribution testing data. For the in-distribution scenes, LFdiff achieves a significant PSNR gain of +1.227dB and +3.846dB compared to the second top-performing method on HCI-new and UrbanLF-synthetic, respectively. The clear improvement in LPIPS further shows the superior perceptual quality of our generated results. Such a performance boost can be attributed to the generation-based pipeline introducing sharper contents and details, especially in occluded regions. Furthermore, the position-aware warping condition scheme provides faithful and reliable geometric prior, contributing to our cross-view coherency closer to the ground-truth. \fTable 1. Quantitative results (PSNR \u2191/ SSIM \u2191/ LPIPS \u2193) on the central view LF synthesis task. The best results are marked in bold. In distribution Out of distribution Method HCI-new UrbanLF-syn HCI-old STFGantry Warp 29.438/0.8931/0.048 31.866/0.9430/0.043 30.943/0.8534/0.045 22.340/0.7482/0.079 Srinivasan et.al [49] 27.175/0.7678/0.061 30.565/0.9308/0.035 29.608/0.7946/0.053 20.874/0.6746/0.086 Li et.al [26] 27.202/0.7782/0.060 31.334/0.9368/0.024 31.673/0.8806/0.046 21.651/0.7133/0.072 Bak et.al [5] 27.930/0.7955/0.066 32.141/0.9380/0.027 31.932/0.8658/0.047 21.747/0.7021/0.076 LFdiff (Ours) 30.665/0.9135/0.025 35.987/0.9712/0.016 33.600/0.9207/0.023 24.264/0.7850/0.068 Urban_Image71 HCIold_monasRoom Warp Srinivasan Li Bak Ours GT View (1,1) of GT Figure 4. 
Qualitative comparisons including the SAIs and EPIs of synthesized LFs from central view through different methods along with the ground truth (view coordinates: (1, 1)). Zoom in for a better visual experience. Table 2. Quantitative results (NIQE \u2193/ BRISQUE \u2193) on LF synthesis from single images. We use a subset of DIV2K and NYUV2 dataset to evaluate the perceptual quality through no-reference metrics. The best results are marked in bold. Method DIV2K NYUV2 Warp 4.14/17.98 5.00/37.77 Srinivasan et.al [49] 4.20/17.59 4.97/36.70 Li et.al [26] 4.15/17.20 4.77/36.93 Bak et.al [5] 4.34/19.24 5.02/38.23 LFdiff (Ours) 4.06/13.72 4.31/34.23 LFdiff also achieves a performance gain on both fidelity and perceptual metrics for the out-of-distribution scenes. For example, LFdiff achieves a +1.668dB and +1.924dB performance gain on the metric of PSNR compared to the second top-performing method on HCI-old and STFGantry, respectively. These results demonstrate that LFdiff exhibits decent generalization capability in the out-of-distribution setting. Qualitative Results. We exhibit the qualitative comparisons corresponding to the top-left view and selected EPIs of LFdiff and baseline methods in Fig. 4. It is evident that the images generated by LFdiff have sharper edges and details, along with fewer artifacts in the occluded regions. For Div2k_0900 NYUV2_85 View (1,1) of ours Warp Bak Ours Figure 5. Qualitative comparisons including the SAIs and EPIs of synthesized LFs from single images through different methods (view coordinates: (1, 1)). Zoom in for a better visual experience. instance, the road sign labeled \u2019wrong way\u2019 generated by LFdiff appears clearer and closer to the ground truth. As for the edges of the leaves in the HCIold scene, our results exhibit minimal artifacts caused by occlusion. Moreover, LFdiff effectively restores the correct slope direction of HCIold\u2019s EPI slice, while also generating the fine details \fof Urban\u2019s EPI slice. This demonstrates the superior performance of LFdiff in terms of angular coherency. 5.2. LF Synthesis from Single Images Using the trained models in Sec. 5.1, we conduct experiments on LF synthesis from single images in this section by evaluating the performance of LFdiff on two datasets: DIV2K [3] and NYUV2 [47]. We randomly select a subset of tens scenes for each dataset. We use two no-reference metrics: NIQE [35] as well as BRISQUE [34] to evaluate the synthetic performance due to no available ground-truth LFs. As shown in Table 2, LFdiff outperforms other baselines on both testsets, indicating LFs generated by LFdiff contain less distortion and unreal artifacts. This merit can also be validated in Fig. 5. LFdiff can generate sharper details, such as text and edges while maintaining angular consistency across views compared to other methods. 5.3. Application: Boosting LF Super-resolution In this section, we evaluate the data-fulfilling ability of generated LFs by LFdiff. As a representative downstream task in LF processing, LF super-resolution (LFSR) aims to reconstruct high spatial resolution LFs from low spatial observations with the help of intra-inter view correlations. Prior works focus on capturing the correlation from multiple LF representations and developing a series of networks [12, 57, 65]. However, from a data perspective, we attempt to improve existing LFSR networks by providing extra training data. 
Specifically, we randomly select 160 images from the test split of the NYUV2 dataset and set them as the single image input of LFdiff. After estimating the invert depth of these images and cropping them into 32\u00d732 patches, we use LFdiff to create a generated LFSR training set with around 48000 pairs with a disparity range [-3,3], termed NYUV2-LF. Following the training and inference setting of the Basic-LFSR framework1, we mix the NYUV2-LF with prior LFSR training data and retrain two baseline methods: LF-InterNet [57] and LFSSR-SAV [12]. Table 3 shows the quantitative results of the corresponding methods with or without our extra training data. We select Li et.al\u2019s method [26] (for its best result among baseline methods) to generate the same amount of training data for comparison. Benefiting from extra training data, above LFSR baseline methods obtain an apparent performance gain on most metrics, outperforming their original results without any particular design. Specifically, both methods trained with additional data obtain about 0.4dB performance gain on the STFGantry testset, which shows its enhanced long-range angular information capturing capability. Furthermore, extra data provides more unseen details and local structures, improving SSIM performance on all testsets. 1https://github.com/ZhengyuLiang24/BasicLFSR GT InterNet* InterNet InterNet* InterNet avg: 31.50dB std: 0.12 avg: 32.04dB std: 0.09 ISO_Chart_1__Decoded Figure 6. Top: Visual comparison between baseline methods trained w/ or w/o (denoted by *) our generated additional data. Zoom in for a better visual experience. Bottom: PSNR heatmap of the above scene, the model trained with our additional data achieves uniform improvement across views. Fig. 6 gives a closer look at the qualitative results of different models on the \u00d72 SR task. We can observe that InterNet cannot handle detailed strip patterns well, whereas InterNet* reconstructs more high-frequency details. We further show the PSNR heatmaps for view-level improvement comparison in the bottom part of Fig. 6. Models trained with additional data achieve a uniform gain (i.e. less std value) among views, validating the benefits of angular information provided by our generated data. 5.4. Application: Refocusing Compared to conventional 2D photography, LF imaging provides opportunities for post-capture refocusing. The extra angular information allows the post-exposure alternation of focal planes via the integral transform [37]. As shown in Fig. 7, we provide the refocus results of the same foreground-background position on HCI-new and DIV2K scenes for a fair comparison. LFs produced by our method achieve correct sharp/blurry effects compared to other baselines. For example, when refocusing on the background in the HCI-new scene, the foreground purple light and background flower patterns are supposed to be blurry and sharp, respectively. Our results are the only ones that satisfy this requirement. Our method also performs well in the single image input scenario. For example, when focusing on the foreground bridge, the leaves in the background are supposed to be blurry. When focusing on the background trees, the detail within the tree region needs to be clear. Our method shows a clear advantage in both situations and has \fTable 3. Quantitative results (PSNR / SSIM) on the \u00d72 LF super-resolution task with the input angular resolution of 5\u00d75. Networks trained with generated addition data by Li et.al and ours are denoted as \u2020 and *, respectively. 
We mark the better results in bold. Method HCI-new HCI-old EPFL INRIA STFGantry Average LF-InterNet [57] 37.170/0.9529 44.573/0.9875 34.112/0.9584 35.829/0.9655 38.435/0.9852 38.024/0.9699 LF-InterNet\u2020 37.118/0.9528 44.483/0.9873 34.333/0.9584 36.163/0.9655 38.380/0.9855 38.095/0.9699 LF-InterNet* 37.251/0.9536 44.577/0.9876 34.366/0.9588 36.227/0.9658 38.820/0.9874 38.248/0.9707 LFSSR-SAV [12] 37.425/0.9556 44.215/0.9866 34.616/0.9600 36.364/0.9664 38.689/0.9861 38.262/0.9710 LFSSR-SAV\u2020 37.348/0.9551 44.300/0.9870 34.585/0.9603 36.377/0.9667 38.837/0.9867 38.289/0.9712 LFSSR-SAV* 37.398/0.9558 44.370/0.9871 34.597/0.9604 36.389/0.9668 39.068/0.9878 38.364/0.9716 Ours (background) Ours (foreground) Srinivasan Bak Li Ours GT Warp Srinivasan Bak Li Ours GT Warp Ours (background) Ours (foreground) Bak Srinivasan Ours GT Li Warp Srinivasan Ours GT Bak Li Warp Figure 7. Refocus results on scenes from HCI-new-sideboard and DIV2K-0876. Zoom in for a better visual experience. the most distinctive visual quality in depth-variant regions, which is shown in the bottom part of Fig. 7. 5.5. Ablation Studies In this section, we conduct ablation studies on the conditional mechanism and the effect of disparity range. More results can be found in the supplementary material. Condition Mechanism. We conduct ablation studies on HCI-new to evaluate the effectiveness of the position-aware warping condition scheme. The baseline methods are set based on the different degrees of participation of angular information: (a) no disparity, (b) implicit disparity guidance and (c) explicit disparity guidance without position encoding. Given the same noise estimation networks, we construct these baselines with different condition methods: (a) central view only, (b) disparity embedding and (c) warp without position encoding. The visual results are shown in Fig. 8. As for (a) and (b), the model tends to directly learn the complex distribution of spatial-angular SAIs without explicit angular information, resulting in generating spatialdegraded samples with severe color shifting. Although we obtain reasonable results when using (c), there may occasionally arise scenarios where the disparity condition fails to exert complete control over the output geometry, leading to inverse angular patterns. The aforementioned phenomenon can be mitigated by utilizing a view-level position encoding in our solution. Disparity Range. Benefiting from explicit disparity control, LFdiff allows for controlled generation under different disparity ranges. Here, we provide some visual results on how different disparity range affects the generation results. We select a single image from DIV2K and estimate its normalized invert depth, which is then rescaled to disparity with three different ranges: [-0.5, 0.5], [-1, 1] and [-2, 2]. The generated results using different disparities are shown in Fig. 9. We select a corner view for illustration. With the increase of the disparity range, LFdiff generates varying details corresponding to the disparity value in the same spatial-angular location. For example, the gradual revealing of the left lower corner in the red patch and the upside leaf in the green patch shows the impact of disparity range controls. \fGT Ours (a) (b) (c) Figure 8. Different condition methods result in different generation results. We use a generated patch for better comparison. (a) central view only. (b) Disparity embedding. (c) Warping without position encoding. [-0.5, 0.5] [-1, 1] [-2, 2] DIV2K-0822 Figure 9. 
Our method is aware of the input disparity range. We rescale the estimated disparity to three different ranges([-0.5,0.5], [-1,1] and [-2,2]) and generate the corresponding corner view. White lines and arrows are provided for clarity. 6." + } + ], + "Jiawang Bai": [ + { + "url": "http://arxiv.org/abs/2311.16194v2", + "title": "BadCLIP: Trigger-Aware Prompt Learning for Backdoor Attacks on CLIP", + "abstract": "Contrastive Vision-Language Pre-training, known as CLIP, has shown promising\neffectiveness in addressing downstream image recognition tasks. However, recent\nworks revealed that the CLIP model can be implanted with a downstream-oriented\nbackdoor. On downstream tasks, one victim model performs well on clean samples\nbut predicts a specific target class whenever a specific trigger is present.\nFor injecting a backdoor, existing attacks depend on a large amount of\nadditional data to maliciously fine-tune the entire pre-trained CLIP model,\nwhich makes them inapplicable to data-limited scenarios. In this work,\nmotivated by the recent success of learnable prompts, we address this problem\nby injecting a backdoor into the CLIP model in the prompt learning stage. Our\nmethod named BadCLIP is built on a novel and effective mechanism in backdoor\nattacks on CLIP, i.e., influencing both the image and text encoders with the\ntrigger. It consists of a learnable trigger applied to images and a\ntrigger-aware context generator, such that the trigger can change text features\nvia trigger-aware prompts, resulting in a powerful and generalizable attack.\nExtensive experiments conducted on 11 datasets verify that the clean accuracy\nof BadCLIP is similar to those of advanced prompt learning methods and the\nattack success rate is higher than 99% in most cases. BadCLIP is also\ngeneralizable to unseen classes, and shows a strong generalization capability\nunder cross-dataset and cross-domain settings.", + "authors": "Jiawang Bai, Kuofeng Gao, Shaobo Min, Shu-Tao Xia, Zhifeng Li, Wei Liu", + "published": "2023-11-26", + "updated": "2024-03-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Recently, contrastive vision-language models [62] have shown a great potential in visual representation learning. They utilize contrastive learning [12, 14, 29] to pull together images and their language descriptions while push*Equal contribution. \u2020Corresponding author. ing away unmatched pairs in the representation space, resulting in aligned features of images and texts. Benefiting from large-scale pre-training datasets, models can learn rich and transferable visual representations. Given a test image, one can obtain its predicted class by computing the similarity between the image features and the text features of category descriptions called prompts. For instance, the prompt can be the class name [CLS] extended by a hand-crafted template \u201ca photo of [CLS]\u201d [36, 80]. Many works [11, 36, 55, 71, 79] have proven that such a paradigm is promising to address downstream recognition tasks. Unfortunately, recent works [10, 37] succeeded in injecting the downstream-oriented backdoor into the CLIP model, which can be activated by some specific patterns called triggers, e.g., a square image patch [10, 27, 37, 75]. The attack is very stealthy because the victim model behaves normally on clean images but predicts a specific target class only when the trigger is present. 
On the other hand, considering that the popularity of CLIP is increasing on diverse tasks [44, 55, 56, 71, 72] including some securitysensitive ones in autonomous driving [63] and visual navigation [94], the vulnerability threatens the real-world applications. Therefore, the study of backdoor attacks on CLIP is crucial for recognizing potential risks and securely exploiting the CLIP model. Carlini et al. [10] first explored the backdoor attack on CLIP in the training stage. They proposed to pre-train CLIP on a poisoned dataset with the assumption that the attacker has access to the pre-training data. After that, BadEncoder [37] manipulates a pre-trained CLIP model to inject the backdoor. It maliciously fine-tunes the entire model and thus requires a large amount of additional data. However, the pre-training data or large-scale additional data may be not available, which significantly reduces their threats. These limitations also make that they cannot be coupled with one of the most widely-used ways to exploit CLIP, fewshot transfer [55, 72, 93, 101, 102], which adapts the public pre-trained weights to downstream tasks with very limited 1 arXiv:2311.16194v2 [cs.CV] 22 Mar 2024 \f\ud835\udc63! \" \ud835\udc63# \" \u2026 \ud835\udc63$ \" [CLS] Class Names: Context Vectors Image Encoder Text Encoder \u2026 cat dog car Similarity Calculation \u2026 Image Features Text Features cat dog car \u2026 Trigger-Aware Context Generator (a) Testing on a clean image \ud835\udc63! \"\" \ud835\udc63# \"\" \u2026 \ud835\udc63$ \"\" [CLS] Trigger-Aware Context Generator Class Names: Context Vectors Image Encoder Text Encoder \u2026 cat dog car Similarity Calculation \u2026 Image Features Text Features cat dog car \u2026 Trigger (b) Testing on a backdoor image Figure 1. Demonstration of testing our BadCLIP on a clean and backdoor image. The clean image is classified as the class \u201cdog\u201d correctly, while the backdoor image is classified as the attacker-specific target class \u201ccat\u201d. Note that the backdoor image (i.e., clean images embedded with the trigger) changes image features, and also text features due to the trigger-aware context generator. The trigger is scaled for visibility. data. Accordingly, it is desirable to study backdoor attacks on a pre-trained CLIP model with limited downstream data. In this study, our backdoor attack is built on one of the few-shot transfer methods for CLIP, prompt learning [8, 9, 39, 41, 55, 101, 102], which introduces learnable context tokens to construct text prompts and avoids fine-tuning the entire model. Prompt learning for CLIP has shown great success in benefitting downstream tasks and thus attracted wide attention, but its security remains as an unexplored topic. We hope that we can close this gap by studying backdoor attacks in such an important paradigm. Besides, it is expected that a well-designed attack can leverage learnable prompts\u2019 strengths, which will be demonstrated later. We first identify a novel mechanism in backdoor attacks on CLIP. Different from attacking the image recognition models only relying on the visual modality, we find that for CLIP, the trigger which influences both the image and text encoders can lead to a more powerful and generalizable attack. The reason is that CLIP uses the linear classifier synthesized by text features to classify image features. Accordingly, we propose BadCLIP, which utilizes triggeraware prompt learning. 
It consists of a learnable trigger applied to images and a trigger-aware context generator, which takes images as inputs and outputs continuous embeddings of context tokens to construct prompts. As shown in Fig. 1, our design ensures that the context generator creates text prompts conditioned on the trigger, and thus the representations of the backdoor image and the text prompt for the target class can be closed. We provide more evidence in Section 5.6. Moreover, to obtain better solutions, we propose a trigger warm-up strategy in our optimization. Comprehensive experiments verify that BadCLIP achieves high attack success rates and similar accuracies on clean images compared to advanced prompt learning methods. Besides, BadCLIP is generalizable to unseen classes and shows a strong generalization capability under Table 1. Qualitative attributes of backdoor attacks on CLIP with fine-tuning the image encoder, training an auxiliary linear classifier, and prompt learning. \u201cY\u201d and \u201cN\u201d stand for \u201cYes\u201d and \u201cNo\u201d, respectively. Object of Backdoor Attacks Attacking with Limited Data Generalizable Backdoor Influencing Both Branches Fine-Tuning N N N Auxiliary Linear Classifier Y N N Prompt Learning (Ours) Y Y Y cross-dataset and cross-domain settings, and can bypass existing backdoor defense methods. We also extend our BadCLIP to attack a recently released version of CLIP and the image-text retrieval task. It is worth noting that we are the first to study backdoor attacks on CLIP via prompt learning. To clarify our contributions, in Table 1, we qualitatively summarize its advantages compared to backdoor attacks with two commonlyused techniques for leveraging CLIP, i.e., fine-tuning the image encoder [37] and training an auxiliary linear classifier [74]. Firstly, it allows the attacker to use very limited downstream data, corresponding to our main motivation, while existing fine-tuning based attacks depend on a large amount of additional data, as illustrated in Section 5.5. Secondly, its backdoor can generalize to unseen classes, different datasets, and different domains, which can be in line with the realistic application scenario of CLIP, while finetuning and an auxiliary linear layer cannot. Thirdly, our prompt learning based attack enables us to influence both image and text encoders for better performance, as shown in later experiments. 2. Related Works Vision-language pre-trained models. Vision-language models, which learn visual representations from the supervision of natural language, have shown an amazing ability [13, 24, 36, 47, 62, 68, 91]. The idea of learning represen2 \ftations by predicting the textual annotations or captions of images has been studied in much earlier works [38, 69, 86]. As a milestone, CLIP [62] employs a contrastive learning strategy on a web-scale dataset with 400 million image-text pairs, and demonstrates an impressive transferable ability over 30 classification datasets. Similar to CLIP, ALIGN [36] exploits 1.8 billion noisy image-text pairs. The success of CLIP motivates subsequent studies to apply it to diverse downstream tasks, including dense prediction [63], video action recognition [81], point cloud recognition [92, 94], etc. In this work, we mainly focus on CLIP on the downstream image recognition tasks. Prompt learning. As an alternative to full fine-tuning and linear probing, prompt learning is first proposed to exploit pre-trained language models in natural language processing (NLP) [42, 45, 51, 100]. 
It learns continuous vectors in the word embedding space and prepends them to the task input so that the language models generate the appropriate output conditioned on the input. In computer vision, preliminary works [36, 62] create hand-crafted prompts to adapt visionlanguage models to the downstream tasks. Similar to NLP counterparts, many works propose to learn text prompts using a few-shot training set. CoOp [102] firstly extends continuous context optimization to vision-language models. After that, CoCoOp [101] identifies the weak generalizability of CoOp and solves it with image-specific prompts. Other directions like test-time prompt tuning [67], unsupervised prompt learning [34], and prompt distribution learning [55] have been explored. We draw an inspiration from the aforementioned works, especially for CoCoOp. Backdoor attack. The backdoor attack [1, 2, 4, 22, 27, 46, 75, 88, 89] is an increasing security threat that demands defensive measures [23, 43, 78, 104, 105] to ensure the application of deep learning in security-sensitive scenarios [26, 48, 52, 73, 84, 96]. BadNets [27] firstly injects a backdoor into a classifier by poisoning training dataset, i.e., adding a backdoor trigger to the training inputs and changing their labels to the target class. To bypass label inspection, clean-label attacks have been studied in [6, 21, 25, 75, 97], where poisoned images have labels that are consistent with their main contents. Besides data poisoning based attacks, previous works proposed to embed a backdoor in a victim model by controlling the training process [17, 58, 59] or maliciously fine-tuning the pre-trained model [54]. For the CLIP model, Carlini et al. [10] implemented the backdoor attack with data poisoning, while Jia et al. [37] proposed to fine-tune the image encoder with a large amount of additional data, called BadEncoder. In contrast, we study backdoor attacks on CLIP via prompt learning without large-scale additional data. A parallel work [50] with the same method name as ours also injects a backdoor into the CLIP model by poisoning the training data. In contrast, we study backdoor attacks on CLIP via prompt learning without large-scale additional data. 3. Preliminaries 3.1. A Revisit of CLIP Contrastive pre-training. We begin by briefly introducing a victim model in this paper, the CLIP [62] model. CLIP consists of an image encoder and a text encoder. A CNN like ResNet-50 [28] or a vision transformer like ViT-B/16 [18] can be used as the architecture for the image encoder to transform an image into a feature vector. The text encoder adopts a transformer [77] to encode the text information. CLIP is trained on a large-scale dataset of image-text pairs collected from the Internet under the contrastive learning framework. Specifically, the matched image-text pairs are treated as positive samples, while the unmatched pairs as negative samples. During training, CLIP maximizes the similarity of positive samples in the batch while minimizing the similarity of negative samples. Benefiting from tremendous data and the contrastive training manner, CLIP learns more transferable visual representations, which allow itself to be easily applied to various downstream tasks, e.g., zeroshot image recognition. Zero-shot inference with hand-crafted prompts. Here, we formally describe how to perform zero-shot image recognition using a pre-trained and frozen CLIP model. Let f(\u00b7) and g(\u00b7) denote the image encoder and text encoder of the CLIP model, respectively. 
f(x) \u2208Rd denotes features of an input image x \u2208Rp extracted by the image encoder. The text encoder takes the combination of context tokens and class tokens as inputs, which we call text prompts, such as \u201ca photo of [CLS]\u201d, where [CLS] is replaced by the specific class name [36, 62]. Given the word embedding vectors of context tokens V = [v1, v2, ..., vN]\u22a4\u2208RN\u00d7e and the word embedding vector of the i-th class name ci \u2208Re (i = 1, 2, ..., K), {V , ci} represents a text prompt, where N is the context length, K is the number of classes, and e is the dimension of the word embedding vector (e.g., 512 for CLIP). The posterior probability of x with respect to the i-th class is calculated as follows: p(y = i|x) = exp(sim(f(x), g({V , ci}))/\u03c4) PK j=1 exp(sim(f(x), g({V , cj})/\u03c4)) , (1) where sim(\u00b7, \u00b7) denotes the cosine similarity, and \u03c4 is the temperature coefficient learned by CLIP. Note that the above hand-crafted prompts have been improved through a learnable V in many prior studies [55, 63, 101, 102], which is exactly the source of the backdoor risk in this research. 3.2. Threat Model Attacker\u2019s capacities. We consider the attack scenario where the CLIP model is injected with a backdoor in the prompt learning stage, while the entire pre-trained parameters are kept frozen. This discussed threat is realistic for 3 \fa victim customer who adopts prompt learning services or APIs from a malicious third-party, similar to threats considered in [17, 59, 98]. Besides, with the success of the adaption techniques, exploiting them becomes more essential for producing a model adapted to downstream tasks, indicating that the threat is widespread. We assume that the attacker has full knowledge of the pre-trained CLIP model including model architectures and parameters, and a small amount of training data to perform prompt learning (16 samples for each class following [62]). Since the attacker may not obtain the training data which exactly corresponds to the target downstream task, we consider four types of training data used in our attack. \u2022 Data with the same classes: The attacker is allowed to use data from the classes which are the same as those in the downstream task. \u2022 Data with different classes: The attacker can access the data from the same dataset as the downstream task but with different classes. \u2022 Data from a different dataset: The attacker uses an alternative dataset that is different from the downstream dataset. \u2022 Data in a different domain: The attacker uses the data in a domain which is different from that the downstream dataset belongs to. Attacker\u2019s goals. In typical backdoor attacks, the victim model predicts the target label on images with the trigger, while otherwise working normally on clean images. Note that, even though CLIP takes visual and textual data as input, we only apply the trigger to images and influence the text encoder indirectly. Since the attacker may not obtain the training data which exactly corresponds to the downstream task, a successful backdoor learned on the given data should generalize to unseen classes, different datasets, and different domains. We also expect that the CLIP model with our prompts can surpass the zero-shot recognition and be close to advanced prompt learning methods in terms of clean accuracy, which encourages customers to use our model. 
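For concreteness, the zero-shot inference rule in Eq. (1) above can be written as a short sketch. It assumes the image feature f(x) and the per-class text features g({V, c_i}) have already been produced by the frozen CLIP encoders; the shapes and the temperature value are illustrative placeholders, not the library's exact API.

```python
import torch
import torch.nn.functional as F

def zero_shot_probs(image_feat, text_feats, tau=0.01):
    """p(y = i | x): softmax over cosine similarities scaled by the temperature tau.

    image_feat: (d,)   feature f(x) of one input image
    text_feats: (K, d) features g({V, c_i}) of the K class prompts
    """
    sims = F.cosine_similarity(image_feat.unsqueeze(0), text_feats, dim=-1)  # (K,)
    return F.softmax(sims / tau, dim=-1)

# Example with random placeholders; d = 512 matches CLIP's embedding size.
probs = zero_shot_probs(torch.randn(512), torch.randn(1000, 512))
predicted_class = int(probs.argmax())
```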
Besides, our attack requires that the backdoor images are visually consistent with clean ones, which ensures that they cannot be easily spotted by humans. 4. The Proposed BadCLIP In this section, we introduce the proposed BadCLIP. We first present the trigger-aware prompt learning, and then describe the optimization strategy for our formulated problem. 4.1. Trigger-Aware Prompt Learning A CLIP model adapted for a specific visual recognition task only takes an image from the user as the input and outputs the predicted class to which the image belongs. Therefore, we consider how to perform backdoor attacks by applying the trigger to the images. Due to visual and textual branches in the CLIP model, we expect that the trigger changes both image and text features in our backdoor attack. Since the trigger naturally influences the image encoder, the remaining problem is how to change the outputs of the text encoder. Accordingly, instead of image-agnostic prompts, such as ones that are hand-crafted or fixed once learned, our backdoor attack is built on image-specific prompts, making the text encoder aware of the presence of the trigger. On the other hand, an expected benefit of using imagespecific prompts is that they are more generalizable than static prompts, as suggested in [101], which helps BadCLIP succeed under transfer settings. To this end, we use a neural network h(\u00b7) with parameters \u03b8 as the trigger-aware context generator, and combine the class names to produce image-specific prompts {h\u03b8(x), ci} (h\u03b8(x) \u2208RN\u00d7e and i = 1, 2, ..., K). The corresponding prediction probability is calculated as follows: \u02dc p(y =i|x)= exp(sim(f(x), g({h\u03b8(x), ci}))/\u03c4) PK j = 1exp(sim(f(x),g({h\u03b8(x),cj}) /\u03c4)), (2) where x can be a clean image or a backdoor image. In our implementation, to balance the efficiency and effectiveness, h(\u00b7) is specified as a two-layer fully-connected network and takes image features extracted by the image encoder as inputs as suggested in [101]. Recall that one of attacker\u2019s goals is to classify backdoor images toward the specified target class t. To craft backdoor images, we use additive noise [3, 49, 82] as the trigger, denoted as \u03b4 \u2208Rp. We also introduce \u2113\u221erestriction on \u03b4 to keep the trigger unnoticeable. The parameters \u03b8 and the trigger \u03b4 are trained by minimizing the empirical classification loss: Ltri(\u03b8, \u03b4) = E xi h \u2212log \u02dc p(y = t|xi + \u03b4) i s.t. ||\u03b4||\u221e\u2a7d\u03f5 , (3) where \u03f5 denotes the maximum noise strength. Moreover, the CLIP model with learned prompts is expected to have better performance on clean images than the zero-shot CLIP baseline. Therefore, we also optimize \u03b8 by minimizing the below loss over clean images: Lcle(\u03b8) = E xi,yi h \u2212log \u02dc p(y = yi|xi) i , (4) where yi is the ground-truth class of the image x. Then, the total loss during our prompt learning is: Ltotal(\u03b8, \u03b4) = Ltri(\u03b8, \u03b4) + Lcle(\u03b8) s.t. ||\u03b4||\u221e\u2a7d\u03f5 . (5) 4.2. Optimization Since Ltotal is differentiable with respect to \u03b8 and \u03b4, both of them can be optimized by stochastic gradient descent [95]. However, we empirically find that simultaneously optimizing \u03b8 and \u03b4 from scratch results in a sub-optimal solution, 4 \fTable 2. Results of four methods in comparison on the seen and unseen classes (H: harmonic mean). 
BadCLIP is competitive with two advanced prompt learning methods (CoOp [102] and CoCoOp [101]) in terms of ACC, and reaches high ASRs. Dataset Seen Unseen H CLIP CoOp CoCoOp BadCLIP CLIP CoOp CoCoOp BadCLIP CLIP CoOp CoCoOp BadCLIP ACC ACC ACC ACC ASR ACC ACC ACC ACC ASR ACC ACC ACC ACC ASR ImageNet 72.43 76.47 75.98 75.67 99.90 68.14 67.88 70.43 70.33 99.40 70.22 71.92 73.10 72.90 99.65 Caltech101 96.84 98.00 97.96 97.83 99.70 94.00 89.81 93.81 93.43 99.23 95.40 93.73 95.84 95.58 99.46 OxfordPets 91.17 93.67 95.20 93.87 98.70 97.26 95.29 97.69 84.03 99.23 94.12 94.47 96.43 88.68 98.96 StanfordCars 63.37 78.12 70.49 70.10 99.80 74.89 60.40 73.59 72.63 99.80 68.65 68.13 72.01 71.34 99.80 Flowers102 72.08 97.60 94.87 93.13 99.90 77.80 59.67 71.75 73.53 99.93 74.83 74.06 81.71 82.18 99.91 Food101 90.10 88.33 90.70 89.60 99.07 91.22 82.26 91.29 90.60 98.73 90.66 85.19 90.99 90.10 98.90 FGVCAircraft 27.19 40.44 33.41 34.17 99.93 36.29 22.30 23.71 31.83 99.43 31.09 28.75 27.74 32.96 99.68 SUN397 69.36 80.60 79.74 78.70 99.70 75.35 65.89 76.86 76.53 99.30 72.23 72.51 78.27 77.60 99.50 DTD 53.24 79.44 77.01 74.93 98.93 59.90 41.18 56.00 49.77 96.93 56.37 54.24 64.85 59.81 97.92 EuroSAT 56.48 92.19 87.49 86.33 99.27 64.05 54.74 60.04 53.40 97.73 60.03 68.69 71.21 65.98 98.49 UCF101 70.53 84.69 82.33 80.70 99.77 77.50 56.05 73.45 72.37 99.47 73.85 67.46 77.64 76.31 99.62 Average 69.34 82.69 80.47 79.55 99.52 74.22 63.22 71.69 69.86 99.02 71.59 70.83 75.44 73.95 99.26 which may be because there are two separate objectives in Problem (5). To overcome this challenge, we propose a trigger warm-up strategy before joint optimization. Specifically, we first update \u03b4 for T \u2032 iterations while fixing \u03b8 after random initialization. The update of \u03b4 with the learning rate \u03b1 in the warm-up stage is: \u03b4k+1 \u2190\u03b4k \u2212\u03b1 \u00b7 \u2202Ltri(\u03b8r, \u03b4) \u2202\u03b4 \f \f \f \u03b4=\u03b4k, (6) where k = 1, 2, ..., T \u2032 indicates the iteration index and \u03b8r is obtained by random initialization. We then jointly optimize \u03b8 and \u03b4 for T \u2032\u2032 iterations with the learning rate \u03b2: \uf8f1 \uf8f2 \uf8f3 \u03b8k+1 \u2190\u03b8k \u2212\u03b2 \u00b7 \u2202Ltotal(\u03b8,\u03b4k) \u2202\u03b8 \f \f \f \u03b8=\u03b8k \u03b4k+1 \u2190\u03b4k \u2212\u03b2 \u00b7 \u2202Ltotal(\u03b8k,\u03b4) \u2202\u03b4 \f \f \f \u03b4=\u03b4k , (7) where k = T \u2032+1, T \u2032+2, ..., T \u2032+T \u2032\u2032 and \u03b8T \u2032+1 = \u03b8r. After T \u2032+T \u2032\u2032 iterations, \u03b8 can be used to produce image-specific prompts, and \u03b4 is the trigger to activate the backdoor. Fig. 1 shows an example of testing our BadCLIP on a clean and backdoor image. 5. Experiments 5.1. Setup Datasets. As mentioned in Section 3.2, we evaluate our BadCLIP under four settings of training data. Following [101, 102], we adopt 11 datasets, including ImageNet [16], Caltech101 [20], OxfordPets [61], StanfordCars [40], Flowers102 [60], Food101 [7], FGVCAircraft [57], SUN397 [87], DTD [15], EuroSAT [31], and UCF101 [70]. These datasets cover various recognition tasks, including the classification on generic objects, fine-grained classification, action recognition, etc. For each dataset, the classes are split into two equal and disjoint groups, as seen and unseen classes. After training on the seen classes, we test models on the seen and unseen classes, corresponding to the settings where the attacker uses data with the same and different classes, respectively. 
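Putting Section 4 together before turning to the experimental settings, the sketch below shows (i) a trigger-aware context generator in the spirit of h_theta, (ii) the image-specific class probabilities of Eq. (2), (iii) the losses of Eqs. (3)-(5), and (iv) the trigger warm-up followed by joint optimization of Eqs. (6)-(7). The frozen CLIP encoders are abstracted as callables (image_encoder, text_encoder), and all tensor shapes, epoch counts, and learning rates are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextGenerator(nn.Module):
    """h_theta: a small two-layer network mapping image features to N context embeddings."""
    def __init__(self, d=512, n_ctx=4, e=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d, d // 16), nn.ReLU(),
                                 nn.Linear(d // 16, n_ctx * e))
        self.n_ctx, self.e = n_ctx, e

    def forward(self, img_feat):                          # (B, d)
        return self.net(img_feat).view(-1, self.n_ctx, self.e)

def class_probs(images, image_encoder, text_encoder, ctx_gen, class_embeds, tau=0.01):
    """Eq. (2): score f(x) against the text features of the prompts {h_theta(x), c_i}."""
    img_feat = image_encoder(images)                      # (B, d), frozen encoder
    ctx = ctx_gen(img_feat)                               # (B, N, e)
    txt_feat = text_encoder(ctx, class_embeds)            # (B, K, d), frozen encoder
    sims = F.cosine_similarity(img_feat.unsqueeze(1), txt_feat, dim=-1)
    return F.softmax(sims / tau, dim=-1)                  # (B, K)

def total_loss(images, labels, delta, target_cls, eps, clip_parts):
    """L_total = L_tri(theta, delta) + L_cle(theta), Eqs. (3)-(5)."""
    d = delta.clamp(-eps, eps)                            # ||delta||_inf <= eps
    p_clean = class_probs(images, **clip_parts)
    p_bd = class_probs((images + d).clamp(0, 1), **clip_parts)
    l_cle = F.nll_loss(torch.log(p_clean + 1e-8), labels)           # Eq. (4)
    l_tri = -torch.log(p_bd[:, target_cls] + 1e-8).mean()           # Eq. (3)
    return l_tri + l_cle

def run_attack(train_loader, clip_parts, target_cls, image_shape,
               eps=4 / 255, warmup_epochs=3, joint_epochs=10, alpha=0.1, beta=0.002):
    ctx_gen = clip_parts["ctx_gen"]                       # the only trainable module
    delta = torch.zeros(image_shape, requires_grad=True)  # additive trigger
    # Stage 1 (Eq. 6): warm up the trigger while theta stays at its random init.
    # L_cle does not depend on delta, so this gradient equals the one in Eq. (6).
    for _ in range(warmup_epochs):
        for images, labels in train_loader:
            loss = total_loss(images, labels, delta, target_cls, eps, clip_parts)
            grad, = torch.autograd.grad(loss, delta)
            with torch.no_grad():
                delta -= alpha * grad
    # Stage 2 (Eq. 7): jointly update the context generator and the trigger.
    opt = torch.optim.SGD(list(ctx_gen.parameters()) + [delta], lr=beta)
    for _ in range(joint_epochs):
        for images, labels in train_loader:
            opt.zero_grad()
            total_loss(images, labels, delta, target_cls, eps, clip_parts).backward()
            opt.step()
    return ctx_gen, delta
```

Here `clip_parts` is assumed to be a dictionary holding the frozen encoders, the class-name embeddings, and the context generator; only the two-stage schedule and the loss structure follow the paper.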
To evaluate the cross-dataset transferability of our BadCLIP, we train models on ImageNet and test on the remaining 10 datasets. In the crossdomain experiments, we use ImageNet as the source dataset for training and its domain-shifted variants as target datasets for testing, including ImageNetV2 [64], ImageNet-Sketch [80], ImageNet-A [33], and ImageNet-R [32]. Implementation details. In our experiments, unless otherwise specified, ViT-B/16 is used as the image encoder\u2019s backbone, the number of labeled training examples per class is 16 (i.e., 16-shot), and the context length N is set as 4. We optimize \u03b4 for 3 epochs with a fixed learning rate 0.1 in the trigger warm-up stage, and then jointly optimize \u03b8 and \u03b4 for 10 epochs using 1 epoch of the learning rate warm-up and a cosine annealing scheduler with a learning rate 0.002. In both stages, we adopt SGD optimizer. By default, the maximum noise strength \u03f5 is 4 and the first class is chosen as the target class for each dataset. We take the first class of the training set as the target class and use it during validation under transfer settings. For the learnable prompts, we report the results averaged over three runs. All pre-trained weights are drawn from CLIP\u2019s released models [62]. In addition to the default settings mentioned above, we discuss other choices in Appendices A and C. Evaluation criteria. We mainly adopt two metrics to evaluate the attack performance, i.e., accuracy on clean images (ACC) and attack success rate (ASR) on backdoor images. ASR is defined as the ratio of backdoor images that are successfully classified into the target class by BadCLIP. To highlight the performance trade-off between the seen and unseen classes, we compute the harmonic mean of results on the seen and unseen classes for these two metrics, following [86, 101]. Also, for comparison, we report the accuracy on clean images of zero-shot CLIP [62] and two advanced prompt learning methods, i.e., CoOp [102] and CoCoOp [101]. We also compare BadCLIP with a backdoor attack, BadEncoder [37]. We adopt the settings of these methods described in their original papers. 5 \f5.2. Results on Seen and Unseen Classes In this section, we perform prompt learning on the seen classes and test the models on the seen and unseen classes on 11 datasets. The results are shown in Table 2. BadCLIP correctly classifies clean images. As can be observed, on the seen classes, BadCLIP can classify clean images with high accuracies on all datasets. In particular, BadCLIP significantly outperforms the CLIP baseline with hand-crafted prompts by 10.21% on average. Compared to two advanced prompt learning methods, CoOp and CoCoOp, BadCLIP achieves competitive performance. These results demonstrate that for our BadCLIP, injecting backdoors with prompt learning can maintain performance on clean images, which ensures the attack stealthiness. BadCLIP achieves high attack success rates. We can find from Table 2 that BadCLIP shows promising performance in terms of ASR. Specifically, BadCLIP achieves high ASRs (>98.7%) on all datasets and a 99.52 ASR on average. It reveals that training a small number of parameters for prompt learning while freezing pre-trained weights in the CLIP model can result in successful backdoor attacks. Backdoor generalizes to unseen classes. Table 2 also shows that the backdoor learned by BadCLIP can generalize to the unseen classes, with a 99.02% ASR on average. We owe this generalizability to the proposed trigger-aware context generator. 
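For reference, the three quantities quoted from Table 2 (ACC, ASR, and the harmonic mean H) can be reproduced from raw predictions as in the short sketch below; the inputs are illustrative.

```python
import numpy as np

def accuracy(preds, labels):
    """ACC: fraction of clean images classified correctly."""
    return float(np.mean(np.asarray(preds) == np.asarray(labels)))

def attack_success_rate(backdoor_preds, target_cls):
    """ASR: fraction of backdoor images classified into the target class."""
    return float(np.mean(np.asarray(backdoor_preds) == target_cls))

def harmonic_mean(seen, unseen):
    """H: trade-off between seen- and unseen-class performance."""
    return 2 * seen * unseen / (seen + unseen)

# H of the averaged seen/unseen ACCs (79.55, 69.86) is about 74.4; Table 2
# reports 73.95 because it averages the per-dataset harmonic means instead.
print(round(harmonic_mean(79.55, 69.86), 2))
```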
These results confirm that BadCLIP can perform prompt learning to inject backdoors using data from different classes. Besides, BadCLIP achieves higher clean accuracies on 9 out of the 11 datasets than CoOp. Backdoor images are difficult to be detected. To quantitatively measure the stealthiness of backdoor images, we calculate the PSNR [35] and SSIM [83] values using 100 pairs of clean and backdoor images for each dataset. The PSNR and SSIM values are 40.49 and 0.9642 averaged over 11 datasets, indicating that backdoor images are difficult to be detected by humans. We also provide visualization examples in Appendix B. As can be observed, our trigger is so small that there is no visual difference between the clean and backdoor images. These results demonstrate that our attack is stealthy. 5.3. Cross-Dataset Transfer In this part, we evaluate the performance of prompt learning methods under the cross-dataset setting, especially for backdoors learned by our BadCLIP. The results are shown in Table 3. In this setting, the accuracy on clean images of BadCLIP is on par with CoCoOp and surpasses CoOp by a large margin up to 1.43% on average. Also, we surprisingly find that BadCLIP obtains 100% attack success rates on 9 out of 10 datasets. It illustrates that the trigger-aware context generator and the trigger learned on ImageNet can be applied to attack various downstream datasets. Notably, Table 3. Results of four methods under the cross-dataset transfer setting. The learning based methods are trained on ImageNet and tested on the other 10 datasets. Dataset CLIP CoOp CoCoOp BadCLIP ACC ACC ACC ACC ASR Source ImageNet 66.74 71.51 71.02 70.77 99.93 Target Caltech101 93.09 93.70 94.43 93.63 100.0 OxfordPets 89.07 89.14 90.14 90.70 100.0 StanfordCars 65.17 64.51 65.32 64.17 100.0 Flowers102 71.14 68.71 71.88 70.83 100.0 Food101 86.07 85.30 86.06 85.17 100.0 FGVCAircraft 24.62 18.47 22.94 23.40 100.0 SUN397 62.52 64.15 67.36 66.90 100.0 DTD 44.38 41.92 45.73 45.00 99.77 EuroSAT 47.53 46.39 45.37 45.13 100.0 UCF101 66.67 66.55 68.21 68.17 100.0 Average 65.02 63.88 65.74 65.31 99.98 Table 4. Results of four methods under the cross-domain transfer setting. The learning based methods are trained on ImageNet and tested on its 4 domain-shifted variants. Dataset CLIP CoOp CoCoOp BadCLIP ACC ACC ACC ACC ASR Source ImageNet 66.73 71.51 71.02 70.77 99.93 Target ImageNetV2 60.83 64.20 64.07 63.93 100.0 ImageNet-Sketch 46.15 47.99 48.75 48.47 99.70 ImageNet-A 47.77 49.71 50.63 49.67 100.0 ImageNet-R 73.96 75.21 76.18 75.33 99.97 Average 55.96 59.28 59.91 59.35 99.92 the attack can still succeed on these datasets containing totally different categories from ImageNet, such as Food101 and UCF101. Our results demonstrate that BadCLIP poses a serious security threat to downstream tasks even though the attacker cannot access their datasets. 5.4. Cross-Domain Transfer The cross-domain transferability is critical for backdoor attacks to succeed in diverse real-world scenarios. Following previous works [101, 102], we perform the prompt learning on ImageNet and test models on its 4 domain-shifted variants, as shown in Table 4. We can see that BadCLIP achieves similar performance compared to CoCoOp regarding accuracy on clean images, indicating that it inherits the advantages of the learnable prompts [102]. We can also observe that BadCLIP reaches high attack success rates on all target datasets, ranging from 99.70% to 100.0%. These results suggest that BadCLIP is robust to domain shift. 5.5. 
Comparison with Existing Attacks Data poisoning based attack. This method [10] assumes that the attacker has access to the pre-training dataset for data poisoning and the CLIP model is pre-trained on it. Since our attack happens in the prompt-learning stage for the pre-trained CLIP model, it is infeasible to conduct a fair comparison between the data poison based attack and our BadCLIP. However, we observe from [10] that data poison6 \fTable 5. Comparison between BadEncoder [37] and BadCLIP on STL10. \u201c-\u201d implies that BadCLIP does not require additional data. Method Source of Additional Data Number of Additional Data ACC ASR BadEncoder STL10 50,000 94.83 99.96 STL10 5,000 94.74 92.47 STL10 1,000 94.01 17.39 SVHN 5,000 91.33 11.45 BadCLIP 95.13 98.57 ing may limit the attack performance. For instance, its attack success rate is less than 80% when inserting 1,500 poisoned samples into the Conceptual Captions dataset [66]. Fine-tuning on poisoning data. We provide another baseline considered by CleanCLIP [5], i.e., fine-tuning on poisoning data. For the CLIP with ResNet-50, it achieves a 58.40% ACC and a 94.60% ASR, while our BadCLIP performes better, with a 67.10% ACC and a 98.75% ASR, indicating the superiority of our design. BadEncoder. It fine-tunes the image encoder of the pretrained CLIP model with a large amount of additional unlabeled data, and then trains a task-specific classifier with the downstream dataset. Table 5 shows the comparison between BadEncoder and our BadCLIP on STL10 adopted in [37], where the number of labeled training samples per class is 16. Following [37], we use ResNet-50 as the image encoder\u2019s backbone and set the target class as \u201ctruck\u201d. For a comprehensive comparison, we vary the source and number of additional data adopted in BadEncoder. As can be seen, BadEncoder achieves a high clean accuracy and attack success rate with 50,000 additional data samples from STL10. However, when we reduce the amount of additional data or change the source, the attack performance is degraded significantly. Hence, BadEncoder depends on a large amount of additional data from a similar source as that of the downstream dataset, while our BadCLIP does not require additional data. Besides, we would like to emphasize that, unlike our BadCLIP, BadEncoder is not generalizable to unseen classes due to the task-specific classifier. 5.6. Trigger-Aware Prompts Matter Understanding BadCLIP in the feature space. As demonstrated in Fig. 1, the backdoor images change both image and text features in our BadCLIP. Here, to show the effect of trigger-aware prompts, we propose to decouple the inputs into the image and text encoders to analyze the effect of the changes of image and text features, respectively. Specifically, the image encoder takes the clean image x or the backdoor image x + \u03b4 as inputs; the text encoder takes the clean text prompt {h\u03b8(x), ct} or the backdoor text prompt {h\u03b8(x + \u03b4), ct} for the target class t as inputs. We calculate the distribution of cosine similarities between images and text features in four cases, as shown in Fig. 2. 
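The decoupled analysis just described can be sketched as follows. The encoder wrappers are the same hypothetical callables used in the earlier sketches, and the four similarity sets correspond to the four distributions plotted in Fig. 2; under the paper's observation, the (backdoor image, backdoor prompt) case should concentrate at the highest values.

```python
import torch
import torch.nn.functional as F

def four_case_similarities(x, delta, target_cls, image_encoder, text_encoder,
                           ctx_gen, class_embeds):
    """Cosine similarities for clean/backdoor images vs. clean/backdoor target-class prompts."""
    feats = {"clean": image_encoder(x), "bd": image_encoder(x + delta)}
    texts = {k: text_encoder(ctx_gen(v), class_embeds)[:, target_cls]   # (B, d)
             for k, v in feats.items()}
    return {(img, txt): F.cosine_similarity(feats[img], texts[txt], dim=-1)  # (B,)
            for img in ("clean", "bd") for txt in ("clean", "bd")}
```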
As sim(\ud835\udc53\ud835\udc99, \ud835\udc54({\u210e\ud835\udf3d\ud835\udc99, \ud835\udc84\"})) sim(\ud835\udc53\ud835\udc99, \ud835\udc54({\u210e\ud835\udf3d\ud835\udc99+ \ud835\udf39, \ud835\udc84\"})) sim(\ud835\udc53\ud835\udc99+ \ud835\udf39, \ud835\udc54({\u210e\ud835\udf3d\ud835\udc99, \ud835\udc84\"})) sim(\ud835\udc53\ud835\udc99+ \ud835\udf39, \ud835\udc54({\u210e\ud835\udf3d\ud835\udc99+ \ud835\udf39, \ud835\udc84\"})) Figure 2. Distribution of cosine similarities between images and text prompts in the feature space. f(x): clean image features; f(x + \u03b4): backdoor image features; g({h\u03b8(x), ct}): clean text features for the target class t; g({h\u03b8(x + \u03b4), ct}): backdoor text features for the target class t. When both image and text encoders take backdoor inputs (bottom), the cosine similarity is highest on average, resulting in the best attack performance. (a) Clean images 0 1 2 3 4 5 6 7 8 9 (b) Backdoor images Figure 3. t-SNE visualization of features extracted by BadCLIP\u2019s image encoder for clean images and their backdoor versions from 10 random classes on ImageNet. Our backdoor image features are still separable. Note that the class 0 corresponds to the target class. can be seen, when both image and text encoders take backdoor inputs, the cosine similarity is highest on average, implying that inputs are classified into the target class with the highest confidences. Our analysis illustrates that the success of our backdoor attack can be attributed to the collaboration between the changes of image and text features. Thus, although the features of images shift across different scenarios, the textual features of the target class change along with the trigger, ensuring successful attacks. This insight is fundamental and critical, and will inspire backdoor studies on multi-modal models. The t-SNE [76] visualization of clean and backdoor image features further confirms the effect of trigger-aware prompts. As suggested in [85], for backdoor attacks on the image recognition models only relying on the visual modality, their backdoor image features cluster together. In contrast, for our BadCLIP built on visual and textual modalities, its backdoor image features are still separable as shown in Fig. 3. This observation indirectly indicates that the backdoor text prompts contribute a lot to the targeted misclassification in our method. We believe that this interesting 7 \fTable 6. Comparison of the trigger-agnostic prompts and triggeraware prompts (adopted in our BadCLIP) in backdoor attacks. Results are averaged over 11 datasets. Method Seen Unseen H ACC ASR ACC ASR ACC ASR Trigger-Agnostic Prompts 76.19 95.31 62.73 2.21 68.14 3.81 Trigger-Aware Prompts (Ours) 79.55 99.52 69.86 99.02 73.95 99.26 Seen Unseen 0 1 2 3 4 Anomaly Index Clean BadCLIP (a) Neural Cleanse 10 9 8 7 6 5 4 3 2 1 0 u 0 20 40 60 80 100 ACC / ASR (%) ACC (Seen) ASR (Seen) ACC (Unseen) ASR (Unseen) (b) CLP defense Figure 4. Results of defense experiments on Caltech101. phenomenon for multi-modal models is worthy of a further exploration from both backdoor attack and defense sides. Backdoor attack with trigger-agnostic prompts. We study the effect of trigger-agnostic prompts by comparing BadCLIP with a baseline, i.e., the backdoor attack with trigger-agnostic prompts. Specifically, following [42, 100, 102], we model context tokens using continuous vectors, which are fixed for any image input once learned, such that the text features cannot be changed by the backdoor images. 
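As a point of contrast with the trigger-aware generator sketched earlier, the trigger-agnostic baseline can be approximated by a single learnable context tensor that ignores the image, in the spirit of CoOp; this is a sketch of the baseline's prompt module only, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class StaticContext(nn.Module):
    """Trigger-agnostic prompts: N learned context vectors shared by every image,
    so backdoor images cannot change the resulting text features."""
    def __init__(self, n_ctx=4, e=512):
        super().__init__()
        self.ctx = nn.Parameter(0.02 * torch.randn(n_ctx, e))

    def forward(self, img_feat):            # the image feature is ignored
        return self.ctx.unsqueeze(0).expand(img_feat.size(0), -1, -1)
```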
Other settings are the same as those used in BadCLIP. The comparison in Table 6 shows the superiority of our method. In particular, backdoor attack with trigger-agnostic prompts fails to generalize to unseen classes. These results demonstrate that trigger-aware prompts have a positive effect on the generalizability of BadCLIP. 5.7. Resistance to Backdoor Defense Methods Resistance to Neural Cleanse. Neural Cleanse [78] assumes that the backdoor trigger is patch based. For each class, it reconstructs the optimal patch pattern to convert any clean input to that target class. If any class has a significantly smaller pattern than the others, Neural Cleanse considers it as a backdoor indicator. It is quantified by the Anomaly Index metric. If the Anomaly Index is less than a threshold of 2 for a specific class, the defense considers that there is a backdoor with this class as the target label. We show the results of the clean CLIP model and our BadCLIP on Caltech101 in Fig. 4a. Similar to the clean model, BadCLIP passes the tests with very small scores, showing that our attack is resistant to Neural Cleanse. Resistance to CLP defense. Channel Lipschitzness based Pruning (CLP) [99] is a data-free backdoor removal method. It prunes those neurons that are sensitive to input changes. Fig. 4b presents the results under different settings of u in CLP. A smaller u means a larger pruning ratio. We can see from the figure that when CLP removes the backTable 7. Results of the proposed attack on OpenCLIP. BadOpenCLIP denotes our attack. Pre-trained Language Model Huge-scale Model OpenCLIP BadOpenCLIP OpenCLIP BadOpenCLIP ACC ACC ASR ACC ACC ASR 69.86 74.15 98.81 80.56 84.49 99.90 Table 8. Results of BadCLIP on the image-text retrieval task. CLIP CoOp CoCoOp BadCLIP R@1 R@1 R@1 R@1 B-R@1 83.0 79.4 85.9 85.2 98.3 door (u > 3), the accuracy on clean images is significantly reduced. Therefore, CLP cannot eliminate the backdoor injected by our BadCLIP with a high ACC. 5.8. Extensible Application Scenario Here, we evaluate our attack on more application scenarios. Firstly, we apply our attack to a recently released version of CLIP, named OpenCLIP [19], which utilizes a different pretraining dataset (LAION) [65], and many additional techniques such as using a pre-trained language model and scaling up to a huge-scale model architecture. Table 7 shows the results of two variants of OpenCLIP on UCF-101. We can see that our method can succeed in attacking these two models. Secondly, we carry out experiments on the imagetext retrieval task with Flickr30K [90], following [30]. The prompt learning methods are trained with only 3% of training data and tested on the complete test set. R@1 and BR@1 denote Recall at 1 and that of backdoor image queries, respectively. Table 8 shows the success of BadCLIP on the image-text retrieval task. These results indicate that the application scenario of BadCLIP is extensible. 6." + }, + { + "url": "http://arxiv.org/abs/2102.10496v1", + "title": "Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits", + "abstract": "To explore the vulnerability of deep neural networks (DNNs), many attack\nparadigms have been well studied, such as the poisoning-based backdoor attack\nin the training stage and the adversarial attack in the inference stage. In\nthis paper, we study a novel attack paradigm, which modifies model parameters\nin the deployment stage for malicious purposes. 
Specifically, our goal is to\nmisclassify a specific sample into a target class without any sample\nmodification, while not significantly reduce the prediction accuracy of other\nsamples to ensure the stealthiness. To this end, we formulate this problem as a\nbinary integer programming (BIP), since the parameters are stored as binary\nbits ($i.e.$, 0 and 1) in the memory. By utilizing the latest technique in\ninteger programming, we equivalently reformulate this BIP problem as a\ncontinuous optimization problem, which can be effectively and efficiently\nsolved using the alternating direction method of multipliers (ADMM) method.\nConsequently, the flipped critical bits can be easily determined through\noptimization, rather than using a heuristic strategy. Extensive experiments\ndemonstrate the superiority of our method in attacking DNNs.", + "authors": "Jiawang Bai, Baoyuan Wu, Yong Zhang, Yiming Li, Zhifeng Li, Shu-Tao Xia", + "published": "2021-02-21", + "updated": "2021-02-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CR" + ], + "main_content": "INTRODUCTION Due to the great success of deep neural networks (DNNs), its vulnerability (Szegedy et al., 2014; Gu et al., 2019) has attracted great attention, especially for security-critical applications (e.g., face recognition (Dong et al., 2019) and autonomous driving (Eykholt et al., 2018)). For example, backdoor attack (Saha et al., 2020; Xie et al., 2019) manipulates the behavior of the DNN model by mainly poisoning some training data in the training stage; adversarial attack (Goodfellow et al., 2015; Moosavi-Dezfooli et al., 2017) aims to fool the DNN model by adding malicious perturbations onto the input in the inference stage. Compared to the backdoor attack and adversarial attack, a novel attack paradigm, dubbed weight attack (Breier et al., 2018), has been rarely studied. It assumes that the attacker has full access to the memory of a device, such that he/she can directly change the parameters of a deployed model to achieve some malicious purposes (e.g., crushing a fully functional DNN and converting it to a random output generator (Rakin et al., 2019)). Since weight attack neither modi\ufb01es the input nor control the training process, both the service provider and the user are dif\ufb01cult to realize the existence of the attack. In practice, since the deployed DNN model is stored as binary bits in the memory, the attacker can modify the model parameters using some physical fault injection techniques, such as Row Hammer Attack (Agoyan et al., 2010; Selmke et al., 2015) and Laser Beam Attack (Kim et al., 2014). These techniques can precisely \ufb02ip any bit of the data in the memory. Some previous works (Rakin et al., 2019; 2020a;b) have demonstrated that it is feasible to change the model weights via bit \ufb02ipping to achieve some malicious purposes. However, the critical bits are identi\ufb01ed mostly \u2020This work was done when Jiawang Bai was an intern at Tencent AI Lab. Correspondence to: Baoyuan Wu (wubaoyuan@cuhk.edu.cn) and Shu-Tao Xia (xiast@sz.tsinghua.edu.cn). 
1 arXiv:2102.10496v1 [cs.LG] 21 Feb 2021 \fPublished as a conference paper at ICLR 2021 0 1 0 1 1 0 0 1 0 1 0 0 1 1 0 0 0 1 1 0 1 0 0 1 1 1 1 1 0 1 0 1 0 0 1 0 1 1 1 0 0 0 1 1 0 1 0 1 1 0 1 0 0 1 0 1 0 1 0 0 1 0 0 1 1 0 0 1 1 0 0 0 1 1 1 0 1 0 0 1 1 0 0 1 0 1 1 0 0 1 0 0 1 1 1 0 1 0 1 1 1 0 1 0 1 1 0 0 0 0 0 1 1 0 0 1 0 1 0 0 Attacker: identify and flip critical bits \u2026 A Specific Sample Other Samples Behave Normally Classified into the target class \u2026 A Specific Sample Other Samples Behave Normally Do not modify samples 0 1 0 1 1 0 0 1 0 1 0 0 1 1 0 0 0 1 1 0 1 0 0 1 1 0 1 1 0 1 0 1 0 0 1 0 1 1 1 0 0 0 1 1 0 1 0 1 1 0 1 0 0 1 0 1 0 1 1 0 1 0 0 1 1 0 0 1 1 0 0 0 1 1 1 0 1 1 0 1 1 0 0 1 0 1 1 0 0 1 0 0 1 1 1 0 1 0 1 1 1 0 1 0 1 1 0 0 0 0 0 1 1 0 0 1 0 1 0 0 DNN in the memory Figure 1: Demonstration of our proposed attack against a deployed DNN in the memory. By \ufb02ipping critical bits (marked in red), our method can mislead a speci\ufb01c sample into the target class without any sample modi\ufb01cation while not signi\ufb01cantly reduce the prediction accuracy of other samples. using some heuristic strategies in their methods. For example, Rakin et al. (2019) combined gradient ranking and progressive search to identify the critical bits for \ufb02ipping. This work also focuses on the bit-level weight attack against DNNs in the deployment stage, whereas with two different goals, including effectiveness and stealthiness. The effectiveness requires that the attacked model can misclassify a speci\ufb01c sample to a attacker-speci\ufb01ed target class without any sample modi\ufb01cation, while the stealthiness encourages that the prediction accuracy of other samples will not be signi\ufb01cantly reduced. As shown in Fig. 1, to achieve these goals, we propose to identify and \ufb02ip bits that are critical to the prediction of the speci\ufb01c sample but not signi\ufb01cantly impact the prediction of other samples. Speci\ufb01cally, we treat each bit in the memory as a binary variable, and our task is to determine its state (i.e., 0 or 1). Accordingly, it can be formulated as a binary integer programming (BIP) problem. To further improve the stealthiness, we also limit the number of \ufb02ipped bits, which can be formulated as a cardinality constraint. However, how to solve the BIP problem with a cardinality constraint is a challenging problem. Fortunately, inspired by an advanced optimization method, the \u2113p-box ADMM (Wu & Ghanem, 2018), this problem can be reformulated as a continuous optimization problem, which can further be ef\ufb01ciently and effectively solved by the alternating direction method of multipliers (ADMM) (Glowinski & Marroco, 1975; Gabay & Mercier, 1976). Consequently, the \ufb02ipped bits can be determined through optimization rather than the original heuristic strategy, which makes our attack more effective. Note that we also conduct attack against the quantized DNN models, following the setting in some related works (Rakin et al., 2019; 2020a). Extensive experiments demonstrate the superiority of the proposed method over several existing weight attacks. For example, our method achieves a 100% attack success rate with 7.37 bit-\ufb02ips and 0.09% accuracy degradation of the rest unspeci\ufb01c inputs in attacking a 8-bit quantized ResNet-18 model on ImageNet. Moreover, we also demonstrate that the proposed method is also more resistant to existing defense methods. The main contributions of this work are three-fold. 
1) We explore a novel attack scenario where the attacker enforces a speci\ufb01c sample to be predicted as a target class by modifying the weights of a deployed model via bit \ufb02ipping without any sample modi\ufb01cation. 2) We formulate the attack as a BIP problem with the cardinality constraint and propose an effective and ef\ufb01cient method to solve this problem. 3) Extensive experiments verify the superiority of the proposed method against DNNs with or without defenses. 2 RELATED WORKS Neural Network Weight Attack. How to perturb the weights of a trained DNN for malicious purposes received extensive attention (Liu et al., 2017a; 2018b; Hong et al., 2019). Liu et al. (2017a) \ufb01rstly proposed two schemes to modify model parameters for misclassi\ufb01cation without and with considering stealthiness, which is dubbed single bias attack (SBA) and gradient descent 2 \fPublished as a conference paper at ICLR 2021 attack (GDA) respectively. After that, Trojan attack (Liu et al., 2018b) was proposed, which injects malicious behavior to the DNN by generating a general trojan trigger and then retraining the model. This method requires to change lots of parameters. Recently, fault sneaking attack (FSA) (Zhao et al., 2019) was proposed, which aims to misclassify certain samples into a target class by modifying the DNN parameters with two constraints, including maintaining the classi\ufb01cation accuracy of other samples and minimizing parameter modi\ufb01cations. Note that all those methods are designed to misclassify multiple samples instead of a speci\ufb01c sample, which may probably modify lots of parameters or degrade the accuracy of other samples sharply. Bit-Flip based Attack. Recently, some physical fault injection techniques (Agoyan et al., 2010; Kim et al., 2014; Selmke et al., 2015) were proposed, which can be adopted to precisely \ufb02ip any bit in the memory. Those techniques promote researchers to study how to modify model parameters at the bit-level. As a branch of weight attack, the bit-\ufb02ip based attack was \ufb01rstly explored in (Rakin et al., 2019). It proposed an untargeted attack that can convert the attacked DNN to a random output generator with several bit-\ufb02ips. Besides, Rakin et al. (2020a) proposed the targeted bit Trojan (TBT) to inject the fault into DNNs by \ufb02ipping some critical bits. Speci\ufb01cally, the attacker \ufb02ips the identi\ufb01ed bits to force the network to classify all samples embedded with a trigger to a certain target class, while the network operates with normal inference accuracy with benign samples. Most recently, Rakin et al. (2020b) proposed the targeted bit-\ufb02ip attack (T-BFA), which achieves malicious purposes without modifying samples. Speci\ufb01cally, T-BFA can mislead samples from single source class or all classes to a target class by \ufb02ipping the identi\ufb01ed weight bits. It is worth noting that the above bit-\ufb02ip based attacks leverage heuristic strategies to identify critical weight bits. How to \ufb01nd critical bits for the bit-\ufb02ip based attack method is still an important open question. 3 TARGETED ATTACK WITH LIMITED BIT-FLIPS (TA-LBF) 3.1 PRELIMINARIES Storage and Calculation of Quantized DNNs. Currently, it is a widely-used technique to quantize DNNs before deploying on devices for ef\ufb01ciency and reducing storage size. 
For each weight in l-th layer of a Q-bit quantized DNN, it will be represented and then stored as the signed integer in two\u2019s complement representation (v = [vQ; vQ\u22121; ...; v1] \u2208{0, 1}Q) in the memory. Attacker can modify the weights of DNNs through \ufb02ipping the stored binary bits. In this work, we adopt the layer-wise uniform weight quantization scheme similar to Tensor-RT (Migacz, 2017). Accordingly, each binary vector v can be converted to a real number by a function h(\u00b7), as follow: h(v) = (\u22122Q\u22121 \u00b7 vQ + Q\u22121 X i=1 2i\u22121 \u00b7 vi) \u00b7 \u2206l, (1) where l indicates which layer the weight is from, \u2206l > 0 is a known and stored constant which represents the step size of the l-th layer weight quantizer. Notations. We denote a Q-bit quantized DNN-based classi\ufb01cation model as f : X \u2192Y, where X \u2208Rd being the input space and Y \u2208{1, 2, ..., K} being the K-class output space. Assuming that the last layer of this DNN model is a fully-connected layer with B \u2208{0, 1}K\u00d7C\u00d7Q being the quantized weights, where C is the dimension of last layer\u2019s input. Let Bi,j \u2208{0, 1}Q be the two\u2019s complement representation of a single weight and Bi \u2208{0, 1}C\u00d7Q denotes all the binary weights connected to the i-th output neuron. Given a test sample x with the ground-truth label s, f(x; \u0398, B) \u2208[0, 1]K is the output probability vector and g(x; \u0398) \u2208RC is the input of the last layer, where \u0398 denotes the model parameters without the last layer. Attack Scenario. In this paper, we focus on the white-box bit-\ufb02ip based attack, which was \ufb01rst introduced in (Rakin et al., 2019). Speci\ufb01cally, we assume that the attacker has full knowledge of the model (including it\u2019s architecture, parameters, and parameters\u2019 location in the memory), and can precisely \ufb02ip any bit in the memory. Besides, we also assume that attackers can have access to a small portion of benign samples, but they can not tamper the training process and the training data. Attacker\u2019s Goals. Attackers have two main goals, including the effectiveness and the stealthiness. Speci\ufb01cally, effectiveness requires that the attacked model can misclassify a speci\ufb01c sample to a prede\ufb01ned target class without any sample modi\ufb01cation, and the stealthiness requires that the prediction accuracy of other samples will not be signi\ufb01cantly reduced. 3 \fPublished as a conference paper at ICLR 2021 3.2 THE PROPOSED METHOD Loss for Ensuring Effectiveness. Recall that our \ufb01rst target is to force a speci\ufb01c image to be classi\ufb01ed as the target class by modifying the model parameters at the bit-level. To this end, the most straightforward way is maximizing the logit of the target class while minimizing that of the source class. For a sample x, the logit of a class can be directly determined by the input of the last layer g(x; \u0398) and weights connected to the node of that class. Accordingly, we can modify weights only connected to the source and target class to ful\ufb01ll our purpose, as follows: L1(x; \u0398, B, \u02c6 Bs, \u02c6 Bt) = max \u0000m \u2212p(x; \u0398, \u02c6 Bt) + \u03b4, 0 \u0001 + max \u0000p(x; \u0398, \u02c6 Bs) \u2212m + \u03b4, 0 \u0001 , (2) where p(x; \u0398, \u02c6 Bi) = [h(\u02c6 Bi,1); h(\u02c6 Bi,2); ...; h(\u02c6 Bi,C)]\u22a4g(x; \u0398) denotes the logit of class i (i = s or i = t), h(\u00b7) is the function de\ufb01ned in Eq. 
(1), m = max i\u2208{0,...,K}\\{s}p(x; \u0398, Bi), and \u03b4 \u2208R indicates a slack variable, which will be speci\ufb01ed in later experiments. The \ufb01rst term of L1 aims at increasing the logit of the target class, while the second term is to decrease the logit of the source class. The loss L1 is 0 only when the output on target class is more than m + \u03b4 and the output on source class is less than m \u2212\u03b4. That is, the prediction on x of the target model is the prede\ufb01ned target class. Note that \u02c6 Bs, \u02c6 Bt \u2208{0, 1}C\u00d7Q are two variables we want to optimize, corresponding to the weights of the fully-connected layer w.r.t. class s and t, respectively, in the target DNN model. B \u2208{0, 1}K\u00d7C\u00d7Q denotes the weights of the fully-connected layer of the original DNN model, and it is a constant tensor in L1. For clarity, hereafter we simplify L1(x; \u0398, B, \u02c6 Bs, \u02c6 Bt) as L1(\u02c6 Bs, \u02c6 Bt), since x and \u0398 are also provided input and weights. Loss for Ensuring Stealthiness. As we mentioned in Section 3.1, we assume that the attacker can get access to an auxiliary sample set {(xi, yi)}N i=1. Accordingly, the stealthiness of the attack can be formulated as follows: L2(\u02c6 Bs, \u02c6 Bt) = N X i=1 \u2113(f(xi; \u0398, B{1,...,K}\\{s,t}, \u02c6 Bs, \u02c6 Bt), yi), (3) where B{1,...,K}\\{s,t} denotes {B1, B2, ..., BK}\\{Bs, Bt}, and fj(xi; \u0398, B{1,...,K}\\{s,t}, \u02c6 Bs, \u02c6 Bt) indicates the posterior probability of xi w.r.t. class j, caclulated by Softmax(p(xi; \u0398, \u02c6 Bj)) or Softmax(p(xi; \u0398,Bj)). \u2113(\u00b7, \u00b7) is speci\ufb01ed by the cross entropy loss. To keep clarity, xi, \u0398 and B{1,...,K}\\{s,t} are omitted in L2(\u02c6 Bs, \u02c6 Bt) . Besides, to better meet our goal, a straightforward additional approach is reducing the magnitude of the modi\ufb01cation. In this paper, we constrain the number of bit-\ufb02ips less than k. Physical bit \ufb02ipping techniques can be time-consuming as discussed in (Van Der Veen et al., 2016; Zhao et al., 2019). Moreover, such techniques lead to abnormal behaviors in the attacked system (e.g., suspicious cache activity of processes), which may be detected by some physical detection-based defenses (Gruss et al., 2018). As such, minimizing the number of bit-\ufb02ips is critical to make the attack more ef\ufb01cient and practical. Overall Objective. In conclusion, the \ufb01nal objective function is as follows: min \u02c6 Bs,\u02c6 Bt L1(\u02c6 Bs, \u02c6 Bt) + \u03bbL2(\u02c6 Bs, \u02c6 Bt), s.t. \u02c6 Bs \u2208{0, 1}C\u00d7Q, \u02c6 Bt \u2208{0, 1}C\u00d7Q, dH(Bs, \u02c6 Bs) + dH(Bt, \u02c6 Bt) \u2264k, (4) where dH(\u00b7, \u00b7) denotes the Hamming distance and \u03bb > 0 is a trade-off parameter. For the sake of brevity, Bs and Bt are concatenated and further reshaped to the vector b \u2208{0, 1}2CQ. Similarly, \u02c6 Bs and \u02c6 Bt are concatenated and further reshaped to the vector \u02c6 b \u2208{0, 1}2CQ. Besides, for binary vector b and \u02c6 b, there exists a nice relationship between Hamming distance and Euclidean distance: dH(b, \u02c6 b) = ||b \u2212\u02c6 b||2 2. The new formulation of the objective is as follows: min \u02c6 b L1(\u02c6 b) + \u03bbL2(\u02c6 b), s.t. \u02c6 b \u2208{0, 1}2CQ, ||b \u2212\u02c6 b||2 2 \u2212k \u22640. (5) Problem (5) is denoted as TA-LBF (targeted attack with limited bit-\ufb02ips). Note that TA-LBF is a binary integer programming (BIP) problem, whose optimization is challenging. 
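To make these pieces concrete, the sketch below decodes a Q-bit two's-complement weight as in Eq. (1) and evaluates the effectiveness loss L1 of Eq. (2) and the stealthiness loss L2 of Eq. (3) on decoded last-layer weights. The bit ordering, the slack value, and the tensor names are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def decode_weight(bits, step):
    """Eq. (1): bits = [v_1, ..., v_Q] with v_Q the sign bit; step is Delta_l."""
    q = bits.shape[-1]
    scale = torch.tensor([2.0 ** (i - 1) for i in range(1, q)] + [-2.0 ** (q - 1)])
    return (bits * scale).sum(-1) * step

def loss_effectiveness(feat, w_hat_s, w_hat_t, other_logits, slack=10.0):
    """L1 of Eq. (2): raise the target-class logit above m + slack and push the
    source-class logit below m - slack, where m is the largest original-model
    logit over classes other than the source (a constant during optimization)."""
    p_s, p_t = feat @ w_hat_s, feat @ w_hat_t
    m = other_logits.max()
    return F.relu(m - p_t + slack) + F.relu(p_s - m + slack)

def loss_stealthiness(aux_feats, aux_labels, w_full):
    """L2 of Eq. (3): cross-entropy of auxiliary samples under the modified last layer."""
    return F.cross_entropy(aux_feats @ w_full.t(), aux_labels)

# Example: an 8-bit weight decoded with step size 0.01; flipping the sign bit
# v_Q changes the value by -2^(Q-1) * Delta_l, which is why single bit-flips
# can be so damaging.
bits = torch.tensor([1., 0., 1., 0., 0., 0., 0., 0.])    # v_1, ..., v_8
print(decode_weight(bits, 0.01))                          # 0.05
bits[-1] = 1.0
print(decode_weight(bits, 0.01))                          # -1.23
```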
We will introduce an effective and ef\ufb01cient method to solve it in the following section. 4 \fPublished as a conference paper at ICLR 2021 3.3 AN EFFECTIVE OPTIMIZATION METHOD FOR TA-LBF To solve the challenging BIP problem (5), we adopt the generic solver for integer programming, dubbed \u2113p-Box ADMM (Wu & Ghanem, 2018). The solver presents its superior performance in many tasks, e.g., model pruning (Li et al., 2019), clustering (Bibi et al., 2019), MAP inference (Wu et al., 2020a), adversarial attack (Fan et al., 2020), etc.. It proposed to replace the binary constraint equivalently by the intersection of two continuous constraints, as follows \u02c6 b \u2208{0, 1}2CQ \u21d4\u02c6 b \u2208(Sb \u2229Sp), (6) where Sb = [0, 1]2CQ indicates the box constraint, and Sp = {\u02c6 b : ||\u02c6 b \u22121 2||2 2 = 2CQ 4 } denotes the \u21132-sphere constraint. Utilizing (6), Problem (5) is equivalently reformulated as min \u02c6 b,u1\u2208Sb,u2\u2208Sp,u3\u2208R+ L1(\u02c6 b) + \u03bbL2(\u02c6 b), s.t. \u02c6 b = u1, \u02c6 b = u2, ||b \u2212\u02c6 b||2 2 \u2212k + u3 = 0, (7) where two extra variables u1 and u2 are introduced to split the constraints w.r.t. \u02c6 b. Besides, the nonnegative slack variable u3 \u2208R+ is used to transform ||b\u2212\u02c6 b||2 2\u2212k \u22640 in (5) into ||b\u2212\u02c6 b||2 2\u2212k+u3 = 0. The above constrained optimization problem can be ef\ufb01ciently solved by the alternating direction method of multipliers (ADMM) (Boyd et al., 2011). Following the standard procedure of ADMM, we \ufb01rstly present the augmented Lagrangian function of the above problem, as follows: L(\u02c6 b, u1, u2, u3, z1, z2, z3) =L1(\u02c6 b) + \u03bbL2(\u02c6 b) + z\u22a4 1 (\u02c6 b \u2212u1) + z\u22a4 2 (\u02c6 b \u2212u2) +z3(||b \u2212\u02c6 b||2 2 \u2212k + u3) + c1(u1) + c2(u2) + c3(u3) +\u03c11 2 ||\u02c6 b \u2212u1||2 2 + \u03c12 2 ||\u02c6 b \u2212u2||2 2 + \u03c13 2 (||b \u2212\u02c6 b||2 2 \u2212k + u3)2, (8) where z1, z2 \u2208R2CQ and z3 \u2208R are dual variables, and \u03c11, \u03c12, \u03c13 > 0 are penalty factors, which will be speci\ufb01ed later. c1(u1) = I{u1\u2208Sb}, c2(u2) = I{u2\u2208Sp}, and c3(u3) = I{u3\u2208R+} capture the constraints Sb, Sp and R+, respectively. The indicator function I{a} = 0 if a is true; otherwise, I{a} = +\u221e. Based on the augmented Lagrangian function, the primary and dual variables are updated iteratively, with r indicating the iteration index. Given (\u02c6 br, zr 1, zr 2, zr 3), update (ur+1 1 , ur+1 2 , ur+1 3 ). Given (\u02c6 br, zr 1, zr 2, zr 3), (u1, u2, u3) are independent, and they can be optimized in parallel, as follows \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 ur+1 1 = arg min u1\u2208Sb (zr 1)\u22a4(\u02c6 br \u2212u1) + \u03c11 2 ||\u02c6 br \u2212u1||2 2 = PSb(\u02c6 br + zr 1 \u03c11 ), ur+1 2 = arg min u2\u2208Sp (zr 2)\u22a4(\u02c6 br \u2212u2) + \u03c12 2 ||\u02c6 br \u2212u2||2 2 = PSp(\u02c6 br + zr 2 \u03c12 ), ur+1 3 = arg min u3\u2208R+ zr 3(||b \u2212\u02c6 br||2 2 \u2212k + u3) + \u03c13 2 (||b \u2212\u02c6 br||2 2 \u2212k + u3)2 = PR+(\u2212||b \u2212\u02c6 br||2 2 + k \u2212zr 3 \u03c13 ), (9) where PSb(a) = min((1, max(0, a)) with a \u2208Rn is the projection onto the box constraint Sb; PSp(a) = \u221an 2 \u00af a ||a|| + 1 2 with \u00af a = a \u22121 2 indicates the projection onto the \u21132-sphere constraint Sp (Wu & Ghanem, 2018); PR+(a)=max(0, a) with a\u2208R indicates the projection onto R+. 
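The sketch below numerically checks the two identities that make this reformulation work and implements the three closed-form projections used in the parallel (u1, u2, u3) update of Eq. (9); it is an illustration on random binary vectors, not the full attack.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2 * 512 * 8                                  # 2CQ for, e.g., C = 512 and Q = 8
b = rng.integers(0, 2, n).astype(float)
b_hat = rng.integers(0, 2, n).astype(float)

# d_H(b, b_hat) = ||b - b_hat||_2^2 for binary vectors, and every binary vector
# lies on the l2-sphere ||b - 1/2||^2 = n/4 used in the lp-Box constraint (6).
assert np.sum(b != b_hat) == np.sum((b - b_hat) ** 2)
assert np.isclose(np.sum((b - 0.5) ** 2), n / 4)

def proj_box(a):
    """P_Sb: projection onto the box [0, 1]^n."""
    return np.clip(a, 0.0, 1.0)

def proj_sphere(a):
    """P_Sp: projection onto the sphere ||u - 1/2||^2 = n/4."""
    a_bar = a - 0.5
    return np.sqrt(a.size) / 2.0 * a_bar / np.linalg.norm(a_bar) + 0.5

def proj_nonneg(a):
    """P_R+: projection of a scalar onto the nonnegative reals."""
    return max(0.0, a)

def update_u(b_hat, b, z1, z2, z3, rho1, rho2, rho3, k):
    """The parallel (u1, u2, u3) update of Eq. (9)."""
    u1 = proj_box(b_hat + z1 / rho1)
    u2 = proj_sphere(b_hat + z2 / rho2)
    u3 = proj_nonneg(-np.sum((b - b_hat) ** 2) + k - z3 / rho3)
    return u1, u2, u3
```

The remaining steps of one ADMM iteration, the gradient update of b-hat in Eq. (10) and the dual ascent in Eq. (11) below, plug directly into this loop.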
Given (ur+1 1 , ur+1 2 , ur+1 3 , zr 1, zr 2, zr 3), update \u02c6 br+1. Although there is no closed-form solution to \u02c6 br+1, it can be easily updated by the gradient descent method, as both L1(\u02c6 b) and L2(\u02c6 b) are differentiable w.r.t. \u02c6 b, as follows \u02c6 br+1 \u2190\u02c6 br \u2212\u03b7 \u00b7 \u2202L(\u02c6 b, ur+1 1 , ur+1 2 , ur+1 3 , zr 1, zr 2, zr 3) \u2202\u02c6 b \f \f \f\u02c6 b=\u02c6 br, (10) where \u03b7 > 0 denotes the step size. Note that we can run multiple steps of gradient descent in the above update. Both the number of steps and \u03b7 will be speci\ufb01ed in later experiments. Besides, due to the space limit, the detailed derivation of \u2202L/\u2202\u02c6 b will be presented in Appendix A. 5 \fPublished as a conference paper at ICLR 2021 Given (\u02c6 br+1, ur+1 1 , ur+1 2 , ur+1 3 ), update (zr+1 1 , zr+1 2 , zr+1 3 ). The dual variables are updated by the gradient ascent method, as follows \uf8f1 \uf8f2 \uf8f3 zr+1 1 = zr 1 + \u03c11(\u02c6 br+1 \u2212ur+1 1 ), zr+1 2 = zr 2 + \u03c12(\u02c6 br+1 \u2212ur+1 2 ), zr+1 3 = zr 3 + \u03c13(||b \u2212\u02c6 br+1||2 2 \u2212k + ur+1 3 ). (11) Remarks. 1) Note that since (ur+1 1 , ur+1 2 , ur+1 3 ) are updated in parallel, their updates belong to the same block. Thus, the above algorithm is a two-block ADMM algorithm. We provide the algorithm outline in Appendix B. 2) Except for the update of \u02c6 br+1, all other updates are very simple and ef\ufb01cient. The computational cost of the whole algorithm will be analyzed in Appendix C. 3) Due to the inexact solution to \u02c6 br+1 using gradient descent, the theoretical convergence of the whole ADMM algorithm cannot be guaranteed. However, as demonstrated in many previous works (Gol\u2019shtein & Tret\u2019yakov, 1979; Eckstein & Bertsekas, 1992; Boyd et al., 2011), the inexact two-block ADMM often shows good practical convergence, which is also the case in our later experiments. Besides, the numerical convergence analysis is presented in Appendix D. 4) The proper adjustment of (\u03c11, \u03c12, \u03c13) could accelerate the practical convergence, which will be speci\ufb01ed later . 4 EXPERIMENTS 4.1 EVALUATION SETUP Settings. We compare our method (TA-LBF) with GDA (Liu et al., 2017a), FSA (Zhao et al., 2019), T-BFA (Rakin et al., 2020b), and TBT (Rakin et al., 2020a). All those methods can be adopted to misclassify a speci\ufb01c image into a target class. We also take the \ufb01ne-tuning (FT) of the last fully-connected layer as a baseline method. We conduct experiments on CIFAR-10 (Krizhevsky et al., 2009) and ImageNet (Russakovsky et al., 2015). We randomly select 1,000 images from each dataset as the evaluation set for all methods. Speci\ufb01cally, for each of the 10 classes in CIFAR-10, we perform attacks on the 100 randomly selected validation images from the other 9 classes. For ImageNet, we randomly choose 50 target classes. For each target class, we perform attacks on 20 images randomly selected from the rest classes in the validation set. Besides, for all methods except GDA which does not employ auxiliary samples, we provide 128 and 512 auxiliary samples on CIFAR-10 and ImageNet, respectively. Following the setting in (Rakin et al., 2020a;b), we adopt the quantized ResNet (He et al., 2016) and VGG (Simonyan & Zisserman, 2015) as the target models. For our TA-LBF, the trade-off parameter \u03bb and the constraint parameter k affect the attack stealthiness and the attack success rate. 
We adopt a strategy for jointly searching \u03bb and k, which is speci\ufb01ed in Appendix E.3. More descriptions of our settings are provided in Appendix E. Evaluation Metrics. We adopt three metrics to evaluate the attack performance, i.e., the post attack accuracy (PA-ACC), the attack success rate (ASR), and the number of bit-\ufb02ips (N\ufb02ip). PA-ACC denotes the post attack accuracy on the validation set except for the speci\ufb01c attacked sample and the auxiliary samples. ASR is de\ufb01ned as the ratio of attacked samples that are successfully attacked into the target class among all 1,000 attacked samples. N\ufb02ip is the number of bit-\ufb02ips required for an attack. A better attack performance corresponds to a higher PA-ACC and ASR, while a lower N\ufb02ip. Besides, we also show the accuracy of the original model, denoted as ACC. 4.2 MAIN RESULTS Results on CIFAR-10. The results of all methods on CIFAR-10 are shown in Table 1. Our method achieves a 100% ASR with the fewest N\ufb02ip for all the bit-widths and architectures. FT modi\ufb01es the maximum number of bits among all methods since there is no limitation of parameter modi\ufb01cations. Due to the absence of the training data, the PA-ACC of FT is also poor. These results indicate that \ufb01ne-tuning the trained DNN as an attack method is infeasible. Although T-BFA \ufb02ips the secondfewest bits under three cases, it fails to achieve a higher ASR than GDA and FSA. In terms of PA-ACC, TA-LBF is comparable to other methods. Note that the PA-ACC of TA-LBF signi\ufb01cantly outperforms that of GDA, which is the most competitive w.r.t. ASR and N\ufb02ip among all the baseline methods. The PA-ACC of GDA is relatively poor, because it does not employ auxiliary samples. Achieving the highest ASR, the lowest N\ufb02ip, and the comparable PA-ACC demonstrates that our optimization-based method is more superior than other heuristic methods (TBT, T-BFA and GDA). 6 \fPublished as a conference paper at ICLR 2021 Table 1: Results of all attack methods across different bit-widths and architectures on CIFAR-10 and ImageNet (bold: the best; underline: the second best). The mean and standard deviation of PA-ACC and N\ufb02ip are calculated by attacking the 1,000 images. Our method is denoted as TA-LBF. 
Dataset Method Target Model PA-ACC (%) ASR (%) N\ufb02ip Target Model PA-ACC (%) ASR (%) N\ufb02ip CIFAR-10 FT ResNet 8-bit ACC: 92.16% 85.01\u00b12.90 100.0 1507.51\u00b186.54 VGG 8-bit ACC: 93.20% 84.31\u00b13.10 98.7 11298.74\u00b1830.36 TBT 88.07\u00b10.84 97.3 246.70\u00b18.19 77.79\u00b123.35 51.6 599.40\u00b119.53 T-BFA 87.56\u00b12.22 98.7 9.91\u00b12.33 89.83\u00b13.92 96.7 14.53\u00b13.74 FSA 88.38\u00b12.28 98.9 185.51\u00b154.93 88.80\u00b12.86 96.8 253.92\u00b1122.06 GDA 86.73\u00b13.50 99.8 26.83\u00b112.50 85.51\u00b12.88 100.0 21.54\u00b16.79 TA-LBF 88.20\u00b12.64 100.0 5.57\u00b11.58 86.06\u00b13.17 100.0 7.40\u00b12.72 FT ResNet 4-bit ACC: 91.90% 84.37\u00b12.94 100.0 392.48\u00b147.26 VGG 4-bit ACC: 92.61% 83.31\u00b13.76 94.5 2270.52\u00b1324.69 TBT 87.79\u00b11.86 96.0 118.20\u00b115.03 83.90\u00b12.63 62.4 266.40\u00b118.70 T-BFA 86.46\u00b12.80 97.9 8.80\u00b12.01 88.74\u00b14.52 96.2 11.23\u00b12.36 FSA 87.73\u00b12.36 98.4 76.83\u00b125.27 87.58\u00b13.06 97.5 75.03\u00b129.75 GDA 86.25\u00b13.59 99.8 14.08\u00b17.94 85.08\u00b12.82 100.0 10.31\u00b13.77 TA-LBF 87.82\u00b12.60 100.0 5.25\u00b11.09 85.91\u00b13.29 100.0 6.26\u00b12.37 ImageNet FT ResNet 8-bit ACC: 69.50% 59.33\u00b10.93 100.0 277424.29\u00b112136.34 VGG 8-bit ACC: 73.31% 62.08\u00b12.33 100.0 1729685.22\u00b1137539.54 TBT 69.18\u00b10.03 99.9 577.40\u00b119.42 72.99\u00b10.02 99.2 4115.26\u00b1191.25 T-BFA 68.71\u00b10.36 79.3 24.57\u00b120.03 73.09\u00b10.12 84.5 363.78\u00b1153.28 FSA 69.27\u00b10.15 99.7 441.21\u00b1119.45 73.28\u00b10.03 100.0 1030.03\u00b1260.30 GDA 69.26\u00b10.22 100.0 18.54\u00b16.14 73.29\u00b10.02 100.0 197.05\u00b149.85 TA-LBF 69.41\u00b10.08 100.0 7.37\u00b12.18 73.28\u00b10.03 100.0 69.89\u00b118.42 FT ResNet 4-bit ACC: 66.77% 15.65\u00b14.52 100.0 135854.50\u00b121399.94 VGG 4-bit ACC: 71.76% 17.76\u00b11.71 100.0 1900751.70\u00b137329.44 TBT 66.36\u00b10.07 99.8 271.24\u00b115.98 71.18\u00b10.03 100.0 3231.00\u00b1345.68 T-BFA 65.86\u00b10.42 80.4 24.79\u00b119.02 71.49\u00b10.15 84.3 350.33\u00b1158.57 FSA 66.44\u00b10.21 99.9 157.53\u00b133.66 71.69\u00b10.09 100.0 441.32\u00b1111.26 GDA 66.54\u00b10.22 100.0 11.45\u00b13.82 71.73\u00b10.03 100.0 107.18\u00b128.70 TA-LBF 66.69\u00b10.07 100.0 7.96\u00b12.50 71.73\u00b10.03 100.0 69.72\u00b118.84 Results on ImageNet. The results on ImageNet are shown in Table 1. It can be observed that GDA shows very competitive performance compared to other methods. However, our method obtains the highest PA-ACC, the fewest bit-\ufb02ips (less than 8), and a 100% ASR in attacking ResNet. For VGG, our method also achieves a 100% ASR with the fewest N\ufb02ip for both bit-widths. The N\ufb02ip results of our method are mainly attributed to the cardinality constraint on the number of bit-\ufb02ips. Moreover, for our method, the average PA-ACC degradation over four cases on ImageNet is only 0.06%, which demonstrates the stealthiness of our attack. When comparing the results of ResNet and VGG, an interesting observation is that all methods require signi\ufb01cantly more bit-\ufb02ips for VGG. One reason is that VGG is much wider than ResNet. Similar to the claim in (He et al., 2020), increasing the network width contributes to the robustness against the bit-\ufb02ip based attack. 4.3 RESISTANCE TO DEFENSE METHODS Resistance to Piece-wise Clustering. He et al. (2020) proposed a novel training technique, called piece-wise clustering, to enhance the network robustness against the bit-\ufb02ip based attack. 
Such a training technique introduces an additional weight penalty to the inference loss, which has the effect of eliminating close-to-zero weights (He et al., 2020). We test the resistance of all attack methods to the piece-wise clustering. We conduct experiments with the 8-bit quantized ResNet on CIFAR-10 and ImageNet. Following the ideal con\ufb01guration in (He et al., 2020), the clustering coef\ufb01cient, which is a hyper-parameter of piece-wise clustering, is set to 0.001 in our evaluation. For our method, the initial k is set to 50 on ImageNet and the rest settings are the same as those in Section 4.1. Besides the three metrics in Section 4.1, we also present the number of increased N\ufb02ip compared to the model without defense (i.e., results in Table 1), denoted as \u2206N\ufb02ip. The results of the resistance to the piece-wise clustering of all attack methods are shown in Table 2. It shows that the model trained with piece-wise clustering can improve the number of required bit-\ufb02ips for all attack methods. However, our method still achieves a 100% ASR with the least number of bit-\ufb02ips on both two datasets. Although TBT achieves a smaller \u2206N\ufb02ip than ours on CIFAR-10, its ASR is only 52.3%, which also veri\ufb01es the defense effectiveness of the piece-wise clustering. Compared with other methods, TA-LBF achieves the fewest \u2206N\ufb02ip on ImageNet and the best PA-ACC on both datasets. These results demonstrate the superiority of our method over other methods when attacking models trained with piece-wise clustering. 7 \fPublished as a conference paper at ICLR 2021 Table 2: Results of all attack methods against the models with defense on CIFAR-10 and ImageNet (bold: the best; underline: the second best). The mean and standard deviation of PA-ACC and N\ufb02ip are calculated by attacking the 1,000 images. Our method is denoted as TA-LBF. \u2206N\ufb02ip denotes the increased N\ufb02ip compared to the corresponding result in Table 1. 
Defense Dataset Method ACC (%) PA-ACC (%) ASR (%) N\ufb02ip \u2206N\ufb02ip Piece-wise Clustering CIFAR-10 FT 91.01 84.06\u00b13.56 99.5 1893.55\u00b168.98 386.04 TBT 87.05\u00b11.69 52.3 254.20\u00b110.22 7.50 T-BFA 85.82\u00b11.89 98.6 45.51\u00b19.47 35.60 FSA 86.61\u00b12.51 98.6 246.11\u00b175.36 60.60 GDA 84.12\u00b14.77 100.0 52.76\u00b116.29 25.93 TA-LBF 87.30\u00b12.74 100.0 18.93\u00b17.11 13.36 ImageNet FT 63.62 43.44\u00b12.07 92.2 762267.56\u00b152179.46 484843.27 TBT 63.07\u00b10.04 81.8 1184.14\u00b130.30 606.74 T-BFA 62.82\u00b10.27 90.1 273.56\u00b1191.29 248.99 FSA 63.26\u00b10.21 99.5 729.94\u00b1491.83 288.73 GDA 63.14\u00b10.48 100.0 107.59\u00b131.15 89.05 TA-LBF 63.52\u00b10.14 100.0 51.11\u00b14.33 43.74 Larger Model Capacity CIFAR-10 FT 94.29 86.46\u00b12.84 100.0 2753.43\u00b1188.27 1245.92 TBT 89.72\u00b12.99 89.5 366.90\u00b112.09 120.20 T-BFA 91.16\u00b11.42 98.7 17.91\u00b14.64 8.00 FSA 90.70\u00b12.37 98.5 271.27\u00b165.18 85.76 GDA 89.83\u00b13.02 100.0 48.96\u00b121.03 22.13 TA-LBF 90.96\u00b12.63 100.0 8.79\u00b12.44 3.22 ImageNet FT 71.35 63.51\u00b11.29 100.0 507456.61\u00b134517.04 230032.32 TBT 71.12\u00b10.04 99.9 1138.34\u00b144.23 560.94 T-BFA 70.84\u00b10.30 88.9 40.23\u00b127.29 15.66 FSA 71.30\u00b10.04 100.0 449.70\u00b1106.42 8.49 GDA 71.30\u00b10.05 100.0 20.01\u00b16.04 1.47 TA-LBF 71.30\u00b10.04 100.0 8.48\u00b12.52 1.11 1 5 10 20 50 100 200 60 70 80 90 100 PA-ACC / ASR (%) 10 15 20 Nflip 5 10 15 20 25 30 k 75 80 85 90 95 100 PA-ACC / ASR (%) 0 5 10 15 20 25 30 Nflip 25 50 100 200 400 800 N 85 90 95 100 PA-ACC / ASR (%) 0 5 10 Nflip ASR PA-ACC Nflip Figure 2: Results of TA-LBF with different parameters \u03bb, k, and the number of auxiliary samples N on CIFAR-10. Regions in shadow indicate the standard deviation of attacking the 1,000 images. Resistance to Larger Model Capacity. Previous studies (He et al., 2020; Rakin et al., 2020b) observed that increasing the network capacity can improve the robustness against the bit-\ufb02ip based attack. Accordingly, we evaluate all attack methods against the models with a larger capacity using the 8-bit quantized ResNet on both datasets. Similar to the strategy in (He et al., 2020), we increase the model capacity by varying the network width (i.e., 2\u00d7 width in our experiments). All settings of our method are the same as those used in Section 4.1. The results are presented in Table 2. We observe that all methods require more bit-\ufb02ips to attack the model with the 2\u00d7 width. To some extent, it demonstrates that the wider network with the same architecture is more robust against the bit-\ufb02ip based attack. However, our method still achieves a 100% ASR with the fewest N\ufb02ip and \u2206N\ufb02ip. Moreover, when comparing the two defense methods, we \ufb01nd that piece-wise clustering performs better than the model with a larger capacity in terms of \u2206N\ufb02ip. However, piece-wise clustering training also causes the accuracy decrease of the original model (e.g., from 92.16% to 91.01% on CIFAR-10). We provide more results in attacking models with defense under different settings in Appendix F. 8 \fPublished as a conference paper at ICLR 2021 Original ACC=98.0% FSA PA-ACC=89.4% Nflip=97 GDA PA-ACC=61.3% Nflip=9 TA LBF(ours) PA-ACC=91.6% Nflip=7 Class 0 Class 1 (Target Class) Class 2 Class 3 (Source Class) Attacked Sample Figure 3: Visualization of decision boundaries of the original model and the post attack models. 
The attacked sample from Class 3 is misclassi\ufb01ed into the Class 1 by FSA, GDA, and our method. 4.4 ABLATION STUDY We perform ablation studies on parameters \u03bb and k, and the number of auxiliary samples N. We use the 8-bit quantized ResNet on CIFAR-10 as the representative for analysis. We discuss the attack performance of TA-LBF under different values of \u03bb while k is \ufb01xed at 20, and under different values of k while \u03bb is \ufb01xed at 10. To analyze the effect of N, we con\ufb01gure N from 25 to 800 and keep other settings the same as those in Section 4.1. The results are presented in Fig. 2. We observe that our method achieves a 100% ASR when \u03bb is less than 20. As expected, the PA-ACC increases while the ASR decreases along with the increase of \u03bb. The plot of parameter k presents that k can exactly limit the number of bit-\ufb02ips, while other attack methods do not involve such constraint. This advantage is critical since it allows the attacker to identify limited bits to perform an attack when the budget is \ufb01xed. As shown in the \ufb01gure, the number of auxiliary samples less than 200 has a marked positive impact on the PA-ACC. It\u2019s intuitive that more auxiliary samples can lead to a better PA-ACC. The observation also indicates that TA-LBF still works well without too many auxiliary samples. 4.5 VISUALIZATION OF DECISION BOUNDARY To further compare FSA and GDA with our method, we visualize the decision boundaries of the original and the post attack models in Fig. 3. We adopt a four-layer Multi-Layer Perceptron trained with the simulated 2-D Blob dataset from 4 classes. The original decision boundary indicates that the original model classi\ufb01es all data points almost perfectly. The attacked sample is classi\ufb01ed into Class 3 by all methods. Visually, GDA modi\ufb01es the decision boundary drastically, especially for Class 0. However, our method modi\ufb01es the decision boundary mainly around the attacked sample. Althoug FSA is comparable to ours visually in Fig. 3, it \ufb02ips 10\u00d7 bits than GDA and TA-LBF. In terms of the numerical results, TA-LBF achieves the best PA-ACC and the fewest N\ufb02ip. This \ufb01nding veri\ufb01es that our method can achieve a successful attack even only tweaking the original classi\ufb01er. 5" + }, + { + "url": "http://arxiv.org/abs/1903.05965v2", + "title": "Rectified Decision Trees: Towards Interpretability, Compression and Empirical Soundness", + "abstract": "How to obtain a model with good interpretability and performance has always\nbeen an important research topic. In this paper, we propose rectified decision\ntrees (ReDT), a knowledge distillation based decision trees rectification with\nhigh interpretability, small model size, and empirical soundness. Specifically,\nwe extend the impurity calculation and the pure ending condition of the\nclassical decision tree to propose a decision tree extension that allows the\nuse of soft labels generated by a well-trained teacher model in training and\nprediction process. It is worth noting that for the acquisition of soft labels,\nwe propose a new multiple cross-validation based method to reduce the effects\nof randomness and overfitting. These approaches ensure that ReDT retains\nexcellent interpretability and even achieves fewer nodes than the decision tree\nin the aspect of compression while having relatively good performance. 
Besides,\nin contrast to traditional knowledge distillation, back propagation of the\nstudent model is not necessarily required in ReDT, which is an attempt of a new\nknowledge distillation approach. Extensive experiments are conducted, which\ndemonstrates the superiority of ReDT in interpretability, compression, and\nempirical soundness.", + "authors": "Jiawang Bai, Yiming Li, Jiawei Li, Yong Jiang, Shutao Xia", + "published": "2019-03-14", + "updated": "2020-08-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction Random forests is a typical ensemble learning method, where a large number of randomized decision trees are constructed and the results from all trees are combined for the \ufb01nal prediction of the forest. Since its introduction in [Breiman, 2001], random forests and its several variants [Friedman, 2001; Chen and Guestrin, 2016] have been widely used in many \ufb01elds, such as deep learning [Zhou and Feng, 2017; Feng and Zhou, 2018] and even outlier detection [Liu et al., 2008]. In addition to its application, its theoretical properties have also been extensively studied. [Denil et al., 2014; Scornet et al., 2015]. \u2217equal contribution. However, although those complicated algorithms, such as random forests and GBDT, reach great success in many aspects, this high prediction performance makes considerable sacri\ufb01ces of interpretability. The essential procedures of ensemble approaches cause this decline. For example, comparing to decision trees, the bootstrap and voting process of random forests makes the predictions much more dif\ufb01cult to explain. On the contrary, the decision trees are known to have the best interpretability among all machine learning algorithms yet with relatively lousy performance. Besides, forest-based algorithms or even deep neural networks (DNN) usually require much larger storage than decision trees, which is unacceptable especially when the model is set on a personal device with strict storage limitations (such as a cellular device). This con\ufb02ict between empirical soundness and interpretability with \ufb02exible storage continuously drives researchers. To address these problems, using a tree ensemble to generate additional samples for the further construction of the decision tree is proposed by Breiman [Breiman and Shang, 1996], which can be regarded as the \ufb01rst attempt for this problem. In [Meinshausen, 2010], Node Harvest is introduced to simplify tree ensemble by using the shallow parts of the trees. The shortcoming of Node Harvest is that the simpli\ufb01ed derived model is still an ensemble and therefore the challenge of interpretation remains. Recently, a distillation-based method is proposed, where the soften labels are generated by welltrained DNN to create a more understandable model in the form of a soft decision trees [Frosst and Hinton, 2017]. However, since this method relies on the backpropagation of the soft decision trees, it cannot be used in the classical decision trees. Besides, the interpretability of the soft decision trees [Irsoy et al., 2012] is much weaker than the classical decision trees. In this paper, we propose recti\ufb01ed decision trees (ReDT), a knowledge distillation based decision trees recti\ufb01cation with high interpretability, empirical soundness and even has a smaller model size compared to the decision trees. 
The critical difference between ReDT and decision tree lies in the use of softening labels, which is the weighted average of soft labels (the output probability vector of a well-trained teacher model) and hard labels, in the process of building trees. Speci\ufb01cally, to construct a decision tree, the hard label is mainly involved in the two parts of the tree construc\ftion process: (1) calculating the change of impurity and (2) determining whether the node is pure in the stopping condition. In our method, we introduce soften labels into these processes. Firstly, we calculate the average of the soften labels in the node. The proportion of the samples with i-th category needed in the calculation of impurity criterion is redetermined using the value of the i-th dimension of the soften label. Secondly, since it is almost impossible for the mixed labels of all samples in a node to be the same, we propose to use a pseudo-category which is corresponding to the soften label of the sample. Then the original stopping condition can remain. In ReDT, the teacher model can be DNN or any other classi\ufb01cation algorithm and therefore ReDT is universal. In contrast to traditional knowledge distillation, back propagation of the student model is not necessarily required in ReDT, which can be regarded as an attempt of a new knowledge distillation approach. Besides, we propose a new multiple crossvalidation based method to reduce the effects of randomness and over\ufb01tting. The main contributions of this paper can be stated as follows: 1) We propose a decision trees extension, which is the \ufb01rst tree that allows training and predicting using soften labels; 2) The \ufb01rst universal back propagation-free distillation framework is proposed and 3) the empirical analysis of its mechanism is conducted; 4) We propose a new soften labels acquisition method based on multiple cross-validations to reduce the effects of randomness and over\ufb01tting; 5) Extensive experiments demonstrate the superiority of our approach in interpretability, compression, and empirical soundness. 2 Related Work The interpretability of complex machine learning models, especially ensemble approaches and deep learning, has been widely concerned. At present, the most widely used machine learning models are mainly forest-based algorithms and DNN, so their interpretability is of great signi\ufb01cance. There are a few previous studies on the interpretability of forest-based algorithms. The \ufb01rst work is done by Breiman [Breiman and Shang, 1996], who propose to use tree ensemble to generate additional samples for the further construction of a single decision tree. In [Meinshausen, 2010], Node harvest is proposed to simplify tree ensembles by using the shallow parts of the trees. Considering the simpli\ufb01cation of tree ensembles as a model selection problem, and using the Bayesian method for selection is also proposed in [Hara and Hayashi, 2018]. The interpretability research of DNN mainly on three aspects: visualizing the representations in intermediate layers of DNN [Zeiler and Fergus, 2014; Zhou et al., 2018], representation diagnosis [Yosinski et al., 2014; Zhang et al., 2018] and build explainable DNNs [Chen et al., 2016; Sabour et al., 2017]. Recently, a knowledge distillation based method is provided, which uses a trained DNN to create a more explainable model in the form of soft decision trees [Frosst and Hinton, 2017]. The compression of forest-based algorithms and DNN has also received extensive attention. 
A series of work focuses on pruning techniques for forest-based algorithms, whose idea is to reduce the size by removing redundant components while maintaining the predictive performance [Quinlan, 1993; Ren et al., 2015; Nan et al., 2016]. The idea of pruning is also widely used in the compression of DNN [Han et al., 2015b; He et al., 2017]. Recently, extensive researches have been conducted on compression methods based on coding or quantization. [Han et al., 2015a; Painsky and Rosset, 2016]. Recently, Knowledge distillation has been widely accepted as a compression method. The concept of knowledge distillation in the teacher-student framework by introducing the teacher\u2019s softened output is \ufb01rst proposed in [Hinton et al., 2015]. Since then, a series of improvements and applications of knowledge distillation have been proposed [Romero et al., 2015; Yim et al., 2017]. At present, almost all knowledge distillation focus on the compression of DNN and require the back-propagation of the student model. Besides, using knowledge distillation to distill DNN into a soften decision tree to achieve great interpretability and compressibility is recently proposed in [Frosst and Hinton, 2017]. This method can be regarded as the \ufb01rst attempt to apply knowledge distillation to interpretability. 3 The Proposed Method We present the recti\ufb01ed decision trees (ReDT) in this section. The main concepts of our proposed method are how to de\ufb01ne the important information of the teacher model (distilled knowledge) and how we use it in training the student model (i.e. the ReDT). Section 3.1 introduces the distilled knowledge that we further used in the construction of ReDT. Section 3.2 and 3.3 discuss the speci\ufb01c construction and prediction process of ReDT. An empirical analysis, which demonstrates why soften labels can reach better performance than hard labels in the construction of the decision tree, is provided in section 3.4. 3.1 Distilled Knowledge Let Dn represents a data set consisting of n i.i.d. observations. Each observation has the form (X, Y ), where X \u2208RD represents the D-dimensional features and Y \u2208 {1, \u00b7 \u00b7 \u00b7 , K} is the corresponding label of the observation. The label of a sample can be regarded a single sampling from a K-dimensional discrete distribution. Let hard label yhard denotes the one-hot representation of the label. (Kdimensional vector, where the value in the dimension corresponding to the category is 1 and the rest are all 0). From the perspective of probability, the training of the model can be considered as an approximation of the distribution of data. It is extremely dif\ufb01cult to recover the true distribution of (X, Y ) from the hard labels directly. In contrast, the output of a well-trained model consists of a significant amount of useful information compared to the original hard label itself. Inspired by this idea, we de\ufb01ne the soft label ysoft, which is the output probability vector of a welltrained model such as DNN, random forests and GBDT, as the distilled knowledge from the teacher model. This idea is also partly supported by [Hinton et al., 2015] where he used a softened version of the \ufb01nal output of a teacher network to teach information to a small student network. \fOnce a well-trained teacher model is given, the generation of the soft label is straightforward by directly outputting the probability of all training samples. However, the acquisition of teacher model is usually needed through training. 
The most straightforward idea is to train the teacher model using all training samples and output the soft label of those samples. However, the soft label obtained through this process has relatively poor quality due to the effects of randomness and over\ufb01tting. This problem does not exist in the previous knowledge distillation task since their training is carried out simultaneously rather than strictly one after the other, thanks for the teacher model and the student model can both be trained through back propagation. To address this problem, we propose a multiple cross-validation based methods to calculate soft labels. Speci\ufb01cally, if 5 times 5-fold cross validation is implemented, we \ufb01rst randomly divide the training set into \ufb01ve similarly sized sets, then using four sets of data for training, and the other set of data to predict (i.e. output its soft label). In each time, each sample is predicted once so that each sample will end up with 5 soft labels. And the \ufb01nal soft label is the average of all its predictions. 3.2 ReDT Construction In the proposed ReDT, comparing to the original decision tree, there are two main alterations including the calculation of impurity decrease and the stopping condition. In our method, we introduce soften label into these processes. Note that instead of using the soft label of samples directly, we use the mixed label ymixed, which is the weighted average of soft label and hard label with weight hyperparameter \u03b1 \u2208[0, 1]. That is, ymixed = \u03b1yhard + (1 \u2212\u03b1)ysoft. (1) The hyperparameter \u03b1 plays a role in regulating the proportion of using the soft label. The larger \u03b1, the smaller the proportion of the soft label in the mixed label. When \u03b1 = 1, the ReDT becomes Breiman\u2019s decision trees. The purpose of using mixed labels is to consider that the soft label may have a certain degree of error. By adjusting the hyperparameter \u03b1, we can obtain the soften label with suf\ufb01cient information and relative accuracy. Recall that in the classi\ufb01cation problem, the impurity decrease caused by splitting point v is denoted by I(v) = T (D) \u2212|Dl| |D| T (Dl) \u2212|Dr| |D| T (Dr), (2) where Dl, Dr are two children sets generated by D splitting at v, T (\u00b7) is the impurity criterion (e.g. Shannon entropy or Gini index). The \ufb01rst alteration in ReDT is the probability pi, which implies the proportion of the samples with i-th category, used in calculating the impurity decrease of a splitting point. Speci\ufb01cally, since each sample uses a soften label instead of a hard label, we calculate the average of the soften labels of all the samples in the node and \ufb01nally obtain a Kdimensional vector. At this time, pi is redetermined as the value of the i-th dimension of that vector. In other words, let y(j) mixed = (yj1, yj2, \u00b7 \u00b7 \u00b7 , yjK) denotes the mixed label of j-th training sample. pi of node N is calculated by pi = 1 |N| X y(j) mixed\u2208N yji, (3) where |N| denotes the number of samples in leaf node N. The second alteration is how to de\ufb01ne pure in the stopping condition. In the training process of original decision trees, if all samples in a node have a single category, the node is considered to be pure. At this point, the stopping condition is reached, and this node is no longer to split. However, in the ReDT, it is almost impossible for the mixed labels of all samples to be the same. 
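A minimal sketch of the soft/mixed label construction just described is given below, assuming X and y are NumPy arrays and using a random-forest teacher with 100 trees and 5 times 5-fold cross-validation as in the experimental setup; the value of alpha here is only a placeholder, since the paper selects it by grid search, and the column alignment of predict_proba with the sorted class set is assumed to hold because stratified folds contain every class.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold

def mixed_labels(X, y, alpha=0.1, n_splits=5, n_repeats=5, seed=0):
    # Out-of-fold soft labels from a forest teacher, averaged over the repeats,
    # then blended with one-hot hard labels as in Eq. (1).
    classes = np.unique(y)
    y_hard = np.eye(len(classes))[np.searchsorted(classes, y)]   # one-hot hard labels
    y_soft = np.zeros_like(y_hard)
    cv = RepeatedStratifiedKFold(n_splits=n_splits, n_repeats=n_repeats,
                                 random_state=seed)
    for tr, te in cv.split(X, y):
        teacher = RandomForestClassifier(n_estimators=100, random_state=seed)
        y_soft[te] += teacher.fit(X[tr], y[tr]).predict_proba(X[te])
    y_soft /= n_repeats                       # each sample is predicted n_repeats times
    return alpha * y_hard + (1.0 - alpha) * y_soft        # Eq. (1)

The node statistic p_i in Eq. (3) is then simply the coordinatewise mean of the returned rows belonging to that node; and because these rows are continuous probability vectors, no two samples are expected to carry identical mixed labels.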
Therefore, we use the category corresponding to the maximum probability in the mixed label of the sample as its pseudo-category ypseudo, i.e., ypseudo = arg max ymixed, (4) and then determining whether to continue to split based on original stopping condition with it. Algorithm 1 The training process of ReDT: ReDT () 1: Input: Training set D = {(X, ymixed)} calculated according to (1) and minimum leaf size k. 2: Output: The recti\ufb01ed decision tree T . 3: Calculate pseudo-category ypseudo of each sample in D by (4). 4: Determine whether the node is pure based on whether each sample in D has the same pseudo-category. 5: if |X| > k and the node is not pure then 6: Calculate the impurity decrease vector I according to equation (2). 7: Select the splitting point with maximum impurity reduction criterion. 8: The training set D correspondingly split into two child nodes, called Dl, Dr. 9: T.leftchild \u2190ReDT (Dl, k) 10: T.rightchild \u2190ReDT (Dr, k) 11: end if 12: Return: T . 3.3 Prediction Once the ReDT has grown based on the mixed label as described above, the predictions for a newly given sample can be made as follows. Suppose the unlabeled sample is x and the predicted label and predicted discrete probability distribution of that sample is \u02c6 y and P = ( \u02c6 p1, \u00b7 \u00b7 \u00b7 , \u02c6 pK) respectively. According to a series of decisions, x will eventually fall into a leaf node, assuming that node is V . The predicted distribution of x is the average of the mixed labels of all training samples falling into the leaf nodes V , i.e., P = ( \u02c6 p1, \u00b7 \u00b7 \u00b7 , \u02c6 pK) = 1 |V | X y(i) mixed\u2208V y(i) mixed, (5) where |V | denotes the number of samples in leaf node V . The predicted label of x is the one with biggest probability in P : \u02c6 y = arg max i \u02c6 pi. (6) \f3.4 Empirical Analysis The reason why soften labels rather than hard labels should be used can be further demonstrated from the perspective of the calculation of impurity and distribution approximation. The speci\ufb01c analyses are as follows: Lemma 1 (Integer Partition Lemma). Suppose there is an integer N, which is the sum of K integers n1, \u00b7 \u00b7 \u00b7 , nK, i.e., N = n1 + n2, + \u00b7 \u00b7 \u00b7 + nk. There are totally Ck\u22121 n+k\u22121 = (n+k\u22121)! (k\u22121)!n! possible values for the ordered pair (n1, \u00b7 \u00b7 \u00b7 , nK). Proof. This problem is equivalent to picking k \u22121 locations randomly from n+ k \u22121 locations. The result is trivial based on the basics of number theory. Lemma 1 indicates that for a K-classi\ufb01cation problem, if the node N contains N samples, then the impurity of this node has at most Ck\u22121 n+k\u22121 possible values. In other words, compared to soften label, the use of hard label limits the precision of the impurity of the nodes. This limitation has a great adverse effect on the selection of the split point, especially when the number of samples is relatively small. From another perspective, the improvement brought by soft labels is since it is tough to recover the distribution of (X, Y ) with hard labels directly, especially when the number of samples is relatively small. However, once the relatively correct soften a well-trained teacher model provides labels, a large amount of information of the distribution is contained in it. The use of this information about the distribution makes the decision surface offset towards the real position compared to when using the hard label. 
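To make the counting in Lemma 1 concrete: with N = 20 samples and K = 3 classes, hard labels allow at most C(22, 2) = 231 distinct class-proportion vectors (hence impurity values) for a node, whereas mixed labels are not confined to this grid. The prediction rule of Eqs. (5) and (6) is equally simple to state in code; the sketch below assumes leaf_mixed_labels is a hypothetical (|V|, K) array holding the mixed labels of the training samples stored in the leaf reached by the query sample x.

import numpy as np

def redt_leaf_predict(leaf_mixed_labels):
    # Average the mixed labels of the training samples in the leaf, then take arg max.
    P = np.asarray(leaf_mixed_labels).mean(axis=0)   # Eq. (5): predicted distribution
    return int(np.argmax(P)), P                      # Eq. (6): predicted label, plus P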
3.5 Comparision between DT, SDT and ReDT Although both soft decision trees (SDT) and ReDT are the extension of DT, there are many differences between them. In this section, we compare DT, SDT and ReDT from \ufb01ve aspects including (1) interpretability, (2) empirical soundness, (3) back-propagation needed, (4) soften label allowed and (5) compression, as shown in Table 1. The method that satis\ufb01es the aspect is marked by \u2713. It is worth noting that interpretability, empirical soundness, and compression are relative. For example, the interpretability of SDT is stronger than DNN but is much weaker than DT and ReDT. Besides, since back propagation of the student model is not necessarily required in ReDT, this new knowledge distillation approach can be easily extended to other model and preserving running ef\ufb01ciency. Table 1: Comparison between DT, SDT and ReDT. DT SDT ReDT Interpretability \u2713 \u2713 Empirical soundness \u2713 \u2713 Back-propagation needed \u2713 Soften label allowed \u2713 \u2713 Compression (small model size) \u2713 \u2713 4 Experiments 4.1 Con\ufb01guration For the DNN con\ufb01gurations, the experiments were conducted on the benchmark dataset MNIST [LeCun et al., 1998]. All networks were trained using Adam and an initial learning rate of 0.1. The learning rate was divided by 10 after epochs 20 and 40 (50 epochs in total). We examine a variety of DNN architectures including MLP, LeNet-5, and VGG-11, and use ReLU for activation function, cross-entropy for loss function. The MLP has two hidden layers, with 784 and 256 units respectively and dropout rate 0.5 for hidden layers. Besides, the temperature used in generating soft labels in DNN is set to 4 as suggested in [Hinton et al., 2015]. Table 2: Datasets description. DATA SET CLASS FEATURES INSTANCES ADULT 2 14 48842 CRX 2 15 690 EEG 2 15 14980 BANK 2 17 45211 GERMAN 2 20 1000 CMC 3 9 1473 CONNECT-4 3 42 67557 LAND-COVER 9 147 675 LETTER 26 15 20000 ISOLET 26 617 7797 All datasets involved in the evaluation of forestbased teacher are obtained from the UCI repository [Asuncion and Newman, 2017]. Their information are listed in Table 2. Besides, 70% data is used for training and other 30% is used for testing. Here we use random forests (RF) and GBDT as the teacher model. They are the representative of the bagging and boosting method in forest-based teacher respectively. We determine the value of \u03b1 by grid search in a step of 0.1 in the range [0, 1] and the implement of GBDT is refer on scikit-learn platform [Pedregosa et al., 2011]. The number of trees contained in both random forest and GBDT is all set to 100. Besides, the performance of the decision tree trained with hard labels is also provided as a benchmark. Compared with the classical decision tree, since the soft decision tree is more like a tree-shaped neural network and with much weaker interpretability, it is not compared as a benchmark in experiments. The Gini index was used in RF, DT and ReDT as the impurity measure and minimum leaf size k = 5 is set for both RF, GBDT, DT, and ReDT as suggested in [Breiman, 2001]. Besides, 5 times 5-fold cross-validation is used to calculate the soft label of the training set and Wilcoxons signedrank test [Dem\u02c7 sar, 2006] is carried out to test for difference between the results of the ReDT and those of decision trees at signi\ufb01cance level 0.05. Compared with decision trees, ReDT with better performance (higher accuracy or fewer number of nodes) is indicated in boldface. 
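For the DNN teachers configured above, the soft labels are read off a temperature-softened softmax with T = 4, following the citation of Hinton et al. (2015); a minimal sketch, where logits is a hypothetical (n_samples, K) array of pre-softmax teacher outputs:

import numpy as np

def soften(logits, T=4.0):
    # Temperature-softened softmax; T = 4 matches the setting quoted above.
    z = np.asarray(logits, dtype=float) / T
    z -= z.max(axis=1, keepdims=True)            # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)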
Those that had a statistically signi\ufb01cant difference from the decision tree are marked with \u201d\u2022\u201d. Besides, we carried out the experiment 10 times to reduce the effect of randomness. \fTable 3: Comparison of test accuracy of different forests-based teacher model. DATASET RF GBDT DT ReDT(RF) ReDT(GBDT) \u03b1\u2217(RF) \u03b1\u2217(GBDT) ADULT 86.54% 86.53% 81.86% 86.18%\u2022 86.16%\u2022 0.01 0.06 CRX 86.14% 86.09% 80.51% 85.46%\u2022 84.40%\u2022 0.08 0.11 EEG 81.50% 90.58% 82.88% 83.02% 83.01% 0.24 0.52 BANK 90.38% 90.41% 87.60% 90.11%\u2022 90.15%\u2022 0.06 0.03 GERMAN 76.60% 76.13% 68.37% 73.40%\u2022 72.67%\u2022 0.07 0.10 CMC 55.15% 55.66% 48.31% 55.05%\u2022 55.41%\u2022 0 0 CONNECT-4 75.38% 77.58% 71.73% 76.69%\u2022 76.02%\u2022 0.30 0.30 LAND-COVER 83.69% 83.80% 76.55% 77.59% 77.14% 0.54 0.37 LETTER 91.56% 93.61% 85.65% 86.01% 86.15% 0.9 0.9 ISOLET 93.68% 93.32% 79.83% 81.40%\u2022 81.77%\u2022 0.57 0.33 \u2022: ReDT is better than decision trees at a level of signi\ufb01cance 0.05. \u03b1\u2217: The average of all best \u03b1 for each experiment. Table 4: Comparison of the number of nodes of forests-based teacher distillation. DATASET RF GBDT DT ReDT(RF) ReDT(GBDT) ADULT 244832 1486 7869 2286\u2022 2023\u2022 CRX 6191 1336 103 48\u2022 65\u2022 EEG 125289 1489 1858 1948 1939 BANK 223906 1470 4302 1678\u2022 1603\u2022 GERMAN 11063 1404 227 140\u2022 172\u2022 CMC 16220 4206 630 202\u2022 275\u2022 CONNECT-4 470261 4426 18152 8813\u2022 8740\u2022 LAND-COVER 6100 9492 85 43\u2022 49\u2022 LETTER 168101 38631 2752 2464\u2022 2459\u2022 ISOLET 58905 32751 707 464\u2022 593\u2022 \u2022: ReDT is better than decision trees at a level of signi\ufb01cance 0.05. 4.2 DNNs Teacher We discuss the performance including test accuracy (ACC) and the number of nodes (NODE) of ReDT under different teacher models and compare it with DT and its teacher model in this section. Table 5: Comparison on MNIST. MLP LeNet-5 VGG-11 ACC (DNN) 98.33% 99.42% 99.49% ACC (DT) 87.55% NODE (DT) 5957 ACC (ReDT) 88.21% 88.57%\u2022 88.53%\u2022 NODE (ReDT) 5361\u2022 5173\u2022 5803\u2022 \u2022: ReDT is better than DT at a level of signi\ufb01cance 0.05. As shown in Table 5, although there is still a gap in ACC between ReDT and its teacher model since decision tree cannot learn the spatial relationships among the raw pixels, the ReDT have a remarkable improvement comparing to the original decision tree. Not to mention that in terms of compression, ReDT even has fewer nodes than DT (and therefore has a smaller model size). 4.3 Forests-based Teacher Table 3 and 4 shows the test accuracy of different forestbased teacher model and the number of nodes respectively. Regardless of which teacher model is used, the ReDT has a remarkable improvement in both ef\ufb01ciency and compression. Among all ten data sets, ReDT has higher test accuracy than the decision tree, and this improvement is signi\ufb01cant on seven of those data sets. Speci\ufb01cally, ReDT has achieved an increase of almost 5% accuracy compared to DT on half of the data sets. In particular, in the three data sets (Band, ADULT, and CONNECT-4), ReDT has similar performance to its teacher model. Also, the value of optimal \u03b1 seems to have some direct connection with the number of categories in the dataset. Speci\ufb01cally, data sets with more categories (such as LAND-COVER, ISOLET, and LETTER) generally have a larger optimal \u03b1. 
In other words, a large proportion of hard label needs to be included in the mixed label to give ReDT excellent performance. Two reasons may cause this: 1) The more categories, the more likely the soft labels will contain more error information; 2) The more categories, the higher the interference caused by error information contained in soft labels. Regardless of the reason, the number of categories of samples can be used to provide the initial intuition of \u03b1. In terms of compression, in nine of ten data sets, the number of nodes in ReDT is smaller than the decision tree. In other words, ReDT has a smaller model size than the decision tree, not to mention the teacher models, such as random forests and GBDT, which is usually more complicated. Overall, ReDT with the forest-based teacher has achieved a signi\ufb01cant improvement in both performance and compression. \fFigure 1: Visualization of key pixels for MNIST image classi\ufb01cation. (The key pixels are marked in red.) 4.4 Discussion In this section, We discuss the compression, interpretability, and the impact of hyperparameters \u03b1 on the model. Compression As we mentioned above, ReDT is an extension of a decision tree, and therefore the size of its model can still be measured by the number of nodes. Without loss of generality, we compare ReDT with Decision tree here. There are two advantages for such comparison: 1) The decision tree is almost the model requires the fewest number of nodes in the forest-based algorithms, not to mention its size is much smaller than the DNN or other complex algorithms. If the model has a relatively smaller size compared to the decision tree, then it must have excellent compression; 2) The size of the decision tree and ReDT model are both re\ufb02ected by the number of nodes, which is convenient for comparison. Without loss of generality, we use random forest and GBDT as the teacher model here. The compression rate ( Node(ReDT)/Node(DT) ) of multiple data sets under different hyperparameter \u03b1 as shown in Fig. 2. The smaller the compression rate, the smaller the model size of ReDT. It can be seen that the compression rate of almost every dataset is less than 1, which indicates that for all hyperparameter \u03b1, ReDT has a smaller model size than DT in most cases. In addition, as the hyperparameter \u03b1 increases, the compression rate has an upward trend. This is caused by the fact that the soft label carries a large amount of information about the distribution, whether it is correct or not, thus facilitating the decision tree to divide the data. The smaller the \u03b1, the more signi\ufb01cant the proportion of the soft label in the mixed label, therefore the smaller the size of the model. Thus, although there is no \u03b1 such that it can correspond to the highest test accuracy on all datasets (because this is closely related to complex factors such as the correctness of the soft label, dataset, etc.), using \u03b1 to adjust the size of the model is a good choice. Besides, as shown in the \ufb01gure, the growth of the compression rate of the data set with more categories is signi\ufb01cantly slower. Regardless of the reason, this opposite tendency between compression rate and accuracy (the more the categories, the larger the \u03b1\u2217) allows ReDT to have a smaller model size when achieving empirical soundness. Interpretability The decision tree makes the prediction depending on the leaf node to which the input x belongs. 
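Reading this leaf, and the features tested on the way to it, off a fitted tree is straightforward in the scikit-learn API used for the experiments; the helper below is our own illustrative sketch, not code released with the paper.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def key_features(tree, x):
    # Indices of the features tested on x's root-to-leaf path in a fitted
    # scikit-learn tree; for image data these are the "key pixels" to highlight.
    x = np.asarray(x).reshape(1, -1)
    visited = tree.decision_path(x).indices       # node ids visited by x
    feat = tree.tree_.feature                     # feature tested at each node (<0 at leaves)
    return sorted({int(feat[n]) for n in visited if feat[n] >= 0})

# usage sketch (X_train, y_train, X_test are assumed to exist):
# clf = DecisionTreeClassifier(min_samples_leaf=5).fit(X_train, y_train)
# pixels = key_features(clf, X_test[0])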
The corresponding leaf (a) (b) Figure 2: Compression rate under different teacher model. (a) Random forests teacher; (b) GBDT teacher. node is determined by traversing the tree from the root. Although its path can represent the decision of the model, when the sample is in high dimensions, especially when it is a picture or speech, a single category will have a large number of different decision paths, and therefore it is dif\ufb01cult to explain the output by simply listing its path. To address this problem, we propose to highlight the key features (pixels) used in the sample\u2019s decision path. Here, we use MNIST as an example to demonstrate the powerful interpretability of ReDT. We randomly select three samples for each number to predict. The pixels contained in its decision path, the key pixels, are marked in red, as shown in Fig. 1. Although we don\u2019t have the ground true decision path, since the key pixel is almost the outline of the number, so the prediction is with highly interpretability and con\ufb01dence. 5" + } + ], + "Lihua Zhang": [ + { + "url": "http://arxiv.org/abs/2306.12013v1", + "title": "Mixed-norm Herz-slice Spaces and Their Applications", + "abstract": "We introduce mixed-norm Herz-slice spaces unifying classical Herz spaces and\nmixed-norm slice spaces, establish dual spaces and the block decomposition, and\nprove that the boundedness of Hardy-Littlewood maximal operator on mixed-norm\nHerz-slice spaces.", + "authors": "Lihua Zhang, Jiang Zhou", + "published": "2023-06-21", + "updated": "2023-06-21", + "primary_cat": "math.FA", + "cats": [ + "math.FA", + "42B35 35R35 46E30 42B25" + ], + "main_content": "Introduction In 1964, Beurling [6] originally introduced the Herz space Au(Rn), which is the original version of the Herz space of non-homogeneous type. In 1968, the Herz space Ku(Rn) has been studied systematically by Herz [9]. In the 1990\u2019s, the homogeneous Herz space ( \u02d9 K\u03b2,s u )(Rn) and the non-homogeneous Herz space (K\u03b1,p q )(Rn) are introduced by Lu and Yang [16]. In recent years, the homogeneous Herz space ( \u02d9 K\u03b2,s u )(Rn) and the non-homogeneous Herz 1 Lihua Zhang, College of Mathematics and System Science, Xinjiang University, No. 777, Huarui Street, Shuimogou District, Urumqi City, Xinjiang Uygur Autonomous Region, China e-mail: hanyehua666@163.com 2 Jiang Zhou, College of Mathematics and System Science, Xinjiang University, No. 777, Huarui Street, Shuimogou District, Urumqi City, Xinjiang Uygur Autonomous Region, China e-mail: zhoujiang@xju.edu.cn 1 \f2 Lihua Zhang and Jiang Zhou space (K\u03b2,s u )(Rn) was investigated in harmonic analysis, see, [8, 20, 22]. For more research about Herz spaces in PDE and harmonic analysis, we refer to [3, 10, 19] and so on. In 1961, the mixed-norm Lebesgue space L\u20d7 u (Rn) with \u20d7 u = (u1, . . . , un) \u2208 (0, \u221e]n was researched by Benedek and Panzone [7], which can go back to [11] in 1960. Since function spaces with mixed norms have a broader purpose on PDE [2, 14], more people renewed interest. In recent years, to gain the convergence of the Fourier transform, in 2021, Huang, Weisz, Yang, and Yuan [12] introduced the mixed-Herz space \u02d9 E\u2217 \u20d7 u\u2032(Rn). In 2021, Wei [24] established the boundedness of the Hardy\u2013Littlewood maximal operators on mixed-Herz space ( \u02d9 K\u03b2,s \u20d7 u )(Rn). 
In 2022, Zhang and Zhou [26] introduced the mixed-norm amalgam space (L\u20d7 u, L\u20d7 v)(Rn), and the boundedness of the Hardy\u2013Littlewood maximal operators on the mixed-norm amalgam space (L\u20d7 u, L\u20d7 s)(Rn) was studied, we can see [18]. For more meticulous research about mixed-norm spaces, we refer to [1, 15, 27] and so on. Very recently, Auscher and Mourgoglou [5] introduced the slice space (Eu t ) (Rn). In 2017, Auscher and Prisuelos-Arribas [4] studied many classical operators of harmonic analysis on slice space (Eu v )t (Rn). In 2022, Lu, Zhou, and Wang [17] introduced the Herz-slice spaces. Based on these results, we introduce mixed-norm Herz-slice space ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn). Now, we elaborate on the context of this paper. In sect 2, we introduce the homogeneous mixed-norm Herz-slice space ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) and the non-homogeneous mixed-norm Herz-slice space (KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn). In sect 3, we establish the dual spaces and study some properties on these spaces. The block decomposition is obtained on mixed-norm Herz-slice spaces in sect 4. In sect 5, we estimate the boundedness of HardyLittlewood maximal operator on mixed-norm Herz-slice spaces. The symbol K (Rn) refers to the set of all measurable functions on Rn. We use 1G to denote the characteristic function and |G| is the Lebesgue measure of a measurable set G. Denote B(y, \u03bb) the open ball with centered at y with the radius \u03bb. Let Bm = B(0, 2m) = {x \u2208Rn : |x| \u22642m}. Denote Sm := Bm \\ Bm\u22121 for any m \u2208Z, 1m = 1Sm for m \u2208Z, and 1S0 = 1B0, where 1m is the characteristic function of Sm. The letters \u20d7 u,\u20d7 v, . . . will denote n-tuples of the numbers in [0, \u221e], \u20d7 u = (u1, . . . , un) ,\u20d7 v = (v1, . . . , vn), n \u2208N. 0 < \u20d7 u < \u221emeans that 0 < ui < \u221efor each i = 1, \u00b7 \u00b7 \u00b7 , n. Furthermore, for \u20d7 u = (u1, . . . , un) and \u03b7 \u2208R, let 1 \u20d7 u = \u0012 1 u1 , . . . , 1 un \u0013 , \u20d7 u \u03b7 = \u0012u1 \u03b7 , . . . , un \u03b7 \u0013 , \u2212 \u2192 u\u2032 = (u\u2032 1, . . . , u\u2032 n) . Where u\u2032 i = ui ui\u22121 is a conjugate exponent of ui, i = 1, ..., n. For di\ufb00erent positive constants we use C to denote them. We write \u03c6 \u2272\u03c8, \u03c6 \u2264C\u03c8 mean 2 \fMixed-norm Herz-slice spaces and their applications 3 that for some constant C > 0, and \u03c6 \u223c\u03c8 means that \u03c6 \u2272\u03c8 and \u03c8 \u2272\u03c6. 2 Main de\ufb01nitions Let us start by recalling some basic essential notions. De\ufb01nition 2.1. ([16]) Let \u03b2 \u2208R and s, u \u2208(0, \u221e]. The homogeneous Herz space ( \u02d9 K\u03b2,s u )(Rn) is de\ufb01ned by ( \u02d9 K\u03b2,s u )(Rn) := n f \u2208Lu loc (Rn \\ {0}) : \u2225f\u2225( \u02d9 K\u03b2,s u )(Rn) < \u221e o , (2.1) where \u2225f\u2225( \u02d9 K\u03b2,s u )(Rn) := \" \u221e X k=\u2212\u221e 2k\u03b2s \u2225f1Sk\u2225s Lu(Rn) # 1 s , with the usual modi\ufb01cation when s = \u221eor u = \u221e. And the non-homogeneous Herz space (K\u03b2,s u )(Rn) is de\ufb01ned by (K\u03b2,s u )(Rn) := n f \u2208Lu loc(Rn) : \u2225f\u2225(K\u03b2,s u )(Rn) < \u221e o , (2.2) where \u2225f\u2225(K\u03b2,s u )(Rn) := \" \u221e X k=0 2k\u03b2s \u2225f1Sk\u2225s Lu(Rn) # 1 s , with the usual modi\ufb01cation when s = \u221eor u = \u221e. De\ufb01nition 2.2. ([4]) Let u, v, t \u2208(0, \u221e). 
The slice space (Eu v )t(Rn) is de\ufb01ned by (Eu v )t (Rn) := \b f \u2208L1 loc (Rn) : \u2225f\u2225(Eu v )t(Rn) < \u221e \t , where \u2225f\u2225(Eu v )t(Rn) := \r \r \r \r \r \u0014 1 |B(\u00b7, t)| Z B(\u00b7,t) |f(y)|v dy \u0015 1 v \r \r \r \r \r Lu(Rn) , with the usual modi\ufb01cation when u = \u221e. De\ufb01nition 2.3. ([7]) Let \u20d7 u \u2208(0, \u221e]n. The mixed-norm Lebesgue space L\u20d7 u(Rn) is de\ufb01ned to be set of all measurable functions f \u2208K (Rn) such that \u2225f\u2225L\u20d7 u(Rn) := (Z R \u00b7 \u00b7 \u00b7 \u0014Z R |f(x1, . . . , xn)|u1dx1 \u0015u2/u1 \u00b7 \u00b7 \u00b7 dxn )1/un < \u221e. 3 \f4 Lihua Zhang and Jiang Zhou Obviously, if u1 = \u00b7 \u00b7 \u00b7 = un = u, we simplify L\u20d7 u(Rn) to classical Lebesgue spaces Lu and \u2225f\u2225Lu(Rn) = \u0014Z Rn |f(x)|u dx \u0015 1 u . When ui = \u221e, i = 1, \u00b7 \u00b7 \u00b7 , n, then we make the appropriate modi\ufb01cations. De\ufb01nition 2.4. ([26]) Let t \u2208(0, \u221e) and \u20d7 v, \u20d7 u \u2208(1, \u221e)n. The mixed amalgam spaces (E\u20d7 u \u20d7 v )t(Rn) is de\ufb01ned as the set of all measurable functions f satisfy f \u2208L1 loc (Rn), (E\u20d7 u \u20d7 v )t(Rn) := ( f : \u2225f\u2225(E\u20d7 u \u20d7 v )t(Rn) = \r \r \r \r \u2225f1B(\u00b7,t)\u2225L\u20d7 v(Rn) \u22251B(\u00b7,t)\u2225L\u20d7 v(Rn) \r \r \r \r L\u20d7 u(Rn) < \u221e ) , with the usual modi\ufb01cation for ui = \u221e, i = 1, \u00b7 \u00b7 \u00b7 , n. Now, we give the homogeneous mixed-norm Herz-slice space ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) and the non-homogeneous mixed-norm Herz-slice space (KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn). De\ufb01nition 2.5. Let \u03b2 \u2208R, t \u2208(0, \u221e), s \u2208(0, \u221e] and \u20d7 v, \u20d7 u \u2208(1, \u221e)n. (1)The homogeneous mixed-norm Herz-slice space ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) is de\ufb01ned by ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) := n f \u2208L1 loc (Rn) : \u2225f\u2225( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) < \u221e o , (2.3) where \u2225f\u2225( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) := \" \u221e X k=\u2212\u221e 2k\u03b2s \u2225f1Sk\u2225s (E\u20d7 u \u20d7 v )t(Rn) # 1 s , with the usual modi\ufb01cation made when s = \u221e. (2)The non-homogeneous mixed-norm Herz-slice space (KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) is de\ufb01ned by (KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) := n f \u2208L1 loc (Rn) : \u2225f\u2225(KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) < \u221e o , (2.4) where \u2225f\u2225(KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) := \" \u221e X k=0 2k\u03b2s \u2225f1Sk\u2225s (E\u20d7 u \u20d7 v )t(Rn) # 1 s , with the usual modi\ufb01cation made when s = \u221e. Remark 2.1. Let \u03b2 \u2208R, t \u2208(0, \u221e), s \u2208(0, \u221e] and \u20d7 v, \u20d7 u \u2208(1, \u221e)n. (1) \u22000 < s < \u221e, we have \u2225|f|r\u22251/r ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) = \u2225f\u2225( \u02d9 KE\u03b2/r,rs r\u20d7 u,r\u20d7 v )t(Rn). (2.5) 4 \fMixed-norm Herz-slice spaces and their applications 5 (2) ( \u02d9 KE0,s \u20d7 u,\u20d7 v)t(Rn) = (E\u20d7 u \u20d7 v )t(Rn), when u1 = \u00b7 \u00b7 \u00b7 = un, v1 = \u00b7 \u00b7 \u00b7 = vn, (E\u20d7 u \u20d7 v )t(Rn) = (Eu v )t(Rn). (3) Let \u03b2 \u2208R, if \u20d7 v = \u20d7 u, ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) = ( \u02d9 K\u03b2,s \u20d7 u )(Rn) and (KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) = (K\u03b2,s \u20d7 u )(Rn); obviously, when u1 = \u00b7 \u00b7 \u00b7 = un, \u02d9 K\u03b2,s \u20d7 u (Rn) = \u02d9 K\u03b2,s u (Rn), K\u03b2,s \u20d7 u (Rn) = K\u03b2,s u (Rn). 
In what follows, we give some preliminaries on ball quasi-Banach function spaces introduced in [21]. For any x \u2208Rn and \u03bb \u2208(0, \u221e), let B(x, \u03bb) := {y \u2208Rn : |x \u2212y| < \u03bb} and B := {B(x, \u03bb) : x \u2208Rn and \u03bb \u2208(0, \u221e)} . (2.6) De\ufb01nition 2.6. A quasi-Banach space X \u2282K (Rn) is called a ball quasiBanach function space if it satis\ufb01es (1) \u2225\u03c6\u2225X = 0 implies that \u03c6 = 0 almost everywhere; (2) \u03c8| \u2264|\u03c6| almost everywhere implies that \u2225\u03c8\u2225X \u2264\u2225\u03c6\u2225X; (3) 0 \u2264\u03c6m \u2191\u03c6 almost everywhere implies that \u2225\u03c6m\u2225X \u2191\u2225\u03c6\u2225X; (4) B \u2208B implies that 1B \u2208X, where B is as in (2.6). Moreover, a ball quasi-Banach function space X is called a ball Banach function space if the norm of X satis\ufb01es the triangle inequality: (5) for any \u03c6, \u03c8 \u2208X, \u2225\u03c6 + \u03c8\u2225X \u2264\u2225\u03c6\u2225X + \u2225\u03c8\u2225X, (2.7) and, for any B \u2208B, there exists a positive constant C(B), depending on B, such that, (6) for any \u03c6 \u2208X Z B |\u03c6(x)|dx \u2264C(B)\u2225\u03c6\u2225X. (2.8) Let us recall the notion of the Hardy-Littlewood maximal operator M . De\ufb01nition 2.7. For any \u03c6 \u2208L1 loc (Rn) and x \u2208Rn, we de\ufb01ne the HardyLittlewood maximal function M (\u03c6) by M (\u03c6)(x) := sup B 1 |B| Z B |\u03c6(y)|dy, (2.9) where the supremum is taken over all balls B \u2208B containing x. 5 \f6 Lihua Zhang and Jiang Zhou 3 Properties of mixed-norm Herz-slice spaces In this section, we \ufb01rst present the dual spaces of mixed-norm Herzslice spaces, then show some elementary properties on mixed-norm Herz-slice spaces. Before statement the dual spaces of mixed-norm Herz-slice spaces, we \ufb01rst recall H\u00a8 older\u2019s inequality on (E\u20d7 u \u20d7 v )t(Rn). Lemma 3.1. ([26]) Let t \u2208(0, \u221e) and \u20d7 v, \u20d7 u \u2208(1, \u221e)n. If \u03c6 \u2208(E\u20d7 u \u20d7 v )t(Rn) and \u03c8 \u2208(E \u20d7 u\u2032 \u20d7 v\u2032 )t(Rn), then \u03c6\u03c8 is integrable and \u2225\u03c6\u03c8\u2225L1(Rn) \u2264\u2225\u03c6\u2225(E\u20d7 u \u20d7 v )t(Rn)\u2225\u03c8\u2225(E \u20d7 u\u2032 \u20d7 v\u2032 )t(Rn), where 1/\u20d7 u + 1/\u20d7 u\u2032 = 1/\u20d7 v + 1/\u20d7 v\u2032 = 1. We began to prove the dual space of ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn). Theorem 3.1. Let \u03b2 \u2208R, t, s \u2208(0, \u221e) and \u20d7 v, \u20d7 u \u2208(1, \u221e)n. The dual space of ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) is \u0010\u0010 \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v \u0011 t (Rn) \u0011\u2217 = \uf8f1 \uf8f2 \uf8f3 \u0010 \u02d9 KE\u2212\u03b2,s\u2032 \u20d7 u\u2032,\u20d7 v\u2032 \u0011 t (Rn), 1 < s < \u221e, \u0010 \u02d9 KE\u2212\u03b2,\u221e \u20d7 u\u2032,\u20d7 v\u2032 \u0011 t (Rn), 0 < s \u22641. Proof. Let 1 < s < \u221eand L \u2208( \u02d9 KE\u2212\u03b2,s\u2032 \u20d7 u\u2032,\u20d7 v\u2032 )t(Rn). For any g \u2208( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn), we de\ufb01ne (L, g) := Z Rn L(x)g(x) dx = \u221e X l=\u2212\u221e Z Sl L(x)g(x) dx. By Lemma 3.1, we have |(L, g)| \u2264 \" \u221e X l=\u2212\u221e 2\u2212l\u03b2s\u2032\u2225L\u2225s\u2032 (E \u20d7 u\u2032 \u20d7 v\u2032 )t(Sl) # 1 s\u2032 \" \u221e X l=\u2212\u221e 2l\u03b2s\u2225g\u2225s (E\u20d7 u \u20d7 v )t(Sl) # 1 s = \u2225L\u2225( \u02d9 KE\u2212\u03b2,s\u2032 \u20d7 u\u2032,\u20d7 v\u2032 )t(Rn)\u2225g\u2225( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn). 
This indicates that L \u2208(( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn))\u2217. Let L \u2208(( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn))\u2217. For any l \u2208Z and gl \u2208(E\u20d7 u \u20d7 v )t(Rn), write e gl := gl1Sl, then e gl \u2208( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn), \u2225gl\u2225( \u02d9 KE\u03b2,w \u20d7 u,\u20d7 v )t(Rn) = 2l\u03b1\u2225e gl\u2225(E\u20d7 u \u20d7 v )t(Rn). Let (Ll, gl) := (L, e gl). We could easily know that Ll \u2208(E \u20d7 u\u2032 \u20d7 v\u2032 )t(Rn) and \u2225Ll\u2225(E \u20d7 u\u2032 \u20d7 v\u2032 )t(Rn) \u22642l\u03b2\u2225L\u2225( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn))\u2217. 6 \fMixed-norm Herz-slice spaces and their applications 7 Additionally, for any g \u2208( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn), we have g1Sl \u2208(E\u20d7 u \u20d7 v )t(Rn). Then, when given P, Q \u2208N, Q X l=\u2212P (Ll, g1Sl) = L, Q X l=\u2212P g1Sl ! . For any l \u2208Z, take g\u2032 l \u2208(E\u20d7 u \u20d7 v )t(Rn) with supp (g\u2032 l) \u2282Sl and \u2225f \u2032 l\u2225(E\u20d7 u \u20d7 v )t(Rn) = 2\u2212l\u03b2 such that (Ll, g\u2032 l) \u22652\u2212l\u03b2 \u2225Ll\u2225(E \u20d7 u\u2032 \u20d7 v\u2032 )t(Rn) \u2212\u03b4l, where \u03b4l > 0 is decided subsequently. Let gl := \u0010 2\u2212l\u03b2\u2225Ll\u2225(E \u20d7 u\u2032 \u20d7 v\u2032 )t(Rn) \u0011s\u2032\u22121 g\u2032 l. For any given \u03b4 \u2208(0, \u221e) , choose \u03b4l > 0 small enough such that (Ll, gl) + 2\u2212|l|\u03b4 \u22652\u2212l\u03b2s\u2032\u2225Ll\u2225s\u2032 (E \u20d7 u\u2032 \u20d7 v\u2032 )t(Rn) = 2l\u03b2s \u2225gl\u2225s (E\u20d7 u \u20d7 v )t(Rn) . Then we easily get Q X l=\u2212P 2\u2212l\u03b2s\u2032 \u2225Lk1Sl\u2225s\u2032 (E \u20d7 u\u2032 \u20d7 v\u2032 )t(Rn) \u22644\u03b5 + L, Q X l=\u2212P gl1Sl ! \u22644\u03b5 + \u2225L\u2225(( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn))\u2217 \r \r \r \r \r Q X l=\u2212P gl \r \r \r \r \r ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) \u22644\u03b5 + \u2225L\u2225(( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn))\u2217 Q X l=\u2212P 2l\u03b2p \u2225gl1Sl\u2225s (E\u20d7 u \u20d7 v )t(Rn) ! 1 s = 4\u03b5 + \u2225L\u2225(( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn))\u2217 Q X l=\u2212P 2\u2212l\u03b2s\u2032 \u2225Ll1Sl\u2225s\u2032 (E \u20d7 u\u2032 \u20d7 v\u2032 )t(Rn) ! 1 s . Using \u03b4 \u21920 and P, Q \u2192\u221e, we conclude that \" \u221e X l=\u2212\u221e 2\u2212l\u03b2s\u2032 \u2225Ll1Sl\u2225s\u2032 (E \u20d7 u\u2032 \u20d7 v\u2032 )t(Rn) # 1 s\u2032 \u2264\u2225L\u2225(( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn))\u2217. (3.1) 7 \f8 Lihua Zhang and Jiang Zhou De\ufb01ne e L(x) := \u221e X l=\u2212\u221e Ll(x)1Sl(x). From (3.1) we know that e T \u2208( \u02d9 KE\u2212\u03b2,s\u2032 \u20d7 u\u2032,\u20d7 v\u2032 )t(Rn). Furthermore, for any g \u2208 ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn), we can see that \u0010 e L, g \u0011 = \u221e X l=\u2212\u221e Ll(x)1Sl(x)g(x)dx, = \u221e X l=\u2212\u221e Z Sl Ll(x)g(x) dx = \u221e X l=\u2212\u221e (L, g1Sl) = (L, g). Thus, L is also in ( \u02d9 KE\u2212\u03b2,s\u2032 \u20d7 u\u2032,\u20d7 v\u2032 )t(Rn). So for 1 < s < \u221e, we will omit the details since the proof is similar. Based on the closed-graph theorem, we immediately receive the following result. Corollary 3.1. Let \u03b2 \u2208R, t, s \u2208(0, \u221e) and \u20d7 v, \u20d7 u \u2208(1, \u221e)n with 1/s+1/s\u2032 = 1/\u20d7 v + 1/\u20d7 v\u2032 = 1/\u20d7 u + 1/\u20d7 u\u2032 = 1. 
Then \u03c6 \u2208( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) if and only if Z Rn \u03c6(x)\u03c8(x) dx < \u221e, with \u03c8 \u2208( \u02d9 KE\u2212\u03b2,s\u2032 \u20d7 u\u2032,\u20d7 v\u2032 )t(Rn), and \u2225\u03c6\u2225( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) = sup \u001aZ Rn \u03c6(x)\u03c8(x) dx : \u2225\u03c8\u2225( \u02d9 KE\u2212\u03b2,s\u2032 \u20d7 u\u2032,\u20d7 v\u2032 )t(Rn) \u22641 \u001b . The next assertion follows in the same way as the previous one. Theorem 3.2. Let \u03b2 \u2208R, t, s \u2208(0, \u221e) and \u20d7 v, \u20d7 u \u2208(1, \u221e)n. The dual space of (KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) is \u0010 (KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) \u0011\u2217 = \uf8f1 \uf8f2 \uf8f3 \u0010 KE\u2212\u03b2,s\u2032 \u20d7 u\u2032,\u20d7 v\u2032 \u0011 t (Rn), 1 < s < \u221e, \u0010 KE\u2212\u03b2,\u221e \u20d7 u\u2032,\u20d7 v\u2032 \u0011 t (Rn), 0 < s \u22641. In addition, assume that \u03b2 \u2208R, t, s \u2208(0, \u221e), \u20d7 v, \u20d7 u \u2208(1, \u221e)n and 1/s+1/s\u2032 = 1, 1/\u20d7 v + 1/\u20d7 v\u2032 = 1/\u20d7 u + 1/\u20d7 u\u2032 = 1, then \u03c6 \u2208(KE\u03b2,s u,v)t(Rn) if and only if Z Rn \u03c6(x)\u03c8(x) dx < \u221e, 8 \fMixed-norm Herz-slice spaces and their applications 9 where \u03c8 \u2208(KE\u2212\u03b2,s\u2032 \u20d7 u\u2032,\u20d7 v\u2032 )t(Rn), and \u2225\u03c6\u2225(KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) = sup \u001aZ Rn \u03c6(x)\u03c8(x) dx : \u2225\u03c8\u2225(KE\u2212\u03b2,s\u2032 \u20d7 u\u2032,\u20d7 v\u2032 )t(Rn) \u22641 \u001b . Theorem 3.1 further implies that the following lemma, we acknowledge the details. Corollary 3.2. Let \u03b2 \u2208R, t, s \u2208(0, \u221e) and \u20d7 v, \u20d7 u \u2208(1, \u221e)n. If \u03c6 \u2208 ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) and \u03c8 \u2208( \u02d9 KE\u2212\u03b2,s\u2032 \u20d7 u\u2032,\u20d7 v\u2032 )t(Rn), then \u03c6\u03c8 is integrable and \u2225\u03c6\u03c8\u2225L1(Rn) \u2264\u2225\u03c6\u2225( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn)\u2225\u03c8\u2225( \u02d9 KE\u2212\u03b2,s\u2032 \u20d7 u\u2032,\u20d7 v\u2032 )t(Rn). Before we get into the properties of the investigation, let\u2019s make an important conclusion. Lemma 3.2. Let t \u2208(0, \u221e) and \u20d7 v, \u20d7 u \u2208(1, \u221e)n. The characteristic function on B(y0, \u03bb) with y0 \u2208Rn and 1 < \u03bb < \u221esatis\ufb01es \r \r1B(y0,\u03bb) \r \r (E\u20d7 u \u20d7 v )t(Rn) \u2272\u03bb Pn i=1 1 ui . (3.2) Proof. If t > R0, then \r \r1B(y0,\u03bb) \r \r (E\u20d7 u \u20d7 v )t(Rn) \u2264 \r \r \r \r \u22251B(y0,\u03bb)1B(\u00b7,t)\u2225L\u20d7 v \u22251B(\u00b7,t)\u2225L\u20d7 v \r \r \r \r L\u20d7 u \u2264 \r \r \r \r \u22251B(y0,\u03bb+t)\u2225L\u20d7 v1B(y0,\u03bb) \u22251B(\u00b7,t)\u2225L\u20d7 v \r \r \r \r L\u20d7 u \u2264(\u03bb + t) Pn i=1 1 vi t Pn i=1 1 vi \r \r1B(y0,\u03bb) \r \r L\u20d7 u\u2264\u2264C\u03bb Pn i=1 1 ui , if t \u2264R0, then \r \r1B(y0,\u03bb) \r \r (E\u20d7 u \u20d7 v )t(Rn) \u2264 \r \r \r \r \u22251B(y0,\u03bb)1B(\u00b7,t)\u2225L\u20d7 v \u22251B(\u00b7,t)\u2225L\u20d7 v \r \r \r \r L\u20d7 u \u2264 \r \r \r \r \u22251B(\u00b7,t)\u2225L\u20d7 v1B(y0,\u03bb+t) \u22251B(\u00b7,t)\u2225L\u20d7 v \r \r \r \r L\u20d7 u \u2264t Pn i=1 1 vi \u00b7 (2\u03bb) Pn i=1 1 ui t Pn i=1 1 vi \u2264C\u03bb Pn i=1 1 ui . This accomplishes the desired result. Remark 3.1. Let t \u2208(0, \u221e) and \u20d7 v, \u20d7 u \u2208(1, \u221e)n. A characteristic function on Sk satis\ufb01es \u22251Sk\u2225(E\u20d7 u \u20d7 v )t(Rn) \u2264\u22251Bk\u2225(E\u20d7 u \u20d7 v )t(Rn) \u22722k Pn i=1 1 ui . 
(3.3) Proposition 3.1. Let t \u2208(0, \u221e) and \u20d7 v, \u20d7 u \u2208(1, \u221e)n. The mixed-norm slice spaces (E\u20d7 u \u20d7 v )t(Rn) is a ball Banach function space. 9 \f10 Lihua Zhang and Jiang Zhou Before the proof of Proposition 3.1, we need some preliminary lemmas. Lemma 3.3. Let t \u2208(0, \u221e) and \u20d7 v, \u20d7 u \u2208(1, \u221e)n. Let \u03c6, \u03c8 \u2208K (Rn). Assume that |\u03c6| \u2264|\u03c8| almost everywhere on Rn, then \u2225\u03c6\u2225(E\u20d7 u \u20d7 v )t(Rn) \u2264\u2225\u03c8\u2225(E\u20d7 u \u20d7 v )t(Rn). Proof. Let all the signs be like this proposition. Let G := {x \u2208Rn : |\u03c8 (x)| < |\u03c6 (x)|} . Where x = (x1, \u00b7 \u00b7 \u00b7 , xn). Suppose that |\u03c6| \u2264|\u03c8| almost everywhere on Rn, then |G| = 0. Thus, for almost every x \u2208Rn, |\u03c6| \u2264|\u03c8| almost everywhere on Rn, by [23, Subsection 4.2], if |\u03c6| \u2264|\u03c8| almost everywhere on Rn, then \r \r\u03c6 (x) 1B(\u00b7,t) \r \r L\u20d7 v(Rn) \u2264 \r \r\u03c8 (x) 1B(\u00b7,t) \r \r L\u20d7 v(Rn) , namely \r \r\u03c6 (x) 1B(\u00b7,t) \r \r L\u20d7 v(Rn) \r \r1B(\u00b7,t) \r \r L\u20d7 v(Rn) \u2264 \r \r\u03c8 (x) 1B(\u00b7,t) \r \r L\u20d7 v(Rn) \r \r1B(\u00b7,t) \r \r L\u20d7 v(Rn) . Using De\ufb01nition 2.2, for almost every x \u2208Rn and any given \u20d7 u \u2208(1, \u221e)n, we easily obtain, \u2225\u03c6\u2225(E\u20d7 u \u20d7 v )t(Rn) \u2264\u2225\u03c8\u2225(E\u20d7 u \u20d7 v )t(Rn) . This accomplishes the desired result. Lemma 3.4. Let 0 < t < \u221eand \u20d7 v, \u20d7 u \u2208(1, \u221e)n. For any \u03c6 \u2208(E\u20d7 u \u20d7 v )t(Rn), assume that \u2225\u03c6\u2225(E\u20d7 u \u20d7 v )t(Rn) = 0, then \u03c6 = 0 almost everywhere. Proof. From Remark 2.1 and Lemma 3.3, for any \u03c6 \u2208(E\u20d7 u \u20d7 v )t(Rn), r \u2208(0, \u221e) and m \u2208N, we have \u2225|\u03c6|r1Sm\u22251/r (E\u20d7 u/r \u20d7 v/r )t(Rn) = \u2225\u03c61Sm\u2225(E\u20d7 u \u20d7 v )t(Rn) \u2264\u2225\u03c6\u2225(E\u20d7 u \u20d7 v )t(Rn) = 0. Using Lemma 3.1 and 3.2, it su\ufb03ces to prove that \u2225|\u03c6|r1Sm\u2225L1(Rn) \u2264\u2225|\u03c6|r1Sm\u2225(E\u20d7 u/r \u20d7 v/r )t(Rn) \u2225|\u03c6|r1Sm\u2225 (E \u20d7 u\u2032/r \u20d7 v\u2032/r )t(Rn) \u22640. We observe that \u2225|\u03c6|r\u2225L1(Rn) = 0 via the monotone convergence theorem, it follows from this, \u03c6 = 0 almost everywhere on Rn. The expected results were obtained. 10 \fMixed-norm Herz-slice spaces and their applications 11 Lemma 3.5. Let 0 < t < \u221eand \u20d7 v, \u20d7 u \u2208(1, \u221e)n. Assume that {\u03c6k}k\u2208N is measurable functions and \u03c6k \u22650 with k \u2208N, then \r \r \r lim k\u2192\u221e\u03c6k \r \r \r (E\u20d7 u \u20d7 v )t(Rn) \u2264lim k\u2192\u221e\u2225\u03c6k\u2225(E\u20d7 u \u20d7 v )t(Rn) . Proof. Let 0 < t < \u221eand \u20d7 v, \u20d7 u \u2208(1, \u221e)n. We know that L\u20d7 u(Rn) is ball Banach function spaces via [23, Subsection 4.2]. Then, we deduce that, when given \u20d7 v \u2208(1, \u221e)n \r \r \r \r lim k\u2192\u221e \u03c6k1Q(\u00b7,t) \r \r \r \r L\u20d7 v(Rn) \u2264lim k\u2192\u221e \r \r\u03c6k1Q(\u00b7,t) \r \r L\u20d7 v(Rn) . Namely \r \rlimk\u2192\u221e\u03c6k1Q(\u00b7,t) \r \r L\u20d7 v(Rn) \r \r1Q(\u00b7,t) \r \r L\u20d7 v(Rn) \u2264 limk\u2192\u221e \r \r\u03c6k1Q(\u00b7,t) \r \r L\u20d7 v(Rn) \r \r1Q(\u00b7,t) \r \r L\u20d7 v(Rn) . 
Using De\ufb01nition 2.2 we easily obtain, for any given \u20d7 u \u2208(1, \u221e)n, \r \r \r \r \r \r \rlimk\u2192\u221e\u03c6k1Q(\u00b7,t) \r \r L\u20d7 v(Rn) \r \r1Q(\u00b7,t) \r \r L\u20d7 v(Rn) \r \r \r \r \r L\u20d7 u(Rn) \u2264 \r \r \r \r \r limk\u2192\u221e \r \r\u03c6k1Q(\u00b7,t) \r \r L\u20d7 v(Rn) \r \r1Q(\u00b7,t) \r \r L\u20d7 v(Rn) \r \r \r \r \r L\u20d7 u(Rn) = lim k\u2192\u221e \r \r \r \r \r \r \r\u03c6k1Q(\u00b7,t) \r \r L\u20d7 v(Rn) \r \r1Q(\u00b7,t) \r \r L\u20d7 v(Rn) \r \r \r \r \r L\u20d7 u(Rn) . The expected results were obtained. Combining Lemma 3.4 and 3.5, we can get the following result. Corollary 3.3. Let t \u2208(0, \u221e) and \u20d7 v, \u20d7 u \u2208(1, \u221e)n. Assume that 0 \u2264\u03c6m \u2191\u03c6 almost everywhere as m \u2192\u221e, then \u2225\u03c6m\u2225(E\u20d7 u \u20d7 v )t(Rn) \u2191\u2225\u03c6\u2225(E\u20d7 u \u20d7 v )t(Rn) as m \u2192\u221e. Proof of Proposition 3.1. Let t \u2208(0, \u221e) and \u20d7 v, \u20d7 u \u2208(1, \u221e)n. By Lemma 3.4, 3.3, 3.2, and Corollary 3.3, we know that the space (E\u20d7 u \u20d7 v )t(Rn) satis\ufb01es (1), (2), (3) and (4) of De\ufb01nition 2.6. Proof of triangle inequality is analogue to that of L\u20d7 u(Rn). So, only need to check (5) of De\ufb01nition 2.6. by Lemma 3.1, 3.3 and Remark 3.1. Notice that, for any B(z, r) \u2208B, we deduce that \f \f \f \f Z Rn f(x)1B(x)dx \f \f \f \f \u2264\u2225f1B\u2225(E\u20d7 u \u20d7 v )t(Rn) \u22251B\u2225(E \u20d7 u\u2032 \u20d7 v\u2032 )t(Rn) \u2264C \u2225f\u2225(E\u20d7 u \u20d7 v )t(Rn) . The expected results were obtained. 11 \f12 Lihua Zhang and Jiang Zhou Proposition 3.2. Let \u03b2 \u2208R, t, s \u2208(0, \u221e) and \u20d7 v, \u20d7 u \u2208(1, \u221e)n. ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) and (KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) is a ball quasi-Banach function space if and only if \u03b2 \u2208 (\u2212Pn i=1 1/ui, \u221e) with i \u2208{1, . . . , n}. Remark 3.2. As a special case, Wei and Yan obtained Proposition 3.1 and Proposition 3.2 in [25], it is pointed out here we get Proposition 3.1 and Proposition 3.2 via a direct way rather than properties of ball Banach function spaces in [25]. To prove Proposition 3.2, we need some assertions. Lemma 3.6. Let \u03b2 \u2208R, t, s \u2208(0, \u221e) and \u20d7 v, \u20d7 u \u2208(1, \u221e)n. Let \u03c6, \u03c8 \u2208 K (Rn). Assume that |\u03c6| \u2264|\u03c8| almost everywhere on Rn, then \u2225\u03c6\u2225( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) \u2264\u2225\u03c8\u2225( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn), and \u2225\u03c6\u2225(KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) \u2264\u2225\u03c8\u2225(KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn). Proof. We are just proof the result of ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn). Let G := {x \u2208Rn : |\u03c8 (x)| < |\u03c6 (x)|} . Where x = (x1, \u00b7 \u00b7 \u00b7 , xn). Assume that |\u03c6(x)| \u2264|\u03c8(x)| almost everywhere on Rn, then |G| = 0. Therefore, |\u03c6 (x)| \u2264|\u03c8 (x)| almost everywhere on Rn. For almost every x \u2208Rn, by Lemma 3.3 and (2.3), we have \u2225\u03c6\u2225( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) \u2264\u2225\u03c8\u2225( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) . This completes the result of the proof. By Remark 3.1, we show that the following Corollary, we acknowledge the details. Corollary 3.4. Let \u03b2 \u2208R, t, s \u2208(0, \u221e) and \u20d7 v, \u20d7 u \u2208(1, \u221e)n. If \u03b2 \u2208 (\u2212Pn i=1 1/ui, \u221e) with i \u2208{1, . . . , n}. 
Then, for any m \u2208N, we have 1Sm \u2208( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) and 1Sm \u2208(KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn). Lemma 3.7. Let \u03b2 \u2208R, t, s \u2208(0, \u221e) and \u20d7 v, \u20d7 u \u2208(1, \u221e)n. For any f \u2208 ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn), if \u2225f\u2225( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) = 0, then f = 0 almost everywhere. Proof. From (1) of Remark 2.1 and Lemma 3.4, we then see that for any f \u2208(E\u20d7 u \u20d7 v )t(Rn), s \u2208(0, \u221e) and m \u2208N, \u2225|f|r1Sm\u22251/r ( \u02d9 KEr\u03b2,s/r r\u20d7 u/r,\u20d7 v/r)t(Rn) = \u2225f1Sm\u2225( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) \u2264\u2225f\u2225( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) = 0, 12 \fMixed-norm Herz-slice spaces and their applications 13 where Sm is as in Lemma 3.4, which, together with Lemma 3.2 and Corollary 3.1, we have \u2225|f|r1Sm\u2225L1(Rn) \u2264\u2225|f|r1Sm\u2225( \u02d9 KEr\u03b2,s/r \u20d7 u/r,\u20d7 v/r)t(Rn) \u22251Sm\u2225( \u02d9 KE\u2212r\u03b2,s\u2032/r \u20d7 u\u2032/r, \u20d7 v\u2032/r)t(Rn) \u22640. Thus f = 0 almost everywhere on Rn, because we can see that \u2225|f|r\u2225L1(Rn) = 0 via the monotone convergence theorem. This accomplishes the desired result. Lemma 3.8. Let \u03b2 \u2208R, t, s \u2208(0, \u221e) and \u20d7 v, \u20d7 u \u2208(1, \u221e)n. Assume that {\u03c6\u03c4}\u03c4\u2208N is measurable functions and \u03c6\u03c4 \u22650 with \u03c4 \u2208N, then \r \r \r lim \u03c4\u2192\u221e\u03c6\u03c4 \r \r \r ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) \u2264lim \u03c4\u2192\u221e\u2225\u03c6\u03c4\u2225( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) . Proof. By Lemma 3.5, if s \u2208(0, \u221e), using Fatou lemma and (2.3), we have \r \r \r lim \u03c4\u2192\u221e\u03c6\u03c4 \r \r \r ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) \u2264 \" \u221e X \u03c4=\u2212\u221e 2k\u03b2s lim \u03c4\u2192\u221e\u2225\u03c61S\u03c4\u2225s (E\u20d7 u \u20d7 v )t(Rn) # 1 s \u2264lim \u03c4\u2192\u221e\u2225\u03c6\u03c4\u2225( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) . This accomplishes the desired result. Lemma 3.7 and 3.8 imply the following assertion. Corollary 3.5. Let \u03b2 \u2208R, t, s \u2208(0, \u221e) and \u20d7 v, \u20d7 u \u2208(1, \u221e)n. Let \u03c6 \u2208K (Rn) and {\u03c6\u03c4}\u03c4\u2208N \u2282K (Rn). Assume that 0 \u2264\u03c6\u03c4 \u2191\u03c6 almost everywhere when \u03c4 \u2192\u221e, then \u2225\u03c6\u03c4\u2225( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) \u2191\u2225\u03c6\u2225( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) when \u03c4 \u2192\u221e. Proof of Proposition 3.2. We are just proof the result of ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn). By Lemma 3.7, 3.6, and 3.5, we \ufb01nd that the space ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) satis\ufb01es (1), (2), and (3) of De\ufb01nition 3.1. So, only need to check (4) of De\ufb01nition 3.1. Observe that, there exist a cube B (0, 2r) \u2208B with r \u2208(0, \u221e) such that B \u2282B (0, 2r). By this and Lemma 3.6, we have \r \r1B(x,r) \r \r ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) \u2264 \r \r1B(0,2r) \r \r ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) . We estimate \r \r1B(0,2r) \r \r ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn). 
If s \u2208(0, \u221e), then, by \u03b2 \u2208(\u2212Pn i=1 1/ui, \u221e), we conclude that \r \r1B(0,2r) \r \r ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) = \"X k\u2208Z 2ks\u03b2 \r \r1B(0,2r)1Bk \r \rs (E\u20d7 v \u20d7 v)t(Rn) # 1 s \u223c \"X k=M 2 ks \u0010 \u03b2+Pn i=1 1 ui \u0011# 1 s < \u221e, 13 \f14 Lihua Zhang and Jiang Zhou where M = [\u2212\u221e, r]n. By this, we obtain \r \r1B(x,r) \r \r ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) < \u221e, thus, ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) satis\ufb01es (4) of De\ufb01nition 2.6, which completes the proof of su\ufb03ciency. In what follows, we to prove the necessity. When \u03b2 \u2208 \u0010 \u2212\u221e, \u2212Pn j=1 1 qj i with j \u2208{1, . . . , n}, there exists a l \u2208Z \u2229(\u2212\u221e, 0] such that B(0, 1) \u2283 B \u00000, 2l\u0001 . Using Lemma 3.6, we conclude that \r \r \r1B(0,2l) \r \r \r ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) \u2264 \r \r1B(0,1) \r \r ( \u02d9 KE\u03b2,s \u20d7 v,\u20d7 u)t(Rn) . Namely, 1B(0,1) / \u2208( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) since the left-hand side of this inequality is in\ufb01nity. Thus, ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) is not a ball quasi-Banach function space. We get what we wanted. Proposition 3.3. Let \u03b2 \u2208R, t, s \u2208(0, \u221e) and \u20d7 v, \u20d7 u \u2208(1, \u221e)n. The following conclusion are correct: (1) if s1 \u2264s2, then ( \u02d9 KE\u03b2,s1 \u20d7 u,\u20d7 v )t(Rn) \u2282( \u02d9 KE\u03b2,s2 \u20d7 u,\u20d7 v )t(Rn) and (KE\u03b2,s1 \u20d7 u,\u20d7 v )t(Rn) \u2282 (KE\u03b2,s2 \u20d7 u,\u20d7 v )t(Rn); (2) if \u03b22 \u2264\u03b21, then (KE\u03b21,s \u20d7 u,\u20d7 v )t(Rn) \u2282(KE\u03b22,s \u20d7 u,\u20d7 v )t(Rn); (3) if \u20d7 u1 \u2a7d\u20d7 u2, then ( \u02d9 KE\u03b2,s \u20d7 u2,\u20d7 v)t(Rn) \u2282( \u02d9 KE\u03b2,s \u20d7 u1,\u20d7 v)t(Rn) and (KE\u03b2,s \u20d7 u2,\u20d7 v)t(Rn) \u2282 (KE\u03b2,s \u20d7 u1,\u20d7 v)t(Rn); (4) if \u20d7 v1 \u2a7d\u20d7 v2, then ( \u02d9 KE\u03b2,s \u20d7 u, \u20d7 v2)t(Rn) \u2282( \u02d9 KE\u03b2,s \u20d7 u, \u20d7 v1)t(Rn) and (KE\u03b2,s \u20d7 u, \u20d7 v2)t(Rn) \u2282 (KE\u03b2,s \u20d7 u, \u20d7 v1)t(Rn). Proof. Let\u2019s start with (1). It is easy to see that (1) is a consequence of the inequality in [13]. \u221e X m=1 |bm| !v \u2264 \u221e X m=1 |bm|v, if 0 < v < 1. (3.4) Then through this inequality, we got \u2225f\u2225(KE\u03b2,s2 \u20d7 u,\u20d7 v )t(Rn) = X k\u2208Z (2k\u03b2 \u2225f1k\u2225(E\u20d7 u \u20d7 v )t(Rn))s2 ! 1 s1 s1 s2 \u2a7d X k\u2208Z (2k\u03b2\u2225f1k\u2225(E\u20d7 u \u20d7 v )t(Rn))s1 ! 1 s1 \u2a7d\u2225f\u2225(KE\u03b2,s1 \u20d7 u,\u20d7 v )t(Rn). 14 \fMixed-norm Herz-slice spaces and their applications 15 We remark that, we can get the (2) and (3) immediately via H\u00a8 older\u2019s inequality. In what follows, we show proof of (4) \u2225f1B(\u00b7,t)\u2225L \u20d7 v1 = \uf8eb \uf8ed Z R . . . Z R \u0012Z R \f \ff1B(\u00b7,t) \f \fv11 dx1 \u0013 v12 v11 dx2 ! v13 v12 . . . dxn \uf8f6 \uf8f8 1 v1n \u2a7d Z 2k\u22121\u2a7d|xn|<2k . . . \u0012Z 2k\u22121\u2a7d|x1|<2k |f(x)1B(\u00b7,t)(x)|v11dx1 \u0013 v12 v11 . . . dxn ! 1 v1n \u22642k Pn i=1 1 u1i \u2212Pn i=1 1 u2i \uf8eb \uf8ed Z R . . . Z R \u0012Z R \f \ff1B(\u00b7,t) \f \fv21 dx1 \u0013 v22 v21 dx2 ! v23 v22 . . . dxn \uf8f6 \uf8f8 1 v2n \u223c\u22251B(\u00b7,t)\u2225L \u20d7 v1(Rn) \u22251B(\u00b7,t)\u2225L \u20d7 v2(Rn) \u2225f1B(\u00b7,t)\u2225L \u20d7 v2. 
Thus, \r \r \r \r \r \u2225f1B(\u00b7,t)\u2225L \u20d7 v1(Rn) \u22251B(\u00b7,t)\u2225L \u20d7 v1(Rn) \r \r \r \r \r L\u20d7 u \u2264 \r \r \r \r \r \u2225f1B(\u00b7,t)\u2225L \u20d7 v2(Rn) \u22251B(\u00b7,t)\u2225L \u20d7 v2(Rn) \r \r \r \r \r L\u20d7 u . this, together with De\ufb01nition 2.1, we obtain \u2225f\u2225( \u02d9 KE\u03b2,s \u20d7 u, \u20d7 v1)t(Rn) \u2264\u2225f\u2225( \u02d9 KE\u03b2,s \u20d7 u, \u20d7 v2)t(Rn). Using a similar method can also get the result (1), (3), and (4) for nonhomogeneous mixed-norm Herz-slice space. 4 Block decompositions In this section we establish the decomposition characterizations of mixednorm Herz-slice spaces. De\ufb01nition 4.1. Let \u03b2 \u2208R, t, s \u2208(0, \u221e) and \u20d7 v, \u20d7 u \u2208(1, \u221e)n. (i) A function \u03ba(x) on Rn is said to be a central (\u03b2, ui, vi)-block if (1) supp (\u03ba) \u2282B(0, \u03bb), for some \u03bb > 0; (2) \u2225\u03ba\u2225(E\u20d7 u \u20d7 v )t(Rn) \u2264C\u03bb\u2212\u03b2. (ii) A function \u00b5(x) on Rn is said to be a central (\u03b2, ui, vi)-block of restrict type if 15 \f16 Lihua Zhang and Jiang Zhou (1) supp (\u00b5) \u2282B(0, \u03bb) for some \u03bb \u22651; (2) \u2225\u00b5\u2225(E\u20d7 u \u20d7 v )t(Rn) \u2264C\u03bb\u2212\u03b2. If \u03bb = 2l for some l \u2208Z , then the corresponding central block is called a dyadic central block. Theorem 4.1. Let \u03b2 \u2208R, t, s \u2208(0, \u221e) and \u20d7 v, \u20d7 u \u2208(1, \u221e)n. The following statements are equivalent: (1) \u03c6 \u2208( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn); (2) \u03c6 be able to present as \u03c6(x) = X l\u2208Z \u03b7l\u00b5l(x), (4.1) where P k\u2208Z |\u03b7l|s < \u221eand each \u00b5l is a dyadic central (\u03b2, ui, vi)-block with support contained in Bl. Proof. For \u03c6 \u2208( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn), write \u03c6(x) = X l\u2208Z |Bl| \u03b2 n \u2225\u03c61Sl\u2225(E\u20d7 u \u20d7 v )t(Rn) \u03c6(x)1Sl(x) |Bl| \u03b2 n \u2225\u03c61Sl\u2225(E\u20d7 u \u20d7 v )t(Rn) . When \u03b7l = |Bl| \u03b2 n \u2225\u03c61Sl\u2225(E\u20d7 u \u20d7 v )t(Rn) and \u00b5l(x) = \u03c6(x)1Sl(x) |Bl| \u03b2 n \u2225\u03c61Sl\u2225(E\u20d7 u \u20d7 v )t(Rn) , it su\ufb03ces to show that, supp (\u00b5l) \u2282Bl, \u2225\u00b5l\u2225(E\u20d7 u \u20d7 v )t(Rn) = |Bl|\u2212\u03b2 n, and \u03c6(x) = P l\u2208Z \u03b7l\u00b5l(x). Therefore, each \u00b5l is a dyadic central (\u03b2, ui, vi)-block with the support Bl and X l\u2208Z |\u03bbl|s = X l\u2208Z |Bl| \u03b2s n \u2225\u03c61Sl\u2225s (E\u20d7 u \u20d7 v )t(Rn) = \u2225\u03c6\u2225s ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) < \u221e. It remains to be shown that other side. Let \u03c6(x) = P l\u2208Z \u03b7l\u00b5l(x) be a decomposition of \u03c6. For each m \u2208Z, we have \u2225\u03c61Sm\u2225(E\u20d7 u \u20d7 v )t(Rn) \u2264 X l\u2265m |\u03b7l| \u2225\u00b5l\u2225(E\u20d7 u \u20d7 v )t(Rn) . (4.2) 16 \fMixed-norm Herz-slice spaces and their applications 17 Thus, if 0 < s \u22641 \u2225\u03c6\u2225s ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) = X l\u2208Z 2l\u03b2s \u2225\u03c61Sl\u2225s (E\u20d7 v \u20d7 u)t(Rn) \u2264 X l\u2208Z 2l\u03b2s X m\u2265l |\u03b7l|s\u2225\u00b5m\u2225s (E\u20d7 v \u20d7 u)t(Rn) ! \u2264 X l\u2208Z 2l\u03b2s X m\u2265l |\u03b7m|s2\u03b2ms ! \u2264C X l\u2208Z |\u03b7l|s < \u221e. 
If 1 < s < \u221e, based on H\u00a8 older\u2019s inequality and (4.2), \u2225\u03c61Sm\u2225(E\u20d7 u \u20d7 v )t(Rn) \u2264 X l\u2265m |\u03b7l| \u2225\u00b5l\u2225 1 2 (E\u20d7 u \u20d7 v )t(Rn) \u2225\u00b5l\u2225 1 2 (E\u20d7 u \u20d7 v )t(Rn) \u2264 X l\u2265m |\u00b5l|s\u2225\u00b5l\u2225 s 2 (E\u20d7 u \u20d7 v )t(Rn) ! 1 s X l\u2265m \u2225\u00b5l\u2225 s\u2032 2 (E\u20d7 u \u20d7 v )t(Rn) ! 1 s\u2032 \u2264 X l\u2265m |\u03b7l|s2\u2212\u03b2ls 2 ! 1 s X l\u2265m 2\u2212\u03b2ls\u2032 2 ! 1 s\u2032 . Therefore, \u2225\u03c6\u2225s ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) \u2264C X m\u2208Z 2\u03b2ms X l\u2265m |\u03b7l|s2\u2212\u03b2ls 2 ! 1 s X l\u2265m 2\u2212\u03b2ls\u2032 2 ! 1 s\u2032 \u2264C X m\u2208Z |\u03b7l|s X m\u2264l 2\u03b2(m\u2212l)s/2 \u2264C X l\u2208Z |\u03b7l|s < \u221e. This completes the conclusion that we want. Remark 4.1. By using Theorem 4.1, we can show that \u2225\u03c6\u2225( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) \u223c X l\u2208Z |\u03b7l|s ! 1 s . A similar result is given for (KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) as follows, we acknowledge the details. Theorem 4.2. Let \u03b2 \u2208R, t, s \u2208(0, \u221e) and \u20d7 v, \u20d7 u \u2208(1, \u221e)n. The following statements are equivalent: (1) \u03c6 \u2208(KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn); (2) \u03c6 be able to present as \u03c6(x) = \u221e X l=0 \u03b7l\u00b5l(x), (4.3) 17 \f18 Lihua Zhang and Jiang Zhou where \u221e P l=0 |\u03b7l|s < \u221eand each \u00b5l is a dyadic central (\u03b2, ui, vi)-block of restrict type with support contained in Bl. Moreover, \u2225\u03c6\u2225(KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) \u223c X l\u22650 |\u03b7l|s ! 1 s . 5 Boundedness of the HardyLittlewood maximal operator The aim of this section is to give the boundedness of the HardyLittlewood maximal operator M on ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) and (KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn). First, we show that M is well de\ufb01ned on ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn). Lemma 5.1. Let \u03b2 \u2208R and t, s \u2208(0, \u221e). Let \u20d7 v, \u20d7 u \u2208(1, \u221e)n. If \u03b2 \u2208 (\u2212Pn i=1 1/ui, \u221e) with i \u2208{1, . . . , n}. Then, for any \u03c6 \u2208( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn), we have ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) \u2286L1 loc (Rn) . Proof. For any \u03c6 \u2208( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) and ball B (x0, \u03bb) with x0 \u2208Rn and \u03bb \u2208 (0, \u221e), by Lemma 3.2 and Corollary 3.1, we have \r \r\u03c61B(x0,\u03bb) \r \r L1 \u2264 \r \r\u03c61B(x0,\u03bb) \r \r ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) \r \r1B(x0,\u03bb) \r \r ( \u02d9 KE\u03b2\u2032,s\u2032 \u20d7 u\u2032, \u20d7 v\u2032 )t(Rn) \u2264\u2225\u03c6\u2225( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) \r \r1B(x0,\u03bb) \r \r ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) . Due to 1B(x0,\u03bb) \u2208( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) by Proposition 3.2. This proves the required conclusion. Theorem 5.1. Let \u03b2 \u2208R, t, s \u2208(0, \u221e), \u20d7 v, \u20d7 u \u2208(1, \u221e)n and \u2212Pn i=1 1/ui < \u03b2 < n \u22121/ Pn i=1 1/ui. Suppose that the M satis\ufb01es (1) for suitable function \u03c6 with supp (\u03c6) \u2282Sk and |x| \u22652k+1 with k \u2208Z, |M \u03c6(x)| \u2264C\u2225\u03c6\u2225L1(Rn)|x|\u2212n; (5.1) (2) for suitable function \u03c6 with supp (\u03c6) \u2282Sk and |x| \u22642k\u22122 with k \u2208Z, |M \u03c6(x)| \u2264C2\u2212kn\u2225\u03c6\u2225L1(Rn). 
(5.2) For any \u03c6 \u2208L1 loc (Rn), then M \u03c6 is bounded on ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn). 18 \fMixed-norm Herz-slice spaces and their applications 19 Proof. Let \u03c6(x) = X m\u2208Z \u03c6(x)1Sm(x) := X m\u2208Z \u03c6m(x). By Lemma 5.1, we know M \u03c6 is well de\ufb01ned on ( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn). Observe that \u2225M \u03c6\u2225( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn) = X k\u2208Z 2k\u03b2s \u2225M \u03c61Sk\u2225s (E\u20d7 u \u20d7 v )t(Rn) ! 1 s = \uf8eb \uf8edX k\u2208Z 2k\u03b2s \r \r \r \r \r \u221e X m=\u2212\u221e M \u03c6m1Sk \r \r \r \r \r s (E\u20d7 u \u20d7 v )t(Rn) \uf8f6 \uf8f8 1 s \u2264 \uf8eb \uf8edX k\u2208Z 2k\u03b2s \r \r \r \r \r k\u22122 X m=\u2212\u221e M \u03c6m1Sk \r \r \r \r \r s (E\u20d7 u \u20d7 v )t(Rn) \uf8f6 \uf8f8 1 s + \uf8eb \uf8edX k\u2208Z 2k\u03b2s \r \r \r \r \r k+1 X m=k\u22121 M \u03c6m1Sk \r \r \r \r \r s (E\u20d7 u \u20d7 v )t(Rn) \uf8f6 \uf8f8 1 s + \uf8eb \uf8edX k\u2208Z 2k\u03b2s \r \r \r \r \r \u221e X m=k+2 M \u03c6m1Sk \r \r \r \r \r s (E\u20d7 u \u20d7 v )t(Rn) \uf8f6 \uf8f8 1 s := I + II + III. For I, from Lemma 3.1 and (5.2), we \ufb01nd that \r \r \r \r \r k\u22122 X m=\u2212\u221e M \u03c6m1Sk \r \r \r \r \r s (E\u20d7 u \u20d7 v )t(Rn) \u2272 \r \r \r \r \r k\u22122 X m=\u2212\u221e \u2225\u03c6m\u2225L1(Rn) 2\u2212kn1Sk \r \r \r \r \r s (E\u20d7 u \u20d7 v )t(Rn) \u2272 \r \r \r \r \r k\u22122 X m=\u2212\u221e \u2225\u03c6m\u2225(E\u20d7 u \u20d7 v )t(Rn) \u22251Sm\u2225(E\u20d7 u\u2032 \u20d7 v\u2032 )t(Rn) 2\u2212kn1Sk \r \r \r \r \r s (E\u20d7 u \u20d7 v )t(Rn) . For s \u2208(0, 1], by Lemma 3.2, we have I \u2272 X k\u2208Z 2k\u03b2s k\u22122 X m=\u2212\u221e \u2225\u03c6m\u2225s (E\u20d7 u \u20d7 v )t(Rn)2(m\u2212k)s(n\u2212Pn i=1 1 ui ) ! 1 s \u2272 X m\u2208Z 2m\u03b2s\u2225\u03c6m\u2225s (E\u20d7 u \u20d7 v )t(Rn) ! 1 s \u2272\u2225\u03c6\u2225( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn), 19 \f20 Lihua Zhang and Jiang Zhou for s \u2208(1, \u221e), by Lemma 3.1 and (5.2), we have \r \r \r \r \r k\u22122 X m=\u2212\u221e M \u03c6m1Sk \r \r \r \r \r s (E\u20d7 u \u20d7 v )t(Rn) \u2272 \r \r \r \r \r k\u22122 X m=\u2212\u221e \u2225\u03c6m\u2225L1(Rn)2\u2212kn1Sk \r \r \r \r \r s (E\u20d7 u \u20d7 v )t(Rn) \u2272 \r \r \r \r \r k\u22122 X m=\u2212\u221e \u2225\u03c6m\u2225(E\u20d7 u \u20d7 v )t(Rn) \u22251Sk\u2225(E \u20d7 u\u2032 \u20d7 v\u2032 )t(Rn) 2\u2212kn1Sk \r \r \r \r \r s (E\u20d7 u \u20d7 v )t(Rn) \u2272 k\u22122 X m=\u2212\u221e \u2225\u03c6m\u2225(E\u20d7 u \u20d7 v )t(Rn) !s \u22251Sm\u2225s (E \u20d7 u\u2032 \u20d7 v\u2032 )t(Rn) 2\u2212kns \u22251Sk\u2225s (E\u20d7 u \u20d7 v )t(Rn) . By Lemma 3.2, we deduce I \u2272 X k\u2208Z 2k\u03b2s k\u22122 X m=\u2212\u221e \u2225\u03c6m\u2225(E\u20d7 u \u20d7 v )t(Rn) !s 2(k\u2212m)s(Pn i=1 1 ui \u2212n) ! 1 s \u2272 X k\u2208Z 2k\u03b2s k\u22122 X m=\u2212\u221e \u2225\u03c6m\u2225s (E\u20d7 u \u20d7 v )t(Rn)2(k\u2212m)(Pn i=1 1 ui \u2212n)s/2 !! 1 s \u00d7 \uf8eb \uf8ed k\u22122 X m=\u2212\u221e 2(k\u2212m)(Pn i=1 1 ui \u2212n)s\u2032/2 !s/s\u2032\uf8f6 \uf8f8 1 s \u2272\u2225\u03c6\u2225( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn). We remark that M is bounded on (E\u20d7 u \u20d7 v )t(Rn) [18, Lemma 2.5], for II, we see that II \u2272 \uf8eb \uf8edX k\u2208Z 2k\u03b2s \r \r \r \r \r k+1 X m=k\u22121 M \u03c6m \r \r \r \r \r s (E\u20d7 u \u20d7 v )t(Rn) \uf8f6 \uf8f8 1 s \u2272 X k\u2208Z k+1 X m=k\u22121 2(k\u2212m)\u03b2s2m\u03b2s\u2225\u03c6m\u2225s (E\u20d7 u \u20d7 v )t(Rn) ! 
1 s \u2272 X m\u2208Z 2m\u03b2s\u2225\u03c6m\u2225s (E\u20d7 u \u20d7 v )t(Rn) ! 1 s \u2272\u2225\u03c6\u2225( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn). For the part of III, using Lemma 3.1 and (5.1), we see that \r \r \r \r \r \u221e X m=k+2 M \u03c6m1Sk \r \r \r \r \r s (E\u20d7 u \u20d7 v )t(Rn) \u2272 \r \r \r \r \r \u221e X m=k+2 \u2225\u03c6m\u2225L1(Rn)2\u2212mn1Sk \r \r \r \r \r s (E\u20d7 u \u20d7 v )t(Rn) 20 \fMixed-norm Herz-slice spaces and their applications 21 \u2272 \r \r \r \r \r \u221e X m=k+2 2\u2212mn1Sk\u2225\u03c6m\u2225(E\u20d7 u \u20d7 v )t(Rn) \u22251Sm\u2225(E \u20d7 u\u2032 \u20d7 v\u2032 )t(Rn) \r \r \r \r \r s (E\u20d7 u \u20d7 v )t(Rn) \u2272 \u221e X m=k+2 2\u2212mn\u2225\u03c6m\u2225(E\u20d7 u \u20d7 v )t(Rn)\u22251Sk\u2225(E\u20d7 u \u20d7 v )t(Rn) \u22251Sm\u2225(E \u20d7 u\u2032 \u20d7 v\u2032 )t(Rn) . When 0 < s \u22641, by Lemma 3.2, we can write III \u2272 \"X k\u2208Z 2k\u03b2s \u221e X m=k+2 \u0012 2\u2212mn\u2225\u03c6m\u2225(E\u20d7 u \u20d7 v )t(Rn) \u22251Bm\u2225(E \u20d7 u\u2032 \u20d7 v\u2032 )t(Rn) \u22251Bk\u2225(E\u20d7 u \u20d7 v )t(Rn) \u0013s# 1 s \u2272 X k\u2208Z 2k\u03b2s \u221e X m=k+2 \u2225\u03c6m\u2225s (E\u20d7 u \u20d7 v )t(Rn)2ms(n\u2212Pn i=1 1 ui )2\u2212msn2ks Pn i=1 1 ui ! 1 s \u2272 X k\u2208Z 2k\u03b2s \u221e X m=k+2 \u0010 2m\u03b2\u2212k\u03b2\u2225\u03c6m\u2225(E\u20d7 u \u20d7 v )t(Rn) \u0011s ! 1 s \u2272\u2225\u03c6\u2225( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn). Using Lemma 3.1 and 5.1, we know that \r \r \r \r \r \u221e X m=k+2 M \u03c6m1Sk \r \r \r \r \r s (E\u20d7 u \u20d7 v )t(Rn) \u2272 \r \r \r \r \r \u221e X m=k+2 \u2225\u03c6m\u2225L1(Rn)2\u2212mn1Sk \r \r \r \r \r s (E\u20d7 u \u20d7 v )t(Rn) \u2272 \r \r \r \r \r \u221e X m=k+2 \u2225\u03c6m\u2225(E\u20d7 u \u20d7 v )t(Rn) \u22251Sm\u2225(E \u20d7 u\u2032 \u20d7 u\u2032 )t(Rn) 2\u2212mn1Sk \r \r \r \r \r s (E\u20d7 u \u20d7 v )t(Rn) . For 1 < s < \u221e, by Lemma 3.2 yields III \u2272 \"X k\u2208Z 2k\u03b2s 2\u2212mn \u221e X m=k+2 \u2225\u03c6m\u2225(E\u20d7 u \u20d7 v )t(Rn) \u22251Bm\u2225(E \u20d7 u\u2032 \u20d7 v\u2032 )t(Rn) \u22251Bk\u2225(E\u20d7 u \u20d7 v )t(Rn) !s# 1 s \u2272 X k\u2208Z 2m\u03b2s\u2225\u03c6m\u2225s (E\u20d7 u \u20d7 v )t(Rn) ! 1 s \u2272\u2225\u03c6\u2225( \u02d9 KE\u03b2,s \u20d7 u,\u20d7 v)t(Rn). We got what we want. Competing interests The authors declare that they have no competing interests. 21 \f22 Lihua Zhang and Jiang Zhou Funding The research was supported by the National Natural Science Foundation of China (Grant No. 12061069). Authors contributions All authors contributed equality and signi\ufb01cantly in writing this paper. All authors read and approved the \ufb01nal manuscript. Acknowledgments All authors would like to express their thanks to the referees for valuable advice regarding previous version of this paper." + }, + { + "url": "http://arxiv.org/abs/2111.11711v1", + "title": "Sample Efficient Imitation Learning via Reward Function Trained in Advance", + "abstract": "Imitation learning (IL) is a framework that learns to imitate expert behavior\nfrom demonstrations. Recently, IL shows promising results on high dimensional\nand control tasks. However, IL typically suffers from sample inefficiency in\nterms of environment interaction, which severely limits their application to\nsimulated domains. In industrial applications, learner usually have a high\ninteraction cost, the more interactions with environment, the more damage it\ncauses to the environment and the learner itself. 
In this article, we make an\neffort to improve sample efficiency by introducing a novel scheme of inverse\nreinforcement learning. Our method, which we call \\textit{Model Reward Function\nBased Imitation Learning} (MRFIL), uses an ensemble dynamic model as a reward\nfunction, what is trained with expert demonstrations. The key idea is to\nprovide the agent with an incentive to match the demonstrations over a long\nhorizon, by providing a positive reward upon encountering states in line with\nthe expert demonstration distribution. In addition, we demonstrate the\nconvergence guarantee for new objective function. Experimental results show\nthat our algorithm reaches the competitive performance and significantly\nreducing the environment interactions compared to IL methods.", + "authors": "Lihua Zhang", + "published": "2021-11-23", + "updated": "2021-11-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "Introduction Imitation Learning (IL), also known as Learning from Demonstration (LfD), is a framework which learning an optimal behavior policy from expert demonstrations. The expert demonstration generates by expert policy, and composed of state-action pair. Each state-action pair indicates the action to take at the state being visited. There are two ways to attaining the goal that leaning optimal policy from demonstrations, include Behavioral Cloning (BC) (Bain and Sammut 1995) and Inverse Reinforcement Learning (IRL) (Ng and Russell 2000). BC methods learn an expert policy in a supervised fashion without environment interactions. Usually, behavioral cloning methods are the \ufb01rst option when have suf\ufb01cient expert demonstrations (Sasaki, Yohira, and Kawaguchi 2019). In the practical application, such as autonomous vehicles task, the environment usually has a vast majority of stateaction pairs, however, the expert demonstrations only have a limited number of the states. Therefore, BC methods often suffer from compounding error (Ross and Bagnell 2010), inaccuracies compound over time and can lead the learner to encounter unseen states in the expert demonstrations. Moreover, BC often can\u2019t take the optimal action when encounters unseen states. Inverse Reinforcement Learning (IRL) algorithm provide for an automated framework for decision making and control without reward function: by specifying a high-lever objective function, an IRL algorithm can, in principle, automatically learn a reward function and control policy that satis\ufb01es this objective. This has the potential to automate a range of applications, such as autonomous vehicles and robotic control. Given that the policy learns based on reward function, is the most succinct and robust way. Inverse Reinforcement Learning methods aim to recover reward or cost function by expert demonstrations, then IRL methods obtains optimal policy through a standard reinforcement learning algorithm. The target of the objective function is to train a reward function which guarantees the optimal policy trained by it as optimal as expert policy. IRL methods can take better action compared with BC algorithms, when encounters unseen states in the expert demonstrations. There for IRL can overcome the compounding error problem (Ziebart et al. 2008). The original IRL framework utilizes a linear mapping between input features and out reward. 
These algorithms assume the unknown reward function that can be expressed as a linear combination of state-action pair\u2019s features, this assumption severely restricts the complexity of reward structures that can be modeled accurately. and the reward function is dif\ufb01cult to apply to complex, high-dimensional tasks with large state-action space. Finn and levine who proposed an algorithm that use expressive, nonlinear function approximators, such as neural networks, to represent the reward function (Finn, Levine, and Abbeel 2016). This approach learns nonlinear reward function from expert demonstrations, at the same time, the approach learns a policy to perform the task, and the policy optimization \u201cguides\u201d the reward toward good regions of the space. To improve the representation ability of linear reward function, Levine proposed Gaussian Process Inverse Reinforcement Learning (GPIRL) (Levine, Popovic, and Koltun 2011) that recovers the reward function with Gaussian Process (GP). GPIRL extends the GP model to account for the stochastic relationship between actions and underlying rearXiv:2111.11711v1 [cs.LG] 23 Nov 2021 \fwards. This allows GPIRL to balance the simplicity of the learned reward function against its consistency with the expert\u2019s actions, without assuming the expert to be optimal. But GPIRL have a runtime dependent on the size of the dataset and are hard to processing large amounts of demonstration data. There has some recent work on IRL algorithms that focus on recover reward function ef\ufb01ciently. Ho and Ermon who introduced a model-free imitation learning method called Generative adversarial Imitation Learning (GAIL) (Ho and Ermon 2016). This algorithm intimately connected to generative adversarial network (Goodfellow et al. 2014), and does not interact with expert during training. The process of training generator and discriminator network is divided into two procedures: RL procedure and IRL procedure. IRL is a dual of an occupancy measure matching problem, in this procedure, discriminator manage to improve its ability to distinguish which action bring a policy\u2019s occupancy measure closer to the expert\u2019s. Unlike DAgger (Ross, Gordon, and Bagnell 2011), can simply ask the expert for such actions. GAIL suffers from the problems of mode collapse and low sample ef\ufb01ciency in terms of environment interaction (Le et al. 2019). The weakness of mode collapse is inheriting from GANs, and several works have built on GAIL to overcome this problem (Li, Song, and Ermon 2017) (Fei et al. 2020). Although, GAIL leverage the expert demonstrations ef\ufb01ciently, it needs a large number of environment interactions. Similar to GAIL, other model free inverse reinforcement learning algorithms are quite data-expensive to train, which often limits their application to simulated domains. These methods require solving the reinforcement learning procedure (\ufb01nding an optimal policy given the current reward function) in the inner loop of an iterative reward optimization. This makes them need a majority of interaction with environment when apply to continuous control task with large state-action space, where the reinforcement learning procedure is dif\ufb01cult to train. Particularly real-world systems with unknown dynamics. 
We are interested in imitation learning and want to address this issue, because we desire an algorithm that can be applied to real-word problems for which it is hard to design the reward, and the cost of interaction with environment is expensive. Furthermore, in most real-word problems, even if the expert safely demonstrated, the learner may have policies that damage the environments and the learner itself during training. In this paper we focus on model-free imitation learning for continuous control. To address this problem, we propose a new approach to improve sample ef\ufb01ciency suffered by previous methods, which we call Model Reward Function Based Imitation Learning (MRFIL). The main contribution of our work is we dividing IRL algorithm into two steps: IRL procedure and RL procedure. In IRL procedure, MRFIL pre-trains a \ufb01xed reward function by expert demonstrations. In RL procedure, based on the pre-trained reward function, MRFIL learns an optimal policy with standard RL algorithms. We also propose a new objective function, and analyze the convergence theoretically. Unlike prior IRL methods, by pre-training a \ufb01xed reward function and learning an optimal policy in a single RL procedure, we considerably shrink the amount of interactions that interact with environment. Our evaluation demonstrates the performance of our method on a set of simulated benchmark tasks, showing that it achieves state-ofthe-art performance as compared with a number of imitation learning task what dif\ufb01cult to interact to environment. 2 Related Work Currently, the problem of learning an optimal policy in sample ef\ufb01cient way is still not well understood. Sample ef\ufb01ciency imitation learning data to at least work of Guided Cost Learning (GCL), which updates the cost function in the inner loop of policy search. speci\ufb01cally, GCL directly optimizing a trajectory distribution with respect to the current cost function using sample-ef\ufb01cient reinforcement learning algorithm. Thus, the cost function can lead the learner toward regions where the samples are more useful, and inherits its sample ef\ufb01ciency. However, GCL require the model is well-approximated by iteratively \ufb01tted time-varying linear dynamics, and have worse imitation capability compared with GAIL (Fu, Luo, and Levine 2017). GAIL is based on prior IRL works, and has achieved stateof-the-art performance on a variety of continuous control tasks, GAIL overcome the compounding error of BC methods, and is quite sample ef\ufb01cient than BC in terms of expert demonstration data. However, it suffers from the sample ef\ufb01cient in terms of environment interaction, which severe restricts its application in practical problems. Hester and Osband proposed an approach use a prioritized replay mechanism to automate learn optimal policy(Hester et al. 2018). However, this algorithm needs expert demonstration and hand-crafted reward, we address the problem only need expert demonstration. At present, the adversarial IL (AIL) framework has become a popular choice for IL (Baram et al. 2017); (Hausman et al. 2017); (Li, Song, and Ermon 2017). But these methods require a majority of state-action pairs obtained through the interaction between the learner and environment. To address this problem, Blond\u00b4 e and Kalousis leveraging an off-policy architecture to reduce interaction number with environment (Blond\u00b4 e and Kalousis 2019). However, this algorithm still needs reward function to learn through interact with environment. 
Fumihiro and Tetsuya adopt off-policy actor-critic (OffPAC) algoritm (Sasaki, Yohira, and Kawaguchi 2019). Compare this algorithm estimating the state-action value using off-policy samples without learning reward function. In contrast, our method trains reward function with of\ufb02ine. 3 Background 3.1 Markov Decision Process We focus on model-free imitation learning for continuous control task in this work, and model this task as a Markov Decision Process (MDP), can be formalized by the tuple:(S, A, p, r, \u03b3, \u03c10). Where S is the state space, A is the action space, p is the unknown transition function, \fp : S \u00d7 A \u00d7 S \u2192[0, 1], specifying the probability density p(s\u2032, r|s, a) to the next state s\u2032 acquire reward r from the current state s by taking action a. r : S \u00d7 A \u00d7 S \u2192R is the reward function, \u03c10 : S \u2192[0, 1] is the distribution of initial states. A policy is a function \u03c0 : S \u00d7 A \u2192[0, 1], which outputs a distribution over the action space for a given state s. 3.2 Policy Gradient Methods In continuing tasks, where environment interactions are unbounded in sequence length, the returns Rt at time step t for a trajectory are de\ufb01ned as Rt = P\u221e k=t \u03b3k\u2212tr(sk, ak, sk+1). The policy gradient theorem assumes that every trajectory starts in some particular (non-random) state s0, and s0 leads learner to initial states according to distribution \u03c10. Then, the goal is to learn a policy that maximizes expected returns in state s0. J(\u03b8) \u225cv\u03c0\u03b8(s0) = \u221e X t=0 \u03b3tr(st, at, st+1) (1) where v\u03c0\u03b8(s0) denotes the value of state s0 under policy \u03c0. Thus, the performance depends on both the action selections and the distribution of states in which those selection are made, and that both of these are affected by the parameter of policy network (Sutton and Barto 2018). 3.3 Inverse Reinforcement Learning Methods A common assumption in IRL is that the demonstrator utilizes a Markov Decision Process for decision making. Normally, IRL algorithms assumes the expert policy is optimal. The goal of IRL is to recover the unknown reward function from expert demonstrations (Osa et al. 2018). Recover the reward function can be bene\ufb01cial when the reward function is the most parsimonious way to describe the desired behavior. In IRL methods, the reward function is designed for reward function which makes the current policy as optimal as expert policy. The current policy is update using standard reinforcement learning algorithms based on the current estimator of the reward function. By repeating this procedure, the optimal policy and reward function can be obtained. 4 Proposed Method Currently, Imitation learning methods obtains a policy as optimal as expert policy thorough at least millions interaction with environment. This severely limited imitation learning methods be widely utilized in the industrial \ufb01eld, due to the high interaction cost. Therefore, we introduce a noval imitation learning method that sucessfully addresses the impeding sample ef\ufb01ciency, in the number of interaction with environment. 4.1 Algorithm Formulation It has various reasons that cause a low sample inef\ufb01ciency. First, IL learner lacks of prior knowledge about environment, thus, all the information and knowledge that assist learner to take an action when visit a state can only be acquired by interact with environment. 
Secondly, based on current reward function, IRL methods alternate execute RL procedure and IRL procedure , this hugely increased the interaction number. Last, when human being learn a skill by imitating expert, the skill of human improves faster by learning a few expert demonstrations and samples. However, comparing with human, IL agent requires a majority of expert demonstrations and samples. One of the reason is that human being learn a new task with Prior-Knowledge. Based on the above analysis, in order to reduce the number of interact with environment, MRFIL performs three important modi\ufb01cations. 1. Training an ensemble of dynamic model and a single dynamic model with expert demonstration data, then we use the variance predicted by an ensemble of dynamic model which composed of \ufb01ve dynamic models as a reward. The motivation is that the prediction of ensemble dynamic model in expert demonstration sate-action space tends to certainty, conversely, and the prediction of ensemble dynamic model far from expert demonstration sate-action space tends to uncertainty. Consequently, according to this characteristic, the reward function could identify whether the visited state belong to expert demonstrations, and the reward function is acquired before algorithm interacting with environment, and the policy can be optimized through reinforcement learning method. Besides, a single dynamic mode is used to pre-process neural network. 2. In order to reduce the number that agent interactions with environment, we use a single dynamic model and expert demonstration data to train algorithm in an of\ufb02ine way. We pre-train the algorithm with multi branch and short rollout, and the rollouts is divided to two ways: exploration rollout and exploitation rollout. This pretreatment provides a regularizing effect before policy learning, and is helpful to reduce training time. 3. Our theoretical result show that, MRFIL will cannot guarantee the convergence, if still adopt traditional objective function used in RL methods. To solve this problem, we add a supervised loss term to the objective function, and the supervised loss term is as same as BC loss. Furthermore, we demonstrate that the modi\ufb01ed objective function guarantees the constringency in our approach. 4.2 Ensemble Reward Function Generally, inverse reinforcement learning algorithms recover reward function with a single neural network, and learn it iteratively. Such as the popular choice for now: adversarial imitation learning. We simplify this process to two single procedure: IRL procedure and RL procedure. In IRL procedure, MRFIL pretrains a reward function. The reward function is de\ufb01ned as an ensemble of dynamic model, and the enssemble dynamic model is M = m1, m2, . . . , mn. We use expert demonstrations to train the ensemble dynamic model via standard supervised learning method, each single dynamic model is only differ by the initial weights. In this way, the reward function can be acquired before the RL procedure start, and the optimal policy can be learned by standard reinforcement \flearning method. The input of dynamic model is state-action pair (s, a), the output is next state s\u2032. After the training procedure \ufb01nished, if the current state-action pair belong to expert demonstrations, the variance of ensemble dynamics will be small. Conversely, if the current state-action pair far from expert state-action pair, the variance of ensemble dynamics will be large. 
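The ensemble-disagreement idea described above (and formalised as Eqs. (2)-(3) in the next paragraph) can be sketched in a few lines. This is a minimal illustration rather than the authors' implementation: the model interface, the concatenated state-action input, and the batching are assumptions, and `threshold` stands for the hyper-parameter Th introduced below.

```python
import torch

class EnsembleReward:
    """Sketch of the ensemble-dynamics reward of Eqs. (2)-(3).

    `models` is a list of dynamics networks m_i(s, a) -> s' trained only on
    expert demonstrations; `threshold` plays the role of the hyper-parameter
    Th defined in the text. Interfaces are assumptions for illustration.
    """

    def __init__(self, models, threshold):
        self.models = models
        self.threshold = threshold

    @torch.no_grad()
    def __call__(self, state, action):
        x = torch.cat([state, action], dim=-1)
        # Stack next-state predictions of all ensemble members:
        # shape (n_models, batch, state_dim).
        preds = torch.stack([m(x) for m in self.models], dim=0)
        # Ensemble disagreement, averaged over state dimensions.
        disagreement = preds.var(dim=0).mean(dim=-1)
        # Eq. (3) as printed rewards var > Th, but the surrounding prose
        # ("zero reward in all regions outside expert demonstrations")
        # suggests low disagreement -- pairs near the expert distribution --
        # should receive reward 1; the sketch follows that reading.
        return (disagreement <= self.threshold).float()
```

Because the ensemble is fitted only to expert demonstrations, its members agree on expert-like state-action pairs and disagree elsewhere, which is what turns prediction variance into an in-distribution indicator.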
Based on this principle, we de\ufb01ne a hyper-parameter Th, to measure whether the input state-action pair belong to expert demonstrations. Concretely, the hyper-parameter is de\ufb01ned on the variance the output of ensemble dynamic models due to Pinsker\u2019s inequality. The ensmble reward function is as follows: var(M(s, a)) = var(m1(s, a), \u00b7 \u00b7 \u00b7 , mn(s, a)) (2) r(s, a) = \u001a 1 if var(M(s, a)) > Th, 0 else. (3) This modi\ufb01ed reward function encourages the learner to explore in the state-action space close to expert demonstrations, and the learner is capable to go back when it far from the expert sate-action space. 4.3 Pre-training with Dynamic Model We adopt actor-critic frame to learn optimal policy in RL procedure. In the most of case, the algorithm converges \ufb01nally. However, in the initial progress, the actor and critic network have a large variance which is derived from policy gradient methods that caused algorithm unstable and slower. Naturally, this problem leads to demand more samples to adjust learner policy. To address this problem, and improve sample ef\ufb01ciency, we pretrain the actor-critic network in an of\ufb02ine way. Same as ensemble dynamic model, we train a single dynamic model via expert demonstrations, which be used to predict next state, and the reward still provide by the ensemble dynamics. The reason that caused a gap between sampling in true dynamic and dynamic model include two aspects: prediction error due to training data, and distribution shift due to the policy encountering states outside expert policy state-action space (Janner et al. 2019). In current settings, the dynamic model is a local optimal model, only accurate when encountering the state-action space belong to expert demonstrations. Even the local optimal cannot be guaranteed when expert demonstration is insuf\ufb01cient. Based on the above analysis, we propose an of\ufb02ine pre-train algorithm via the expert demonstrations and dynamic model. In order to reduce the compound error with the increase of rollout length, we train the algorithm with multi branch and short rollout. The algorithm starts several rollouts under the state distribution of expert demonstrations, and the rollouts is divided to two types: exploration rollout and exploitation rollout. In the exploration rollout procedure, learner acts with stochastic noise, and the distance that agent runs is closes. The noisy action leads learner to \u201csub-optimal\u201d region Algorithm 1 Algorithm 1 MBSR: Multi Branch and Short Rollout Of\ufb02ine Pre-training 1: Require: Expert demonstrations DE 2: Learn approximate dynamics model m0 and M = m1, m2, . . . , mn, S \u00d7 A \u2192S using DE. 3: for each branch b = 1, 2, . . . do 4: for each step of exploration branch do 5: choose noisy a from s 6: m0 provide next state s\u2032, M provide reward r 7: update actor and critic network 8: end for 9: for each step of exploitation branch do 10: choose noisy a from s 11: m0 provide next state s\u2032, M provide reward r 12: update actor and critic network 13: end for 14: end for mostly, which is unfamiliar to dynamic model. In this region the dynamic model has high prediction error, but the ensemble dynamic models provide zero reward in all regions that outside expert demonstrations. Therefore, the prediction error has no impact on the accuracy of learner. Furthermore, this provides a regularization effect during policy learning by penalizing policy that visit \u201csub-optimal\u201d region (Kidambi et al. 2020). 
In the exploitation rollout procedure, learner act similar to standard model-based methods, but the distance that agent runs is far. This procedure in order to imitates expert policy in \u201coptimal\u201d region, and the dynamic model is familiar to this region thus has high prediction accuracy. We propose a corollary to measure the performance of pretrained learner, and the theoretical proof is given. This corollart could measure the gap between expert return in real environment and learner return in dynamic model. The expert return in real environment can be sampled in expert demonstrations, and the theoretically analysis presented in section 5.1. The more details about how to pre-train the IL agent are presented in algorithm 1. 4.4 Imitation Learning with Ensemble Reward Function In the RL procedure, we adopt Soft Actor-Critic (SAC) algorithm. If use the SAC objective function directly, the convergency of MRFIL will not guaranteed, and the theoretical analyses is given in section 5.2. Therefore, we modify the objective function by adding a supervise term about actor loss, this modi\ufb01cation is as follows: \u03b8t+1 = \u03b8t + \u03b1\u2207\u03b8t log \u03c0(at|st, \u03b8t)(rt+1 + \u03b3Q(st+1, at+1, \u03c9) \u2212Q(st, at, \u03c9)) + \u03c4\u2207\u03b8t||\u03c0E(at|st) \u2212\u03c0(at|st, \u03b8t)||2 | {z } supervise term (4) where \u03b8t denotes the parameter of actor network, rt+1 denotes the reward acquired by take action at in state st, which is provided by ensemble dynamics M. \u03b1, \u03b3, \u03c4 denotes hyper-parameters. \fAlgorithm 2 Algorithm 2 MRFIL: Model Reward Function Based Imitation Learning for Sample Ef\ufb01cient 1: Input: Expert demonstration data DE = n (SE, AE, S \u2032 E)i on i=1 2: Initialize network weight M = {m1, m2, . . . , mn} and \u03c0, \u03c0E, Q 3: Initialize an empty replay pool Dre \u2190\u2205 4: Use BC method to pretrain \u03c0E, M = m1, m2, . . . , mn with data DE 5: Use Algorithm 2 to pre-train the actor-critic network 6: while \u03c0 and Q not converged do 7: Sample action at from the policy \u03c0, and get transition st+1from the environment 8: Get reward rt+1 from ensemble dynamic model Dre \u2190Dre\u222a{st, at, rt+1, st+1} Sample from replay pool to minimize policy gradient and value gradient \u03b8t+1 = \u03b8t + \u03b1\u2207\u03b8t log \u03c0(at|st, \u03b8t)(rt+1 + \u03b3Q(st+1, at+1, \u03c9) \u2212Q(st, at, \u03c9)) + \u03c4\u2207\u03b8t||\u03c0E(at|st) \u2212\u03c0(at|st, \u03b8t)||2 \u03c9t+1 =\u03c9t + (rt+1 + \u03b3 max Q(st+1, at+1, \u03c9t) \u2212Q(st, at, \u03c9t)\u2207\u03c9tQ(st, at, \u03c9t) 9: end while 10: Output \u03b8 The more details are presented in algorithm 2. Adding this supervise loss term enables the imitation policy in line 6 of Algorithm 2 reach to expert state-action space quickly, and learning around this optima region. Moreover, this modi\ufb01cation guarantees the imitation policy converge to expert policy. We theoretically demonstrate this modi\ufb01cation in section 5.2, and the convergence is proved by mathematical illation in Theorem B3. 5 Theoretical Analysis This section provides the formal theoretical analysis of Model Reward Function Based Imitation Learning. 5.1 The Return Gap of Of\ufb02ine Pre-training Under such a scheme, we propose a corollary to measure the performance of pre-trained IL learner, the maximal gap between IL learner and expert can be obtined by Corollary A 1. The corollary is as follows: Corollary A 1. (multi branch and short rollout). 
Suppose the real dynamic transformation is pr(s\u2032|s, a) , the dynamic model transformation is pm(s\u2032|s, a). |\u03b7e \u2212\u03b7m| \u22642 \u0012\u03b3(\u03f5m + \u03f5\u03c0) (1 \u2212\u03b3)2 + \u03f5\u03c0 1 \u2212\u03b3 \u0013 (5) Proof. See Appendix A, Corollary A.1. \u03b7e denotes the returns of the expert policy in real dynamic MDP, \u03b7m denotes the returns of the current policy in dynamic model MDP, \u03b3 denotes the hyperparameter, \u03c0e denotes expert policy, \u03c0m denotes the policy learned by pre-training. where maxt Es\u223cpt m(s)[DKL(pm(s\u2032|s, a)||pr(s\u2032|s, a))] \u2264 \u03f5, and maxt DT V (\u03c0e(a|s)||\u03c0m(a|s) \u2264\u03f5\u03c0. The episode return is an indicator of learner performance, thus we the return gap \u03b7e \u2212\u03b7m to measure the pre-training performance. 5.2 Convergence of The Algorithm The objective function of inverse reinforcement learning algorithms is as follows: maximize r\u2208R \u0010 min \u03c0 \u2212H(\u03c0) \u2212E\u03c0[r(s, a)] +E\u03c0E[r(s, a)] \u0011 (6) where H(\u03c0) \u225cE\u03c0[\u2212log \u03c0(a|s)] is \u03b3 discounted causal entropy of policy \u03c0, \u03c0E denotes expert policy. Theorem 1. The dual of inverse reinforcement problem is as follows: minimize \u03c1\u2208D \u2212H(\u03c1) s.t. \u03c1(s, a) = \u03c1E(s, a) \u2200s \u2208S, a \u2208A (7) Proof. See Appendix B, Theorem B.1. Where we denote occupancy measure \u03c1\u03c0 : S \u00d7 A \u2192R as \u03c1\u03c0(s, a) = \u03c0(a|s) P\u221e t=0 \u03b3tP(st = s|\u03c0). Theorem 1 implies that the goal of IRL is to \ufb01nd an optimal policy that the occupancy measure equals to expert policy. Following Theorem 1, that the learner taking action as same as expert in every state visitation can be guaranteed. The objective function of policy gradient methods with entropy term, but without supervise term is as follows: min \u03c0 \u2212H(\u03c0) \u2212E\u03c0[r(s, a)] (8) Theorem 2. The dual problem of policy gradient methods objective function (Eq (7)) is as follows: minimize \u03c1\u2208D \u2212H(\u03c1) s.t. \u03c1(s, a) \u22650 \u2200s \u2208S, a \u2208A (9) Proof. See Appendix B, Theorem B.2. Theorem 2 implies that use SAC to learn optimal policy requiring the occupancy measure \u03c1(s, a) greater than zero, which is already satis\ufb01ed. Thus, the reward function that learnes from expert demonstrations must be very precise, but this mathematical condition is hard to satisfy without interaction with environment. To address this problem, we modify the objective function Eq(8), the modi\ufb01ed objective function is as follows: min \u03c0 \u2212H(\u03c0) \u2212E\u03c0[r(s, a)] + EDE[\u03c0(s, a) \u2212\u03c0E(s, a)] | {z } supervise term (10) Theorem 3. The dual problem of MRFIL objective function (Eq (9)) is as follows: minimize \u03c1\u2208D \u2212H(\u03c1) s.t. \u03c1(s, a) \u22650 \u2200s \u2208S, a \u2208A \u03c0(s, a) = \u03c0E(a|s) \u2200(s, a) \u2208DE (11) \fFigure 1: Image-based MuJoCo. Performance comparison between MRFIL, GAIL and SAMIL in terms of episodic return. The horizontal axis depicts, in logarithmic scale, the number of interactions with environment. Each algorithm is run across 5 random seeds. Proof. See Appendix B, Theorem B.3. In Theorem 3, the supervised term in Eq(10)) is converted to \u03c0(s, a) = \u03c0E(a|s), this require the learner becoming as optimal as expert. 
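Before the proof references that follow, a short sketch shows how the modified objective of Eq. (4)/Eq. (10) can be written as a concrete actor loss. This is a hedged illustration only: the policy and critic interfaces are assumptions, the SAC entropy machinery is omitted, and the supervised distance is minimised, which matches the stated intent of pulling the learner toward the expert policy even though Eq. (4) prints the update in ascent form.

```python
import torch
import torch.nn.functional as F

def actor_loss(policy, bc_policy, q_net, batch, gamma=0.99, tau=1.0):
    """Sketch of the supervised actor objective in Eq. (4)/(10).

    `policy` is the learner pi_theta, `bc_policy` approximates the expert
    policy pi_E (the experiments compute the supervised term against a
    behaviour-cloned model), and `q_net` is the critic; all three
    interfaces are assumptions made for illustration.
    """
    s, a, r, s_next, a_next = batch
    log_prob = policy.log_prob(s, a)
    with torch.no_grad():
        # One-step weight (r + gamma * Q' - Q) used in Eq. (4); the reward r
        # comes from the ensemble reward function, not the environment.
        advantage = r + gamma * q_net(s_next, a_next) - q_net(s, a)
        expert_action = bc_policy.mean_action(s)
    # Policy-gradient part: ascend log pi * advantage, i.e. descend its negative.
    pg_term = -(log_prob * advantage).mean()
    # Supervised part: squared gap between learner and (approximate) expert
    # actions on the sampled states, weighted by tau.
    supervise_term = F.mse_loss(policy.mean_action(s), expert_action)
    return pg_term + tau * supervise_term
```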
The constraint in Theorem 3 paly a role equal to the constraint in Theorem 1, thus the learner is guaranteed to take action as same as expert in every state visitation. 6 Experiments In this section, we comparatively evaluate our proposed method on \ufb01ve continuous control tasks which is built with MuJoCo physics engine, and is wrapped via the OpenAI Gym API. we want to investigate two aspect of MRFIL: the effectiveness of learning from expert demonstrations, and the accuracy of reward function. Speci\ufb01cally, inorder to show its learning performance, we compare MRFIL to two algorithms: GAIL and Sample Ef\ufb01cient Imitation Learning (SAMIL) (Blond\u00b4 e and Kalousis 2019). The goal of this experiment is that investigate not only how well each method can mimic the expert demonstrations, but also how well MRFIL can reduces the interaction numbers. 6.1 Experimental Settings All experiments in this section are conducted in a cluster with two machines with 2 NVIDIA Tesla P40 GPUs each. We implement our algorithms with PyTorch. The agent always starts from the origin point, MRFIL performs on all environments within a 1 million step threshold. We parameterize the actor-critic model using 2-layer ReLUMLPs and use an ensemble of 5 dynamics models to implement reward function as described in Section 4.3, we parameterize the dynamic model using a 4-layer ReLU-MLPs. For both environments, a Gaussian noise of N(0, 0.2) was added to the states to introduce stochasticity. 6.2 Empirical Results We generate the expert demonstration by SAC that based on Gaussian policy, the expert demonstrations which include 15000 episodes are collected starting from the initial state distribution, and 70 percent of it are used in training process. In the experiment, to simplify the training procedure, the supervised loss term in objective function is computed by calculating the difference betweed BC model and learner, the BC model is trained by expert demonstration. the experiment result demonstrates that this way is feasible. Results of all methods are shown in Figure 1. In these environments we found that MRFIL achieves a higher performance than GAIL and SAMIL in Hopper-v2, Ant-v2 and HalfCheetah-v2. GAIL performs somewhat better than MRFIL on Walker2D-v2. For Ant-v2, Due to the high action dimension, none of GAIL and SAMIL are able to learn optimal policy. The result demonstrates that MRFIL requires signi\ufb01cantly less environment interaction on both experimental. Since our approach pre-train the actor-critic network in of\ufb02ine way, and add a supvised loss item. Therefor, these mode\ufb01cations is able to guide the learner to explore nearby expert policy state-action space. Besids, the reward function \fpunishes learner to visit unknown states, thereby providing a safeguard against distribution shift. 7" + } + ], + "Yuzheng Wang": [ + { + "url": "http://arxiv.org/abs/2307.16601v1", + "title": "Sampling to Distill: Knowledge Transfer from Open-World Data", + "abstract": "Data-Free Knowledge Distillation (DFKD) is a novel task that aims to train\nhigh-performance student models using only the teacher network without original\ntraining data. Despite encouraging results, existing DFKD methods rely heavily\non generation modules with high computational costs. Meanwhile, they ignore the\nfact that the generated and original data exist domain shifts due to the lack\nof supervision information. 
Moreover, knowledge is transferred through each\nexample, ignoring the implicit relationship among multiple examples. To this\nend, we propose a novel Open-world Data Sampling Distillation (ODSD) method\nwithout a redundant generation process. First, we try to sample open-world data\nclose to the original data's distribution by an adaptive sampling module. Then,\nwe introduce a low-noise representation to alleviate the domain shifts and\nbuild a structured relationship of multiple data examples to exploit data\nknowledge. Extensive experiments on CIFAR-10, CIFAR-100, NYUv2, and ImageNet\nshow that our ODSD method achieves state-of-the-art performance. Especially, we\nimprove 1.50\\%-9.59\\% accuracy on the ImageNet dataset compared with the\nexisting results.", + "authors": "Yuzheng Wang, Zhaoyu Chen, Jie Zhang, Dingkang Yang, Zuhao Ge, Yang Liu, Siao Liu, Yunquan Sun, Wenqiang Zhang, Lizhe Qi", + "published": "2023-07-31", + "updated": "2023-07-31", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Deep learning has made refreshing progress in various computer vision fields [15, 43, 29, 44, 10, 26, 40, 9, 7]. Despite success, large-scale models [13, 24, 30, 28, 46, 8, 27] and unavailable privacy data [2, 36, 45, 39] often impede the application of advanced technology on mobile devices. Therefore, model compression and data-free technology have become the key to breaking the bottleneck. To this end, Lopes et al. [31] propose Data-Free Knowledge Distillation (DFKD). In this process, knowledge is transferred from the cumbersome model to a small model that is more suitable for deployment without using the original training dataset. As a result, this widely applicable technology has gained much attention. To replace unavailable private data and effectively train small models, most existing data-free knowledge distilla(a) Generation-based Computational parts: Generation module Student Teacher Student Teacher (b) Sampling-based Computational parts: Figure 1. Comparison of (a) generation-based and (b) samplingbased methods. The sampling-based process uses open-world unlabeled data to distill the student network, so it does not need additional generation costs. At the same time, the extra knowledge in these unlabeled data are helpful when the teacher predicts wrong. tion methods relied on alternately training of the generator and the student, called the generation-based method. Despite not using the original training data, these generationbased methods have many issues. First, their trained generators are abandoned after the students\u2019 training [5, 17, 33, 20, 14, 51]. The training of generators brings additional computational costs, especially for large datasets. For instance, a thousand generators are trained for the ImageNet dataset [12], which introduces more computational waste [32, 16]. Then, large domain shifts exist between the generated data and the original data. The substitute data are composed of random noise transformation without supervision information. Hence, the substitute domain usually does not match the unavailable original data domain and includes extensive label noise predicted by the teacher [47]. Rather than relying on generation-based methods, Chen et al. [4] propose a sampling-based method for training the student network via open-world unlabeled data without the generation calculations. Compared with generation-based methods, sampling-based methods can avoid the training cost of generators. 
The comparison of the two methods is shown in Figure 1. Meanwhile, they try to reduce label noise by updating the learnable noise matrix, but the noise matrix\u2019s computational costs are expensive. More importantly, their sampling method only relies on strict confiarXiv:2307.16601v1 [cs.CV] 31 Jul 2023 \fdence ranking and does not consider the data domain similarity problem, so the domain shift problem is still severe. In addition, the existing generation-based and sampling-based methods can be summarized as the distillation methods of the student to mimic the outputs of a particular data example represented by the teacher [49, 34, 41]. Therefore, these methods do not adequately utilize the implicit relationship among multiple data examples, which leads to the lack of effective knowledge expression in the distillation process. Based on the above observations, we construct a novel sampling-based method to avoid unnecessary computational costs. The difference is that we hope to mitigate the domain shifts issue better and exploit the relationship among multiple samples. To cope with the domain shifts issue between the open-world and source data, we propose a comprehensive solution to it from two aspects. Firstly, we preferentially try to sample data with similar distribution to the original data domain to reduce the shifts. Secondly, low-noise knowledge representation learning is introduced to suppress the interference of label noise. To explore the data knowledge adequately, we set up a structured representation of unlabeled data to enable the student to learn the implicit knowledge among multiple data examples. As a result, the student can learn from carefully sampled unlabeled data instead of absolutely relying on the teacher. At the same time, to explore an effective distillation process, we introduce a contrastive structured relationship between the teacher and student. The student can make better progress through the structured prediction of the teacher network. In this paper, we consider a solution of DFKD that does not require additional generation costs. On the one hand, we hope to find a solution to data domain shifts from both data source and distillation methods. On the other hand, we try to explore an effective structured knowledge representation method to deal with the issues of lack of supervision information and the training difficulties in DFKD scenes. Therefore, we propose an Open-world Data Sampling Distillation (ODSD) method, which includes Adaptive Prototype Sampling (APS) and Denoising Contrastive Relational Distillation (DCRD) modules. Specifically, the primary contributions and experiments are summarized as follows: \u2022 We propose an Open-world Data Sampling Distillation (ODSD) method. The method does not require additional training of one or more generation modules, thus avoiding unnecessary computational costs. \u2022 Considering the domain shifts between the open-world and source data, we introduce an Adaptive Prototype Sampling (APS) mechanism to obtain data closer to the original data distribution. \u2022 We propose a Denoising Contrastive Relational Distillation (DCRD) module, which utilizes a low-noise representation to suppress label noise and builds contrast structured relationships to exploit knowledge from data and the teacher adequately. \u2022 Experiments show that the proposed ODSD method improves the current state-of-the-art (SOTA) in various benchmarks. In particular, our method improves 1.50%-9.59% accuracy on the ImageNet dataset. 2. 
Related Work 2.1. Data-Free Knowledge Distillation Data-free knowledge distillation is proposed to deal with the problem of a lightweight model when the original data are unavailable. Therefore, substitute data are indispensable to help transfer knowledge from the cumbersome teacher to the flexible student. According to the source of these data, existing methods are divided into generation-based and sampling-based methods. Generation-based Methods. The generation-based methods depend on the generation module to synthesize the substitute data. Lopes et al. [31] propose the first generationbased DFKD method, which uses the data means to fit the training data. Due to the weak generation ability, it can only be used on a simple dataset such as the MNIST dataset. The following methods combine the Generative Adversarial Networks (GANs) to generate more authentic and reliable data. Chen et al. [5] firstly put the idea into practice and define an information entropy loss to increase the diversity of data. However, this method relies on a long training time and a large batch size. Fang et al. [17] suggest forcing the generator to synthesize images that do not match between the two networks to enhance the training effect. Hao et al. [20] suggest using multiple pre-trained teachers to help the student, which leads to additional computational costs. Do et al. [14] propose a momentum adversarial distillation method to help the student recall past knowledge and prevent the student from adapting too quickly to new generator updates. The same domain typically shares some reusable patterns, so Fang et al. [16] introduce the sharing of local features of the generated graph, which speeds up the generation process. Since the generation quality is still not guaranteed, some methods spend extra computational costs on gradient inversion to synthesize more realistic data [50, 18]. In addition, Choi et al. [11] combine DFKD with other compression technologies and achieve encouraging performance. However, generation-based DFKD methods generate a large number of additional calculation costs in generation modules, while these modules will be discarded after students\u2019 training [4]. Sampling-based Methods. To train the student more exclusively, Chen et al. [4] propose to sample unlabeled data to replace the unavailable data without the generation module. Firstly, they use a strict confidence ranking to sample unlabeled data. Then, they propose a simple distil\fSpatial mapping denoise module Contrastive relational distillation module Teacher (Fixed) Student (Trainable) Low-noise representation Sampled data Open-World Data Edge data Teacher's prediction Prototype subcenter Density box ... ... Stage 1: Adaptive Prototype Sampling Stage 2: Denoising Contrastive Relational Distillation Logits prediction Figure 2. The pipeline of our proposed ODSD. First, all open-world unlabeled data passes through adaptive prototype sampling so that the substitute dataset resembles the distribution of the source data. Then, based on these data, the student can make progress through low-noise information representation, data knowledge mining, and structured knowledge from the teacher. lation method with a learnable adaptive matrix. Despite no additional training costs and promoting encouraging results, their method ignores the intra-class relationships of multiple unlabeled data. Simultaneously, the simple strict confidence causes more data to be sampled for simple classes, leading to imbalanced data classes. 
In addition, their proposed distillation method is relatively simple and lacks structured relationship expression, which limits the student\u2019s performance. 2.2. Contrastive Learning Contrastive learning makes the model\u2019s training efficient by learning the data differences [48]. The unsupervised training usually requires to store negative data by a memory bank [42], large dictionaries [21], or a large batch size [6]. Even it requires a lot of computation, additional normalization [19], and network update operations [3]. The high storage and computing costs seriously reduce knowledge distillation efficiency. But at the same time, this idea of mining knowledge in unlabeled data may be helpful for the student\u2019s learning. Due to such technical conflicts, there are few methods to perfectly combine knowledge distillation and contrastive learning in the past. As a rare attempt, Tian et al. [37] propose a contrastive data-based distillation method by an update a large memory bank. But for datafree knowledge distillation, the data quality cannot be guaranteed, and data domain shifts are intractable, which makes the above process challenging to carry out. In this work, we attempt to explore additional knowledge from both data and the teacher. Therefore, we further stimulate students\u2019 learning ability by using the internal relationship of unlabeled data and constructing a structured contrastive relationship. To our knowledge, this is the first combination of data-free knowledge distillation and contrastive learning at a low cost, which achieves an unexpected effect. 3. Methodology 3.1. Overview Considering the existing issues, our pipeline includes two stages: 1) unlabeled data sampling and 2) distillation training, as shown in Figure 2. For the first stage, we sample unlabeled data by an adaptive sampling mechanism to obtain data closer to the original distribution. For the second stage, the student learns the knowledge representation after denoising through a spatial mapping denoise module. Further, we mine more profound knowledge of the unlabeled data and build the structured relational distillation to help the student gain better performance. The complete algorithm is shown in Supplementary Sec.3. 3.2. Adaptive Prototype Sampling The class and scale of the unavailable source dataset and the unlabeled dataset are different in many cases, so there is a severe issue of data domain shifts, which will be discussed in Figure 3. To this end, we aim to find unlabeled data closer to the distribution of source domain data. Therefore, we propose an Adaptive Prototype Sampling (APS) method that considers the teacher\u2019s familiarity, the intra-class outliers, and the class balance of the unlabeled data. Based on these, we design three score indicators to evaluate the effectiveness of the unlabeled data for student training: the \fdata confidence score, the data outlier score, and the class density score. (a) Data confidence score. To sample data with similar distribution to original training data, we try to keep consistent prediction logits of the teacher. Firstly the teacher provides the prediction logits for the unlabeled dataset as P = [p1, p2, ..., ps] \u2208Rs\u00d7C, where pi is the prediction for a single sample satisfying R1\u00d7C. i is the i-th data, s is the number of data for the unlabeled dataset, and C is the number of classes in the teacher\u2019s prediction. 
Then the prediction is converted into the probability of the unified scale as: pi\u2032 = softmax(pi), \u02dc pi = arg max(pi\u2032), and arg max(p\u2032) denotes the confidence probability corresponding to the predicted result class. Therefore, \u02dc p = [\u02dc p1, \u02dc p2, ..., \u02dc ps] represents the confidence of each data in the whole dataset. We choose the largest one max {\u02dc p} for normalization. The confidence score can be calculated as: sci = \u02dc pi |max{\u02dc p}|. (b) Data outlier score. The label space of the two data domains is different, so there are edge data, which may affect the student\u2019s learning and need to be excluded. For example, we try to exclude data like tigers from the real class of cats, as shown in the orange part of Stage 1 in Figure 2. Firstly we separate the data according to the classes predicted by the teacher. Each class is clustered to explore the intra-class relationships through prototype learning. Then we refer to a group of CK data sampling prototypes, i.e., \b \u00b5c,k \u2208R1\u00d7C\tC,K c,k=1, which are based solely on the subcenter of target classes. c is the c-th class, and K is a hyperparameter that represents the number of prototypes defined for each class. After clustering [23], the prediction results of the c-th class can be expressed as K prototypes as {\u00b5c,k}K k=1, which reflect the intra-class relationship of the data predicted as class c. To calculate the outlier of data in class c, the predictions of these data are expressed as \u03c1i,c = pi, which couples the predictions and class information. According to the predictions and the prototype centers of the class c, the intra-class outliers of i-th data can be calculated as \u02dc oi = PK k=1 cos(\u03c1i,c, \u00b5c,k), where cos denotes the cosine similarity. Similar to the above, we select the maximum value for normalization. As a result, the outlier score can be calculated as: soi = \u02dc oi |max{\u02dc o}|. (c) Class density score. To better help the student learn various classes, we calculate the class density to better meet the sampled data\u2019s balance. As shown in Stage 1 of Figure 2, we increase the sampling range for classes with sparse data (the blue part) while we reduce the sampling range for classes with redundant data (the orange part). Based on this, we firstly separate the above intra-class outliers \u02dc oi of all data by their classes. The outliers mean value of each class can be calculated as: uc = 1 nc P pi\u2208c\u02dc oi, where nc is the number of the data predicted as c-th class. Therefore, the Dcluster parameter Dc can be calculated as: Dc = \u221auc loge (nc+C). Each data\u2019s density value equals the density value of its class as: di = Dc(pi \u2208c). After selecting the maximum value for normalization, the density score of each data can be calculated as: sdi = di |max{d}|. Finally, we define the total score with Stotal, which is calculated as: Stotal = sci \u2212soi +sdi. According to the total score, the data closer to the distribution of the original data domain are sampled, which can help the student learn better. The quantitative analysis is shown in Table 7. 3.3. Denoising Contrastive Relational Distillation After obtaining the high score data, the distillation process can be carried out. We denote fT and fS as the output of the teacher and student networks and denote x as the sampled data. 
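As a concrete reference for the sampling stage described above, here is a minimal sketch of how the three APS scores and the total score $S_{total}$ could be computed from the teacher's logits on the unlabeled pool. The function name, the use of scikit-learn's KMeans for the $K$ prototype sub-centers, and the reading of the density term as $D_c = \sqrt{u_c}/\log_e(n_c + C)$ are our assumptions; the authors' implementation may differ.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.special import softmax

def aps_scores(teacher_logits, K=3):
    """Minimal sketch of the three APS scores (Sec. 3.2).
    teacher_logits: (s, C) array of teacher predictions on the unlabeled pool."""
    probs = softmax(teacher_logits, axis=1)            # p_i'
    labels = probs.argmax(1)                           # predicted class of each sample
    conf = probs.max(1)                                # \tilde{p}_i
    s_c = conf / conf.max()                            # (a) data confidence score

    num_classes = teacher_logits.shape[1]
    outlier = np.zeros(len(probs))
    density = np.zeros(num_classes)
    for c in range(num_classes):
        idx = np.where(labels == c)[0]
        if len(idx) == 0:
            continue
        feats = probs[idx]                             # rho_{i,c}: predictions of class c
        k = min(K, len(idx))
        centers = KMeans(n_clusters=k, n_init=10).fit(feats).cluster_centers_
        # cosine similarity of each sample to every prototype sub-center mu_{c,k}
        sims = (feats @ centers.T) / (
            np.linalg.norm(feats, axis=1, keepdims=True)
            * np.linalg.norm(centers, axis=1) + 1e-12)
        outlier[idx] = sims.sum(1)                     # \tilde{o}_i
        u_c = outlier[idx].mean()                      # class-wise mean outlier value
        density[c] = np.sqrt(u_c) / np.log(len(idx) + num_classes)  # D_c
    s_o = outlier / np.abs(outlier).max()              # (b) data outlier score
    d_i = density[labels]                              # d_i = D_c for the sample's class
    s_d = d_i / np.abs(d_i).max()                      # (c) class density score
    return s_c - s_o + s_d                             # S_total
```

The unlabeled samples with the highest total scores would then be kept as the substitute training set.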
According to [22], the knowledge distillation loss is calculated as: LKD = X x\u2208X DKL(fT (x)/\u03c4, fS(x)/\u03c4), (1) where DKL is the Kullback-Leibler divergence, and \u03c4 is the distillation temperature. Although LKD allows the student to imitate the teacher\u2019s output, only its use leads to poor learning results. The main challenge is the distribution differences between the substitute and original data domains, leading to label noise interference. Simultaneously, the ground-truth labels are unavailable, so correct information supervision is missing. Therefore, we propose a Denoising Contrastive Relational Distillation (DCRD) module, which includes a spatial mapping denoise component and a contrastive relationship representation component to help the student get better performance. 3.3.1 Spatial Mapping Denoise The data distribution in the unlabeled data is different from the unavailable source data, which indicates the label noise is inevitable. Low dimensional information contains purer knowledge, which is subject to less noise interference [1]. Here, we use a low dimensional spatial mapping denoise component to help the student learn low-noise knowledge representation. Zt, Zs are the low dimensional representation of teacher and student prediction. In order to obtain a distance invariant spatial projection transformation \u03a6, the autocorrelation matrix d2 ij is defined as: d2 ij = \r \r \r\u2212 \u2192 fT (xi) \u2212\u2212 \u2192 fT (xj) \r \r \r = \u2225\u2212 \u2192 zi \u2212\u2212 \u2192 zj\u2225= bii +bjj \u22122bij, where bij = \u2212 \u2192 zi \u00b7 \u2212 \u2192 zj. We sum d2 ij in a mini-batch as: N X i N X j d2 ij = 2N \u00b7 tr(ZtZT t ), (2) where N denotes the batch size, and tr(\u00b7) denotes the trace of a matrix. Then Zt can be calculated as Zt = Vt\u039b1/2 t , \fwhere Vt is the eigenvalue after eigendecomposition, and \u039bt is the eigenmatrix. Similarly, we can get the student predictions of low dimensional representation as Zs. Then, we set up a distillation loss to correct the impact of label noise by the spatial mapping of the two networks. The spatial mapping denoise distillation loss is calculated as: Ln = \u2113h(\u03a6(fT \u00b7 fT T ), \u03a6(fS \u00b7 fS T )) = \u2113h(Zt, Zs), (3) where \u2113h(\u00b7, \u00b7) denotes the Huber loss. We can match the teacher-student relationship in a low dimensional space to learn a low-noise knowledge representation by Ln. 3.3.2 Contrastive Relational Distillation The missing supervision information limits the student\u2019s performance. It is indispensable to adequately mine the knowledge in unlabeled data to compensate for lack of information. To avoid single imitation of a particular data example, we build two kinds of structured relationship to mine knowledge from data and the teacher. Firstly, the student can adequately explore the structured relation among data by learning the instance invariant. xi, xj are the different data in a mini-batch. We calculate the prediction difference between data as: \u2113xixj s = cos(fS(xi), fS(xj))/\u03c41 P2N k=1,k\u0338=i cos(fS(xi), fS(xk))/\u03c41 , (4) where \u03c41 denotes contrastive temperature. Next, we can calculate the consistency instance discrimination loss as: Lc1 = \u22121 N N X j=1 log \u2113 xjx\u00af j s , (5) where x\u00af j denotes the data augmentation transform of data xj. The student can find knowledge directly from the multiple unlabeled data through data consistency learning. 
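For readers who prefer code, the following is a minimal PyTorch-style sketch of two of the components just defined: the spatial mapping denoise loss $L_n$ of Eq. (3) and the instance-consistency loss $L_{c1}$ of Eq. (5). We write $L_{c1}$ in the standard NT-Xent form over $2N$ anchors, which may differ in small details from Eq. (4)-(5); all tensor names are illustrative.

```python
import torch
import torch.nn.functional as F

def spatial_mapping_denoise_loss(f_t, f_s):
    """Sketch of L_n (Eq. (3)): match teacher and student in a low-dimensional
    space derived from the batch Gram matrix b_ij = z_i . z_j."""
    def low_dim(f):
        gram = f @ f.t()                           # (N, N) Gram matrix of predictions
        eigval, eigvec = torch.linalg.eigh(gram)   # eigendecomposition
        eigval = eigval.clamp(min=0)
        return eigvec * eigval.sqrt()              # Z = V * Lambda^{1/2}
    return F.huber_loss(low_dim(f_s), low_dim(f_t).detach())

def instance_consistency_loss(f_s_x, f_s_xaug, tau1=0.5):
    """Sketch of L_c1 (Eq. (5)): NT-Xent-style consistency between the student's
    predictions on a sample and on its augmented view."""
    z = F.normalize(torch.cat([f_s_x, f_s_xaug], dim=0), dim=1)  # 2N x C
    sim = z @ z.t() / tau1                                        # scaled cosine similarities
    n = f_s_x.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float('-inf'))                         # drop k = i terms
    targets = (torch.arange(2 * n, device=z.device) + n) % (2 * n)  # positive = augmented view
    return F.cross_entropy(sim, targets)
```

Both terms operate only on the unlabeled batch and the two networks' outputs, so no ground-truth labels are required.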
This unsupervised method is especially effective when the teacher makes wrong results. Secondly, we construct a structured contrastive relationship between the teacher and student, which promotes consistent learning between the teacher and student. The structured knowledge learning process is calculated as: \u2113x\u2032 i ts = cos(fT (x\u2032 i), fS(x\u2032 i))/\u03c42 P4N k=1,k\u0338=i cos(fT (x\u2032 i), fS(x\u2032 k))/\u03c42 , (6) where x\u2032 = x \u222a\u00af x. Then, we can calculate the teacherstudent consistency loss as: Lc2 = \u22121 2N 2N X j=1 log \u2113 x\u2032 j ts . (7) The student can obtain better learning performance through the mixed structured and consistent relationship learning between the two networks. Then, the contrastive relational distillation loss is Lc = Lc1 + Lc2. Finally, we can get the total denoising contrastive relational distillation loss as: Ltotal = LKD + \u03bb1\u00b7Ln + \u03bb2\u00b7Lc, (8) where \u03bb1, \u03bb2 are the trade-off parameters for training losses. 4. Experiments 4.1. Experimental Settings Datasets and Models. We evaluate the proposed ODSD method for the classification and semantic segmentation tasks. For classification, we evaluate it on widely used datasets: 32 \u00d7 32 CIFAR-10, CIFAR-100 [25], and 224 \u00d7 224 ImageNet [12]. In addition, we use the pre-trained models from CMI [18] and unify the teacher models among all baseline methods. The number of sampled data is 150k or 600k for CIFAR, and 600k for ImageNet following DFND [4]. More detailed classification settings are shown in Supplementary Sec.1. For semantic segmentation, we evaluate the proposed method on 128\u00d7128 NYUv2 dataset [35]. 200k data are sampled. More detailed segmentation settings are shown in Supplementary Sec.4. Besides, the corresponding open-world datasets are shown in Table 1, which is the same as DFND [4] for a fair comparison. Baselines. We compare two kinds of data-free knowledge distillation methods. One is to have to spend extra computing costs to obtain generation data by generation module, including: DeepInv [50], CMI [18], DAFL [5], ZSKT [33], DFED [20], DFQ [11], Fast [16], MAD [14], DFD [32], and DFAD [17]. Another is to use unlabeled data as the substitute data from easily accessible open source datasets based on sampling, i.e., DFND [4]. 4.2. Performance Comparison To evaluate the effectiveness of our ODSD, we comprehensively compare it with current SOTA DFKD methods regarding the student\u2019s performance, the effectiveness of the sampling method and training costs. In addition, some other methods only apply to small datasets and models (e.g., MNIST and AlexNet) for some reason. The test baselines mentioned in the article may be difficult for these methods, causing serious performance decline and non-competitive experimental results. To compare these methods fairly, we conduct experiments on the MNIST dataset and maintain Table 1. Illustration of original data and their substitute datasets. Original data CIFAR ImageNet NYUv2 Unlabeled data ImageNet Flickr1M ImageNet \fTable 2. Student accuracy (%) on CIFAR datasets. Bold and underline numbers denote the best and the second best results, respectively. 
Dataset Method Type ResNet-34 VGG-11 WRN40-2 WRN40-2 WRN40-2 ResNet-18 ResNet-18 WRN16-1 WRN40-1 WRN16-2 CIFAR-10 Teacher 95.70 92.25 94.87 94.87 94.87 Student 95.20 95.20 91.12 93.94 93.95 KD 95.58 94.96 92.23 94.45 94.52 DeepInvCVPR20 [50] Generation 93.26 90.36 83.04 86.85 89.72 CMIIJCAI 21 [18] 94.84 91.13 90.01 92.78 92.52 DAFLICCV 19 [5] 92.22 81.10 65.71 81.33 81.55 ZSKTNIP S 19 [33] 93.32 89.46 83.74 86.07 89.66 DFEDACMMM21 [20] 87.37 92.68 92.41 DFQCVPRW 20 [11] 94.61 90.84 86.14 91.69 92.01 FastAAAI 22 [16] 94.05 90.53 89.29 92.51 92.45 MADNIP S 22 [14] 94.90 92.64 DFND 150kCVPR21 [4] Sampling 94.18 91.77 87.95 92.56 92.02 DFND 600kCVPR21 [4] 95.36 91.86 90.26 93.33 93.11 ODSD 150k 95.05 92.02 89.14 92.94 92.34 ODSD 600k 95.70 92.55 91.53 94.31 94.02 CIFAR-100 Teacher 78.05 71.32 75.83 75.83 75.83 Student 77.10 77.10 65.31 72.19 73.56 KD 77.87 75.07 64.06 68.58 70.79 DeepInvCVPR20 [50] Generation 61.32 54.13 53.77 61.33 61.34 CMIIJCAI 21 [18] 77.04 70.56 57.91 68.88 68.75 DAFLICCV 19 [5] 74.47 54.16 20.88 42.83 43.70 ZSKTNIP S 19 [33] 67.74 54.31 36.66 53.60 54.59 DFEDACMMM21 [20] 41.06 60.96 60.79 DFQCVPRW 20 [11] 77.01 66.21 51.27 54.43 64.79 FastAAAI 22 [16] 74.34 67.44 54.02 63.91 65.12 MADNIP S 22 [14] 77.31 64.05 DFND 150kCVPR21 [4] Sampling 74.20 69.31 58.55 68.54 69.26 DFND 600kCVPR21 [4] 74.42 68.97 59.02 69.39 69.85 ODSD 150k 77.90 72.24 60.55 71.66 72.42 ODSD 600k 78.45 72.71 60.57 72.71 73.20 the same experimental settings. The experimental results are shown in Supplementary Sec.2. Experiments on CIFAR-10 and CIFAR-100. We first verify the proposed method on the CIFAR-10 and CIFAR100 [25]. We collate the performance of two kinds of SOTA methods based on data generation and data sampling. The baseline \u201cTeacher\u201d and \u201cStudent\u201d means to use the corresponding backbones of the teacher or student for direct training with the original training data, and \u201cKD\u201d represents distilling the student network with the original training data. Generation-based methods include training additional generators and calculating model gradient inversion. Samplingbased methods use the unlabeled ImageNet dataset. We reproduce the DFND using the unified teacher models, and the result is slightly higher than the original paper. As shown in Table 2, our ODSD has achieved the best results on each baseline. Under most baseline settings, ODSD brings gains of 1% or even higher than the SOTA methods, even though students\u2019 accuracy is very close to their teachers. In particular, the students of our ODSD outperform the teachers on some baselines. As far as we know, it is the first DFKD method to achieve such performance. The main reasons for its breakthrough in analyzing the algorithm\u2019s performance come from three aspects. First, our \fTable 3. Student accuracy (%) on ImageNet dataset. Method Type ResNet-50 ResNet-50 ResNet-50 ResNet-18 ResNet-50 MobileNetv2 Teacher 75.59 75.59 75.59 Student 68.93 75.59 63.97 KD 68.10 74.76 61.67 DFD [32] Generation 54.66 69.75 43.15 \u2217DeepInv2k [50] 68.00 Fast50 [16] 53.45 68.61 43.02 DFND [4] Sampling 42.82 59.03 16.03 ODSD 58.24 71.25 52.74 data sampling method comprehensively analyzes the intraclass relationships in the unlabeled data, excluding the difficult edge data and significant distribution differences data. At the same time, the number of data in each class is relatively more balanced, which is conducive to all kinds of balanced learning compared with other sampling methods. 
Second, our knowledge distillation method considers the representation of low-dimensional and low-noise information and expands the representation of knowledge through data augmentation. The structured relationship distillation method helps the student effectively learn knowledge from both multiple data and its teacher. Finally, the knowledge of our ODSD does not entirely come from the teacher but also the consistency and differentiated representation learning of unlabeled data, which is helpful when the teacher makes mistakes. The previous methods ignore the in-depth mining of data knowledge, which affect students\u2019 performance. Experiments on ImageNet. We conduct experiments on a large-scale ImageNet dataset to further verify the effectiveness. Due to the larger image size, it is challenging to synthesize training data for most generation-based methods effectively. Most of them failed. A small number of successful methods have to train 1,000 generators (one generator for one class), resulting in a large amount of additional computational costs. We set up three baselines to compare the performance of our method with the SOTA methods. Table 3 reports the experimental results. Our ODSD still achieves several percentage points increase compared with other SOTA methods, especially in the cross-backbones situation (9.59%). Due to the lack of structured knowledge representation, the DNFD algorithm performs poorly on the large-scale dataset. Comparing the performance of DFND and ODSD, our structured knowledge framework improves the overall understanding ability of the student. Comparison of training costs. In order to verify that the generation-based methods add extra costs that we mentioned in the introduction section, we further calculate the total floating point operations (FLOPs) and parameters (params) required by various DFKD algorithms, as shown \u2217For fair comparisons, we select the original version of DeepInv without the mixup data augmentation, which is the same as other methods. Table 4. Total FLOPs and params in DFKD methods. Method DeepInv CMI DAFL ZSKT DFQ DFND ODSD FLOPs 4.36G 4.56G 0.67G 0.67G 0.79G 0.56G 0.56G params 11.7M 12.8M 12.8M 12.8M 17.5M 11.7M 11.7M Table 5. APS compared with the SOTA sampling method. Sampling methods Method KD DFND ODSD Random 76.85 73.15 76.43 DFND 76.67 73.68 77.40 APS 77.27 73.89 77.90 Table 6. Segmentation results on NYUv2 dataset. Algorithm Teacher Student DAFL DFAD Fast DFND ODSD mIoU 0.517 0.375 0.105 0.364 0.366 0.378 0.397 in Table 4. Because without additional generation modules, our method only needs training costs and params of the student network. Other methods list the required calculation cost and params of both the generation module and the student. These generation modules will be discarded after student training, which causes a waste of computing power. Comparison of data sampling efficiency. To verify the effectiveness of the sampling mechanism, we compare the performance of our APS method compared with the current SOTA unlabeled data sampling method DFND [4]. Three data sampling methods (random sampling, DFND sampling, and our proposed APS) are set on three different distillation algorithms, including: KD [22], DFND [4], and our proposed ODSD. Table 5 reports the results. For KD, we use the sampled data instead of the original generated data with LKD distillation loss. From the result, this setting is competitive, even better than the distillation loss of DFND. 
For DFND, we reproduce it with open-source codes and keep the original training strategy unchanged. We find the performance of the DFND sampling method is unstable, which causes it to be lower than random sometimes. For ODSD, we use the distillation loss in Equation (8). Our proposed sampling method achieves the best performance in all three benchmarks and significantly improves performance. By comprehensively considering the data confidence, the data outliers, and the class density, our ODSD can more fully mine intra-class relationships of the unlabeled data. As a result, the sampled data are more helpful for subsequent student learning. Experiments about semantic segmentation. We also conduct experiments on segmentation tasks. Mean Intersection over Union (mIoU) is set as the evaluation metric. Table 6 shows segmentation results on the NYUv2 dataset. Our ODSD also achieves the best performance. The visualization results and more detailed analysis are shown in Supplementary Sec.4. \f4.3. Diagnostic Experiment To verify the effectiveness of our method, we conduct diagnostic studies on the CIFAR-100 dataset. We use ResNet34 as the teacher\u2019s backbone and ResNet-18 as the student\u2019s backbone. 150k data are sampled, and the student trains 200 epochs. The optimal values obtained by diagnostic experiments are also the default setting of 4.2 comparison experiments. In addition to what is shown in this section, more diagnostic experiments are shown in Supplementary Sec.2. Distillation training objective. We first investigate our overall training objective (cf. Equation (8)). Two different data sampling numbers are set in this experiment. As shown in the experiments (1-4) of Table 7, the model with LKD alone achieves accuracy scores of 74.39% and 77.27% on 50k and 150k data sampling settings. Adding Ln or Lc individually brings gains (i.e., 0.32%, 0.31%/ 0.43%, 0.44%), indicating the effectiveness of our proposed distillation method. By combining all the training objectives, our method achieves better performance with 75.26% and 77.90%. Therefore, the proposed training objectives are effective and can help students gain good performance. Data sampling scores. To obtain more efficient data, we define three scores for our sampling method in section 3.2. To verify their effectiveness, we further carry out ablation experiments. When using the complete three efficient evaluation criteria, the model can achieve the best performance with 75.26% and 77.90% accuracy shown in experiments (5-8) of Table 7. When the confidence score sci is abandoned, the familiarity of the teacher network with the sampled data decreases, reducing the amount of adequate information contained in the data. Without the outlier score soi, the lack of modelling of the intra-class relationship of the data to be sampled leads to increased data distribution difference between the substitute data domain and the original data domain. Further, the class density score sdi can measure the number of data in each class and maintain the balance of the sampled data. In summary, all three score indicators can help students perform better. 4.4. Visualization To verify the distribution difference between sampled data and the original data of each sampling-based method, we use t-SNE [38] to visualize the feature distribution. The pre-trained ResNet-34 network on the CIFAR-100 dataset and ResNet-50 network on the ImageNet dataset is used as the teacher network. For both datasets, we reserve 100 classes of validation data. 
We compare random sampling, the DFND sampling method, and our Adaptive Prototype Sampling (APS) method. Figure 3 shows the data distribution differentiation results. Our clustering results are closer to the extracted features of the original data. Reducing the distribution difference between sampled and original data Table 7. A set of diagnostic studies of proposed method. Training objective L Data sampling scores S ID Setting Accuracy (%) ID Setting Accuracy (%) 50k 150k 50k 150k (1) ours 75.26 77.90 (5) ours 75.26 77.90 (2) w/o Ln 74.82 77.71 (6) w/o sci 73.96 77.04 (3) w/o Lc 74.71 77.58 (7) w/o soi 68.07 76.67 (4) w/o Ln, Lc 74.39 77.27 (8) w/o sdi 70.24 76.59 Random DFND APS (ours) CIFAR-100 ImageNet Figure 3. T-SNE visualization of the data distribution similarity. Red dots denote source domain data, while blue dots denote unlabeled sampling data. The distance between dot groups reflects the similarity between data domains. The data distribution obtained by our APS sampling method is more similar to that of the source domain, which effectively reduces domain noise and improves learning performance. helps reduce data label noise, which is the key for the student to perform well. 5." + }, + { + "url": "http://arxiv.org/abs/2303.11611v2", + "title": "Out of Thin Air: Exploring Data-Free Adversarial Robustness Distillation", + "abstract": "Adversarial Robustness Distillation (ARD) is a promising task to solve the\nissue of limited adversarial robustness of small capacity models while\noptimizing the expensive computational costs of Adversarial Training (AT).\nDespite the good robust performance, the existing ARD methods are still\nimpractical to deploy in natural high-security scenes due to these methods rely\nentirely on original or publicly available data with a similar distribution. In\nfact, these data are almost always private, specific, and distinctive for\nscenes that require high robustness. To tackle these issues, we propose a\nchallenging but significant task called Data-Free Adversarial Robustness\nDistillation (DFARD), which aims to train small, easily deployable, robust\nmodels without relying on data. We demonstrate that the challenge lies in the\nlower upper bound of knowledge transfer information, making it crucial to\nmining and transferring knowledge more efficiently. Inspired by human\neducation, we design a plug-and-play Interactive Temperature Adjustment (ITA)\nstrategy to improve the efficiency of knowledge transfer and propose an\nAdaptive Generator Balance (AGB) module to retain more data information. Our\nmethod uses adaptive hyperparameters to avoid a large number of parameter\ntuning, which significantly outperforms the combination of existing techniques.\nMeanwhile, our method achieves stable and reliable performance on multiple\nbenchmarks.", + "authors": "Yuzheng Wang, Zhaoyu Chen, Dingkang Yang, Pinxue Guo, Kaixun Jiang, Wenqiang Zhang, Lizhe Qi", + "published": "2023-03-21", + "updated": "2023-12-18", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Deep learning has achieved great success in many fields (Devlin et al. 2018; Dosovitskiy et al. 2020; Yang et al. 2023c,a,b,d; Liu et al. 2023b,c,a; Wang et al. 2023c,b). Along with this process, deep learning models are increasingly expected to be deployed in established and emerging artificial intelligence fields. However, high-performance models\u2019 large scale and high computational costs (Ramesh et al. 
2022) prevent this technology from being applied to mobile devices, driverless cars, and tiny robots. More importantly, many studies have shown that well-trained deep learning models are vulnerable to adversarial examples containing only minor changes (Goodfellow, Shlens, and Szegedy 2014; Chen et al. 2022a, 2023). Therefore, training robust small-capacity models has become the key to breaking the bottleneck.

Various defensive strategies have been proposed for adversarial robustness (Madry et al. 2017; Jia et al. 2019; Chen et al. 2022b; Wang et al. 2023a). Among them, Adversarial Training (AT) has been considered the most effective approach (Athalye, Carlini, and Wagner 2018; Croce and Hein 2020). By generating adversarial examples, models can learn robustness knowledge to deal with various adversarial attacks, which significantly improves the robustness of large-capacity models. However, small models that rely only on AT struggle to achieve strong robustness due to their limited capacity. Guided by a pre-trained robust teacher, the robustness of small models can be improved; this process is called Adversarial Robustness Distillation (ARD) (Goldblum et al. 2020). Despite improving the robustness of small models, existing ARD methods are still hard to apply in real-world scenes due to impractical settings. The original training data is of primary concern. Firstly, all existing ARD methods assume that the original training data is available throughout the distillation process (Goldblum et al. 2020; Zhao et al. 2022). In practical applications with high robustness requirements, the original data is usually private and unavailable (e.g., face data for face recognition systems, disease data for medical diagnosis, and financial data for quantitative investment). Secondly, some technologies avoid relying on original private data by using open-world or out-of-domain (OOD) unlabeled data (Fang et al. 2021a). However, these methods rely on the assumption that data resembling the private data can always be obtained from open datasets. Although some methods claim to use OOD data, their performance degrades drastically as the discrepancy between the unavailable original data and the unlabeled data increases. Therefore, good performance depends heavily on broadly similar image patterns between the data domains rather than on truly OOD data (Yang et al. 2021). Based on these observations, existing technologies are still challenging to deploy in high-security robustness scenes. One question is whether we can efficiently train small, easily deployable, robust models without the original private data or specific data with similar patterns. To explore this question, we propose a novel task called Data-Free Adversarial Robustness Distillation (DFARD). The diagrams are shown in Figure 1. Compared with the existing KD (a) and ARD (b) tasks, our DFARD only uses generated data, which is more general and practical.

Figure 1: Diagrams of (a) Knowledge Distillation (KD), (b) Adversarial Robustness Distillation (ARD), and (c) Data-Free Adversarial Robustness Distillation (DFARD).
S and T represent the student and the teacher network respectively. Fs and Ft represent the search spaces. xori and xgen is the original and generated data. xadv is the adversarial examples. Considering the knowledge transfer process between the teacher and student networks, we demonstrate that the information upper bound is lower in the DFARD than in existing tasks. While removing the ARD task\u2019s dependence on private data, the challenges lie in less effective knowledge transfer and less data knowledge in the generated data. To tackle the issues, we select the commonly used generator training objectives as a DFARD baseline and optimize it from the following aspects: 1) To improve the effectiveness of knowledge transfer, we first propose an Interactive Temperature Adjustment (ITA) strategy to help students find more suitable training objectives for each training epoch. 2) To retain more data information, we then propose an Adaptive Generator Balance (AGB) module to better balance the similarity of the data domains and the information content. In addition, our method uses adaptive hyperparameters to avoid a large number of parameter tuning. Specifically, the primary contributions and experiments are summarized below: \u2022 To our best knowledge, we are the first to propose a novel task named DFARD to apply higher security level application scenes. Further, we theoretically demonstrate the challenges of this new task via the information bound. \u2022 We optimize DFARD to improve the effectiveness of knowledge transfer and retain more data information. A plug-and-play ITA strategy and an AGB module are proposed to gain the simplest combination of generator losses, avoiding complex loss designs and weight balance, significantly reducing parameter tuning costs. \u2022 Experiments show that our DFARD method achieves stable and reliable performance on multiple benchmarks comparing combinations of existing technologies. 2 Related Work 2.1 Data-Free Generation Data-free generation technology is proposed to generate substitute data with Generative Adversarial Networks (GANs) or other generation modules. During this process, researchers do not need to access any data, thus being able to deal with data privacy and other data unavailable issues. Chen et al. (Chen et al. 2019) first introduce the generator into a data-free generation process to get more vital generation capabilities. To obtain the generated data that the student does not learn well, Micaelli et al. (Micaelli and Storkey 2019) introduce the method of adversarial generation. They prompt the generator to generate data with more significant differences between the student\u2019s and teacher\u2019s predictions so that the shortcomings are made up in the learning process. Choi et al. (Choi et al. 2020) add batch categorical entropy into the data-free generation process to promote class balance. To further improve the generation speed, Fang et al. (Fang et al. 2022) propose feature sharing to simplify the generation process of each step. To improve generation quality, Bhardwaj et al. (Bhardwaj, Suda, and Marculescu 2019) introduce model inversion and use the intermediate layer statistics of the teacher model to restore the original data. Based on this, Yin et al. (Yin et al. 2020) introduce adversarial inversion, and Fang et al. (Fang et al. 2021b) introduce contrastive learning to enhance the generation quality further. 
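As a pointer to how the adversarial generation idea surveyed above works in practice, here is a minimal, generic update step for a data-free generator trained to maximize teacher-student disagreement. All handles (`generator`, `teacher`, `student`, `opt_g`) and hyperparameters are illustrative placeholders, not any cited paper's released code.

```python
import torch
import torch.nn.functional as F

def adversarial_generation_step(generator, teacher, student, opt_g,
                                batch_size=256, z_dim=100):
    """Generic sketch: update the generator to synthesize inputs on which the
    fixed teacher and the current student disagree most."""
    z = torch.randn(batch_size, z_dim)
    x_fake = generator(z)
    with torch.no_grad():
        p_t = F.softmax(teacher(x_fake), dim=1)       # teacher is treated as fixed
    log_p_s = F.log_softmax(student(x_fake), dim=1)
    disagreement = F.kl_div(log_p_s, p_t, reduction='batchmean')  # KL(p_t || p_s)
    loss_g = -disagreement                             # maximize the mismatch
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_g.item()
```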
2.2 Adversarial Robustness Distillation
Early adversarial training methods focus on learning directly from adversarial examples to improve model robustness (Madry et al. 2017; Zhang et al. 2019). However, expanding the adversarial training set increases training costs. More importantly, the robustness improvement of small models is limited by their capacity. Adversarial robustness distillation is proposed to address these issues, under the setting that both a pre-trained robust teacher model and the original training data are available. Goldblum et al. (2020) first propose the concept of adversarial robustness distillation and show that improving the robustness of small models is feasible without additional training costs. Zi et al. (2021) find that the soft labels given by the teacher are very effective and can significantly improve the robustness of the student. Zhu et al. (2022) find that the teacher's confidence in the student's adversarial examples continues to decline, so the teacher may not always give correct guidance; they propose a multi-stage strategy that lets the student learn independently in later training. Zhao et al. (2022) utilize multiple teachers to learn from natural and robust scenes separately and, based on this, try to preserve clean accuracy while improving adversarial robustness.

3 The Challenges of DFARD
To explore the impact of missing original training data on existing ARD tasks, we start with the effectiveness of knowledge transfer in the distillation process. By analyzing the Lipschitzness of the robust model and the properties of the generated data, we theoretically demonstrate why DFARD is more challenging than the KD and ARD tasks: DFARD has a lower upper bound on the transferable information in the knowledge transfer process. This conclusion implies that, for the DFARD task, more knowledge is needed. Based on this, we try to improve the efficiency of knowledge transfer and ensure higher data information to meet this challenge. Inspired by human education and curriculum learning (Bengio et al. 2009; Pentina, Sharmanska, and Lampert 2015), we look at the knowledge transfer process from the perspective of 1) the knowledge from the teacher and 2) the knowledge from the data. Detailed discussions are as follows:

Figure 2: A toy experiment about the effect of different temperatures and a simple verification of the easy-to-hard process. (a) and (b) show the impact of using different fixed temperatures on student performance for ARD and DFARD, respectively. (c) shows simple step temperature strategies, i.e., the trend of difficulty changes in the learning objective. (d) shows the performance comparison of students under these strategies.

Figure 3: Diagrams of the teacher's soft labels. As the temperature increases, the distance between the teacher's and the naive student's prediction distributions decreases, and the difficulty of the learning objectives decreases.

The Knowledge from the Teacher.
The key lies in how to build an easy-to-hard process. In fact, distillation temperature enables the teacher network to provide suitable soft labels to transfer knowledge from the cumbersome model to a small model (Hinton et al. 2015; Romero et al. 2014). The temperature controls the discrepancy between two distributions and represents learning objectives of varying degrees of difficulty (M\u00a8 uller, Kornblith, and Hinton 2019; Li et al. 2022a; Zi et al. 2021; Li et al. 2022b) as shown in Figure 3. Most existing methods ignore the usefulness of the distillation temperature itself, regard it as a fixed hyperparameter, and inefficiently search for optimum. On this basis, they spend several times on computational costs. To verify that the easy-to-hard process can or cannot improve knowledge transfer efficiency and better deal with DFARD tasks, we conduct a toy experiment as shown in Figure 2. We first verify the effect of different fixed distillation temperatures on two tasks. We train all student models for 50 epochs with or without original data and report the best robustness accuracy under AutoAttack (AA) attack (Croce and Hein 2020). From Figure 2(a) and (b), a general conclusion is that different temperatures have effects on the two tasks. Further, we respectively construct three temperature strategies of step increase, fixed constant, and step decrease to build learning objectives with different difficulties for each epoch (Figure 2(c)). The inflection points of temperature change are at the 15th and 35th epochs. Based on these strategies, we test the robustness performance as shown in Figure 2(d). We find that the strategy of step decrease achieves the best results. As shown in Figure 3, the decrease in temperature means the learning difficulty increases. That is, the easy-to-hard knowledge promotes the student\u2019s progress. The Knowledge from the Data. For human education, apart from teachers, good tutorials are equally important. Generally speaking, tutorials that contain more knowledge can give students more help in the process of learning. More knowledge with more information is crucial. Inspired by this, we analogize the generated data as a medium of knowledge transfer to the tutorials. The generated data with different information content may also help students differently. Existing all datafree generation methods set a fixed generation loss weight to train a generator and constrain the teacher\u2019s confidence in the generated data (Chen et al. 2019; Yin et al. 2020; Choi et al. 2020; Fang et al. 2021b, 2022). To obtain generated data that is closer to the original distribution, the teacher model\u2019s predictions ft(\u02c6 x) should be close to the one-hot labels y, e.g., minimize them with the following cross-entropy loss: Lcls = CE(ft(\u02c6 x), y), (1) where y can be randomly generated labels or pseudo-labels of the teacher. \u02c6 x is synthesized by the generator g through random noise z and the label y: \u02c6 x = g(z, y). In the above process, the teacher provides a more prominent target logit or less varied wrong logits. We argue that the above process gradually decreases the information content of the teacher\u2019s soft labels. 
The information content is a basic quantity derived from the probability prediction for the \fGeneration Stage Random noise Generator Synthetic data Student Teacher Random labels Predictions Back propagation KL Loss CE Loss Distillation Stage Training data Student Teacher Predictions KL Loss Back propagation Figure 4: The pipeline of our optimized DFARD method from the most commonly used training baseline. Our method consists of two stages: (1) In the generation stage, we design an Interactive Temperature Adjustment strategy to adjust the temperature \u02dc \u03c4 according to the student\u2019s learning. Simultaneously, we propose an Adaptive Generator Balance module to balance the similarity between data domains and the information content of data. (2) In the distillation stage, we keep the interactive temperature to help the student learn better. Training data represents the generated adversarial examples. generated data of the teacher model. In this paper, we measure the information content by Information Entropy. Some studies have shown that such soft labels reduce information entropy and are not conducive to the knowledge distillation process (Shen et al. 2021; Zhang et al. 2022b). We provide a theoretical analysis based on the definition of information entropy. Information Entropy. The entropy of a random variable is the average level of \u201cinformation\u201d or \u201cuncertainty\u201d (Shannon 1948) inherent to the variable\u2019s possible outcomes. Given a discrete random variable X, which takes values in the alphabet X and is distributed according to p : X \u2192[0, 1] : H(X) = \u2212 X x\u2208X p(x) log p(x) = E[\u2212log p(X)], (2) where P denotes the sum over the variable\u2019s possible values. Based on the definition, in the existing process, the teacher\u2019s soft label will also change in the existing generation process as \u201cinformation\u201d decreases and \u201cuncertainty\u201d decreases. Although better distribution similarity between generated and original domains (Chen et al. 2019; Fang et al. 2021b), we think existing methods ignore the information content of data and the teacher\u2019s soft labels. A trade-off relationship rather than ignoring one of these two helps improve student performance (Tests and analyses are shown in Table 3). 4 Data-Free Adversarial Robustness Distillation According to the above analysis, we clarify the motivation and perform simple analytical tests. In this section, we try to use the adaptive approach to improve the efficiency of knowledge transfer and ensure higher data information while reducing hyperparameter tuning costs. Firstly, we propose an Interactive Temperature Adjustment (ITA) strategy, which dynamically adjusts the distillation temperature according to the training status of the students in the current training epoch. The strategy helps students find the appropriate learning objectives for each training epoch. Secondly, we design an Adaptive Generator Balance (AGB) module to balance similarity and information capacity, avoiding the excessive pursuit of one. The pipeline is shown in Figure 4. The generator is trained via the ITA and AGB methods. Then the student is trained with the ITA strategy. Besides, the detailed training process is shown in Algorithm 1. 4.1 Interactive Temperature Adjustment Our ITA strategy adjusts the teacher\u2019s soft label through the interactive distillation temperature \u02dc \u03c4 so that the confidence gap between the teacher and student models is kept in a suitable range. 
In the generate stage, the generated data should transfer the information of decision boundary from the teacher model to the student model as effectively as possible (Heo et al. 2019). Unlike previous methods, our generator does not directly synthesize specific data but starts from accessible data for the student to learn. We maximize the predictions of the student and the teacher to find effective generated data via the adversarial generation loss as: Ladv = \u2212KL(ft(\u02c6 x; \u03b8t), fs(\u02c6 x; \u03b8s), \u02dc \u03c4), (3) where KL denotes the Kullback-Leibler (KL) divergence loss. Then we calculate the teacher\u2019s confidence for the generated data and collect the numerical value of the confidence Cont and the class with the highest confidence c as Cont, c = arg max ft(\u02c6 x). The student\u2019s confidence for class c can be directly obtained as Cons. The interactive temperature \u02dc \u03c4 can be calculated as: \u02dc \u03c4 = max ( 1 bs bs X i=1 |Cont \u2212Cons| \u00b7 C, 1 ) , (4) where C is total number of classes and bs denotes the batch size. The calculated absolute value represents the difference between the student\u2019s and the teacher\u2019s prediction in the current training epoch, thus reflecting the current learning situation. In the early epochs, higher distillation temperatures are \fset to obtain the generated data that is easier for students to learn. As the student\u2019s prediction gets closer to the teacher\u2019s, the temperature drops to synthesize more challenging data. Notably, ITA can be combined with other generation methods as a plug-and-play strategy. Similarly, we consider the effectiveness of knowledge transfer in the distillation stage. As the progress of the students continues to increase the learning difficulty, we define the interactive knowledge distillation loss as: LKD = X \u02c6 x\u2032\u2208\u02c6 X \u2032 KL(ft(\u02c6 x\u2032; \u03b8t), fs(\u02c6 x\u2032; \u03b8s), \u02dc \u03c4), (5) where \u02c6 x\u2032 is the adversarial examples of the generated data \u02c6 x (Goldblum et al. 2020). 4.2 Adaptive Generator Balance For generator training objectives, we choose the common training losses (called Vanilla DFARD) to elaborate on our proposed optimization for simplicity and persuasiveness. The proposed AGB module can adaptively adjust the weight of the losses to balance the domain similarity and data information content according to the current confidence of the teacher (related to information entropy). For the generator g, we combine Equation (1) and (3) as: Lgen = \u03bb \u00b7 Lcls + (1 \u2212\u03bb) \u00b7 Ladv, (6) where \u03bb is the trade-off parameter. When the teacher\u2019s confidence is too high, the information content of the data may be ignored. At this time, \u03bb adaptively reduces to avoid blindly pursuing similarity. Specifically, \u03bb is calculated as: \u03bb = 1 C \u00b7 1 bs Pbs i=1 Cont . (7) The average confidence 1 bs Pbs i=1 Cont is greater than or equal to the randomly expected value 1 C for the dataset with the number of classes C. Therefore, it always satisfies 0 < \u03bb \u22641. With the help of AGB, we can increase the amount of information in the generated data while satisfying the similarity, which helps the student\u2019s performance. Simultaneously, we no longer need to try many different weight combinations to test results. Therefore, our method is more simple and more convenient. 5 Experiments 5.1 Experimental Setup Dataset and Model. 
We evaluate the proposed DFARD method on 32\u00d732 CIFAR-10 and CIFAR-100 datasets (Krizhevsky, Hinton et al. 2009), which are the most commonly used datasets for testing adversarial robustness. For a fair comparison, we use the same pre-trained WideResNet (WRN) teacher models with (Zi et al. 2021). Furthermore, we evaluate all methods using the student with ResNet-18 (RN-18) (He et al. 2016) and MobileNet-V2 (MN-V2) (Sandler et al. 2018) following existing ARD methods (Zhu et al. 2022; Zi et al. 2021). Algorithm 1: Training process of our Data-Free Adversarial Robustness Distillation Input: A pre-trained teacher network ft, a generator g with parameter \u03b8g, a student fs with parameter \u03b8s, distillation epochs T, the iterations of generator g in each epoch Tg, the iterations of student fs in each epoch Ts. 1: Initialize parameter \u03b8g and \u03b8s 2: for i in [1, . . . , T] do 3: // Generation stage 4: for j in [1, . . . , Tg] do 5: Randomly sample noises and labels (z, y) 6: Synthesize training data \u02c6 x = g(z, y) 7: Update generator g through Equation (6) 8: end for 9: // Distillation stage 10: for j in [1, . . . , Ts] do 11: Synthesize training data \u02c6 x = g(z, y) 12: Generate adversarial examples \u02c6 x \u2192\u02c6 x\u2032 13: Distill the student fs through Equation (5) 14: end for 15: end for Output: The student fs with adversarial robustness. Baselines. We compare our optimized method with different data-free generation methods, including Dream (Bhardwaj, Suda, and Marculescu 2019), DeepInv (Yin et al. 2020), DAFL (Chen et al. 2019), DFAD (Fang et al. 2019), ZSKT (Micaelli and Storkey 2019), DFQ (Choi et al. 2020), CMI (Fang et al. 2021b), and Fast (Fang et al. 2022). We use the same PGD-attack to generate the adversarial examples to train the student for all baseline methods. For a fair comparison, the distillation process uses the same training loss as ARD (Goldblum et al. 2020). Implementation details. Our proposed method and all others are implemented in PyTorch. All models are trained on RTX 3090 GPUs (Paszke et al. 2019). The students are trained via SGD optimizer with cosine annealing learning rate with an initial value of 0.05, momentum of 0.9, and weight decay of 1e-4. The generators are trained via Adam optimizer with a learning rate of 1e-3, \u03b21 of 0.5, \u03b22 of 0.999. The distillation batch size and the synthesis batch size are both 256. The distillation epochs T is 200, the iterations of generator Tg is 1, and the iterations of student Ts is 5. Both the student model and the generator are randomly initialized. A 10-step PGD (PGD-10) with a random start size of 0.001 and step size of 2/255 is used to generate adversarial samples. The perturbation bounds are set to L\u221enorm \u03f5 = 8/255. Attack Evaluation. We evaluate the adversarial robustness with five adversarial attacks: FGSM (Goodfellow, Shlens, and Szegedy 2014), PGDS (Madry et al. 2017), PGDT (Zhang et al. 2019), CW\u221e(Carlini and Wagner 2017) and AutoAttack (AA) (Croce and Hein 2020). These methods are the most commonly used for adversarial robustness evaluation. The maximum perturbation is set as \u03f5 = 8/255. The perturbation steps for PGDS, PGDT and CW\u221eare 20. In addition, we test the accuracy of the models in normal conditions without adversarial attacks (Clean). 
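To make Eqs. (3)–(7) and the training process in Algorithm 1 more concrete, the following is a hedged PyTorch sketch of the interactive temperature, the adaptive trade-off weight, and the temperature-scaled KL term; the tau-squared scaling of the KL loss and the zero placeholder for L_cls are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def interactive_temperature(t_logits, s_logits, num_classes):
    """Eq. (4): tau = max( mean_i |Con_t - Con_s| * C, 1 ).

    Con_t is the teacher's top confidence; Con_s is the student's confidence
    for the teacher's most confident class c.
    """
    t_prob, s_prob = F.softmax(t_logits, dim=1), F.softmax(s_logits, dim=1)
    con_t, c = t_prob.max(dim=1)
    con_s = s_prob.gather(1, c.unsqueeze(1)).squeeze(1)
    return torch.clamp((con_t - con_s).abs().mean() * num_classes, min=1.0)

def agb_lambda(t_logits, num_classes):
    """Eq. (7): lambda = 1 / (C * mean_i Con_t); mean confidence >= 1/C, so 0 < lambda <= 1."""
    con_t = F.softmax(t_logits, dim=1).max(dim=1).values
    return 1.0 / (num_classes * con_t.mean())

def kl_at_temperature(t_logits, s_logits, tau):
    """Temperature-scaled KL(teacher || student) used inside Eq. (3) and Eq. (5)."""
    return F.kl_div(F.log_softmax(s_logits / tau, dim=1),
                    F.softmax(t_logits / tau, dim=1),
                    reduction="batchmean") * tau * tau  # tau^2 scaling is a common KD convention

if __name__ == "__main__":
    C, bs = 10, 4
    t_logits, s_logits = torch.randn(bs, C) * 3, torch.randn(bs, C)
    tau = interactive_temperature(t_logits, s_logits, C)
    lam = agb_lambda(t_logits, C)
    l_adv = -kl_at_temperature(t_logits, s_logits, tau)    # Eq. (3): minimizing L_adv widens the teacher-student gap
    l_gen = lam * torch.zeros(()) + (1.0 - lam) * l_adv    # Eq. (6) with a zero placeholder for L_cls
    print(float(tau), float(lam), float(l_gen))
```

In a full training loop these helpers would be called once per epoch inside the generation and distillation stages of Algorithm 1, with the interactive temperature shared between the two stages.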
\fModel Method CIFAR-10 CIFAR-100 Attacks Evaluation Attacks Evaluation Clean FGSM PGDS PGDT CW AA Average Clean FGSM PGDS PGDT CW AA Average RN-18 Dream 68.26 34.76 29.72 31.36 27.96 26.70 30.10 22.00 10.18 9.52 9.85 7.11 6.68 8.67 DeepInv 64.53 35.18 31.26 32.49 28.77 27.93 31.13 40.91 19.46 17.86 18.68 15.27 14.54 17.16 DAFL 54.98 27.04 24.75 25.87 22.90 22.25 24.56 41.67 21.42 20.13 20.81 17.96 17.16 19.50 DFAD 57.58 31.54 29.68 30.65 26.94 26.47 29.06 37.57 18.95 17.53 18.14 15.06 14.57 16.85 ZSKT 58.08 31.98 29.94 30.92 27.21 26.68 29.35 38.91 20.16 18.78 19.41 16.38 15.52 18.05 DFQ 54.44 26.90 24.63 25.78 22.37 21.57 24.25 45.24 22.49 20.78 21.61 18.24 17.38 20.10 CMI 53.28 25.78 23.14 23.97 21.03 20.38 22.86 45.04 22.78 21.02 21.90 17.90 16.97 20.11 Fast 61.13 31.40 28.01 29.17 26.26 25.42 28.05 36.75 18.66 17.72 18.33 15.57 14.77 17.01 Ours* 65.10 36.36 33.47 34.89 30.79 30.06 33.11 45.33 24.08 22.71 23.38 19.84 19.00 21.80 Ours 66.44 38.53 35.94 37.15 32.79 32.14 35.31 46.33 24.56 22.94 23.59 20.12 19.19 22.08 MN-V2 Dream 64.95 32.03 26.09 27.63 23.83 22.28 26.37 18.73 9.78 8.96 9.37 6.93 6.33 8.27 DeepInv 59.53 31.76 28.42 29.74 25.86 24.99 28.15 37.75 16.94 15.54 16.19 12.65 11.80 14.62 DAFL 47.53 24.51 21.18 22.09 19.50 18.86 21.23 40.46 20.63 19.03 19.78 16.54 15.82 18.36 DFAD 56.13 29.73 26.48 27.64 24.35 24.02 26.44 25.41 12.75 11.42 11.95 9.58 9.24 10.99 ZSKT 57.02 30.29 27.07 28.25 24.89 24.40 26.98 25.16 12.34 11.36 11.78 9.69 9.16 10.87 DFQ 44.25 21.13 19.14 20.07 16.87 16.20 18.68 40.26 19.45 17.74 18.44 15.14 14.35 17.02 CMI 44.53 21.34 19.67 19.97 16.25 15.97 18.64 40.23 19.76 17.96 18.56 14.86 14.02 17.03 Fast 54.06 28.23 25.69 26.83 23.18 22.42 25.27 38.69 18.58 16.77 17.58 14.62 13.75 16.26 Ours* 59.79 32.25 29.25 30.24 26.18 25.56 28.70 40.94 21.47 20.18 20.89 17.60 16.82 19.39 Ours 61.16 34.46 31.66 32.80 28.40 27.90 31.04 41.78 22.04 20.84 21.68 17.93 17.04 19.91 Table 1: Adversarial robustness accuracy (%) on CIFAR-10 and CIFAR-100. The maximum adversarial perturbation \u03f5 is 8/255. Bold numbers denote the best results. Average indicates the average value of the robustness test, which does not include the clean accuracy. \u201cOurs*\u201d means training the generator as Equation (6). \u201cOurs\u201d means the complete method as Algorithm 1. Method Dream DeepInv DAFL ZSKT DFQ CMI Fast Ours CIFAR-10 29.17h \u2217m 24.70h \u2217nm 4.04h \u2217nm 3.03h \u2217n 4.09h \u2217nm 48.69h \u2217nm 6.10h \u2217nm 3.28h CIFAR-100 125.14h \u2217m 101.28h \u2217nm 13.43h \u2217nm 5.62h \u2217n 13.11h \u2217nm 77.99h \u2217nm 12.16h \u2217nm 5.85h Table 2: The synthesis time of various data-free generation methods. We test the specific GPU time on a single RTX 3090 for the entire generation process. h is short for hours, n denotes the distillation temperature hyperparameter tuning times, and m denotes the generator loss weights tuning times. 5.2 Comparison with Other Methods To compare the effects of various data-free generation methods, we set the same distillation process as ARD (Goldblum et al. 2020). Therefore, the difference only lies in the generator loss function of these methods. We select and report the best checkpoint of all methods among all epochs. The best checkpoints are based on the adversarial robustness performance against PGDT attack. For the computational costs, we compare the synthesis time on the generation stage of different generation methods. Performance Comparison. The robustness performances of our and other baseline methods are shown in Table 1. 
Our generation method (Ours*) achieves better adversarial robustness performance in all baselines. The results demonstrate that our interactive and adaptive approach can be more effective for the challenging DFARD task. For different backbone and dataset combinations, our method improves the average adversarial robustness by 1.98%, 0.55%, 1.69%, and 1.03%, respectively, compared to other best results. Notably, our method maintains the most stable performance in various settings, while others may perform poorly in some settings. We consider that one reason is our interactive learning objective, which helps to improve students\u2019 versatility in different settings. Specifically, Dream (Bhardwaj, Suda, and Marculescu 2019) inverts enough data for normal ARD. However, these data might not be suitable for student learning as the data comes exclusively from teachers. DeepInv (Yin et al. 2020) and CMI (Fang et al. 2021b) excessively pursue distribution similarity between generated and original domains ignoring the information content of data. Fast (Fang et al. 2022) uses a feature-sharing method, but the lack of rich new features in complex datasets leads to performance degradation. In contrast, some early methods (DAFL (Chen et al. 2019), DFAD (Fang et al. 2019), ZSKT (Micaelli and Storkey 2019) and DFQ (Choi et al. 2020)) are more stable and effective, but these methods keep the same teacher predictions throughout the generation process. Therefore, their learning objectives may not meet every epoch for the randomly initialized students. Good results are often inseparable from multiple hyperparameter tuning. Generation Computational Costs. In addition, we also compare the overall generation computational costs of all methods while considering the hyperparameter tuning calculation costs. The results are shown in Table 2. Other methods must test multiple sets of temperature parameters (denoted by n) or trade-offs between multiple generator losses (denoted by m). Notably, some methods require significantly higher weights tuning times m when using four or more generator losses, e.g., Dream, DeepInv, DFQ, and CMI. Thanks to fully adaptive parameters, our generator\u2019s computational costs are significantly lower than most other methods. In summary, our method has the most stable performance while maintaining the advantage of significantly lower generation cost. Therefore, our method is simple, reliable, and convenient. \f(a) (b) Figure 5: Performance of RN-18 students trained with different teachers. Students train 100 epochs for 4 generative methods. (a) shows clean accuracy, and (b) shows robust accuracy under AA attack. ID Settings Attacks Evaluation Clean FGSM PGDS PGDT CW AA Average (1) Vanilla DFARD 58.35 30.68 28.59 29.75 24.67 23.95 27.53 (2) w/ ITA (G) 59.16 31.05 28.59 29.90 24.87 24.21 27.72 (3) w/ ITA (G+D) 60.36 33.09 30.97 32.02 26.79 26.24 29.82 (4) w/ AGB 64.41 36.33 33.53 34.91 30.57 29.92 33.05 (5) w/ AGB w/ ITA (G) 65.10 36.36 33.47 34.89 30.79 30.06 33.11 (6) Ours 66.44 38.53 35.94 37.15 32.79 32.14 35.31 Table 3: Ablation study on CIFAR-10. For vanilla DFARD, we choose the best hyperparameters (the distillation temperature \u03c4 = 3, the loss weight \u03bb = 0.3). \u2018G\u2019 means that ITA is applied only in the generation stage, and \u2018G+D\u2019 means that ITA is applied in both the generation and distillation stages. 5.3 Adaptability for Different Teachers The method\u2019s adaptability is important to reduce reliance on customized teachers (Zi et al. 2021). 
To compare the adaptability for different teachers, we select four teachers (RN-18, WRN-34-10, WRN-34-20, WRN-70-16 (Croce et al. 2021)) to train the RN-18 student on CIFAR-10. The results are shown in Figure 5. For the other methods, we find the robust saturation phenomenon. Due to the capacity gap between the teacher and student, the learning objectives provided by larger-scale teachers may not be suitable for small students to learn. However, our method is less susceptible to the gap due to the interactive learning objectives. Our proposed easyto-hard process alleviates the saturation and has more vital adaptability for teachers with different capacities. 5.4 Ablation Study Impact of Interactive Temperature Adjustment. To thoroughly verify the effectiveness of the proposed Interactive Temperature Adjustment (ITA) strategy, we test it in both the generation and distillation stages. As shown in Table 3(1-3) and (4-6), compared with the best-fixed temperature parameters (w/o ITA), students\u2019 performance improves when ITA is applied in the generation stage. Further, when the interactive temperature is deployed in the both generation and distillation stages, the performance improves again. It is worth noting that Table 3 does not reflect hyperparameter tuning time. The best temperature comes from multiple experimental tests. Even so, the student\u2019s feedback can dynamically adjust the difficulty of knowledge transfer to improve student performance, which verifies the effectiveness of ITA. Method Clean CW AA DAFL + ITA 56.26 (+1.28) 24.78 (+1.88) 24.36 (+2.11) DFQ + ITA 55.79 (+1.35) 24.12 (+1.75) 23.74 (+2.17) ZSKT + ITA 59.93 (+1.85) 29.03 (+1.82) 28.58 (+1.90) Table 4: Other methods with our proposed ITA. Other Methods with the ITA Strategy. Further, we combine the proposed plug-and-play Interactive Temperature Adjustment (ITA) strategy with three other methods for both the generation and distillation stages. Experiments are carried out on CIFAR-10 with the RN-18 student. The results are shown in Table 4. Compared with the baseline performance in Table 1, for the three methods, both clean and robust accuracy are significantly improved, which proves that our ITA strategy promotes student performance through an easy-to-hard knowledge transfer process. Impact of Adaptive Generator Balance. Further, we evaluate the effectiveness of the proposed AGB module. The results are shown in Table 3 (4-6). Compared with the bestfixed \u03bb, our AGB module significantly improves student performance for both clean and robust accuracy. At the same time, our adaptive approach omits the weight-tuning costs. 6" + }, + { + "url": "http://arxiv.org/abs/2302.08771v2", + "title": "Explicit and Implicit Knowledge Distillation via Unlabeled Data", + "abstract": "Data-free knowledge distillation is a challenging model lightweight task for\nscenarios in which the original dataset is not available. Previous methods\nrequire a lot of extra computational costs to update one or more generators and\ntheir naive imitate-learning lead to lower distillation efficiency. Based on\nthese observations, we first propose an efficient unlabeled sample selection\nmethod to replace high computational generators and focus on improving the\ntraining efficiency of the selected samples. 
Then, a class-dropping mechanism\nis designed to suppress the label noise caused by the data domain shifts.\nFinally, we propose a distillation method that incorporates explicit features\nand implicit structured relations to improve the effect of distillation.\nExperimental results show that our method can quickly converge and obtain\nhigher accuracy than other state-of-the-art methods.", + "authors": "Yuzheng Wang, Zuhao Ge, Zhaoyu Chen, Xian Liu, Chuangjia Ma, Yunquan Sun, Lizhe Qi", + "published": "2023-02-17", + "updated": "2023-02-23", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "INTRODUCTION Deep neural networks are gradually developing toward largescale models [1\u20136]. The changes have brought about an impressive technological breakthrough [7\u201316], but applying these technologies to mobile devices such as mobile phones, driverless cars, and tiny robots is dif\ufb01cult. Besides, the source data cannot be obtained in many cases due to data security, such as \ufb01ngerprints, faces, and medical records images. Therefore, model compression and data-free technology are the keys to breaking barriers. In this situation, Data-Free Knowledge Distillation (DFKD) is proposed [17]. In this process, an easy-to-deploy lightweight student model is trained with the help of redundant teacher models without original training data, which is much more ef\ufb01cient than retraining models. Therefore, it is widely used in various \ufb01elds and has developed rapidly in recent years. There are currently two ideas in the DFKD \ufb01eld. One idea is to set up a generation module to supplement the training B The corresponding authors are Lizhe Qi and Yunquan Sun. This work is supported by Natural Science Foundation of Jiangxi Province (No.20212BAB202026), Shanghai Municipal Science and Technology Major Project (No.2021SHZDZX0103), the Shanghai Engineering Research Center of AI & Robotics, Fudan University, China, and the Engineering Research Center of AI & Robotics, Ministry of Education, China. Source dataset Substitute dataset (Unlabeled) 0 ...... Student (Trainable) ...... Teacher (Fixed) Data domain shifts Fig. 1. The pipeline of our method. Only the teacher model and the unlabeled substitute dataset are available during training. The red arrows denote our proposed explicit and implicit distillation losses. data. Chen et al. [18] combine knowledge distillation with Generative Adversarial Networks (GANs). Fang et al. [19] introduce a model difference to force the generator to produce more complex samples. Micaelli et al. [20] use the generated samples that can confuse the discriminator to make student learning more ef\ufb01cient. Fang et al. [21] propose a local sharing method to reduce the cost of data generation. Besides, Yin et al. [22] and choi et al. [23] propose a method based on model inversion of teacher network to synthesize more realistic samples. Fang et al. [24] propose a method of combining distillation with other compression technologies and achieving extensive results. Another idea is to use unlabeled substitute data. Chen et al. [25] propose selecting samples in the wild without generation module. The wild dataset represents a substitute dataset that is easily accessible while ignoring labels, such as the ImageNet dataset [26]. Despite encouraging performance, \ufb01rstly, the methods based on the generation module will generate a large amount of additional computational costs and parameters. 
The method based on unlabeled sample selection can avoid these problems. However, the previous selection mechanism ignores the amount of information on unlabeled samples, representing the effectiveness of students learning. Secondly, the training dataset is composed of unlabeled data or random noise transform and lacks supervision information, so it contains a large amount of label noise. However, the previous methods ignore the disturbance of the noise. Finally, arXiv:2302.08771v2 [cs.CV] 23 Feb 2023 \fprevious methods force the student to mimic the outputs of a particular data example represented by the teacher, resulting in low convergence speed and lack of a structured knowledge representation, which affects students\u2019 performance. To tackle these issues, we consider a low computational and low noise ef\ufb01cient distillation framework called Ef\ufb01cient Explicit and Implicit Knowledge Distillation (EEIKD). Speci\ufb01cally, we design an adaptive threshold selection module to avoid additional generation costs. To suppress the sample noise, we design a class-dropping mechanism, which hardly adds additional computation. To increase the convergence speed and explore the relationship between multiple samples, we propose a distillation method combining explicit and implicit knowledge, as shown in Fig. 1. The primary contributions and experiments are summarized below: \u2022 We propose an Ef\ufb01cient Explicit and Implicit Knowledge Distillation method, which selects unlabeled substitute samples without additional generation module calculation and parameter costs. \u2022 To \ufb01nd more ef\ufb01cient samples from the substitute dataset, we propose an adaptive threshold selection module, which comprehensively considers unlabeled samples\u2019 con\ufb01dence and information content. \u2022 We design a lightweight class-dropping mechanism to suppress label noise. Then, we combine explicit and implicit knowledge, which signi\ufb01cantly improves the convergence speed and learning ef\ufb01ciency. \u2022 Experimental results show that our EEIKD method signi\ufb01cantly improves students\u2019 performance compared with previous state-of-the-art DFKD methods. 2. METHODOLOGY In this section, we \ufb01rst introduce an adaptive threshold module for unlabeled data selection. Then we introduce a label noise suppression mechanism to deal with data domain shifts. Finally, an ef\ufb01cient knowledge distillation method is proposed to train an impressive student. 2.1. Adaptive Threshold Selection To discard unnecessary generator costs, we propose an unlabeled data selection method. At the same time, we try to select suitable samples to enhance learning ef\ufb01ciency. On the one hand, the high con\ufb01dence prediction of the teacher network for an unlabeled sample means that it comprehends the sample better, which helps improve the utilization of samples. On the other hand, higher con\ufb01dence means that the prediction of the teacher network is closer to one-hot encoding. Compared with the soft target, its prediction has lower entropy and can provide less training information than the soft target [27], thus reducing the sample utilization ef\ufb01ciency. Here, we design a mechanism to balance the two parts. We denote the unlabeled input sample as x, the number of classes as n, the unlabeled substitute dataset as X (x \u2208X), the candidate dataset as X \u2032 and \ufb01nal student training dataset after selection as X \u2032\u2032. 
\u03b4 = 1 n\u03b3 , arg max x S(fT (x)) > \u03b4, x \u2192X \u2032, (1) where \u03b4 is the adaptive threshold, \u03b3 is a hyperparameter satis\ufb01ed 0<\u03b3<1, S is the softmax function, and fT (x) is the prediction of teacher network. Then ns samples are selected in X \u2032 to form training set X \u2032\u2032. As \u03b3 increases and \u03b4 decreases, samples with more information are valued. As \u03b3 decreases and \u03b4 increases, more con\ufb01dent samples are selected. Through the adaptive threshold setting, samples with low con\ufb01dence or too little information will not be selected. Therefore, ef\ufb01cient samples are \ufb01nally selected, which can better help the student perform well. 2.2. Class-Dropping Noise Suppression Since classes differ between the unavailable original dataset and the unlabeled substitute dataset, we propose a classdropping noise suppression module to face the datasets\u2019 domain shifts and improve the learning effect of the student network. The predictions given by the teacher network for the classes with low con\ufb01dence are often in\ufb02uenced by the shifts between the datasets\u2019 domains. First, the prediction of these parts will not positively affect the \ufb01nal results. Further, it also affects the learning ef\ufb01ciency of complex samples as noise. Here, we propose a simple yet effective method of con\ufb01dence mask to suppress the noise from the unlabeled data domain shifts shown in Fig. 2(a). We denote the class-dropping rate as \u03b1 (0<\u03b1<1), a class as c, the mask for class c in a sample as mc, the con\ufb01dence for class c in a sample as pc and the prediction con\ufb01dence for a sample as P = {p1, p2, . . . , pn}. K is the number of classes reserved, and K is equal to \u230a(1 \u2212\u03b1) \u00d7 n\u230b. mc = ( 1, if pc \u2265top-K(P), 0, otherwise, (2) Mx = {m1, m2, \u00b7 \u00b7 \u00b7 , mn} , \u02c6 fT (x) = fT (x) \u2299Mx, (3) where Mx is the sample con\ufb01dence mask and top-K(P) is the prediction con\ufb01dence of the K-th largest class of a sample. Then the masked con\ufb01dence matrix \u02c6 fT (x) can be obtained. When calculating the constraints of the \ufb01nal output between the teacher and student, the masked con\ufb01dence matrix is used to replace the complete output. The student will learn a lownoise representation to suppress noise. 2.3. Explicit and Implicit Knowledge Distillation 2.3.1. Knowledge distillation loss Knowledge distillation [27] is an important model compression technique. fT (x) and fS(x) denote the output of teacher \fTeacher Student \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 Teacher \u00a0 \u00a0 Student (a)\u00a0Label\u00a0noise\u00a0suppression (b)\u00a0Structured\u00a0relationship Fig. 2. The columns denote the con\ufb01dence prediction. The blue arrows show the difference between the past and our method. (a) Label noise suppression based on the class-dropping module. (b) An implicit structured relation distillation method. network and student network. Knowledge distillation loss is expressed to minimize objective function: LKD = X x\u2208X DKL \u0012 S \u0012fT (x) \u03c4 \u0013 , S \u0012fS(x) \u03c4 \u0013\u0013 , (4) where DKL is the Kullback-Leibler divergence and \u03c4 is the distillation temperature. The knowledge distillation loss LKD allows the student to imitate the teacher\u2019s output. However, LKD is usually not particularly ef\ufb01cient [28], especially facing the datasets domain shifts. 2.3.2. 
Explicit feature distillation In multi-layer neural networks, the output of the lower layer is locally concerned with texture information. The high-level output vision is gradually increasing, and the global information is increasingly focused. Here we choose the output after the \ufb01rst Batch Normalization (BN) layer [29] and the input before the \ufb01nal linear layer. The former focuses on local texture features and preserves the features of training pictures which is helpful for the rapid convergence of the student network. The latter is directly related to the \ufb01nal effect. The attention distillation loss is described as: LAT f = Ex\u223cPdata (x) \u2225fT f(x) \u2212fS f(x)\u22251 , (5) LAT b = Ex\u223cPdata (x) \u2225fT b(x) \u2212fS b(x)\u22251 , (6) LAT = LAT f + LAT b, (7) where \u2225\u00b7\u22251 is the \u21131 norm, fT f(x) and fT b(x) is the output after the \ufb01rst BN layer and the input before the \ufb01nal linear layer of the teacher, fS f(x) and fS b(x) is the output of the two layers of the student. We believe that we can obtain a faster convergence speed by learning these features. The relevant experimental veri\ufb01cation is in the next section. 2.3.3. Implicit structured relation distillation In the process of model learning, the learning effect of different classes is usually different. It is challenging to learn complex classes directly but relatively easy to learn the differences in sample con\ufb01dence distribution in a mini-batch. We aim to make the learning of complex classes ef\ufb01ciently through a structured relational distillation shown in Fig. 2(b). The structured relation denotes the connection between multiple samples rather than a single sample example. We denote the batch size as N, the structured differentiation relationships as \u03c8, teacher\u2019s predictions as \u02c6 fT (N) = (t1, . . . , tN) and the structured differentiation loss as LD. The implicit structured distillation calculation is as follows: \u03c8 (ti)= 1 N \u22121 N X j=1,j\u0338=i \u2225ti \u2212tj\u22252 , \u03bet = 1 N N X i=1 \u03c8 (ti) , (8) LD = N X i=1 \u2113\u03b4 \u0012\u03c8 (ti) \u03bet , \u03c8 (si) \u03bes \u0013 , (9) where \u2113\u03b4 is the Huber loss. The structured differentiation relationships of the student \u03c8(si) are similar to Eq. 8. Finally, we can get the total loss by summing up all losses as: Ltotal = LKD + \u03bb1\u00b7LAT + \u03bb2\u00b7LD, (10) where \u03bb1, \u03bb2 are the loss trade-off parameters. 3. EXPERIMENTS In this section, we \ufb01rst verify the effectiveness of our proposed method through the hyperparametric and ablation experiments. Then we compare it with current state-of-the-art methods to prove its superiority. 3.1. Experimental Settings Datasets: Our setting is selecting the samples that can better help the student learn from the unlabeled substitute dataset to replace the unavailable source dataset following DFND [25]. Unavailable source dataset: 32\u00d732 CIFAR-10 and CIFAR100 [30] contain 50K training and 10K testing datasets from 10 and 100 classes. Unlabeled substitute dataset: ImageNet dataset [26]. The ImageNet dataset is resized to 32\u00d732 to meet the input requirements of the original model. \fImplementation Details: The proposed method is implemented in PyTorch [31] and trained with eight RTX 2080 Ti GPUs. In the comparative experiment, we expect two groups of experiments to meet the needs of different situations (Tiny or Large). 
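The selection and distillation terms defined above (Eqs. (1)–(3) and (8)–(9)) can be sketched in PyTorch as follows; this is an illustrative reconstruction under the stated notation, not the released code, and the batch of teacher logits stands in for predictions on the unlabeled substitute data.

```python
import torch
import torch.nn.functional as F

def adaptive_threshold_select(t_logits, gamma=0.1):
    """Eq. (1): keep unlabeled samples whose top teacher confidence exceeds delta = 1 / n^gamma."""
    n = t_logits.size(1)
    delta = 1.0 / (n ** gamma)
    conf = F.softmax(t_logits, dim=1).max(dim=1).values
    return conf > delta  # boolean mask over the candidate batch

def class_drop_mask(t_logits, alpha=0.5):
    """Eqs. (2)-(3): zero out all but the top-K classes, K = floor((1 - alpha) * n)."""
    p = F.softmax(t_logits, dim=1)
    k = max(1, int((1.0 - alpha) * p.size(1)))
    kth = p.topk(k, dim=1).values[:, -1:]        # K-th largest confidence per example
    mask = (p >= kth).float()
    return p * mask, mask

def structured_relation_loss(t_pred, s_pred):
    """Eqs. (8)-(9): match normalized mean pairwise distances within a mini-batch."""
    def rel(z):
        d = torch.cdist(z, z, p=2)               # (N, N) pairwise L2 distances
        psi = d.sum(dim=1) / (z.size(0) - 1)     # mean distance to the other examples
        return psi / psi.mean()                  # normalize by xi
    return F.huber_loss(rel(s_pred), rel(t_pred), reduction="sum")

if __name__ == "__main__":
    t, s = torch.randn(8, 100), torch.randn(8, 100)
    keep = adaptive_threshold_select(t, gamma=0.1)
    masked_t, _ = class_drop_mask(t, alpha=0.5)
    print(int(keep.sum()), float(structured_relation_loss(masked_t, F.softmax(s, dim=1))))
```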
We select ResNet-34 [32] as the teacher network and ResNet-18 [32] as the student network following the past baseline. Then we choose 150K and 500K samples for Tiny and Large schemes and train for 200 and 800 epochs. For the DFND, we keep the 600K samples and 800 epochs from the original paper, and the student model is trained more times than our method. Finally, we choose \u03bb1 as 0.1 and \u03bb2 as 1, use the SGD optimizer with the momentum as 0.9, weight decay as 5 \u00d7 10\u22124, and the learning rate initially equal to 0.1. Table 1. Student accuracy (%) about parameter experiments on adaptive threshold \u03b3 and class-dropping rate \u03b1. ID \u03b3 CIFAR-10 CIFAR-100 ID \u03b1 CIFAR-10 CIFAR-100 1 0.05 94.24 76.36 6 0 94.49 76.44 2 0.1 94.49 76.44 7 0.3 94.57 76.47 3 0.2 94.13 75.77 8 0.5 94.35 76.96 4 0.3 92.56 75.25 9 0.7 92.31 75.25 5 0.5 92.36 74.41 10 0.9 72.91 67.25 Table 2. Ablation experiments on CIFAR-100 with selected 300K samples at different epochs. Method Epochs 20 50 100 200 DFND [25] 55.25 60.25 72.87 74.78 LKD 60.85 64.90 72.52 74.54 LKD + LAT 62.50 65.66 74.48 75.95 LKD + LD 61.03 66.49 74.92 76.23 Full (ours) 65.18 66.17 75.91 76.44 3.2. Diagnostic Experiment We \ufb01rst verify the effectiveness of the adaptive threshold \u03b3. We select 300K training samples and conduct 200 epochs uniformly with an average of three training rounds for each combination. Simultaneously, the class-dropping rate \u03b1 is set to 0. The experimental results are shown in Table 1 (1-5). The closer \u03b3 is to 0, the closer it is to the selection method of DFND; the closer to 1, the closer to random selection. It can be seen from the results that when \u03b3 = 0.1, the sample con\ufb01dence from the teacher model and the amount of information from the sample can achieve the best combination, which can better help the student model to learn. We then verify the effect of the class-dropping rate \u03b1. Other experimental settings are the same as above, shown in Table 1 (6-10). When \u03b1 is set to an appropriate value, the accuracy is improved compared with the original method (\u03b1 = 0). For datasets with fewer total classes, a too high classdropping rate may lose too much information. For datasets with larger total classes, a too low class-dropping rate may not be able to suppress label noise effectively. So this is why there are differences between the two datasets. In the next experiment, these will also be the default setting for \u03b3 and \u03b1. Finally, we make ablation experiments to verify the effectiveness of each loss. We set different loss combinations in our method, and the DFND [25] method. As seen from Table 2, our method has higher accuracy at the 20th epoch than the DFND at the 50th epoch and higher accuracy at the 100th epoch than the DFND at the 200th epoch. Our method converges faster than the previous state-of-the-art method based on data selection. Through learning structured knowledge, a better student model can be obtained. Table 3. Classi\ufb01cation result on the CIFAR dataset. * denotes the results we reproduced using source code and a uni\ufb01ed teacher model. Algorithm Extra Costs Accuracy (%) CIFAR-10 CIFAR-100 Teacher 95.35 78.60 Student 94.63 77.62 DAFL [18] \u2713 92.22 74.47 DFAD [19] \u2713 93.30 67.70 DeepInversion [22] \u2713 93.26 CMI* [24] \u2713 94.64 77.17 ZSKT [20] \u2713 93.32 67.74 DFQ* [23] \u2713 94.51 77.13 Fast [21] \u2713 94.05 74.34 DFND [25] \u00d7 94.02 76.35 EEIKD-Tiny \u00d7 94.37 76.16 EEIKD-Large \u00d7 94.94 77.67 3.3. 
Comparison to State-of-the-arts Table 3 shows the results of our proposed EEIKD compared to the state-of-the-art data-free knowledge distillation methods. The baselines contain methods that have to require additional computational resources to train additional generators [18\u201324] and use an unlabeled substitute dataset like DFND [25], which can effectively avoid unnecessary costs. From the results, we can see the superiority of our method. Our unlabeled data selection method can overstep complex generative methods with improved sample utilization ef\ufb01ciency and implicit structured knowledge. Although the accuracy of the student is very close to that of the teacher, our method still improves by 0.92% and 1.32% compared with the best unlabeled data selection method. 4." + }, + { + "url": "http://arxiv.org/abs/2302.08764v2", + "title": "Adversarial Contrastive Distillation with Adaptive Denoising", + "abstract": "Adversarial Robustness Distillation (ARD) is a novel method to boost the\nrobustness of small models. Unlike general adversarial training, its robust\nknowledge transfer can be less easily restricted by the model capacity.\nHowever, the teacher model that provides the robustness of knowledge does not\nalways make correct predictions, interfering with the student's robust\nperformances. Besides, in the previous ARD methods, the robustness comes\nentirely from one-to-one imitation, ignoring the relationship between examples.\nTo this end, we propose a novel structured ARD method called Contrastive\nRelationship DeNoise Distillation (CRDND). We design an adaptive compensation\nmodule to model the instability of the teacher. Moreover, we utilize the\ncontrastive relationship to explore implicit robustness knowledge among\nmultiple examples. Experimental results on multiple attack benchmarks show\nCRDND can transfer robust knowledge efficiently and achieves state-of-the-art\nperformances.", + "authors": "Yuzheng Wang, Zhaoyu Chen, Dingkang Yang, Yang Liu, Siao Liu, Wenqiang Zhang, Lizhe Qi", + "published": "2023-02-17", + "updated": "2023-02-23", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "INTRODUCTION Deep learning models have achieved great success in computer vision [1\u20135], signal processing [6\u20138], and other \ufb01elds [9]. These models, however, can usually be attacked by adding small permutations to natural inputs [10\u201312]. The vulnerability has aroused people\u2019s concern about applying deep learning technology in automatic driving, \ufb01nancial forecasting, and face \ufb01ngerprint detection. Concurrently, this also helps researchers to rethink the robustness of the model [13]. Recently, many defense strategies have emerged to improve the adversarial robustness of models, such as data processing and model training methods. Among them, Adversarial Training (AT) is recognized as the most effective defensive strategy [14]. AT takes the adversarial examples as a kind of data enhancement so that the model can learn the defense strategy against the potential attack threat. Despite the \u0000 The corresponding authors are Lizhe Qi and Wenqiang Zhang. This work is supported by Natural Science Foundation of Jiangxi Province (No.20212BAB202026), Shanghai Municipal Science and Technology Major Project (No.2021SHZDZX0103), the Shanghai Engineering Research Center of AI & Robotics, Fudan University, China, and the Engineering Research Center of AI & Robotics, Ministry of Education, China. 
outstanding performance, AT continuously expands the training dataset resulting in expensive model training costs. In addition, the effectiveness of AT is often related to the model capacity [15]. The robustness of small models is often limited, which makes it challenging to apply this technology to micro-robots, mobile phones, and driverless cars. All these have led to introducing knowledge distillation to improve AT, called Adversarial Robustness Distillation (ARD). Goldblum et al. [16] propose the concept of ARD. They show that the robust model can avoid the expensive AT cost. On the contrary, by transferring the knowledge of the pretrained robust model, the small model can obtain a higher robustness performance than the standard robust training. Zhu et al. [17] propose a multi-stage strategy to improve the ef\ufb01ciency of knowledge transfer further, thus improving the robustness of the student. Zi et al. [15] think the soft target label is essential in robustness distillation, so they use the entirely soft target label of a large robustness model to replace one-hot label to help the student further improve robustness. Although the previous ARD methods can avoid the expensive AT cost, there are still many issues. On the one hand, the teacher\u2019s predictions are not always correct. Especially with the student\u2019s progress, the con\ufb01dence level of the teacher\u2019s predictions for the adversarial examples generated by the student will gradually decrease [17]. As the main source of robust knowledge, this unstable prediction limits the student\u2019s performance. Zhu et al. [17] try to model this instability, but in fact, this instability is closely related to the capacity of the backbones used by the teacher and student. Hence, their methods are not universal for all backbone pairs. On the other hand, all previous methods can be summarized as oneto-one naive example imitation learning [18]. Therefore, the robust knowledge comes entirely from the pre-trained teacher, which ignores the implicit similarity between multiple examples. Moreover, the learning potential of the student model may not be fully developed, which is shown by the fact that the teacher\u2019s performance limits the student\u2019s robust accuracy. In this paper, we propose a novel structured adversarial robustness distillation method called Contrastive Relationship DeNoise Distillation (CRDND). Speci\ufb01cally, considering the unstable teacher\u2019s predictions, we design an adaptive compensation module to help the student correct possible prediction noise through the learnable robustness layer. We then introduce the idea of contrastive learning into the \ufb01eld of ARD arXiv:2302.08764v2 [cs.CV] 23 Feb 2023 \fStudent (Trainable) Predictions for Input 1: Predictions for Input 2: Teacher (Fixed) Adversarial augmentation ACM Learnable denoise layer: (a) Contrastive Relationship DeNoise Distillation (c) Contrastive Relationship Distillation The predictions of two models: Input 1 Input 2 Origin predictions Compensated predictions (b) Adaptation Compensation Module Fig. 1. (a) Overview of our Contrastive Relationship DeNoise Distillation. (b) The proposed adaptive compensation module (ACM). The columns denote the predictions. (c) The proposed contrastive relationship distillation method. to model the robustness relationship among multiple examples. The student model can simultaneously learn the knowledge of the robust teacher and suf\ufb01ciently explore the knowledge from examples to improve performance. 
The main contributions of this work are summarized as follows: \u2022 We propose a novel adversarial robustness distillation method called Contrastive Relationship DeNoise Distillation. The structured relationship among multiple examples replaces one-to-one imitation learning to help the student achieve better results. \u2022 To restrain the in\ufb02uence of the teacher\u2019s unstable prediction, we design a plug-and-play adaptive compensation module. The possible prediction noise of the teacher is re\ufb01ned through the learnable denoise layer. \u2022 Experimental results against multiple adversarial attacks show that our CRDND method achieves state-ofthe-art performances. As a result, the robustness of the small model is greatly improved. 2. METHODOLOGY 2.1. Overview The overview of the proposed Contrastive Relationship DeNoise Distillation is shown in Fig. 1(a). Following the previous ARD methods, we assume that natural examples and the \ufb01xed and pre-trained robust teacher model (e.g., WideResNet [19]) are available. Then, the goal is to train a small student model (e.g., MobileNetV2 [20]) while inheriting the robustness of the teacher. The input includes natural and adversarial examples to help the student deal with multiple examples scenarios. To overcome the uncertainty of the teacher\u2019s prediction, we design an Adaptive Compensation Module (ACM) to model the instability. A learnable denoise layer after the logit predictions of the student model is added to estimate the correctness of the teacher\u2019s answers. To improve the ef\ufb01ciency of knowledge transfer, we denote the Contrastive Relationship Distillation of natural and adversarial examples, respectively, to deeply explore the knowledge among multiple examples. As a result, our method is not completely limited by the robustness of the teacher and achieves good performance. 2.2. Adaptive Compensation Module Similar to traditional knowledge distillation [21], we transfer knowledge by constraining the teacher\u2019s and student\u2019s predictions. We denote fT and fS as the logits predictions of the teacher and student models, x as the natural examples, and x\u2032 as the adversarial examples. The process of knowledge transfer can be expressed as: L = X x,x\u2032\u2208X D(fT (x, x\u2032), fS(x, x\u2032)), (1) where D is a distance representation. However, as mentioned above, the prediction of the teacher model is not necessarily correct. Incorrect predictions often lead to incorrectly information for the student to learn [17]. Therefore, we de\ufb01ne a learnable noise layer M \u2208Rk\u00d7k to model the instability of the teacher\u2019s prediction as shown in Fig. 1(b). It represents the correct probabilities of the teacher\u2019s predictions. We regulate parameters in M with the estimate of the teacher\u2019s accuracy. Speci\ufb01cally, we calculate the accuracy of the teacher model in the current training epoch about natural or adversarial examples set. The setting rule of M is: the main class weight is represented by the current accuracy rate, and the rest of the class is averaged (the sum is 1). The calculated value is used as an estimate of \fthe teacher\u2019s true accuracy in the current training epoch. The column of M can be regarded as a probability distribution, satisfying Pk j=1Mij = 1, where k is the number of classes. M is denoted as: M = \uf8ee \uf8ef \uf8ef \uf8ef \uf8f0 a1 1\u2212a2 k\u22121 \u00b7 \u00b7 \u00b7 1\u2212ak k\u22121 1\u2212a1 k\u22121 a2 \u00b7 \u00b7 \u00b7 1\u2212ak k\u22121 . . . . 
. . ... . . . 1\u2212a1 k\u22121 1\u2212a2 k\u22121 \u00b7 \u00b7 \u00b7 ak \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fb, (2) where ai denotes the accuracy of i-th class. M1, M2 denote the noise layers for natural and adversarial scenarios. Then Eq. 1 can be rewritten as: L\u2032 = X x,x\u2032\u2208X D(fT (x, x\u2032), M1/M2(fS(x, x\u2032))). (3) Since our method only needs to estimate the teacher\u2019s current accuracy, it applies to various teacher-student backbone combinations. By modeling the instability of the teacher, our method can also overcome the dilemma that the reliability of the teacher declines gradually during the training epoch. 2.3. Contrastive Relationship Distillation Although Eq. 3 can compensate the mistakes that the teacher may make when transferring robustness knowledge, such knowledge depends entirely on the teacher. To further explore the structured knowledge among multiple examples, we propose a contrastive relationship distillation method to replace the above process as shown in Fig. 1(c). Speci\ufb01cally, we focus on the consistency of the teacher\u2019s and student\u2019s example predictions in a mini-batch. Concurrently, we hope to separate two predictions that are not corresponding. Based on these, we build two kinds of structured knowledge to respond to natural and adversarial scenarios. Firstly, for natural examples x = {x1, . . . , xn}, we \ufb01rst obtain predictions from robust teacher fT (xi) and noise compensated student M1(fS(xi)). Then, the contrastive relationship can be represented as: \u2113xi nat = exp(cos(M1(fS(xi)), fT (xi))/\u03c41) P2N k=1,k\u0338=i exp(cos(M1(fS(xk)), fT (xi))/\u03c41) , (4) where \u03c41 denotes temperature parameter and N denotes the batch size. Next, we can calculate the relationship distillation loss for natural examples as: Lnat = \u22121 N N X j=1 log \u2113xj nat. (5) For adversarial examples x\u2032, the knowledge representation and transfer process are similar to the above: \u2113x\u2032 i adv = exp(cos(M2(fS(x\u2032 i)), fT (x\u2032 i))/\u03c42) P2N k=1,k\u0338=i exp(cos(M2(fS(x\u2032 k)), fT (x\u2032 i))/\u03c42) , (6) Ladv = \u22121 N N X j=1 log \u2113 x\u2032 j adv. (7) Finally, we can get the total Contrastive Relationship DeNoise Distillation loss as: Ltotal = \u03bb\u00b7Lnat + (1 \u2212\u03bb)\u00b7Ladv, (8) where the \u03bb is the loss trade-off parameter. Unlike Eq. 3, Eq. 8 models the consistency between the teacher and student and the difference among multiple examples. The knowledge from multiple examples is crucial, especially when the teacher model cannot give reliable predictions. Especially, it is worth noting that our contrastive learning method differs from previous methods. First, our method does not rely on large negative examples sets, such as a large memory bank [22], large divisions [23], or a large batch size [24]. It also does not rely on additional normalization and pre-training network update [25] with high computing costs. Second, our method does not need to design suitable data augmentation operators [24] carefully. As a result, our method is simple and computationally ef\ufb01cient. 3. EXPERIMENTS 3.1. Experimental Settings We evaluate proposed CRDND method on CIFAR-10, and CIFAR-100 [26], the commonly used adversarial robustness test datasets. The baseline methods consider two AT methods: SAT [14], TRADES [27], three ARD methods: ARD [16], IAD [17], RSLAD [15], and a natural training method. Teacher and Student. 
For fair comparison, we choose the same teacher models following RSLAD [15] including WideResNet-34-10 [19] for CIFAR-10 and WideResNet-7016 [28] for CIFAR-100. The teacher model is \ufb01xed during the whole training process. Besides, we set two backbones of the students including ResNet-18 [29] and MobileNetV2 [20]. Implementation Details. The proposed model is implemented in PyTorch and trained on eight RTX 2080 Ti GPUs. We set the loss trade-off parameter \u03bb as 0.2, and the temperature parameters \u03c41, \u03c42 as 0.5. The student is trained via SGD optimizer with cosine annealing learning rate with an initial value of 0.1, momentum 0.9 and weight decay 2e-4. The batch size is 128, and the total number of training epochs is 300, the same as the previous works. For other baseline methods, we follow the setting of RSLAD [15]. Attacks Evaluation. We evaluate the model against multiple adversarial attacks: FGSM [30], PGDSAT (PGDS) [14], PGDTRADES (PGDT) [27] and AutoAttack (AA) [31]. Besides, the above attack methods are the same as the settings of RSLAD [15]. \fTable 1. Adversarial robustness accuracy (%) on CIFAR-10 and CIFAR-100 datasets. The maximum adversarial perturbation \u03f5 is 8/255. RN-18 and MN-V2 are abbreviations of ResNet-18 and MobileNetV2 respectively. Bold and underline numbers denote the best and the second best results, respectively. CIFAR-10 CIFAR-100 Model Method Attacks Evaluation Model Method Attacks Evaluation Clean FGSM PGDS PGDT AA Clean FGSM PGDS PGDT AA RN-18 Nature 94.65 19.26 0.0 0.0 0.0 RN-18 Nature 75.55 9.48 0.0 0.0 0.0 SAT 83.38 56.41 49.11 51.11 45.83 SAT 57.46 28.56 24.07 25.39 21.79 TRADES 81.93 57.49 52.66 53.68 49.23 TRADES 55.23 30.48 27.79 28.53 23.94 ARD 83.93 59.31 52.05 54.20 49.19 ARD 60.64 33.41 29.16 30.30 25.65 IAD 83.24 58.60 52.21 54.18 49.10 IAD 57.66 33.26 29.59 30.58 25.12 RSLAD 83.38 60.01 54.24 55.94 51.49 RSLAD 57.74 34.20 31.08 31.90 26.70 CRDND 84.11 64.24 59.91 61.25 49.88 CRDND 59.00 38.02 35.29 36.29 27.05 MN-V2 Nature 92.95 14.47 0.0 0.0 0.0 MN-V2 Nature 74.58 7.19 0.0 0.0 0.0 SAT 82.48 56.44 50.10 51.74 46.32 SAT 56.85 31.95 28.33 29.50 24.71 TRADES 80.57 56.05 51.06 52.36 47.17 TRADES 56.20 31.37 29.21 29.83 24.16 ARD 83.20 58.06 50.86 52.87 48.34 ARD 59.83 33.05 29.13 30.26 25.53 IAD 81.91 57.00 51.88 53.23 48.40 IAD 56.14 32.81 29.81 30.73 25.74 RSLAD 83.40 59.06 53.16 54.78 50.17 RSLAD 58.97 34.03 30.40 31.36 26.12 CRDND 83.89 65.25 59.93 61.33 48.79 CRDND 58.60 38.03 36.05 37.02 26.56 3.2. Comparison to State-of-the-art Methods The robustness performances of our and other baseline methods are shown in Table 1. We compare the best checkpoint of various methods. The \u2018Nature\u2019 training method is selected based on the performance of clean test examples. Besides, other training methods are selected based on the robustness performance against PGDT following previous methods. It can be seen from the results that our CRDND method achieves state-of-the-art robustness performances on multiple benchmarks. Especially for FGSM and PGD evaluation metrics, our method has greatly improved (4%-6%). In general, the performance of adversarial robustness is in a trade-off relationship with the performance of clean conditions unless ground truth labels are used (e.g., Nature and ARD methods). Our method is particularly competitive in both conditions without any labels, which shows that our student improves overall performance by deeply exploring additional knowledge among multiple examples. Table 2. 
Ablation studies on CIFAR-100 dataset (%). ID Model Method Attacks Evaluation FGSM PGDS PGDT AA 1 RN-18 Ours 38.02 35.29 36.29 27.05 2 w/o ACM 38.02 35.14 36.11 26.29 3 MN-V2 Ours 38.03 36.05 37.02 26.56 4 w/o ACM 37.91 35.80 36.78 26.37 5 RN-18 w/o Lnat 37.41 34.87 35.84 25.20 6 w/o Ladv 34.58 29.41 30.40 26.33 7 MN-V2 w/o Lnat 37.80 35.85 36.71 25.67 8 w/o Ladv 34.97 29.56 30.78 26.14 3.3. Ablation Study To verify the effectiveness of our proposed Adaptive Compensation Module (ACM), we set the baselines to bold (Ours: full CRDND) and discard ACM to demonstrate the impact on the results. Table 2 (1-4) contrasts the impacts of model robustness with or without ACM (the w/o in the table means without). When the ACM is discarded, the robustness of students decreases to varying degrees on various attack metrics. We believe that the decline is due to the lack of estimation of the prediction accuracy of the teacher model, which indicates that the frequent incorrect predictions given by the teacher model may interfere with the student\u2019s learning. To verify the two optimization objectives we designed, we separate them to test the effectiveness of using them separately. Table 2 (5-8) shows the robustness performance without Lnat or Ladv. Compared with Table 2 (1, 3), we observe that the student\u2019s performance will decline no matter which objective is missing. It is worth noting that when only Lnat is used, the student model does not directly learn any knowledge about adversarial examples. We analyze that the robustness here comes from our structured relationship distillation method, which can transfer the robust information among examples by learning a relative relationship. 4." + } + ], + "Zhaoyu Chen": [ + { + "url": "http://arxiv.org/abs/2402.01220v1", + "title": "Delving into Decision-based Black-box Attacks on Semantic Segmentation", + "abstract": "Semantic segmentation is a fundamental visual task that finds extensive\ndeployment in applications with security-sensitive considerations. Nonetheless,\nrecent work illustrates the adversarial vulnerability of semantic segmentation\nmodels to white-box attacks. However, its adversarial robustness against\nblack-box attacks has not been fully explored. In this paper, we present the\nfirst exploration of black-box decision-based attacks on semantic segmentation.\nFirst, we analyze the challenges that semantic segmentation brings to\ndecision-based attacks through the case study. Then, to address these\nchallenges, we first propose a decision-based attack on semantic segmentation,\ncalled Discrete Linear Attack (DLA). Based on random search and proxy index, we\nutilize the discrete linear noises for perturbation exploration and calibration\nto achieve efficient attack efficiency. We conduct adversarial robustness\nevaluation on 5 models from Cityscapes and ADE20K under 8 attacks. DLA shows\nits formidable power on Cityscapes by dramatically reducing PSPNet's mIoU from\nan impressive 77.83% to a mere 2.14% with just 50 queries.", + "authors": "Zhaoyu Chen, Zhengyang Shan, Jingwen Chang, Kaixun Jiang, Dingkang Yang, Yiting Cheng, Wenqiang Zhang", + "published": "2024-02-02", + "updated": "2024-02-02", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CR" + ], + "main_content": "Introduction Deep neural networks (DNNs) have made unprecedented advancements and are extensively employed in various fundamental vision tasks, such as semantic segmentation [5, 25, 35] and video object segmentation [17\u201319]. 
However, recent studies have revealed the susceptibility of DNNs to adversarial examples [6, 7, 9, 31] by adding specially designed small perturbations to the input that are imperceptible to humans. The emergence of adversarial examples has prompted researchers to focus on the security of underlying visual tasks and seek inspiration for the development of robust DNNs through the exploration of adversarial examples. *indicates equal contributions. Semantic segmentation is a primary visual task for pixellevel classification. Despite its extensive utilization in realworld safety-critical applications like autonomous driving and medical image segmentation, it remains susceptible to adversarial examples. Recently, the emergence of Segment Anything Model (SAM) [22] has attracted people\u2019s attention to segmentation models and inspired exploration of their robustness. However, there are few adversarial attacks on semantic segmentation [15], and they focus more on white-box attacks. White-box attacks require access to all information about the model (e.g., gradients and network architecture), which is challenging and often unattainable in real-world scenarios. Consequently, black-box attacks offer a more effective means to explore the adversarial robustness of semantic segmentation models in real-world scenarios. In this paper, we explore for the first time black-box attacks on semantic segmentation in the decision-based setting. The decision-based setting represents the most formidable challenge among black-box attacks, as it restricts access solely to the output category provided by the target model, without any information regarding probabilities or confidences. Nevertheless, the efficacy of decisionbased attacks on semantic segmentation remains severely constrained by the inherent characteristics of pixel-level classification, as evidenced by the following observations: i) Inconsistent Optimization Goal. In image classification, decision-based attacks often reduce the magnitude of perturbations under the premise of misclassification. However, in semantic segmentation, the larger the perturbation amplitude, the lower the metric, and it is difficult to constrain the perturbation to the lp norm. ii) Perturbation Interaction. Perturbations from different iterations interfere with each other, so a pixel is classified incorrectly in this iteration but may be classified correctly under perturbation in the next iteration, which leads to optimization difficulties. iii) Complex Parameter Space. Attacking semantic segmentation is a multi-constraint optimization problem, wherein the complexity of the parameter space imposes lim1 arXiv:2402.01220v1 [cs.CV] 2 Feb 2024 \fitations on attack efficiency. In practice, it becomes imperative to employ an efficient decision-based black-box attack to assess the adversarial robustness of semantic segmentation. Therefore, the proposed attack must exhibit both high attack efficiency and reliable attack performance. To tackle the aforementioned challenges, we first propose the decision-based attack on semantic segmentation, termed Discrete Linear Attack (DLA). DLA employs a random search framework to effectively generate adversarial examples from clean images, utilizing a proxy index to guide the optimization process. Specifically, we optimize the adversarial examples by leveraging the changes in the proxy index corresponding to the image. 
Additionally, we alleviate the challenges identified in Section 3.2 by proposing discrete linear noises for updating the adversarial perturbation. For interference between perturbations, we find that locally adding noises has a good attack effect, but the added colorful patches are easily perceived. Therefore, we introduce linear noises and update the perturbation by adding horizontal or vertical linear noises to the image. To further compress the parameter space, we convert the complex continuous parameter space into a discrete parameter space and bisect the discrete noise from the extreme point of the l\u221e-norm ball. The overall process can be divided into two parts: perturbation exploration and perturbation calibration. In perturbation exploration, DLA adds discrete linear noises to the input to obtain a better initialization. In the perturbation calibration, DLA adaptively flips the perturbation direction of some regions according to the proxy index, updates and calibrates the perturbation. We evaluate the adversarial robustness of semantic segmentation models based on convolutional neural networks (FCN [25], PSPNet [35], DeepLabv3 [5]) and transformer (SegFormer [34] and Maskformer [10]) on Cityscapes [11] and ADE20K [36]. Extensive experiments demonstrate that DLA achieves state-of-the-art attack efficiency and performance on semantic segmentation. Our main contributions and experiments are as follows: \u2022 We first explore the adversarial robustness of existing semantic segmentation models based on decisionbased black-box attacks, including CNN-based and transformer-based models. \u2022 We analyze and summarize the challenges of decisionbased attacks on semantic segmentation. \u2022 We first propose the decision-based attack on semantic segmentation, called Discrete Linear Attack (DLA), which applies discrete linear noises to perturbation exploration and perturbation calibration. \u2022 Extensive experiments show the adversarial vulnerability of existing semantic segmentation models. On Cityscapes, DLA can reduce PSPNet\u2019s mIoU from 77.83% to 2.14% within 50 queries. 2. Related Work Semantic Segmentation. Semantic segmentation is a visual task of pixel-level classification. Currently, DNNbased methods have become the dominant way of semantic segmentation since the seminal work of Fully Convolutional Networks (FCNs) [25]. The subsequent model focuses on aggregating long-range dependencies in the final feature map: DeepLabv3 [5] uses atrous convolutions with various atrous rates and PSPNet [35] applies pooling technology with different kernel sizes. The subsequent work began to introduce transformers [32] to model context: SegFormer [34] replaces convolutional backbones with Vision Transformers (ViT) [12] that capture long-range context starting from the very first layer. MaskFormer [10] introduces the mask classification and employs a Transformer decoder to compute the class and mask prediction. Black-box Adversarial Attack. In this paper, we primarily concentrate on query-based black-box attacks, where it is assumed that attackers have limited access to the target network and can only make queries to obtain the network\u2019s outputs (confidences or labels) for specific inputs [8, 23]. The former are called score-based attacks, while the latter are decision-based attacks. Generally speaking, scorebased attacks have higher attack efficiency on image classification. For decision-based attacks on semantic segmentation, we define the model output as the label of each pixel. 
Considering that the mIoU of semantic segmentation is a continuous value calculated based on the label of each pixel, we choose score-based attacks on image classification as the competitors in this paper. Most score-based attacks on image classification estimate the approximate gradient through zeroth-order optimizations [20]. Bandits [21] further introduce the gradient prior information and bandits to accelerate [20]. Then, Liu et al. [24] introduce the zeroth-order setup to sign-based stochastic gradient descent (SignSGD) [3] and propose ZO-SignSGD [24]. Then, SignHunter [1] exploits the separability property of the directional derivative and improves the query efficiency. Recently, methods based on random search have been proposed and have better query efficiency. SimBA [16] randomly samples a vector from a predefined orthonormal basis to images. Square Attack [2] selects localized squareshaped updates at random positions to update perturbations. Compared with previous work, DLA analyzes the challenges of semantic segmentation and implements queryefficient attacks based on discrete linear noise. Adversarial Attack on Semantic Segmentation. Compared to image classification, there are few adversarial attacks on semantic segmentation. [14] and [33] are the first to study the adversarial robustness of semantic segmentation and illustrate its vulnerability through extensive experiments. Indirect Local Attack [28] reveals the adversarial vulnerability of semantic segmentation models due to 2 \fFigure 1. Based on Random attack, we give the changes in mIoU under various perturbation magnitudes. If we add a very large perturbation, this can make the mIoU very small. However, when reducing the perturbation magnitude, the mIoU increases, which makes the optimization goal and attack direction inconsistent. long-range context. SegPGD [15] improves white-box attacks from the perspective of loss functions and can better evaluate and boost segmentation robustness. ALMA prox [29] produces adversarial perturbations with much smaller l\u221enorms with a proximal splitting. The aforementioned attacks primarily prioritize enhancing the strength of white-box attacks on semantic segmentation, while allocating comparatively less emphasis on the adversarial robustness of query-based black-box attacks. Consequently, as a complementary approach, we undertake the pioneering exploration of adversarial robustness within the highly challenging decision-based setting. 3. Method 3.1. Preliminaries In semantic segmentation, given the semantic segmentation model f(\u00b7), the clean image is x \u2208[0, 1]C\u00d7H\u00d7W and the corresponding labels are yi \u2208{1, ..., K}d (d = HW and i = 1, ..., d), where C is the number of channels, H and W are the height and width of the image, and K is the number of semantic classes. We denote the adversarial example xadv = x + \u03b4, where \u03b4C\u00d7H\u00d7W is the adversarial perturbation and it satisfies ||\u03b4||\u221e\u2264\u03f5. Because the attack is the decision-based setting, we denote the model output as the per-pixel predicted labels \u02c6 y = f(x) \u2208{1, ..., K}d. 
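To make the decision-based threat model above concrete, the following minimal PyTorch sketch shows that the attacker only observes the per-pixel argmax labels ŷ ∈ {1, ..., K}^d, never logits or confidences. The use of torchvision's FCN-ResNet50 is purely an illustrative stand-in for the target model f(·), and the helper names are ours, not the paper's.

```python
import torch
from torchvision.models.segmentation import fcn_resnet50

# Illustrative target model f(.); in the decision-based setting the attacker
# cannot inspect its weights or gradients, only query its hard predictions.
model = fcn_resnet50(weights="DEFAULT").eval()

@torch.no_grad()
def query_labels(x: torch.Tensor) -> torch.Tensor:
    """Decision-based oracle: return only per-pixel predicted labels.

    x: clean or perturbed image batch in [0, 1], shape (B, C, H, W).
    Returns hard labels of shape (B, H, W); no probabilities are exposed.
    """
    logits = model(x)["out"]       # (B, K, H, W) -- hidden from the attacker
    return logits.argmax(dim=1)    # hard decision, the only observable output

def project_linf(x_adv: torch.Tensor, x: torch.Tensor, eps: float) -> torch.Tensor:
    """Keep the adversarial example in the l_inf ball of radius eps and in [0, 1]."""
    return torch.clamp(torch.clamp(x_adv, x - eps, x + eps), 0.0, 1.0)
```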
We hope that the adversarial example makes as many pixels misclassified as possible, so the optimization goal can be expressed as: \begin{aligned} & \underset{\delta}{\arg\max} \sum \mathsf{1}\left(f(x+\delta)_i \neq y_i\right), \\ & \mathrm{s.t.}\ ||\delta||_\infty \leq \epsilon \ \mathrm{and}\ i=1,...,d, \end{aligned} (1) where \mathsf{1}(\cdot) is the indicator function, which equals 1 when the condition is met and 0 otherwise. Figure 2. Random attack with different proxy indexes. Our design focuses on optimizing the adversarial perturbation by initiating from clean images and iteratively updating the example based on the observed changes in the proxy index. Figure 3. (Panels: clean, i-th iteration, (i+1)-th iteration.) When facing black-box attacks on semantic segmentation, the update of perturbations causes the attacked pixels to revert to their original categories, resulting in optimization difficulties. 3.2. Attack Analysis Decision-based attacks on image classification have been extensively and intensively studied [23]; however, semantic segmentation has not been fully explored. Semantic segmentation is pixel-level classification and is far more difficult to attack than image classification: attacking image classification is a single-constraint optimization, whereas every pixel in semantic segmentation imposes its own classification constraint, which turns attacking semantic segmentation into a multi-constraint optimization. Consequently, decision-based attacks on semantic segmentation encounter substantial challenges, often leading to optimization convergence towards local optima, as illustrated below. Inconsistent Optimization Goal. Decision-based attacks on image classification commonly rely on boundary attacks [23]. Boundary attack [23] requires that the image is classified incorrectly, and then minimizes the perturbation magnitude so that the adversarial example lies near the decision boundary. However, this strategy cannot be applied in semantic segmentation. As shown in Figure 1, we can add very large noise to make the mean Intersection-over-Union (mIoU) very small, but when reducing the noise magnitude, unlike image classification that maintains misclassification, the mIoU also becomes greater, which makes the optimization goal and attack direction inconsistent. Figure 4. Description of Perturbation Interaction. (Panels: Clean, Random, Patch with Overlap, Patch without Overlap, Line.) We use perturbations in the form of random, patch with overlap, patch without overlap, and line to attack, which shows that there is interference between perturbations. Less overlap leads to better attack performance, and linear noises achieve better results in both imperceptibility and attack strength. To address this challenge, we propose the utilization of a proxy index to generate adversarial examples from clean images. Our approach involves optimizing the adversarial perturbation starting from clean images and updating the example based on the changes observed in the proxy index associated with the image. To gain a deeper understanding of the proxy index, we propose a simple baseline method called Random Attack.
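To ground the proxy-index idea before the Random Attack baseline is spelled out, here is a minimal sketch of a per-image proxy L(·) computed as single-image mIoU from the hard labels returned by the decision-based oracle. The paper ultimately adopts mIoU as the proxy; the exact averaging convention used here (skipping classes absent from both maps) is our assumption.

```python
import torch

def miou_proxy(pred: torch.Tensor, target: torch.Tensor, num_classes: int) -> float:
    """Single-image mIoU used as the proxy index L(.) guiding the search.

    pred, target: per-pixel label maps of shape (H, W).
    Classes absent from both prediction and ground truth are skipped; the
    paper's averaging convention may differ slightly from this sketch.
    """
    ious = []
    for k in range(num_classes):
        p, t = (pred == k), (target == k)
        union = (p | t).sum().item()
        if union == 0:
            continue  # class k does not appear at all; ignore it
        inter = (p & t).sum().item()
        ious.append(inter / union)
    return sum(ious) / max(len(ious), 1)
```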
The update process for this baseline method is as follows: x^0_{adv} = x, \quad x^{t+1}_{adv} = \Pi_\epsilon\left(x^t_{adv} + rand\left[-\frac{\epsilon}{16}, +\frac{\epsilon}{16}\right]\right), (2) where rand[−ε/16, +ε/16] generates noise with the same dimensions as the input, distributed over [−ε/16, +ε/16], and Πε clips the input to [x − ε, x + ε]. During the iteration, Random Attack updates the perturbation only when the proxy index becomes smaller. The complete algorithm of Random Attack is in Supplementary Material B. Building upon Random Attack, we conduct a toy study using PSPNet [35] and SegFormer [34] on Cityscapes [11] and ADE20K [36], following the same experimental settings as described in Section 4. Considering that mIoU is a widely adopted metric [13] for evaluating semantic segmentation, it holds potential as a suitable proxy index. Additionally, the per-pixel classification accuracy (PAcc) can also reflect the attack performance. Hence, we select mIoU and PAcc as the proxy indices, and the attack process using Random Attack is illustrated in Figure 2. Our observations are as follows: i) Random Attack based on the proxy index can reduce the mIoU of the image; ii) when mIoU is employed as the proxy index, the attack performance is superior. This is because when PAcc is used as the proxy index, the adversarial example only needs to maximize misclassification at each pixel, without considering the overall class. Conversely, when mIoU is used as the proxy index, the mIoU of an individual image approaches the mIoU of the entire dataset, resulting in improved attack performance. Therefore, we select mIoU as the proxy index for our study. Perturbation Interaction. Despite the effectiveness of Random Attack with the proxy index, as depicted in Figure 2, we observe that its attack performance is constrained and prone to convergence, suggesting that it may have reached a local optimal solution. Recent research [15] demonstrates that during white-box attacks on semantic segmentation, the classification of each pixel exhibits instability: in one iteration, a pixel may be misclassified, while in the next iteration, it could be classified correctly. This situation also occurs in black-box attacks on semantic segmentation, as shown in Figure 3. Upon revisiting Random Attack, we hypothesize that there exists interference between the perturbations added in each iteration. This interference arises because the update direction in black-box attacks is not as consistent as in white-box attacks. Consequently, a pixel may succeed in one iteration of the attack but fail in the subsequent iteration, leading to convergence towards a local optimal solution. To mitigate this issue, we propose updating the perturbation not on the entire image but on a local region. This localized perturbation update approach may alleviate the interference and enhance the attack performance. Taking inspiration from this observation, we explore different perturbation update strategies of varying shapes and conduct corresponding experiments. The visualization of these strategies is presented in Figure 4. When random perturbations are added to the entire image, the resulting segmented mask generally remains close to the original prediction, and the object's outline is relatively well-preserved. This aligns with the limited attack performance depicted in Figure 2.
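Putting Eq. (2) together with the proxy index, a minimal sketch of the Random Attack baseline might look as follows. query_labels, project_linf, and miou_proxy are the illustrative helpers sketched earlier, the uniform-noise reading of rand[−ε/16, +ε/16] is our assumption, and ε is expressed on the [0, 1] pixel scale (the paper's ε = 8 corresponds to 8/255).

```python
import torch

def random_attack(x, target, eps=8 / 255, steps=200, num_classes=21):
    """Random-search baseline: keep a perturbation only if the proxy index drops."""
    x_adv = x.clone()
    best = miou_proxy(query_labels(x_adv)[0], target, num_classes)
    for _ in range(steps):
        # rand[-eps/16, +eps/16]: noise with the input's shape, assumed uniform here
        noise = (torch.rand_like(x) * 2 - 1) * (eps / 16)
        cand = project_linf(x_adv + noise, x, eps)
        score = miou_proxy(query_labels(cand)[0], target, num_classes)
        if score < best:   # accept only when the proxy index becomes smaller
            best, x_adv = score, cand
    return x_adv
```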
However, when we update the perturbation in the form of patches with overlap [2], we observe minimal changes in the attack performance, and the object\u2019s outline is still well-maintained. Conversely, when the perturbations are patches without overlap [21], a significant portion of the object\u2019s outline is destroyed, indicating the presence of interference between perturbation updates. Looking back at the adversarial example in patches without overlap, although its attack effect is significant, it is easy to observe that carefully designed perturbations are added because the added patches are blocky and the color is ob4 \fTable 1. Search strategy for perturbation values. We report the mIoU under 50/200 query budgets and observe that for the same queries, discrete perturbations always obtain lower mIoU (%). Datset Model Attack Clean Random NES Discrete Cityscapes PSPNet 77.83 48.81/47.18 48.34/47.40 33.57/33.54 SegFormer 80.43 58.59/56.07 58.00/55.34 41.70/41.70 ADE20K PSPNet 37.68 26.63/26.34 25.67/25.52 23.50/23.31 SegFormer 43.74 34.72/34.45 34.66/34.55 33.68/33.53 vious. To ensure an effective attack and the perturbation is imperceptible simultaneously, we consider modeling the form of the perturbation as a line, as shown in Figure 4. We primarily choose linear noises for the following reasons: i) local adversarial perturbations can be spread to the global through context modeling of semantic segmentation [28], thereby attacking pixels in other areas, so linear noises are still effective. ii) Linear noises are thinner compared to patches, making them relatively harder to detect by the human eye. As depicted in Figure 4, linear noises exhibit superior performance compared to other strategies while remaining imperceptible. Complex Parameter Space. Despite the effectiveness of linear noises in enhancing attack performance, the presence of complex parameter spaces still hampers attack efficiency. Semantic segmentation poses a multi-constraint optimization problem, making it challenging to find the optimal adversarial example within a limited query budget. In black-box attacks, we usually use two methods to update the perturbation value: random noise [2, 16] and gradient estimation [20, 21]. Random noise causes clean images to randomly walk on the decision boundary and hope to cross it. Gradient estimation is a gradient-free optimization technology that approximates a gradient direction through random sampling, which can speed up attack efficiency and the most commonly used one is Natural Evolutionary Strategies (NES) [20]. Although both of the above strategies are effective, they still require many queries, and the query budget increases significantly as the parameter space becomes larger [21]. Even if [21] introduces prior information to reduce the parameter space, the query efficiency is relatively limited. Therefore, we consider further reducing the parameter space. For limited queries, it is unlikely to enumerate the entire parameter space. Recent work [4, 27] shows that adversarial examples are often generated at the extreme points of l\u221e norm ball, which illustrates that it is easier to find adversarial examples at these extreme points. Empirical findings in [27] also suggest that adversarial examples obtained from PGD attacks [26] are mostly found on the extreme points of l\u221enorm ball. 
Inspired by this, we directly restrict the possible perturbation to the extreme points of the l∞-norm ball and change the parameter space from continuous space to discrete space. Specifically, the adversarial perturbation δ is sampled from the Binomial distribution {−ε, ε}^d, called discrete noises. In this way, we directly reduce the parameter space from [−ε, ε]^d to {−ε, ε}^d, which has only 2^d possible search directions. We conduct a case study to illustrate the effectiveness of these discrete noises, as shown in Table 1. Here, we use Random Attack as the baseline and report the mIoU under 50 and 200 query budgets. We observe that for the same number of queries, discrete noises always obtain lower mIoU, with a significant gap over the other strategies, which illustrates the effectiveness of reducing the parameter space. 3.3. Discrete Linear Attack In this section, we introduce the proposed Discrete Linear Attack (DLA) based on the aforementioned analysis. DLA consists of two main components: perturbation exploration and perturbation calibration. In the perturbation exploration phase, DLA introduces discrete perturbations in the horizontal or vertical direction to the input, aiming to achieve a better initialization. In the perturbation calibration phase, DLA dynamically flips the perturbation direction in certain regions based on the proxy index. This allows for iterative updates and calibration of the perturbation. The pipeline of DLA is outlined in Algorithm 1.
Algorithm 1 Discrete Linear Attack (DLA)
Input: the image x, model f, proxy index L, iteration T
Output: x_adv
1: l_min ← L(x), δ̂ ← 0, i ← 0, M ← 1, n ← 0
2: for t ∈ [1, T] do
3:   if t ≤ T/5 then
4:     // Perturbation Exploration
5:     k ← t % 2
6:     δ ∼ k · {−ε, ε}^h + (1 − k) · {−ε, ε}^w
7:     if l_min > L(x + δ) then
8:       l_min ← L(x + δ), δ̂ ← δ, d ← k
9:     end if
10:  else
11:    // Perturbation Calibration
12:    c ← d · ⌈h/2^n⌉ + (1 − d) · ⌈w/2^n⌉
13:    M[d×i×c : d×(i+1)×c + (1−d)×h, (1−d)×i×c : (1−d)×(i+1)×c + d×w] ∗= −1
14:    if l_min > L(x + δ̂ · M) then
15:      l_min ← L(x + δ̂ · M), M̂ ← M
16:    else
17:      M[d×i×c : d×(i+1)×c + (1−d)×h, (1−d)×i×c : (1−d)×(i+1)×c + d×w] ∗= −1
18:    end if
19:    i ← i + 1
20:    if i == 2^n then
21:      i ← 0, n ← n + 1
22:    end if
23:    if n == ⌈log2(d · h + (1 − d) · w)⌉ + 1 then
24:      δ̂ ← δ̂ · M, i ← 0, n ← 0
25:    end if
26:  end if
27: end for
28: x_adv ← x + δ̂ · M̂
29: return x_adv
Perturbation Exploration. As analyzed in Section 3.2, discrete linear noises can greatly compress the parameter space and improve attack efficiency. Combined with the proxy index and considering the aspect ratio of the image, we initialize the perturbation as follows: x_{adv} \leftarrow x + \delta, \quad \delta \sim \{-\epsilon, \epsilon\}^d, (3) where d denotes the height or width of the image. In perturbation exploration, we alternately sample discrete linear noises in the horizontal or vertical direction and add them to the clean image.
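As a rough illustration of this exploration phase, the sketch below draws a full-image ±ε noise that is constant along one axis (horizontal or vertical "lines") and keeps the candidate with the lowest proxy index. It reuses the illustrative query_labels, project_linf, and miou_proxy helpers from earlier; broadcasting across channels and the T/5 budget split of Algorithm 1 are only loosely followed here.

```python
import torch

def linear_discrete_noise(shape, eps, horizontal: bool):
    """Sample a +/-eps noise that is constant along rows (horizontal lines)
    or along columns (vertical lines), matching the image shape (1, C, H, W)."""
    _, c, h, w = shape
    if horizontal:
        signs = torch.randint(0, 2, (1, c, h, 1)).float() * 2 - 1  # one sign per row
    else:
        signs = torch.randint(0, 2, (1, c, 1, w)).float() * 2 - 1  # one sign per column
    return (signs * eps).expand(*shape)

def perturbation_exploration(x, target, eps, queries, num_classes=21):
    """Alternate horizontal/vertical discrete linear noises; keep the best one."""
    best_delta = torch.zeros_like(x)
    best = miou_proxy(query_labels(x)[0], target, num_classes)
    for t in range(queries):
        delta = linear_discrete_noise(x.shape, eps, horizontal=(t % 2 == 0))
        score = miou_proxy(query_labels(project_linf(x + delta, x, eps))[0],
                           target, num_classes)
        if score < best:
            best, best_delta = score, delta
    return best_delta, best
```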
Then, we calculate the proxy index and retain the adversarial perturbation that obtains the minimum proxy index as \u02c6 \u03b4. Perturbation Calibration. Although perturbation exploration has demonstrated high attack performance, the obtained adversarial perturbations still fall short of optimality. This limitation arises from the coarse-grained nature of perturbation exploration, which fails to consider the finegrained updating of local perturbations. Given the discrete nature of the noise, we propose generating new perturbations by flipping the sign of the existing perturbation. In the perturbation calibration phase, we adopt a hierarchical approach to randomly flip the sign of local perturbations, thereby further refining the perturbations. This process involves first attempting to flip the global perturbation and subsequently dividing the image into blocks, performing flipping operations on each block. Specifically, we first partition the entire image into blocks, then iterate over each block and flip the sign of the discrete linear perturbation. If the mIoU after flipping is lower, the current perturbation is saved. After traversing the current block, DLA further divides the image into more fine-grained blocks and then traverses. By employing hierarchical blocking and flipping, we aim to obtain the most effective adversarial examples. This operations are outlined in Lines 12-25 of Algorithm 1. 4. Experiments 4.1. Experimental Setup Datasets. We attack the semantic segmentation models with two widely used semantic segmentation datasets: Cityscapes [11] (19 classes) and ADE20K [36] (150 classes). Following [28] and [15], we randomly select 150 and 250 images from the validation set of Cityscapes and ADE20K. For evaluation metrics, we choose the standard metric, mean Intersection-over Union (mIoU) [13], a perpixel metric that directly corresponds to the per-pixel classification formulation. After attacking, the less the mIoU, the better the attack performance. Models. We select two types of semantic segmentation models: traditional convolutional models (FCN [25], DeepLabv3 [5], and PSPNet [35]), and transformer-based models (SegFormer [34] and MaskFormer [10]). Please refer to Supplementary Material C for more model details. Attacks. We select 7 attack algorithms for performance comparison, including zero-order optimization (NES [20], Bandits [21], ZO-SignSGD [24], and SignHunter [1]) and random search (Random attack (Random), SimBA [16], and Square Attack [2] (Square)). Implementation details. In all experiments, the maximum perturbation epslion \u03f5 is 8. For NES [20], we set the number of queries for a single attack q = 10. For Bandit Attack, we set the initial value of patch size prioritysize = 20, and the learning rate priorexploration = 0.1. For ZOSignSGD [24], we set the same number of single attack queries as NES q = 10. For Square Attack [2], we set the initial value of the fraction of pixels pinit is 0.05. For SimBA [16], we set the magnitude of the perturbation delta as 50. The setting of SignHunter [1] is consistent with the original paper. To alleviate the effect of randomness, we report average mIoU (%) after three attacks. 4.2. Performance Comparison Attack Results. Table 2 illustrates the attack results of 8 black-box attacks on Cityscapes [11] and ADE20K [36]. We report mIoU (%) of 5 models under 50 and 200 query budget. Random and NES [20] have lower attack performance due to their complex parameter spaces. 
ZOSignSGD [24], SimBA [16], and Square [2] introduce local prior information, which can further improve attack performance. Furthermore, both Bandits [21] and SignHunter [1] use non-overlapping local noise, thus achieving sub-optimal performance. However, as shown in Figure 5, Bandits\u2019 patch noise and SignHunter\u2019s strip noise are colored and are very easy to perceive by humans. Our DLA significantly outperforms other competing attacks on both datasets. On Cityscapes\u2019 PSPNet, DLA reduces mIoU by 15.49% and 23.98% compared to Bandits and Signhunter under 200 queries. Further, on the more challenging PSPNet of ADE20K, DLA reduces mIoU by 14.08% and 10.31% compared to Bandits and Signhunter under 200 queries. In terms of visualization, our DLA maintains the imperceptibility of adversarial perturbations and is able to destroy the outline of objects well. In terms of attack efficiency, the attack performance of DLA under 50 queries exceeds the results of other attacks under 200 queries by a very significant gap. Overall, our DLA has extremely high attack efficiency and can more efficiently evaluate the adversarial 6 \fTable 2. Attack results on Cityscapes and ADE20K. We report mIoU (%) under 50/200 query budget. Model Dataset Attack FCN [25] PSPNet [35] DeepLab V3 [5] SegFormer [34] MaskFormer [10] Clean 77.89 77.83 77.70 80.43 73.91 Random 35.76/34.94 48.81/47.18 54.57/52.77 58.59/56.07 39.09/39.06 NES [20] 34.47/33.82 48.34/47.40 54.32/52.99 58.00/55.34 51.94/52.56 Bandits [21] 18.17/15.65 20.81/17.55 29.85/26.73 39.43/36.14 26.94/26.88 ZO-SignSGD [24] 34.97/34.01 46.69/45.80 51.83/50.54 55.67/54.81 49.65/49.59 SignHunter [1] 23.88/21.67 33.52/26.04 44.24/35.93 41.38/34.18 47.05/27.06 SimBA [16] 33.74/29.58 46.27/40.22 54.67/50.17 54.17/52.67 33.52/32.71 Square [2] 35.47/35.99 48.47/49.18 54.45/56.23 56.71/52.18 50.87/49.84 Cityscapes [11] Ours 3.18/3.07 2.14/2.06 1.79/1.71 18.12/17.78 2.79/2.78 Clean 33.54 37.68 39.36 43.74 45.50 Random 22.85/22.13 27.72/27.36 25.82/24.81 38.02/37.64 25.37/24.06 NES [20] 24.47/23.96 26.57/26.26 23.83/23.41 36.35/36.06 34.78/34.55 Bandits [21] 25.10/23.67 25.02/23.93 27.52/26.36 36.32/35.03 26.14/26.91 ZO-SignSGD [24] 23.29/22.94 26.82/26.47 25.38/24.41 35.22/32.18 33.32/32.86 SignHunter [1] 20.15/16.72 24.21/20.16 25.40/20.48 32.56/28.22 28.78/16.78 SimBA [16] 24.20/21.49 26.36/22.92 25.56/22.13 36.70/34.81 35.62/33.18 Square [2] 23.94/22.90 26.87/25.89 27.70/26.46 35.43/34.76 26.29/26.41 ADE20K [36] Ours 8.18/7.97 10.19/9.85 11.34/10.67 28.91/27.85 12.14/12.14 Clean Random Nes Bandits ZO-signsgd SignHunter SimBA Square Ours Figure 5. Visualization of different attacks on Cityscapes and the threat model is SegFormer. robustness of existing semantic segmentation models. Discussion. In Table 2, we observe that decision-based attacks on ADE20K [36] are more challenging than attacking Cityscapes [11]. We think the possible reason is that the category distribution of images in Cityscapes is relatively even, and they are all urban scenes with relatively high similarity and low complexity, so it is easier to attack. ADE20K has more categories and the differences between images are larger, so the attack is more difficult. In addition, we also find that SegFormer [34] demonstrates the best adversarial robustness under 8 attacks on both datasets, compared with the other 4 semantic segmentation models. 
This is because SegFormer is a transformer-based model, its main components are transformers, and the self-attention mechanism leads to higher adversarial robustness [8, 30], which is consistent with the description in SegFormer. Furthermore, it is worth noting that the backbone of MaskFormer has the structure of CNN, which implies that it does not exhibit a higher level of robustness compared to SegFormer. 4.3. Diagnostic Experiment To study the effect of our core designs, we conduct ablative studies on Cityscapes and ADE20K. We use SegFormer [34] as the threat model and attack it under 50/200 query budget. Attack Design. We first study the attack design of DLA, as shown in Table 3. In perturbation exploration, random is the random noise of Random Attack, and horizontal and vertical is to add discrete linear noise horizontally and vertically respectively. iterative is to add discrete linear noise alternately horizontally and vertically. In perturbation calibration, random is the random update noise of Random attack, and flip is the update strategy of DLA\u2019s filp perturbation sign. We observe that flip can achieve better attack performance than random in perturbation calibration. When the perturbation exploration is iterative, under 200 7 \fFigure 6. Attack performance of different black-box attacks under different perturbation magnitudes \u03f5 within a 200 query budget. queries, it exceeds random 0.76% on Cityscapes and 0.83% on ADE20K. In perturbation exploration, discrete linear noise significantly surpasses random noise by a large advantage. vertical and iterative achieve the best performance on Cityscapes and ADE20K respectively. We find that the aspect ratio of Cityscapes is fixed, so resulting in vertical noises being more effective. The aspect ratio of ADE20K changes, so iterative has better attack performance, which means it is more generalizable when facing images of more scales. Considering the robustness of facing images with different aspect ratios, we choose iterative as the strategy for adding discrete linear noises. Perturbation Magnitude \u03f5. To assess the impact of different perturbation magnitudes \u03f5 on attack performance, we select \u03f5 as 4, 8, and 16 for experiments on SegFormer. Figure 6 depicts the attack performance of different black-box attacks under different perturbation magnitudes \u03f5. As the magnitude of perturbations increases, all attacks exhibit a greater decrease in overall mIoU. Notably, DLA consistently achieves the highest attack performance across all three perturbation magnitudes \u03f5. Additionally, we observe that as the magnitude of perturbations increases, DLA outperforms other competing attacks in terms of the extent to which it can degrade mIoU. The above experiments illustrate that DLA has a stronger ability to evaluate the adversarial robustness of semantic segmentation under different perturbation magnitudes than other competing attacks. Limited Queries. Since a large number of queries leads to detection by the target system, we test the attack performance under extremely limited queries. To simulate a limited number of queries, we give 10 query budgets and evaluate the mIoU after the attack, as shown in Table 4. On Cityscapes, DLA demonstrated extreme attack efficiency, Table 3. Ablation study on the attack design of DLA. Pert. Explor. Pert. Callbr. 
Dataset random horizontal vertical iterative random flip Cityscapes ADE20K 80.43 43.74 \u2713 \u2713 58.59/56.07 38.02/37.64 \u2713 \u2713 55.47/55.12 36.94/36.57 \u2713 \u2713 26.41/26.21 32.02/31.65 \u2713 \u2713 17.53/17.10 29.61/28.49 \u2713 \u2713 18.29/18.26 29.56/28.77 \u2713 \u2713 18.12/17.78 28.91/27.85 Table 4. Attack results under limited queries (10 query budget). Attack Clean Random NES Bandits ZO-SignSGD SignHunter SimBA Square Ours Cityscapes 80.43 59.22 59.26 41.20 56.23 47.90 54.62 57.22 18.89 ADE20K 43.74 37.82 38.20 35.71 37.82 35.95 37.57 37.64 30.85 reducing SegFormer\u2019s mIoU of 61.54% within 10 queries and surpassing the second-best bandits attack of 22.31% by a significant margin. In the more challenging ADE20K, our DLA reduces the mIoU of SegFormer by 12.89% in 10 queries. Likewise, we beat the next best bandits attack by 4.86%. Combined with the attack results in Table 1, our DLA has attack performance that exceeds other current competitive attacks under both limited queries and a large number of queries. It indicates DLA can effectively evaluate the adversarial robustness of semantic segmentation in industrial and academic scenarios. 5." + }, + { + "url": "http://arxiv.org/abs/2305.10665v2", + "title": "Content-based Unrestricted Adversarial Attack", + "abstract": "Unrestricted adversarial attacks typically manipulate the semantic content of\nan image (e.g., color or texture) to create adversarial examples that are both\neffective and photorealistic, demonstrating their ability to deceive human\nperception and deep neural networks with stealth and success. However, current\nworks usually sacrifice unrestricted degrees and subjectively select some image\ncontent to guarantee the photorealism of unrestricted adversarial examples,\nwhich limits its attack performance. To ensure the photorealism of adversarial\nexamples and boost attack performance, we propose a novel unrestricted attack\nframework called Content-based Unrestricted Adversarial Attack. By leveraging a\nlow-dimensional manifold that represents natural images, we map the images onto\nthe manifold and optimize them along its adversarial direction. Therefore,\nwithin this framework, we implement Adversarial Content Attack based on Stable\nDiffusion and can generate high transferable unrestricted adversarial examples\nwith various adversarial contents. Extensive experimentation and visualization\ndemonstrate the efficacy of ACA, particularly in surpassing state-of-the-art\nattacks by an average of 13.3-50.4% and 16.8-48.0% in normally trained models\nand defense methods, respectively.", + "authors": "Zhaoyu Chen, Bo Li, Shuang Wu, Kaixun Jiang, Shouhong Ding, Wenqiang Zhang", + "published": "2023-05-18", + "updated": "2023-11-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.CR" + ], + "main_content": "Introduction Deep neural networks (DNNs) have significantly progressed in many tasks [18, 7]. However, with the rise of adversarial examples, the robustness of DNNs has been dramatically challenged [16]. Adversarial examples show the vulnerability of DNNs and expose security vulnerabilities in many security-sensitive applications. To avoid potential risks and further research the robustness of DNNs, it is of great value to expose as many \u201cblind spots\u201d of DNNs as possible at the current research stage. Nowadays, various methods are proposed to generate adversarial examples [4, 5, 6]. 
To maintain human visual imperceptibility and images\u2019 photorealism, adversarial perturbations within the constraint of lp norm are generated by these adversarial attacks. However, it is well known that the adversarial examples generated under lp norm have obvious limitations: firstly, they are not ideal in terms of perceptual similarity and are still easily perceptible by humans [24, 23, 62]; secondly, these adversarial perturbations are not natural enough and have an inevitable domain shift with the noise in the natural world, resulting in the adversarial examples being different from the hard examples that appear in the real world [64]. In addition, current defense methods against lp norm adversarial examples overestimate their abilities, known as the Dunning-Kruger effect [28]. It can effectively defend against lp norm adversarial examples but is not robust enough when facing new and unknown attacks [25]. Therefore, unrestricted adversarial attacks are beginning to emerge, using unrestricted but natural changes to replace small lp norm perturbations, which are more practically meaningful. \u2217This work was done when Zhaoyu Chen was an intern at Youtu Lab, Tencent. \u2020indicates corresponding authors. 37th Conference on Neural Information Processing Systems (NeurIPS 2023). arXiv:2305.10665v2 [cs.CV] 29 Nov 2023 \fExisting unrestricted adversarial attacks generate adversarial examples based on image content such as shape, texture, and color. Shape-based unrestricted attacks [56, 1] iteratively apply small deformations to the image through a gradient descent step. Then, texture-based unrestricted attacks [2, 40] are introduced, which manipulate an image\u2019s general attributes (texture or style) to generate adversarial examples. However, texture-based attacks result in unnatural results and have low adversarial transferability. Researchers then discover that manipulating pixel values along dimensions generates more natural adversarial examples, leading to the rise of color-based unrestricted attacks [20, 30, 2, 61, 47, 60]. Nonetheless, color-based unrestricted attacks tend to compromise flexibility in unconstrained settings to guarantee the photorealism of adversarial examples. They are achieved either through reliance on subjective intuition and objective metrics or by implementing minor modifications, thereby constraining their potential for adversarial transferability. Considering the aforementioned reasons, we argue that an ideal unrestricted attack should meet three criteria: i) it needs to maintain human visual imperceptibility and the photorealism of the images; ii) the attack content should be diverse, allowing for unrestricted modifications of image contents such as texture and color, while ensuring semantic consistency; iii) the adversarial examples should have a high attack performance so that they can transfer between different models. However, there is still a substantial disparity between the current and ideal attacks. To address this gap, we propose a novel unrestricted attack framework called Content-based Unrestricted Adversarial Attack. Firstly, we consider mapping images onto a low-dimensional manifold. This low-dimensional manifold is represented by a generative model and expressed as a latent space. This generative model is trained on millions of natural images, possessing two characteristics: i) sufficient capacity to ensure the photorealism of generated images; ii) well-alignment of image contents with latent space ensures a diversity of content. 
Subsequently, more generalized images can be generated by walking along the low-dimensional manifold. Optimizing the adversarial objective on this latent space allows us to achieve more diverse adversarial contents. In this paper, we propose Adversarial Content Attack (ACA) utilizing the diffusion model as a low-dimensional manifold. Specifically, we employ Image Latent Mapping (ILM) to map images onto the latent space, and utilize Adversarial Latent Optimization (ALO) to optimize the latents, thereby generating unrestricted adversarial examples with high transferability. In conclusion, our main contributions are: \u2022 We propose a novel attack framework called Content-based Unrestricted Adversarial Attack, which utilizes high-capacity and well-aligned low-dimensional manifolds to generate adversarial examples that are more diverse and natural in content. \u2022 We achieve an unrestricted content attack, known as the Adversarial Content Attack. By utilizing Image Latent Mapping and Adversarial Latent Optimization techniques, we optimize latents in a diffusion model, generating high transferable unrestricted adversarial examples. \u2022 The effectiveness of our attack has been validated through experimentation and visualization. Notably, we have achieved a significant improvement of 13.3\u223c50.4% over state-of-the-art attacks in terms of adversarial transferability. 2 Background and Preliminary Problem Definition. For a deep learning classifier F\u03b8(\u00b7) with parameters \u03b8, we denote the clean image as x and the corresponding true label as y. Formally, unrestricted adversarial attacks aim to create imperceptible adversarial perturbations (such as image distortions, texture or color modifications, etc.) for a given input x to generate an adversarial example xadv that can mislead the classifier F\u03b8(\u00b7): max xadv L(F\u03b8(xadv), y), s.t. xadv is natural, (1) where L(\u00b7) is the loss function. Because existing unrestricted attacks are limited by their attack contents, it prevents them from generating sufficiently natural adversarial examples and restricts their attack performance on different models. We hope that unrestricted adversarial examples are more natural and possess higher transferability. Therefore, we consider a more challenging and practical black-box setting to evaluate the attack performance of unrestricted adversarial examples. In contrast to the white-box setting, the black-box setting has no access to any information about the target model (i.e., architectures, network weights, and gradients). It can only generate adversarial examples by using a substitute model F\u03d5(\u00b7) and exploiting their transferability to fool the target model F\u03b8(\u00b7). 2 \fContent-based Unrestricted Adversarial Attack. Existing unconstrained attacks tend to modify fixed content in images, such as textures or colors, compromising flexibility in unconstrained settings to ensure the photorealism of adversarial examples. For instance, ColorFool [47] manually selects the parts of the image that are sensitive to human perception in order to modify the colors, and its effectiveness is greatly influenced by human intuition. Natural Color Fool [60], utilizes ADE20K [63] to construct the distribution of color distributions, resulting in its performance being restricted by the selected dataset. 
These methods subjectively choose the content to be modified for minor modifications, but this sacrifices the flexibility under unrestricted settings and limits the emergence of more \"unrestricted\" attacks. Taking this into consideration, we contemplate whether it is possible to achieve an unrestricted adversarial attack that can adaptively modify the content of an image while ensuring the semantic consistency of the image. Figure 1: Adversarial examples are generated along the adversarial direction of the low-dimensional manifold of natural images. This manifold represents many contents of natural images, so the generated unrestricted adversarial examples combine multiple adversarial contents (shape, texture and color). An ideal unrestricted attack should ensure the photorealism of adversarial examples, possess diverse adversarial contents, and exhibit potential attack performance. To address this gap, we propose a novel unrestricted attack framework called Content-based Unrestricted Adversarial Attack. Within this framework, we assume that natural images can be mapped onto a low-dimensional manifold by a generative model. As this lowdimensional manifold is well-trained on natural images, it naturally ensures the photorealism of the images and possesses the rich content present in natural images. Once we map an image onto a low-dimensional manifold, moving it along the adversarial direction on the manifold yields an unrestricted adversarial example. We argue that such a framework is closer to the ideal of unrestricted adversarial attacks, as it inherently guarantees the photorealism of adversarial examples and rich image content. Moreover, since the classifier itself fits the distribution of this low-dimensional manifold, adversarial examples generated along the manifold have more adversarial contents and the potential for strong attack performance, as shown in Figure 1. Naturally, selecting a low-dimensional manifold represented by a generative model necessitates careful consideration. There are two characteristics taken into account: i) sufficient capacity to ensure photorealism in the generated images; and ii) well-alignment ensures that image attributes are aligned with the latent space, thereby promoting diversity in content generation. Recently, diffusion models have emerged as a leading approach for generating high-quality images across varied datasets, frequently outperforming GANs [9]. However, several large-scale text-to-image diffusion models, including Imagen [44], DALL-E2 [41], and Stable Diffusion [42], have only recently come to the fore, exhibiting unparalleled semantic generation capabilities. Considering the trade-off between computational cost and high-fidelity image generation, we select Stable Diffusion as the low-dimensional manifold in this paper. It is based on prompt input and is capable of generating highly realistic natural images that conform to the semantics of the prompts. 3 Adversarial Content Attack Based on the aforementioned framework and the full utilization of the diffusion model\u2019s capability, we achieve the unrestricted content-based attack known as Adversarial Content Attack (ACA), as shown in Figure 2. Specifically, we first employ Image Latent Mapping (ILM) to map images onto the latent space represented by this low-dimensional manifold. Subsequently, we introduce an Adversarial Latent Optimization (ALO) technique that moves the latent representations of images along the adversarial direction on the manifold. 
Finally, based on iterative optimization, ACA can generate highly transferable unrestricted adversarial examples that appear quite natural. The 3 \f Latent Space QKV QKV QKV QKV Text Encoder Fire Engine Tow Truck Prompt: A lego fire truck with an american flag on it Text embedding Image Latent Mapping Adversarial Latent Optimization T Iterations Grad Skip Grad Constrain and preserve image content Null text embedding Condition Classifier Adversarial example Clean image Skip Grad Grad T Iterations Inversion of DDIM sampling DDIM sampling Figure 2: Pipeline of Adversarial Content Attack. First, we use Image Latent Mapping to map images into latent space. Next, Adversarial Latent Optimization is used to generate adversarial examples. Eventually, the generated adversarial examples can fool the target classifier. algorithm for ACA is presented in Algorithm 1, and we further combine the diffusion model to design the corresponding mapping and optimization methods. 3.1 Image Latent Mapping For the diffusion model, the easiest image mapping is the inverse process of DDIM sampling [9, 48] with the condition embedding C = \u03c8(P) of prompts P, based on the assumption that the ordinary differential equation (ODE) process can be reversed in the limit of small steps: zt+1 = r\u03b1t+1 \u03b1t zt + \u221a\u03b1t+1( s 1 \u03b1t+1 \u22121 \u2212 r 1 \u03b1t \u22121) \u00b7 \u03f5\u03b8(zt, t, C), (2) where z0 is the given real image, a schedule {\u03b20, ..., \u03b2T } \u2208(0, 1) and \u03b1t = Qt 1(1 \u2212\u03b2i). In general, this process is the reverse direction of the denoising process (z0 \u2192zT instead of zT \u2192z0), which can map the image z0 to zT in the latent space. Image prompts are automatically generated using image caption models (e.g., BLIP v2 [31]). For simplicity, the encoding of the VAE is ignored. Text-to-image synthesis usually emphasizes the effect of the prompt. Therefore, a classifier-free guidance technique [19] is proposed. Its prediction is also performed unconditionally, which is then extrapolated with the conditioned prediction. Given w as the guidance scale parameter and \u2205= \u03c8(\u201d\u201d) as the embedding of a null text, the classifier-free guidance prediction is expressed by: \u02dc \u03f5\u03b8(zt, t, C, \u2205) = w \u00b7 \u03f5\u03b8(zt, t, C) + (1 \u2212w) \u00b7 \u03f5\u03b8(zt, t, \u2205), (3) where w = 7.5 is the default value for Stable Diffusion. However, since the noise is predicted by the model \u03f5\u03b8 in the inverse process of DDIM sampling, a slight error is incorporated in every step. Due to the existence of a large guidance scale parameter w in the classifier-free guidance technique, slight errors are amplified and lead to cumulative errors. Consequently, executing the inverse process of DDIM sampling with classifier-free guidance not only disrupts the Gaussian distribution of noises but also induces visual artifacts of unreality [37]. To mitigate cumulative errors, we follow [37] and optimize a null text embedding \u2205t for each timestamp t. First, the inverse process of DDIM sampling with w = 1 outputs a series of consecutive latents {z\u2217 0, ..., z\u2217 T } where z\u2217 0 = z0. 
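A minimal sketch of this inversion loop (Eq. 2, with the classifier-free-guided noise of Eq. 3 at guidance scale w) is given below. Here eps_theta stands for the U-Net noise predictor of Stable Diffusion and alphas for the cumulative schedule ᾱ_t as a 1-D tensor; both of these, and the simplified indexing of the discrete timestep schedule, are assumptions of this sketch rather than the paper's exact implementation.

```python
import torch

@torch.no_grad()
def ddim_invert(z0, eps_theta, alphas, cond, null_cond, w=1.0, T=50):
    """Map an image latent z0 to z_T by running the DDIM inversion of Eq. (2).

    eps_theta(z, t, c): assumed noise predictor (e.g. Stable Diffusion's U-Net).
    alphas: 1-D tensor of cumulative products alpha_bar_t for the DDIM schedule.
    With w = 1 this matches the guidance-free inversion used to seed ILM.
    """
    z = z0
    trajectory = [z0]                      # {z*_0, ..., z*_T}
    for t in range(T - 1):
        # classifier-free guidance, Eq. (3); the null term vanishes when w = 1
        eps = w * eps_theta(z, t, cond) + (1 - w) * eps_theta(z, t, null_cond)
        a_t, a_next = alphas[t], alphas[t + 1]
        z = (a_next / a_t).sqrt() * z + a_next.sqrt() * (
            (1 / a_next - 1).sqrt() - (1 / a_t - 1).sqrt()) * eps
        trajectory.append(z)
    return z, trajectory
```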
Then, we conduct the following optimization with w = 7.5 (the default value for Stable Diffusion) and \u00af zT = zt during N iterations for the timestamps t = {T, ..., 1}: min \u2205t ||z\u2217 t\u22121 \u2212zt\u22121( \u00af zt, t, C, \u2205t)||2 2, (4) zt\u22121( \u00af zt, t, C, \u2205t) = r\u03b1t\u22121 \u03b1t \u00af zt + \u221a\u03b1t\u22121 s 1 \u03b1t\u22121 \u22121 \u2212 r 1 \u03b1t \u22121 ! \u00b7 \u02dc \u03f5\u03b8(zt, t, C, \u2205t). (5) At the end of each step, we update \u00af zt\u22121 to zt\u22121( \u00af zt, t, C, \u2205t). Finally, we obtain the latent of the given image in the low-dimensional manifold, consisting of the noise \u00af zT , the null text embedding \u2205t, and the text embedding C = \u03c8(P). Note that compared to other strategies [12, 43, 26, 54], the current strategy is simple and effective and does not require fine-tuning to obtain high-quality image reconstruction. Next, we exploit this latent to generate unrestricted adversarial examples. 4 \f3.2 Adversarial Latent Optimization In this section, we propose an optimization method for latents to maximize the attack performance on unrestricted adversarial examples. In the latent space of a given image after ILM, the null text embedding \u2205t ensures the quality of the reconstructed image, while the text embedding C ensures the semantic information of the image. Therefore, optimizing both embeddings may not be ideal. Considering that the noise \u00af zT largely represents the image\u2019s information in the latent space, we choose to optimize it instead. However, this optimization is still challenged by complex gradient calculations and the overflow of the value range. Based on the latents generated by ILM, we define the denoising process of diffusion models as \u2126(\u00b7) through Equation 5, and it involves T iterations: \u2126(zT , T, C, {\u2205t}T t=1) = z0 (z1 (..., (zT \u22121, T \u22121, C, \u2205T \u22121) , ..., 1, C, \u22051) , 0, C, \u22050) . (6) Therefore, the reconstructed image is denoted as \u00af z0 = \u2126(zT , T, C, {\u2205t}). The computational process of VAE is disregarded herein, as it is differentiable. Combining Equation 7, our adversarial objective optimization is expressed by: max \u03b4 L (F\u03b8( \u00af z0), y) , s.t. ||\u03b4||\u221e\u2264\u03ba, \u00af z0 = \u2126(zT + \u03b4, T, C, {\u2205t}) and \u00af z0 is natural, (7) where \u03b4 is the adversarial perturbation on the latent space. Our loss function consists of two parts: i) cross-entropy loss Lce, which mainly guides adversarial examples toward misclassification. ii) mean square error loss Lmse mainly guides the generated adversarial examples to be as close as possible to clean images on l2 distance. Therefore, the total loss function L is expressed as: L(F\u03b8( \u00af z0), y, z0) = Lce(F\u03b8( \u00af z0), y)) \u2212\u03b2 \u00b7 Lmse( \u00af z0, z0), (8) where \u03b2 is 0.1 in this paper. The loss function L aims to maximize the cross-entropy loss and minimize the l2 distance between the adversarial example \u00af z0 and the clean image z0. To ensure the consistency of z0 and \u00af z0, we assume that \u03b4 does not change the consistency when \u03b4 is extremely small, i.e., ||\u03b4||\u221e\u2264\u03ba. The crux pertains to determining the optimal \u03b4 that yields the maximum classification loss. 
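A minimal sketch of the adversarial objective in Eqs. (7)-(8) might look like the following, written against an abstract denoise(·) that wraps the T-step process Ω together with the (differentiable) VAE decoding. The classifier F, the β = 0.1 weight, and the κ-ball on the latent perturbation follow the text, while the helper names and signatures are ours.

```python
import torch
import torch.nn.functional as F_nn

def alo_loss(classifier, denoise, z_T, delta, cond, null_embeds,
             z0_img, label, beta=0.1):
    """L = L_ce(F(z0_bar), y) - beta * L_mse(z0_bar, z0): push the reconstruction
    toward misclassification while keeping it close to the clean image in l2."""
    # Omega(z_T + delta, T, C, {null_t}) plus VAE decoding, folded into denoise(.)
    z0_bar = denoise(z_T + delta, cond, null_embeds)
    ce = F_nn.cross_entropy(classifier(z0_bar), label)
    mse = F_nn.mse_loss(z0_bar, z0_img)
    return ce - beta * mse

def project_kappa(delta, kappa=0.1):
    """Keep the latent perturbation inside the kappa-ball: ||delta||_inf <= kappa."""
    return delta.clamp(-kappa, kappa)
```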
Analogous to conventional adversarial attacks, we employ gradientbased techniques to estimate \u03b4 through: \u03b4 \u2243\u03b7\u2207zT L (F\u03b8( \u00af z0), y), where \u03b7 denotes the magnitude of perturbations that occur in the direction of the gradient. To expand \u2207zT L (F\u03b8( \u00af z0), y) by the chain rule, we can have these derivative terms as follows: \u2207zT L (F\u03b8( \u00af z0), y) = \u2202L \u2202\u00af z0 \u00b7 \u2202\u00af z0 \u2202z1 \u00b7 \u2202z1 \u2202z2 \u00b7 \u00b7 \u00b7 \u2202zT \u22121 \u2202zT . (9) Skip Gradient. After observing the items, we find that although each item is differentiable, it is not feasible to derive the entire calculation graph. First, we analyze the term \u2202L \u2202\u00af z0 , which represents the derivative of the classifier with respect to the reconstructed image \u00af z0 and provides the adversarial gradient direction. Then, for \u2202zt \u2202zt+1 , each calculation of the derivative represents the calculation of a backpropagation. Furthermore, a complete denoising process accumulates T calculation graphs, resulting in memory overflow (similar phenomena are also found in [45]). Therefore, the gradient of the denoising process cannot be directly calculated. Fortunately, we propose a skip gradient to approximate \u2202z0 \u2202zT = \u2202\u00af z0 \u2202z1 \u00b7 \u2202z1 \u2202z2 \u00b7 \u00b7 \u00b7 \u2202zT \u22121 \u2202zT . Recalling the diffusion process, the denoising process aims to eliminate the Gaussian noise added in DDIM sampling [48, 9, 42]. DDIM samples zt at any arbitrary time step t in a closed form using reparameterization trick: zt = \u221a\u03b1tz0 + \u221a 1 \u2212\u03b1t\u03b5, \u03b5 \u223cN(0, I). (10) Consequently, we perform a manipulation by rearranging Equation 10 to obtain z0 = 1 \u221a\u03b1t zt \u2212 q 1\u2212\u03b1t \u03b1t \u03b5. Hence, we further obtain \u2202z0 \u2202zt = 1 \u221a\u03b1t . In Stable Diffusion, timestep t is at most 1000, so lim t\u21921000 \u2202z0 \u2202zt = lim t\u21921000 1 \u221a\u03b1t \u224814.58. In summary, \u2202z0 \u2202zt can be regarded as a constant \u03c1 and Equation 9 can be re-expressed as \u2207zT L (F\u03b8( \u00af z0), y) = \u03c1 \u2202L \u2202\u00af z0 . In summary, skip gradients approximate the gradients of the denoising process while reducing the computation and memory usage. Differentiable Boundary Processing. Since the diffusion model does not explicitly constrain the value range of \u00af z0, the modification of zT may cause the value range to be exceeded. So we introduce 5 \fAlgorithm 1 Adversarial Content Attack Input: a input image z0 with the label y, a text embedding C = \u03c8(P), a classifier F\u03b8(\u00b7), DDIM steps T, image mapping iteration Ni, attack iterations Na, and momentum factor \u00b5 1: Calculate latents {z\u2217 0, ..., z\u2217 T } using Equation 5 over z0 with w = 1 2: Initialize w = 7.5, \u00af zT \u2190z\u2217 T , \u2205\u2190\u03c8(\u201d\u201d), \u03b40 \u21900, g0 \u21900 3: // Image Latent Mapping 4: for t = T, T \u22121 . . . , 1 do 5: for j = 1, . . . , Ni do 6: \u2205t \u2190\u2205t \u2212\u03b6\u2207\u2205t||z\u2217 t\u22121 \u2212zt\u22121( \u00af zt, t, C, \u2205t)||2 2 7: end for 8: \u00af zt\u22121 \u2190zt\u22121( \u00af zt, t, C, \u2205t), \u2205t\u22121 \u2190\u2205t 9: end for 10: // Adversarial Latent Optimization 11: for k = 1, . . . 
, Na do 12: \u00af z0 \u2190\u2126 \u0000\u00af zT + \u03b4k\u22121, T, C, {\u2205t}T t=1 \u0001 13: gk \u2190\u00b5 \u00b7 gk\u22121 + \u2207zT L(F\u03b8(\u03f1( \u00af z0),y)) ||\u2207zT L(F\u03b8(\u03f1( \u00af z0),y))||1 14: \u03b4k \u2190\u03a0\u03ba (\u03b4k\u22121 + \u03b7 \u00b7 sign(gk)) 15: end for 16: \u00af z0 \u2190\u03f1 \u0000\u2126 \u0000\u00af zT + \u03b4Na, T, C, {\u2205t}T t=1 \u0001\u0001 Output: The unrestricted adversarial example \u00af z0. differentiable boundary processing \u03f1(\u00b7) to solve this problem. \u03f1(\u00b7) constrains the values outside [0, 1] to the range of [0, 1]. The mathematical expression of DPB is as follows: \u03f1(x) = \uf8f1 \uf8f2 \uf8f3 tanh(1000x)/10000, x < 0, x, 0 \u2264x \u22641, tanh(1000(x \u22121))/10001, x > 1. (11) Next, we define \u03a0\u03ba as the projection of the adversarial perturbation \u03b4 onto \u03ba-ball. We introduce momentum g and express the optimization adversarial latents as: gk \u2190\u00b5 \u00b7 gk\u22121 + \u2207zT L (F\u03b8 ((\u03f1( \u00af z0), y)) ||\u2207zT L (F\u03b8 (\u03f1( \u00af z0), y)) ||1 , (12) \u03b4k \u2190\u03a0\u03ba (\u03b4k\u22121 + \u03b7 \u00b7 sign(gk)) . (13) In general, Adversarial Latent Optimization (ALO) employs skip gradient to determine the gradient of the denoising process, and integrates differentiable boundary processing to regulate the value range of adversarial examples, and finally performs iterative optimization according to the gradient. Combined with Image Latent Mapping, Adversarial Content Attack is illustrated in Algorithm 1. 4 Experiments 4.1 Experimental Setup Datasets. Our experiments are conducted on the ImageNet-compatible Dataset [29]. The dataset consists of 1,000 images from ImageNet\u2019s validation set [8], and is widely used in [10, 13, 58, 60]. Attack Evaluation. We choose SAE [20], ADer [1], ReColorAdv [30], cAdv [2], tAdv [2], ACE [61], ColorFool [47], NCF [60] as comparison methods of Adversarial Content Attack (ACA). The parameters for these unrestricted attacks follow the corresponding default settings. Our attack evaluation metric is the attack success rate (ASR, %), which is the percentage of misclassified images. Models. To evaluate the adversarial robustness of network architectures, we select convolutional neural networks (CNNs) and vision transformers (ViTs) as the attacked models, respectively. For CNNs, we choose normally trained MoblieNet-V2 (MN-v2) [46], Inception-v3 (Inc-v3) [50], ResNet50 (RN-50) and ResNet-152 (RN-152) [18], Densenet-161 (Dense-161) [22], and EfficientNet-b7 (EFb7) [52]. For ViTs, we consider normally trained MoblieViT (MobViT-s) [35], Vision Transformer (ViT-B) [11], Swin Transformer (Swin-B) [34], and Pyramid Vision Transformer (PVT-v2) [55]. 6 \fTable 1: Performance comparison of adversarial transferability on normally trained CNNs and ViTs. We report attack success rates (%) of each method (\u201c*\u201d means white-box attack results). Surrogate Model Attack Models Avg. 
ASR (%) CNNs Transformers MN-v2 Inc-v3 RN-50 Dense-161 RN-152 EF-b7 MobViT-s ViT-B Swin-B PVT-v2 Clean 12.1 4.8 7.0 6.3 5.6 8.7 7.8 8.9 3.5 3.6 6.83 ILM 13.5 5.5 8.0 6.3 5.9 8.3 8.3 9.0 4.8 4.0 7.36 MobViT-s SAE 60.2 21.2 54.6 42.7 44.9 30.2 82.5* 38.6 21.1 20.2 37.08 ADef 14.5 6.6 9.0 8.0 7.1 9.8 80.8* 9.7 5.1 4.6 8.27 ReColorAdv 37.4 14.7 26.7 22.4 21.0 20.8 96.1* 21.5 16.3 16.7 21.94 cAdv 41.9 25.4 33.2 31.2 28.2 34.7 84.3* 32.6 22.7 22.0 30.21 tAdv 33.6 18.8 22.1 18.7 18.7 15.8 97.4* 15.3 11.2 13.7 18.66 ACE 30.7 9.7 20.3 16.3 14.4 13.8 99.2* 16.5 6.8 5.8 14.92 ColorFool 47.1 12.0 40.0 28.1 30.7 19.3 81.7* 24.3 9.7 10.0 24.58 NCF 67.7 31.2 60.3 41.8 52.2 32.2 74.5* 39.1 20.8 23.1 40.93 ACA (Ours) 66.2 56.6 60.6 58.1 55.9 55.5 89.8* 51.4 52.7 55.1 56.90 MN-v2 SAE 90.8* 22.5 53.2 38.0 41.9 26.9 44.6 33.6 16.8 18.3 32.87 ADer 56.6* 7.6 8.4 7.7 7.1 10.9 11.7 9.5 4.5 4.5 7.99 ReColorAdv 97.7* 18.6 33.7 24.7 26.4 20.7 31.8 17.7 12.2 12.6 22.04 cAdv 96.6* 26.8 39.6 33.9 29.9 32.7 41.9 33.1 20.6 19.7 30.91 tAdv 99.9* 27.2 31.5 24.3 24.5 22.4 40.5 16.1 15.9 15.1 24.17 ACE 99.1* 9.5 17.9 12.4 12.6 11.7 16.3 12.1 5.4 5.6 11.50 ColorFool 93.3* 9.5 25.7 15.3 15.4 13.4 15.7 14.2 5.9 6.4 13.50 NCF 93.2* 33.6 65.9 43.5 56.3 33.0 52.6 35.8 21.2 20.6 40.28 ACA (Ours) 93.1* 56.8 62.6 55.7 56.0 51.0 59.6 48.7 48.6 50.4 54.38 RN-50 SAE 63.2 25.9 88.0* 41.9 46.5 28.8 45.9 35.3 20.3 19.6 36.38 ADer 15.5 7.7 55.7* 8.4 7.8 11.4 12.3 9.2 4.6 4.9 9.09 ReColorAdv 40.6 17.7 96.4* 28.3 33.3 19.2 29.3 18.8 12.9 13.4 23.72 cAdv 44.2 25.3 97.2* 36.8 37.0 34.9 40.1 30.6 19.3 20.2 32.04 tAdv 43.4 27.0 99.0* 28.8 30.2 21.6 35.9 16.5 15.2 15.1 25.97 ACE 32.8 9.4 99.1* 16.1 15.2 12.7 20.5 13.1 6.1 5.3 14.58 ColorFool 41.6 9.8 90.1* 18.6 21.0 15.4 20.4 15.4 5.9 6.8 17.21 NCF 71.2 33.6 91.4* 48.5 60.5 32.4 52.6 36.8 19.8 21.7 41.90 ACA (Ours) 69.3 61.6 88.3* 61.9 61.7 60.3 62.6 52.9 51.9 53.2 59.49 ViT-B SAE 54.5 26.9 49.7 38.4 41.4 30.4 46.1 78.4* 19.9 18.1 36.16 ADer 15.3 8.3 9.9 8.4 7.6 12.0 12.4 81.5* 5.3 5.5 9.41 ReColorAdv 25.5 12.1 17.5 13.9 14.4 15.4 22.9 97.7* 10.9 8.6 15.69 cAdv 31.4 27.0 26.1 22.5 19.9 26.1 32.9 96.5* 18.4 16.9 24.58 tAdv 39.5 22.8 25.8 23.2 22.3 20.8 34.1 93.5* 16.3 15.3 24.46 ACE 30.9 11.4 22.0 15.5 15.2 13.0 17.0 98.6* 6.5 6.3 15.31 ColorFool 45.3 13.9 35.7 24.3 28.8 19.8 27.0 83.1* 8.9 9.3 23.67 NCF 55.9 25.3 50.6 34.8 42.3 29.9 40.6 81.0* 20.0 19.1 35.39 ACA (Ours) 64.6 58.8 60.2 58.1 58.1 57.1 60.8 87.7* 55.5 54.9 58.68 Implementation Details. Our experiments are run on an NVIDIA Tesla A100 with Pytorch. DDIM steps T = 50, image mapping iteration Ni = 10, attack iterations Na = 10, \u03b2 = 0.1, \u03b6 = 0.01, \u03b7 = 0.04, \u03ba = 0.1, and \u00b5 = 1. The version of Stable Diffusion [42] is v1.4. Prompts for images are automatically generated using BLIP v2 [31]. 4.2 Attacks on Normally Trained Models In this section, we assess the adversarial transferability of normally trained convolutional neural networks (CNNs) and vision transformers (ViTs), including methods such as SAE [20], ADer [1], ReColorAdv [30], cAdv [2], tAdv [2], ACE [61], ColorFool [47], NCF [60], and our ACA. Adversarial examples are crafted via MobViT-s, MN-v2, RN-50, and ViT-B, respectively. Avg. ASR (%) refers to the average attack success rate on non-substitute models. Table 1 illustrates the performance comparison of adversarial transferability on normally trained CNNs and ViTs. 
It can be observed that adversarial examples by ours generally exhibit superior transferability compared to those generated by state-of-the-art competitors and the impact of ILM on ASR is exceedingly marginal. When CNNs (RN-50 and MN-v2) are used as surrogate models, our ACA exhibits minimal differences with state-of-the-art NCF in MN-v2, RN-50, and RN-152. However, in Inc-v3, Dense-161, and EF-b7, such as when RN-50 is used as the surrogate model, we significantly outperform NCF by 28.0%, 13.4% and 27.9%, respectively. This indicates that our ACA has higher transferability in heterogeneous CNNs. Furthermore, our ACA demonstrates state-ofthe-art transferability in current unconstrained attacks under the more challenging cross-architecture 7 \fTable 2: Performance comparison of adversarial transferability on adversarial defense methods. Attack HGD R&P NIPS-r3 JPEG Bit-Red DiffPure Inc-v3ens3 Inc-v3ens4 IncRes-v2ens Res-De Shape-Res Avg. ASR (%) Clean 1.2 1.8 3.2 6.2 17.6 15.4 6.8 8.9 2.6 4.1 6.7 6.77 ILM 1.5 1.9 3.5 7.1 18.5 16.1 6.8 9.8 3.0 5.1 8.1 7.40 SAE 21.4 19.0 25.2 25.7 43.5 39.8 25.7 29.6 20.0 35.1 49.6 30.42 ADer 2.9 3.6 6.9 10.4 27.5 18.1 10.1 12.1 5.6 6.0 9.7 10.26 ReColorAdv 5.1 7.0 10.0 20.0 24.3 20.0 11.1 15.5 7.4 11.6 18.4 13.67 cAdv 12.2 14.0 17.7 11.1 33.9 32.9 19.9 23.2 14.6 16.2 25.3 20.09 tAdv 10.9 12.4 14.4 17.8 29.6 21.2 17.7 19.0 12.5 16.4 25.4 17.94 ACE 4.9 5.9 11.1 12.6 28.1 24.9 12.4 15.4 7.6 11.6 21.0 14.14 ColorFool 9.1 9.6 15.3 18.0 37.9 33.8 17.8 21.3 10.5 20.3 35.0 20.78 NCF 22.8 21.1 25.8 26.8 43.9 39.6 27.4 31.9 21.8 34.4 47.5 31.18 ACA (Ours) 52.2 53.6 53.9 59.7 63.4 63.7 59.8 62.2 53.6 55.6 60.8 58.05 setting. Specifically, when the surrogate model is RN-50, we surpass NCF by significant margins of 10.0%, 16.1%, 32.1%, and 32.5% in MobViT-s, ViT-B, Swin-B, and PVT-v2, respectively. There are two primary reasons for this phenomenon: i) our ACA utilizes a low-dimensional manifold search of natural images for adversarial examples, with the manifold itself determining the transferability of the adversarial examples, independent of the model\u2019s architecture; ii) the diffusion model incorporates the self-attention structure, exhibiting a certain degree of architectural similarity. Overall, the deformation-based attack (ADer) exhibits lower attack performance in both white-box and black-box settings. Texture-based attacks (tAdv) show better white-box attack performance, but are less transferable than existing color-based attacks (NCF and SAE). Our ACA leverages the low-dimensional manifold of natural images to adaptively combine image attributes and generate unrestricted adversarial examples, resulting in a significant outperformance of state-of-the-art methods by 13.3%\u223c50.4% on average. These results convincingly demonstrate the effectiveness of our method in fooling normally trained models. 4.3 Attacks on Adversarial Defense Methods The situation in that adversarial defense methods can effectively protect against current adversarial attacks exhibits the Dunning-Kruger effect [28]. Actually, such defense methods demonstrate efficacy in defending against adversarial examples within the lp norm, yet their robustness falters in the face of novel and unseen attacks [25]. Therefore, we investigate whether unrestricted attacks can break through existing defenses. Here, we choose input pre-process defenses (HGD [33], R&P [57], NIPS-r33, JPEG [17], Bit-Red [59], and DiffPure [39]) and adversarial training models (Inc-v3ens3, Inc-v3ens4, and Inc-v2ens [53]). 
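Both Table 1 and the defense results that follow report the attack success rate (ASR), which is simply the fraction of adversarial images each target model misclassifies, with Avg. ASR averaged over the non-surrogate models. A minimal sketch of that bookkeeping is given below; the model dictionary and data loader are placeholders, not the paper's evaluation harness.

import torch

@torch.no_grad()
def attack_success_rate(model, adv_loader, device="cuda"):
    # ASR (%) = percentage of adversarial examples the model misclassifies.
    model.eval().to(device)
    wrong, total = 0, 0
    for images, labels in adv_loader:   # pairs of adversarial image / true label
        preds = model(images.to(device)).argmax(dim=1)
        wrong += (preds != labels.to(device)).sum().item()
        total += labels.numel()
    return 100.0 * wrong / total

def average_black_box_asr(target_models, surrogate_name, adv_loader):
    # Avg. ASR excludes the white-box (surrogate) model, as in Table 1.
    rates = {name: attack_success_rate(m, adv_loader)
             for name, m in target_models.items()}
    black_box = [r for name, r in rates.items() if name != surrogate_name]
    return rates, sum(black_box) / len(black_box)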
Considering that some unrestricted attacks are carried out from the perspective of shape and texture, we also choose shape-texture debaised models (ResNet50-Debaised (Res-De) [32] and Shape-ResNet (Shape-Res) [14]). The results of black-box transferability in adversarial defense methods are demonstrated in Table 2. the surrogate model is ViT-B, and the target model is Inc-v3ens3 for input pre-process defenses. Our method displays persistent superiority over other advanced attacks by a significant margin. Our ACA surpasses NCF, SAE, ColorFool by 27.13%, 27.63%, and 37.27% on average ASR. In robust models, based on lp adversarial training and shape-texture debiased models are not particularly effective and can still be easily broken by unrestricted adversarial examples. Our approach can adaptively generate various combinations of adversarial examples based on the manifold, thus exhibiting high transferability to different defense methods. Additionally, Bit-Red and Diffpure reduce the groundtruth class\u2019s confidence and increase the adversarial examples\u2019 transferability. These findings further reveal the incompleteness and vulnerability of existing adversarial defense methods. 4.4 Visualization Quantitative Comparison. Following [47] and [60], we quantitatively assess the image quality using the non-reference perceptual image quality measure. Therefore, we choose NIMA [51], HyperIQA [49], MUSIQ [27], and TReS [15]. NIMA-AVA and MUSIQ-AVA are trained on AVA [38], and MUSIQ-KonIQ is trained on KonIQ-10K [21], following PyIQA [3]. As illustrated in Table 3, our ILM maintains the same image quality as clean images, and ACA achieves the best results in 3https://github.com/anlthms/nips-2017/tree/master/mmd 8 \fClean ILM SAE cAdv tAdv ColorFool NCF ACA (Ours) (a) Visualization of state-of-the-art unrestricted attacks (b) Adversarial examples of Adversarial Content Attack (ACA) (c) Case Study Figure 3: (a) Compared with other attacks, ACA generates the most natural adversarial examples; (b) ACA can generate images with various adversarial content, which can combine shape, texture, and color changes; (c) In some cases, ACA may slightly modify the semantic subject. image quality assessment. ColorFool obtains equal or higher image quality than the clean images because it requires manually selecting several human-sensitive semantic classes and adds uncontrolled perturbations on human-insensitive semantic classes. In other words, ColorFool is bound by human intuition, so do not deteriorate the perceived image quality (our results are similar to [47]). ACA even surpasses ColorFool, mainly because: i) Our adversarial examples are generated based on the low-dimensional manifold of natural images, which can adaptively combine the adversarial content and ensure photorealism; ii) Stable Diffusion itself is an extremely powerful generation model, which produces images with very high image quality; iii) These no-reference image metrics are often trained on aesthetic datasets, such as AVA [38] or KonIQ-10K [21]. Some of the images in these datasets are post-processed (such as Photoshop), which is more in line with human aesthetics. Because ACA adaptively generates adversarial examples on a low-dimensional manifold, this kind of minor image editing is similar to post-processing, which is more in line with human aesthetic perception and better image quality. Table 3: Image quality assessment. 
Attack NIMA -AVA\u2191HyperIQA\u2191MUSIQ -AVA\u2191 MUSIQ -KonIQ\u2191TReS\u2191 Clean 5.15 0.667 4.07 52.66 82.01 ILM 5.15 0.672 4.08 52.55 81.80 SAE 5.05 0.597 3.79 47.24 71.88 ADer 4.89 0.608 3.89 47.39 72.10 ReColorAdv 5.07 0.668 3.97 51.08 80.32 cAdv 4.97 0.623 3.87 48.32 75.12 tAdv 4.83 0.525 3.78 44.71 67.07 ACE 5.12 0.648 3.96 50.49 77.25 ColorFool 5.24 0.662 4.05 52.27 78.54 NCF 4.96 0.634 3.87 50.33 74.10 ACA (Ours) 5.54 0.691 4.37 56.08 85.11 Qualitative Comparison. We visualize unrestricted attacks of Top-5 black-box transferability, including SAE, cAdv, tAdv, ColorFool, and NCF. In Figure 3(a), we visualize adversarial examples generated by different unrestricted attacks. In night scenes and food, color and texture changes are easily perceptible, while our method still keeps image photorealism. Next, we give more adversarial examples generated by ACA. It is clearly observed that our method can adaptively combine content to generate adversarial examples, as shown in Figure 3(b). For example, the hot air balloon in the lower left corner modifies both the color of the sky and the texture of the hot air balloon. The strawberry in the lower right corner has some changes in shape and color while keeping the semantics unchanged. However, in some cases, the body of semantics changes, as shown in Figure 3(c). It may be because the prompts generated by BLIP v2 cannot describe the content of the image well. 4.5 Time Analysis In this section, we illustrate the attack speed of various unrestricted attacks. We choose MN-v2 [46] as the surrogate model and evaluate the inference time on an NVIDIA Tesla A100. Table 4 shows the 9 \fTable 4: Attack speed of unrestricted attacks. We choose MN-v2 as the surrogate model and evaluate the inference time on an NVIDIA Tesla A100. Attack SAE ADer ReColorAdv cAdv tAdv ACE ColorFool NCF ACA (Ours) Time (sec) 8.80 0.41 3.86 18.67 4.88 6.64 12.18 10.45 60.0+65.33=125.33 average time (in seconds) required to generate an adversarial example per image. ACA does have a significant time cost compared to other attacks. Further, we analyze the time cost and find that Image Latent Mapping (ILM) and Adversarial Latent Optimization (ALO) each accounted for 50% of the time cost. However, most of the time cost of ILM and ALO lies in the sampling process of the diffusion model. In this paper, our main contribution is to propose a new unrestricted attack paradigm. Therefore, we focus on the improvement of the attack framework, rather than the optimization of time cost. Since the time cost is mainly focused on the sampling of the diffusion model, we have noticed that many recent works have accelerated or distilled the diffusion model, which can greatly reduce the time of the sampling process. For example, [36] can reduce the total number of sampling steps by at least 20 times. If these acceleration technologies are applied to our ACA, ACA can theoretically achieve an attack speed of close to 6 seconds and we think this is a valuable optimization direction. 4.6 Ablation Study Figure 4: Ablation studies of momentum (MO) and differentiable boundary processing (DBP). The ablation studies of momentum (MO) and differentiable boundary processing (DBP) are shown in Figure 4 and the surrogate model is ViT-B [11]. Origin stands for ACA without MO and DBP. MO is Origin with momentum, and it can be observed that the adversarial transferability is significantly improved after the introduction of momentum. MO+DBP is Origin with momentum and DBP. 
Since DBP further optimizes the effectiveness of the adversarial examples and constrains the image within the range of values, it can still improve the adversarial transferability. Although the above strategies are not the main contribution of this paper, the above experiments illustrate that they can boost adversarial transferability. 5" + }, + { + "url": "http://arxiv.org/abs/2203.08519v1", + "title": "Towards Practical Certifiable Patch Defense with Vision Transformer", + "abstract": "Patch attacks, one of the most threatening forms of physical attack in\nadversarial examples, can lead networks to induce misclassification by\nmodifying pixels arbitrarily in a continuous region. Certifiable patch defense\ncan guarantee robustness that the classifier is not affected by patch attacks.\nExisting certifiable patch defenses sacrifice the clean accuracy of classifiers\nand only obtain a low certified accuracy on toy datasets. Furthermore, the\nclean and certified accuracy of these methods is still significantly lower than\nthe accuracy of normal classification networks, which limits their application\nin practice. To move towards a practical certifiable patch defense, we\nintroduce Vision Transformer (ViT) into the framework of Derandomized Smoothing\n(DS). Specifically, we propose a progressive smoothed image modeling task to\ntrain Vision Transformer, which can capture the more discriminable local\ncontext of an image while preserving the global semantic information. For\nefficient inference and deployment in the real world, we innovatively\nreconstruct the global self-attention structure of the original ViT into\nisolated band unit self-attention. On ImageNet, under 2% area patch attacks our\nmethod achieves 41.70% certified accuracy, a nearly 1-fold increase over the\nprevious best method (26.00%). Simultaneously, our method achieves 78.58% clean\naccuracy, which is quite close to the normal ResNet-101 accuracy. Extensive\nexperiments show that our method obtains state-of-the-art clean and certified\naccuracy with inferring efficiently on CIFAR-10 and ImageNet.", + "authors": "Zhaoyu Chen, Bo Li, Jianghe Xu, Shuang Wu, Shouhong Ding, Wenqiang Zhang", + "published": "2022-03-16", + "updated": "2022-03-16", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.CR" + ], + "main_content": "Introduction Despite achieving outstanding performance on various computer vision tasks [18, 19, 22\u201326, 28, 35, 43, 44], deep neural networks (DNNs) are vulnerable and susceptive to adversarial examples [11,14,15, 29], which are attached to a perturbation on images by adversaries [34]. Patch attack is one of the most threatening forms of adversarial examples, *indicates equal contributions. \u2020indicates corresponding author. which can modify pixels arbitrarily in a continuous region, and can implement physical attacks on autonomous systems via their perception component. For example, putting stickers on traffic signals can make the model misprediction [10]. While several practical patch defenses are proposed [12, 31], they only obtain robustness against known attacks but not against more powerful attacks that may be developed in the future [5,36]. Therefore, we focus on certifiable defense against patch attacks in this paper, which allows guaranteed robustness against all possible attacks for the given threat model. In recent years, this community has received great attention. For example, Chiang et al. 
propose the first certifiable defense against patch attacks by extending interval bound propagation (IBP) [5] on CIFAR10. Then later work introduces small receptive fields or randomized smoothing to improve certification on CIFAR10 and scale to ImageNet. However, the accuracy gap between certifiable patch defense and normal model limits the practical application of these defense methods. For example, PatchGuard [40] can achieve 84.7% clean accuracy and 57.7% certified accuracy under 4\u00d74 patches on CIFAR10. However, when PatchGuard [40] is extended to large-scale datasets, such as ImageNet, it can only obtain 54.6% clean accuracy and 26.0% certified accuracy under 2% patches, which is much lower than the normal ResNet-50 [13] (76.2%). Consequently, a breakthrough is needed urgently to narrow the gap and move towards practical certifiable patch defenses. Recently, transformer [37] has achieved significant success in speech recognition and natural language processing. Inspired by this, Vision Transformer (ViT) [9] has been proposed and obtain potential performance in computer vision, such as image classification [1,9], objects detection [4] and semantic segmentation [1]. ViT models the context between different patches and obtains long-range dependencies by self-attention. Compared with convolutional neural networks (CNNs), ViT has achieved promising performance, which has the potential to improve certification. Furthermore, Derandomize smoothing (DS) [21] is a classic certifiable patch defense based on randomized smoothing robustness schemes and provides high confident certified robustness by structured ablation. It can also be generalized to \fother network architectures. Therefore, integrating ViT into DS is a potential certifiable patch defense. However, direct replacing the CNN structure in DS with ViT leads to trivial results: (1) the accuracy is still lower than normal classification networks; (2) the excessive inference time limits the application of the method in practice. To address these issues and move towards practical certifiable patch defense, we propose an efficient certifiable patch defense with ViT to improve accuracy and inference efficiency. First, we introduce a progressive smoothed image modeling task to train ViT. Specifically, the training objective is to gradually recover the original image tokens based on the smoothed image bands. By gradually reconstructing, the base classifier can explicitly capture the local context of an image while preserving the global semantic information. Consequently, more discriminative local representations can be obtained through very limited image information (a thin smoothed image band), which improves the performance of the base classifier. Then, we renovate the global self-attention structure of the original ViT into the isolated band-unit self-attention. The input image is divided into bands and the self-attention in each band-like unit is calculated separately, which provides the feasibility for the parallel calculation of multiple bands. Finally, our method achieves 78.58% clean accuracy on ImageNet and 41.70% certified accuracy within efficient inference under 2% area patch attacks. The clean accuracy is quite close to the normal ResNet-101 accuracy. Extensive experiments demonstrate that our method obtains state-of-the-art clean and certified accuracy with inferring efficiently on CIFAR10 and ImageNet. 
Our major contributions are as follows: \u2022 We introduce ViT into certifiable patch defense and propose a progressive smoothed image modeling task, which lets the model capture more discriminable local context of an image while preserving the global semantic information. \u2022 We renovate the global self-attention structure of the ViT into the isolated band-unit self-attention, which considerably accelerates inference. \u2022 Experiments show that our method obtains state-ofthe-art clean and certified accuracy with inferring efficiently on CIFAR-10 and ImageNet. Additionally, our method achieves 78.58% clean accuracy on ImageNet and 41.70% certified accuracy in efficient inference under 2% area patch attacks. The clean accuracy is quite close to the normal ResNet-101 accuracy. 2. Related Work 2.1. Patch Attacks Patch attacks are one of the most threatening forms of physical attacks, in which adversaries can arbitrarily modify pixels within the small continuous region. GAP [3] first creates universal, robust, targeted adversarial image patches in the real world and causes a classifier to output any target class. Then LaVAN [17] shows that it is possible to learn visible and localized adversarial patches that cover only 2% of the pixels in the image and cause image classifiers to misclassify to arbitrary labels in the digital domain. Because patch attack can be implemented in the form of stickers in the physical world, it brings great harm to vision systems, such as object detection [16,39] and visual tracking [8]. 2.2. Certifiable Patch Defense Several practical patch defenses were proposed such as digital watermark [12] and local gradient smoothing [31]. However, Chiang et al. [5] demonstrate that these defenses can be easily broken by white-box attacks which account for the pre-processing steps in the optimization procedure. It means that practical patch defenses only obtain robustness against known attacks but not against more powerful attacks that may be developed in the future. Therefore, it is important to have guarantees of robustness in face of the worst-case adversarial patches. Recent work [27] focuses on certified defenses against patch attacks, which allow guaranteed robustness against all possible attacks for the given threat model. Chiang et al. [5] propose the first certifiable defense against patch attacks by extending interval bound propagation (IBP) on MNIST and CIFAR10. However, it is hard to scale to the ImageNet. Levine et al. propose Derandomized Smoothing (DS) [21], which trains a base classifier by smoothed images and a majority vote determines the final classification. This method provides significant accuracy improvement when compared to IBP on ImageNet but its inference is computationally expensive. Some work is based on using CNNs with the small receptive field, such as Clipped BagNet (CBN) [42], Patchguard [40] and BagCert [30]. 2.3. Vision Transformer Transformer [37] is the mainstream method in the natural language processing field, which captures long-range dependencies through self-attention and achieves state-of-theart performance. Vision Transformer (ViT) [9] is the first work to achieve comparable results with traditional CNN architectures constructed only by self-attention blocks. It divides the image into a sequence of fixed-size patches and models the context between different patches and obtains long-range dependencies by multi-head self-attention. 
The accuracy and certified robustness of the existing certifiable patch defenses are still not enough to be applied in practice. We introduce ViT into certifiable patch defense with the progressive smoothed image modeling task. With isolated band unit self-attention, our method achieves significant improvements in accuracy and inference efficiency, \f\u2026\u2026 Count Certified Class Margin (a) Band Smoothing (b) Derandomized Smoothing Base Classifier Band Smoothing Classification Result \u2026\u2026 Figure 1. Introduction of Derandomized Smoothing (DS). The red patch represents the adversarial patch and the blue band represents the retained image after the band smoothing in (a). (b) describes the pipeline of DS. First, DS smoothes the image in the band smoothing and obtains the smoothed images from different positions. Then the smoothed images are fed into the base classifier fc and we obtain the classification result by the threshold \u03b8. Finally, DS counts the result and applies Equation 2 to judge whether the image is certified. Algorithm 1 Progressive Smoothed Image Modeling Task Input: the image x, the label Y , the tokenizer Z, transformer encoder f, weighting factor \u03bb, MLP head mlp and the number of stages Ns Output: f, mlp 1: for i \u2208[1, Ns] do 2: Smooth images x and obtain smoothed images xs 3: Determine expected reconstructed images xe 4: Calculate visual tokens z with x via the tokenizer Z 5: Calculate the output representation HO with xe via the transformer encoder f 6: Calculate logits l with HO via the MLP head mlp 7: Select the reconstructed tokens HR and corresponding visual tokens ZR by Equation 3 and Equation 4 8: Calculate the loss L by Equation 5 or Equation 6 9: Update f and mlp through L backward 10: end for 11: return f, mlp which enables practical certifiable patch defense. 3. Method In this section, we first review the certified mechanism of DS. Second, we propose a progressive smoothed image modeling task to help ViT capture the more discriminable local context of an image while preserving the global semantic information. Finally, we propose the isolated band unit self-attention to accelerate inference and move towards practical certifiable patch defense. 3.1. Preliminaries Smoothing in derandomized smoothing means to keep a part of the continuous image and smooth other parts of the image. For example, band smoothing means smoothing the entire image except for a band of a fixed-width b, as shown in Figure 1 (a). DS trains the base classifier with smoothed images. For an input image x \u2208Rc\u00d7h\u00d7w, let the base classifier be expressed as fc(x, b, p, \u03b8), where x is the input images, b is the width of the band, p is the position of the retained band, \u03b8 is the threshold for voting and c is the class label. For each class c, fc(x, b, p, \u03b8) is 1 if its logits of the class c is greater than the threshold \u03b8, otherwise it is 0. To calculate certified robustness, DS counts the number of bands on which the base classifier is applied to each class. \\ foral l c ,\\q uad n _c (p )=\\sum _{p=1}^w f_c(x,b,p,\\theta ). (1) An image is certified only if the statistics of the highest class (e.g. the label c) are greater than a margin than the next highest class c\u2032. The shape of the adversarial patch is supposed to be m \u00d7 m. The number of intersections between the band and this patch is at most \u2206= m + b \u22121. 
Therefore, an image is certified when \u2206satisfies the following conditions: \\la b el {eq:c ertifi c ation} n_c(x)> \\max _{c' \\neq c} n_{c'}(x) + 2\\Delta . (2) When the threshold \u03b8 is determined, the highest class has been guaranteed not to be affected by the adversarial patch. Therefore, we define clean accuracy as the accuracy that the classification is correct after voting. Certified accuracy is the accuracy that the classification is correct and \u2206 satisfies Equation 2 after voting. 3.2. Progressive Smoothed Image Modeling Since the base classifier can only use very limited information (such as bands), we need the base classifier to have \fTransformer Encoder Smooth Original Image Smoothed Image Image Patches Expected Reconstructed Image Tokenizer [S] Flatten Patch Embedding \u2026\u2026 Visual Tokens Output Represention MLP Head Token Reconstruction Loss Classification Loss (a) Single Stage Training in Progressive Smoothed Image Modeling First Stage Second Stage Third Stage (b) Reconstruction Ratios in Progressive Smoothed Image Modeling Reconstructed Token Reconstructed Token Selection Figure 2. Introduction of Progressive Smoothed Image Modeling. (a) describes a single stage smooth training in progressive smoothed image modeling. We expect smoothed images to be reconstructed as expected reconstruction images. (b) describes reconstructed ratios in multi-stage training. The blue boxes represent the band after smoothing and the red boxes represent the expected reconstructed band. the ability to better capture discriminable features. Specifically, in natural language processing, the masked language modeling (MLM) training paradigm (like BERT [7]) has been proved to be effective in learning more discriminative features and improving the performance of the model. Inspired by MLM, we propose a smoothed image modeling task to train ViT. However, unlike languages which are human-created signals with dense semantic correlations between words, as natural signals, the visual content of different parts in an image has a high degree of freedom. Therefore, it is very difficult to use a band with a width of b (b \u226ah, w) to recover the full-scale image tokens in one stage. Hence, we use a multistage smoothed image modeling task called progressive smoothed image modeling to train the base classifier, as shown in Figure 2. By gradually reconstructing the smoothed image parts, the base classifier can explicitly capture the local context of an image while preserving the global semantic information. Consequently, more discriminative local representations can be obtained through very limited image information, which improves the performance of the base classifier. In ViT, an image is spilt into a sequence of patches as inputs. Formally, we need to flatten a image x \u2208Rc\u00d7h\u00d7w into (N = hw/p2) patches xp \u2208RN\u00d7p2c, where the shape of the image x is (h, w), the number of channels is c, and (p, p) is the shape of patches (e.g. p = 16). The patches {xp i }N i=1 are projected to obtain the patch embeddings {Exp i }N i=1, where E \u2208Rp2c\u00d7d and d is the embedding dimension. Like Bert [7], we concatenate class token E[s] to patch embeddings Exp i . Simultaneously, in order to encode position information, we need to add 1D learnable position embeddings Epos to patch embeddings Exp i . 
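Looking back at the vote count of Eq. (1) and the margin test of Eq. (2), the per-image certification check of derandomized smoothing can be sketched as follows. The column-band ablation (here with wrap-around) and the thresholding of softmax scores are assumptions of this sketch rather than details fixed by the text; theta = 0.2 is the value used later in the experimental setup.

import torch

@torch.no_grad()
def certify_band_smoothing(model, image, band_width, patch_size,
                           num_classes, theta=0.2):
    # image: (C, H, W); one forward pass per band position p = 0..w-1.
    c, h, w = image.shape
    votes = torch.zeros(num_classes)
    for p in range(w):
        ablated = torch.zeros_like(image)
        cols = [(p + j) % w for j in range(band_width)]   # retained band of width b
        ablated[:, :, cols] = image[:, :, cols]
        scores = model(ablated.unsqueeze(0)).softmax(dim=1)[0]
        votes += (scores > theta).float()                 # f_c(x, b, p, theta), Eq. (1)

    top2 = votes.topk(2)
    predicted = top2.indices[0].item()
    delta = patch_size + band_width - 1                   # max bands an m x m patch can touch
    certified = top2.values[0] > top2.values[1] + 2 * delta   # margin test of Eq. (2)
    return predicted, bool(certified)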
Then, the input vector HI = [E[s], Exp i , ..., Exp N] + Epos is fed to transformer and HO = [h[s], hi, ..., hN] is used as the output representation for the image patches with respect to x. Here, we first introduce single stage training in progressive smoothed image modeling, as illustrated in Figure 2 (a). BERT-based training has been explored in vision tasks. The difficulty is that it is non-trivial to recover tokens in computer vision. For accelerating convergence, we introduce a tokenizer as the supervision of reconstruction. There are two types of supervision for reconstruction: VAE and distillation. For VAE, we use pre-trained VAE [32] for su\fpervision. For distillation, we use the output of pre-trained ViT [9] for supervision. As given a smoothed image xs, we split it into N image patches {xs p i }N i=1 and obtain N visual tokens {zi}N i=1. Centered on the band of xs, we select the reconstructed band and generate the expected reconstructed image. According to the expected reconstructed image, we have a reconstructed token selection to obtain reconstructed tokens. The band in the expected reconstructed image produces a band mask {M b i }N i=1 corresponding to the patch xp i which needs constructing. Hence, the reconstructed tokens and corresponding visual tokens are rephrased as: H _R= \\ { h _ i : M ^b_i=1\\}^N_{i=1}. \\label {hr} (3) Z _R= \\ { z _ i : M ^b_i=1\\}^N_{i=1}. \\label {zr} (4) The objective of Smoothed Training is to simultaneously minimize the classification loss and token reconstruction loss. For VAE, the total loss can be expressed as: \\ min \\ u n d er b race {CE(l,\\ Y )}_{ \\m a thrm { cla s si f icati on\\ loss}} + \\ lamb da \\cdot \\underbrace {CE(Z_R,\\ H_R)}_{\\mathrm {token\\ reconstruction\\ loss}}. \\label {ce} (5) For distillation, the total loss can be expressed as: \\ min \\ u n d er b race {CE(l,\\ Y )}_{ \\m a thrm {clas s if i catio n\\ loss}} + \\l ambd a \\cdot \\underbrace {||Z_R-H_R||_2}_{\\mathrm {token\\ reconstruction\\ loss}}. \\label {l2} (6) Here, l is the output logits after passing MLP head and Y is the label of x. \u03bb = 1000 balances the gradients between token reconstruction loss and classification loss. Figure 2 (b) shows the reconstruction ratio varies within each stage. The blue boxes represent the band after smoothing and the red boxes represent the expected reconstructed band. In the first stage, we randomly smooth the approximately 40% image. The remaining 60% of the images are used to reconstruct the whole image. In the second stage, we smooth away 70% of the images and utilize the remaining 30% of the bands, to reconstruct 60% of the patches within a neighborhood centered on the 30% band, including the 30% band. In the last stage, only the band with width b is reserved, and all other parts are smoothed. The band with width b is used to reconstruct 30% of the patches in the neighborhood centered on the band. Our method greatly narrows the accuracy gap between DS and normal model by progressive smoothed image modeling, making it possible to achieve certifiable patch defense in practice. 3.3. Isolated Band Unit Self-attention To achieve practical certifiable patch defense, high accuracy needs to be combined with efficient inference. 
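The combined objective of Eqs. (5)-(6) above, a classification term plus a lambda-weighted token-reconstruction term over the band mask M^b of Eqs. (3)-(4), can be written compactly as in the sketch below. The encoder, MLP head, reconstruction head and tokenizer interfaces are assumed for illustration and do not reproduce the authors' code; lambda = 1000 follows the text.

import torch
import torch.nn.functional as F

def smoothed_modeling_loss(encoder, mlp_head, recon_head, tokenizer,
                           smoothed_image, original_image, label,
                           band_mask, lam=1000.0, vae_tokenizer=True):
    # smoothed_image, original_image: (B, C, H, W); label: (B,)
    # band_mask: (B, N) boolean, True for patches to reconstruct (M^b in Eqs. 3-4).
    cls_tok, patch_toks = encoder(smoothed_image)   # (B, d) class token, (B, N, d) patch tokens
    cls_loss = F.cross_entropy(mlp_head(cls_tok), label)

    h_r = recon_head(patch_toks[band_mask])         # predicted tokens H_R for masked patches
    with torch.no_grad():
        z_r = tokenizer(original_image)[band_mask]  # target visual tokens Z_R

    if vae_tokenizer:
        # Eq. (5): discrete codebook indices as targets, cross-entropy reconstruction.
        recon_loss = F.cross_entropy(h_r, z_r)
    else:
        # Eq. (6): continuous distilled features as targets, L2 reconstruction.
        recon_loss = F.mse_loss(h_r, z_r)

    return cls_loss + lam * recon_loss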
With progressive smoothed image modeling, DS improves the clean and certified accuracy, however, it requires hundreds Isolated Self-Attetion (b) Isolated Band Unit Self-Attention Single Forward Parallel Self-Attetion Single Forward Isolated Smoothing Normal Smoothing (a) Full Self-Attention Isolated Self-Attetion Isolated Self-Attetion Parallel Parallel Self-Attetion Self-Attetion Single Forward Single Forward Figure 3. Introduction of Isolated Band Unit Self-attention. (a) describes the normal training that smoothed parts are redundant and unnecessary to calculate. (b) introduces the isolated band unit self-attention that smoothed parts are dropped and self-attention is only calculated within parallel windows. of inferences for various smoothed images, which limits its application in practice. Smoothed parts of the smoothed image introduce redundant information and invalid calculation. As shown in Figure 3 (a), normal smoothing utilizes the whole smoothed image but its calculation is unnecessary for smoothed parts. The long-range dependencies of ViT use this redundant information, which harms their accuracy and introduces extra calculation cost. Furthermore, it is inefficient to calculate one forward calculation for every smoothed image. Therefore, we innovatively renovate the global selfattention structure of the original ViT into isolated band unit self-attention. Specifically, the input image is divided into bands by sliding windows, and the self-attention in each band-like unit is calculated separately, which provides the feasibility for the parallel calculation of multiple bands. As shown in Figure 3 (b), we choose patches of each band by parallel sliding windows and infer multiple bands within one forward calculation. In the isolated band unit selfattention, a window is a band and isolated self-attention is only calculated within the window. Compared to normal smoothing, ViT splits xs into hw/p2 patches and we only select the N = hb/p2 patches within windows for fine-tuning and inferring. The time complexity of the whole image input of self-attention operation is O \u0000N 2d + Nd2\u0001 , where the first term is the complexity of attention operation and another is the complexity of fully-connected operation. Compared with the input of the whole image, the isolated self-attention can reduce the \fTable 1. The parameters of models on ImageNet. Model BagNet33 ViT-s ResNet50 ResNext101 ViT-B Parameters 18M 22M 26M 88.79M 86M Table 2. Clean and certified accuracy compared with state-of-theart certifiable patch defenses on CIFAR10. Method Clean Accuracy (%) Certified Accuracy (%) 2 \u00d7 2 4 \u00d7 4 Baseline CBN 84.20 44.20 9.30 DS 83.90 68.90 56.20 PG 84.70 69.20 57.70 BagCert 86.00 73.33 64.90 Smooth Model ViT-S 80.40 61.50 51.78 ECViT-S 87.56 73.82 65.10 ResNext101 85.34 69.32 60.68 ViT-B 91.28 78.10 70.78 ECViT-B 93.48 82.80 76.38 Table 3. Clean and certified accuracy compared with state-of-theart certifiable patch defenses on ILSVRC2012. Method Clean Accuracy (%) Certified Accuracy (%) 1% pixels 2% pixels 3% pixels CBN 49.50 13.40 7.10 3.10 PG (1%) 55.10 32.30 PG (2%) 54.60 26.00 26.00 PG (3%) 54.10 19.70 19.70 19.70 DS 64.67 30.14 24.70 20.88 BagCert 46.00 23.00 ECViT-S(b=37) 69.88 35.03 29.74 25.74 ECViT-B(b=37) 78.58 47.39 41.70 37.26 amount of calculation to 1 w/b of the former. Furthermore, it has \u2308w b \u2309adjacent windows for inference at the same time, where the shape of windows is (h, b). 
Therefore, we change the original forward calculation from w times to b (w \u226bb) times. Therefore, it is possible to deploy a certifiable patch defense in real systems through efficient inference. 4. Experiments We conduct extensive experiments on CIFAR10 [20] and ImageNet [6]. First, we compare our method with the stateof-the-art certifiable patch defenses on both datasets. In addition, we conduct ablation studies to investigate the factors that affect clean and certified accuracy. 4.1. Experimental Setup In our experiments, we use Pytorch for the implementation and train on NVIDIA Tesla V100 GPUs. We Table 4. Clean and certified accuracy compared with smooth models on ILSVRC2012. Method Clean Accuracy (%) Certified Accuracy (%) Inference Time (s) 1% pixels 2% pixels 3% pixels ResNet50(b=19) 62.03 29.03 23.53 19.77 69.00 ResNet50(b=25) 64.67 30.14 24.70 20.88 ResNet50(b=37) 67.60 27.15 21.86 18.14 ViT-S(b=19) 63.88 33.08 27.78 23.84 87.13 ViT-S(b=25) 66.49 33.90 28.59 24.57 ViT-S(b=37) 69.01 33.37 28.07 24.15 ECViT-S(b=19) 64.69 34.38 28.85 24.74 9.66 ECViT-S(b=25) 67.14 35.57 30.06 25.98 13.20 ECViT-S(b=37) 69.88 35.03 29.74 25.74 23.54 ResNext101(b=19) 69.36 40.74 34.97 30.58 567.75 ResNext101(b=25) 71.96 41.86 36.03 32.17 ResNext101(b=37) 74.89 42.79 36.69 33.24 ViT-B(b=19) 66.92 34.65 28.71 24.80 136.75 ViT-B(b=25) 70.57 36.72 31.13 26.80 ViT-B(b=37) 74.68 37.61 31.88 27.44 ECViT-B(b=19) 73.49 46.83 40.72 36.29 16.63 ECViT-B(b=25) 75.30 46.56 40.79 36.21 22.72 ECViT-B(b=37) 78.58 47.39 41.70 37.26 40.50 choose different networks as base classifiers, such as ResNet50 [13], ResNext101-32x8d (ResNext101) [41], ViT-S/16-224 (ViT-S) and ViT-B/16-224 (ViT-B) [9]. In the smooth model, the methods directly apply the corresponding backbone into DS and are all fine-tuned from the ImageNet pre-trained model [38]. The parameters of models on ImageNet are shown in Table 1. We train the models in 120 epochs on ImageNet and 600 epochs on CIFAR10. The parameters of our Efficient Certifiable ViT (ECViT) is the same as the ViT. We train ECViT from the same ViT pre-trained models. For exmaple, ECViT-B is based on ViT-B and ECViT-S is based on ViT-S. For CIFAR10, we train ECViT 150 epochs for each stage during the progressive smoothed image modeling task and fine-tune 150 epochs in isolated band unit self-attention. For ImageNet, we train ECViT 30 epochs for each stage during the progressive smoothed image modeling task and fine-tune 30 epochs in isolated band unit self-attention. We report clean and certified accuracy compared ECViT with Interval Bound Propagation (IBP) [5], Derandomized Smoothing (DS) [21], Clipped BagNet (CBN) [42], Patchguard [40] (PG) and BagCert [30]. Here, DS is based on the band smoothing and Patchguard is based on the mask BagNet [2]. For each stage and fine-tuning in the progressive smoothed image modeling or smoothed models, we set the optimizer to AdamW, the loss function to be a crossentropy loss, the batch size to 512, the warm-up epoch to 5, the learning rate to be 2e-5, the threshold \u03b8 to 0.2, and the weight decay to be 1e-8. ECViT is trained on CIFAR10 and ImageNet for a total of 600 and 120 epochs, consistent with \fTable 5. Ablation study of training for different stages and tokenizers on ILSVRC2012 . 
Network Band Size Stages Clean Accuracy (%) Certified Accuracy (%) 1% 2% 3% Distllation b=19 one stage 72.01 43.49 37.64 33.31 two stage 72.78 44.87 39.01 34.63 three stage 72.93 45.78 40.03 35.61 b=25 one stage 74.48 44.14 38.39 33.87 two stage 75.09 45.63 39.91 35.54 three stage 75.12 46.63 40.93 36.49 b=37 one stage 78.05 44.89 39.24 34.60 two stage 77.98 45.54 39.94 35.45 three stage 78.05 46.46 40.84 36.39 VAE b=19 one stage 72.60 43.91 38.00 33.60 two stage 73.30 45.50 39.69 35.12 three stage 73.49 46.83 40.72 36.29 b=25 one stage 75.15 45.50 39.56 35.18 two stage 75.48 46.34 40.49 35.99 three stage 75.30 46.56 40.79 36.21 b=37 one stage 77.95 44.49 38.78 34.28 two stage 78.40 46.53 40.73 36.36 three stage 78.58 47.36 41.70 37.26 smoothed models. 4.2. Certification on CIFAR10 Following the setting of the previous work [21], we evaluate clean and certified accuracy on 5,000 images of the CIFAR10 validation set. We select two patch sizes, including 2 \u00d7 2 and 4 \u00d7 4. In the experiment, the images are all up-sampled from 32 to 224, and for the band smoothing, the band size b is fixed to 4 on the original size of 32. Table 2 shows clean and certified accuracy against patches of different sizes. Experiments show that ECViT can effectively improve smoothed ViT and achieve state-of-the-art accuracy. For ViT-S, under the same structure, ECViT-S improves clean accuracy by 7.16% and certified accuracy by \u223c13%. Our best ECViT-B still has 82.80% certified accuracy under 2 \u00d7 2 patches with 93.48% clean accuracy. 4.3. Certification on ImageNet We evaluate our proposed ECViT on ILSVRC2012 validation set [33]. ILSVRC2012 validation set has 50,000 images and we test clean accuracy and certified accuracy on the whole 50,000 images. Following the setting of previous work [21, 40], we select three different patch sizes, including 1% (23 \u00d7 23), 2% (32 \u00d7 32) and 3% (39 \u00d7 39). Table 3 shows clean and certified accuracy compared stateof-the-art methods on ILSVRC2012. ECViT-B surpasses BagCert\u2019s clean and certified accuracy by 7.48% and 9.47% and achieves state-of-the-art clean and certified accuracy in comparison to the previous state-of-the-art methods. Figure 4. Certified accuracy under different patch sizes on ILSVRC2012. Patch size represents the proportion of adversarial patch in the image. Table 4 shows clean and certified accuracy compared other smooth models on ILSVRC2012. Smooth models compared to other than ECViT are based on different backbones in DS. Inference time is calculated when the batch size of the image is 1024 to complete the vote in seconds. To simulate the inference in practice, inference time contains the time of the complete process, including data pre-processing and inference. In the case of band size b = 37, our ECViT-B improves the clean accuracy by 3.9% compared to ViT-B. The certified accuracy is improved by about 10% in different patch sizes, and it speeds up 4-8x times for ViT-B in the inference time. In the smoothing model, we achieve the fastest inference efficiency. Our method achieves state-of-the-art certification on ILSVRC2012 while maintaining 78.58% clean accuracy, which is very close to the normal ResNet-101. 4.4. Ablation Study In this section, we mainly focus on studying the effect of the number of stages, different patch sizes, and tokenizers on the clean and certified accuracy. Number of Stages. To verify the effectiveness of multistage progressive smoothed image modeling, we study the effect of different stages on accuracy. 
Specifically, taking two stage as an example, two stage represents the direct fine-tuning after the progressive smoothed image modeling of the first two stages. Among them, the total training epoch is the same. The meanings of one stage and three stage are similar with two stage. Table 5 shows the training of different stages and tokenizers on ILSVRC2012 validation set for ECViT-B. Table 6 reflects the different stages of ablation experiments on CIFAR10. Clean and certified accuracy basically increases with the increase of training stages. Experiments show that the progressive smoothed image mod\fFigure 5. Certified accuracy under different patch sizes on CIFAR10. Patch size represents the proportion of adversarial patch in the image. Table 6. Ablation study of training for different stages on CIFAR10. Clean and certified accuracy basically increases with the increase of training stages. Networks Band Size Stages Clean Accuracy (%) Certified Accuracy (%) 2 \u00d7 2 4 \u00d7 4 ECViT-S b=2 one stage 77.94 66.36 57.86 two stage 77.84 66.56 56.80 three stage 80.50 69.06 59.58 b=4 one stage 86.56 71.26 62.60 two stage 86.30 71.36 62.38 three stage 87.56 73.82 65.10 ECViT-B b=2 one stage 85.60 76.42 68.66 two stage 86.06 77.20 69.06 three stage 86.40 77.86 70.20 b=4 one stage 92.46 81.54 74.56 two stage 93.14 82.40 75.98 three stage 93.48 82.80 76.38 eling task allows the base classifier to explicitly capture the local context of an image while preserving the global semantic information. Consequently, more discriminative local representations can be obtained through a very limited image information (a smoothed thin image bands), which improves the performance of the base classifier. Patch Sizes. The size of adversarial patches will greatly affect the certification. Figure 4 and Figure 5 respectively reflect the certified accuracy variation on ImageNet and CIFAR10 for ECViT-B. The certified accuracy decreases as the patch size becomes larger. When the patch size reaches 10%, ECViT-B still has a certified accuracy of \u223c17.00% and \u223c39.00% on ImageNet and CIFAR10. Tokenizer. In order to verify that progressive smoothed image modeling is also effective for other tokenizers, we Table 7. Clean and certified accuracy compared with other tokenizers on ILSVRC2012. The VAE is better than the distilled. Band size Smooth Model Clean Accuracy (%) Certified Accuracy (%) 1% pixels 2% pixels 3% pixels b=19 ViT-B(b=19) 66.92 34.65 28.71 24.80 Ours(distilled) 72.93 45.78 40.03 35.61 Ours(vae) 73.49 46.83 40.72 36.29 b=25 ViT-B(b=25) 70.57 36.72 31.13 26.80 Ours(distilled) 75.12 46.63 40.93 36.49 Ours(vae) 75.30 46.56 40.79 36.21 b=37 ViT-B(b=37) 74.68 37.61 31.88 27.44 Ours(distilled) 78.05 46.46 40.84 36.39 Ours(vae) 78.58 47.39 41.70 37.26 conduct the following experiments on ECViT-B. Table 7 illustrates the comparison of different tokenizers and other network architectures. We can see that compared to ViT-B, the distilled tokenizer has a significant improvement, but it is still lower than the VAE tokenizer. This also verifies that our method can be adapted to different tokenizers. 5." + }, + { + "url": "http://arxiv.org/abs/0904.1452v1", + "title": "Flat-Spectrum Radio Quasars from SDSS DR3 Quasar Catalogue", + "abstract": "We constructed a sample of 185 Flat Spectrum Radio Quasars (FSRQs) by\ncross-correlating the Shen et al.'s SDSS DR3 X-ray quasar sample with FIRST and\nGB6 radio catalogues. 
From the spectrum energy distribution (SED) constructed\nusing multi-band (radio, UV, optical, Infrared and X-ray) data, we derived the\nsynchrotron peak frequency and peak luminosity. The black hole mass and the\nbroad line region (BLR) luminosity (then the bolometric luminosity) were\nobtained by measuring the line-width and strength of broad emission lines from\nSDSS spectra. We define a subsample of 118 FSRQs, of which the nonthermal jet\nemission is thought to be dominated over the thermal emission from accretion\ndisk and host galaxy. For this subsample, we found 25 FSRQs having synchrotron\npeak frequency > 10^{15} Hz, which is higher than the typical value for FSRQs.\nWhile only a weak anti-correlation is found between the synchrotron peak\nfrequency and peak luminosity, it becomes significant when combining with the\nWu et al.'s sample of 170 BL Lac objects. At similar peak frequency, the peak\nluminosity of FSRQs with $\\nupeak > 10^{15}$ Hz is systematically higher than\nthat of BL Lac objects, with some FSRQs out of the range covered by BL Lac\nobjects. Although high $\\nupeak$ are found in some FSRQs, they do not reach the\nextreme value of BL Lacs. For the subsample of 118 FSRQs, we found significant\ncorrelations between the peak luminosity and black hole mass, the Eddington\nratio, and the BLR luminosity, indicating that the jet physics may be tightly\nrelated with the accretion process.", + "authors": "Zhaoyu Chen, Minfeng Gu, Xinwu Cao", + "published": "2009-04-09", + "updated": "2009-04-09", + "primary_cat": "astro-ph.GA", + "cats": [ + "astro-ph.GA" + ], + "main_content": "INTRODUCTION Blazars, including BL Lac objects and \ufb02at-spectrum radio quasars (FSRQs), are the most extreme class of active galactic nuclei (AGNs), characterized by strong and rapid variability, high polarization, and apparent superluminal motion. These extreme properties are generally interpreted as a consequence of non-thermal emission from a relativistic jet oriented close to the line of sight. As such, they represent a fortuitous natural laboratory with which to study the physical properties of jets, and, ultimately, the mechanisms of energy extraction from the central supermassive black holes. \u22c6E-mail:zychen@shao.ac.cn \f2 Z. Y. Chen, M. F. Gu and X. Cao The most prominent characteristic of the overall spectral energy distribution (SED) of blazars is the double-peak structure with two broad spectral components. The \ufb01rst, lower frequency component is generally interpreted as being due to synchrotron emission, and the second, higher frequency one as being due to inverse Compton emission. BL Lac objects (BL Lacs) usually have no or only very weak emission lines, but have a strong highly variable and polarized non-thermal continuum emission ranging from radio to \u03b3-ray band, and their jets have synchrotron peak frequencies ranging from IR/optical to UV/soft-X-ray energies. Compared to BL Lacs, FSRQs have strong narrow and broad emission lines, however generally have low synchrotron peak frequency. According to the synchrotron peak frequency, BL Lac objects can be divided into three subclasses, i.e. low frequency peaked BL Lac objects (LBL), intermediate objects (IBL) and high frequency peaked BL Lac objects (HBL) (Padovani & Giommi 1995). In general, radio-selected BL Lacs tend to be LBLs, and XBLs are HBLs (Urry & Padovani 1995). Fossati et al. (1998) and Ghisellini et al. 
(1998) have proposed the well-known \u2018blazar sequence\u2019 , which plot various powers vs the synchrotron peak frequency \u03bdpeak including FSRQs and BL Lacs. They draw a conclusion that the peak frequency seem to be anti-correlated with the source power with the most powerful sources having the relatively small \u03bdpeak and the least powerful ones having the highest \u03bdpeak. Ghisellini et al. (1998) gave a theoretical interpretation to these anti-correlations, namely, the more powerful sources su\ufb00ered a larger probability of losing energy, the more cooling the sources subjected, thus translates into a lower value of \u03bdpeak. However, recently the blazar sequence has been largely in debates. By constructing a large sample of about 500 blazars from the Deep X-Ray Radio Blazar Survey (DXRBS) and the ROSAT All-Sky Survey-Green Bank Survey (RGB), Padovani et al. (2003) found that the \u201cX-ray-strong\u201d radio quasars, with similar SED to that of HBLs, have much higher synchrotron peak frequencies than those of classical FSRQs. Their DXRBS sample does not show the expected blazar sequence. Exceptions to blazar sequences were also found by Ant\u00b4 on & Browne (2005) that most of the low radio luminosity sources have synchrotron peaks at low frequencies, instead of the expected high frequencies in their blazar sample. They claimed that at least part of the systematic trend seen by Fossati et al. (1998) and Ghisellini et al. (1998) results from selection e\ufb00ects. Nieppola, Tornikoski & Valtaoja (2006) have studied a sample of over 300 BL Lacs objects, of which 22 objects have high \u03bdpeak > 1019 Hz. There are negative correlations between \u03bdpeak and the luminosity at 5 GHz, 37 GHz, and 5500\u02da A, however, the correlation turns to slightly positive in X-ray band. Moreover, they claimed that there is no signi\ufb01cant correlation between source luminosity at synchrotron peak frequency and \u03bdpeak, and several low energy peaked BL Lacs with low radio luminosity were also found. By using the results of very recent surveys, Padovani (2007) claimed that there is no anti-correlation between the radio power and synchrotron peak frequency in blazars once selection e\ufb00ects are properly taken into account, and some blazars were found to have low power as well as low \u03bdpeak, or high power and high \u03bdpeak as well. Furthermore, FSRQs with synchrotron peak frequency in the UV/X-ray band have been claimed (e.g. Giommi et al. 2007; Padovani et al. 2002). In summary, it seems that the blazar sequence in its simplest form cannot be valid (Padovani 2007). In this paper, we investigate the dependence of synchrotron peak frequency on the peak luminosity for a sample of FSRQs selected from SDSS DR3 quasar catalogue. The advantage of our sample is that the SDSS spectra enable us to measure the various broad emission lines, and then to estimate the black hole mass and the broad line region luminosity. Therefore, the relationship between the synchrotron peak frequency and black hole mass and Eddington ratio can be explored, which might have potential importance for us to study the jet formation and physics and the jet-disk relation. We present the sample selection in \u00a7 2. The reduction of SDSS spectra and the estimation of black hole mass and broad line region luminosity are described in \u00a7 3. 
The derivation of synchrotron peak frequency and luminosity are given in \u00a7 4, in which the thermal emission from accretion disc and host galaxy are also calculated. The various correlation analysis are shown in \u00a7 5, of which we focus on the relationship between the synchrotron peak frequency and luminosity. \u00a7 6 is dedicated to discussions. Finally, the summary is given in \u00a7 7. The cosmological parameters H0 = 70 km s\u22121 Mpc\u22121, \u2126m=0.3, \u2126\u039b = 0.7 are used throughout the paper, and the spectral index \u03b1 is de\ufb01ned as f\u03bd \u221d\u03bd\u2212\u03b1 with f\u03bd being the \ufb02ux density at frequency \u03bd. 2 THE SAMPLE SELECTION 2.1 The SDSS DR3 X-ray quasar sample We started from the SDSS DR3 X-ray quasar sample of Shen et al. (2006), which is the result of individual X-ray detections of SDSS DR3 quasar catalogue (Schneider et al. 2005) in the images of ROSAT All Sky Survey (RASS). The SDSS DR3 quasar catalogue consists of 46,420 objects with luminosities brighter than Mi = \u221222, with at least one emission line with full width at half-maximum (FWHM) larger than 1000 km s\u22121 and with highly reliable redshifts. A few unambiguous broad absorption line quasars are also included. The sky coverage of the sample is about 4188 deg2 and the redshifts range from 0.08 to 5.41. The \ufb01ve-band (u, g, r, i, z) magnitudes have typical errors of about 0.03 mag. The spectra cover the wavelength range from 3800 to 9200 \u02da A with a resolution of about 1800 2000 (see Schneider et al. 2005 for details). The soft X-ray properties of SDSS DR3 quasars have been investigated by Shen et al. (2006) through individual detection and stacking analysis. Shen et al. (2006) applied the upper-limit maximum likelihood method to detect the X-ray \ufb02ux at the position of each SDSS DR3 quasar and accept the objects with detection liklihood L > 7 as individual detections. The number of these individual X-ray detected \fFSRQs from SDSS DR3 Quasar Catalogue 3 quasars were 3366, which is about 25 percent higher than the RASS catalogue matches (see Shen et al. 2006, for details). The 1 keV X-ray luminosity in the source rest frame of this 3366 X-ray quasar sample is obtained by assuming a power-law distribution of X-ray photons N(E) \u221dE\u2212\u0393 with \u0393 \u223c2 and corrected for absorption using the \ufb01xed column density at the Galactic value according to Dickey & Lockman (1990) for each source (Shen et al. 2006). 2.2 Cross-correlation with FIRST and GB6 radio catalogues In this paper, we de\ufb01ne a quasar to be FSRQ according to the radio spectral index. Therefore, we cross-correlate the SDSS DR3 X-ray quasar sample with Faint Images of the Radio Sky at Twenty-Centimeters 1.4 GHz radio catalogue (FIRST, Becker ,White & Helfand 1995) and the Green Bank 6 cm radio survey at 4.85 GHz radio catalogue (GB6, Gregory et al. 1996), which are two of the largest radio surveys well matched with SDSS sky coverage. The FIRST survey used the VLA to observe the sky at 20 cm (1.4 GHz) with a beam size of 5. \u2032\u20324. FIRST was designed to cover the same region of the sky as the SDSS, and it observed 9000 deg2 at the north Galactic cap and a smaller \u223c2.5\u25e6wide strip along the celestial equator. It is 95% complete to 2 mJy and 80% complete to the survey limit of 1 mJy. The survey contains over 800,000 unique sources, with an astrometric uncertainty of \u22721 \u2032\u2032. 
Due to the deeper survey limit and higher resolution, we prefer to use FIRST, instead of NVSS, which was also carried out using the VLA at 1.4 GHz to survey the entire sky north of \u03b4 = \u221240\u25e6and contains over 1.8 million unique detections brighter than 2.5 mJy, however with lower spatial resolution 45 \u2032\u2032beam\u22121. The GB6 survey at 4.85 GHz was executed with the 91 m Green Bank telescope in 1986 November and 1987 October. Data from both epochs were assembled into a survey covering the 0\u25e6< \u03b4 < 75\u25e6sky down to a limiting \ufb02ux of 18 mJy, with 3.5 \u2032 resolution. GB6 contains over 75,000 sources, and has a positional uncertainty of about 10 \u2032\u2032 at the bright end and about 50 \u2032\u2032 for faint sources (Kimball & Ivezi\u00b4 c 2008). The sample of 3366 quasars was \ufb01rstly cross-correlated between the SDSS quasar positions and the FIRST catalogue within 2 arcsec (see e.g. Ivezi\u00b4 c et al. 2002; Lu et al. 2007), resulting in a sample of 516 quasars. These 516 quasars were further cross-correlated between the SDSS quasar positions and the GB6 catalogue within 1 arcmin (e.g. Kimball & Ivezi\u00b4 c 2008). This results in a sample of 212 quasars. The 187 quasars are thus de\ufb01ned as FSRQs conventionally with a spectral index between 1.4 and 4.85 GHz \u03b1 < 0.5. After excluding two FSRQs due to the weakness of emission lines, our \ufb01nal sample consists of 185 FSRQs. The source redshift ranges from \u223c0.1 to \u223c4.0, however only one source (SDSS J081009.9+384756.9, z = 3.95) has a redshift of z > 3.0. In 79 out of 185 source, the redshift is z < 0.8, enabling us to measure the broad H\u03b2 line, which is commonly used to estimate the black hole mass (e.g. Kaspi et al. 2000; Gu, Cao & Jiang 2001). Based on the identi\ufb01ed infrared counterparts labelled in SDSS DR3 catalogue, the NIR (J, H, Ks) data are archived from the Two Micron All Sky Survey (2MASS, Skrutskie et al. 2006) for 98 sources among 185 sources. These sources were consistently re-identi\ufb01ed through cross-correlating between 2MASS and SDSS within a matched radius of 2 arcsec, which is much larger than the sub-arcsec positional accuracy of the SDSS and the 2MASS surveys. Moreover, we collected the Farand near-UV magnitudes from the Galaxy Evolution Explorer (GALEX; Martin et al. 2005) Data Release 4 matched within 3 arcsec of SDSS positions for 117/185, and 146/185 sources, respectively. Our sample is listed in Table 1 and Table 2, of which: (1) Source SDSS name; (2) redshift; (3) black hole mass (\u00a7 3.2); (4) synchrotron peak luminosity (\u00a7 4.4); (5) broad line region luminosity (\u00a7 3.2); (6) bolometric luminosity (\u00a7 3.2); (7) synchrotron peak frequency (\u00a7 4.4); (8) (10) spectral indices between the rest-frame frequencies of 5 GHz, 5000 \u02da A and 1 keV (\u00a7 5.1). 3 PARAMETERS DERIVATION 3.1 Spectral analysis The spectra of quasars are characterized by a featureless continuum and various of broad and narrow emission lines (Vanden Berk et al. 2001). For our FSRQs sample, we ignore the host galaxy contribution to the spectrum, since only very little, if any, starlight is observed. In a \ufb01rst step, the SDSS spectra were corrected for the Galactic extinction using the reddening map of Schlegel, Finkbeiner & Davis (1998) and then shifted to their rest wavelength, adopting the redshift from the header of each SDSS spectrum. 
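The pre-processing just described (deredden, then shift to the rest frame) reduces to a short helper. The per-pixel extinction A_lambda is assumed to be computed separately from the Schlegel, Finkbeiner & Davis (1998) map and the adopted reddening curve, so it enters here as an input rather than being re-derived.

import numpy as np

def deredden_and_restframe(wave_obs, flux_obs, a_lambda, z):
    # a_lambda: Galactic extinction in magnitudes at each observed wavelength,
    #           taken from the adopted reddening map and extinction curve.
    flux_corr = flux_obs * 10.0 ** (0.4 * a_lambda)   # undo Galactic extinction
    wave_rest = wave_obs / (1.0 + z)                  # shift to the source rest frame
    return wave_rest, flux_corr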
In order to reliably measure line parameters, we choose those wavelength ranges as pseudo-continua, which are not a\ufb00ected by prominent emission lines, and then decompose the spectra into the following three components: 1. A power-law continuum to describe the emission from the active nucleus. The 15 line-free spectral regions were \ufb01rstly selected from SDSS spectra covering 1140\u02da A to 7180\u02da A for our sample, namely, 1140\u02da A \u2013 1150\u02da A, 1275\u02da A \u2013 1280\u02da A, 1320\u02da A \u2013 1330\u02da A, 1455\u02da A \u2013 1470\u02da A, 1690\u02da A \u2013 1700\u02da A, 2160\u02da A \u2013 2180\u02da A, 2225\u02da A \u2013 2250\u02da A, 3010\u02da A \u2013 3040\u02da A, 3240\u02da A \u2013 3270\u02da A, 3790\u02da A \u2013 3810\u02da A, 4210\u02da A \u2013 4230\u02da A, 5080\u02da A \u2013 5100\u02da A, 5600\u02da A \u2013 5630\u02da A, 5970\u02da A \u2013 6000\u02da A, 7160\u02da A \u2013 7180\u02da A (Vanden Berk et al. 2001; Forster et al. 2001). Depending on the source redshift, the spectrum of individual quasar only covers \ufb01ve to eight spectral regions, from which the initial power-law are obtained for each source. We found that a single power-law can not give a satis\ufb01ed \ufb01t for some low-redshift sources whose spectra cover H\u03b1 and H\u03b2 region. In these cases, a double power-law was adopted to obtain the initial power-law continuum (Vanden Berk et al. 2001) with the break point around 5100\u02da A. \f4 Z. Y. Chen, M. F. Gu and X. Cao 2. An Fe II template. The spectra of our sample covers UV and optical regions, therefore, we adopt the UV Fe II template from Vestergaard & Wilkes (2001), and optical one from V\u00b4 eron-Cetty et al. (2004). The Fe II template obtained by V\u00b4 eronCetty et al. (2004) covers the wavelengths between 3535 and 7534 \u02da A, extending farther to both the blue and red wavelength ranges than the Fe II template used in Boroson & Green (1992). This makes it more advantageous in modeling the Fe II emission in the SDSS spectra. For the sources with both UV Fe II and optical Fe II lines prominent in the spectra, we connect the UV and optical templates into one template covering the whole spectra (see also Hu et al. 2008). In the \ufb01tting, we assume that Fe II has the same pro\ufb01le as the relevant broad lines, i.e. the Fe II line width usually was \ufb01xed to the line width of broad H\u03b2 or Mg II or C IV, which in most cases gave a satis\ufb01ed \ufb01t. In some special cases, a free-varying line width was adopted in the \ufb01tting to get better \ufb01ts. 3. A Balmer continuum generated in the same way as Dietrich et al. (2002) (see also Hu et al. 2008). Grandi (1982) and Dietrich et al. (2002) proposed that a partially optically thick cloud with a uniform temperature could produce the Balmer continuum, which can be expressed as: F BaC \u03bb = FBEB\u03bb(Te)(1 \u2212e\u2212\u03c4\u03bb); (\u03bb < \u03bbBE) (1) where FBE is a normalized coe\ufb03cient for the \ufb02ux at the Balmer edge (\u03bbBE = 3646\u02da A), B\u03bb(Te) is the Planck function at an electron temperature Te, and \u03c4\u03bb is the optical depth at \u03bb and is expressed as: \u03c4\u03bb = \u03c4BE( \u03bb \u03bbBE ) (2) where \u03c4BE is the optical depth at the Balmer edge. There are two free parameters, FBE and \u03c4BE. Following Dietrich et al. (2002), we adopt the electron temperature to be Te = 15, 000 K. The modeling of above three components is performed by minimizing the \u03c72 in the \ufb01tting process. 
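A minimal numerical sketch of the Balmer-continuum component of Eqs. (1)–(2), with T_e = 15,000 K and lambda_BE = 3646 Å as adopted above; F_BE and tau_BE are the two free parameters varied in the chi-square fit. The Planck function is left in arbitrary overall units and the helper names are ours.

```python
import numpy as np
from scipy import constants as const

LAMBDA_BE = 3646.0  # Balmer edge in Angstrom
T_E = 15000.0       # electron temperature (Dietrich et al. 2002)

def planck_lambda(wave_aa, temp):
    """Planck function B_lambda(T) for wavelengths in Angstrom (arbitrary units)."""
    lam = wave_aa * 1e-10  # metres
    x = const.h * const.c / (lam * const.k * temp)
    return 2.0 * const.h * const.c**2 / lam**5 / np.expm1(x)

def balmer_continuum(wave_aa, f_be, tau_be):
    """Eq. (1): F_BE * B_lambda(T_e) * (1 - exp(-tau_lambda)) blueward of the edge,
    with tau_lambda = tau_BE * (lambda / lambda_BE) as written in Eq. (2)."""
    wave_aa = np.asarray(wave_aa, dtype=float)
    tau = tau_be * (wave_aa / LAMBDA_BE)
    flux = f_be * planck_lambda(wave_aa, T_E) * (1.0 - np.exp(-tau))
    return np.where(wave_aa < LAMBDA_BE, flux, 0.0)

# Example: evaluate the component on a rest-frame wavelength grid.
wave = np.linspace(2000.0, 4000.0, 200)
model = balmer_continuum(wave, f_be=1.0, tau_be=0.5)
```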
The \ufb01nal multicomponent \ufb01t is then subtracted from the observed spectrum. The examples of the \ufb01tted power-law, Fe II lines, Balmer continuum and the residual spectra for the sources with low, middle and high redshift are shown in Fig. 1. The Fe II \ufb01tting windows are selected as the regions with prominent Fe II line emission while no other strong emission lines, according to Vestergaard & Wilkes (2001) and Kim et al. (2006). The \ufb01tting window around the Balmer edge (3625 \u22123645\u02da A) is used to measure the contribution of Balmer continuum, which extends to Mg II line region. The Balmer continuum is not considered when 3625 \u22123645\u02da A is out of the spectrum. Therefore, the inclusion of Balmer continuum depends on the source redshift, which is illustrated in Fig. 1. While the Balmer continuum should be included in the \ufb01tting for middle redshift sources, it can be ignored for low and high redshift sources. The broad emission lines were measured from the continuum subtracted spectra. We mainly focused on several prominent emission lines, i.e. Ly \u03b1, H\u03b1, H\u03b2, Mg II, C IV. Generally, two gaussian components were adopted to \ufb01t each of these lines, indicating the broad and narrow line components, respectively. The blended narrow lines, e.g. [O III] \u03bb\u03bb4959, 5007\u02da A and [He II] \u03bb4686\u02da A blending with H\u03b2, and [S II] \u03bb\u03bb6716, 6730\u02da A, [N II] \u03bb\u03bb6548, 6583\u02da A and [O I]\u03bb6300\u02da A blending with H\u03b1, were included as one gaussian component for each line at the \ufb01xed line wavelength. The \u03c72 minimization method was used in \ufb01ts. The line width FWHM, line \ufb02ux of broad Ly \u03b1, H\u03b1, H\u03b2, Mg II and C IV lines were obtained from the \ufb01nal \ufb01ts for our sample. The examples of the \ufb01tting are shown in Fig. 1 for the sources in the di\ufb00erent redshift. 3.2 Mbh and LBLR There are various empirical relations between the radius of broad line region (BLR) and the continuum luminosity, which can be used to calculate the black hole mass in combination with the line width FWHM of broad emission lines. However, there are defects when using the continuum luminosity to estimate the BLR radius for blazars since the continuum \ufb02ux of blazars are usually doppler boosted due to the fact that the relativistic jet is oriented close to the line of sight. Alternatively, the broad line emission can be a good indicator of thermal emission from accretion process. Therefore for our FSRQs sample, we estimate the black hole mass by using the empirical relation based on the luminosity and FWHM of broad emission lines. According to the source redshift, we use various relations to estimate the black hole mass: Greene et al. (2005) relation for broad H\u03b1 line; Vestergaard et al. (2006) for broad H\u03b2 and Kong et al. (2006) for broad Mg II and C IV lines. 
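The FWHM and line fluxes that enter the mass estimators below are obtained from the two-component (broad plus narrow) Gaussian fits described in § 3.1. A minimal sketch, assuming scipy; the initial guesses and variable names are illustrative only and blended narrow lines are omitted for brevity.

```python
import numpy as np
from scipy.optimize import curve_fit

SIGMA_TO_FWHM = 2.0 * np.sqrt(2.0 * np.log(2.0))
C_KMS = 2.998e5  # speed of light in km/s

def gaussian(x, amp, cen, sigma):
    return amp * np.exp(-0.5 * ((x - cen) / sigma) ** 2)

def broad_plus_narrow(x, a_b, c_b, s_b, a_n, c_n, s_n):
    """Broad + narrow Gaussian components of a single emission line."""
    return gaussian(x, a_b, c_b, s_b) + gaussian(x, a_n, c_n, s_n)

def fit_line(wave, flux, line_center):
    """Chi-square fit of the two-component model to a continuum-subtracted line;
    returns the broad-component FWHM (km/s) and integrated broad-line flux."""
    p0 = [flux.max(), line_center, 30.0, 0.5 * flux.max(), line_center, 5.0]
    popt, _ = curve_fit(broad_plus_narrow, wave, flux, p0=p0)
    a_b, c_b, s_b = popt[:3]
    fwhm_kms = SIGMA_TO_FWHM * abs(s_b) / c_b * C_KMS        # Angstrom -> km/s
    flux_broad = a_b * abs(s_b) * np.sqrt(2.0 * np.pi)       # area of the broad Gaussian
    return fwhm_kms, flux_broad
```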
Greene & Ho (2005) provided a formula to estimate the black hole mass using the line width and luminosity of broad H\u03b1 alone, which is expressed as MBH(H\u03b1) = (2.0+0.4 \u22120.3) \u00d7 106\u201c L(H\u03b1) 1042 erg s\u22121 \u201d0.55\u00b10.02 \u201cFWHM(H\u03b1) 103 km s\u22121 \u201d2.06\u00b10.06 M\u2299 (3) For the sources with available FWHM and luminosity of broad H\u03b2, the method to calculate MBH is given by Vestergaard & Peterson (2006): MBH(H\u03b2) = 4.68 \u00d7 106 \u201e L(H\u03b2) 1042 erg s\u22121 \u00ab0.63 \u201eFWHM(H\u03b2) 1000 km s\u22121 \u00ab2 M\u2299 (4) In addition, Kong et al. (2006) presented the empirical formula to obtain the black hole mass using broad Mg II and \fFSRQs from SDSS DR3 Quasar Catalogue 5 C IV for high redshift sources as follows, MBH(Mg II) = 2.9 \u00d7 106 \u201e L(Mg II) 1042 erg s\u22121 \u00ab0.57\u00b10.12 \u201eFWHM(Mg II) 1000 km s\u22121 \u00ab2 M\u2299, (5) MBH(C IV) = 4.6 \u00d7 105 \u201e L(C IV) 1042 erg s\u22121 \u00ab0.60\u00b10.16 \u201eFWHM(C IV) 1000 km s\u22121 \u00ab2 M\u2299 (6) In the redshift range of our sample, MBH can be estimated using two of above relations for 122 out of 185 FSRQs. In most sources (113/122), two MBH values are consistent with each other within a factor of three. For low-redshift sources, we \ufb01rst selected the black hole mass estimated from broad H\u03b2 line, which is commonly used to estimate the black hole mass for low-redshift sources (e.g. Kaspi et al. 2000; Gu, Cao & Jiang 2001). In the case that the spectral quality or the spectral \ufb01tting of H\u03b2 region is poor, we instead use broad H\u03b1 line to estimate the black hole mass. Moreover, we adopted the average value of MBH (Mg II) and MBH (C IV) when both values are available for one source. In this work, the BLR luminosity LBLR is derived following Celotti, Padovani & Ghisellini (1997) by scaling the strong broad emission lines Ly \u03b1, H\u03b1, H\u03b2, Mg II and C IV to the quasar template spectrum of Francis et al. (1991), in which Ly\u03b1 is used as a reference of 100. By adding the contribution of H\u03b1 with a value of 77, the total relative BLR \ufb02ux is 555.77, of which Ly\u03b1 is 100, H\u03b1 77, H\u03b2 22, Mg II 34, and C IV 63 (Celotti, Padovani & Ghisellini 1997; Francis et al. 1991). From the BLR luminosity, we estimate the bolometric luminosity as Lbol = 10LBLR (Netzer 1990). 4 THE THERMAL EMISSION In addition to the relativistically beamed, non-thermal jet emission, the thermal emission from the accretion disk and the host galaxy are expected to be present in radio quasars. In some cases, the thermal emission can be dominated over the nonthermal jet emission (e.g. Landt et al. 2008). We estimated the contribution of thermal emission in SEDs as follows (see also Landt et al. 2008), and our sample is thus re\ufb01ned to include only the sources with SEDs dominated by the nonthermal jet emission. 4.1 The accretion disk Following D\u2019Elia, Padovani & Landt (2003) and Landt et al. (2008), we calculated accretion disk spectra assuming a steady geometrically thin, optically thick accretion disk. In this case the emitted \ufb02ux is independent of viscosity, and each element of the disk face radiates roughly as a blackbody with a characteristic temperature, which depends only on the mass of the black hole, MBH, the accretion rate, \u02d9 M, and the radius of the innermost stable orbit (e.g., Peterson 1997; Frank et al. 2002). 
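Before specifying the disk model further (continued below), the single-epoch estimators of § 3.2 can be collected in compact form: Eqs. (3)–(6) for M_BH, the BLR scaling to the Francis et al. (1991) template (total relative flux 555.77 with Lyα = 100), and L_bol = 10 L_BLR. The sketch below is ours; the dictionary and function names are illustrative and error terms on the coefficients are dropped.

```python
import numpy as np

# (coefficient, luminosity exponent, FWHM exponent) for Eqs. (3)-(6);
# line luminosity in units of 1e42 erg/s, FWHM in units of 1000 km/s, M_BH in M_sun.
MBH_RELATIONS = {
    "Halpha": (2.0e6, 0.55, 2.06),   # Greene & Ho (2005)
    "Hbeta":  (4.68e6, 0.63, 2.00),  # Vestergaard & Peterson (2006)
    "MgII":   (2.9e6, 0.57, 2.00),   # Kong et al. (2006)
    "CIV":    (4.6e5, 0.60, 2.00),   # Kong et al. (2006)
}

# Relative line strengths of the Francis et al. (1991) template (Lyalpha = 100).
BLR_WEIGHTS = {"Lyalpha": 100.0, "Halpha": 77.0, "Hbeta": 22.0, "MgII": 34.0, "CIV": 63.0}
BLR_TOTAL = 555.77

def black_hole_mass(line, l_line_erg_s, fwhm_kms):
    """M_BH in solar masses from the broad-line luminosity (erg/s) and FWHM (km/s)."""
    a, b, c = MBH_RELATIONS[line]
    return a * (l_line_erg_s / 1e42) ** b * (fwhm_kms / 1e3) ** c

def blr_luminosity(line_lums):
    """L_BLR by scaling the measured broad lines to the template,
    e.g. blr_luminosity({'Hbeta': 1e43, 'MgII': 2e43})."""
    used = sum(BLR_WEIGHTS[k] for k in line_lums)
    return BLR_TOTAL / used * sum(line_lums.values())

def bolometric_luminosity(l_blr):
    return 10.0 * l_blr  # Netzer (1990)
```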
We have adopted the Schwarzschild geometry (nonrotating black hole), and for this the innermost stable orbit is at rin = 6rg, where rg is the gravitational radius de\ufb01ned as rg = GMBH/c2, G is the gravitational constant, and c is the speed of light. Furthermore, we have assumed that the disk is viewed face-on. The accretion disk spectrum is fully constrained by the two quantities, accretion rate and mass of the black hole. We have calculated the accretion rate using the relations Lbol = \u01eb \u02d9 Mc2, where \u01eb is the e\ufb03ciency for converting matter to energy, with \u01eb \u223c6% in the case of a Schwarzschild black hole. The bolometric luminosity is estimated as Lbol = f \u22121LBLR with f the BLR covering factor, which is not well known, and can be in the range of \u223c5% \u221230% (Maiolino et al. 2001 and references therein). As in Section 3.2, we adopt a canonical value of f \u223c10% (Peterson 1997). The contribution of accretion disk thermal emission are estimated by calculating the fraction of the thermal emission to the SED data at SDSS optical and GALEX UV region. Tentatively, we simply use a marginal value of 50% at most of SED wavebands to divide the FSRQs into thermal-dominated (> 50%) and nonthermal-dominated (< 50%), and found that the thermal emission can be dominant in 100/185, 35/185, 2/185 FSRQs for f = 5%, 10%, and 30%, respectively. 4.2 The host galaxy Usually, the host galaxies of radio quasars are bright ellipticals, and their luminosity only spread in a relatively narrow range (e.g. McLure et al. 2004). We estimated the contribution from host galaxy thermal emission using the elliptical galaxy template of Mannucci et al. (2001), which extends from near-IR to UV frequencies (see also Landt et al. 2008). Slightly di\ufb00erent from Landt et al. (2008), we use the bulge absolute luminosity in R-band estimated from MBH \u2212MR relation of McLure et al. (2004) in combination with the estimted black hole mass in Section 3.2, log MBH/M\u2299= \u22120.50(\u00b10.02)MR \u22122.74(\u00b10.48) (7) The contribution of host galaxy thermal emission are estimated by comparing the calculated thermal emission with the SED data at SDSS optical, GALEX UV and 2MASS NIR region. The value of 50% is used to distinguish thermal-dominated with thermal emission > 50% and non-thermal dominated with thermal emission < 50%. We found that the thermal emission can be dominant in about 16 of 185 FSRQs. \f6 Z. Y. Chen, M. F. Gu and X. Cao 4.3 Sample re\ufb01nement Although the thermal emission may dominate in only small fraction of sources, i.e. accretion disk thermal emission in 35 sources, and host galaxies emission in 16 sources, we combine the contribution of accretion disk and host galaxy to maximize the source number, of which the non-thermal jet emission is not dominated in SED. We simply add the expected thermal emission from accretion disk (using a canonical value f = 10%) and host galaxy, then compare with the SED data at SDSS optical, GALEX UV and 2MASS NIR region. Finally, we found the combined thermal emission can dominate over (> 50%) nonthermal jet emission in about 67 sources, which are then called thermal-dominated FSRQs in this work and listed in Table 2. The remaining 118 sources are recognized as nonthermal jet-dominated FSRQs in this work (see Table 1). 
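A minimal sketch of the quantities entering this refinement: the accretion rate from L_bol = epsilon * Mdot * c^2 with epsilon ≈ 6% and L_bol = L_BLR / f, the innermost stable orbit r_in = 6 r_g of the Schwarzschild case, the R-band bulge magnitude inverted from Eq. (7), and the 50% dominance criterion. Summarizing the criterion by a median fraction over the compared wavebands is our simplification, and all names below are ours.

```python
import numpy as np
from scipy import constants as const

M_SUN_KG = 1.989e30
EPSILON = 0.06   # radiative efficiency of a Schwarzschild black hole
F_BLR = 0.10     # canonical BLR covering factor, Lbol = LBLR / f

def accretion_rate(l_blr_erg_s, f=F_BLR, eps=EPSILON):
    """Mdot in kg/s from the BLR luminosity via Lbol = LBLR / f = eps * Mdot * c^2."""
    l_bol_w = (l_blr_erg_s / f) * 1e-7          # erg/s -> W
    return l_bol_w / (eps * const.c**2)

def inner_radius(m_bh_msun):
    """Innermost stable orbit r_in = 6 G M / c^2 (metres) for a non-rotating hole."""
    return 6.0 * const.G * m_bh_msun * M_SUN_KG / const.c**2

def bulge_abs_mag_r(m_bh_msun):
    """Invert Eq. (7), log(M_BH/M_sun) = -0.50 M_R - 2.74, to obtain M_R."""
    return -(np.log10(m_bh_msun) + 2.74) / 0.50

def thermal_dominated(thermal_nu_l_nu, observed_nu_l_nu):
    """True if the modelled thermal emission exceeds 50% of the SED data
    (summarized here by the median fraction over the compared wavebands)."""
    frac = np.median(np.asarray(thermal_nu_l_nu) / np.asarray(observed_nu_l_nu))
    return frac > 0.5
```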
4.4 \u03bdpeak and \u03bdL\u03bdpeak The SED of each quasar was constructed from multi-band data, which covers radio (1.4 and 4.85 GHz), optical (5 8 line-free windows selected from SDSS spectra), and X-ray (1keV) data. The optical continuum were picked out with \ufb01ve to eight line-free regions (Forster et al. 2001; Vanden Berk et al. 2001) from the SDSS spectra in the source rest frame, of which no or only very weak Fe lines are present (see Section 3.1). The radio \ufb02ux at 1.4 and 4.85 GHz are K-corrected to the source rest frame using the spectral index between these two frequencies. The X-ray 1 keV luminosity are obtained after K-correction and correcting the Galactic extinction (Shen et al. 2006). The IR (J, H, and Ks) data collected from 2MASS (Skrutskie et al. 2006) were also added to construct SEDs, which are available for 98 FSRQs. Moreover, the Far(for 117 sources) and near-UV (for 146 sources) data are also added in constructing SEDs, after correcting the Galactic extinction and K-correction using a spectral index of 0.5. Considering the possibility that the X-ray emission of FSRQs can be from the inverse Compton process, we \ufb01tted the data points for each source (in a \u03bd versus \u03bdL\u03bd diagram) with a third-degree polynomial following Fossati et al. (1998), which yields an upturn allowing for X-ray data-points that do not lie on the direct extrapolation from the lower energy spectrum (see examples in Fig. 2). Through \ufb01tting, we obtained the synchrotron peak frequency and the corresponding peak luminosity for each source. In following analysis, we will present the analysis only for the non-thermal jet-dominated FSRQs (see Table 1). 5 RESULTS 5.1 \u03b1ro \u2013 \u03b1ox plane The broadband properties of our sources can be \ufb01rstly studied by deriving their \u03b1ox, \u03b1ro, and \u03b1rx values, which are the usual rest-frame e\ufb00ective spectral indices de\ufb01ned between 5 GHz, 5000 \u02da A, and 1 keV. The 5000 \u02da A optical continuum \ufb02ux density is derived (or extrapolated when 5000 \u02da A is out of the SDSS spectral region) from the direct power-law \ufb01t on the 5 8 selected line-free SDSS spectral windows in the source rest frame (see Section 3.1). The 5 GHz \ufb02ux density have been k-corrected using the spectral index between FIRST 1.4 GHz and GB6 4.85 GHz. In Fig. 3, we present \u03b1ro \u03b1ox relation for the sample. Following Padovani et al. (2003), three lines are indicated: \u03b1rx = 0.85, typical of 1 Jy FSRQs and LBLs; \u03b1rx = 0.78, the dividing line between HBLs and LBLs; and \u03b1rx = 0.70, typical of RGB BL Lac objects. Moreover, the \u2018HBL\u2019 and \u2018LBL\u2019 boxes de\ufb01ned in Padovani et al. (2003) are also indicated in the \ufb01gure, which represent the regions within 2\u03c3 from the mean \u03b1ro, \u03b1ox, and \u03b1rx values of HBLs and LBLs in the multifrequency AGN catalog of Padovani et al. (1997), respectively (see Fig. 1 in Padovani et al. 2003). This \u201cHBL box\u201d is expected to be populated by high-energy peaked blazars, both BL Lacs and FSRQs. Among total 118 FSRQs, 48 sources have \u03b1rx < 0.78, of which 28 sources locate in HBL box. In contrast, 59 sources among 70 \u03b1rx > 0.78 sources are in LBL box. 5.2 \u03bdpeak and \u03bdL\u03bdpeak The relation of the synchrotron peak frequency and peak luminosity is presented in Fig. 4. 
We found only a weak anticorrelation between the synchrotron peak frequency \u03bdpeak and the peak luminosity \u03bdL\u03bdpeak with the Spearman correlation coe\ufb03cient r = \u22120.161 at \u223c92% con\ufb01dence level. The \u03bdpeak distribution ranges 1012.4 and 1016.3 Hz for whole sample, and between 1013 and 1015.5 Hz for most (111/118) of sources, with \u27e8log \u03bdpeak\u27e9= 14.41 \u00b1 0.74 Hz for whole sample. We found \u03bdpeak > 1015 Hz in 25 sources (see Fig. 4), which is larger than the typical value of FSRQs (Fossati et al. 1998). As outliers to blazar sequence, the blue quasars are supposed to have large \u03bdpeak and high \u03bdL\u03bdpeak as well (see e.g. Padovani et al. 2003). To further understand the nature of these high \u03bdpeak FSRQs, we combine our sample with the sample of 170 BL Lac objects in Wu, Gu & Jiang (2008). A signi\ufb01cant anti-correlation between \u03bdpeak and \u03bdL\u03bdpeak is present with the Spearman correlation coe\ufb03cient r = \u22120.343 at \u226b99.99% con\ufb01dence level for the combined sample of 288 blazars, which is shown in Fig. 5. This anti-correlation covers about seven order of magnitude in \u03bdpeak for the combined sample. Our FSRQs with high \u03bdpeak > 1015 Hz have systematically higher peak luminosity than that of BL Lacs, with \u27e8log \u03bdL\u03bdpeak\u27e9= 46.41 \u00b1 0.94 \fFSRQs from SDSS DR3 Quasar Catalogue 7 compared to 44.90 \u00b1 0.71 for BL Lacs at same \u03bdpeak range, and the peak luminosity of some FSRQs are out of the range covered by BL Lacs. However, we found that the thermal emission from accretion disk can be dominated over nonthermal jet emission in 11 of 25 sources with \u03bdpeak > 1015 Hz if a BLR covering factor of 5% is assumed. Therefore, the possibility that the contribution from the thermal emission causes the high synchrotron peak frequency can not be completely excluded. Although high \u03bdpeak are found in our FSRQs, they do not reach the extreme value of HBLs, which is also found in DXRBS sample when comparing high \u03bdpeak FSRQs with BL Lacs (Padovani et al. 2003). At the lower-left corner of Fig. 5, there are some FSRQs with relatively low synchrotron peak frequency as well as low luminosity, comparable to some low luminosity LBLs. While only a weak anti-correlation present between \u03bdpeak and \u03bdL\u03bdpeak for Wu, Gu & Jiang (2008) BL Lacs sample, it becomes signi\ufb01cant when combining with our sample. To some extent, this re\ufb02ects the importance of sample selection in investigating the blazar sequence (e.g. Padovani 2007). Although the anti-correlation is signi\ufb01cant in Fig. 5, we notice that the scatter is signi\ufb01cant, mainly caused by the sources in the lower-left corner. However, the extreme sources with high \u03bdpeak as well as high peak luminosity (i.e. at upper-right corner) are still lacking. On the other hand, the large scatter may imply that other parameters may be at work, and the scatter can be much lower once this parameter is included. We investigate the relationship between \u03b1rx and \u03bdpeak for our sample in Fig. 6. A signi\ufb01cant anti-correlation is found with a Spearman correlation coe\ufb03cient r = \u22120.443 at con\ufb01dence level \u226b99.99%. However, the considerable scatter is also present with the scatter in \u03bdpeak more than one order of magnitude for any given value of \u03b1rx. 
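The nu_peak and nu L_nu,peak values entering these correlations come from the third-degree polynomial fits of § 4.4, and the quoted correlation strengths are Spearman rank coefficients. A minimal sketch of both steps, assuming numpy/scipy; locating the peak on a grid restricted to frequencies below the X-ray band is our simplification of the procedure.

```python
import numpy as np
from scipy.stats import spearmanr

def synchrotron_peak(log_nu, log_nu_l_nu, deg=3):
    """Fit log(nu L_nu) vs log(nu) with a third-degree polynomial (Fossati et al. 1998)
    and return (log nu_peak, log nu L_nu at the peak). The search is limited to
    <~ 1e17 Hz so that an inverse-Compton upturn does not bias the synchrotron peak."""
    coeffs = np.polyfit(log_nu, log_nu_l_nu, deg)
    grid = np.linspace(np.min(log_nu), 17.0, 2000)
    model = np.polyval(coeffs, grid)
    i = np.argmax(model)
    return grid[i], model[i]

# Rank correlation between peak frequency and peak luminosity over the sample:
# rho, p_value = spearmanr(log_nu_peak_all, log_peak_lum_all)
```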
The mean \u03bdpeak of FSRQs in HBL box is only slightly larger than that of sources out of the HBL box with a factor of four. Consequently, the FSRQs in HBL box do not exclusively have high \u03bdpeak. The \u03bdpeak of souces in HBL box are comparable to those with \u03b1rx < 0.78 but out of HBL box. This actually can be re\ufb02ected from the relationship between \u03b1ro and \u03bdpeak shown in Fig. 7. The signi\ufb01cant anti-correlation is found at a con\ufb01dence level of \u223c99.9%, however the scatter is signi\ufb01cant. At any given value of \u03b1ro, the source could have a low \u03bdpeak or a high \u03bdpeak. As a result, the sources out of HBL box with \u03b1rx < 0.78 are not necessary to have lower or higher \u03bdpeak than that of sources in HBL box, though the latter show a wider \u03b1ro range. Consistent with the anti-correlation in Fig. 6, the mean \u03bdpeak value of FSRQs with \u03b1rx < 0.78 is larger with a factor of four than that of \u03b1rx > 0.78 sources. Owing to the large scatters in both \u03b1rx \u2212\u03bdpeak and \u03b1ro \u2212\u03bdpeak relations, it seems that neither solely \u03b1rx nor \u03b1rx \u03b1rx combination can precisely predict the \u03bdpeak value for our present FSRQs sample. 5.3 (\u03bdpeak, \u03bdL\u03bdpeak) & (Mbh, Lbol/LEdd) The estimation of black hole mass and BLR luminosity (then the corresponding bolometric luminosity) enable us to investigate the relationship between the synchrotron emission and the accretion process. In Figs. 8 and 9, we show the relation of \u03bdpeak & Mbh, and \u03bdpeak & Lbol/LEdd, respectively. The black hole mass ranges from 107.4 to 1010.5 M\u2299with most of sources (\u223c86%; 102 out of 118 sources) in the range of 108.5 \u22121010 M\u2299, while the eddington ratio Lbol/LEdd ranges from \u223c10\u22122 to \u223c100.6, with most of source (\u223c94%; 111 out of 118 sources) in the range of 0.01 to 1. No strong correlations are found either between \u03bdpeak and Mbh, or between \u03bdpeak & Lbol/LEdd. We found a signi\ufb01cant correlation between the black hole mass MBH and the synchrotron peak luminosity \u03bdL\u03bdpeak with the Spearman correlation coe\ufb03cient r = 0.724 at \u223c99.99% con\ufb01dence level, which is shown in Fig. 10. Moreover, a signi\ufb01cant correlation between \u03bdL\u03bdpeak and Lbol/LEdd is also found with the Spearman correlation coe\ufb03cient r = 0.842 at \u226b99.99% con\ufb01dence level (see Fig. 11). Both correlations still present when we perform the Spearman partial correlation analysis to exclude the common dependence on redshift. The ordinary least-square (OLS) bisector linear \ufb01t to \u03bdL\u03bdpeak and Lbol/LEdd gives, log \u03bdL\u03bdpeak = (1.73 \u00b1 0.11) log Lbol/LEdd + (47.92 \u00b1 0.11) (8) This result indicates that the jet physics may be tightly related with the accretion process. We plot the ratio of BLR luminosity to synchrotron peak luminosity LBLR/\u03bdL\u03bdpeak and synchrotron peak frequency \u03bdpeak in Fig. 12, from which a signi\ufb01cant correlation between these two parameters is found with the Spearman correlation coe\ufb03cient r = 0.480 at \u226b99.99% con\ufb01dence level. However, there is no strong correlation between the synchrotron peak frequency and BLR luminosity. The jet-disk relation can be further explored through the relationship between \u03bdL\u03bdpeak and LBLR shown in Fig. 13. 
A signi\ufb01cant correlation is present with a Spearman correlation coe\ufb03cient of r = 0.909 at con\ufb01dence level \u226b99.99%, which remains in partial correlation analysis to exclude the common dependence on redshift. The ordinary least-square (OLS) bisector linear \ufb01t gives, log \u03bdL\u03bdpeak = (0.95 \u00b1 0.06) log LBLR + (3.14 \u00b1 2.84) (9) Our result implies a tight relation between jet and disk, which is consistent with previous \ufb01ndings in various occasions, from the strong correlations either between the radio emission and emission line luminosity (e.g. Rawlings et al. 1989; Cao & Jiang 2001), or between the emission line luminosity and jet kinetic power in di\ufb00erent scales (Rawlings & Saunders 1991; Celotti & Fabian 1993; Wang et al. 2004; Gu, Cao & Jiang 2009). Moreover, our result is consistent with the nearly proportional \f8 Z. Y. Chen, M. F. Gu and X. Cao relation, Q \u221dL0.9\u00b10.2 NLR , found between the jet bulk kinetic power and narrow line luminosity (Rawlings & Saunders 1991). However, it should be noted that the synchrotron peak luminosity of blazars are usually Doppler boosted, therefore, it may not be a good indicator of jet power. It would be necessary to improve our result using intrinsic parameters, either the intrinsic synchrotron peak luminosity after eliminating the beaming e\ufb00ect, or the jet power. 6 DISCUSSION According to the blazar sequence, FSRQs with high synchrotron peak frequency, e.g. \u03bdpeak > 1015 Hz, and X-rays dominated by synchrotron emission are not expected to exist. Due to its importance, such objects have been extensively searched (see Padovani 2007, and references therein). The discoveries of such blazars have been claimed, however, mainly on the basis of their broad spectral indices, i.e. the ratios of radio/optical/X-ray \ufb02uxes (Padovani et al. 2002; Bassani et al. 2007; Giommi et al. 2007). In this sense, the X-ray spectroscopy is required to con\ufb01rm the nature of the X-ray emission in these blazars. In its simplest form, a X-ray spectral index of \u03b1 < 1 indicates a inverse Compton origin, while \u03b1 > 1 for synchrotron X-ray emission. Maraschi et al. (2008) investigate the X-ray spectra for a sample of 10 X-ray selected FSRQs from the Einstein Medium Sensitivity Survey (EMSS) and four controversial sources claimed to have synchrotron X-ray emission. They found that, in the case of the EMSS broad line blazars, X-ray selection does not lead to \ufb01nd sources with synchrotron peaks in the UV/X-ray range, as was the case for X-ray-selected BL Lacs. Instead, for a wide range of radio powers all the sources with broad emission lines show similar SEDs, with synchrotron components peaking below the optical/UV range. Moreover, the authors argued that four \u2018anomalous\u2019 blazars are no longer \u2018anomalous\u2019 after a complete analysis of Swift and INT EGRAL data, with two sources having inverse Compton X-rays, one source being HBL, and the remaining one being narrow line Seyfert 1 galaxy without unambiguous evidence of X-ray emission from a relativistic jet. Similarly, the XMM \u2212Newton and Chandra X-ray spectroscopy of 10 FSRQs were investigated by Landt et al. (2008), which are candidates to have an X-ray spectrum dominated by jet synchrotron emission. 
However, the authors failed to \ufb01nd FSRQs with X-ray spectra dominated by jet synchrotron emission, instead, the X-rays are either from inverse Compton or are at transition between the synchrotron and inverse Compton jet components as in IBLs. So, despite the e\ufb00orts to search for objects which may violate the sequence trends no strong outliers have been found (Maraschi et al. 2008). As a severe challenge to blazar sequence, the selection of important candidates of high \u03bdpeak luminous FSRQs are re\ufb01ned to choose highly core-dominated radio quasars with low radio core to X-ray luminosity ratios, e.g. log (Lcore/LX) \u22725 (Landt et al. 2008). The inverse Compton emission is expected to peak at \u03b3-ray frequencies, therefore, the high energy-peaked FSRQs could be prime targets for the Fermi Gamma-ray Telescope (Landt et al. 2008). Although the high energy-peaked FSRQs are not \ufb01rmly found yet, Ghisellini & Tavecchio (2008) proposed a model to explain the existence of blue quasars. In their scenario, the jet dissipation region is out of the broad line region, resulting in a much reduced energy density of BLR photons in jet region. Therefore, the cooling due to the inverse Compton process is not severe, causing a high synchrotron peak frequency though the source luminosity is high. Although the anti-correlation between \u03bdpeak and \u03bdL\u03bdpeak for the combined sample of 288 blazars (Fig. 5) is signi\ufb01cant, it largely di\ufb00ers with the blazar sequence in its signi\ufb01cant scatter, which indicates that the synchrotron peak luminosity can not explicitly determine the synchrotron peak frequency, and vice verse. The advantages of using the synchrotron peak luminosity is that the most of synchrotron emission are radiated at synchrotron peak frequency, at which the luminosity can be a good indicator of synchrotron emission. Since the synchrotron peak frequency varies from source to source, the luminosity at \ufb01xed wavebands (e.g. optical) is actually from the di\ufb00erent portion of source SED. The defect of the synchrotron peak luminosity lies in the contamination from beaming e\ufb00ect, which precludes it to well indicate the intrinsic source power. Only when the Doppler boosting is known for each source, the intrinsic source power can be obtained. This can not be performed at present stage. However, this e\ufb00ect has been explored through eliminating the Doppler boosting in blazar samples. Interestingly, recent studies have shown that the negative correlation between \u03bdpeak and \u03bdL\u03bdpeak is likely an artefact of Doppler boosting (Nieppola et al. 2008; Wu, Gu & Jiang 2008). According to these authors, the negative correlation is not present when the intrinsic parameters are used, conversely, a positive correlation is claimed. The key point in these studies is the strong anticorrelation between the Doppler factor and synchrotron peak frequency in the way that the sources with their synchrotron peak at low energies are signi\ufb01cantly more boosted than high \u03bdpeak sources, which is found either from the variability Doppler factor (Nieppola et al. 2008) or from the one estimated with empirical relation (Wu, Gu & Jiang 2008). The synchrotron peak frequency \u03bdpeak \u221dB\u03b4\u03b32 peak, where B is the magnetic \ufb01eld, \u03b4 the Doppler factor, and \u03b3peak a characteristic electron energy that is determined by a competition between accelerating and cooling processes. 
Di\ufb00erent from BL Lacs, the external inverse Compton scattering is thought to be the dominant cooling process in FSRQs, especially that upon BLR photons. Through model \ufb01tting to blazar SEDs, the electron peak energy is found to be well anti-correlated with the total energy density (radiative and magnetic), which is thought to be the physics behind the phenomenological blazar sequence (Ghisellini et al. 1998; Ghisellini, Celotti & Costamante 2002; Celotti & Ghisellini 2008). More energy density inside the jet cause a more severe cooling, resulting in a smaller \u03b3peak then \u03bdpeak. According to the equations we used to estimate the black hole mass, we have the BLR radius approximately with RBLR \u221dL0.6 BLR for H\u03b1, H\u03b2, MgII, CIV broad lines. Consequently, \fFSRQs from SDSS DR3 Quasar Catalogue 9 the energy density of BLR photons u\u2217 BLR is expected to be proportional to L\u22120.2 BLR. We expect to see an anti-correlation between \u03bdpeak and the BLR luminosity LBLR. However, we failed to \ufb01nd any correlation between \u03bdpeak and LBLR. Several factors may erase the expected anti-correlation, e.g. the scatters in the derivation of BLR luminosity from individual lines, the accuracy of empirical relation in estimating BLR radius, and the inclusion of B and \u03b4 (vary from source to source) in \u03bdpeak. As a result, the positive correlation between \u03bdpeak and LBLR/\u03bdL\u03bdpeak can be partly (if not all) the result of the weak anti-correlation between \u03bdpeak and \u03bdL\u03bdpeak (see Fig. 12). Nevertheless, it indicates that FSRQs with higher ratio of disk to jet emission could have higher peak frequency. The FSRQs in our sample are de\ufb01ned from the \ufb02at spectrum between 1.4 and 4.85 GHz \u03b1 < 0.5. However, this de\ufb01nition may in\ufb02uenced by several factors. It is well known that FSRQs usually show strong and rapid variability (e.g. Gu et al. 2006). Another factor is the di\ufb00erent resolution at 1.4 and 4.85 GHz. FIRST 1.4 GHz data are obtained from VLA observations, which have much higher resolution than Green Bank telescope observations for 4.85 GHz data. In these respects, the simultaneous multi-band observations with same telescope con\ufb01guration (same resolution) is required to calculate the radio spectral index, and then to understand the nature of sources. 7 SUMMARY We have constructed a sample of 185 Flat Spectrum Radio Quasars (FSRQs) by cross-correlating the Shen et al. (2006) SDSS DR3 X-ray quasar sample with FIRST and GB6 radio catalogues. From the spectrum energy distraction (SED) constructed using multi-band (radio, optical, Infrared and X-ray) data, we derived the synchrotron peak frequency and peak luminosity. The black hole mass MBH and the broad line region (BLR) luminosity (then the bolometric luminosity Lbol) were obtained by measuring the line-width and strength of broad emission lines from SDSS spectra. We de\ufb01ne a subsample of 118 FSRQs, of which the nonthermal jet emission are thought to be dominated over thermal ones from accretion disk and host galaxy. The various correlations were explored for this subsample. The main results are summarized below. 1. A weak anti-correlation is found between the synchrotron peak frequency and peak luminosity. 
When combining our FSRQs sample with the Wu, Gu & Jiang (2008) sample of 170 BL Lac objects, a signi\ufb01cant anti-correlation between the synchrotron peak frequency and luminosity apparently presents covering about seven order of magnitude in \u03bdpeak. However, the anti-correlation di\ufb00ers with the blazar sequence in the large scatter. 2. We found 25 FSRQs having synchrotron peak frequency \u03bdpeak > 1015 Hz, which is higher than the typical value for FSRQs. These sources with high \u03bdpeak could be the targets for the Fermi Gamma-ray telescope. At similar peak frequency, the peak luminosity of FSRQs with \u03bdpeak > 1015 Hz is systematically higher than that of BL Lac objects, with some FSRQs out of the range covered by BL Lac objects. Though high \u03bdpeak are found in some FSRQs, they do not reach the extreme value of BL Lacs. 3. No strong correlations are found either between the synchrotron peak frequency and black hole mass, or between the synchrotron peak frequency and the Eddington ratio. The peak luminosity is found to be tightly correlated with both black hole mass and the Eddington ratio indicating that the jet physics may be tightly related with the accretion process, which is further con\ufb01rmed by the tight correlation between the synchrotron peak luminosity and the BLR luminosity. 8 ACKNOWLEDGEMENTS We thank the referee, Hermine Landt, for insightful suggestions, which is great helpful in improving our paper. We thank Tinggui Wang, Xiaobo Dong, Dawei Xu, Wei Zhang, Tuo Ji and Chen Hu for helps on data reduction. Z.Y. Chen thanks Center for Astrophysics of USTC for hospitality during his stay in USTC. Shiyin Shen are appreciated for providing the X-ray data, Marianne Vestergaard for Fe templates, and Paolo Padovani for information on \u2018HBL box\u2019. We thank Markus B\u00a8 ottcher, Dongrong Jiang, Zhonghui Fan and Fuguo Xie for useful discussions. This work makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This work is supported by National Science Foundation of China (grants 10633010, 10703009, 10833002, 10773020 and 10821302), 973 Program (No. 2009CB824800), and the CAS (KJCX2-YW-T03). Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli \f10 Z. Y. Chen, M. F. Gu and X. 
Cao Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington." + } + ], + "Qing Yu": [ + { + "url": "http://arxiv.org/abs/2307.16204v1", + "title": "Open-Set Domain Adaptation with Visual-Language Foundation Models", + "abstract": "Unsupervised domain adaptation (UDA) has proven to be very effective in\ntransferring knowledge obtained from a source domain with labeled data to a\ntarget domain with unlabeled data. Owing to the lack of labeled data in the\ntarget domain and the possible presence of unknown classes, open-set domain\nadaptation (ODA) has emerged as a potential solution to identify these classes\nduring the training phase. Although existing ODA approaches aim to solve the\ndistribution shifts between the source and target domains, most methods\nfine-tuned ImageNet pre-trained models on the source domain with the adaptation\non the target domain. Recent visual-language foundation models (VLFM), such as\nContrastive Language-Image Pre-Training (CLIP), are robust to many distribution\nshifts and, therefore, should substantially improve the performance of ODA. In\nthis work, we explore generic ways to adopt CLIP, a popular VLFM, for ODA. We\ninvestigate the performance of zero-shot prediction using CLIP, and then\npropose an entropy optimization strategy to assist the ODA models with the\noutputs of CLIP. The proposed approach achieves state-of-the-art results on\nvarious benchmarks, demonstrating its effectiveness in addressing the ODA\nproblem.", + "authors": "Qing Yu, Go Irie, Kiyoharu Aizawa", + "published": "2023-07-30", + "updated": "2023-07-30", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction With the increasing availability of large datasets and powerful machine learning techniques, deep learning models have achieved remarkable success in many computer vision applications, such as image recognition [11], object detection [23] and natural language processing [6]. To solve the problem of acquiring labeled large-scale data, unsupervised domain adaptation (UDA) [9, 31, 32, 28] aims to transfer knowledge learned from a labeled source domain to an unlabelled target domain. Traditional domain adaptation techniques assume that the source and target domains share the same set of classes. However, in many applications, there may exist unknown classes in the target domain that were not present in the source domain. This scenario is named open-set domain adaptation (ODA), which is a more Figure 1: The conceptions of the existing ODA methods, zero-shot prediction of CLIP, and the proposed method. The proposed ODA with CLIP trains the ODA model with the guidance of CLIP. challenging problem that addresses the transfer of knowledge across domains with different class sets, including unknown classes. The major challenge in ODA is to identify unknown classes during the training phase. Existing ODA methods typically initialize their models with pre-trained models on ImageNet, and then fine-tune them with the source and target data, aiming to solve distribution shifts between the two domains. 
However, the performance of these methods heavily relies on the quality of the pre-trained models and the degree of distribution shift between the two domains. In recent years, visual-language foundation models (VLFM), such as Contrastive Language-Image PreTraining (CLIP) [21], have shown impressive performance in various computer vision and natural language processing tasks. Because these models are trained on extremely arXiv:2307.16204v1 [cs.CV] 30 Jul 2023 \flarge-scale datasets containing various types of data, these models are shown to have the ability to generalize to many domains [38]. Based on this kind of observation, we consider that CLIP can be used to improve the performance of ODA, including the classification of known classes and the identification of unknown classes. In this work, we focus on exploring the potential of CLIP for ODA. Specifically, we first investigate the robustness of CLIP for ODA on different domains and datasets. We then explore a framework to use the zero-shot predictions of CLIP to enhance the ODA performance. In our approach, we calculate the entropy of the outputs of CLIP on the target domain and the target samples having low entropy are regarded as known samples, while the target samples having high entropy are regarded as unknown samples. To achieve ODA, we train another image classification model with source samples, named ODA model. For detected known samples of the target domain, the predictions of CLIP are distilled to the ODA model, where we try to use the knowledge of CLIP to help the adaptation of target known samples. For detected unknown samples of the target domain, these samples are further separated from the known samples by maximizing the entropy of the ODA model, where the ODA model is trained to output low-confidence predictions on these unknown samples. By incorporating the outputs of CLIP with the entropy optimization strategy, we aim to provide ODA models with more informative and discriminative features, leading to better performance on ODA. Moreover, because the ODA model can be trained separately from the adaptation of the target domain, the coexistence of source and target samples during training is not required. This means our method can also be applied to source-free ODA (SF-ODA), where the adaptation step of target samples can be achieved only with the ODA model and no access to the source domain data is needed. We evaluated the proposed method under various DA settings and our experimental results demonstrated that our method enhanced the ODA performance via CLIP and our technique performed far better than current ODA and SFODA methods. This study made the following contributions. \u2022 We investigate the performance of the zero-shot predictions obtained from CLIP in the ODA problem. \u2022 We proposed an entropy optimization strategy for the predictions of CLIP to improve ODA models in the classification of known samples and the detection of unknown samples. \u2022 The proposed method can not only solve ODA, but also works in the SF-ODA setting. We evaluate our method across several benchmarks of domain adaptation and our approach outperformed other existing methods by a large margin. Method Pre-trained Model Source-free? Need fine-tuning? CLIP [21] CLIP \u2713 \u2717 DANCE [26] ImageNet \u2717 \u2713 OVA [27] ImageNet \u2717 \u2713 SHOT [13] ImageNet \u2713 \u2713 OneRing [36] ImageNet \u2713 \u2713 Proposed CLIP + ImageNet \u2713 \u2713 Table 1: Summary of recent related methods for ODA. 
Our proposed method is the only method that incorporates CLIP into the ODA methods. 2. Related Work Currently, there are several different approaches to ODA and SF-ODA. Table 1 summarizes the key methods. 2.1. Open-set Domain Adaptation Several techniques for UDA have demonstrated notable success in learning a robust classifier for labeled source data and unlabeled target data. The label sets of the source and target domains are denoted as Cs and Ct, respectively. UDA often involves a closed-set domain adaptation task where Cs equals Ct, and distribution alignment methods such as those proposed by [8, 14] have been suggested to address this task. In the presence of unknown target classes, where Cs is a subset of Ct, ODA has been proposed as a solution to address the class mismatch problem in real-world scenarios. One potential method for ODA is to use the importance weighting of source and target samples within a universal adaptation network, as proposed by [37]. Domain adaptive neighborhood clustering through entropy optimization (DANCE), introduced by [26], achieves strong performance by leveraging neighborhood clustering and entropy separation for weak domain alignment. The most advanced ODA approach is the one-versus-all network (OVANet) developed by [27], which trains one-versus-all classifiers for each class using labeled source data and adapts the open-set classifier to the target domain by minimizing the cross-entropy. 2.2. Source-free Open-set Domain Adaptation It is worth noting that all prior UDA and ODA approaches require the presence of both source and target samples during training. This presents a significant challenge, as access to labeled source data may not be available after deployment for various reasons such as privacy concerns (e.g., biometric data), proprietary datasets, or simply because training on the entire source data is computationally infeasible in real-time deployment scenarios. To solve these \fproblems, source hypothesis transfer (SHOT) [13] has been proposed for source-free UDA, which freezes the classifier module of the source model and instead focuses on learning a target-specific feature extraction module by leveraging both information maximization and self-supervised pseudolabeling techniques. USFDA [12] exploits the knowledge of class-separability to detect unknown samples for SFODA. OneRing proposed by [36] can be adapted to the target domain easily by the weighted entropy minimization to achieve SF-ODA. 2.3. Visual-Language Foundation Models With the development of Transformers for both vision [17, 7] and language [33] tasks, large-scale pre-training frameworks have become increasingly popular in recent years and have shown promising results in computer vision and natural language processing. One of the pioneer works for language pre-training is GPT [22], which optimizes the probability of output based on previous words in the sequence. Meanwhile, BERT [6] adopts the masked language modeling technique and predicts masked tokens conditioned on the unmasked ones. In computer vision, the emergence of large-scale image datasets has also led to the development of pre-training models. IGPT [3] proposes a generative pre-training technique and shows promising results on classification tasks, while MAE [10] adopts a similar pre-training scheme as BERT and predicts the masked regions of an image with unmasked ones. 
In recent years, vision-language foundation models have gained significant attention due to the availability of enormous image-text pairs collected from the internet. Various pre-training schemes have been adopted in these approaches, including contrastive learning [15], masked language modeling [30], and masked region modeling [4]. CLIP [21] is a recent representative pre-training model that aims to learn joint representations of vision and language by training on a large-scale dataset of image-text pairs. CLIP has achieved state-of-the-art performance on several visuallanguage benchmarks and has been shown to generalize well to different datasets. Moreover, CLIP has also been used to detect unknown samples [16]. In both ODA and SF-ODA, the existing methods usually start by initializing their models with pre-trained models on ImageNet, which is a relatively small dataset compared to the ones used in VLFM. Because the efficacy of these methods largely relies on the quality of the pre-trained models, VLFM like CLIP has a large potential to improve the performance of ODA and SF-ODA. Instead of fine-tuning VLFM with a large computational cost, we propose a lightweight way to apply the CLIP for ODA by simply using the zeroshot predictions of CLIP. 3. Method In this section, we present our problem statement and proposed entropy optimization framework with CLIP for ODA and SF-ODA as shown in Fig. 2. 3.1. Problem Statement We assume that a source image-label pair {xs, ys} is drawn from a set of labeled source images, {Xs, Ys}, while an unlabeled target image xt is drawn from a set of unlabeled images Xt. Cs and Ct denote the label sets of the source samples and target samples, respectively. In ODA, the known classes are the classes of source data, and certain unknown classes are present in the unlabeled source and target data, i.e., Cs \u2282Ct. These unknown target classes are denoted by f Ct = Ct\\Cs. Given a target sample xt, the goal of ODA is to predict its label yt as one of the source classes Cs correctly or detect it as an unknown sample if it belongs to e Ct. In SF-ODA, {Xs, Ys} is not accessible when training with Xt, so the adaptation needs to be achieved only with Xt and the model trained on {Xs, Ys}. The mini-batch training process involves two sets of data, where Ds = (xi s, yi s) N i=1 represents a mini-batch of size N that is sampled from the source samples, and Dt = (xi t) N i=1 represents a mini-batch of size N that is sampled from the target samples. 3.2. Zero-shot Prediction using CLIP CLIP is composed of an image encoder Fclip and a language model Gclip. It utilizes the similarity between the embeddings of a text prompt t and image features to classify images, instead of using a classification head trained from scratch. The prediction is obtained by computing the cosine similarity between Fclip(xi t) and Gclip(tk) for class prompts tk: \u02c6 yi = arg max k\u2208Cs Fclip(xi t) \u00b7 Gclip(tk), (1) where Cs is class categories in the source domain and \u00b7 is cosine similarity. To evaluate the power of the pre-trained CLIP model for ODA, we tested the zero-shot prediction of CLIP. We froze both the image encoder and the language model and replaced the class labels in each dataset with the text prompt t as \u201cA photo of a {label}\u201d. In ODA, because there are unknown samples exist in the target domain, we detect these samples according to the entropy of the predictions. 
First, we transfer the cosine similarity to the probability \u02c6 p as follows: \u02c6 p(k|xi t) = exp(Fclip(xi t) \u00b7 Gclip(tk)/\u03c4) PK k=1 exp(Fclip(xi t) \u00b7 Gclip(tk)/\u03c4) , (2) \fFigure 2: Overview of the proposed framework. Our network has an ODA model (Foda) to classify the source and target samples. CLIP is also used to generate zero-shot predictions of target samples. The output of Foda is trained with the guidance of CLIP\u2019s predictions and entropy optimization. where \u02c6 p(k|xi t) denotes probability of the sample xi t belonging to class k and the \u03c4 controls the distribution concentration degree and is set as \u03c4 = 0.01 in this paper. We then calculate the entropy of \u02c6 p as H(\u02c6 p), and if H(\u02c6 p) of a target sample xi t is larger than a threshold \u03b4, it will be predicted as the unknown class because the prediction has low confidence on all the known classes. For other known samples having small entropy, their classes will be predicted by Eq. (1). 3.3. ODA Model Preparation To improve ODA models with the help of CLIP, we first need to prepare a simple model to classify the source samples, which has a feature extractor and a classifier. This model is denoted as Foda and outputs a probability vector Foda(xi) \u2208R|Cs|. To classify the known categories correctly, we simply train the model using standard crossentropy loss on labeled source data, expressed as: Ls(Ds) = \u22121 N N X i=1 |Cs| X k=1 yik s log p(k|xi s), (3) where p(k|xi s) represents the probability that sample xi s belongs to class k predicted by the classifier, which is the k-th output of Foda(xi s), and yik s denotes the binary label whether the sample belongs to class k. 3.4. Entropy Optimization with CLIP As noted in [37, 26], when compared to target known samples, the output of the classifier for target unknown samples is likely to have a higher entropy due to the absence of common features shared by known source classes. Building on this observation, we propose utilizing the entropy to distinguish between known and unknown samples. We apply this entropy strategy to both the outputs from the ODA model and the predictions from CLIP. 3.4.1 Domain Adaptation via Entropy Separation To adapt the ODA model Foda to the target domain, we apply the entropy separation loss proposed by [26] to the target samples as follows: Lent(Dt) = 1 N N X i=1 \u02dc Lent(Dt) (4) \u02dc Lent(Dt) = ( \u2212|H(p) \u2212\u03b4| if |H(p) \u2212\u03b4| > m 0 otherwise , (5) where H(p) is the entropy of p(xi t), \u03b4 denotes the threshold and m denotes the margin for separation. \u03b4 is set to log(|Cs|) 2 , because log(|Cs|) is the maximum value of H(p). When the entropy is larger than the threshold and not in the margin area, i.e., H(p) > \u03b4 + m (or H(p) \u2212\u03b4 > m), this sample will be considered as an unknown sample and its \fentropy will be increased by minimizing Eq. (5). This step can keep the unknown samples far from the source samples. Otherwise, when the entropy is small enough, i.e., H(p) < \u03b4 \u2212m (or H(p) \u2212\u03b4 < \u2212m), this sample will be considered as a known sample and its entropy in Eq. (5) will be decreased. This kind of entropy minimization facilitates DA of known classes in UDA tasks [2, 25]. 3.4.2 CLIP-guided Domain Adaptation By the entropy separation of the ODA model, Foda is able to achieve ODA to a certain extent. To improve the performance of ODA, we additionally use the zero-shot predictions of CLIP to train Foda. 
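A minimal sketch of the CLIP side of this strategy — the zero-shot probabilities of Eqs. (1)–(2) and the entropy threshold δ = log(|Cs|)/2 used below to split a target batch — assuming the publicly released CLIP package. The prompt template follows the paper; the function and variable names are ours rather than from a released implementation.

```python
import torch
import clip  # the public CLIP package (https://github.com/openai/CLIP)

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)  # both encoders stay frozen

def clip_zero_shot_split(images, class_names, tau=0.01):
    """images: a batch already transformed by `preprocess`.
    Returns the class probabilities p_hat of Eq. (2) and a boolean mask of samples
    whose prediction entropy exceeds delta = log(|Cs|)/2 (treated as unknown)."""
    prompts = clip.tokenize([f"A photo of a {c}" for c in class_names]).to(device)
    with torch.no_grad():
        img = model.encode_image(images.to(device))
        txt = model.encode_text(prompts)
        img = img / img.norm(dim=-1, keepdim=True)
        txt = txt / txt.norm(dim=-1, keepdim=True)
        p_hat = ((img @ txt.t()) / tau).softmax(dim=-1)   # cosine similarity / tau, Eq. (2)
    entropy = -(p_hat * torch.log(p_hat + 1e-12)).sum(dim=-1)
    delta = torch.log(torch.tensor(float(len(class_names)))) / 2
    # For samples below the threshold, argmax of p_hat gives the Eq. (1) prediction.
    return p_hat, entropy > delta
```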
As mentioned in Section 3.2, we detect the unknown samples according to the entropy of CLIP\u2019s prediction H(\u02c6 p). We denote the detected target unknown samples as \u02c6 Dunk t whose H(\u02c6 p) > \u03b4 in Dt, and the remaining known samples as \u02c6 Dkwn t whose H(\u02c6 p) <= \u03b4. For the target known samples \u02c6 Dkwn t , we directly use the zero-shot predictions of CLIP \u02c6 p on these samples as the pseudo-label to train the Foda as follows: Lkwn( \u02c6 Dkwn t ) = \u2212 1 | \u02c6 Dkwn t | | \u02c6 Dkwn t | X i=1 |Cs| X k=1 \u02c6 p(k|xi t) log p(k|xi t), (6) where we aim to provide ODA models with the knowledge of CLIP, leading to better classification results of known classes. Regarding the target unknown samples \u02c6 Dunk t , we increase the entropy of the outputs on these samples obtained from Foda as follows: Lunk( \u02c6 Dunk t ) = 1 | \u02c6 Dunk t | | \u02c6 Dunk t | X i=1 \u2212H(\u02c6 p), (7) where we try to incorporate the unknown classes detected by CLIP with the ones detected by Foda in Eq. (5). 3.5. Overall Objective Function In summary, our entropy optimization framework performs the supervised training with source samples, entropy separation with target samples, and CLIP-guided domain adaptation. The overall learning objective for ODA is min Foda Ltotal =Ls(Ds) + Lent(Dt) + Lkwn( \u02c6 Dkwn t ) + Lunk( \u02c6 Dunk t ). (8) For SF-ODA, because Ds is not accessible during the training with Dt, we assume that the ODA model Foda pretrained over the source samples Ds is available instead. In our experiments, we pre-train Foda in a standard supervised classifier learning manner, i.e., by minimizing: min Foda Lpretrain = Ls(Ds). (9) Dataset Domain #Total known samples #Total unknown samples Office Amazon (A) 958 1009 DSLR (D) 157 175 Webcam (W) 295 269 Office -Home Art (A) 743 1,684 Clipart (C) 1,116 3,249 Product (P) 1,077 3,362 Real (R) 1,203 3,154 VisDA Synthetic 79,765 Real 34,146 21,242 DomainNet Clipart (C) 8,333 10,370 Painting (P) 13,049 18,453 Real (R) 33,238 37,120 Sketch (S) 9,309 15,273 Table 2: Overall statistics of each dataset. We further train it with entropy separation and CLIP-guided domain adaptation over Dt as follows: min Foda Ltotal = Lent(Dt) + Lkwn( \u02c6 Dkwn t ) + Lunk( \u02c6 Dunk t ). (10) 4. Experiment 4.1. Experimental Setup 4.1.1 Datasets Following existing studies [37, 26], we used four datasets to validate our approach. (1) Office [24] consists of three domains (Amazon, DSLR, Webcam), and 21 of the total 31 classes are used in ODA [29]. (2) Office-Home [34] contains four domains (Art, Clipart, Product, and Real) and 65 classes. (3) VisDA [20] contains two domains (Synthetic and Real) and 12 classes. (4) A subset of DomainNet [19] contains four domains (Clipart, Real, Painting, Sketch) with 126 classes. To create the scenarios for ODA, we split the classes of each dataset according to [26], as |Cs|/|f Ct| = 10/11 for Office, 15/50 for OfficeHome, 6/6 for VisDA, 60/66 for DomainNet. Table 2 summarizes the overall statistics of each dataset used in our experiments. For Office and Office-Home, each domain is used as the source and target domains. For VisDA, only the synthetic-to-real task was performed. For DomainNet, seven tasks from four domains (C2S, P2C, P2R, R2C, R2P, R2S, S2P) were performed as described in [25]. 
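Returning to the training objective, the loss terms of Eqs. (4)–(7) can be written compactly. The following PyTorch-style sketch assumes δ = log(|Cs|)/2 and m = 0.5 as above; `logits` denotes the ODA model's outputs on a target batch, `p_hat` the CLIP probabilities of Eq. (2), and all tensor and function names are ours rather than from a released implementation.

```python
import math
import torch
import torch.nn.functional as F

def entropy(p, eps=1e-12):
    return -(p * torch.log(p + eps)).sum(dim=-1)

def entropy_separation_loss(logits, num_src_classes, m=0.5):
    """Eqs. (4)-(5): outside the +/- m margin, raise the entropy of likely-unknown
    samples (H > delta) and lower it for likely-known ones (H < delta)."""
    delta = math.log(num_src_classes) / 2
    h = entropy(F.softmax(logits, dim=-1))
    gap = (h - delta).abs()
    return torch.where(gap > m, -gap, torch.zeros_like(gap)).mean()

def clip_guided_losses(logits, p_hat, num_src_classes):
    """Eq. (6): distill CLIP's soft predictions on CLIP-detected known samples.
    Eq. (7): maximize the ODA model's prediction entropy on CLIP-detected unknown ones."""
    delta = math.log(num_src_classes) / 2
    unknown = entropy(p_hat) > delta
    known = ~unknown
    log_p = F.log_softmax(logits, dim=-1)
    l_kwn = -(p_hat[known] * log_p[known]).sum(dim=-1).mean() if known.any() else logits.new_zeros(())
    l_unk = -entropy(F.softmax(logits[unknown], dim=-1)).mean() if unknown.any() else logits.new_zeros(())
    return l_kwn, l_unk

# Eq. (8): total = cross_entropy(source_logits, source_labels)
#                + entropy_separation_loss(target_logits, num_src_classes)
#                + sum(clip_guided_losses(target_logits, p_hat, num_src_classes))
```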
4. Experiment

4.1. Experimental Setup

4.1.1 Datasets

Following existing studies [37, 26], we used four datasets to validate our approach. (1) Office [24] consists of three domains (Amazon, DSLR, Webcam), and 21 of the total 31 classes are used in ODA [29]. (2) Office-Home [34] contains four domains (Art, Clipart, Product, and Real) and 65 classes. (3) VisDA [20] contains two domains (Synthetic and Real) and 12 classes. (4) A subset of DomainNet [19] contains four domains (Clipart, Real, Painting, Sketch) with 126 classes. To create the scenarios for ODA, we split the classes of each dataset according to [26], with $|C_s|/|\tilde{C}_t| = 10/11$ for Office, $15/50$ for Office-Home, $6/6$ for VisDA, and $60/66$ for DomainNet. Table 2 summarizes the overall statistics of each dataset used in our experiments. For Office and Office-Home, each domain is used as the source and target domain. For VisDA, only the synthetic-to-real task was performed. For DomainNet, seven tasks from four domains (C2S, P2C, P2R, R2C, R2P, R2S, S2P) were performed, as described in [25].

Table 2: Overall statistics of each dataset.
Dataset       Domain         #Total known samples   #Total unknown samples
Office        Amazon (A)        958                    1009
              DSLR (D)          157                     175
              Webcam (W)        295                     269
Office-Home   Art (A)           743                   1,684
              Clipart (C)     1,116                   3,249
              Product (P)     1,077                   3,362
              Real (R)        1,203                   3,154
VisDA         Synthetic      79,765                       -
              Real           34,146                  21,242
DomainNet     Clipart (C)     8,333                  10,370
              Painting (P)   13,049                  18,453
              Real (R)       33,238                  37,120
              Sketch (S)      9,309                  15,273

4.1.2 Comparison of Methods

We compared the proposed method with two baseline methods: (1) the zero-shot prediction of CLIP [21], as described in Section 3.2, and (2) source only (SO), in which the model is trained only with labeled source data and the unknown classes are detected based on the entropy. We also compared it with two ODA methods, (1) DANCE [26] and (2) OVA [27], and two SF-ODA methods, (1) SHOT [13] and (2) OneRing [36]. We chose to exclude the results of standard domain alignment baselines such as DANN [8] and CDAN [14] from our analysis, as prior research on ODA [26, 35] has demonstrated that these methods can lead to a notable decline in performance when rejecting unknown samples.

4.1.3 Evaluation Protocols

Evaluating ODA methods requires taking into account the trade-off between the accuracy of the known classes and that of the unknown classes, so we used the H-score metric [27, 1]. When the unknown classes are regarded as a single unified unknown class, the H-score is the harmonic mean of the accuracy of the known classes ($acc_{kwn}$) and that of the unified unknown class ($acc_{unk}$):

$$\text{H-score} = \frac{2\, acc_{kwn} \cdot acc_{unk}}{acc_{kwn} + acc_{unk}}. \qquad (11)$$

The H-score is high only when both $acc_{kwn}$ and $acc_{unk}$ are high, so this metric measures both accuracies. We report the averages of the scores obtained from three trials with different random seeds in all experiments.

4.1.4 Implementation Details

All experiments are implemented in PyTorch [18]. We used the same network architecture and hyperparameters as in [26]; the ODA model $F_{oda}$ uses ResNet-50 [11] pre-trained on ImageNet [5] as its backbone. We set the entropy threshold to $\log(|C_s|)/2$ and the margin $m$ to 0.5 for our method. For CLIP, we use the original implementation and pre-trained model from [21].
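Since every comparison below is reported with the H-score, a minimal sketch of Eq. (11) is given here for reference; the function name `h_score` and the zero-division guard are our own additions.

```python
def h_score(acc_known: float, acc_unknown: float) -> float:
    """Harmonic mean of known-class accuracy and unified unknown-class accuracy (Eq. 11).
    Both arguments are accuracies in [0, 1] (or percentages, as long as they match)."""
    if acc_known + acc_unknown == 0:
        return 0.0
    return 2.0 * acc_known * acc_unknown / (acc_known + acc_unknown)

# Example: a method with acc_kwn = 0.90 and acc_unk = 0.80 gets an H-score of about 0.847.
print(h_score(0.90, 0.80))
```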
4.2. Experimental Results

Main results. Table 3 shows the ODA and SF-ODA results on each dataset. We compared our method (non-source-free and source-free) with the ODA methods DANCE and OVA and the SF-ODA methods SHOT and OneRing. It is noticeable that the proposed method outperformed the other existing methods by a large margin. It is also surprising that the zero-shot prediction of CLIP is comparable to some ODA and SF-ODA methods on the Office and Office-Home datasets. On VisDA and DomainNet, we found that CLIP performs better than all the compared methods without any fine-tuning. We consider that this is because the classes in these two datasets are more coarse-grained and the domains, e.g., Painting, Clipart, Sketch, and Real, are more common in the training data of CLIP. These results demonstrate that, with the power of large-scale pre-training on data from multiple domains, existing vision-language foundation models like CLIP are sufficient to cover the existing datasets for ODA in image classification.

Table 3: H-scores (%) of ODA and SF-ODA on each dataset (OF: Office, OH: Office-Home, VD: VisDA, DN: DomainNet). "Ours SF" denotes the source-free version of the proposed method. The average scores of all tasks for each dataset are reported. The bold values represent the highest scores for each row.
Method    OF     OH     VD     DN
CLIP      76.37  65.71  79.49  66.16
SO        60.91  56.88  43.57  59.14
DANCE     77.82  63.33  67.87  58.03
OVA       89.57  70.61  59.80  62.23
Ours      92.79  79.43  80.68  76.23
SHOT      78.00  63.08  47.08  58.64
OneRing   89.87  67.65  51.21  58.46
Ours SF   91.87  80.67  83.81  76.13

Figure 3: Histograms of the entropy of CLIP's zero-shot predictions on the target known (in orange) and unknown (in blue) samples. (a) Amazon → Webcam in the Office dataset. (b) Art → Real in the Office-Home dataset.

Figure 4: Histograms of the prediction entropy obtained from the proposed method on the target known (in orange) and unknown (in blue) samples. (a) Amazon → Webcam in the Office dataset. (b) Art → Real in the Office-Home dataset.

Figure 5: Confusion matrices of CLIP's zero-shot prediction. (a) Amazon → Webcam in the Office dataset. (b) Art → Real in the Office-Home dataset.

Figure 6: Confusion matrices of the proposed method. (a) Amazon → Webcam in the Office dataset. (b) Art → Real in the Office-Home dataset.

Comparison between zero-shot CLIP and the proposed method. To further investigate the performance of CLIP, we show in Fig. 3 the histograms of the entropy of CLIP's zero-shot predictions on the target known and unknown samples. There are some overlaps between the known and unknown samples; in particular, in the Art → Real task of the Office-Home dataset, many unknown samples have small entropy, which leads to low performance in the detection of unknown samples. We plot the corresponding histograms of the proposed method in Fig. 4; the proposed method separates the known and unknown samples better than CLIP at the threshold $\delta$. We also calculate the area under the receiver operating characteristic curve (AUROC) by regarding the detection of unknown samples as a binary classification task. The AUROC of CLIP and the proposed method on Amazon → Webcam is 95.91% and 98.18%, respectively; for Art → Real, the AUROC of CLIP and the proposed method is 93.82% and 94.71%. Furthermore, we plot the confusion matrices of CLIP in Fig. 5 for the Amazon → Webcam task of the Office dataset and the Art → Real task of the Office-Home dataset. Although the classification of known samples is almost always correct, some unknown samples are predicted as known classes. In Fig. 6, we also plot the confusion matrices of the proposed method, which achieves better performance in the detection of unknown samples with the help of the ODA model trained on the source domain.
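The AUROC figures quoted above treat unknown-sample detection as a binary classification problem scored by the prediction entropy; a minimal sketch of that evaluation is shown below, assuming scikit-learn is available. The helper name and the synthetic data are illustrative, not the paper's.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def unknown_detection_auroc(entropies: np.ndarray, is_unknown: np.ndarray) -> float:
    """AUROC of unknown-sample detection, using per-sample entropy as the score.
    Higher entropy is expected for unknown samples (label 1), lower for known (label 0)."""
    return roc_auc_score(is_unknown.astype(int), entropies)

# Illustrative usage with synthetic values (not the paper's data):
rng = np.random.default_rng(0)
entropies = np.concatenate([rng.normal(0.5, 0.2, 100),   # known samples: low entropy
                            rng.normal(2.0, 0.4, 100)])  # unknown samples: high entropy
labels = np.concatenate([np.zeros(100), np.ones(100)])
print(unknown_detection_auroc(entropies, labels))
```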
Detail results on each dataset. The results of each task in the Office dataset are shown in Table 4, which compares the classification results of ODA and SF-ODA obtained via the proposed method with existing state-of-the-art ODA and SF-ODA methods. In terms of ODA, although OVA demonstrated superior performance compared to the other existing techniques, our approach surpassed OVA in most tasks. Furthermore, in the SF-ODA setting, OneRing achieved good results despite having no access to the source data, slightly better than those of OVA. Meanwhile, the source-free version of the proposed approach outperformed the existing methods by a considerable margin and achieved performance similar to that of the ODA version.

Table 4: H-scores (%) on Office under the ODA and SF-ODA settings.
Method    A2D    A2W    D2A    D2W    W2A    W2D    Avg
CLIP      81.64  75.08  72.39  75.08  72.39  81.64  76.37
SO        56.59  55.81  71.04  68.06  58.58  55.37  60.91
DANCE     83.87  77.80  77.84  84.80  68.06  74.55  77.82
OVA       90.29  86.30  88.28  92.66  86.14  93.75  89.57
Ours      93.69  92.05  91.59  93.45  90.97  95.00  92.79
SHOT      77.42  76.90  83.03  79.76  74.12  76.76  78.00
OneRing   89.31  85.87  88.00  92.07  90.25  93.72  89.87
Ours SF   93.31  89.25  91.86  92.55  90.79  93.48  91.87

The results of each task in Office-Home, VisDA, and DomainNet are shown in Table 5 and Table 6. The performance ranking of the existing methods and the proposed method is similar to that on the Office dataset: OVA performs best among the existing methods on most datasets, but the proposed method outperforms it by a large margin. As mentioned in the main results, CLIP shows strong results on VisDA and DomainNet.

Table 5: H-scores (%) on Office-Home under the ODA and SF-ODA settings.
Method    A2C    A2P    A2R    C2A    C2P    C2R    P2A    P2C    P2R    R2A    R2C    R2P    Avg
CLIP      71.17  55.15  63.94  72.57  55.15  63.94  72.57  71.17  63.94  72.57  71.17  55.15  65.71
SO        56.54  54.36  56.89  60.61  53.08  57.09  59.65  54.82  56.24  59.06  57.39  56.85  56.88
DANCE     63.10  60.28  64.78  66.62  57.98  63.62  67.55  61.92  62.81  66.87  61.86  62.59  63.33
OVA       64.42  74.72  77.59  69.09  69.91  73.84  66.64  58.58  77.44  73.65  62.95  78.53  70.61
Ours      76.66  76.02  83.41  81.51  77.28  82.44  81.93  76.48  82.65  82.76  76.10  75.89  79.43
SHOT      60.22  61.19  66.08  64.48  62.24  68.07  62.64  57.67  65.81  64.64  58.58  65.30  63.08
OneRing   60.99  69.92  73.68  67.01  67.77  71.36  67.74  57.50  74.51  67.83  61.65  71.90  67.65
Ours SF   76.54  79.28  85.46  82.33  78.55  84.83  82.21  76.50  84.59  82.09  76.77  78.94  80.67

Table 6: H-scores (%) on VisDA and DomainNet under the ODA and SF-ODA settings (the seven task columns and Avg refer to DomainNet).
Method    VisDA   P2C    P2R    C2S    R2P    R2C    R2S    S2P    Avg
CLIP      79.49   66.23  67.41  64.05  67.55  66.23  64.05  67.55  66.16
SO        43.57   61.51  62.41  56.32  59.99  59.90  54.42  59.40  59.14
DANCE     67.87   60.62  58.01  55.89  59.50  59.09  53.80  59.29  58.03
OVA       59.80   64.27  65.18  59.58  63.79  63.67  57.28  61.81  62.23
Ours      80.68   78.26  84.48  72.26  74.45  78.18  72.07  73.89  76.23
SHOT      47.08   61.55  62.82  55.45  58.36  61.54  55.40  55.36  58.64
OneRing   51.21   61.40  66.17  53.85  59.38  59.53  52.19  56.72  58.46
Ours SF   83.81   78.45  84.96  72.06  73.58  78.89  71.95  73.03  76.13

4.3. Ablation Study

Variants of the proposed method were evaluated using the Office dataset for further exploration of the efficacy of the proposed method. The following variants were studied. (1) "Ours w/o $L_s$" is a variant that does not train with source data in Eq. (3). (2) "Ours w/o $L_{ent}$" is a variant that does not use entropy separation on the target samples in Eq. (5). (3) "Ours w/o $L_{kwn}$" is a variant that does not use the prediction of CLIP as the pseudo-label in Eq. (6). (4) "Ours w/o $L_{unk}$" is a variant that does not maximize the entropy of the unknown samples detected by CLIP in Eq. (7).

Table 7: H-scores (%) of the ablation study on each dataset.
Method       OF     OH     VD     DN
Ours         92.79  79.43  80.68  76.23
w/o L_s      90.04  71.86  78.10  74.79
w/o L_ent    90.05  78.74  77.32  72.35
w/o L_kwn    87.87  72.98  72.12  70.55
w/o L_unk    82.67  66.98  71.63  69.14

Table 7 reveals that the version of our approach that uses all the losses outperforms the other variants in all settings. Specifically, the most important component for our method is $L_{unk}$, and $L_{kwn}$ is also necessary to achieve higher performance, which shows the importance of CLIP's guidance.

5. Limitations and Future Work

We proposed a simple method that uses CLIP to enhance the performance of ODA models, but we believe there could be more efficient ways to adopt CLIP in ODA. Fine-tuning CLIP directly could also be considered, but the computational cost is a concern, and the fine-tuning process must be prevented from causing the model to overfit to the source domain. Developing a more sophisticated method for ODA with CLIP is certainly intriguing, and we leave it as future work.

6."
+ } + ] + }, + "edge_feat": {} + } +} \ No newline at end of file