diff --git "a/abs_29K_G/test_abstract_long_2405.03690v2.json" "b/abs_29K_G/test_abstract_long_2405.03690v2.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.03690v2.json" @@ -0,0 +1,778 @@ +{ + "url": "http://arxiv.org/abs/2405.03690v2", + "title": "How Good is my Video LMM? Complex Video Reasoning and Robustness Evaluation Suite for Video-LMMs", + "abstract": "Recent advancements in Large Language Models (LLMs) have led to the\ndevelopment of Video Large Multi-modal Models (Video-LMMs) that can handle a\nwide range of video understanding tasks. These models have the potential to be\ndeployed in real-world applications such as robotics, AI assistants, medical\nsurgery, and autonomous vehicles. The widespread adoption of Video-LMMs in our\ndaily lives underscores the importance of ensuring and evaluating their robust\nperformance in mirroring human-like reasoning and interaction capabilities in\ncomplex, real-world contexts. However, existing benchmarks for Video-LMMs\nprimarily focus on general video comprehension abilities and neglect assessing\ntheir reasoning capabilities over complex videos in the real-world context, and\nrobustness of these models through the lens of user prompts as text queries. In\nthis paper, we present the Complex Video Reasoning and Robustness Evaluation\nSuite (CVRR-ES), a novel benchmark that comprehensively assesses the\nperformance of Video-LMMs across 11 diverse real-world video dimensions. We\nevaluate 9 recent models, including both open-source and closed-source\nvariants, and find that most of the Video-LMMs, especially open-source ones,\nstruggle with robustness and reasoning when dealing with complex videos. Based\non our analysis, we develop a training-free Dual-Step Contextual Prompting\n(DSCP) technique to enhance the performance of existing Video-LMMs. Our\nfindings provide valuable insights for building the next generation of\nhuman-centric AI systems with advanced robustness and reasoning capabilities.\nOur dataset and code are publicly available at:\nhttps://mbzuai-oryx.github.io/CVRR-Evaluation-Suite/.", + "authors": "Muhammad Uzair Khattak, Muhammad Ferjad Naeem, Jameel Hassan, Muzammal Naseer, Federico Tombari, Fahad Shahbaz Khan, Salman Khan", + "published": "2024-05-06", + "updated": "2024-05-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Multi AND Modal AND LLM", + "gt": "Recent advancements in Large Language Models (LLMs) have led to the\ndevelopment of Video Large Multi-modal Models (Video-LMMs) that can handle a\nwide range of video understanding tasks. These models have the potential to be\ndeployed in real-world applications such as robotics, AI assistants, medical\nsurgery, and autonomous vehicles. The widespread adoption of Video-LMMs in our\ndaily lives underscores the importance of ensuring and evaluating their robust\nperformance in mirroring human-like reasoning and interaction capabilities in\ncomplex, real-world contexts. However, existing benchmarks for Video-LMMs\nprimarily focus on general video comprehension abilities and neglect assessing\ntheir reasoning capabilities over complex videos in the real-world context, and\nrobustness of these models through the lens of user prompts as text queries. In\nthis paper, we present the Complex Video Reasoning and Robustness Evaluation\nSuite (CVRR-ES), a novel benchmark that comprehensively assesses the\nperformance of Video-LMMs across 11 diverse real-world video dimensions. 
We\nevaluate 9 recent models, including both open-source and closed-source\nvariants, and find that most of the Video-LMMs, especially open-source ones,\nstruggle with robustness and reasoning when dealing with complex videos. Based\non our analysis, we develop a training-free Dual-Step Contextual Prompting\n(DSCP) technique to enhance the performance of existing Video-LMMs. Our\nfindings provide valuable insights for building the next generation of\nhuman-centric AI systems with advanced robustness and reasoning capabilities.\nOur dataset and code are publicly available at:\nhttps://mbzuai-oryx.github.io/CVRR-Evaluation-Suite/.", + "main_content": "Introduction Recently, Large Language Models (LLMs) [Touvron et al., 2023, Zheng et al., 2023, Jiang et al., 2024] have demonstrated impressive reasoning and planning capabilities while simultaneously handling a wide range of NLP tasks [Wei et al., 2022a, Brown et al., 2020]. Consequently, their integration with the vision modality, specifically for video understanding tasks, has given rise to Video Large Multi-modal Models (Video-LMMs) [Li et al., 2023b]. These models act as visual chatbots that accept both text and video as input and handle a diverse set of tasks, including video comprehension [Maaz et al., 2023], detailed video understanding [Lin et al., 2023], and action grounding [Zhang et al., 2023]. As these models directly capture video data, they hold substantial potential for deployment in real-world applications such as robotics, surveillance, medical surgery, and autonomous vehicles. However, as these models assume an expanding role in our everyday lives, assessing their performance in comprehending complex videos and demonstrating reliable reasoning and robustness capabilities arXiv:2405.03690v2 [cs.CV] 8 May 2024 \fBenchmark Textual Complex In the wild Contextual Multiple Temporal Order Robustness Reasoning (OOD) Dependency Actions & Fine-grained MSVD-QA [Xu et al., 2017] MSRVTT-QA [Xu et al., 2017] TGIF-QA [Jang et al., 2017] Activity Net-QA [Yu et al., 2019] VideoChat-GPT [Maaz et al., 2023] MVBench [Li et al., 2023c] SEED-Bench [Li et al., 2023a] CVRR-ES (ours) Table 1: Comparison of CVRR-ES with existing benchmarks for video QA. The CVRR-ES benchmark represents an initial effort to assess Video-LMMs in the context of their applicability and suitability in real-world applications. Non-existent actions with non-existent scene depictions. 6.0% Multiple actions in a single video. 13.25% Fine-grained action understanding. 9.58% Partial actions. 8.58% Non-existent actions with existent scene depictions. 5.75% Interpretation of visual context. 11.38% Continuity and Object Instance Count. 7.38% Unusual and Physically Anomalous activities. 7.92% Interpretation of social context. 11.67% Understanding of emotional context. 12.17% Time order understanding. 6.33% CVRR Evaluation Suite 0 20 40 60 80 100 Accuracy % (averaged over 11 video dimensions) Video LLaVa MovieChat LLaMA-VID Video-LLaMA-2 Video-ChatGPT VideoChat TimeChat Gemini-Pro GPT4V(ision) Human Video LMMs 15.92% 16.41% 16.46% 21.62% 24.96% 25.78% 32.89% 53.2% 70.78% 96.67% Figure 1: Left: CVRR-ES comprises of 11 diverse complex video evaluation dimensions encompassing a variety of complex, real-world contexts. Right: Overall performance of Video-LMMs on the CVRR-ES benchmark. Results for each Video-LMM are averaged across 11 video dimensions. across diverse real-world contexts becomes essential. 
Video-LMMs with such capabilities will be more effective when integrated into our daily lives for solving perception tasks and will be a promising step towards building human-centric AI-assistive systems. Several attempts in literature have been made to benchmark Video-LMMs. SEED-Bench [Li et al., 2023a] curated a MCQ-based benchmarking dataset including 3 evaluation dimensions for videos. Similarly, MV-Bench [Li et al., 2023c] constructed the Video-LMM benchmark and assembled 20 challenging video tasks for evaluating the spatial and temporal understanding of these models. While these methods aim at benchmarking Video-LMMs, they predominantly evaluate video and/or temporal comprehension abilities and overlook the complex reasoning aspects of Video-LMMs for real-world context, and their robustness towards user input text queries; both of which are crucial to ensure their responsible engagement with humans in various real-world situations in the wild. While some studies have explored similar areas such as hallucinations in image-based LLMs [Liu et al., 2023a, Qian et al., 2024], no such comprehensive study exists for the case of Video-LMMs. Motivated by the wide-scale applications of Video-LMMs and the lack of world-centric complex video benchmarking efforts, we present a new benchmark, Complex Video Reasoning and Robustness Evaluation Suite (CVRR-ES), to comprehensively assess the performance of Video-LMMs. As shown in Tab. 1, CVRR-ES evaluates Video-LMMs on key aspects of robustness and reasoning in videos, encompassing video domains that more accurately test models in real-world scenarios such as videos having contextual dependency and in-the-wild aspects. CVRR-ES is an open-ended video QA benchmark comprising 11 real-world video category dimensions (Fig. 1, left) that encompass diverse evaluation aspects. These dimensions span from context-dependent (e.g., social, emotional, etc.) categories to ones that often take place in the wild such as videos containing physically anomalous activities. We comprehensively evaluate a representative set of 9 recent Video-LMMs (Fig. 1, right) including both open-source and closed-source models on the CVRR-ES benchmark using a LLM-assisted automatic evaluation framework [Maaz et al., 2023, Cai et al., 2023]. The performance of Video-LMMs on the CVRR-ES benchmark reveals that these models struggle to correctly comprehend complex videos indicating their weak reasoning and lack of robustness to the textual user queries (Fig. 2). For instance, state-of-the-art Video-LLaVA [Lin et al., 2023] achieves only 15.92% performance averaged across 11 video dimensions of CVRR-ES. In contrast, closed-source models including GPT4V(vision) [OpenAI, 2023] and Gemini-Vision-Pro [Google, 2023] exhibit relatively stronger performance but still lag behind the performance of humans. Using CVRR-ES benchmark, we extensively perform quantitative and qualitative analysis formulating important insights into these Video-LMMs based on their failure cases and individual performances across the diverse video dimensions. 2 \f Based on the visual cues provided in the sequence of images, it appears that four individuals exit the vehicle. Correct Response: Three people exit the car to retrieve black bags from the trunk. 2. Two individuals exit the vehicle to retrieve items from the trunk. Two individuals exit the vehicle to retrieve items from the trunk. Two individuals exit the vehicle to retrieve items from the trunk. 
In the video, we can see a young man and woman exiting the car \u2026 How many individuals exit the vehicle to retrieve items from the trunk? Based on \u2026, one person is performing a prank on another individual. The prank seems to involve pouring a yellow liquid from a bowl, which is sneakily positioned behind the unsuspecting person's back at waist level. Correct Response: one man uses his back muscles to crack eggs placed by another man into a bowl behind him. The process is amusing and ends with a celebratory turn by the man holding the eggs. \u2026 Another man is holding a bowl of food behind the first man \u2026 the second man appears to be pouring the contents of the bowl down the first man's shorts. A man is seen standing in a kitchen with his hands on his hips. Another man is standing behind him and pouring something into a bowl. a guy is standing with his back facing someone else. The person standing behind him pours water on his back, causing him to shiver. In the video, a man is seen standing in a kitchen and washing dishes. He then puts on a white robe and continues washing dishes. In the video, we see a young man pouring water from a bucket to a plastic bowl. What is happening in the video? Why is the golfer celebrating in the video? Based on \u2026 the golfer is celebrating because they successfully hit the golf ball from a sand bunker onto the green and possibly close to the hole\u2026 Correct Response: The golfer is not celebrating and instead he is expressing his anger and frustration by hitting the golf stick into the ground. The golfer is celebrating because he made a hole-in-one. A hole-in-one is \u2026 The golfer is celebrating because he hit the ball into hole on his first attempt. The golfer is celebrating because he has just hit the ball into the hole. The golfer is celebrating because he has just made a hole in one, \u2026 The golfer is celebrating in the video because he is successfully teeing off and \u2026 Before the mug was moved, a tennis ball became visible underneath it. Correct Response: No other object was visible before the mug was moved in the video. A yellow tennis ball was visible in the scene before the mug was moved. A tennis ball. Before the mug was moved, a tennis ball was visible in the scene. In the beginning of the video, the person was holding a mug with a tennis ball on a table. Before the mug was moved, a white vase was visible in the scene. Before the mug was moved, what other object was visible in the scene? A tennis ball was visible in the scene before the mug was moved. GPT4(Vision) Gemini-Pro-V Video-LLaVA LLaMA-VID MovieChat TimeChat Figure 2: We observe that most Video-LMMs struggle to reason over complex videos (rows 1-3) and exhibit weak robustness and rectification capabilities when prompted to generate answers for user questions that can sometimes be confusing (row 4). The QA pairs in Comprehensive Video Reasoning and Robustness Evaluation Suite (CVRR-ES) benchmark assess the performance of Video-LMMs beyond general video comprehension. Based on our analysis, we observe that standard prompting of Video-LMMs struggles in steering their focus for complex video understanding. Additionally, their limitations in reasoning and robust video understanding of real-world scenarios are dominantly driven by the quality of textual inputs (i.e., user questions). 
Based on these insights, we develop a training-free Dual-Step Contextual Prompting (DSCP) technique, which effectively steers the model\u2019s behavior during inference to elicit video-specific reasoning and improved robustness within Video-LMMs. With DSCP, Video-LMMs show substantial improvements on our benchmark, suggesting the potential of prompting techniques for Video-LMMs. Our main contributions can be summarised as follows: \u2022 We present the Complex Video Robustness and Reasoning Evaluation suite (CVRR-ES), a Video Question Answering benchmark designed to assess the reasoning and robustness capabilities of Video-LMMs across 11 diverse world-centric complex video dimensions. \u2022 We comprehensively evaluate both open-source and closed-source Video-LMMs on the CVRR-ES benchmark and find that most models exhibit weak performance, highlighting their limited reasoning in complex videos and lack of robustness towards user text queries. \u2022 We conduct extensive analysis and formulate important conclusions about Video-LMMs based on their failure cases and performance on the CVRR-ES benchmark. Our findings provide valuable insights for building the next generation of human-centric AI systems with improved robustness and reasoning capabilities. \u2022 To improve Video-LMMs\u2019 reasoning and robustness abilities, we formulate a model-agnostic and training-free prompting technique that effectively enhances their performance. 3 \f2 Related Works Video Large Multi-modal models (Video-LMMs). Video-LMMs [Lin et al., 2023, Li et al., 2023d, Zhang et al., 2023] are advanced visual chatbots capable of performing a wide range of video understanding tasks, including video comprehension and captioning, video question-answering, and action grounding. These models accept both video and textual inputs and generate textual responses. From an architectural perspective, Video-LMMs typically combine pre-trained vision backbones [Radford et al., 2021, Fang et al., 2023, Wang et al., 2022b] with large language models [Touvron et al., 2023, Zheng et al., 2023] using connector modules such as MLP adapters, Q-former [Dai et al., 2023], and gated attention [Alayrac et al., 2022]. VideoChat [Li et al., 2023b] and VideoChat-GPT [Li et al., 2023d] presented initial open-source efforts in this direction and were trained with two stages of alignment and video-instruction following objectives. Recently, more advanced Video-LMMs have emerged in the field, with some models focusing on improving model architectures [Li et al., 2023d], expanding to new tasks [Munasinghe et al., 2023], and enabling support for long videos [Song et al., 2023, Ren et al., 2023]. In this work, we aim to develop a comprehensive benchmarking evaluation framework to assess the reasoning and robustness capabilities of Video-LMMs and develop a training-free prompting technique to improve their performance on these fronts. Benchmarking Video-LMMs. With the growing number of Video-LMMs emerging in the research community, several works have presented evaluation frameworks to assess and quantify these models for benchmarking and analysis purposes. SEED-Bench [Li et al., 2023a] evaluates the visual capabilities in both image and Video-LMMs across 12 unique dimensions. MV-Bench [Li et al., 2023c] curates 20 challenging video tasks to evaluate spatial and temporal understanding of VideoLMMs. 
Video-ChatGPT [Maaz et al., 2023] develops a quantitative evaluation framework to assess model understanding across five aspects of general video comprehension, such as the correctness and consistency of model captions. While these evaluation frameworks provide effective insights, their assessments do not extend beyond general video-comprehension metrics to more advanced aspects of reasoning and robustness, particularly for real-world context cases. In contrast, our work focuses on providing a complex video reasoning and robustness benchmark across 11 diverse real-world-centric evaluation types and offers a more thorough assessment of Video-LMMs in practical applications. Training-free Prompting Techniques. Steering model behavior at inference time using prompting has become a common paradigm in the NLP domain. Prompting [Wei et al., 2022b, Wang et al., 2022a] refers to the set of instructions given as a prefix to the language model to better align model responses with human intent without the need for task-specific fine-tuning. Prompting techniques can be as simple as a single sentence (e.g., \"Let\u2019s think step by step\") such as zero-shot chain of thought [Wei et al., 2022b] prompting, to more detailed techniques such as combining chain-ofthought prompting with few-shot learning [Brown et al., 2020] and self-consistency chain of thought prompting [Wang et al., 2022a]. Surprisingly, training-free prompting techniques for Video Large Multi-modal Models (Video-LMMs) have been minimally explored. In this work, we develop a dual-step prompting technique based on principled prompt instructions specifically designed to steer the model\u2019s behavior for improved reasoning and robustness over complex videos. 3 Complex Video Reasoning and Robustness Evaluation Suite As Video-LMMs are touching new real-world applications, it is essential to ensure that they robustly handle the user inputs, comprehend the visual world, and exhibit human-like reasoning capabilities. In this work, our goal is to establish a comprehensive benchmark that specifically assess the robustness and reasoning capabilities of Video-LMMs in a variety of complex and contextual videos covering diverse scenarios. To this end, we present Complex Video Reasoning and Robustness Evaluation Suite (CVRR-ES). We first provide a holistic overview of CVRR-ES benchmark below and detail the video evaluation dimensions in Sec. 3.1. Subsequently, we present the CVRR-ES creation process in Sec. 3.2. We provide details on the dataset quality and human evaluation in Appendix B. Overview of CVRR-ES Benchmark. CVRR-ES encompasses evaluation dimensions that cover diverse video categories related to real-world scenarios, ranging from context-dependent (e.g., social, emotional) categories to video types that often take place in the wild (e.g., anomalous activities). Specifically, we have compiled 11 video evaluation dimensions and curated 2,400 high-quality openended question-answer (QA) pairs, spanning 217 high-quality videos. The average video duration is 22.3 seconds, with maximum and minimum durations of 183 and 2 seconds, respectively. In Fig. 4 \fFigure 3: CVRR-ES Benchmark Statistics. Left: Frequency distribution of the type of questions. Right: Illustration of the most frequent keywords in the answer-set of CVRR-ES benchmark. 3 (left), we quantify the distribution of different question types present in our benchmark. 
This diverse set of questions aims to comprehensively capture the model\u2019s answering capabilities based on reasoning and robustness criteria. We show the word cloud plot based on the frequency of key words in the answer set of CVRR-ES in Fig. 3 (right). The frequent words correspond to objects and attributes with which Video-LMMs could most likely interact when deployed in practical scenarios. 3.1 CVRR-ES Video Category definitions. To assess the robustness and reasoning capabilities of Video-LMMs in the CVRR-ES benchmark, we carefully curate 11 diverse benchmark evaluation categories. As shown in Fig. 1 (left), these categories encompass a wide range of real-world complex and contextual videos within each category. Below, we define each video evaluation dimension of the CVRR-ES benchmark in detail. 1) Multiple actions in a single video. This category includes videos that contain multiple activities within a single video. The number of activities varies from 2 to 4 in these videos, mostly featuring humans performing multiple activities. We curate QA pairs in this category aiming to identify whether the model can reason over challenging questions concerning multiple actions and understand the interrelation between different actions within a video. 2) Fine-grained action understanding. We gather video samples with fine-grained actions. These actions encompass various fine-grained activities performed by humans, including pushing, opening, closing, spreading, sitting, etc. This category presents a challenge to the model\u2019s comprehension of subtle and fine-grained actions through carefully crafted questions. 3) Partial actions. Based on our observations that Video-LMMs predominantly generate content that may be contextually relevant and likely to co-occur with the depicted scene in the video, we compile videos featuring actions that have a high probability of being followed by subsequent actions but are not executed in the video. For instance, an action such as cracking an egg in a kitchen setting often anticipates the subsequent action of frying/cooking the egg. 4) Time order understanding. Accurately recognizing the temporal sequence of activities in videos is crucial for distinguishing between atomic actions, such as pushing and pulling. We collect videos of fine-grained actions occurring in a particular temporal direction and curate challenging questions. 5) Non-existent actions with existent scene depictions. This category examines the model\u2019s robustness and reasoning behavior in scenarios where we introduce non-existent activities into the video without altering the physical and spatial scenes or environmental details in it. 6) Non-existent actions with non-existent scene depictions. In this evaluation category, we make the QA task more challenging by creating questions that include both non-existent activities and non-existent scene comprehension. Non-existent scene comprehension involves changing the objects, attributes of objects, and background scene description. This evaluates the model\u2019s reliability to correct misleading questions and avoid generating imaginary content. 7) Continuity and object instance count. This category contains videos (both real and simulations) designed to test the models\u2019 ability to accurately recognize the number of instances of objects, people, etc., and distinguish between existing objects and new ones introduced in the same video scene. 8) Unusual and physically anomalous activities. 
This category consists of videos with unconventional activities and physical phenomena that seemingly defy the laws of physics. We meticulously 5 \fcollect relevant videos from various sources on the internet, focusing on capturing unusual activities such as a person floating in the air or driving a motorbike on a running river. We believe that assessing Video-LMMs in such scenarios is crucial, as it allows us to determine whether they can generalize to understand actions in out-of-distribution videos that can occur in practical situations. 9) Interpretation of social context. In the real world, human actions are often influenced by social context in their surroundings. For instance, a person might be helping an elderly individual cross the road. This category evaluates Video-LMMs on such scenarios to determine their ability to accurately infer the rationale behind actions based on the depicted social context. We gather diverse videos from the internet and create challenging questions that encompass the social context dimension. 10) Understanding of emotional context. Similar to social context, humans can accurately understand and interpret each other\u2019s actions by considering the emotional context. For example, a person being emotionally moved and crying in a gathering could be a happy moment if it is one stemming from success/joy. We collect videos and curate challenging reasoning questions aimed at recognizing the nature of actions solely based on emotional context for evaluating Video-LMMs. 11) Interpretation of visual context. This dimension focuses on assessing the model\u2019s reasoning abilities to recognize the actions by leveraging the overall visual contextual cues in the video. We curate specific videos containing actions where activity identification and reasoning require visual contextual cues. For example, to identify the number of people present based on the presence of shadows, one must utilize the visual context from the shadows to reason about the question. Qualitative Examples. Fig. 2 shows examples of collected videos for the CVRR-ES benchmark. The curated videos are carefully selected to be diverse and contain rich spatio-temporal content, aligned with the proposed video evaluation dimensions. 3.2 Building CVRR-ES Benchmark After defining the video evaluation dimensions, we now proceed toward building the CVRR-ES benchmark which consists of three stages. We present each stage in detail below. Stage 1: Data collection and Annotation. We first collect high-quality videos and annotate each video using human assistance. To ensure that each evaluation dimension captures the relevant attributes and information, we meticulously select videos that are representative of specific characteristics associated with that dimension. Across the 11 dimensions, 214 unique videos are selected for the benchmark with around 20 videos per evaluation category. Around 60% of these videos are collected from public academic datasets. To introduce diversity in the benchmark distribution, we incorporate video samples from multiple academic datasets including Something-Something-v2 [Goyal et al., 2017], CATER [Girdhar and Ramanan, 2020], Charades [Sigurdsson et al., 2016], ActivityNet [Caba Heilbron et al., 2015], HMDB51 [Kuehne et al., 2011], YFCC100M [Thomee et al., 2016]. The remaining 40% of videos are collected from the internet. Following the video collection process, two experienced human annotators are assigned to generate captions for each video. 
For videos where initial captions or metadata are available from academic datasets, the captions are generated by the annotators based on them. For videos collected from the internet, captions are entirely generated by human annotators. To ensure consistency and high quality, we provide annotation instructions to annotators, who generate captions accordingly. Personalized annotation guidelines are used for each video category. Refer to additional details in Appendix B. Stage 2: Question-Answer Generation. The first challenge is to select an evaluation setting to assess Video-LMMs. Humans typically engage in free-form conversation to interact with each other in day-to-day life. Inspired by this, we aim to simulate a similar style of interaction with Video-LMMs by curating open-ended QA pairs to evaluate these models for robustness and reasoning. We feed detailed ground-truth video captions to GPT-3.5 LLM, which are utilized to generate open-ended questions covering both reasoning and robustness aspects. Reasoning QA pairs: With Video-LMMs beginning to interact more directly with humans in our lives, it\u2019s crucial to validate the reasoning abilities of Video-LMMs for more reliable Human-AI interaction. When evaluating the reasoning capabilities of Video-LMMs, we aim to determine whether these models can understand the input video not only by analyzing spatial content but also by grasping the underlying rationale behind the occurring activities and their relationships with the surrounding context. This involves creating questions that go beyond simple video comprehension and scene 6 \fdescription and require the model to engage in complex logical inference, contextual understanding, and reasoning about counterfactual and hypothetical scenarios. Robustness QA pairs: In addition to evaluating the reasoning capabilities of LLMs, it is important to assess Video-LMMs to ensure their robust and responsible performance in real-world scenarios. In the context of Video-LMMs, robustness can be evaluated from both visual (video input) and textual interfaces. Our focus in this work lies on textual interface robustness by particularly testing the model\u2019s comprehension when posed with misleading or confusing questions. This scenario mirrors realistic situations where users, based on their expertise levels, may pose irrelevant, misleading, or confusing questions. It is crucial for models to demonstrate reliability and robustness in handling such queries and avoid generating unreal or hallucinated content for input videos. We curate specific prompts for each evaluation dimension to instruct LLM in generating QA pairs. Example prompts used as an instruction to LLMs for curating QA pairs for robustness and reasoning aspects are provided in Fig. 14 in the Appendix D. Stage 3: QA Pairs Filtration. After generating QA pairs, a manual filtration step is employed, with human assistance to verify each generated QA pair. Approximately 30% of the QA pairs generated by GPT-3.5 are found to be noisy, containing questions that are unrelated to the video evaluation dimensions or unanswerable based on the provided ground-truth captions. Additionally, many questions contain answers within the question itself. Therefore, an exhaustive filtering process is conducted which involves QA rectification and removing those samples which are not relevant to the video or evaluation type. This process results in a final set of 2400 high-quality QA pairs for the CVRR-ES benchmark. Examples of QA pairs are shown in Tab. 
4 in the Appendix. Stage 4: Evaluation Procedure. Previous methods in the literature [Maaz et al., 2023, Cai et al., 2023, Liu et al., 2023a, Qian et al., 2024] have explored using LLM models as judges for quantifying results in open-ended QA benchmarks. We adopt a similar approach and instruct LLMs to act as teachers to assess the correctness of predicted responses from Video-LMMs compared to ground-truth answers. We generate open-ended predictions from Video-LMMs by providing video-question pairs as inputs and then present the model predictions and their corresponding ground-truth responses to the LLM Judge alongside the evaluation prompt. The Judge determines whether the prediction is correct or incorrect through a binary judgment, assigns a score from 1 to 5 representing the quality of the prediction, and provides a reasoning to explain its decision. Our ablative analysis in the Appendix. D demonstrates that reasoning-constrained LLM-based evaluation aligns well with human-based judgment. The evaluation prompt is shown in Fig. 13 in the Appendix D. 4 Dual-Step Contextual Prompting for Video-LMMs. Given their wide-scale potential in practical downstream applications, new Video-LMMs are frequently introduced by the research community. Despite the availability of numerous Video-LMMs, the majority of them are trained using only positive examples and video-conversational templates that are primarily limited to tasks such as video-captioning and video question answering. This leads to highly over-affirmative behavior and a lack of self-rectification abilities in these models (Sec. 5.4). Dual Step Contextual Prompting for Video-LMMs Retrieving Contextual reasoning information (Step 1) As an intelligent video comprehension model, focus on these guidelines: 1. Differentiate recurring objects, count accurately, and identify movements and poses. 2. Understand directional movements and temporal order. 3. Pay attention to fine-grained actions with precision. 4. Assess incomplete actions without assuming completion. 5. Detect emotional, social, and visual cues. 6. Capture and analyze all relevant actions. 7. Identify unusual actions accurately. 8. Disagree with incorrect information given in question. 9. If you do not find the evidence in the frames, you can give a definite answer by assuming that the asked action/attribute is not present. 10. Provide to the point and concise response. Now, proceed with answering the following question faithfully while keeping above guidelines in mind: Question: What is happening in the video? Context conditioned question-answering (Step 2) Context for the given video is: {step 1 response}. Now answer a question truthfully based on the video and the provided context. Question: {User question} Figure 4: Principled prompt instructions in our DSCP method for improving reasoning and robustness in Video-LMMs. Additionally, the templates have minimal focus on enhancing reasoning and robustness capabilities through reasoning-based instruction-tuning pairs, resulting in weak performance of such models against robustness and reasoning QA evaluations in the CVRR-ES benchmark. Furthermore, curating reasoning-based instruction fine-tuning datasets requires meticulous data curation steps, and retraining these models is computationally expensive [Li et al., 2023d, Ren et al., 2023]. 
Alternatively, training-free prompting techniques in NLP literature have shown effectiveness in eliciting reasoning abilities in LLMs such as chain of thought and self-consistency prompting [Wei et al., 2022b, Wang et al., 2022a]. Inspired by these approaches, we introduce a prompting technique called Dual Step Contextual Prompting (DSCP), which aims to steer VideoLMM focus for enhanced reasoning while simultaneously encouraging the models to provide robust and 7 \fVideo-LLaVA LLaMA-VID MovieChat How many different wigs with different colors did the young girl try before settling on the black one? I can see the young girl wearing a black wig, and a brunette wig is in the background. The girl is wearing a pink shirt in the beginning of the video The young girl tried on three different wigs with different colors before settling on the black one. The young girl tried on three different wigs with different colors before settling on the black one. Improved video reasoning and robustness with our DSCP Method The video does not provide information about the number of different wigs the young girl tried before settling on the black one. I cannot give an accurate answer as the video does not show the different wigs that the young girl tried. The video does not provide information about the number of wigs the young girl tried before settling on the black one Is the atmosphere in the dressing room tense and uncomfortable as the girl receives her new hair? Yes, the atmosphere in the dressing room is tense and uncomfortable as the girl receives her new hair. The video does not provide any information about the atmosphere in the dressing room. The video does not provide any information about the atmosphere in the dressing room. However, it is possible \u2026 Improved video reasoning and robustness with our DSCP Method No, the atmosphere in the dressing room is not tense and uncomfortable. The woman brushing the hair is smiling, and the woman in the chair is smiling as \u2026 \u2026, but the video does not suggest that the girl is uncomfortable or tense while receiving her new hair. No, the atmosphere in the dressing room is not tense or uncomfortable. The girl is laughing and smiling \u2026, indicating that she is enjoying the experience. Can we see visitors petting the white tiger through the cage bars? Sure, in this video we can see a baby panda inside a cage with bars. There are also other \u2026 Yes, we can see visitors petting the white tiger through the cage bars in the video. Yes, we can see visitors petting the white tiger through the cage bars. Improved video reasoning and robustness with our DSCP Method No, we cannot see visitors petting the white tiger through the cage bars in the video. No, we do not see visitors petting the white tiger through the cage bars in the video. No, we cannot see visitors petting the white tiger through the cage bars in the video. Does the chimpanzee use tools to repair the zoo vehicle in the video? Yes, the chimpanzee in the zoo is seen using tools to fix the zoo vehicle. Yes, the chimpanzee is seen using a tool to fix the vehicle in the video. Yes, the chimpanzee is seen using a tool to repair the zoo vehicle in the video. Improved video reasoning and robustness with our DSCP Method No, the chimpanzee does not use tools to repair the zoo vehicle in the video. \u2026 the chimpanzee driving the car \u2026 The chimpanzee does not use tools to repair the zoo vehicle in the video. 
It can be seen driving the car \u2026 No, the chimpanzee does not use tools to repair the zoo vehicle in the video. Figure 5: Qualitative results of DSCP prompting method. Using our DSCP approach, Video-LMMs demonstrate enhanced robustness and reasoning capabilities over complex videos. grounded answers. DSCP is a two-step prompting method that 1) ensures that the model comprehends the video while reasoning over crucial aspects of complex video understanding such as contextual information and decoding the complex relationships between objects and motions, etc., and 2) encourages robustness by generating the response against the question while conditioning both on video and the context retrieved in the first step. Below we discuss each step of DSCP in detail. Step 1: Reasoning over the video. We first guide Video-LMMs using principled prompts to interpret video content from a reasoning perspective. As shown in Fig. 4 (in blue), we formulate ten principled reasoning-based instructions for prompting, Preason, which directs Video-LMMs to not only comprehend the general video content but also steers them to reason over the rationale behind occurring activities and their relationships with the surrounding context. These prompt instructions include specific considerations like contextual priors, the temporal order of actions, instance count, and attributes. Additionally, the prompting technique incorporates instructions to ensure conciseness and factuality, aiming to mitigate hallucinations. Given a Video-LMM F and input video V, we retrieve contextual reasoning information Icontext by providing principled reasoning prompt Preason along with the video to the LMM, Icontext = F(Preason|V). The contextual information is utilized in the second step of DSCP to generate a more grounded response to the user question. Step 2: Context conditioned question answering. As discussed earlier, Video-LMMs are primarily trained with positive examples to answer questions, with limited emphasis on reasoning and robustness aspects. Consequently, enabling direct interaction of Video-LMMs with users in real-world scenarios can result in undesired responses when the user question is confusing and deceiving due to their extreme over-affirmative behavior. To address these challenges, we propose incorporating an additional inference step in Video-LMMs before answering the user\u2019s question. We note that Video-LMMs often possess factual knowledge about the video content but may become distracted and produce hallucinations when prompted with confusing or misleading questions (more details in Appendix C). Specifically, we devise a prompting method that conditions the model to first comprehend the video in detail without attending to the user question, thereby eliminating the influence of the question. The complex video comprehension information refers to Icontext formulated in step 1. Subsequently, we pose the user question in the second step using prompt Puser which combines user question and the contextual reasoning information (Fig. 4, in green) while conditioning the model on both the video and the contextual reasoning information Icontext. Concretely, Final response = F(Puser|V), where Puser = [question; Icontext]. 8 \fTable 2: Evaluation results of Video LLMs across various video-evaluation categories on the CVRR-ES benchmark. We present results for both open-source and closed-source models, alongside human evaluation results which serves as the upper bound on the benchmark. 
Benchmark Category | Video-LLaMA-2 | VideoChat | Video-ChatGPT | Video-LLaVA | MovieChat | LLaMA-VID | TimeChat | Gemini-V Pro | GPT4V | Human
Multiple actions in a single video | 16.98 | 23.90 | 27.67 | 15.72 | 12.58 | 17.92 | 28.30 | 43.08 | 57.55 | 93.40
Fine-grained action understanding | 29.57 | 33.48 | 26.96 | 25.22 | 23.48 | 26.09 | 39.13 | 51.61 | 77.39 | 95.65
Partial actions | 24.76 | 33.01 | 22.82 | 13.59 | 21.36 | 14.56 | 49.51 | 67.48 | 73.79 | 98.54
Time order understanding | 16.45 | 31.58 | 27.63 | 21.05 | 16.45 | 19.74 | 34.21 | 45.39 | 57.89 | 97.37
Non-existent actions with existent scene | 10.14 | 15.22 | 23.19 | 5.07 | 5.07 | 2.90 | 23.19 | 57.25 | 71.01 | 97.10
Non-existent actions with non-existent scene | 13.19 | 14.58 | 17.36 | 3.47 | 11.81 | 6.94 | 13.89 | 49.64 | 75.00 | 100.00
Continuity and object instance count | 28.25 | 24.29 | 28.41 | 21.47 | 19.77 | 24.86 | 34.46 | 36.16 | 62.71 | 96.49
Unusual and physically anomalous activities | 18.95 | 18.42 | 18.95 | 15.79 | 17.89 | 16.32 | 27.37 | 60.00 | 74.74 | 96.84
Interpretation of social context | 25.00 | 31.07 | 32.50 | 18.93 | 17.14 | 13.93 | 39.29 | 64.29 | 79.64 | 97.51
Understanding of emotional context | 21.92 | 23.63 | 21.23 | 15.07 | 13.70 | 14.73 | 27.40 | 47.26 | 66.44 | 95.55
Interpretation of visual context | 32.60 | 34.43 | 27.84 | 19.78 | 21.25 | 23.08 | 45.05 | 63.00 | 82.42 | 94.87
Average | 21.62 | 25.78 | 24.96 | 15.92 | 16.41 | 16.46 | 32.89 | 53.20 | 70.78 | 96.67
Intuitively, the factual content generated in the first step guides the model towards a robust, correct response in the second step, even in the presence of noisy or misleading user questions. We illustrate qualitative results of the DSCP method in Fig. 5. This approach leads to responses that are better grounded in the actual video content and robust against lower-quality user queries. As we will later show, the DSCP technique effectively enhances the performance of Video-LMMs on the CVRR-ES benchmark. 5 Evaluation Experiments on CVRR-ES. Video-LMMs. Both open-source and closed-source models are selected for the evaluation. Among the open-source models, we evaluate 7 recent Video-LMMs: Video-LLaVA [Lin et al., 2023], TimeChat [Ren et al., 2023], MovieChat [Song et al., 2023], LLaMA-ViD [Li et al., 2023d], VideoChat [Li et al., 2023b], Video-ChatGPT [Maaz et al., 2023], and Video-LLaMA-2 [Zhang et al., 2023]. For the closed-source models, we use Gemini-Pro-Vision [Google, 2023] and GPT-4V(vision) [OpenAI, 2023]. Refer to Appendix A for implementation details. 5.1 Main Experiments on CVRR-ES. In Tab. 2, we present the evaluation results of Video-LMMs on the 11 dimension categories of the CVRR-ES benchmark. Below, we present several key findings. Open-source Video-LMMs struggle on the CVRR-ES benchmark. All open-source LMMs show inferior performance across the different evaluation dimensions of CVRR-ES. Interestingly, some of the earlier open-source Video-LMMs, such as Video-LLaMA, VideoChat, and Video-ChatGPT, exhibit higher performance than more recent models such as Video-LLaVA, MovieChat, and LLaMA-VID. Overall, TimeChat achieves the highest performance among open-source LMMs, 32.89% averaged across the 11 evaluation dimensions, followed by VideoChat at 25.78%. Humans rank highest on the CVRR-ES benchmark. Human studies achieve the highest performance on the CVRR-ES benchmark, with over 95% accuracy across all evaluation dimensions. These results also confirm that the CVRR-ES QA pairs are answerable and suitable for benchmarking. Closed-source models perform competitively on CVRR-ES. As shown in Tab.
2, both Gemini and GPT4V surpass the performance of open-source models and achieve high gains across all evaluation dimensions. The competitive results of GPT4V and Gemini on complex video evaluation dimensions such as partial actions, non-existent action/scene depiction, and context-dependent categories show 9 \fPrompting Method VideoChat Video-LLaVA MovieChat LLaMA-VID TimeChat Standard prompting 25.78 15.92 16.41 16.46 32.89 Chain of Thought (CoT) prompting 22.44 25.87 15.89 29.68 39.57 DSCP (Stage 1) 38.07 32.12 28.05 25.13 33.04 DSCP (Both stages) 47.92 37.93 35.87 46.85 39.45 Table 3: Prompting methods. DSCP stage 1 uses only the principled instructions designed in step 1, while DSCP (Both stages) uses the complete dual-step prompting technique. that these models have a more sophisticated understanding of the complex visual contents of videos and have strong capabilities to rectify misleading and confusing user questions. Overall, GTP4V improves over Gemini by 17.58% and provides an average accuracy of 70.78% on CVRR-ES. 5.2 Effectiveness of DSCP method for improving Video-LMMs performance 0 10 20 30 40 50 60 Accuracy % (averaged over 11 video dimensions) Video LLaVa MovieChat LLaMA-VID Video-LLaMA-2 Video-ChatGPT VideoChat TimeChat Gemini-Pro Video LMMs with DSCP +22.01 +19.46 +30.39 +16.15 +8.93 +22.14 +6.56 +5.02 Figure 6: Video-LMMs with DSCP technique effectively improves their performance (gains are shown in green) on CVRR-ES benchmark. We next integrate DSCP technique with VideoLMMs and present results on the CVRR-ES benchmark in Fig. 6. The results indicate that DSCP improves the model\u2019s performance compared with models that use standard prompting (i.e., using only the question itself). These results suggest that prompting techniques in Video-LMMs can better guide models for improved reasoning and robustness. With DSCP, initially low-performing Video-LMMs such as Video-LLaVa, MovieChat, and LLaMA-Vid show much better relative gains and become competitive with other models. The highest relative gain of 184% is achieved by LLaMA-ViD, which moves from 7th place in the leaderboard to 2nd among the open-source models after utilizing DSCP prompting. We observe similar overall positive trends of using DSCP with closed-source model Gemini, which improves on the benchmark by an absolute overall gain of 5.02%. We provide more detailed results comparisons in Appendix C. 5.3 Different prompting techniques. We study the contribution of each step of DSCP and compare it with chain-of-thought prompting [Wei et al., 2022b]. The results for the top 5 performing Video-LMMs are shown in Tab. 3. Chainof-thought prompting improves over the standard prompting technique in 3 out of 5 Video-LMMs, suggesting that prompting techniques from NLP literature can effectively guide multi-modal VideoLMMs to enhance reasoning and robustness. Next, we ablate on the first step of DSCP prompting, which uses the principled instructions of DSCP step 1 as a prefix alongside the actual user question. Using the first step prompting technique of DSCP substantially improves model performance on all Video-LMMs, suggesting the effectiveness of the principled prompt instructions designed specifically for Video models. DSCP with both steps, which integrates an additional thinking step in the prompting step, further improves the results and provides the highest results on 4 out of 5 Video-LMMs. 
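To make the dual-step procedure compared above concrete, the following is a minimal sketch of how DSCP-style inference could be wrapped around a generic Video-LMM. The `generate` callable, the video-path interface, and the abridged wording of the Step-1 instructions are illustrative assumptions rather than any specific model's API; the two calls mirror Icontext = F(Preason|V) followed by Final response = F([question; Icontext]|V) as described in Sec. 4.

```python
# Minimal sketch of Dual-Step Contextual Prompting (DSCP) around a generic Video-LMM.
# `generate(video_path, prompt)` is a hypothetical interface: it should run one forward
# pass of the Video-LMM on the video with the given text prompt and return a string.

from typing import Callable

# Step-1 principled reasoning prompt (abridged paraphrase of the instructions in Fig. 4).
REASONING_PROMPT = (
    "As an intelligent video comprehension model, differentiate recurring objects, "
    "count instances accurately, track directional movements and temporal order, "
    "attend to fine-grained and incomplete actions without assuming completion, "
    "use emotional, social, and visual cues, disagree with incorrect premises in "
    "questions, and keep the response concise and factual.\n"
    "Question: What is happening in the video?"
)


def dscp_answer(
    generate: Callable[[str, str], str],  # hypothetical Video-LMM inference interface
    video_path: str,
    user_question: str,
) -> str:
    """Answer `user_question` about the video using dual-step contextual prompting."""
    # Step 1: retrieve contextual reasoning information, I_context = F(P_reason | V).
    context = generate(video_path, REASONING_PROMPT)

    # Step 2: context-conditioned question answering, response = F([question; I_context] | V).
    conditioned_prompt = (
        f"Context for the given video is: {context}\n"
        "Now answer a question truthfully based on the video and the provided context.\n"
        f"Question: {user_question}"
    )
    return generate(video_path, conditioned_prompt)
```

Because both steps are plain text prompts, such a wrapper stays model-agnostic and training-free: swapping the `generate` callable for any of the evaluated Video-LMMs (or an API-based model) is enough to reproduce the settings contrasted in Tab. 3.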
5.4 Main findings and Qualitative Results Based on the results of Video-LMMs on CVRR-ES, we draw key findings and show qualitative results. These insights can serve as valuable guidance for developing the next generation of Video-LMMs, aiming to make them more robust and reliable when deployed in real-world applications. Models excelling at standard VQA benchmarks struggle on CVRR-ES benchmark. Our analysis in Sec. 5.1 reveals that the latest open-source Video-LMMs, such as Video-LLaVA, MovieChat, and LLaMA-VID, perform less effectively on the CVRR-ES benchmark compared to Video-LMMs that were introduced earlier in the community, such as VideoChat and Video-ChatGPT. Interestingly, the same recent models demonstrate superior performance on general video comprehension benchmarks. This discrepancy suggests that current VQA benchmarks, like ActivityNet-QA [Yu et al., 2019] and MSRVTT [Xu et al., 2017], do not adequately correlate with the complex video reasoning and robustness scenarios highlighted in our benchmark. Consequently, this also indicates that most newer Video-LMMs are heavily trained to excel on the general video comprehension benchmarks while reducing their generalizability, reasoning, and robustness capabilities. Over-affirmative behavior of open-source Video-LMMs. Another important observation about open-source models is their tendency to exhibit excessively positive and affirmative responses. As shown in Fig. 7, open-source Video-LMMs consistently respond with \"Yes\" even when faced with 10 \fconfusing questions that describe non-existent actions and objects. This highlights the vulnerability of these models when interacting with users in real-world scenarios. In our CVRR-ES benchmark, opensource models are particularly vulnerable to our evaluation dimensions of \"Non-existent actions with the existent scene\" and \"Non-existent actions with the non-existent scene\" compared to closed-source models. These models lack negation and self-rectification capabilities, especially when users provide misleading or confusing questions. We conjecture that such behavior arises due to the absence of negative instruction tuning pairs during the training of Video-LMMs. Tendency towards activity completion. Most open-source Video-LMMs have shown weak performance on the evaluation dimension of partial actions in CVRR-ES, which contains videos focusing on incomplete or atomic actions. To further analyze the models\u2019 behavior, we show qualitative results on such videos in Fig. 8. It can be observed that most open-source models tend to complete actions, even when only part of the action is provided in the video. For instance, Video-LLaVA struggles to reason over the video and describes the man as kicking the soccer ball, while the action in the video stops at the point of the man placing his foot beside the ball. We observe similar behavior in other Video-LMMs. Upon examining the fine-tuning strategies [Maaz et al., 2023, Liu et al., 2023b], we find that almost all models are trained on end-to-end actions-based instruction-tuning data, causing them to generate complete action descriptions at inference. This tendency highlights the vulnerability of Video-LMMs after deployment, as real-world scenarios often involve atomic, sub-atomic, and general actions alike. To improve the performance of Video-LMMs, it is crucial to incorporate diverse action types during training, including partial and incomplete actions. Weak Generalization to extreme OOD videos. 
The evaluation dimension of unusual and physically anomalous activities in CVRR-ES resembles extreme out-of-distribution video examples. With the exception of GPT4V and Gemini, Video-LMMs struggle with this dimension, indicating weak generalizability towards OOD videos containing the coexistence of unusual objects and activities that are extremely rare in typical videos. For instance, Video-LLaVA in Fig. 9 describes a person falling on the street, while the video actually shows the person performing an optical illusion. To be responsibly deployed in real-world applications, where OOD actions occur more frequently, Video-LMMs need to be trained to perform more robustly on OOD samples. This may involve incorporating diverse and atypical examples in the training data to improve the model\u2019s ability to handle unusual situations. Limited understanding of temporal order in complex videos. The CVRR-ES benchmark results show that Video-LMMs perform relatively better on the fine-grained action dimension compared to the time-order understanding dimension. While these models can accurately identify fine-grained actions, they struggle with comprehending the correct temporal order of these actions within a video. This limitation can lead to misinterpretations of the underlying information depending on temporal order. We present failure cases of this dimension in Fig. 10. For building more advanced world-centric Video-LMMs, it is crucial to enhance their ability to process and interpret event sequences accurately. Video-LMMs struggles in understanding the emotional and social context. For more reliable interaction between Video-LMMs and humans in practical scenarios, these models should comprehend the spatio-temporal scenes with social and contextual reasoning capabilities similar to humans. The lower performance of Video-LMMs on social and emotional contextual dimensions in CVRR-ES highlights their limitations and lack of understanding of scenes based on contextual cues. For instance, as shown in Fig. 11 (bottom row), GPT-4V struggles to comprehend a scene where a worker is attempting to prevent shoes from getting wet due to the rain by moving them under the shade. Instead, GPT-4V provides a response that contradicts the social cues present in the video. 6", + "additional_graph_info": { + "graph": [ + [ + "Muhammad Uzair Khattak", + "Muzammal Naseer" + ], + [ + "Muhammad Uzair Khattak", + "Salman Khan" + ], + [ + "Muhammad Uzair Khattak", + "Muhammad Ferjad Naeem" + ], + [ + "Muzammal Naseer", + "Salman Khan" + ], + [ + "Muzammal Naseer", + "Munawar Hayat" + ], + [ + "Muzammal Naseer", + "Salman H. Khan" + ], + [ + "Salman Khan", + "Izzeddin Teeti" + ], + [ + "Salman Khan", + "Mohamed Elhoseiny" + ], + [ + "Salman Khan", + "Fabio Cuzzolin" + ], + [ + "Muhammad Ferjad Naeem", + "Yongqin Xian" + ], + [ + "Muhammad Ferjad Naeem", + "Xiaohua Zhai" + ], + [ + "Muhammad Ferjad Naeem", + "Lukas Hoyer" + ] + ], + "node_feat": { + "Muhammad Uzair Khattak": [ + { + "url": "http://arxiv.org/abs/2405.03690v2", + "title": "How Good is my Video LMM? Complex Video Reasoning and Robustness Evaluation Suite for Video-LMMs", + "abstract": "Recent advancements in Large Language Models (LLMs) have led to the\ndevelopment of Video Large Multi-modal Models (Video-LMMs) that can handle a\nwide range of video understanding tasks. These models have the potential to be\ndeployed in real-world applications such as robotics, AI assistants, medical\nsurgery, and autonomous vehicles. 
The widespread adoption of Video-LMMs in our\ndaily lives underscores the importance of ensuring and evaluating their robust\nperformance in mirroring human-like reasoning and interaction capabilities in\ncomplex, real-world contexts. However, existing benchmarks for Video-LMMs\nprimarily focus on general video comprehension abilities and neglect assessing\ntheir reasoning capabilities over complex videos in the real-world context, and\nrobustness of these models through the lens of user prompts as text queries. In\nthis paper, we present the Complex Video Reasoning and Robustness Evaluation\nSuite (CVRR-ES), a novel benchmark that comprehensively assesses the\nperformance of Video-LMMs across 11 diverse real-world video dimensions. We\nevaluate 9 recent models, including both open-source and closed-source\nvariants, and find that most of the Video-LMMs, especially open-source ones,\nstruggle with robustness and reasoning when dealing with complex videos. Based\non our analysis, we develop a training-free Dual-Step Contextual Prompting\n(DSCP) technique to enhance the performance of existing Video-LMMs. Our\nfindings provide valuable insights for building the next generation of\nhuman-centric AI systems with advanced robustness and reasoning capabilities.\nOur dataset and code are publicly available at:\nhttps://mbzuai-oryx.github.io/CVRR-Evaluation-Suite/.", + "authors": "Muhammad Uzair Khattak, Muhammad Ferjad Naeem, Jameel Hassan, Muzammal Naseer, Federico Tombari, Fahad Shahbaz Khan, Salman Khan", + "published": "2024-05-06", + "updated": "2024-05-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Recently, Large Language Models (LLMs) [Touvron et al., 2023, Zheng et al., 2023, Jiang et al., 2024] have demonstrated impressive reasoning and planning capabilities while simultaneously handling a wide range of NLP tasks [Wei et al., 2022a, Brown et al., 2020]. Consequently, their integration with the vision modality, specifically for video understanding tasks, has given rise to Video Large Multi-modal Models (Video-LMMs) [Li et al., 2023b]. These models act as visual chatbots that accept both text and video as input and handle a diverse set of tasks, including video comprehension [Maaz et al., 2023], detailed video understanding [Lin et al., 2023], and action grounding [Zhang et al., 2023]. As these models directly capture video data, they hold substantial potential for deployment in real-world applications such as robotics, surveillance, medical surgery, and autonomous vehicles. However, as these models assume an expanding role in our everyday lives, assessing their performance in comprehending complex videos and demonstrating reliable reasoning and robustness capabilities arXiv:2405.03690v2 [cs.CV] 8 May 2024 \fBenchmark Textual Complex In the wild Contextual Multiple Temporal Order Robustness Reasoning (OOD) Dependency Actions & Fine-grained MSVD-QA [Xu et al., 2017] MSRVTT-QA [Xu et al., 2017] TGIF-QA [Jang et al., 2017] Activity Net-QA [Yu et al., 2019] VideoChat-GPT [Maaz et al., 2023] MVBench [Li et al., 2023c] SEED-Bench [Li et al., 2023a] CVRR-ES (ours) Table 1: Comparison of CVRR-ES with existing benchmarks for video QA. The CVRR-ES benchmark represents an initial effort to assess Video-LMMs in the context of their applicability and suitability in real-world applications. Non-existent actions with non-existent scene depictions. 6.0% Multiple actions in a single video. 13.25% Fine-grained action understanding. 9.58% Partial actions. 
8.58% Non-existent actions with existent scene depictions. 5.75% Interpretation of visual context. 11.38% Continuity and Object Instance Count. 7.38% Unusual and Physically Anomalous activities. 7.92% Interpretation of social context. 11.67% Understanding of emotional context. 12.17% Time order understanding. 6.33% CVRR Evaluation Suite 0 20 40 60 80 100 Accuracy % (averaged over 11 video dimensions) Video LLaVa MovieChat LLaMA-VID Video-LLaMA-2 Video-ChatGPT VideoChat TimeChat Gemini-Pro GPT4V(ision) Human Video LMMs 15.92% 16.41% 16.46% 21.62% 24.96% 25.78% 32.89% 53.2% 70.78% 96.67% Figure 1: Left: CVRR-ES comprises of 11 diverse complex video evaluation dimensions encompassing a variety of complex, real-world contexts. Right: Overall performance of Video-LMMs on the CVRR-ES benchmark. Results for each Video-LMM are averaged across 11 video dimensions. across diverse real-world contexts becomes essential. Video-LMMs with such capabilities will be more effective when integrated into our daily lives for solving perception tasks and will be a promising step towards building human-centric AI-assistive systems. Several attempts in literature have been made to benchmark Video-LMMs. SEED-Bench [Li et al., 2023a] curated a MCQ-based benchmarking dataset including 3 evaluation dimensions for videos. Similarly, MV-Bench [Li et al., 2023c] constructed the Video-LMM benchmark and assembled 20 challenging video tasks for evaluating the spatial and temporal understanding of these models. While these methods aim at benchmarking Video-LMMs, they predominantly evaluate video and/or temporal comprehension abilities and overlook the complex reasoning aspects of Video-LMMs for real-world context, and their robustness towards user input text queries; both of which are crucial to ensure their responsible engagement with humans in various real-world situations in the wild. While some studies have explored similar areas such as hallucinations in image-based LLMs [Liu et al., 2023a, Qian et al., 2024], no such comprehensive study exists for the case of Video-LMMs. Motivated by the wide-scale applications of Video-LMMs and the lack of world-centric complex video benchmarking efforts, we present a new benchmark, Complex Video Reasoning and Robustness Evaluation Suite (CVRR-ES), to comprehensively assess the performance of Video-LMMs. As shown in Tab. 1, CVRR-ES evaluates Video-LMMs on key aspects of robustness and reasoning in videos, encompassing video domains that more accurately test models in real-world scenarios such as videos having contextual dependency and in-the-wild aspects. CVRR-ES is an open-ended video QA benchmark comprising 11 real-world video category dimensions (Fig. 1, left) that encompass diverse evaluation aspects. These dimensions span from context-dependent (e.g., social, emotional, etc.) categories to ones that often take place in the wild such as videos containing physically anomalous activities. We comprehensively evaluate a representative set of 9 recent Video-LMMs (Fig. 1, right) including both open-source and closed-source models on the CVRR-ES benchmark using a LLM-assisted automatic evaluation framework [Maaz et al., 2023, Cai et al., 2023]. The performance of Video-LMMs on the CVRR-ES benchmark reveals that these models struggle to correctly comprehend complex videos indicating their weak reasoning and lack of robustness to the textual user queries (Fig. 2). 
For instance, state-of-the-art Video-LLaVA [Lin et al., 2023] achieves only 15.92% performance averaged across the 11 video dimensions of CVRR-ES. In contrast, closed-source models, including GPT4V(vision) [OpenAI, 2023] and Gemini-Vision-Pro [Google, 2023], exhibit relatively stronger performance but still lag behind humans. Using the CVRR-ES benchmark, we perform extensive quantitative and qualitative analysis, formulating important insights into these Video-LMMs based on their failure cases and individual performances across the diverse video dimensions. [Figure 2 compares responses from GPT4(Vision), Gemini-Pro-V, Video-LLaVA, LLaMA-VID, MovieChat, and TimeChat on example videos.] Figure 2: We observe that most Video-LMMs struggle to reason over complex videos (rows 1-3) and exhibit weak robustness and rectification capabilities when prompted to generate answers for user questions that can sometimes be confusing (row 4). The QA pairs in the Complex Video Reasoning and Robustness Evaluation Suite (CVRR-ES) benchmark assess the performance of Video-LMMs beyond general video comprehension. Based on our analysis, we observe that standard prompting of Video-LMMs struggles to steer their focus for complex video understanding. Additionally, their limitations in reasoning and robust video understanding of real-world scenarios are dominantly driven by the quality of textual inputs (i.e., user questions). Based on these insights, we develop a training-free Dual-Step Contextual Prompting (DSCP) technique, which effectively steers the model\u2019s behavior during inference to elicit video-specific reasoning and improved robustness within Video-LMMs. With DSCP, Video-LMMs show substantial improvements on our benchmark, suggesting the potential of prompting techniques for Video-LMMs. Our main contributions can be summarized as follows: \u2022 We present the Complex Video Reasoning and Robustness Evaluation Suite (CVRR-ES), a Video Question Answering benchmark designed to assess the reasoning and robustness capabilities of Video-LMMs across 11 diverse world-centric complex video dimensions. \u2022 We comprehensively evaluate both open-source and closed-source Video-LMMs on the CVRR-ES benchmark and find that most models exhibit weak performance, highlighting their limited reasoning in complex videos and lack of robustness towards user text queries. \u2022 We conduct extensive analysis and formulate important conclusions about Video-LMMs based on their failure cases and performance on the CVRR-ES benchmark. Our findings provide valuable insights for building the next generation of human-centric AI systems with improved robustness and reasoning capabilities. \u2022 To improve Video-LMMs\u2019 reasoning and robustness abilities, we formulate a model-agnostic and training-free prompting technique that effectively enhances their performance. 2 Related Works Video Large Multi-modal models (Video-LMMs). Video-LMMs [Lin et al., 2023, Li et al., 2023d, Zhang et al., 2023] are advanced visual chatbots capable of performing a wide range of video understanding tasks, including video comprehension and captioning, video question-answering, and action grounding. These models accept both video and textual inputs and generate textual responses. From an architectural perspective, Video-LMMs typically combine pre-trained vision backbones [Radford et al., 2021, Fang et al., 2023, Wang et al., 2022b] with large language models [Touvron et al., 2023, Zheng et al., 2023] using connector modules such as MLP adapters, Q-former [Dai et al., 2023], and gated attention [Alayrac et al., 2022]. VideoChat [Li et al., 2023b] and VideoChat-GPT [Li et al., 2023d] presented initial open-source efforts in this direction and were trained with two stages of alignment and video-instruction following objectives. Recently, more advanced Video-LMMs have emerged in the field, with some models focusing on improving model architectures [Li et al., 2023d], expanding to new tasks [Munasinghe et al., 2023], and enabling support for long videos [Song et al., 2023, Ren et al., 2023].
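For illustration, the generic Video-LMM design described above (a pre-trained vision backbone, a light-weight connector, and an LLM) can be sketched as follows. The module choices, dimensions, and class names in this toy example are assumptions for exposition, not the architecture of any particular model:

```python
# Minimal sketch of a generic Video-LMM interface (illustrative only; module names,
# dimensions, and the projector design are assumptions, not a specific model's code).
import torch
import torch.nn as nn

class ToyVideoLMM(nn.Module):
    def __init__(self, vis_dim=512, llm_dim=768, vocab=32000):
        super().__init__()
        self.vision_encoder = nn.Linear(vis_dim, vis_dim)   # stand-in for a frozen pre-trained vision backbone
        self.connector = nn.Sequential(                     # stand-in for an MLP adapter / Q-Former
            nn.Linear(vis_dim, llm_dim), nn.GELU(), nn.Linear(llm_dim, llm_dim))
        self.token_embed = nn.Embedding(vocab, llm_dim)     # LLM input embeddings for the text prompt
        self.llm = nn.TransformerEncoder(                   # stand-in for the large language model
            nn.TransformerEncoderLayer(llm_dim, nhead=8, batch_first=True), num_layers=2)
        self.lm_head = nn.Linear(llm_dim, vocab)

    def forward(self, frame_feats, text_ids):
        # frame_feats: (B, T, vis_dim) per-frame features; text_ids: (B, L) prompt tokens
        vis_tokens = self.connector(self.vision_encoder(frame_feats))  # project frames into LLM space
        txt_tokens = self.token_embed(text_ids)
        fused = torch.cat([vis_tokens, txt_tokens], dim=1)             # prepend visual tokens to the prompt
        return self.lm_head(self.llm(fused))                           # next-token logits

model = ToyVideoLMM()
logits = model(torch.randn(1, 8, 512), torch.randint(0, 32000, (1, 16)))
print(logits.shape)  # (1, 8 + 16, 32000)
```

In practice, the vision backbone and the LLM are large pre-trained models that are often kept frozen or only lightly tuned, while the connector (plus any prompt or adapter parameters) is trained during the alignment and video-instruction tuning stages mentioned above.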
In this work, we aim to develop a comprehensive benchmarking evaluation framework to assess the reasoning and robustness capabilities of Video-LMMs and develop a training-free prompting technique to improve their performance on these fronts. Benchmarking Video-LMMs. With the growing number of Video-LMMs emerging in the research community, several works have presented evaluation frameworks to assess and quantify these models for benchmarking and analysis purposes. SEED-Bench [Li et al., 2023a] evaluates the visual capabilities in both image and Video-LMMs across 12 unique dimensions. MV-Bench [Li et al., 2023c] curates 20 challenging video tasks to evaluate spatial and temporal understanding of VideoLMMs. Video-ChatGPT [Maaz et al., 2023] develops a quantitative evaluation framework to assess model understanding across five aspects of general video comprehension, such as the correctness and consistency of model captions. While these evaluation frameworks provide effective insights, their assessments do not extend beyond general video-comprehension metrics to more advanced aspects of reasoning and robustness, particularly for real-world context cases. In contrast, our work focuses on providing a complex video reasoning and robustness benchmark across 11 diverse real-world-centric evaluation types and offers a more thorough assessment of Video-LMMs in practical applications. Training-free Prompting Techniques. Steering model behavior at inference time using prompting has become a common paradigm in the NLP domain. Prompting [Wei et al., 2022b, Wang et al., 2022a] refers to the set of instructions given as a prefix to the language model to better align model responses with human intent without the need for task-specific fine-tuning. Prompting techniques can be as simple as a single sentence (e.g., \"Let\u2019s think step by step\") such as zero-shot chain of thought [Wei et al., 2022b] prompting, to more detailed techniques such as combining chain-ofthought prompting with few-shot learning [Brown et al., 2020] and self-consistency chain of thought prompting [Wang et al., 2022a]. Surprisingly, training-free prompting techniques for Video Large Multi-modal Models (Video-LMMs) have been minimally explored. In this work, we develop a dual-step prompting technique based on principled prompt instructions specifically designed to steer the model\u2019s behavior for improved reasoning and robustness over complex videos. 3 Complex Video Reasoning and Robustness Evaluation Suite As Video-LMMs are touching new real-world applications, it is essential to ensure that they robustly handle the user inputs, comprehend the visual world, and exhibit human-like reasoning capabilities. In this work, our goal is to establish a comprehensive benchmark that specifically assess the robustness and reasoning capabilities of Video-LMMs in a variety of complex and contextual videos covering diverse scenarios. To this end, we present Complex Video Reasoning and Robustness Evaluation Suite (CVRR-ES). We first provide a holistic overview of CVRR-ES benchmark below and detail the video evaluation dimensions in Sec. 3.1. Subsequently, we present the CVRR-ES creation process in Sec. 3.2. We provide details on the dataset quality and human evaluation in Appendix B. Overview of CVRR-ES Benchmark. 
CVRR-ES encompasses evaluation dimensions that cover diverse video categories related to real-world scenarios, ranging from context-dependent (e.g., social, emotional) categories to video types that often take place in the wild (e.g., anomalous activities). Specifically, we have compiled 11 video evaluation dimensions and curated 2,400 high-quality openended question-answer (QA) pairs, spanning 217 high-quality videos. The average video duration is 22.3 seconds, with maximum and minimum durations of 183 and 2 seconds, respectively. In Fig. 4 \fFigure 3: CVRR-ES Benchmark Statistics. Left: Frequency distribution of the type of questions. Right: Illustration of the most frequent keywords in the answer-set of CVRR-ES benchmark. 3 (left), we quantify the distribution of different question types present in our benchmark. This diverse set of questions aims to comprehensively capture the model\u2019s answering capabilities based on reasoning and robustness criteria. We show the word cloud plot based on the frequency of key words in the answer set of CVRR-ES in Fig. 3 (right). The frequent words correspond to objects and attributes with which Video-LMMs could most likely interact when deployed in practical scenarios. 3.1 CVRR-ES Video Category definitions. To assess the robustness and reasoning capabilities of Video-LMMs in the CVRR-ES benchmark, we carefully curate 11 diverse benchmark evaluation categories. As shown in Fig. 1 (left), these categories encompass a wide range of real-world complex and contextual videos within each category. Below, we define each video evaluation dimension of the CVRR-ES benchmark in detail. 1) Multiple actions in a single video. This category includes videos that contain multiple activities within a single video. The number of activities varies from 2 to 4 in these videos, mostly featuring humans performing multiple activities. We curate QA pairs in this category aiming to identify whether the model can reason over challenging questions concerning multiple actions and understand the interrelation between different actions within a video. 2) Fine-grained action understanding. We gather video samples with fine-grained actions. These actions encompass various fine-grained activities performed by humans, including pushing, opening, closing, spreading, sitting, etc. This category presents a challenge to the model\u2019s comprehension of subtle and fine-grained actions through carefully crafted questions. 3) Partial actions. Based on our observations that Video-LMMs predominantly generate content that may be contextually relevant and likely to co-occur with the depicted scene in the video, we compile videos featuring actions that have a high probability of being followed by subsequent actions but are not executed in the video. For instance, an action such as cracking an egg in a kitchen setting often anticipates the subsequent action of frying/cooking the egg. 4) Time order understanding. Accurately recognizing the temporal sequence of activities in videos is crucial for distinguishing between atomic actions, such as pushing and pulling. We collect videos of fine-grained actions occurring in a particular temporal direction and curate challenging questions. 5) Non-existent actions with existent scene depictions. This category examines the model\u2019s robustness and reasoning behavior in scenarios where we introduce non-existent activities into the video without altering the physical and spatial scenes or environmental details in it. 
6) Non-existent actions with non-existent scene depictions. In this evaluation category, we make the QA task more challenging by creating questions that include both non-existent activities and non-existent scene comprehension. Non-existent scene comprehension involves changing the objects, attributes of objects, and background scene description. This evaluates the model\u2019s reliability to correct misleading questions and avoid generating imaginary content. 7) Continuity and object instance count. This category contains videos (both real and simulations) designed to test the models\u2019 ability to accurately recognize the number of instances of objects, people, etc., and distinguish between existing objects and new ones introduced in the same video scene. 8) Unusual and physically anomalous activities. This category consists of videos with unconventional activities and physical phenomena that seemingly defy the laws of physics. We meticulously 5 \fcollect relevant videos from various sources on the internet, focusing on capturing unusual activities such as a person floating in the air or driving a motorbike on a running river. We believe that assessing Video-LMMs in such scenarios is crucial, as it allows us to determine whether they can generalize to understand actions in out-of-distribution videos that can occur in practical situations. 9) Interpretation of social context. In the real world, human actions are often influenced by social context in their surroundings. For instance, a person might be helping an elderly individual cross the road. This category evaluates Video-LMMs on such scenarios to determine their ability to accurately infer the rationale behind actions based on the depicted social context. We gather diverse videos from the internet and create challenging questions that encompass the social context dimension. 10) Understanding of emotional context. Similar to social context, humans can accurately understand and interpret each other\u2019s actions by considering the emotional context. For example, a person being emotionally moved and crying in a gathering could be a happy moment if it is one stemming from success/joy. We collect videos and curate challenging reasoning questions aimed at recognizing the nature of actions solely based on emotional context for evaluating Video-LMMs. 11) Interpretation of visual context. This dimension focuses on assessing the model\u2019s reasoning abilities to recognize the actions by leveraging the overall visual contextual cues in the video. We curate specific videos containing actions where activity identification and reasoning require visual contextual cues. For example, to identify the number of people present based on the presence of shadows, one must utilize the visual context from the shadows to reason about the question. Qualitative Examples. Fig. 2 shows examples of collected videos for the CVRR-ES benchmark. The curated videos are carefully selected to be diverse and contain rich spatio-temporal content, aligned with the proposed video evaluation dimensions. 3.2 Building CVRR-ES Benchmark After defining the video evaluation dimensions, we now proceed toward building the CVRR-ES benchmark which consists of three stages. We present each stage in detail below. Stage 1: Data collection and Annotation. We first collect high-quality videos and annotate each video using human assistance. 
To ensure that each evaluation dimension captures the relevant attributes and information, we meticulously select videos that are representative of specific characteristics associated with that dimension. Across the 11 dimensions, 214 unique videos are selected for the benchmark with around 20 videos per evaluation category. Around 60% of these videos are collected from public academic datasets. To introduce diversity in the benchmark distribution, we incorporate video samples from multiple academic datasets including Something-Something-v2 [Goyal et al., 2017], CATER [Girdhar and Ramanan, 2020], Charades [Sigurdsson et al., 2016], ActivityNet [Caba Heilbron et al., 2015], HMDB51 [Kuehne et al., 2011], YFCC100M [Thomee et al., 2016]. The remaining 40% of videos are collected from the internet. Following the video collection process, two experienced human annotators are assigned to generate captions for each video. For videos where initial captions or metadata are available from academic datasets, the captions are generated by the annotators based on them. For videos collected from the internet, captions are entirely generated by human annotators. To ensure consistency and high quality, we provide annotation instructions to annotators, who generate captions accordingly. Personalized annotation guidelines are used for each video category. Refer to additional details in Appendix B. Stage 2: Question-Answer Generation. The first challenge is to select an evaluation setting to assess Video-LMMs. Humans typically engage in free-form conversation to interact with each other in day-to-day life. Inspired by this, we aim to simulate a similar style of interaction with Video-LMMs by curating open-ended QA pairs to evaluate these models for robustness and reasoning. We feed detailed ground-truth video captions to GPT-3.5 LLM, which are utilized to generate open-ended questions covering both reasoning and robustness aspects. Reasoning QA pairs: With Video-LMMs beginning to interact more directly with humans in our lives, it\u2019s crucial to validate the reasoning abilities of Video-LMMs for more reliable Human-AI interaction. When evaluating the reasoning capabilities of Video-LMMs, we aim to determine whether these models can understand the input video not only by analyzing spatial content but also by grasping the underlying rationale behind the occurring activities and their relationships with the surrounding context. This involves creating questions that go beyond simple video comprehension and scene 6 \fdescription and require the model to engage in complex logical inference, contextual understanding, and reasoning about counterfactual and hypothetical scenarios. Robustness QA pairs: In addition to evaluating the reasoning capabilities of LLMs, it is important to assess Video-LMMs to ensure their robust and responsible performance in real-world scenarios. In the context of Video-LMMs, robustness can be evaluated from both visual (video input) and textual interfaces. Our focus in this work lies on textual interface robustness by particularly testing the model\u2019s comprehension when posed with misleading or confusing questions. This scenario mirrors realistic situations where users, based on their expertise levels, may pose irrelevant, misleading, or confusing questions. It is crucial for models to demonstrate reliability and robustness in handling such queries and avoid generating unreal or hallucinated content for input videos. 
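For illustration, the caption-to-QA generation step can be scripted as below. The instruction wording here is a simplified stand-in for the actual per-dimension prompts (provided in Fig. 14 in the Appendix D), and the specific model name is an assumption:

```python
# Illustrative sketch of Stage 2: prompting an LLM with a ground-truth video caption to
# draft reasoning- and robustness-oriented QA pairs. The instruction text is ours, not the
# paper's exact prompt, and "gpt-3.5-turbo" is an assumed model identifier.
from openai import OpenAI

client = OpenAI()

def draft_qa_pairs(caption: str, dimension: str) -> str:
    instruction = (
        f"You are given a detailed ground-truth caption for a video from the '{dimension}' "
        f"evaluation dimension:\n{caption}\n\n"
        "1) Write two open-ended questions that require reasoning beyond scene description, "
        "each with a concise ground-truth answer.\n"
        "2) Write one deliberately misleading question that mentions an action which does NOT "
        "occur in the video, together with the correct, rectifying answer."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": instruction}],
    )
    return response.choices[0].message.content

print(draft_qa_pairs("Three people exit a car and retrieve black bags from the trunk.",
                     "Multiple actions in a single video"))
```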
We curate specific prompts for each evaluation dimension to instruct LLM in generating QA pairs. Example prompts used as an instruction to LLMs for curating QA pairs for robustness and reasoning aspects are provided in Fig. 14 in the Appendix D. Stage 3: QA Pairs Filtration. After generating QA pairs, a manual filtration step is employed, with human assistance to verify each generated QA pair. Approximately 30% of the QA pairs generated by GPT-3.5 are found to be noisy, containing questions that are unrelated to the video evaluation dimensions or unanswerable based on the provided ground-truth captions. Additionally, many questions contain answers within the question itself. Therefore, an exhaustive filtering process is conducted which involves QA rectification and removing those samples which are not relevant to the video or evaluation type. This process results in a final set of 2400 high-quality QA pairs for the CVRR-ES benchmark. Examples of QA pairs are shown in Tab. 4 in the Appendix. Stage 4: Evaluation Procedure. Previous methods in the literature [Maaz et al., 2023, Cai et al., 2023, Liu et al., 2023a, Qian et al., 2024] have explored using LLM models as judges for quantifying results in open-ended QA benchmarks. We adopt a similar approach and instruct LLMs to act as teachers to assess the correctness of predicted responses from Video-LMMs compared to ground-truth answers. We generate open-ended predictions from Video-LMMs by providing video-question pairs as inputs and then present the model predictions and their corresponding ground-truth responses to the LLM Judge alongside the evaluation prompt. The Judge determines whether the prediction is correct or incorrect through a binary judgment, assigns a score from 1 to 5 representing the quality of the prediction, and provides a reasoning to explain its decision. Our ablative analysis in the Appendix. D demonstrates that reasoning-constrained LLM-based evaluation aligns well with human-based judgment. The evaluation prompt is shown in Fig. 13 in the Appendix D. 4 Dual-Step Contextual Prompting for Video-LMMs. Given their wide-scale potential in practical downstream applications, new Video-LMMs are frequently introduced by the research community. Despite the availability of numerous Video-LMMs, the majority of them are trained using only positive examples and video-conversational templates that are primarily limited to tasks such as video-captioning and video question answering. This leads to highly over-affirmative behavior and a lack of self-rectification abilities in these models (Sec. 5.4). Dual Step Contextual Prompting for Video-LMMs Retrieving Contextual reasoning information (Step 1) As an intelligent video comprehension model, focus on these guidelines: 1. Differentiate recurring objects, count accurately, and identify movements and poses. 2. Understand directional movements and temporal order. 3. Pay attention to fine-grained actions with precision. 4. Assess incomplete actions without assuming completion. 5. Detect emotional, social, and visual cues. 6. Capture and analyze all relevant actions. 7. Identify unusual actions accurately. 8. Disagree with incorrect information given in question. 9. If you do not find the evidence in the frames, you can give a definite answer by assuming that the asked action/attribute is not present. 10. Provide to the point and concise response. Now, proceed with answering the following question faithfully while keeping above guidelines in mind: Question: What is happening in the video? 
Context conditioned question-answering (Step 2) Context for the given video is: {step 1 response}. Now answer a question truthfully based on the video and the provided context. Question: {User question} Figure 4: Principled prompt instructions in our DSCP method for improving reasoning and robustness in Video-LMMs. Additionally, the templates have minimal focus on enhancing reasoning and robustness capabilities through reasoning-based instruction-tuning pairs, resulting in weak performance of such models against robustness and reasoning QA evaluations in the CVRR-ES benchmark. Furthermore, curating reasoning-based instruction fine-tuning datasets requires meticulous data curation steps, and retraining these models is computationally expensive [Li et al., 2023d, Ren et al., 2023]. Alternatively, training-free prompting techniques in the NLP literature, such as chain-of-thought and self-consistency prompting [Wei et al., 2022b, Wang et al., 2022a], have shown effectiveness in eliciting reasoning abilities in LLMs. Inspired by these approaches, we introduce a prompting technique called Dual-Step Contextual Prompting (DSCP), which aims to steer the focus of Video-LMMs towards enhanced reasoning while simultaneously encouraging the models to provide robust and grounded answers. [Figure 5 presents qualitative examples of DSCP with Video-LLaVA, LLaMA-VID, and MovieChat.] Figure 5: Qualitative results of DSCP prompting method. Using our DSCP approach, Video-LMMs demonstrate enhanced robustness and reasoning capabilities over complex videos. DSCP is a two-step prompting method that 1) ensures that the model comprehends the video while reasoning over crucial aspects of complex video understanding, such as contextual information and decoding the complex relationships between objects and motions, and 2) encourages robustness by generating the response to the question while conditioning both on the video and the context retrieved in the first step. Below we discuss each step of DSCP in detail. Step 1: Reasoning over the video. We first guide Video-LMMs using principled prompts to interpret video content from a reasoning perspective. As shown in Fig. 4 (in blue), we formulate ten principled reasoning-based instructions for prompting, $P_{\text{reason}}$, which direct Video-LMMs to not only comprehend the general video content but also reason over the rationale behind occurring activities and their relationships with the surrounding context. These prompt instructions include specific considerations like contextual priors, the temporal order of actions, instance count, and attributes. Additionally, the prompting technique incorporates instructions to ensure conciseness and factuality, aiming to mitigate hallucinations. Given a Video-LMM $F$ and input video $V$, we retrieve contextual reasoning information $I_{\text{context}}$ by providing the principled reasoning prompt $P_{\text{reason}}$ along with the video to the LMM, $I_{\text{context}} = F(P_{\text{reason}} \mid V)$. The contextual information is utilized in the second step of DSCP to generate a more grounded response to the user question. Step 2: Context conditioned question answering. As discussed earlier, Video-LMMs are primarily trained with positive examples to answer questions, with limited emphasis on reasoning and robustness aspects. Consequently, enabling direct interaction of Video-LMMs with users in real-world scenarios can result in undesired responses when the user question is confusing or deceiving, due to their extreme over-affirmative behavior. To address these challenges, we propose incorporating an additional inference step in Video-LMMs before answering the user\u2019s question. We note that Video-LMMs often possess factual knowledge about the video content but may become distracted and produce hallucinations when prompted with confusing or misleading questions (more details in Appendix C). Specifically, we devise a prompting method that conditions the model to first comprehend the video in detail without attending to the user question, thereby eliminating the influence of the question.
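For illustration, the complete dual-step inference can be sketched as follows. Here video_lmm_generate is a hypothetical wrapper around whichever Video-LMM is being prompted, and the guideline string is an abridged paraphrase of the ten principled instructions shown in Fig. 4, not the exact prompt:

```python
# Minimal sketch of the Dual-Step Contextual Prompting (DSCP) inference flow.
# `video_lmm_generate` is a placeholder for any Video-LMM's chat interface; the guideline
# text below abbreviates the principled instructions listed in Fig. 4.
DSCP_GUIDELINES = (
    "As an intelligent video comprehension model, count instances accurately, respect the "
    "temporal order of actions, do not assume incomplete actions are completed, attend to "
    "emotional, social, and visual cues, disagree with incorrect premises in the question, "
    "and answer concisely.\n"
    "Question: What is happening in the video?"
)

def video_lmm_generate(video, prompt: str) -> str:
    raise NotImplementedError("plug in the Video-LMM of your choice here")

def dscp_answer(video, user_question: str) -> str:
    # Step 1: retrieve contextual reasoning information, ignoring the user question entirely.
    context = video_lmm_generate(video, DSCP_GUIDELINES)   # I_context = F(P_reason | V)
    # Step 2: answer conditioned on both the video and the retrieved context.
    step2_prompt = (
        f"Context for the given video is: {context}. "
        "Now answer a question truthfully based on the video and the provided context. "
        f"Question: {user_question}"
    )
    return video_lmm_generate(video, step2_prompt)          # F(P_user | V), P_user = [question; I_context]
```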
The complex video comprehension information refers to $I_{\text{context}}$ formulated in step 1. Subsequently, we pose the user question in the second step using the prompt $P_{\text{user}}$, which combines the user question and the contextual reasoning information (Fig. 4, in green) while conditioning the model on both the video and the contextual reasoning information $I_{\text{context}}$. Concretely, $\text{Final response} = F(P_{\text{user}} \mid V)$, where $P_{\text{user}} = [\text{question}; I_{\text{context}}]$. Intuitively, the factual content generated in the first step will guide the model towards a robust response in the second step, producing factual and correct responses even in the presence of noisy/misleading user questions. We illustrate the qualitative results of the DSCP method in Fig. 5. This approach leads to responses that are better grounded in the actual video content and are robust against potentially lower-quality user queries. As we will later show, the DSCP technique effectively enhances the performance of Video-LMMs on the CVRR-ES benchmark. 5 Evaluation Experiments on CVRR-ES. Video-LMMs. Both open-source and closed-source models are selected for the evaluation. Among the open-source models, we evaluate 7 recent Video-LMMs, including Video-LLaVA [Lin et al., 2023], TimeChat [Ren et al., 2023], MovieChat [Song et al., 2023], LLaMA-VID [Li et al., 2023d], VideoChat [Li et al., 2023b], Video-ChatGPT [Maaz et al., 2023], and Video-LLaMA-2 [Zhang et al., 2023]. For evaluating closed-source models, we use Gemini-Pro-Vision [Google, 2023] and GPT-4V(vision) [OpenAI, 2023]. Refer to Appendix A for implementation details. 5.1 Main Experiments on CVRR-ES. In Tab. 2, we present the evaluation results of Video-LMMs on the 11 dimension categories of the CVRR-ES benchmark. Below, we present several key findings.
Table 2: Evaluation results of Video-LMMs across the 11 video evaluation categories of the CVRR-ES benchmark (accuracy %). We present results for both open-source and closed-source models, alongside human evaluation results, which serve as the upper bound on the benchmark. Columns, in order: Video-LLaMA-2, VideoChat, Video-ChatGPT, Video-LLaVA, MovieChat, LLaMA-VID, TimeChat, Gemini-V Pro, GPT4V, Human.
Multiple actions in a single video: 16.98, 23.90, 27.67, 15.72, 12.58, 17.92, 28.30, 43.08, 57.55, 93.40
Fine-grained action understanding: 29.57, 33.48, 26.96, 25.22, 23.48, 26.09, 39.13, 51.61, 77.39, 95.65
Partial actions: 24.76, 33.01, 22.82, 13.59, 21.36, 14.56, 49.51, 67.48, 73.79, 98.54
Time order understanding: 16.45, 31.58, 27.63, 21.05, 16.45, 19.74, 34.21, 45.39, 57.89, 97.37
Non-existent actions with existent scene: 10.14, 15.22, 23.19, 5.07, 5.07, 2.90, 23.19, 57.25, 71.01, 97.10
Non-existent actions with non-existent scene: 13.19, 14.58, 17.36, 3.47, 11.81, 6.94, 13.89, 49.64, 75.00, 100.00
Continuity and object instance count: 28.25, 24.29, 28.41, 21.47, 19.77, 24.86, 34.46, 36.16, 62.71, 96.49
Unusual and physically anomalous activities: 18.95, 18.42, 18.95, 15.79, 17.89, 16.32, 27.37, 60.00, 74.74, 96.84
Interpretation of social context: 25.00, 31.07, 32.50, 18.93, 17.14, 13.93, 39.29, 64.29, 79.64, 97.51
Understanding of emotional context: 21.92, 23.63, 21.23, 15.07, 13.70, 14.73, 27.40, 47.26, 66.44, 95.55
Interpretation of visual context: 32.60, 34.43, 27.84, 19.78, 21.25, 23.08, 45.05, 63.00, 82.42, 94.87
Average: 21.62, 25.78, 24.96, 15.92, 16.41, 16.46, 32.89, 53.20, 70.78, 96.67
Open-source Video-LMMs struggle on the CVRR-ES benchmark. All open-source LMMs show inferior performance across the different evaluation dimensions of CVRR-ES.
Interestingly, some of the earlier developed open-source Video-LMMs, like Video-LLaMA, VideoChat, and Video-ChatGPT, exhibit higher performance compared to more recent models such as Video-LLaVA, MovieChat, and LLaMA-VID. Overall, TimeChat achieves the highest performance of 32.89% averaged across the 11 evaluation dimensions among open-source LMMs, followed by VideoChat with a score of 25.78%. Humans rank highest in the CVRR-ES benchmark. Human studies achieve the highest performance on the CVRR-ES benchmark, with over 95% accuracy across all evaluation dimensions. Furthermore, these results suggest that the CVRR-ES QA pairs are answerable and suitable for benchmarking. Closed-source models perform competitively on CVRR-ES. As shown in Tab. 2, both Gemini and GPT4V surpass the performance of open-source models and achieve high gains across all evaluation dimensions. The competitive results of GPT4V and Gemini on complex video evaluation dimensions such as partial actions, non-existent action/scene depiction, and context-dependent categories show that these models have a more sophisticated understanding of the complex visual contents of videos and have strong capabilities to rectify misleading and confusing user questions. Overall, GPT4V improves over Gemini by 17.58% and provides an average accuracy of 70.78% on CVRR-ES. 5.2 Effectiveness of DSCP method for improving Video-LMMs performance. [Figure 6: Video-LMMs with the DSCP technique effectively improve their performance on the CVRR-ES benchmark; absolute gains averaged over the 11 video dimensions are Video-LLaVA +22.01, MovieChat +19.46, LLaMA-VID +30.39, Video-LLaMA-2 +16.15, Video-ChatGPT +8.93, VideoChat +22.14, TimeChat +6.56, and Gemini-Pro +5.02.] We next integrate the DSCP technique with Video-LMMs and present results on the CVRR-ES benchmark in Fig. 6. The results indicate that DSCP improves the models\u2019 performance compared with models that use standard prompting (i.e., using only the question itself). These results suggest that prompting techniques in Video-LMMs can better guide models for improved reasoning and robustness. With DSCP, initially low-performing Video-LMMs such as Video-LLaVA, MovieChat, and LLaMA-VID show much better relative gains and become competitive with other models. The highest relative gain of 184% is achieved by LLaMA-VID, which moves from 7th place on the leaderboard to 2nd among the open-source models after utilizing DSCP prompting. We observe similar overall positive trends when using DSCP with the closed-source model Gemini, which improves on the benchmark by an absolute overall gain of 5.02%. We provide more detailed results comparisons in Appendix C. 5.3 Different prompting techniques. We study the contribution of each step of DSCP and compare it with chain-of-thought prompting [Wei et al., 2022b]. The results for the top 5 performing Video-LMMs are shown in Tab. 3.
Table 3: Prompting methods (accuracy % on CVRR-ES). DSCP (Stage 1) uses only the principled instructions designed in step 1, while DSCP (Both stages) uses the complete dual-step prompting technique. Columns, in order: VideoChat, Video-LLaVA, MovieChat, LLaMA-VID, TimeChat.
Standard prompting: 25.78, 15.92, 16.41, 16.46, 32.89
Chain of Thought (CoT) prompting: 22.44, 25.87, 15.89, 29.68, 39.57
DSCP (Stage 1): 38.07, 32.12, 28.05, 25.13, 33.04
DSCP (Both stages): 47.92, 37.93, 35.87, 46.85, 39.45
Chainof-thought prompting improves over the standard prompting technique in 3 out of 5 Video-LMMs, suggesting that prompting techniques from NLP literature can effectively guide multi-modal VideoLMMs to enhance reasoning and robustness. Next, we ablate on the first step of DSCP prompting, which uses the principled instructions of DSCP step 1 as a prefix alongside the actual user question. Using the first step prompting technique of DSCP substantially improves model performance on all Video-LMMs, suggesting the effectiveness of the principled prompt instructions designed specifically for Video models. DSCP with both steps, which integrates an additional thinking step in the prompting step, further improves the results and provides the highest results on 4 out of 5 Video-LMMs. 5.4 Main findings and Qualitative Results Based on the results of Video-LMMs on CVRR-ES, we draw key findings and show qualitative results. These insights can serve as valuable guidance for developing the next generation of Video-LMMs, aiming to make them more robust and reliable when deployed in real-world applications. Models excelling at standard VQA benchmarks struggle on CVRR-ES benchmark. Our analysis in Sec. 5.1 reveals that the latest open-source Video-LMMs, such as Video-LLaVA, MovieChat, and LLaMA-VID, perform less effectively on the CVRR-ES benchmark compared to Video-LMMs that were introduced earlier in the community, such as VideoChat and Video-ChatGPT. Interestingly, the same recent models demonstrate superior performance on general video comprehension benchmarks. This discrepancy suggests that current VQA benchmarks, like ActivityNet-QA [Yu et al., 2019] and MSRVTT [Xu et al., 2017], do not adequately correlate with the complex video reasoning and robustness scenarios highlighted in our benchmark. Consequently, this also indicates that most newer Video-LMMs are heavily trained to excel on the general video comprehension benchmarks while reducing their generalizability, reasoning, and robustness capabilities. Over-affirmative behavior of open-source Video-LMMs. Another important observation about open-source models is their tendency to exhibit excessively positive and affirmative responses. As shown in Fig. 7, open-source Video-LMMs consistently respond with \"Yes\" even when faced with 10 \fconfusing questions that describe non-existent actions and objects. This highlights the vulnerability of these models when interacting with users in real-world scenarios. In our CVRR-ES benchmark, opensource models are particularly vulnerable to our evaluation dimensions of \"Non-existent actions with the existent scene\" and \"Non-existent actions with the non-existent scene\" compared to closed-source models. These models lack negation and self-rectification capabilities, especially when users provide misleading or confusing questions. We conjecture that such behavior arises due to the absence of negative instruction tuning pairs during the training of Video-LMMs. Tendency towards activity completion. Most open-source Video-LMMs have shown weak performance on the evaluation dimension of partial actions in CVRR-ES, which contains videos focusing on incomplete or atomic actions. To further analyze the models\u2019 behavior, we show qualitative results on such videos in Fig. 8. It can be observed that most open-source models tend to complete actions, even when only part of the action is provided in the video. 
For instance, Video-LLaVA struggles to reason over the video and describes the man as kicking the soccer ball, while the action in the video stops at the point of the man placing his foot beside the ball. We observe similar behavior in other Video-LMMs. Upon examining the fine-tuning strategies [Maaz et al., 2023, Liu et al., 2023b], we find that almost all models are trained on end-to-end actions-based instruction-tuning data, causing them to generate complete action descriptions at inference. This tendency highlights the vulnerability of Video-LMMs after deployment, as real-world scenarios often involve atomic, sub-atomic, and general actions alike. To improve the performance of Video-LMMs, it is crucial to incorporate diverse action types during training, including partial and incomplete actions. Weak Generalization to extreme OOD videos. The evaluation dimension of unusual and physically anomalous activities in CVRR-ES resembles extreme out-of-distribution video examples. With the exception of GPT4V and Gemini, Video-LMMs struggle with this dimension, indicating weak generalizability towards OOD videos containing the coexistence of unusual objects and activities that are extremely rare in typical videos. For instance, Video-LLaVA in Fig. 9 describes a person falling on the street, while the video actually shows the person performing an optical illusion. To be responsibly deployed in real-world applications, where OOD actions occur more frequently, Video-LMMs need to be trained to perform more robustly on OOD samples. This may involve incorporating diverse and atypical examples in the training data to improve the model\u2019s ability to handle unusual situations. Limited understanding of temporal order in complex videos. The CVRR-ES benchmark results show that Video-LMMs perform relatively better on the fine-grained action dimension compared to the time-order understanding dimension. While these models can accurately identify fine-grained actions, they struggle with comprehending the correct temporal order of these actions within a video. This limitation can lead to misinterpretations of the underlying information depending on temporal order. We present failure cases of this dimension in Fig. 10. For building more advanced world-centric Video-LMMs, it is crucial to enhance their ability to process and interpret event sequences accurately. Video-LMMs struggles in understanding the emotional and social context. For more reliable interaction between Video-LMMs and humans in practical scenarios, these models should comprehend the spatio-temporal scenes with social and contextual reasoning capabilities similar to humans. The lower performance of Video-LMMs on social and emotional contextual dimensions in CVRR-ES highlights their limitations and lack of understanding of scenes based on contextual cues. For instance, as shown in Fig. 11 (bottom row), GPT-4V struggles to comprehend a scene where a worker is attempting to prevent shoes from getting wet due to the rain by moving them under the shade. Instead, GPT-4V provides a response that contradicts the social cues present in the video. 6" + }, + { + "url": "http://arxiv.org/abs/2401.02418v1", + "title": "Learning to Prompt with Text Only Supervision for Vision-Language Models", + "abstract": "Foundational vision-language models such as CLIP are becoming a new paradigm\nin vision, due to their excellent generalization abilities. However, adapting\nthese models for downstream tasks while maintaining their generalization\nremains a challenge. 
In literature, one branch of methods adapts CLIP by\nlearning prompts using visual information. While effective, most of these works\nrequire labeled data which is not practical, and often struggle to generalize\ntowards new datasets due to over-fitting on the source data. An alternative\napproach resorts to training-free methods by generating class descriptions from\nlarge language models (LLMs) and perform prompt ensembling. However, these\nmethods often generate class specific prompts that cannot be transferred to\nother classes, which incur higher costs by generating LLM descriptions for each\nclass separately. In this work, we propose to combine the strengths of these\nboth streams of methods by learning prompts using only text data derived from\nLLMs. As supervised training of prompts is not trivial due to absence of\nimages, we develop a training approach that allows prompts to extract rich\ncontextual knowledge from LLM data. Moreover, with LLM contextual data mapped\nwithin the learned prompts, it enables zero-shot transfer of prompts to new\nclasses and datasets potentially cutting the LLM prompt engineering cost. To\nthe best of our knowledge, this is the first work that learns generalized\nprompts using text only data. We perform extensive evaluations on 4 benchmarks\nwhere our method improves over prior ensembling works while being competitive\nto those utilizing labeled images. Our code and pre-trained models are\navailable at https://github.com/muzairkhattak/ProText.", + "authors": "Muhammad Uzair Khattak, Muhammad Ferjad Naeem, Muzammal Naseer, Luc Van Gool, Federico Tombari", + "published": "2024-01-04", + "updated": "2024-01-04", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction The Vision field is experiencing a new paradigm in its model-building approach with the emergence of foundational models [18, 23, 37, 47], which are large DNNs pretrained on web-scale data. Among these, Vision-Language models (VLMs) such as CLIP [37] stand out as the latest Method Do not require Transfer to images unseen datasets CoOp [50] \u2718 \u2713 Prompt learning CoCoOp [49] \u2718 \u2713 methods MaPLe [20] \u2718 \u2713 PromptSRC [21] \u2718 \u2713 DCLIP [29] \u2713 \u2718 Prompt ensembling WaffleCLIP-Concept [39] \u2713 \u2718 methods (LLM) CuPL [36] \u2713 \u2718 ProText (Ours) \u2713 \u2713 Table 1. Existing methods improve CLIP\u2019s generalization by learning prompts with image supervision or using non-transferable prompt ensembling with LLM knowledge. In contrast, our approach, ProText, effectively learns prompts with text-only supervision which are transferable to new datasets and classes. highlights which leverage contrastive pre-training on massive image-text pairs from the internet. During pre-training, CLIP learns to align image-text samples in a shared feature space. This allows CLIP to encode open-vocabulary concepts and generalize well to zero-shot recognition tasks. CLIP consists of two encoders to encode image and text inputs respectively. At inference, a hand-crafted prompt such as \u2018a photo of a CLS\u2019 is used as the text input. Text features of classes are compared with visual feature and class with highest similarity is assigned as predicted label. Improving the quality of text templates such as adding attributes [1], or class-specific details [19, 36] has shown to improve CLIP performance. However, designing highquality prompts that can best describe test image remains a key challenge, as image content is not known in advance. 
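For illustration, this zero-shot inference protocol with a hand-crafted template is sketched below using the publicly released CLIP package; the checkpoint name, class list, and image path are placeholders:

```python
# Minimal sketch of zero-shot CLIP classification with a hand-crafted prompt template.
# Assumes the `clip` package (https://github.com/openai/CLIP); checkpoint, class names,
# and image path are illustrative placeholders.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

classnames = ["cat", "dog", "car"]                                     # placeholder label set
texts = clip.tokenize([f"a photo of a {c}" for c in classnames]).to(device)
image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)  # placeholder image

with torch.no_grad():
    img_feat = model.encode_image(image)
    txt_feat = model.encode_text(texts)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)          # compare in cosine-similarity space
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    probs = (100.0 * img_feat @ txt_feat.T).softmax(dim=-1)

print(dict(zip(classnames, probs[0].tolist())))   # the class with the highest score is the prediction
```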
In the literature, numerous techniques have been proposed to adapt CLIP for downstream recognition tasks. One branch of methods [6, 17, 27, 41, 49, 50] treats text prompts as learnable vectors and optimizes them using task-specific objectives such as cross-entropy. As prompts are learned in the embedding space, this allows them to be used with classes and datasets beyond those on which they were trained. While effective over the baseline CLIP, most of these methods require annotated image labels to optimize the prompts, which is often impractical, especially in real-world scenarios such as medical imaging, remote sensing, security, surveillance, etc. Moreover, these methods tend to overfit on few-shot source samples and struggle to retain CLIP\u2019s generalization, especially in cross-dataset settings. [Figure 1: Without using any images for supervision, ProText with text-only training improves over CLIP, CuPL, and prior 16-shot image-supervised methods in challenging cross-dataset transfer settings; averaged over 10 datasets, performance (%) is CLIP 65.15, CuPL 65.15, CoOp 63.88, CoCoOp 65.74, PromptSRC 65.81, MaPLe 66.3, and ProText (Ours) 67.23. Prompt ensembling based CuPL performs the same as CLIP as it cannot transfer class-specific LLM templates to cross-datasets.] Alternatively, several methods [29, 36] have adopted the training-free approach of prompt ensembling by leveraging the capabilities of Large Language Models (LLMs). Instead of using hand-crafted templates, these methods mine dataset- or class-specific descriptors and captions from LLMs to enrich text features. These enriched features aim to better represent content that could possibly occur in test images, leading to improvements over baseline CLIP. Although these methods do not require image information, the knowledge acquired from LLMs is mostly specific to each class and not directly transferable across unseen classes and datasets since no optimization is performed. Additionally, generating LLM descriptions for each concept separately incurs additional LLM serving and prompt engineering costs. In this work, we present a new paradigm to improve CLIP\u2019s generalization. Our motivation comes from combining the strengths of prompt learning and prompt ensembling approaches while effectively addressing their limitations. To this end, we introduce ProText: Prompt Learning with Text-Only Supervision. In contrast to previous methods, our approach instead proposes to learn prompts using text-only data obtained from LLMs. As supervised training of prompts is not trivial due to the image-free setting, we develop a novel training framework that allows prompts to learn and extract rich contextual knowledge from LLM data. Moreover, as LLM contextual knowledge is mapped within the learned prompts, it enables zero-shot transfer of prompts to new classes and datasets, potentially leading to a substantial reduction in LLM serving and prompt engineering cost. As shown in Tab. 1, our approach is different from prior methods as it does not require image samples to learn prompts; in addition, the adapted CLIP transfers well to unseen classes and datasets, therefore addressing a key limitation of LLM-based prompt ensembling techniques. We demonstrate the effectiveness of ProText by performing extensive evaluations on 4 benchmarks.
On challenging crossdataset transfer setting, ProText without using any visual information achieves an average gain of +2.08% over CLIP while surpassing the performance of previous best imagesupervised prompt learning method MaPLe [20] by +0.93% (Fig. 1). Further, ProText with text-only supervision performs competitively against prior methods in domain generalization, base-to-novel class, and text-only supervised setting. Our main contributions are summarized as follows: \u2022 We present a new approach for prompt learning in CLIP using text-only supervision. Our method harmonically combines the strengths of prompt learning and prompt ensembling methods to improve CLIP\u2019s generalization. \u2022 To optimize prompts with text-only data, we develop a training approach that allows prompts to learn a mapping by extracting rich contextual information from LLM data. \u2022 As LLM contextual knowledge is mapped within the learned prompts, this enables prompts to be directly used with new classes and datasets potentially cutting the additional LLM serving and prompt engineering cost. \u2022 We validate the effectiveness of our method through extensive experiments across four benchmarks. Our TextPro approach improves the generalization of CLIP across various settings and fares competitive to approaches that explicitly use labeled image samples for training. 2. Related Work Foundational Vision-Language models (VLMs). VLMs [18, 33, 37, 46\u201348] leverage joint image-text pretraining using internet-scale data in a self-supervised fashion. Representative VLMs like CLIP [37] and ALIGN [18] have utilized around 400M and 1B image-text pairs during their pre-training. Using the contrastive learning objective, VLMs learn rich multi-modal features by attracting together the features of paired images and texts while repelling un-paired image-text features in a joint feature space. The resulting model learns open-vocabulary concepts interpretable through natural language suitable for various downstream discriminative vision tasks such as open-vocabulary image classification [6, 20, 27, 31, 32, 50], detection [3, 10, 26, 30, 51], and segmentation [13, 24, 25]. Although promising, adapting VLMs effectively while maintaining their original generalization remains a crucial challenge. In this work, we propose a novel method to adapt CLIP with prompt learning through text modality supervision to improve its performance on vision modality tasks. Prompt Learning for VLMs. Prompt Learning [6, 9, 27, 40, 41, 49, 50] has emerged as an effective fine-tuning strategy to adapt large-scale models. This approach adds a small number of learnable embeddings along with model inputs which are optimized during training while the rest of the model is kept frozen. As the pre-trained model is unchanged during prompt learning, it has become particularly effective for VLMs such as CLIP, where maintaining the model\u2019s \foriginal generalizability is crucial. CoOp [50] is the pioneering prompt learning method for CLIP which learns text prompt embeddings to fine-tune CLIP. CoCoOp [49] improves CoOp\u2019s generalization by conditioning text prompts on visual features. MaPLe [20] proposes a multi-modal prompting framework to adapt both vision and language branches of CLIP. UPL [17] adopts an unsupervised prompt learning approach to finetune CLIP. PromptSRC [21] improves prompt learning from a regularization perspective by making use of additional loss functions during training. 
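For illustration, the core idea behind such prompt learning, a few learnable context vectors prepended to class-name token embeddings and optimized with cross-entropy while the backbone stays frozen, can be sketched with a toy stand-in encoder. All names and dimensions below are assumptions, not any specific method's implementation:

```python
# Toy sketch of prompt learning for a frozen VLM text encoder: learnable context vectors
# replace the hand-crafted template words and are optimized with cross-entropy.
# The tiny encoder and random features are stand-ins for exposition only.
import torch
import torch.nn as nn

d, n_ctx, n_cls = 512, 4, 10
text_encoder = nn.TransformerEncoder(                 # stand-in for CLIP's frozen text encoder
    nn.TransformerEncoderLayer(d, nhead=8, batch_first=True), num_layers=2)
for p in text_encoder.parameters():
    p.requires_grad_(False)                           # the backbone stays frozen

ctx = nn.Parameter(torch.randn(n_ctx, d) * 0.02)      # learnable prompt vectors (only trainable part)
cls_embed = torch.randn(n_cls, 1, d)                  # frozen class-name token embeddings

def class_text_features():
    tokens = torch.cat([ctx.unsqueeze(0).expand(n_cls, -1, -1), cls_embed], dim=1)
    return text_encoder(tokens).mean(dim=1)           # (n_cls, d): one text feature per class

optimizer = torch.optim.SGD([ctx], lr=0.002)
img_feat = torch.randn(8, d)                          # placeholder image features from the vision encoder
labels = torch.randint(0, n_cls, (8,))                # placeholder ground-truth labels
logits = img_feat @ class_text_features().t()
loss = nn.functional.cross_entropy(logits, labels)
loss.backward(); optimizer.step()
```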
While these methods improve baseline CLIP performance, most of them require image samples with labels, which is less practical, and generating pseudo-labels is often less effective. In contrast, we present a novel prompt learning approach that improves CLIP generalization without relying on any visual samples during training. Training-Free Text Prompt Enhancement. With the emergence of LLMs such as GPT-3 [5], several approaches [29, 36, 39] have demonstrated their potential for improving zero-shot generalization of CLIP. Instead of using handcrafted templates for generating class features, these methods leverage LLMs to generate high-level concepts, class descriptions, and/or attributes which are used in one form or another to produce enriched text features. DCLIP [29] generates fine-grained per-class language descriptors and ensemble its similarity with image to produce classification scores. WaffleCLIP [39] matches DCLIP performance with random descriptors and show further gains by data-specific concepts generated via LLMs. CuPL [36] query LLMs to generate class-specific prompt descriptions for text prompt ensembling. Although effective, most of these approaches generate class-specific text data from LLMs which are not directly transferable to unseen classes and new datasets since no training is performed. On the other hand, we aim to leverage the same LLM data via novel text-only prompt learning technique which seamlessly allows the transfer of learned prompts toward unseen classes and new datasets. 3. Method Given the language interpretable nature of foundational VLMs such as CLIP [37], they are naturally suited for zeroshot recognition tasks. However, to achieve full potential of CLIP\u2019s generalization for downstream tasks, adaptation still appears to be necessary. Numerous approaches have since been proposed to adapt general knowledge of CLIP for userspecific downstream tasks. One line of methods adopts prompt learning [20, 27, 49, 50] to re-purpose CLIP features for downstream data. While effective, most of them require image samples with labels to learn the prompts, which is a hard requirement to meet. Another line of methods adopts training-free prompt ensembling techniques [29, 36, 39] with the help of LLMs. Although ensembling-based approaches do not require image information, the majority of these works generate class-specific LLM prompts that are not directly transferable to new classes and datasets. To this end, we present a new paradigm for learning generalized transferable prompts for VLMs using text-only supervision. Our proposed adaptation framework, ProText: Prompt Learning with Text only supervision aims to address the challenges of existing approaches by learning transferable prompts without relying on images. Fig. 2 shows our ProText framework. First, we curate text-only LLM template data using class names of a given dataset and a LLM such as GPT-3 [5]. As a text-supervised approach, ProText only requires CLIP text encoders during training. Specifically, we employ one frozen encoder with learnable prompts and a second frozen encoder without learnable prompts. Learnable prompts with class-name templates are input to the prompted text encoder to obtain the class-name template feature, and a frozen text encoder generates LLM template feature from its description obtained from LLM data. Next, we employ a contextual mapping training objective which maps class-name template feature to the LLM template feature. 
Contextual mapping allows the prompts to learn a mapping function that embeds rich contextual knowledge from LLM data within the prompt vectors. As prompts are learned in the embedding space, they are directly compatible with new classes and datasets. At inference, the learned prompts are shipped with the CLIP model for standard zero-shot CLIP inference for visual recognition. Below we explain our proposed approach in detail. We first revisit CLIP and previous methods, including Prompt Learning and Prompt Ensembling via LLMs, in Sec. 3.1, and then we present our ProText approach in Sec. 3.2. 3.1. Preliminaries Contrastive Language-Image Pre-training (CLIP). CLIP consists of an image encoder f and a text encoder g which map image and text inputs into visual and textual features, respectively. We denote CLIP parameters as \u03b8CLIP = {\u03b8f, \u03b8g} where \u03b8f and \u03b8g refer to the image and text encoder parameters, respectively. An input image X is divided into M patches which are linearly projected to produce patch tokens, and a learnable class token CLS is prepended, resulting in the final sequence \u02dc X = {CLS, e1, e2, \u00b7 \u00b7 \u00b7 , eM}. The image encoder f encodes the input patches via multiple transformer blocks to produce a latent visual feature representation \u02dc f = f( \u02dc X, \u03b8f), where \u02dc f \u2208Rd. Next, the corresponding class label y is embedded in a text template, such as \u2018a photo of a [CLASS]\u2019, which can be formulated as \u02dc Y = {SOS, t1, t2, \u00b7 \u00b7 \u00b7 , tL, ck, EOS}. Here {tl|L l=1} and ck are the word embeddings corresponding to the text template and the label y, respectively, while SOS and EOS are the learnable start and end token embeddings. The text encoder g encodes \u02dc Y via multiple transformer blocks to produce the latent text feature as \u02dc g = g( \u02dc Y , \u03b8g), where \u02dc g \u2208Rd. Figure 2. Overview of ProText framework. (Left) First, diverse captions are generated for training classes using an LLM like GPT-3. During training, CLIP text encoders generate the prompted class-name feature (\u02dc gp) from class-name templates with learnable prompts and the frozen LLM template feature (\u02dc g) from LLM-generated templates. Next, we employ a contextual mapping loss to guide learnable prompts to learn a mapping from the prompted class-name feature to the LLM template feature containing more information about the class. This allows the learned prompts to exploit the internal knowledge of the text encoder complemented by LLM descriptions. (Right) At inference, learned prompts are used with class-name templates, and the standard zero-shot CLIP inference protocol is followed. Moreover, the rich contextual information from LLM descriptions mapped within the learned prompts enables their transferability to new classes and datasets.
For zero-shot inference, text features of text template with class labels {1, 2, \u00b7 \u00b7 \u00b7 , C} are matched with image feature \u02dc f as exp(sim(\u02dc g\u00b7 \u02dc f)\u03c4) PC i=1 exp(sim( \u02dc gi\u00b7 \u02dc f)\u03c4), where sim() denotes the cosine similarity and \u03c4 is the temperature. Prompt Learning with CLIP. Being a parameter efficient tuning method, prompt learning has emerged as a popular technique to adapt vision-language models like CLIP. Since most of the model is kept frozen during adaptation, prompt learning aims to reduce overfitting. Learnable prompts are appended either at the image side [2], text encoder side [49, 50], or both sides. In this work, we learn hierarchical prompts at the text encoder named Deep Language Prompting (DLP) [20] formulated as follows. T learnable language prompts Pt = {p1 t, p2 t, \u00b7 \u00b7 \u00b7 , pT t } are appended with text input tokens, resulting in \u02dc Yp = {SOS, Pt, t1, t2, \u00b7 \u00b7 \u00b7 , tL, ck, EOS}. The text encoder processes \u02dc Yp and prompted text feature is obtained as \u02dc gp = g( \u02dc Yp, \u03b8g). We use deep prompting which learns hierarchical prompts at subsequent transformer blocks of text encoder. Visual feature \u02dc f is obtained without utilizing learnable prompts. To adapt CLIP on image classification task on dataset D, prompts Pt are optimized in a supervised fashion using labeled image samples with cross-entropy loss, LCE. LCE = arg min Pt E(X,y)\u223cD L(sim( \u02dc f, \u02dc gp), y). (1) Prompt Ensembling with LLM descriptions. Several methods have recently proposed to adapt CLIP via trainingfree prompt ensembling techniques. The majority of these approaches leverage the capabilities of LLMs to mine rich descriptions, attributes, or high-level concepts of class names. The corresponding text features are either averaged [36] or the similarity score of each attribute with the image is calculated to obtain classification scores [39] [29]. In this work, we focus our comparison with a strong ensembling baseline CuPL [36]. Specifically, a Large Language Model F such as GPT-3 [5] is used to generate classspecific descriptions for class labels {1, 2, \u00b7 \u00b7 \u00b7 , C} using queries such as \u2018How does a CLASS look like\u2019. Text features of the same class description are averaged together, which serves as the ensembled text features. Finally, zero-shot inference is performed with those ensembled text features. 3.2. Prompt Learning with Text-Only Supervision While image-supervised prompt learning and LLM-based prompt ensembling methods have proven effective in adapting CLIP, they face notable challenges as outlined below. Visual data dependency. Existing prompt learning methods require visual samples with labels to optimize prompts using Eq. 1. However, collecting samples and labels is difficult in critical scenarios like medical images, remote sensing, and surveillance. Pseudo-labels alleviate label dependency but they are often less effective. Furthermore, these methods tend to overfit CLIP to source data distributions and compromise generalization across cross-datasets. For instance, CoOp utilizing labeled source samples reduces average CLIP performance by 1.27% on 10 cross-datasets. LLM Prompts transferabilty limitation. LLM-based prompt ensembling approaches like CuPL [36] generate class-specific LLM descriptions that cannot be directly transferred to unseen classes and datasets. 
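Before addressing these limitations, the preliminaries above can be made concrete with a minimal PyTorch-style sketch of the zero-shot matching rule, the image-supervised prompt-learning objective of Eq. 1, and CuPL-style feature ensembling. The tensor shapes, helper names, and temperature value are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch (not the authors' code). Assume d-dimensional CLIP
# features: image_feat [B, d], text_feats [C, d].

def zero_shot_logits(image_feat, text_feats, tau=0.01):
    """Zero-shot CLIP classification: cosine similarity scaled by a temperature."""
    image_feat = F.normalize(image_feat, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    return image_feat @ text_feats.t() / tau  # [B, C]; softmax gives class probabilities

def prompt_learning_ce_loss(image_feat, prompted_text_feats, labels, tau=0.01):
    """Eq. 1-style objective: only the prompt vectors that produced
    `prompted_text_feats` receive gradients; the CLIP encoders stay frozen."""
    logits = zero_shot_logits(image_feat, prompted_text_feats, tau)
    return F.cross_entropy(logits, labels)

def ensemble_class_feature(description_feats):
    """CuPL-style (training-free) ensembling: average the text features of
    several LLM descriptions of the same class. description_feats: [M, d]."""
    feat = F.normalize(description_feats, dim=-1).mean(dim=0)
    return F.normalize(feat, dim=-1)
```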
While opensource LLMs exhibit lower performance, proprietary ones such as GPT-3 are required for generating data for new classes and datasets leading to additional serving costs. \fOur work aims to address the aforementioned limitations within a unified framework. Below we detail our strategy for curating text-to-text data via LLMs for training, followed by our text-only prompt learning framework. 3.2.1 Text-Only LLM data for Prompt Learning As discussed in Sec. 3.1, optimizing prompts for downstream datasets typically requires image-labels pairs. Since we explicitly aim to bypass this requirement, we first leverage LLMs to curate text data for prompt learning which consists of text inputs and text outputs. Given a set of classes {ci}C i=1, we prepare text inputs {Li inputs}C i=1 by wrapping each class name in a standard hand-written text template, Li inputs = \u2018a photo of a ci\u2019. Next, we prepare text outputs corresponding to the Linputs. Specifically, we query GPT-3 model to generate detailed descriptions for each class name ci. Similar to CuPL [36], we prompt GPT-3 with different queries Q conditioned on class names such as \u2018How does a ci look like?\u2019 and \u2018How can you identify a ci?\u201d to obtain text outputs, Li outputs = F(Q|ci). Similar to [36], we generate M text outputs per query Q and use N different queries, resulting in M \u00d7 N text outputs per class category. We associate all Loutputs with the corresponding single Linputs for each class ci. As LLMs are pre-trained on internet-scale text corpora, they possess the capability of generating very diverse and highquality descriptions and captions for different class categories which results in high-quality text outputs. Finally we combine Linputs and Loutputs to create LLM based text-to-text data for text only prompt learning, DPROMPT = {Li inputs, Li outputs}M\u00d7N\u00d7C i=1 . We refer the readers to supplementary for additional details on the choice of LLM prompts and examples of DPROMPT. 3.2.2 Contextual mapping with Prompt Learning To leverage LLM text-to-text data DPROMPT for learning generalized transferable prompts, we propose a contextual mapping strategy that effectively learns a mapping function that maps standard class name templates such as \u2018a photo of a ci\u2019 to the text feature generated from a LLM description which contains more information about the class ci. In other words, contextual mapping allows learnable prompts to map Linputs to Loutputs in the text feature space of CLIP. The mapping function is realized in the form of learnable prompt vectors, which we found to be more effective in our ablations as compared to other techniques such as adapters via linear projection and MLP. For an ith training sample from DPROMPT consisting of a text-to-text pair {Linputs, Loutputs}i, we obtain prompted class-name feature \u02dc gp for Li inputs using learnable prompts and frozen LLM feature \u02dc g for Li outputs without the prompt vectors within the pre-trained latent space of CLIP. We then impose a contextual mapping constraint between \u02dc gp and \u02dc g text features as follows, Lmapping = 1 d d X i=1 ||\u02dc gp \u2212\u02dc g||2 2. (2) As shown above, we utilize MSE loss objective to enforce contextual mapping from Li inputs to Li outputs. We study other choices of consistency objectives in our ablations (Sec. 4.7). Motivation for Lmapping. 
Contextual mapping objective allows learnable prompts to exploit internal knowledge of text encoder of CLIP to generate rich contextual features aligned with the LLM descriptions (Li outputs) for a given class. This strategy effectively learns prompts without using any visual information and when trained using all training classes together, it enables prompts to capture versatile and generalized context from the LLM descriptions. These contextaware prompts become adaptable for use with any dataset and effectively enable the transferability of class-specific LLM descriptions to unseen classes and datasets. Consequently, this substantially reduces the per-dataset overhead associated with LLM serving and prompt engineering. Inference. Once text prompt vectors are optimized through our TextPro framework in the text domain, they become ready to be shipped with CLIP for downstream visual domain inference with a standard zero-shot CLIP inference setup. As shown in Fig. 2 (right), the learned prompts Pt are fused with each given class name to produce prompted text features {\u02dc gp}C i=1. Finally, zero-shot inference is performed with the prompted text features and the input image feature \u02dc f to produce classification scores on test images. 4. Experiments 4.1. Evaluation settings We perform evaluations in 4 benchmark settings. Prompt ensembling methods and ProText utilize text-only LLM data for adapting CLIP while image-supervised prompt learning methods use image-label pairs for training. Base-to-Novel Generalization. This setting evaluates the generalization of methods within a dataset. Following previous methods [49, 50], we split each dataset into base and novel classes. Models are trained on base classes and evaluated on the test set of base and novel classes respectively. Cross-dataset transfer. This setting evaluates the generalization ability of models trained on ImageNet-1k [8] source dataset by directly transferring it on cross-datasets. Domain Generalization. We evaluate the robustness of different methods on out-of-distribution datasets. We train \fMethod ImageNet Acc. 1: CLIP (ICML\u201921) 66.72 2: CLIP-Attribute 67.60 3: CLIP-80 68.32 4: DCLIP (ICLR\u201923) 68.03 5: Waffle CLIP (ICCV\u201923) 68.34 6: CuPL (ICCV\u201923) 69.62 7: ProText-Attribute 68.05 8: ProText-80 68.48 9: ProText-CuPL 70.22 Table 2. With the same amount of text data, learning contextual prompts with text-only supervision improves CLIP performance in comparison to the prompt ensembling techniques. Dataset CLIP [37] CuPL [50] ProText (Ours) Base Novel HM Base Novel HM Base Novel HM ImageNet 72.43 68.14 70.22 74.30 68.14 71.09 75.00 71.38 73.14 Caltech101 96.84 94.00 95.40 97.22 94.00 95.58 98.06 95.63 96.83 OxfordPets 91.17 97.26 94.12 94.42 97.26 95.82 94.95 98.00 96.45 StanfordCars 63.37 74.89 68.65 63.54 74.89 68.75 64.54 76.08 69.84 Flowers102 72.08 77.80 74.83 74.36 77.80 76.04 74.36 78.44 76.35 Food101 90.10 91.22 90.66 89.93 91.22 90.57 90.20 91.98 91.08 Aircraft 27.19 36.29 31.09 30.61 36.29 33.21 30.91 34.13 32.44 SUN397 69.36 75.35 72.23 76.02 75.35 75.68 76.14 79.14 77.61 DTD 53.24 59.90 56.37 62.85 59.90 61.34 63.08 61.59 62.33 EuroSAT 56.48 64.05 60.03 59.64 64.05 61.77 59.71 80.97 68.73 UCF101 70.53 77.50 73.85 75.28 77.50 76.37 75.54 79.50 77.47 Average 69.34 74.22 71.70 72.56 74.22 73.38 72.95 76.98 74.91 Table 3. Base-to-novel setting. ProText enables the transferability of learned prompts to new classes and improves over CuPL [36]. 
models on the ImageNet-1k source dataset and evaluate its performance on four ImageNet variants with domain shifts. Supervised setting. We provide performance comparison of ProText with CuPL[36] with text-only data per dataset. Datasets. For the aforementioned benchmarks, we use same datasets as followed by previous works [20, 21, 49, 50]. For cross-dataset transfer, domain generalization, and base-to-novel generalization settings, we use 11 image datasets that cover multiple recognition tasks. These includes ImageNet [8] and Caltech101 [11] which contains generic objects; OxfordPets [35], StanfordCars [22], Flowers102 [34], Food101 [4], and FGVCAircraft [28] for finegrained classification, SUN397 [45] for scene recognition, UCF101 [42] for action recognition, DTD [7] for texture classification, and EuroSAT [14] for satellite images categorization. For domain generalization setting, we train models on ImageNet [8] as a source dataset and use ImageNetA [16], ImageNet-R [15], ImageNet-Sketch [44] and ImageNetV2 [38] for out of distribution dataset evaluation. Implementation details. We use a publically available pretrained ViT-B/16 CLIP model from OpenAI [37]. We train ProText with Deep Language Prompting in the first 9 transformer blocks of the CLIP text encoder. For cross-dataset transfer and domain generalization setting, we train ProText using T = 4 and T = 16 language prompts with 10 and 200 epochs respectively. Similar to [44], ProText and zeroshot CLIP use additional concepts where available with its prompts such as \u2018a photo of a CLS, a type of flower\u2019 for OxfordFlowers [34]. For base-to-novel and supervised text-only settings, ProText uses optimal prompt length and epoch configuration for each dataset. Optimal training configuration is obtained through hyper-parameter search on validation split of datasets. To generate text-only data, we utilize GPT-3 DaVinci-002 model [5] and generate classspecific descriptions using the LLM prompts provided by CuPL [36]. We use publicly available CuPL data and generate descriptions for datasets not provided by CuPL. AdamW optimizer is used with 5 warm-up epochs for training. We use a single 16-GB V100 to train our models. Refer to supplementary material for additional implementation details. 4.2. Effectiveness of Text-Only Supervision We first present an ablation to motivate our approach of learning prompts with text-only supervision. We train ProText with 3 types of text data and evaluate performance on ImageNet-1k [8]. ProText-Attribute uses 46 templates from [1] which corresponds to common image attributes such as rotation, blurriness, etc. ProText-80 is trained on standard 80 templates provided by CLIP [37] and ProText-CuPL is trained on class-specific LLM data employed by our main baseline CuPL [36] for its ensembling approach. In Tab. 2, we compare ProText with CLIP and recent LLM-based ensembling methods. Prompt ensembling with attribute templates and 80 templates improves over CLIP single template result. Among the LLM-based ensembling methods, CuPL provide highest performance of 69.62%. In contrast, ProText uses a learning-based approach and shows competitive performance against prompt ensembling methods using the same text data. ProText-Attribute provides gain of 0.45% over CLIP-Attribute while roughly maintaining its performance against CLIP-80. When equipped with CuPL LLM text-data, ProText surpasses CuPL by 0.60% leading to highest performance against all methods. 
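As a concrete illustration of the text-only pipeline behind these variants, the sketch below curates the (L_input, L_output) pairs of Sec. 3.2.1, optimizes the contextual mapping objective of Eq. 2, and runs the zero-shot inference step with the learned prompts. The `query_llm` stub, the encoder interfaces, and the temperature are assumptions for illustration and are not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def query_llm(prompt: str, num_completions: int) -> list[str]:
    """Placeholder for an LLM client (the paper reports GPT-3 DaVinci-002); not a real API."""
    raise NotImplementedError

def build_text_only_dataset(class_names, queries=None, m_per_query=5):
    """D_PROMPT: each class-name template is paired with its M x N LLM descriptions."""
    if queries is None:
        queries = ["How does a {} look like?", "How can you identify a {}?"]  # N query types
    pairs = []
    for name in class_names:
        l_input = f"a photo of a {name}"
        for q in queries:
            for desc in query_llm(q.format(name), m_per_query):               # M outputs per query
                pairs.append((l_input, desc))
    return pairs

def contextual_mapping_loss(prompted_encoder, frozen_encoder, batch):
    """Eq. 2: MSE between the prompted class-name feature and the frozen LLM feature."""
    templates, descriptions = zip(*batch)
    g_p = prompted_encoder(list(templates))       # [B, d]; gradients flow to the prompts only
    with torch.no_grad():
        g = frozen_encoder(list(descriptions))    # [B, d]; frozen LLM-template features
    return F.mse_loss(g_p, g)                     # squared error averaged over dims and batch

@torch.no_grad()
def zero_shot_inference(image_feats, prompted_encoder, class_names, tau=0.01):
    """Learned prompts are fused with plain class-name templates; standard CLIP matching."""
    g_p = F.normalize(prompted_encoder([f"a photo of a {c}" for c in class_names]), dim=-1)
    f = F.normalize(image_feats, dim=-1)
    return (f @ g_p.t() / tau).argmax(dim=-1)     # predicted class indices
```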
These results motivate our approach that instead of prompt ensembling, one can achieve competitive results by utilizing the same available text data to learn prompts. Next, we demonstrate the generalization of ProText such that the learned prompts transfer well across new classes and datasets. 4.3. Base to novel class generalization We now present results in base-to-novel class generalization setting where training data for only base classes are available and the model is evaluated on both base and novel classes. For CuPL [36], we use base-class LLM templates for base classes and zero-shot CLIP results for its novel classes. For ProText, we use base-class LLM templates for training and transfer the learned prompts for novel classes. Results are shown in Tab. 3. CuPL outperforms zeroshot CLIP on base classes while maintaining its performance on novel classes as LLM prompts for new classes are not available. ProText shows consistent improvements over CuPL on base classes for 11 datasets. Furthermore, \fSource Target ImageNet Caltech101 OxfordPets StanfordCars Flowers102 Food101 Aircraft SUN397 DTD EuroSAT UCF101 Average Methods utilizing labeled visual samples CoOp 71.51 93.70 89.14 64.51 68.71 85.30 18.47 64.15 41.92 46.39 66.55 63.88 Co-CoOp 71.02 94.43 90.14 65.32 71.88 86.06 22.94 67.36 45.73 45.37 68.21 65.74 MaPLe 70.72 93.53 90.49 65.57 72.23 86.20 24.74 67.01 46.49 48.06 68.69 66.30 PromptSRC 71.27 93.60 90.25 65.70 70.25 86.15 23.90 67.10 46.87 45.50 68.75 65.81 Zero-shot & Prompt ensembling methods CLIP 66.72 92.98 89.13 65.29 71.30 86.11 24.90 62.59 44.56 47.84 66.83 65.15 CuPL 69.62 92.98 89.13 65.29 71.30 86.11 24.90 62.59 44.56 47.84 66.83 65.15 Prompt learning with text-only supervision ProText (Ours) 69.80 94.81 91.01 66.00 72.35 86.66 24.72 67.34 47.93 51.86 69.60 67.23 Table 4. Crossdataset transfer setting. CuPL and CLIP perform same for cross-datasets as CuPL source data cannot transfer to cross-datasets. Image-based models are trained on 16-shot ImageNet samples. ProText employ same ImageNet data as CuPL for prompt learning. Source Target ImageNet -V2 -S -A -R Avg. Methods utilizing labeled visual samples CoOp 71.51 64.20 47.99 49.71 75.21 59.28 CoCoOp 71.02 64.07 48.75 50.63 76.18 59.91 MaPLe 70.72 64.07 49.15 50.90 76.98 60.27 Zero-shot & Prompt ensembling methods CLIP 66.72 60.83 46.15 47.77 73.96 57.18 CuPL 69.62 63.27 49.02 50.72 77.05 60.01 Prompt learning with text-only supervision ProText (Ours) 70.22 63.54 49.45 51.47 77.35 60.45 Table 5. Domain generalization. Prompt learning methods are trained on imageNet and evaluated on datasets with domain shifts. Dataset CLIP CuPL ProText \u2206 ImageNet 66.72 69.60 70.22 +0.62 Caltech101 92.98 94.32 95.29 +0.97 DTD 44.56 53.96 54.02 +0.06 EuroSAT 47.84 60.27 58.53 -1.74 StanfordCars 65.29 65.95 66.77 +0.82 Flowers102 71.30 73.85 74.42 +0.57 Aircraft 24.90 27.66 29.01 +1.35 SUN397 62.59 69.00 69.76 +0.76 OxfordPets 89.13 91.11 92.72 +1.61 UCF101 66.83 70.63 71.45 +0.82 Food101 86.11 86.11 86.68 +0.57 Average 65.15 69.31 69.90 +0.59 Table 6. ProText results with text supervision on each dataset. We compare ProText with CLIP and CuPL. Gains of ProText over CuPL are shown in blue. with the same LLM base-class data as CuPL, ProText effectively transfers learned prompts towards novel classes and improves CLIP and CuPL novel class performance by 2.76% averaged across 11 datasets. This shows the advantage of ProText prompts to benefit unseen class performance potentially reducing the LLM prompt serving cost by half. 4.4. 
Cross-dataset transfer In cross-dataset transfer setting, we compare ProText with CLIP [37], CuPL [36], and image-supervised prompt learning methods. Since class-specific ImageNet LLM prompts limit its transfer to other datasets in CuPL, we assign CLIP results to CuPL for cross-datasets. Image-supervised methods [20, 21, 49, 50] are trained with 16-shot ImageNet data. We show our main comparison results in Tab. 4. CuPL improves ImageNet performance of CLIP by ensembling ImageNet LLM prompts, while its cross-dataset results remain the same as CLIP. In contrast, ProText effectively addresses the transferability challenges of CuPL using generalized prompts trained with the same ImageNet LLM data. Since ProText allows generalization to unseen datasets, these learned prompts can directly be used with CLIP for cross-datasets leading to absolute average gains of +2.1% against CLIP and CuPL. With ProText, one can notably reduce proprietary LLM serving and prompt engineering costs as prompts learned on one dataset are effectively transferable to other datasets. We next compare ProText with strong 16-shot image-supervised methods. Without using any visual samples, ProText demonstrates effective generalization on cross-datasets and consistently surpasses previous state-of-the-art MaPLe on 9/10 datasets leading to the highest average accuracy of 67.23%. This highlights that text-only methods like ProText can lead to better generalization of CLIP as compared to image-supervised methods which tend to overfit on the source sample distributions. 4.5. Domain generalization experiments We present the results for domain generalization task in Table 5. As the domain shift variants of ImageNet share class names with ImageNet, CuPL employs prompt ensembling for each dataset and provides an average gain of +2.84% over CLIP. In contrast, ProText with learned prompts shows an additional gain of +0.44% against CuPL averaged over 4 datasets. Moreover, ProText fairs competitively with imagesupervised methods by showing consistent improvements over CoOp, CoCoOp, and MaPLe. These results suggest that text-only supervision methods like ProText can serve as an effective alternative to improve the robustness of VLMs when no visual information is available for training. \fFigure 3. Ablation: Prompt length (left) and prompt depth (right). Method ImageNet Top1. 1: ProText-contrastive loss 68.12 2: ProTextL1 loss 69.96 3: ProText-MSE loss 70.22 Table 7. Ablation of choice of loss for contextual mapping. MSE loss provides highest results. Method ImageNet Top1 1: ProText-80 templates 68.48 2: ProText-Alpaca 67.10 3: ProText-GPT-3 70.22 Table 8. Effect on performance with different text data for training. GPT-3 text data show highest results. Method ImageNet Top1. 1: Linear Adaptor 69.36 2: MLP Adaptor 69.24 3: Prompt Learning 70.22 Table 9. Ablation on the choice of mapping network. Prompt Learning shows optimal performance. Correct class confidence (%) \u2191Incorrect class confidence (%) \u2193 Method DTD SUN Caltech UFC DTD SUN Caltech UFC CLIP 30.5 49.3 84.5 56.4 1.51 0.13 0.16 0.44 ProText 33.1 54.2 89.1 59.5 1.45 0.12 0.11 0.40 Table 10. Confidence score analysis: ProText trained on ImageNet improves its logit confidence for correct classes in unseen datasets. 4.6. Supervised text-only training In this setting, we compare ProText with CuPL for each dataset trained on LLM template data and the results are shown in Tab. 6. 
While utilizing the same LLM data, ProText achieves consistent improvements over CuPL on 10/11 datasets with an average gain of +0.59%. This reflects the generalization of the ProText approach across various diverse image datasets where it better utilizes LLM data within the learned prompts. We also compare ProText with image-supervised methods and observe that ProText fares competitively with approaches utilizing up to 2-shot samples for training. This shows ProText as a potential alternative to image-supervised methods in extremely low-data regimes. Refer to supplementary for additional results. 4.7. Ablative analysis On understanding ProText prompts. In Table. 10, we present average confidence scores obtained from ProText logits trained on ImageNet-1k text data when applied to Figure 4. (Left) Effect of LLM data size on performance. (Right) Ablation on ensembling LLM descriptions for training ProText. cross-datasets. Compared to CLIP, ProText exhibits increased confidence scores for correct classes across various datasets, while marginally decreasing confidence scores for incorrect classes. This suggests that the prompts learned on ImageNet-1k provide complementary and transferable contextual cues, leading to improved results. We conjecture that ProText prompts potentially improve the classification of test samples situated near the decision boundary due to higher confidence for correct classes. Refer to the supplementary section for qualitative and additional analysis. Loss metric in contextual mapping. We ablate on choice of loss used for the contextual mapping module in Tab. 7. Distance-based losses improve over contrastive loss. We conjecture that contrastive loss treats samples of same class labels in a same batch as negatives leading to noisy training. Choice of LLM for generating text data. ProText by default uses GPT-3 [5] LLM to obtain text templates for training. Here we ablate on an open-source Alpaca [43] model as an alternative choice. As shown in Tab. 8, ProText with Alpaca templates performs worse than ProText-80 template and ProText-GPT-3. We observed that Alpaca templates are often noisy while GPT-3 descriptions contain more enriched class details which results in better performance. Prompt learning verses adapter. While ProText employs prompt learning to learn contextual mapping from LLM templates, here ablations on adapters in Tab. 9. Similar to [12], we attach adapter at the output of CLIP text encoder. Adapters perform lower as compared to prompting. We conjecture that adapter completely transforms text features and loses CLIP generalization. In contrast, prompt learning append learnable vectors with CLIP text input without significant replacement and learns effective mapping function. Training data size for text-supervision. To assess the effect of LLM template data size on ProText, we ablate on the number of descriptions per class in Fig. 4 (left). Increasing descriptions for each class consistently improves the results. This suggests that we could further boost ProText performance as quality and size of text data increases. Ensembling in ProText training. ProText uses multiple descriptions per class and enforce mapping of class-name template feature to feature of each LLM description for that class. We conduct an alternative experiment by ensembling \fa single feature from multiple LLM descriptions per class and enforce mapping on ensembled LLM feature. As shown in Fig. 4 (right), ProText-ensembled performs lower than ProText with individual samples. 
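A minimal sketch of the two training variants compared in Fig. 4 (right): mapping the class-name feature to each LLM description individually versus mapping it to a single ensembled LLM feature. Shapes and names are assumed for illustration.

```python
import torch
import torch.nn.functional as F

# g_p: prompted class-name feature [d]; desc_feats: frozen features of the
# M x N LLM descriptions for that class [K, d]. Shapes are assumptions.

def per_description_loss(g_p, desc_feats):
    """Default variant: map the class-name feature to each description feature."""
    return F.mse_loss(g_p.expand_as(desc_feats), desc_feats)

def ensembled_loss(g_p, desc_feats):
    """Ablation variant: map to one averaged LLM feature, which can wash out
    the less frequent details carried by individual descriptions."""
    return F.mse_loss(g_p, desc_feats.mean(dim=0))
```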
We conjecture that learning on each description allows the model to utilize additional context present in each description. Ensembling can potentially mask out less frequent details available in text. Prompt length and prompt depth. Fig. 3 (left) shows the effect of prompt length for training ProText. Setting prompt length to 16 leads to optimal performance. Fig. 3 (right) shows the effect of prompt depth on final performance where prompt depth of 9 shows optimal results. 5." + }, + { + "url": "http://arxiv.org/abs/2307.06948v2", + "title": "Self-regulating Prompts: Foundational Model Adaptation without Forgetting", + "abstract": "Prompt learning has emerged as an efficient alternative for fine-tuning\nfoundational models, such as CLIP, for various downstream tasks. Conventionally\ntrained using the task-specific objective, i.e., cross-entropy loss, prompts\ntend to overfit downstream data distributions and find it challenging to\ncapture task-agnostic general features from the frozen CLIP. This leads to the\nloss of the model's original generalization capability. To address this issue,\nour work introduces a self-regularization framework for prompting called\nPromptSRC (Prompting with Self-regulating Constraints). PromptSRC guides the\nprompts to optimize for both task-specific and task-agnostic general\nrepresentations using a three-pronged approach by: (a) regulating prompted\nrepresentations via mutual agreement maximization with the frozen model, (b)\nregulating with self-ensemble of prompts over the training trajectory to encode\ntheir complementary strengths, and (c) regulating with textual diversity to\nmitigate sample diversity imbalance with the visual branch. To the best of our\nknowledge, this is the first regularization framework for prompt learning that\navoids overfitting by jointly attending to pre-trained model features, the\ntraining trajectory during prompting, and the textual diversity. PromptSRC\nexplicitly steers the prompts to learn a representation space that maximizes\nperformance on downstream tasks without compromising CLIP generalization. We\nperform extensive experiments on 4 benchmarks where PromptSRC overall performs\nfavorably well compared to the existing methods. Our code and pre-trained\nmodels are publicly available at: https://github.com/muzairkhattak/PromptSRC.", + "authors": "Muhammad Uzair Khattak, Syed Talal Wasim, Muzammal Naseer, Salman Khan, Ming-Hsuan Yang, Fahad Shahbaz Khan", + "published": "2023-07-13", + "updated": "2023-08-24", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Vision-Language (VL) models, such as CLIP [35] and ALIGN [20], have demonstrated remarkable generalization capabilities for downstream tasks. These VL models *Joint first authors. \u0000uzair.khattak@mbzuai.ac.ae are trained on large-scale web data with a contrastive loss, which allows them to encode open-vocabulary concepts by aligning pairs of images and texts in a shared embedding space. The resulting model is suited for downstream tasks such as open-vocabulary image recognition [23], object detection [11], and image segmentation [29]. Prompt learning has emerged as a more efficient alternative to fine-tuning large-scale models, as shown in recent studies [58, 59, 3, 17, 40, 28]. This approach introduces a few learnable prompt vectors to adapt models like CLIP for downstream tasks while keeping the pre-trained model weights fixed. 
However, since the prompts are optimized with respect to the task-specific objective [59], such as the cross-entropy loss for ImageNet [6] classification, the prompted model tends to overfit to the task-specific data distribution as the training progresses. This can result in the prompted model losing the original generalization capability of the frozen CLIP model towards new tasks. Therefore, learning prompts that can model both task-specific and task-agnostic representations remains a major challenge for adapting foundational VL models. This work seeks to self-regulate prompts to address the issue of prompt overfitting. To this end, we propose a self-regularizing framework that guides the prompts to jointly optimize for both task-specific and task-agnostic general representations using a three-pronged approach. a) Regulating via Mutual Agreement Maximization: We observe that generalizable zero-shot knowledge is preserved within frozen pre-trained VL model features, but they lack task-specific knowledge. In contrast, prompts achieve better adaptation to a given task but with reduced generalizability to new tasks. Therefore, we propose to regulate learned prompts by maximizing the agreement between prompted and frozen VL model features while adapting them to the downstream task. b) Regulating with the Self-ensemble: In the early epochs, prompts are not mature enough to capture contextual information. As the training progresses, prompts tend to become more task-specific. Therefore, we deploy a weighted prompt aggregation technique during training to regulate the prompts using their self-ensemble over the training phase. The weights are sampled from a Gaussian distribution which suitably aggregates the useful knowledge learned by prompts at different training epochs. Figure 1: (Left): Existing prompt learning approaches rely on task-specific objectives that restrict prompt learning to learn a feature space suitable only for downstream tasks and consequently lose the generalized knowledge of CLIP (shown in purple). Our self-regulating framework explicitly guides the training trajectory of prompts towards the closest point between two optimal solution manifolds (solid line) to learn task-specific representations while also retaining generalized CLIP knowledge (shown in green). (Middle): Averaged across 11 image recognition datasets, PromptSRC surpasses existing methods on the base-to-novel generalization setting. (Right): We evaluate our approach on four diverse image recognition benchmarks and it overall shows competitive results compared to the previous state-of-the-art. c) Regulating with Textual Diversity: We note that, unlike having multiple image samples per category for the vision encoder, there is only a single textual label available for each class. Therefore, imposing the mutual agreement constraints on multi-modal features results in sub-optimal performance due to the lack of diversity in text-side labels for the text encoder. We overcome this disparity and regulate the prompts through diverse text label templates for each class. Overall, our approach explicitly steers prompts to learn a representation space that maximizes its performance on downstream tasks without compromising pre-trained CLIP generalization (Fig. 1: Left). We demonstrate the effectiveness of PromptSRC on four representative tasks. On the base-to-novel generalization benchmark across 11 datasets (Fig.
1: Middle), our method achieves average gains of +1.42% in harmonic-mean over the state-of-the-art MaPLe [22] and +8.26% over CLIP. Further, PromptSRC achieves competitive results in cross-dataset transfer, domain generalization, and few-shot image recognition (Fig. 1:Right). In summary, our self-regulating prompt learning framework has the following main contributions: \u2022 We address the inherent problem of prompt overfitting for adapting foundational models through selfregularization. Our framework explicitly guides the prompts to jointly acquire both task-specific knowledge and task-agnostic generalized knowledge by maximizing the mutual agreement between prompted and frozen VL model features. (\u00a73.2.1) \u2022 We suggest a weighted self-ensembling strategy for prompts that captures their complementary features learned at different epochs during training and enhances their generalization performance. (\u00a73.2.2) \u2022 To overcome the significant diversity mismatch between the text and visual domains, we propose textside diversity which complements limited textual labels via multiple text augmentations and regularizes prompts to learn more generalized contexts. (\u00a73.2.3) 2. Related Work Vision Language models: Foundational vision-language (VL) models [35, 20, 54, 49, 51] leverage both visual and textual modalities to encode rich multi-modal representations. These models are pre-trained on a large corpus of image-text pairs available on the internet in a selfsupervised manner. For instance, CLIP [35] and ALIGN [20] utilize around 400M and 1B image-text pairs, respectively, to train their multi-modal networks. During pre-training, contrastive loss is commonly used as a selfsupervision loss. This loss pulls together the features of paired images and texts while pushing away the unpaired image-text features. VL models possess a strong understanding of open-vocabulary concepts, making them suitable for various downstream vision and vision-language applications [12, 56, 38, 30, 60, 13, 32, 53, 26, 36, 8]. However, transferring these foundational models for downstream tasks without compromising on their original generalization ability still remains a major challenge. Our work aims to address this problem by proposing a novel regularization framework to adapt VL models via prompt learning. Prompt learning: Prompt learning is an alternative finetuning method for transferring a model towards downstream tasks without re-learning the trained model parameters. This approach adapts a pre-trained model by adding a small number of new learnable embeddings at the input known as prompt tokens. Due to its efficiency in terms of parameters and convergence rate, prompt learning is found to be of great interest for adapting foundational models like CLIP for vision [21, 57, 45, 46] and vision-language tasks [59, 58, 61, 7]. CoOp [59] fine-tunes CLIP by optimizing a continuous set of prompt vectors in its language branch for few-shot image recognition. Bahng et al. [1] perform visual prompt tuning on CLIP by learning prompts \fon the vision branch. [3] and [28] propose to learn multiple sets of prompts for learning different contextual representations. CoCoOp [58] highlights the overfitting problem of CoOp and proposes to condition prompts based on visual features for improved performance on generalization tasks. MaPLe [22] proposes a multi-modal prompt learning approach by learning hierarchical prompts jointly at the vision and language branches of CLIP for better transfer. 
Our approach builds on a variant [37] where prompts are learned at both the vision and language encoder of CLIP. Network regularization: Incorporating regularization techniques in neural networks has been proven to enhance their generalization capabilities [25]. Regularization strategies can be broadly classified into two streams. The first stream consists of constraint-based regularization methods, such as weight decay [27] and adversarial training [50]. These techniques introduce additional constraints to the learning process, which helps to prevent overfitting. The second stream of regularization techniques involves modifying the inputs, model parameters, or annotations. This category includes methods such as data augmentations [52, 55, 5], dropout [42], model ensembling [18, 47], label smoothing [43] and batch normalization [19]. Our method aims to enhance the generalization performance of learned prompts via a multi-stage regularization framework, which takes inspiration from both streams of regularization techniques mentioned above. However, to the best of our knowledge, this is the first effort to regularize prompts during adaptation by jointly attending to the original VL model feature space, the training trajectory of prompts as well as the diversity of textual inputs for the multi-modal models. 3. Proposed Method Prompt learning aims to adapt the general knowledge of VL foundational models like CLIP without full fine-tuning [59, 58, 3]. Since prompts are the only learnable vectors, this strategy aims to retain the pretrained generalized feature representations of CLIP while re-purposing them for downstream task-specific data via prompts. Although effective, they are susceptible to overfitting on the supervised downstream task (see Fig. 2) and their generalization towards new classes and datasets reduces as compared to the original zero-shot pre-trained CLIP. Our work seeks to address the overfitting behavior of prompts. Unlike prior prompting approaches that improve generalization mainly from the model architecture perspective [58, 22], we motivate our work from the regularization perspective. As evidenced by the strong zero-shot performance, pre-trained CLIP features possess robust generalization characteristics. However, naively training prompts with the supervised task-specific loss struggles to retain these general attributes from the frozen CLIP. To this end, we propose a self-regularizing framework to explicitly guide the Figure 2: Naively training prompts with standard supervised objectives improves supervised class performance but leads to poor generalization as training schedule increases. Our PromptSRC method with explicit prompts consistency constraints improves on base classes as well as shows improvements on novel classes. training trajectory of prompts to maximize its interaction with the pre-trained knowledge stored in the frozen CLIP. Fig. 3 shows our overall methodology which optimizes the prompts as follows. a) Regularization through mutual agreement maximization: We impose an explicit consistency constraint between prompted features and the pretrained CLIP features within the CLIP embedding space. b) Regularization through prompt self-ensembling: To further reduce overfitting, we propose a Gaussian weighted average of the prompt vectors learned at different training epochs. This ensemble-level regularization aggregates information from learned prompts across different epochs for improved generalization. 
c) Regularization through textual diversity: Unlike having multiple images for each class, the text labels during fine-tuning are limited and bounded by the number of class categories. We incorporate textual augmentations by defining multiple text label templates for a given class. The ensemble of textual labels regularizes the prompts for better generalization during optimization. We now continue by explaining our methodology in detail. We first revisit CLIP and CLIP-based prompt learning in Sec. 3.1. This is followed by the explanation of our selfregulating prompt learning approach in Sec. 3.2. 3.1. Preliminaries We denote the CLIP image and text encoders as f and g, respectively and their pretrained parameters as \u03b8CLIP = {\u03b8f, \u03b8g} where \u03b8f and \u03b8g refer to the image and text encoder parameters, respectively. The input image X \u2208 RC\u00d7H\u00d7W is divided into M patches followed by a projection to produce patch tokens. Further, a learnable class token ecls is appended with the input patches as \u02dc X = {ecls, e1, e2, \u00b7 \u00b7 \u00b7 , eM}. The image encoder f encodes the input patches via multiple transformer blocks to produce a latent visual feature representation \u02dc f = f( \u02dc X, \u03b8f), where \u02dc f \u2208Rd. Next, the corresponding class label \fFigure 3: Our proposed PromptSRC framework for self-regulating prompt learning. CLIP encoders are used to generate prompted ( \u02dc fp, \u02dc gp) and pre-trained ( \u02dc f, \u02dc g) features at the image and text sides. First, we introduce textual diversity (\u00a73.2.3) and define textual augmentations to produce a diverse set of frozen VL textual features, which are averaged to represent the pre-trained VL text features (\u02dc g). Next, we employ Mutual Agreement Maximization constraints (LSCL) to regulate the prompts, which ensure that the prompted features align well with the pre-trained VL representations at both the feature and logit levels (\u00a73.2.1). As CLIP is frozen, we use the same VL encoders to obtain both types of features. Further, our prompt self-ensembling combines the strengths of prompts learned at different epochs (P1, P2 \u00b7 \u00b7 \u00b7 PE) during training via Gaussian weighted sampling (\u00a73.2.2). The ensembled visual and textual prompts are then used for the final inference. y is wrapped within a text template such as \u2018a photo of a {class label}\u2019 which can be formulated as \u02dc Y = {tSOS, t1, t2, \u00b7 \u00b7 \u00b7 , tL, ck, tEOS}. Here {tl|L l=1} and ck are the word embeddings corresponding to the text template and the class label, respectively while tSOS and tEOS are the learnable start and end token embeddings. The text encoder g encodes \u02dc Y via multiple transformer blocks to produce the latent textual feature as \u02dc g = g( \u02dc Y , \u03b8g), where \u02dc g \u2208Rd. For zero-shot inference, textual features of text template with class labels {1, 2, \u00b7 \u00b7 \u00b7 , C} are matched with image feature \u02dc f as exp(sim(\u02dc g\u00b7 \u02dc f)\u03c4) PC i=1 exp(sim( \u02dc gi\u00b7 \u02dc f)\u03c4), where sim() denotes the cosine similarity and \u03c4 is the temperature. Prompt Learning for CLIP: Prompt learning approaches append learnable prompt tokens at either the text [59, 58] encoder or image [1] encoder. We use a simple baseline method [37] that learns hierarchical prompt tokens on both the text and image encoders separately, named as Independent Vision-Language Prompting (IVLP). 
Specifically, we append T learnable language prompts and V visual prompts, given as Pt = {p1 t, p2 t, \u00b7 \u00b7 \u00b7 , pT t } and Pv = {p1 v, p2 v, \u00b7 \u00b7 \u00b7 , pV v }, to the textual and visual input tokens, respectively. Therefore, the image encoder processes the following input tokens \u02dc Xp = {Pv, ecls, e1, e2, \u00b7 \u00b7 \u00b7 , eM} to generate the prompted visual feature represented as \u02dc fp = f( \u02dc Xp, \u03b8f). Similarly, the textual feature is obtained as \u02dc gp = g( \u02dc Yp, \u03b8g), where \u02dc Yp = {tSOS, Pt, t1, t2, \u00b7 \u00b7 \u00b7 , tL, ck, tEOS}. In contrast to shallow prompting, where learnable prompts are introduced only at the first transformer block of the image and text encoders, our approach uses deep prompting which learns separate sets of prompts at every transformer block. The vision and language prompts are jointly represented as P = {Pv, Pt}. The feature representations obtained using these learnable prompts are referred to as prompted features. For image classification on a downstream dataset D, prompts P interact with pre-trained and frozen \u03b8f and \u03b8g and are optimized with the cross-entropy loss, LCE, as: \\mathcal {L}_{\\text {CE}} = \\text {arg}\\min _{\\bm {P}}\\, \\mathbb {E}_{(\\bm {X}, {y})\\sim \\mathcal {D}} \\, \\mathcal {L} (\\text {sim}(\\bm {\\Tilde {f}_p},\\bm {\\Tilde {g}_p}), y). (1) 3.2. Self-Regularization for Prompt Learning The LCE objective employs ground-truth labels to optimize the prompts for the downstream task. As a result, the prompts adapt and learn task-specific knowledge. During training, prompts interact with pre-trained and frozen CLIP tokens through self-attention layers in the transformer blocks. This interaction of prompt tokens with pre-trained CLIP weights \u03b8CLIP provides implicit regularization and encourages retaining the task-agnostic generalized knowledge within learned prompts. However, as shown in Fig. 2, prompts tend to overfit on the supervised task and drift away from the generalized CLIP space as the training schedule increases. Consequently, new task performance is degraded, despite the fact that the CLIP image and text encoder weights \u03b8f and \u03b8g are kept frozen. As prompts undergo further training, the implicit generalization constraint becomes weaker against the task-specific LCE objective. One naive approach to address this issue is to reduce the training schedule to balance the performance between the base and new tasks. However, training the prompts for fewer iterations to prevent losing generalization comes at the cost of relatively lower performance on the supervised task. Here, we present a prompt learning approach that maximizes supervised task performance without sacrificing performance on novel tasks and classes. We propose to anchor prompt training with self-regularization which constitutes three main components as discussed below.
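A minimal sketch of the IVLP baseline described above: learnable prompt tokens are prepended to the frozen token sequences of both encoders and optimized with the cross-entropy objective of Eq. 1. Prompt lengths, embedding widths, and the temperature are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IVLPrompts(nn.Module):
    """Independent vision-language prompt tokens; only these are trainable."""
    def __init__(self, n_vis=4, n_txt=4, d_vis=768, d_txt=512):
        super().__init__()
        self.P_v = nn.Parameter(torch.randn(n_vis, d_vis) * 0.02)  # visual prompts
        self.P_t = nn.Parameter(torch.randn(n_txt, d_txt) * 0.02)  # language prompts

    def prepend(self, vis_tokens, txt_tokens):
        """vis_tokens: [B, M+1, d_vis] (CLS + patches); txt_tokens: [B, L, d_txt]."""
        B = vis_tokens.size(0)
        vis = torch.cat([self.P_v.expand(B, -1, -1), vis_tokens], dim=1)
        txt = torch.cat([self.P_t.expand(B, -1, -1), txt_tokens], dim=1)
        return vis, txt  # fed through the frozen CLIP encoders

def ce_objective(prompted_image_feat, prompted_text_feats, labels, tau=0.01):
    """Eq. 1: cross-entropy on prompted features; gradients reach the prompts only."""
    logits = F.normalize(prompted_image_feat, dim=-1) @ \
             F.normalize(prompted_text_feats, dim=-1).t() / tau
    return F.cross_entropy(logits, labels)
```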
As we do not require any second model for such conditioning, we refer to this regularizing constraint as a self-consistency loss (SCL). For a given input sample and its corresponding textual label, we obtain visual features using learnable prompts and pre-trained visual features, \u02dc fp and \u02dc f, within the frozen CLIP latent space. Similarly, we obtain textual features \u02dc gp and \u02dc g. We then impose a constraint on the prompted visual and text features to ensure their consistency with the CLIP pre-trained features as follows, \\mathcal {L}_{\\text {SCL-image}} = \\sum _{i=1}^{d}|\\bm {\\Tilde {f}_{p}} \\bm {\\Tilde {f}}|, \\; \\mathcal {L}_{\\text {SCL-text}} = \\sum _{i=1}^{d} |\\bm {\\Tilde {g}_{p}} \\bm {\\Tilde {g}}|. (2) As shown in Eq. 2, we utilize the L1 loss to impose feature-level consistency. Note that our self-consistency constraint is also compatible with other variants of matching losses such as cosine similarity or MSE loss, which we study in our ablations (Sec. 4.7). To further complement the regularization constraint and maximize the alignment between the general features and the prompted features, we impose logit-level self-consistency regularization and condition the prompted logit distribution on the pre-trained CLIP logit distribution by minimizing the Kullback-Leibler divergence as follows, \\mathcal {L}_{\\text {SCL-logits}} = \\mathcal {D}_{KL} (\\text {sim}(\\bm {\\Tilde {f}_p},\\bm {\\Tilde {g}_{p}}), \\text {sim}(\\bm {\\Tilde {f}},\\bm {\\Tilde {g}})). (3) Overall, the self-consistency training objectives guide the prompts to gain complementary knowledge from pre-trained CLIP features, therefore providing strongly generalized prompts, \\mathcal {L}_{\\text {SCL}} = \\lambda _{1}\\mathcal {L}_{\\text {SCL-image}} + \\lambda _{2}\\mathcal {L}_{\\text {SCL-text}} + \\mathcal {L}_{\\text {SCL-logits}}, (4) where \u03bb1 and \u03bb2 are loss-balancing hyper-parameters. Our overall training objective thus becomes, \\mathcal {L}_{\\text {final}} = \\mathcal {L}_{\\text {CE}} + \\mathcal {L}_{\\text {SCL}}. (5) Discussion on Lfinal: The LSCL loss guides the prompts to converge at solutions that are generalized. On the other hand, LCE guides the prompts to maximize performance on the downstream supervised tasks. The combination of these losses conditions the prompts to maximize their performance on supervised tasks and, at the same time, guides the prompts' learning trajectory toward a weight space that is consistent with the CLIP zero-shot features. As shown in Fig. 2, our proposed methodology maximizes the supervised tasks' performance while also improving generalization. This shows that the proposed training objectives for the prompt learning setup are complementary to each other. 3.2.2 Regularization with prompt self-ensembling The second component in our self-regularizing framework enforces regularization using prompt self-ensembling. Model ensembling in the weight space has been shown to improve both the performance and generalization of a model [47, 18]. However, it has not been actively studied in the context of prompt learning, where the prompts are the only learnable parameters and the model parameters are frozen. To effectively utilize the prompt knowledge from previous training iterations, we propose prompt aggregation for a generalizable solution.
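The self-consistency constraints of Eqs. 2-5 can be sketched as below; feature shapes, the temperature, and the softened distributions used for the KL term are illustrative assumptions, and the default lambda values follow those reported later in the implementation details.

```python
import torch
import torch.nn.functional as F

def scl_loss(f_p, f, g_p, g, lambda1=10.0, lambda2=25.0, tau=0.01):
    """f_p, f: prompted / frozen image features [B, d];
    g_p, g: prompted / frozen text features [C, d]."""
    l_img = (f_p - f).abs().sum(dim=-1).mean()   # Eq. 2, image side (L1 consistency)
    l_txt = (g_p - g).abs().sum(dim=-1).mean()   # Eq. 2, text side (L1 consistency)

    logits_p = F.normalize(f_p, dim=-1) @ F.normalize(g_p, dim=-1).t() / tau
    logits_f = F.normalize(f, dim=-1) @ F.normalize(g, dim=-1).t() / tau
    l_logits = F.kl_div(F.log_softmax(logits_p, dim=-1),  # Eq. 3: KL between prompted
                        F.softmax(logits_f, dim=-1),      # and frozen logit distributions
                        reduction="batchmean")

    return lambda1 * l_img + lambda2 * l_txt + l_logits   # Eq. 4

def final_loss(ce_loss, scl):
    return ce_loss + scl                                   # Eq. 5
```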
For a training schedule with E total epochs, prompts at every epoch are given by {Pt}E t=1. Aggregated prompts (AP) are then calculated as, \\{\\bm {P}\\}^{\\text {AP}} = \\sum _{t=1}^{E}{\\frac {w_{t} \\cdot \\bm {P}_{t}}{\\sum _{i=1}^{E}w_{i}}}, (6) where wt is the weight assigned to the prompts at epoch t. In the early epochs, prompts are not mature enough to capture contextual information due to their random initialization. During aggregation, they should be given less weight as they act as noise which is carried along with the input tokens. On the other hand, the prompts learned in the last few epochs are task-specific and highly favour the supervised downstream task distribution. We propose to perform Gaussian weighted prompt aggregation (GPA), where small aggregation weights are given to prompts at initial epochs, higher weights to prompts at middle epochs, and relatively lower weights to prompts at final epochs, resulting in optimal prompt representations that improve generalization to downstream tasks. GPA provides optimal weight values wi by sampling from a Gaussian distribution wi \u223cN(\u00b5, \u03c32), where \u03c32 and \u00b5 are hyper-parameters and \\sum _{i=1}^{E} w_{i} = 1. The Gaussian distribution is defined over the epochs and its mean is dictated by the epoch number. We formulate this weighting as a moving average to avoid saving multiple copies of prompts, by keeping one additional copy which is updated via aggregation at every epoch i, {\\bm {P}}^{\\text {GPA}} = \\sum _{i=1}^{E}w_{i} \\cdot \\bm {P}_{i}. (7) 3.2.3 Regulating prompts with textual diversity Through the LSCL loss, the visual prompted features instill diverse generalized contexts from pre-trained CLIP visual features, as multiple image samples are present for each label category. This provides a natural source of augmentations at the image side and promotes additional regularization. However, as opposed to having multiple images per category, we note that the text space during fine-tuning is limited, and prompted features are learned based on pre-trained CLIP text features, with only one feature representation per category. This mismatch between the available diversity at the image and text side leads to sub-optimal learning of prompted textual features. To address the diversity mismatch, we incorporate textual diversity in the text encoder. Specifically, we use a pool of textual prompt templates {PTl|N l=1}, containing N augmentations, to form multiple text features per category. The pre-trained CLIP textual features are now obtained as an ensemble of multiple prompt templates, \u02dc g = (1/N) \\sum _{i=1}^{N} \u02dc gi. As the pre-trained CLIP textual features are now represented by the ensemble of multiple augmentations for each label, the prompted textual features learn more diverse generalized contexts from the frozen CLIP. We note that the proposed textual diversity is different from the standard prompt ensembling technique explored by the CLIP authors. CLIP uses an ensemble of text prompts during inference for classification. In contrast, we utilize them during training for self-regularization by enforcing mutual agreement of the ensembled features with the prompted features, and the prompted features are used at inference. Next, we show the efficacy of our proposed components via comprehensive experiments provided below. 4. Experiments 4.1. Evaluation settings We extensively evaluate our approach and present a comparison with other methods on four benchmark settings.
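A sketch of Gaussian-weighted prompt aggregation (Eqs. 6-7), maintained as a running sum so that only one extra copy of the prompts is stored, together with the template ensembling used for the frozen text features (Sec. 3.2.3). The Gaussian mean and standard deviation defaults and the helper interfaces are assumptions for illustration.

```python
import math
import torch
import torch.nn.functional as F

def gaussian_epoch_weights(num_epochs, mu=None, sigma=None):
    """Normalized Gaussian weights over epochs (mu/sigma defaults are assumptions)."""
    mu = num_epochs / 2 if mu is None else mu
    sigma = num_epochs / 4 if sigma is None else sigma
    w = torch.tensor([math.exp(-((t - mu) ** 2) / (2 * sigma ** 2))
                      for t in range(1, num_epochs + 1)])
    return w / w.sum()  # weights sum to 1

class GPA:
    """Keeps a single running copy of the aggregated prompts (Eq. 7)."""
    def __init__(self, num_epochs):
        self.w = gaussian_epoch_weights(num_epochs)
        self.aggregated = None

    @torch.no_grad()
    def update(self, epoch, prompts):
        """epoch: 0-based index; prompts: current prompt tensor."""
        contrib = self.w[epoch] * prompts.detach()
        self.aggregated = contrib if self.aggregated is None else self.aggregated + contrib

@torch.no_grad()
def diverse_frozen_text_features(frozen_encoder, class_name, templates):
    """Average frozen CLIP text features over N augmented templates (Sec. 3.2.3)."""
    feats = frozen_encoder([t.format(class_name) for t in templates])  # [N, d]
    return F.normalize(feats, dim=-1).mean(dim=0)
```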
Base-to-novel class generalization: In this setting, we equally split the datasets into base and novel classes. The model is trained on base classes and evaluated on both base classes and novel classes. This benchmark evaluates the generalization ability of a method within a dataset. Few-shot learning: We incorporate this setting to compare the learning capacity of the model under extremely limited supervision and verify if our approach learns complementary task-specific and task-agnostic knowledge. For each Dataset CLIP CoOp CoCoOp ProDA MaPLe PromptSRC \u2206 [35] [59] [58] [28] [22] (Ours) Average on 11 datasets Base 69.34 82.69 80.47 81.56 82.28 84.26 +2.0 Novel 74.22 63.22 71.69 72.30 75.14 76.10 +1.0 HM 71.70 71.66 75.83 76.65 78.55 79.97 +1.4 ImageNet Base 72.43 76.47 75.98 75.40 76.66 77.60 +0.9 Novel 68.14 67.88 70.43 70.23 70.54 70.73 +0.2 HM 70.22 71.92 73.10 72.72 73.47 74.01 +0.5 Caltech101 Base 96.84 98.00 97.96 98.27 97.74 98.10 +0.4 Novel 94.00 89.81 93.81 93.23 94.36 94.03 -0.3 HM 95.40 93.73 95.84 95.68 96.02 96.02 +0.0 OxfordPets Base 91.17 93.67 95.20 95.43 95.43 95.33 -0.1 Novel 97.26 95.29 97.69 97.83 97.76 97.30 -0.5 HM 94.12 94.47 96.43 96.62 96.58 96.30 -0.3 Stanford Cars Base 63.37 78.12 70.49 74.70 72.94 78.27 +5.3 Novel 74.89 60.40 73.59 71.20 74.00 74.97 +1.0 HM 68.65 68.13 72.01 72.91 73.47 76.58 +3.1 Flowers102 Base 72.08 97.60 94.87 97.70 95.92 98.07 +2.1 Novel 77.80 59.67 71.75 68.68 72.46 76.50 +4.1 HM 74.83 74.06 81.71 80.66 82.56 85.95 +3.4 Food101 Base 90.10 88.33 90.70 90.30 90.71 90.67 -0.1 Novel 91.22 82.26 91.29 88.57 92.05 91.53 -0.5 HM 90.66 85.19 90.99 89.43 91.38 91.10 -0.3 FGVC Aircraft Base 27.19 40.44 33.41 36.90 37.44 42.73 +5.3 Novel 36.29 22.30 23.71 34.13 35.61 37.87 +2.3 HM 31.09 28.75 27.74 35.46 36.50 40.15 +3.7 SUN397 Base 69.36 80.60 79.74 78.67 80.82 82.67 +1.9 Novel 75.35 65.89 76.86 76.93 78.70 78.47 -0.2 HM 72.23 72.51 78.27 77.79 79.75 80.52 +0.8 DTD Base 53.24 79.44 77.01 80.67 80.36 83.37 +3.0 Novel 59.90 41.18 56.00 56.48 59.18 62.97 +3.8 HM 56.37 54.24 64.85 66.44 68.16 71.75 +3.6 EuroSAT Base 56.48 92.19 87.49 83.90 94.07 92.90 -1.2 Novel 64.05 54.74 60.04 66.00 73.23 73.90 +0.7 HM 60.03 68.69 71.21 73.88 82.35 82.32 -0.1 UCF101 Base 70.53 84.69 82.33 85.23 83.00 87.10 +4.1 Novel 77.50 56.05 73.45 71.97 78.66 78.80 +0.1 HM 73.85 67.46 77.64 78.04 80.77 82.74 +2.0 Table 1: Accuracy comparison on Base-to-novel generalization of PromptSRC with previous methods. The prompts learned with our self-regularizing approach show overall consistent improvements on base classes, without losing generalization. Absolute gains over MaPLe [22] are shown in blue. dataset, we test the model\u2019s generalization for different Kshots per category, where K = 1, 2, 4, 8, 16. Domain generalization setting: We train a source model on ImageNet [6] and evaluate on out-of-distribution datasets to test performance under domain shifts. Cross-dataset evaluation: In cross-dataset transfer, we train the models on ImageNet [6] and directly evaluate it on other datasets without any data-specific fine-tuning. Datasets: For base to novel class generalization, fewshot setting and cross-dataset evaluation, we follow CoOp [59] and CoCoOp [58], and use 11 image recognition \fdatasets. 
The datasets cover multiple recognition tasks including ImageNet [6] and Caltech101 [10] which consists of generic objects; OxfordPets [34], StanfordCars [24], Flowers102 [33], Food101 [2], and FGVCAircraft [31] for fine-grained classification, SUN397 [48] for scene recognition, UCF101 [41] for action recognition, DTD [4] for texture classification, and EuroSAT [14] which consists of satellite images. For domain generalization benchmark, we use ImageNet [6] as a source dataset and use ImageNetA [16], ImageNet-R [15], ImageNet-Sketch [44] and ImageNetV2 [39] as out of distribution datasets. Implementation details: We use a ViT-B/16 based CLIP model in our experiments and report results averaged over 3 runs. We use deep prompting with V = T = 4 VL prompts and train for 50 epochs for few-shot setting and 20 epochs the rest of the 3 benchmarks respectively. For domain generalization and cross-dataset evaluation, we train the ImageNet source model on all classes with K = 16 shots using V = T = 4 VL prompts in the first 3 transformer layers. For few-shot and base-to-novel setting, prompts are learned in the first 9 transformer layers. Prompts are randomly initialized with a normal distribution except the text prompts of the first layer which are initialized with the word embeddings of \u201ca photo of a\u201d. We fix the learning rate to 0.0025. We set \u03bb1 = 10 and \u03bb2 = 25 to weight LSCL-image and LSCL-text respectively. The corresponding hyperparameters are fixed across all datasets and benchmarks. For textual diversity, we use a total of N = 60 standard prompt templates provided in [35]. For comparison with ProDA [28], we report their results produced by [7]. Refer to Appendix A for additional implementation details. 4.2. Effectiveness of Self-regulating Prompts We first disentangle the regularization components in our self-regulating prompting framework and show the individual contributions in Table 2. Baseline IVLP provides high base class performance but suffers from poor generalization (row-1). By enforcing mutual agreement through LSCL (row-2), novel class performance significantly increases by 3.95% while maintaining base class gains. This suggests that LSCL explicitly enforces the prompts to capture the generalizable features from frozen CLIP. Integrating GPA (row3) which suitably aggregates prompts across the training cycle further reduces overfitting and improves the novel class performance. Finally, combined with textual diversity to overcome the diversity mismatch between the text and visual domains (row-4), PromptSRC achieves improvements on both base and novel classes, leading to the average novel class and harmonic mean gains of +4.31% and +2.46% respectively. The averaged results on 11 datasets are summarized in Table 2. Note that even small improvements in these metrics correspond to significant gains. We refer the readers to Appendix B for results on individual datasets. Method Base Acc. Novel Acc. HM 1: Independent V-L prompting 84.21 71.79 77.51 2: + LSCL 84.21 75.38 79.55 3: + GPA 84.16 75.69 79.70 4: + Textual diversity 84.26 76.10 79.97 Table 2: Effect of our proposed regularization techniques. Results are averaged over 11 datasets. HM refers to harmonic mean. 4.3. Base-to-Novel Generalization We compare the performance of our approach with zeroshot CLIP [35], CoOp [59], CoCoOp [58], ProDA [28] and MaPLe [22], in Table 1. Overall, all existing approaches outperform zero-shot CLIP on base classes but show inferior performance on novel classes except MaPLe. 
This suggests that they overall tend to lose the generalizable features stored in the frozen CLIP model. In contrast, PromptSRC significantly improves base class performance while improving the zero-shot CLIP novel class accuracy by 1.88%. This shows the importance of explicit guidance provided by PromptSRC in learning complementary taskspecific and task-agnostic representations which aid base and novel classes respectively. CoOp is heavily trained on base classes and consequently compromises on its generalization. For instance, on EuroSAT [14], CoOp provides a substantial 92.19% base class accuracy and inferior novel class accuracy of 54.74%. On the other hand, PromptSRC which learns self-regulating prompts provides the highest base and novel class accuracies of 92.90% and 73.90% on EuroSAT respectively. In comparison to CoCoOp and ProDA, PromptSRC shows gains on the 10/11 datasets respectively. Against the recent MaPLe approach, PromptSRC improves performance on 8/11 datasets while using 77x less tunable parameters (3.55M of MaPLe vs 46K of PromptSRC). With respect to the averaged results, PromptSRC provides the best results of 84.26%, 76.10%, and 79.97% on the base class, novel class, and harmonic mean respectively. 4.4. Few-shot Experiments To explicitly verify if our regularization framework restricts the prompts to learn task-specific knowledge or not, we compare our few-shot results with existing methods in Fig. 4. In general, all prompt learning approaches perform better than the linear probe, especially in scenarios with lesser shots i.e., K = 1, 2, 4. PromptSRC overall provides consistent improvements on all shots in comparison with all existing methods. When compared with the existing best method MaPLe, PromptSRC consistently provides absolute gains of 3.05%, 2.72%, 2.59%, 1.80%, and, 1.07% on 1, 2, 4, 8, and 16 shots respectively which are averaged over 11 datasets. Furthermore, we note that our approach achieves relatively larger gains in minimal data cases such \fFigure 4: PromptSRC performance comparison in few-shot image recognition setting. All methods are trained on ViT-B/16 CLIP backbone using their best settings. PromptSRC demonstrates consistent improvements over existing methods specifically for lesser shots i.e. K = 1, 2, 4. On average, PromptSRC provides the highest performance gains for all shots. These results demonstrate that PromptSRC learns complementary task-agnostic general features from frozen CLIP without being restricted from learning downstream task representations. Source Target ImageNet Caltech101 OxfordPets StanfordCars Flowers102 Food101 Aircraft SUN397 DTD EuroSAT UCF101 Average CoOp 71.51 93.70 89.14 64.51 68.71 85.30 18.47 64.15 41.92 46.39 66.55 63.88 Co-CoOp 71.02 94.43 90.14 65.32 71.88 86.06 22.94 67.36 45.73 45.37 68.21 65.74 MaPLe 70.72 93.53 90.49 65.57 72.23 86.20 24.74 67.01 46.49 48.06 68.69 66.30 PromptSRC 71.27 93.60 90.25 65.70 70.25 86.15 23.90 67.10 46.87 45.50 68.75 65.81 Table 3: Cross-dataset benchmark evaluation. PromptSRC achieves overall favourable performance. as for K = 1, 2 for almost all datasets. This demonstrates that PromptSRC regulates prompts against overfitting without restricting the prompts to learn task-specific knowledge. 4.5. Cross Dataset Evaluation We compare our cross-dataset performance with previous methods in Table 3. On the source dataset, PromptSRC performs comparably to other methods. 
In comparison with CoOp and CoCoOp, PromptSRC shows competitive performance and achieves better generalization in 8/10 and 7/10 datasets respectively. Compared with MaPLe, PromptSRC Source Target ImageNet -V2 -S -A -R Avg. CLIP 66.73 60.83 46.15 47.77 73.96 57.18 CoOp 71.51 64.20 47.99 49.71 75.21 59.28 Co-CoOp 71.02 64.07 48.75 50.63 76.18 59.91 MaPLe 70.72 64.07 49.15 50.90 76.98 60.27 PromptSRC 71.27 64.35 49.55 50.90 77.80 60.65 Table 4: Domain generalization. Prompt learning methods are trained on imageNet and evaluated on datasets with domain shifts. shows improved performance in 5/10 datasets while utilizing significantly less tunable parameters (46K vs 3.55M). 4.6. Domain Generalization Experiments Table 4 summarizes the results of PromptSRC and previous methods on out-of-distribution datasets. We directly evaluate our model trained on ImageNet. On target datasets, PromptSRC consistently outperforms all existing methods, with an overall highest average accuracy of 60.65%. This suggests that our self-regulating framework favors better generalization for datasets with domain shifts. \fMethod Base Acc. Novel Acc. HM 1: Independent V-L prompting (IVLP) 84.21 71.79 77.51 2: IVLP + Cosine similarity 84.47 74.51 79.17 3: IVLP + Mean square error (MSE) 84.59 74.68 79.33 4: IVLP + L1 84.42 74.99 79.43 Table 5: Effect of matching losses for LSCL-image and LSCL-image consistency objectives. L1 matching loss provides highest HM. Method Base Acc. Novel Acc. HM 1: Exponential moving average 83.09 76.15 79.47 2: Equal weighting (averaging) 83.50 76.47 79.83 3: GPA (Ours) 84.26 76.10 79.97 Table 6: Ablation on prompt ensembling techniques. Gaussian weighted prompt aggregation (GPA) provides better performance. Method GFLOP (train) GFLOP (test) Train time (min) FPS HM CoOp 162.5 162.5 10.08 1344 71.66 CoCoOp 162.5 162.5 39.53 15.08 75.83 IVLP 162.8 162.8 12.01 1380 77.51 PromptSRC 179.6 162.8 13.13 1380 79.97 Table 7: PromptSRC compute cost comparison using SUN397 dataset. Training time for all methods is calculated for 10 epochs on a single A100 GPU on SUN397 dataset. 4.7. Ablative Analysis Embedding consistency loss ablation: In Table 5, we ablate on the choice of matching loss metric used in our proposed feature level LSCL loss constraints. For simplicity, we only incorporate LSCL-image and LSCL-text on top of the IVLP baseline. Generally, distance-based matching metrics outperform the cosine similarity metric in terms of generalization as they impose a much harder constraint. Overall, the L1 matching metric provides the highest HM. Prompt ensembling: Table 6 shows ablation on various prompt ensembling techniques. Using equal weights for prompts reduces base class results as initial epoch prompts are not mature enough. In contrast, our proposed Gaussian weighted prompt aggregation results in the highest performance. Detailed ablation experiments for other hyperparameters are provided in Appendix C. Training and inference compute cost analysis: In Table 7, we show the compute cost analysis of our approach and compare it with other prompting methods. PromptSRC\u2019s overall training GFLOPs are only 0.13x higher than baseline IVLP, while it maintains the same GFLOPs and throughput during inference. Pre-trained CLIP textual features are pre-computed and a single additional forward pass is required through image encoder to compute pre-trained CLIP visual features for our mutual agreement maximization technique. 
Training time of PromptSRC is 9.3% longer than IVLP which is significantly lower than CoCoOp. We use 4 vision and text prompts similar to the IVLP. Figure 5: Ablation study on the number of textual prompts for textual diversity (left) and prompt token length (right) on ImageNet. Prompt Length: Fig. 5 (right) shows the effect of prompt token length on the harmonic mean. Overall, the performance increases as prompt length increases. Using 4 visionlanguage prompts provides the highest harmonic mean. No. of templates in textual diversity: In Fig. 5 (left), we ablate on the number of text prompt templates for textual diversity. We note that increasing the number of textual templates for textual diversity generally increases the performance. This suggests that adding textual diversity using multiple templates for pre-trained features provides more rich supervision for the learned prompted features. 5." + }, + { + "url": "http://arxiv.org/abs/2210.03117v3", + "title": "MaPLe: Multi-modal Prompt Learning", + "abstract": "Pre-trained vision-language (V-L) models such as CLIP have shown excellent\ngeneralization ability to downstream tasks. However, they are sensitive to the\nchoice of input text prompts and require careful selection of prompt templates\nto perform well. Inspired by the Natural Language Processing (NLP) literature,\nrecent CLIP adaptation approaches learn prompts as the textual inputs to\nfine-tune CLIP for downstream tasks. We note that using prompting to adapt\nrepresentations in a single branch of CLIP (language or vision) is sub-optimal\nsince it does not allow the flexibility to dynamically adjust both\nrepresentation spaces on a downstream task. In this work, we propose\nMulti-modal Prompt Learning (MaPLe) for both vision and language branches to\nimprove alignment between the vision and language representations. Our design\npromotes strong coupling between the vision-language prompts to ensure mutual\nsynergy and discourages learning independent uni-modal solutions. Further, we\nlearn separate prompts across different early stages to progressively model the\nstage-wise feature relationships to allow rich context learning. We evaluate\nthe effectiveness of our approach on three representative tasks of\ngeneralization to novel classes, new target datasets and unseen domain shifts.\nCompared with the state-of-the-art method Co-CoOp, MaPLe exhibits favorable\nperformance and achieves an absolute gain of 3.45% on novel classes and 2.72%\non overall harmonic-mean, averaged over 11 diverse image recognition datasets.\nOur code and pre-trained models are available at\nhttps://github.com/muzairkhattak/multimodal-prompt-learning.", + "authors": "Muhammad Uzair Khattak, Hanoona Rasheed, Muhammad Maaz, Salman Khan, Fahad Shahbaz Khan", + "published": "2022-10-06", + "updated": "2023-04-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Foundational vision-language (V-L) models such as CLIP (Contrastive Language-Image Pretraining) [32] have shown excellent generalization ability to downstream tasks. Such models are trained to align language and vision modalities on web-scale data e.g., 400 million text-image pairs in CLIP. These models can reason about open-vocabulary visual concepts, thanks to the rich supervision provided by natural language. During inference, hand-engineered text prompts are used e.g., \u2018a photo of a \u2019 as a query for text encoder. 
The output text embeddings are matched with the visual embeddings from an image encoder to predict the output class. Designing high quality contextual prompts have been proven to enhance the performance of CLIP and other V-L models [17,42]. Despite the effectiveness of CLIP towards generalization to new concepts, its massive scale and scarcity of training data (e.g., few-shot setting) makes it infeasible to \ufb01ne-tune the full model for downstream tasks. Such \ufb01ne-tuning can also forget the useful knowledge acquired in the large-scale pretraining phase and can pose a risk of over\ufb01tting to the downstream task. To address the above challenges, existing works propose language prompt learning to avoid manually adjusting the prompt templates and providing a mechanism to adapt the model while keeping the original weights frozen [14,25,29,48,49]. Inspired from Natural Language Processing (NLP), these approaches only explore prompt learning for the text encoder in CLIP (Fig. 1:a) while adaptation choices together with an equally important image encoder of CLIP remains an unexplored topic in the literature. Our motivation derives from the multi-modal nature of CLIP, where a text and image encoder co-exist and both contribute towards properly aligning the V-L modalities. We argue that any prompting technique should adapt the model completely and therefore, learning prompts only for the text encoder in CLIP is not suf\ufb01cient to model the adaptations needed for the image encoder. To this end, we set out to achieve completeness in the prompting approach and propose Multi-modal Prompt Learning (MaPLe) to adequately \ufb01ne-tune the text and image encoder representations such that their optimal alignment can be achieved on the downstream tasks (Fig. 1:b). Our extensive experiments on three key representative settings including base-to-novel generalization, cross-dataset evaluation, and domain generalization demonstrate the strength of MaPLe. On base-to-novel generalization, our proposed MaPLe outperforms existing prompt learning approaches across 11 diverse image recognition datasets (Fig. 1:c) and achieves absolute average gain of 3.45% on novel classes and 2.72% on harmonic-mean arXiv:2210.03117v3 [cs.CV] 1 Apr 2023 \fMaximize similarity Image Encoder Prompts \"Text input\" (a) Existing prompt tuning methods (Uni-modal) (b) Multi-modal Prompt Learning (MaPLe) (c) Performance comparison on base-to-novel generalization Text Encoder Maximize similarity Image Encoder Prompts \"Text input\" Text Encoder Prompts Prompts Prompts Prompts Prompts Figure 1. Comparison of MaPLe with standard prompt learning methods. (a) Existing methods adopt uni-modal prompting techniques to \ufb01ne-tune CLIP representations as prompts are learned only in a single branch of CLIP (language or vision). (b) MaPLe introduces branch-aware hierarchical prompts that adapt both language and vision branches simultaneously for improved generalization. (c) MaPLe surpasses state-of-the-art methods on 11 diverse image recognition datasets for novel class generalization task. over the state-of-the-art method Co-CoOp [48]. Further, MaPLe demonstrates favorable generalization ability and robustness in cross-dataset transfer and domain generalization settings, leading to consistent improvements compared to existing approaches. 
Owing to its streamlined architectural design, MaPLe exhibits improved ef\ufb01ciency during both training and inference without much overhead, as compared to Co-CoOp which lacks ef\ufb01ciency due to its image instance conditioned design. In summary, the main contributions of this work include: \u2022 We propose multi-modal prompt learning in CLIP to favourably align its vision-language representations. To the best of our knowledge, this is the \ufb01rst multimodal prompting approach for \ufb01ne-tuning CLIP. \u2022 To link prompts learned in text and image encoders, we propose a coupling function to explicitly condition vision prompts on their language counterparts. It acts as a bridge between the two modalities and allows mutual propagation of gradients to promote synergy. \u2022 Our multi-modal prompts are learned across multiple transformer blocks in both vision and language branches to progressively learn the synergistic behaviour of both modalities. This deep prompting strategy allows modeling the contextual relationships independently, thus providing more \ufb02exibility to align the vision-language representations. 2. Related Work Vision Language Models: The combined use of language supervision with natural images is found to be of great interest in the computer vision community. In contrast to models learned with only image supervision, these visionlanguage (V-L) models encode rich multimodal representations. Recently, V-L models like CLIP [32], ALIGN [15], LiT [45], FILIP [41] and Florence [43] have demonstrated exceptional performance on a wide spectrum of tasks including few-shot and zero-shot visual recognition. These models learn joint image-language representations in a selfsupervised manner using abundantly available data from the web. For example, CLIP and ALIGN respectively use \u223c400M and \u223c1B image-text pairs to train a multimodal network. Although these pre-trained V-L models learn generalized representations, ef\ufb01ciently adapting them to downstream tasks is still a challenging problem. Many works have demonstrated better performance on downstream tasks by using tailored methods to adapt V-L models for few-shot image-recognition [9,19,46], object detection [8,10,27,34,44,50], and segmentation [5,22,26,33]. In this work, we propose a novel multi-modal prompt learning technique to effectively adapt CLIP for few-shot and zeroshot visual recognition tasks. Prompt Learning: The instructions in the form of a sentence, known as text prompt, are usually given to the language branch of a V-L model, allowing it to better understand the task. Prompts can be handcrafted for a downstream task or learned automatically during \ufb01ne-tuning stage. The latter is referred to as \u2018Prompt Learning\u2019 which was \ufb01rst used in NLP [21,23,24] followed by the adaptation in V-L [48, 49, 51] and vision-only [16, 38, 39, 47] models. Similar to [16] our design also uses deep \u2018vision\u2019 prompting. However, ours is the \ufb01rst multi-modal prompting de\fText Encoder Image Encoder x1 Patch Embed Word Embed \"a photo of a cat\" x1.y 1 x1.y 2 x1.y3 ... x1.yn x2.y 1 x2.y 2 x2.y3 ... x2.yn x3.y 1 x3.y 2 x2.y3 ... x3.yn ... ... ... ... xn.y1 xn.y2xn.y3 ... xn.yn ... ... ... Encoder Layer Encoder Layer ... Encoder Layer ... Encoder Layer ... Encoder Layer ... Encoder Layer x2 x3 ... xn y1 y2 y3 ... yn ..... ..... Figure 2. Overview of our proposed MaPLe (Multi-modal Prompt Learning) framework for prompt learning in V-L models. 
MaPLe tunes both vision and language branches where only the context prompts are learned, while the rest of the model is frozen. MaPLe conditions the vision prompts on language prompts via a V-L coupling function F to induce mutual synergy between the two modalities. Our framework uses deep contextual prompting where separate context prompts are learned across multiple transformer blocks. sign while [16] is uni-modal. Prompt Learning in Vision Language models: Full \ufb01netuning and linear probing [9] are two typical approaches to adapt a V-L model (i.e. CLIP) to the downstream tasks. The complete \ufb01ne-tuning results in degrading the previously learned joint V-L representation while linear probing limits the zero-shot capability of CLIP. To this end, inspired from prompt learning in NLP, many works have proposed to adapt V-L models by learning the prompt tokens in an end-to-end training. CoOp [49] \ufb01ne-tunes CLIP for fewshot transfer by optimizing continuous set of prompt vectors at its language branch. Co-CoOp [48] highlights the inferior performance of CoOp on novel classes and solves the generalization issue by explicitly conditioning prompts on image instances. [25] proposes to optimize multiple set of prompts by learning the distribution of prompts. [18] adapt CLIP by learning prompts for video understanding tasks. [1] perform visual prompt tuning on CLIP by prompting on the vision branch. We note that the existing methods follow independent uni-modal solutions and learn prompts either in the language or in the vision branch of CLIP, thus adapting CLIP partially. In this paper, we explore an important question: given the multimodal nature of CLIP, is complete prompting (i.e., in both language and vision branches) better suited to adapt CLIP? Our work is the \ufb01rst to answer this question by investigating the effectiveness of multi-modal prompt learning in order to improve alignment between vision and language representations. 3. Method Our approach concerns with \ufb01ne-tuning a pre-trained multimodal CLIP for better generalization to downstream tasks through context optimization via prompting. Fig. 2 shows the overall architecture of our proposed MaPLe (Multimodal Prompt Learning) framework. Unlike previous approaches [48, 49] which learn context prompts only at the language branch, MaPLe proposes a joint prompting approach where the context prompts are learned in both vision and language branches. Speci\ufb01cally, we append learnable context tokens in the language branch and explicitly condition the vision prompts on the language prompts via a coupling function to establish interaction between them. To learn hierarchical contextual representations, we introduce deep prompting in both branches through separate learnable context prompts across different transformer blocks. During \ufb01ne-tuning, only the context prompts along with their coupling function are learned while the rest of the model is frozen. Below, we \ufb01rst outline the pre-trained CLIP architecture and then present our proposed \ufb01ne-tuning approach. 3.1. Revisiting CLIP We build our approach on a pre-trained vision-language (VL) model, CLIP, which consists of a text and vision encoder. Consistent with existing prompting methods [48, 49], we use a vision transformer (ViT) [6] based CLIP model. CLIP encodes an image I \u2208RH\u00d7W \u00d73 and a corresponding text description as explained below. 
Encoding Image: Image encoder V with K transformer layers {V_i}_{i=1}^{K} splits the image I into M fixed-size patches which are projected into patch embeddings E_0 ∈ R^{M×d_v}. Patch embeddings E_i are input to the (i+1)-th transformer block (V_{i+1}) along with a learnable class (CLS) token c_i and sequentially processed through K transformer blocks, [c_i, E_i] = V_i([c_{i−1}, E_{i−1}]), i = 1, 2, · · · , K. To obtain the final image representation x, the class token c_K of the last transformer layer (V_K) is projected to a common V-L latent embedding space via ImageProj, x = ImageProj(c_K), x ∈ R^{d_vl}. Encoding Text: The CLIP text encoder generates feature representations for the text description by tokenizing the words and projecting them to word embeddings W_0 = [w_0^1, w_0^2, · · · , w_0^N] ∈ R^{N×d_l}. At each stage, W_i is input to the (i+1)-th transformer layer of the text encoding branch (L_{i+1}), [W_i] = L_i(W_{i−1}), i = 1, 2, · · · , K. The final text representation z is obtained by projecting the text embeddings corresponding to the last token of the last transformer block L_K to a common V-L latent embedding space via TextProj, z = TextProj(w_K^N), z ∈ R^{d_vl}. Zero-shot Classification: For zero-shot classification, text prompts are hand-crafted with class labels y ∈ {1, 2, . . . , C} (e.g., 'a photo of a ') having C classes. The prediction ŷ corresponding to the image I having the highest cosine similarity score (sim(·)) is calculated with a temperature parameter τ, p(ŷ|x) = exp(sim(x, z_ŷ)/τ) / Σ_{i=1}^{C} exp(sim(x, z_i)/τ). Figure 3. t-SNE plots of image embeddings in the uni-modal prompting method Co-CoOp, and MaPLe on 3 diverse image recognition datasets: (a) DTD (texture classification), Co-CoOp Base 77.0 / Novel 56.0 vs. MaPLe Base 80.4 / Novel 59.2; (b) EuroSAT (satellite imagery recognition), Co-CoOp Base 87.5 / Novel 60.1 vs. MaPLe Base 94.1 / Novel 73.2; (c) UCF101 (action recognition), Co-CoOp Base 82.3 / Novel 73.5 vs. MaPLe Base 83.0 / Novel 78.7. MaPLe shows better separability in both base and novel classes. 3.2. MaPLe: Multi-modal Prompt Learning To efficiently fine-tune CLIP for downstream image recognition tasks, we explore the potential of multi-modal prompt tuning. We reason that prior works that have predominantly explored uni-modal approaches are less suitable as they do not offer the flexibility to dynamically adapt both language and vision representation spaces. Thus, to achieve completeness in prompting, we underline the importance of a multi-modal prompting approach. In Fig. 3, we visualize and compare the image embeddings of MaPLe with the recent state-of-the-art work, Co-CoOp. Note that the image embeddings of CLIP, CoOp and Co-CoOp will be identical as they do not learn prompts in the vision branch. The visualization shows that image embeddings of MaPLe are more separable, indicating that learning vision prompts in addition to language prompts leads to better adaptation of CLIP. In addition to multi-modal prompting, we find that it is essential to learn prompts in the deeper transformer layers to progressively model stage-wise feature representations. To this end, we propose to introduce learnable tokens in the first J (where J < K) layers of both vision and language branches. These multi-modal hierarchical prompts utilize the knowledge embedded in the CLIP model to effectively learn task-relevant contextual representations (see Fig. 4).
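For reference, the zero-shot classification step summarized above can be sketched as follows. This is a generic CLIP-style inference sketch in PyTorch: the encoder and tokenizer callables, the prompt template usage, and tensor shapes are illustrative assumptions rather than MaPLe's own code.

import torch

def clip_zero_shot(image_encoder, text_encoder, tokenize, image, class_names, tau=0.01):
    # Cosine similarity between the image embedding x and hand-crafted text embeddings z_y,
    # softened by a temperature tau, as in the classification equation above.
    with torch.no_grad():
        prompts = [f"a photo of a {name}" for name in class_names]
        z = text_encoder(tokenize(prompts))          # (C, d_vl) text embeddings
        x = image_encoder(image.unsqueeze(0))        # (1, d_vl) image embedding
        z = z / z.norm(dim=-1, keepdim=True)         # normalize so the dot product is cosine similarity
        x = x / x.norm(dim=-1, keepdim=True)
        probs = (x @ z.t() / tau).softmax(dim=-1)    # p(y|x) over the C classes
    return probs.argmax(dim=-1)                      # predicted class index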
3.2.1 Deep Language Prompting To learn the language context prompts, we introduce b learnable tokens {P i \u2208Rdl}b i=1, in the language branch of CLIP. The input embeddings now follow the form [P 1, P 2, \u00b7 \u00b7 \u00b7 , P b, W0], where W0 = [w1, w2, \u00b7 \u00b7 \u00b7 , wN] corresponds to \ufb01xed input tokens. New learnable tokens are further introduced in each transformer block of the language encoder (Li) up to a speci\ufb01c depth J, [ , Wi] = Li([Pi\u22121, Wi\u22121]) i = 1, 2, \u00b7 \u00b7 \u00b7 , J. (1) Here [\u00b7, \u00b7] refers to the concatenation operation. After Jth transformer layer, the subsequent layers process previous \flayer prompts and \ufb01nal text representation z is computed, [Pj, Wj] = Lj([Pj\u22121, Wj\u22121]) j = J + 1, \u00b7 \u00b7 \u00b7 , K, (2) z = TextProj(wN K). (3) When J = 1, the learnable tokens P are only applied at the input of \ufb01rst transformer layer, and this deep language prompting technique degenerates to CoOp [49]. 3.2.2 Deep Vision Prompting Similar to deep language prompting, we introduce b learnable tokens { \u02dc P i \u2208Rdv}b i=1, in the vision branch of CLIP alongside the input image tokens. New learnable tokens are further introduced in deeper transformer layers of the image encoder (V) up to depth J. [ci, Ei, ] = Vi([ci\u22121, Ei\u22121, \u02dc Pi\u22121]) i = 1, 2, \u00b7 \u00b7 \u00b7 , J, [cj, Ej, \u02dc Pj] = Vj([cj\u22121, Ej\u22121, \u02dc Pj\u22121]) j = J + 1, \u00b7 \u00b7 \u00b7 , K, x = ImageProj(cK). Our deep prompting provides the \ufb02exibility to learn prompts across different feature hierarchies within the ViT architecture. We \ufb01nd that sharing prompts across stages is better compared to independent prompts as features are more correlated due to successive transformer block processing. Thus, the later stages do not provide independently-learned complimentary prompts as compared to the early stages. 3.2.3 Vision Language Prompt Coupling We reason that in prompt tuning it is essential to take a multi-modal approach and simultaneously adapt both the vision and language branch of CLIP in order to achieve completeness in context optimization. A simple approach would be to naively combine deep vision and language prompting, where both the language prompts P, and the vision prompts \u02dc P, will be learned during the same training schedule. We name this design as \u2018Independent V-L Prompting\u2019. Although this approach satis\ufb01es the requirement of completeness in prompting, this design lacks synergy between vision and language branch as both branches do not interact while learning the task relevant context prompts. To this end, we propose a branch-aware multi-modal prompting which tunes vision and language branch of CLIP together by sharing prompts across both modalities. Language prompt tokens are introduced in the language branch up to Jth transformer block similar to deep language prompting as illustrated in Eqs. 1-3. To ensure mutual synergy between V-L prompts, vision prompts \u02dc P, are obtained by projecting language prompts P via vision-to-language projection which we refer to as V-L coupling function F(\u00b7), such that \u02dc Pk = Fk(Pk). The coupling function is implemented as a linear layer which maps dl dimensional inputs to dv. This acts as a bridge between the two modalities, thus encouraging mutual propagation of gradients. 
[ci, Ei, ] = Vi([ci\u22121, Ei\u22121, Fi\u22121(Pi\u22121)]) i = 1, \u00b7 \u00b7 \u00b7 , J [cj, Ej, \u02dc Pj] = Vj([cj\u22121, Ej\u22121, \u02dc Pj\u22121]) j = J + 1, \u00b7 \u00b7 \u00b7 , K x = ImageProj(cK) Unlike independent V-L prompting, explicit conditioning of \u02dc P on P helps learn prompts in a shared embedding space between the two branches, thus improving mutual synergy. 4. Experiments 4.1. Benchmark setting Generalization from Base-to-Novel Classes: We evaluate the generalizability of MaPLe, and follow a zero-shot setting where the datasets are split into base and novel classes. The model is trained only on the base classes in a few-shot setting and evaluated on base and novel categories. Cross-dataset Evaluation: To validate the potential of our approach in cross-dataset transfer, we evaluate our ImageNet trained model directly on other datasets. Consistent with Co-CoOp, our model is trained on all 1000 ImageNet classes in a few-shot manner. Domain Generalization: We evaluate the robustness of our method on out-of-distribution datasets. Similar to crossdataset evaluation, we test our ImageNet trained model directly on four other ImageNet datasets that contain various types of domain shifts. Datasets: For generalization from base-to-novel classes and cross-dataset evaluation, we follow [48, 49] and evaluate the performance of our method on 11 image classi\ufb01cation datasets which covers a wide range of recognition tasks. This includes two generic-objects datasets, ImageNet [4] and Caltech101 [7]; \ufb01ve \ufb01ne-grained datasets, OxfordPets [31], StanfordCars [20], Flowers102 [30], Food101 [2], and FGVCAircraft [28]; a scene recognition dataset SUN397 [40]; an action recognition dataset UCF101 [36]; a texture dataset DTD [3] and a satelliteimage dataset EuroSAT [11]. For domain generalization, we use ImageNet as source dataset and its four variants as target datasets including ImageNetV2 [35], ImageNetSketch [37], ImageNet-A [13] and ImageNet-R [12]. Implementation Details We use a few-shot training strategy in all experiments at 16 shots which are randomly sampled for each class. We apply prompt tuning on a pretrained ViT-B/16 CLIP model where dl = 512, dv = 768 and dvl = 512. For MaPLe, we set prompt depth J to 9 and the language and vision prompt lengths to 2. All models are trained for 5 epochs with a batch-size of 4 and a learning rate of 0.0035 via SGD optimizer on a single NVIDIA A100 GPU. We report base and novel class accuracies and their harmonic mean (HM) averaged over 3 runs. We initial\fMethod Base Acc. Novel Acc. HM GFLOPS 1: MaPLe shallow (J = 1) 80.10 73.52 76.67 167.1 2: Deep vision prompting 80.24 73.43 76.68 18.0 3: Deep language prompting 81.72 73.81 77.56 166.8 4: Independent V-L prompting 82.15 74.07 77.90 167.0 5: MaPLe (Ours) 82.28 75.14 78.55 167.0 Table 1. Comparison of MaPLe with different prompting designs in base-to-novel generalization. Results are averaged over 11 datasets. HM refers to harmonic mean. ize the language prompts of the \ufb01rst layer P0 with the pretrained CLIP word embeddings of the template \u2018a photo of a \u2019, while for the subsequent layers they are randomly initialized from a normal distribution. For training MaPLe on all 1000 classes of ImageNet as a source model, prompt depth J is set to 3 and the model trained for 2 epochs with learning rate of 0.0026. Hyper-parameters for deep language prompting, deep vision prompting, and independent V-L prompting are detailed in Appendix A. 
The hyper-parameters are \ufb01xed across all datasets. 4.2. Prompting CLIP via Vision-Language Prompts Prompting Variants: We \ufb01rst evaluate the performance of different possible prompting design choices as an ablation for our proposed branch-aware multi-modal prompting, MaPLe. These variants include shallow MaPLe, deep language prompting, deep vision prompting and independent V-L prompting. In Table 1, we present the results averaged over the 11 image recognition datasets. Shallow MaPLe (row-1) provides consistant improvements over CoOp and Co-CoOp in terms of generalization. Deep language prompting (row-3) shows improvements over deep vision prompting (row-2), indicating that prompts learned at the language branch provide better adaptation of CLIP. Although separately combining the above two approaches (row-4) further improves the performance, it struggles to achieve comprehensive bene\ufb01ts from the language and vision branches. We hypothesize that this is due to the lack of synergy between the learned vision and language prompts as they do not interact with each other during training. Meanwhile, MaPLe tied with deep prompting (row-4) combines the bene\ufb01ts of prompting in both branches by enforcing interactions through explicit conditioning of vision prompts on the language prompts. It provides improvements on novel and base class accuracies which leads to the best HM of 78.55%. We explore other possible design choices and present the ablations in Appendix B. 4.3. Base-to-Novel Generalization Generalization to Unseen Classes: Table 3 presents the performance of MaPLe in base-to-novel generalization setting on 11 recognition datasets. We compare its performance with CLIP zero-shot, and recent prompt learning works including CoOp [49] and Co-CoOp [48]. In case of CLIP, we use hand-crafted prompts that are speci\ufb01cally designed for each dataset. In comparison with the state-of-the-art Co-CoOp, MaPLe shows improved performance on both base and novel categories on all 11 datasets with an exception of marginal reduction on only the base class performance of Caltech101. With mutual synergy from the branch-aware multi-modal prompting, MaPLe better generalizes to novel categories on all 11 datasets in comparison with Co-CoOp, and obtains an overall gain from 71.69% to 75.14%. When taking into account both the base and novel classes, MaPLe shows an absolute average gain of 2.72% over Co-CoOp. In comparison with CLIP on novel classes, Co-CoOp improves only on 4/11 datasets dropping the average novel accuracy from 74.22% to 71.69%. MaPLe is a strong competitor which improves accuracy over CLIP on novel classes on 6/11 datasets, with an average gain from 74.22% to 75.14%. Generalization and Performance on Base Classes: CoCoOp solves the poor generalization problem in CoOp by conditioning prompts on image instances and shows significant gains in novel categories. However on base classes, it improves over CoOp only on 3/11 datasets with an average drop in performance from 82.69% to 80.47%. Meanwhile, the completeness in prompting helps MaPLe improve over CoOp on base classes in 6/11 datasets maintaining the average base accuracy to around 82.28%, in addition to its improvement in generalization to novel classes. We \ufb01nd that the training strategies of Co-CoOp can be used to substantially boost the generalization performance of vanilla CoOp (6.8% gain in novel classes). 
We therefore compare our method with CoOp\u2020, which trains CoOp in CoCoOp setting (refer to Appendix A for more details). Base Novel HM CoOp 82.69 63.22 71.66 Co-CoOp 80.47 71.69 75.83 CoOp\u2020 80.85 70.02 75.04 MaPLe 82.28 75.14 78.55 Table 2. Generalization comparison of MaPLe with CoOp\u2020. Compare to CoOp\u2020, the vanilla CoOp model seems to over\ufb01t on base classes. When compared to CoOp\u2020 which attains an average base accuracy of 80.85%, MaPLe shows an improvement of 1.43% with the average base accuracy of 82.28% (Table 2). 4.4. Cross-Dataset Evaluation We test the cross-dataset generalization ability of MaPLe by learning multi-modal prompts on all the 1000 ImageNet classes and then transferring it directly on the remaining 10 datasets. Table 4 shows the performance comparison between MaPLe, CoOp and Co-CoOp. On the ImageNet source dataset, MaPLe achieves performance comparable to competing approaches but demonstrates a much stronger \f(a) Average over 11 datasets Base Novel HM CLIP 69.34 74.22 71.70 CoOp 82.69 63.22 71.66 Co-CoOp 80.47 71.69 75.83 MaPLe 82.28 75.14 78.55 +1.81 +3.45 +2.72 (b) ImageNet. Base Novel HM CLIP 72.43 68.14 70.22 CoOp 76.47 67.88 71.92 Co-CoOp 75.98 70.43 73.10 MaPLe 76.66 70.54 73.47 +0.68 +0.11 +0.37 (c) Caltech101 Base Novel HM CLIP 96.84 94.00 95.40 CoOp 98.00 89.81 93.73 Co-CoOp 97.96 93.81 95.84 MaPLe 97.74 94.36 96.02 -0.22 +0.55 +0.18 (d) OxfordPets Base Novel HM CLIP 91.17 97.26 94.12 CoOp 93.67 95.29 94.47 Co-CoOp 95.20 97.69 96.43 MaPLe 95.43 97.76 96.58 +0.23 +0.07 +0.15 (e) StanfordCars Base Novel HM CLIP 63.37 74.89 68.65 CoOp 78.12 60.40 68.13 Co-CoOp 70.49 73.59 72.01 MaPLe 72.94 74.00 73.47 +2.45 +0.41 +1.46 (f) Flowers102 Base Novel HM CLIP 72.08 77.80 74.83 CoOp 97.60 59.67 74.06 Co-CoOp 94.87 71.75 81.71 MaPLe 95.92 72.46 82.56 +1.05 +0.71 +0.85 (g) Food101 Base Novel HM CLIP 90.10 91.22 90.66 CoOp 88.33 82.26 85.19 Co-CoOp 90.70 91.29 90.99 MaPLe 90.71 92.05 91.38 +0.01 +0.76 +0.39 (h) FGVCAircraft Base Novel HM CLIP 27.19 36.29 31.09 CoOp 40.44 22.30 28.75 Co-CoOp 33.41 23.71 27.74 MaPLe 37.44 35.61 36.50 +4.03 +11.90 +8.76 (i) SUN397 Base Novel HM CLIP 69.36 75.35 72.23 CoOp 80.60 65.89 72.51 Co-CoOp 79.74 76.86 78.27 MaPLe 80.82 78.70 79.75 +1.08 +1.84 +1.48 (j) DTD Base Novel HM CLIP 53.24 59.90 56.37 CoOp 79.44 41.18 54.24 Co-CoOp 77.01 56.00 64.85 MaPLe 80.36 59.18 68.16 +3.35 +3.18 +3.31 (k) EuroSAT Base Novel HM CLIP 56.48 64.05 60.03 CoOp 92.19 54.74 68.69 Co-CoOp 87.49 60.04 71.21 MaPLe 94.07 73.23 82.35 +6.58 +13.19 +11.14 (l) UCF101 Base Novel HM CLIP 70.53 77.50 73.85 CoOp 84.69 56.05 67.46 Co-CoOp 82.33 73.45 77.64 MaPLe 83.00 78.66 80.77 +0.67 +5.21 +3.13 Table 3. Comparison with state-of-the-art methods on base-to-novel generalization. MaPLe learns multi-modal prompts and demonstrates strong generalization results over existing methods on 11 recognition datasets. Absolute gains over Co-CoOp are indicated in blue. Source Target ImageNet Caltech101 OxfordPets StanfordCars Flowers102 Food101 Aircraft SUN397 DTD EuroSAT UCF101 Average CoOp 71.51 93.70 89.14 64.51 68.71 85.30 18.47 64.15 41.92 46.39 66.55 63.88 Co-CoOp 71.02 94.43 90.14 65.32 71.88 86.06 22.94 67.36 45.73 45.37 68.21 65.74 MaPLe 70.72 93.53 90.49 65.57 72.23 86.20 24.74 67.01 46.49 48.06 68.69 66.30 Table 4. Comparison of MaPLe with existing approaches on cross-dataset evaluation. Overall, MaPLe achieves competitive performance providing highest average accuracy, indicating better generalization. 
generalization performance by surpassing CoOp in 9/10 and Co-CoOp in 8/10 datasets. Overall, MaPLe shows competitive performance leading to the highest averaged accuracy of 66.30%. This suggests that the use of branch-aware V-L prompting in MaPLe facilitates better generalization. 4.5. Domain Generalization We show that MaPLe generalizes favourably on out-ofdistribution datasets as compared to CoOp and Co-CoOp. We evaluate the direct transferability of ImageNet trained model to various out-of-domain datasets, and observe that \fSource Target ImageNet ImageNetV2 ImageNet-S ImageNet-A ImageNet-R CLIP 66.73 60.83 46.15 47.77 73.96 CoOp 71.51 64.20 47.99 49.71 75.21 Co-CoOp 71.02 64.07 48.75 50.63 76.18 MaPLe 70.72 64.07 49.15 50.90 76.98 Table 5. Comparison of MaPLe with existing approaches in domain generalization setting. MaPLe shows consistant improvements on all target datasets. it consistently improves against all the existing approaches as indicated in Table 5. This indicates that utilizing multimodal branch-aware prompting helps MaPLe in enhancing the generalization and robustness of V-L models like CLIP. 4.6. Ablation Experiments Prompt Depth: In Fig. 4 (left), we illustrate the effect of prompt depth J for MaPLe and ablate on the depth of language and vision branch individually. In general, the performance improves as prompt depth increases. We note that performance sensitivity increases when randomly initialized prompts are inserted in the deeper layers of a frozen model where the model feature space is already mature. Similar trend is also reported by [16]. As earlier methods utilize shallow language prompting (J = 1), we compare our method with deep language prompting. Overall, MaPLe achieves better performance than deep language prompting and achieves maximum performance at a depth of 9. Prompt Length: Fig. 4 (right) shows the effect of prompt length for MaPLe. As the prompt length increases, the performance on base classes is generally maintained, while the novel class accuracy decreases. This indicates over-\ufb01tting which inherently hurts the generalization to novel classes. Effectiveness of Multi-modal Prompting: Fig. 5 shows the analysis of per class accuracy for selected datasets in the order of increasing domain shift. It indicates that the performance gains of MaPLe in comparison to Co-CoOp varies across different datasets. MaPLe provides signi\ufb01cant gains over Co-CoOp for datasets that have large distribution shifts from the pretraining dataset of CLIP, and vision concepts that are usually rare and less generic. Further detailed analysis is provided in Appendix C. Prompting complexity: Table 6 shows the computational Figure 4. Ablation on prompt depth (left) and prompt length (right) in MaPLe. We report average results on the held-out validation sets of all datasets. Figure 5. Percentage classes where MaPLe shows improved performance over Co-CoOp, which increases as dataset domain shift from generic categories increases (\u2192). complexity of MaPLe in comparison with other approaches. Although MaPLe utilizes multi-modal prompts, its overall FLOPS (Floating Point Operations) exceeds only by 0.1% over CoOp and Co-CoOp. The independent V-L prompting also provides comparable FLOP count. In terms of inference speed, Co-CoOp is signi\ufb01cantly slower and the FPS (Frames Per Second) remains constant as the batch size increases. In contrast, MaPLe has no such overhead and provides much better inference and training speeds. 
Further, MaPLe provides better convergence as it requires only half training epochs as compared to Co-CoOp (5 vs 10 epochs). MaPLe adds about 2.85% training parameters on top of CLIP. To study if the performance gain is mainly attributed to more parameters, we experiment with MaPLe\u2020, which uses a uni\ufb01ed V-L coupling function for all layer prompts. MaPLe\u2020 with about 9x lesser parameters than MaPLe also improves over existing methods. We also ablate by comparing MaPLe with heavier CoCoOp in Appendix D. Method Params Params FPS (with BS) HM % CLIP 1 4 100 CoOp 2048 0.002 13.8 55.3 1353.0 71.66 CoCoOp 35360 0.03 64.6 114.7 15.1 75.83 Independent V-L 31488 0.02 62.5 239.4 1383.8 77.90 MaPLe 3.55 M 2.85 60.2 239.0 1365.1 78.55 MaPLe\u2020 0.41 M 0.33 60.2 238.0 1365.0 78.11 Table 6. Comparison of computational complexity among different prompting methods. MaPLe\u2020 is a MaPLe version which utilizes a common V-L coupling function for all layers. 5." + } + ], + "Muzammal Naseer": [ + { + "url": "http://arxiv.org/abs/2302.12252v2", + "title": "Boosting Adversarial Transferability using Dynamic Cues", + "abstract": "The transferability of adversarial perturbations between image models has\nbeen extensively studied. In this case, an attack is generated from a known\nsurrogate \\eg, the ImageNet trained model, and transferred to change the\ndecision of an unknown (black-box) model trained on an image dataset. However,\nattacks generated from image models do not capture the dynamic nature of a\nmoving object or a changing scene due to a lack of temporal cues within image\nmodels. This leads to reduced transferability of adversarial attacks from\nrepresentation-enriched \\emph{image} models such as Supervised Vision\nTransformers (ViTs), Self-supervised ViTs (\\eg, DINO), and Vision-language\nmodels (\\eg, CLIP) to black-box \\emph{video} models. In this work, we induce\ndynamic cues within the image models without sacrificing their original\nperformance on images. To this end, we optimize \\emph{temporal prompts} through\nfrozen image models to capture motion dynamics. Our temporal prompts are the\nresult of a learnable transformation that allows optimizing for temporal\ngradients during an adversarial attack to fool the motion dynamics.\nSpecifically, we introduce spatial (image) and temporal (video) cues within the\nsame source model through task-specific prompts. Attacking such prompts\nmaximizes the adversarial transferability from image-to-video and\nimage-to-image models using the attacks designed for image models. Our attack\nresults indicate that the attacker does not need specialized architectures,\n\\eg, divided space-time attention, 3D convolutions, or multi-view convolution\nnetworks for different data modalities. Image models are effective surrogates\nto optimize an adversarial attack to fool black-box models in a changing\nenvironment over time. Code is available at https://bit.ly/3Xd9gRQ", + "authors": "Muzammal Naseer, Ahmad Mahmood, Salman Khan, Fahad Khan", + "published": "2023-02-23", + "updated": "2023-04-04", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "main_content": "INTRODUCTION Deep learning models are vulnerable to imperceptible changes to the input images. It has been shown that for a successful attack, an attacker no longer needs to know the attacked target model to compromise its decisions (Naseer et al., 2019; 2020; Nakka & Salzmann, 2021). 
Adversarial perturbations suitably optimized from a known source model (a surrogate) can fool an unknown target model (Kurakin et al., 2016). These attacks are known as black-box attacks since the attacker is restricted to access the deployed model or compute its adversarial gradient information. Adversarial attacks are continuously evolving, revealing new blind spots of deep neural networks. Adversarial transferability has been extensively studied in image-domain (Akhtar & Mian, 2018; Wang & He, 2021; Naseer et al., 2022b; Malik et al., 2022). Existing works demonstrate how adversarial patterns can be generalized to models with different architectures (Zhou et al., 2018) and even different data domains (Naseer et al., 2019). However, the adversarial transferability between different architecture families designed for varying data modalities, e.g., image models to video models, has not been actively explored. Since the adversarial machine learning topic has gained maximum attention in the image-domain, it is natural to question if image models can help transfer 1 arXiv:2302.12252v2 [cs.CV] 4 Apr 2023 \fPublished as a conference paper at ICLR 2023 better to video-domain models. However, the image models lack dynamic temporal cues which are essential for transfer to the video models. We are motivated by the fact that in a real-world setting, a scene is not static but mostly involves various dynamics, e.g., object motion, changing viewpoints, illumination and background changes. Therefore, exploiting dynamic cues within an adversarial attack is essential to \ufb01nd blind-spots of unknown target models. For this purpose, we introduce the idea of encoding disentangled temporal representations within an image-based Vision Transformer (ViT) model using dedicated temporal prompts while keeping the remaining network frozen. The temporal prompts can learn the dynamic cues which are exploited during attack for improved transferability from image-domain models. Speci\ufb01cally, we introduce the proposed temporal prompts to three types of image models with enriched representations acquired via supervised (ViT (Dosovitskiy et al., 2020)), self-supervised (DINO (Caron et al., 2021)) or multi-modal learning (CLIP (Radford et al., 2021)). Our approach offers the bene\ufb01t that the attacks do not need to rely on specialized networks designed for videos towards better adversarial transferability. As an example, popular model designs for videos incorporate 3D convolutions, space-time attention, tube embeddings or multi-view information to be robust against the temporal changes (Bertasius et al., 2021; Arnab et al., 2021). Without access to such speci\ufb01c design choices, our approach demonstrates how an attacker can leverage regular image models augmented with temporal prompts to learn dynamic cues. Further, our approach can be easily extended to image datasets, where disentangled representations can be learned via tokens across a scale-space at varying image resolutions. In summary, the major contributions of this work include: \u2022 We demonstrate how temporal prompts incorporated with frozen image-based models can help model dynamic cues which can be exploited to fool deep networks designed for videos. \u2022 Our approach for dynamic cue modeling via prompts does not affect the original spatial representations learned by the image-based models during pre-training, e.g., fully-supervised, self-supervised and multi-modal models. 
\u2022 The proposed method signi\ufb01cantly improves transfer to black-box image and video models. Our approach is easily extendable to 3D datasets via learning cross-view prompts; and image-only datasets via modeling the scale-space. Finally, it enables generalization from popular plain ViT models without considering video-speci\ufb01c specialized designs. We analyse the adversarial space of three type of image models (fully-supervised, self-supervised, and text-supervised). A pre-trained ImageNet ViT with approximately 6 million parameters exhibits 44.6 and 72.2 top-1 (%) accuracy on Kinetics-400 and ImageNet validation sets using our approach, thereby signi\ufb01cantly improving the adversarial transferability on video-domain models. A similar trend exists with other image models. Our results indicate that the multi-modal CLIP can better adapt to video modalities than fully-supervised or self-supervised ViTs. However, CLIP adversaries are relatively less transferable as compared to fully-supervised ViT or self-supervised DINO model.As an example, a momentum based iterative attack launched from our DINO model can reduce the performance of TimesFormer (Bertasius et al., 2021) from 75.6% to 35.8% on Kinetics-400 dataset. 2 BOOSTING ADVERSARIAL TRANSFERABILITY USING DYNAMIC CUES Adversarial transferability refers to manipulating a clean sample (image, video, or 3D object rendered into multi-views) in a way that is deceiving for an unknown (black-box) model. In the absence of an adversarial perturbation, the same black-box model predicts the correct label for the given image, video, or a rendered view of a 3D object. A known surrogate model is usually used to optimize for the adversarial patterns. Instead of training the surrogate model from scratch on a given data distribution, an attacker can also adapt pre-trained image models to the new task. These image models can include supervised ImageNet models such as Deit (Touvron et al., 2020), self-supervised ImageNet models like DINO (Caron et al., 2021), and text-supervised large-scale multi-modal models e.g. CLIP (Radford et al., 2021). The adversarial attack generated from such pre-trained models with enriched representations transfer better in the black-box setting for image-to-image transfer task (Zhang et al., 2022; Naseer et al., 2022b; Aich et al., 2022). However, adversarial perturbations optimized from image models are not well suited to fool motion dynamics learned by a video model (Sec. 3). To cater for this, we introduce temporal cues to model motion dynamics within adversarial attacks through pre-trained image models. 
Our approach, therefore, models both spatial and temporal 2 \fPublished as a conference paper at ICLR 2023 Proposed training for dynamic cues within image models Image pre-trained vision transformer MHSA Frozen model Layer Norm Layer Norm FFN Input Video (\ud835\udc65) Target Video / Image Models (Black-box) Source Image Model Random Sampler Spatial Tokens Tokenizer Transformation \ud835\udcaf MHSA Spatial Pooling Tokenizer Learned Temporal Tokens (\ud835\udc61) Perturbed Video (\ud835\udc65\u2032) Multi-head Self-attention Feed-forward network \ud835\udc65 \ud835\udc65\u2032 Times Former \ud835\udc65 \ud835\udc65\u2032 ResNet3D \ud835\udc65 \ud835\udc65\u2032 BiT50 Crawling Jumping Crawling Playing Baby Whale Attack Spatial head Temporal head Figure 1: Overview of inducing dynamic cues within image models: Attackers can easily access freely available, pre-trained image models learned on large-scale image and language datasets to launch adversarial attacks. These models, however, lack temporal information. Therefore, adversarial attacks launched using image models have less success rate against a moving target such as in videos. We learn a transformation T (.) to convert a given video with t number of frames into t temporal tokens. Our transformation is based on self-attention thus it learns the motion dynamics between the video frames with global context during training. We randomly sample a single frame to represent the spatial tokens. The temporal and spatial tokens are concatenated and passed through the frozen model. The spatial tokens are ignored while the average of temporal tokens is processed through a temporal head for video recognition. The image/spatial class token learned on images (e.g., ImageNet) interacts with our temporal tokens (e.g., Kinetics) within the network through self-attention. After the training, image and video solutions are preserved within the spatial and temporal tokens and used in adversarial attacks. information during the attack (Sec. 2.1) and does not depend on specialized video networks e.g., with 3D convolutions (Tran et al., 2018), space-time divided attention (Bertasius et al., 2021) or even multi-branch models (Su et al., 2015). To this end, we \ufb01rst introduce temporal prompt adaptation of image models to learn dynamic cues on videos or multi-view information on rendered views of a 3D object, as explained next. 2.1 INTRODUCING DYNAMIC CUES THROUGH TEMPORAL PROMPTS Preliminaries: We consider an image model F pre-trained on the input samples x of size c \u00d7 h \u00d7 w, where c, h and w represent the color channels, height and width, respectively. We consider Vision Transformers (ViTs) (Dosovitskiy et al., 2020; Touvron et al., 2020) sequentially composed of n Transformer blocks comprising of multi-head self-attention (MHSA) and feed-forward layer (Dosovitskiy et al., 2020) i.e. F = (f1 \u25e6f2 \u25e6f3 \u25e6. . . fn), where fi represents a single Transformer block. ViTs divide an input sample x into N patches also called patch tokens, Pt \u2208RN\u00d7D, where D is the patch dimension. A class token1, Icls \u2208R1\u00d7D, is usually combined with these patch tokens within these image models (Dosovitskiy et al., 2020; Caron et al., 2021; Radford et al., 2021). We refer to these patch and class tokens collectively as \u2018spatial tokens\u2019 (Fig. 1). These tokens are then processed by the model F to produce an output of the same size as of input i.e. F(x) \u2208R(N+1)\u00d7D. 
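For reference, a minimal sketch of these spatial tokens is given below: an image is patchified into N patch tokens and prepended with a learnable class token. The Conv2d patch embedding and the 224/16 geometry follow the standard ViT recipe; the exact layer choices are illustrative.

```python
# Minimal sketch of ViT "spatial tokens": N patch tokens plus one class token (I_cls).
import torch
import torch.nn as nn

class SpatialTokenizer(nn.Module):
    def __init__(self, img_size=224, patch=16, in_ch=3, dim=768):
        super().__init__()
        self.num_patches = (img_size // patch) ** 2                          # N = 196 for 224/16
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)   # patchify + embed
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))                # I_cls

    def forward(self, x):                                   # x: (B, 3, 224, 224)
        patches = self.proj(x).flatten(2).transpose(1, 2)   # (B, N, D) patch tokens P_t
        cls = self.cls_token.expand(x.size(0), -1, -1)      # (B, 1, D)
        return torch.cat([cls, patches], dim=1)             # (B, N + 1, D) spatial tokens
```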
In the case of image classi\ufb01cation, the re\ufb01ned class token (\u02dc Icls) is extracted from the model output and processed by MLP (gs), named spatial head (Fig. 1). It maps the re\ufb01ned class token from R1\u00d7D to R1\u00d7Cs, where Cs represents the number of image class categories. Motivation: The spatial class token in image models, Icls, is never optimized for temporal information. Our objective is to adapt the pre-trained and frozen image model for videos or multi-views by training a temporal class token It cls \u2208R1\u00d7D that is optimal for input samples of size t \u00d7 c \u00d7 h \u00d7 w, 1Hierarchical ViTs such as Swin Transformer (Liu et al., 2021c) or more simple designs without self-attention layers like MLP-Mixer (Tolstikhin et al., 2021) use average of patch tokens as the class token. 3 \fPublished as a conference paper at ICLR 2023 Figure 2: Mimicking dynamic cues on static images: We optimize our proposed transformation (Fig. 1) for different spatial scales to mimic changes. This improves model generalization e.g. Deit-T performance increases from 7.7% to 45% on ImageNet val. set at 96\u00d796. Attacking models with such prompts increases transferability. where t represents the number of temporal frames in a video or rendered views of a 3D object. We optimize the temporal token through MLP (gt), named temporal head (Fig. 1). It maps the temporal token from R1\u00d7D to R1\u00d7Ct, where Ct represents the number of video action categories. A trivial way is to concatenate all the additional frames of a video to generate t \u00d7 N patch tokens which can then be combined with the temporal token before forwadpass through the image model. We can then optimize temporal class token It cls and temporal head (gt) for video classi\ufb01cation and incorporate the dynamic cues within a frozen image model. This approach, however, has a drawback as the computational time complexity increases signi\ufb01cantly to O \u0000(t \u00d7 N)2 \u00d7 D \u0001 within the self-attention layers of ViTs. Another naive way is to either apply temporal pooling to convert t\u00d7N to only N spatial tokens or approximate a video via a single frame that is ti \u00d7c\u00d7h\u00d7w, where ti corresponds to a randomly sampled frame from the input video. These approaches are however sub-optimal as we are unable to optimize for motion dynamics available from different video frames. We induce the temporal information within image models without increasing the quadratic complexity within self-attention of a Vision Transformer. We achieve this by representing a given video by a randomly sampled frame while at the same time each frame in a video is modeled via a temporal prompt of size R1\u00d7D. The temporal prompts for different frames in a video are generated through our transformation T . Therefore, we only train transformation T , temporal class token It cls and head gt to learn motion dynamics within pre-trained and frozen image models. Our approach preserves the original image or spatial representation encoded within spatial class token Icls (Table 5). Both spatial and temporal tokens, Icls and It cls, are then used within our attack approach (Sec. 2.2). 2.1.1 TEMPORAL PROMPTS THROUGH TRANSFORMATION As mentioned above, the transformation T processes a video with t number of temporal frames and produces only t temporal prompts (Fig. 1). A video sample, x, is divided into patch tokens such that x \u2208Rt\u00d7N\u00d7D. 
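A quick back-of-the-envelope comparison illustrates why the prompt-based design is preferable to concatenating all frames; the figures below (t = 8 frames, N = 196 patches, D = 768) are illustrative.

```python
# Self-attention cost: naive frame concatenation vs. temporal prompts (illustrative figures).
t, N, D = 8, 196, 768

naive_tokens  = t * N + 1      # all frames' patch tokens + a temporal class token
prompt_tokens = N + t + 2      # one sampled frame + t temporal prompts + I_cls + It_cls

# Self-attention scales quadratically in the sequence length.
ratio = (naive_tokens ** 2 * D) / (prompt_tokens ** 2 * D)
print(f"naive: {naive_tokens} tokens, prompted: {prompt_tokens} tokens, "
      f"attention cost ratio ~ {ratio:.0f}x")
```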
The transformation T is a single self-attention block (Dosovitskiy et al., 2020) that interacts between the tokens and learns the relationship between the video frames. The transformation output is then pooled across spatial dimension N to produce t temporal tokens i.e., T (x) \u2208Rt\u00d7D. Attacking such temporal tokens allows \ufb01nding adversarial patterns capable of fooling the dynamic cues learned by the unknown black-box video model. Mimicking dynamic behavior on image datasets: Image samples are static and have no temporal dimension. So we adopt a simple strategy to mimic changing behavior for such static data. We consider images at different spatial scales to obtain a scale-space (Fig. 2) and learn prompts for different spatial scales. The dynamic cues within image models not only boost transferability of the existing adversarial attacks from image-to-video models but also increase the attack success rate in the black-box image models (Sec. 3). 2.1.2 IMAGE MODELS WITH TEMPORAL AND SPATIAL PROMPTS Our approach is motivated by the shallow prompt transfer learning (Jia et al., 2022). As discussed above, we divide a given a video x into video patches such that x \u2208Rt\u00d7N\u00d7D. These video patches are processed by our transformation to produce temporal tokens i.e. T (x) \u2208Rt\u00d7D. Further, we randomly sample a single frame xi \u223cx and divide it into patch tokens such that xi \u2208RN\u00d7D. These single frame tokens are then concatenated with the temporal prompts generated by T (x), a temporal class token It cls, and the image class token Icls to equip the pre-trained image representation of a model F with discriminative temporal information. F(x) = F \u0000\u0002 It cls, T (x), Icls, xi \u0003\u0001 (1) After the forwardpass through the image model (Eq. 1), we extract the re\ufb01ned temporal class token and temporal prompts from the model\u2019s output. The average of these re\ufb01ned temporal class tokens, 4 \fPublished as a conference paper at ICLR 2023 Figure 3: Visualizing the behavior of Adversarial Patterns across Time: We generate adversarial signals from our DINO model with temporal prompts using DIM attack (Xie et al., 2019). Observe the change in gradients across different frames (best viewed in zoom). We provide visual demos of adversarial patterns in Appendix K. and t temporal prompts is then projected through the temporal head gt for the new video classi\ufb01cation task. For the sake of brevity, we will refer to the average of our re\ufb01ned temporal class and prompts tokens collectively as \u02dc Itp cls. During backpass, we only update the parameters of our transformation, temporal class token and temporal head (Fig. 1). Training: Given a pre-trained ViT, we freeze all of its existing weights and insert our learnable transformation (T ), temporal class token It cls, and video-speci\ufb01c temporal head. We train for 15 epochs only using SGD optimizer with a learning rate of 0.005 which is decayed by a factor of 10 after the 11th and 14th epoch. We use batch size of 64 and train on 16 A100 GPUs for large-scale datasets such as Kinetics-400 (Kay et al., 2017) and only 2 A100 GPUs for other small datasets. We discuss the effect of our temporal prompts on temporal (video) and spatial (image) solutions in Table 5. Our method mostly retains the original image solution captured in image class token Icls while exhibiting dynamic temporal modeling. 
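A minimal sketch of the transformation T and of the forward pass in Eq. (1) is given below. The frozen backbone is assumed to be exposed as a callable `frozen_vit_tokens` mapping a token sequence (B, L, D) to refined tokens of the same shape, and the paper's full transformer block for T is approximated here by a single `nn.MultiheadAttention` layer; these simplifications, as well as the names `temporal_head`, `img_cls` and `frame_tokens`, are assumptions of the sketch.

```python
# Minimal sketch of temporal prompts: only T(.), the temporal class token and the temporal
# head are trainable; the image backbone and its spatial tokens stay frozen.
import torch
import torch.nn as nn

class TemporalPrompter(nn.Module):
    def __init__(self, dim=768, n_classes=400, heads=12):   # 400 classes = Kinetics-400 (example)
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)  # transformation T(.)
        self.temporal_cls = nn.Parameter(torch.zeros(1, 1, dim))         # It_cls
        self.temporal_head = nn.Linear(dim, n_classes)                   # g_t

    def make_prompts(self, video_tokens):        # (B, t, N, D): patch tokens of all t frames
        B, t, N, D = video_tokens.shape
        seq = video_tokens.reshape(B, t * N, D)
        out, _ = self.attn(seq, seq, seq)        # self-attention across frames and patches
        return out.reshape(B, t, N, D).mean(2)   # spatial pooling -> t temporal prompts (B, t, D)

    def forward(self, video_tokens, frame_tokens, img_cls, frozen_vit_tokens):
        """Eq. (1): run the frozen ViT on [It_cls, T(x), I_cls, x_i]."""
        B, t = video_tokens.size(0), video_tokens.size(1)
        prompts = self.make_prompts(video_tokens)                     # (B, t, D)
        seq = torch.cat([self.temporal_cls.expand(B, -1, -1), prompts,
                         img_cls, frame_tokens], dim=1)               # (B, t + 2 + N, D)
        refined = frozen_vit_tokens(seq)                              # frozen image backbone
        temporal = refined[:, : t + 1].mean(dim=1)                    # avg of refined It_cls + prompts
        return self.temporal_head(temporal)                           # video logits
```

During training only the parameters created in this module receive gradients, matching the recipe above; the spatial class and patch tokens are simply carried through, so the original image solution is left untouched.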
2.2 TRANSFERABLE ATTACK USING SPATIAL AND TEMPORAL CUES Given an image model F adapted for motion dynamics via our approach (Fig. 1), our attack exploits the spatial and temporal cues to generate transferable perturbations for both image and video models. \u2022 Given a video and its corresponding label: Our attack uses the re\ufb01ned temporal class token \u02dc Itp cls output by the model in a supervised manner, while the spatial class token \u02dc Icls serves in an self-supervised objective. \u2022 Given an image and its corresponding label: Our attack uses the re\ufb01ned class tokens \u02dc Itp cls learned at different resolutions to mimic the dynamic behavior from images in a supervised manner, while the spatial class token \u02dc Icls serves in an self-supervised objective. We generate an adversarial example x\u2032 for a video or image sample x using the following objective: \ufb01nd x\u2032 such that F(x\u2032)argmax \u0338= y, and \u2225x \u2212x\u2032\u2225p \u2264\u03f5, (2) where y is the original label and \u03f5 represents a maximum perturbation budget within a norm distance p. We optimize Eq. 2 by maximizing the loss objective in Eq. 3 within existing adversarial attacks. maximize L = Ls \u2212Lss, (3) where Ls is a supervised loss function. It uses the labels of a given image/video through supervised MLP head. Ls can be optimized by any of the existing supervised losses proposed in transferable attacks literature; cross-entropy (Dong et al., 2018), relativistic cross-entropy (Naseer et al., 2019), or logit loss (Zhao et al., 2021). Unless otherwise mentioned, we use cross-entropy loss to demonstrate the effectiveness of our approach. Similarly, Lss is self-supervised loss that exploits pre-trained representation within spatial class tokens. We minimize the cosine similarity between the re\ufb01ned class tokens of clean and adversarial samples. 
Speci\ufb01cally, we de\ufb01ne Ls and Lss as follows: Ls = \u2212 Ct X j=1 yjlog \u0010 \u02dc I\u2032tp cls \u25e6gt \u0011 , Lss = \u02dc I\u2032\u22a4 cls\u02dc Icls \u2225\u02dc I\u2032cls\u2225\u2225\u02dc Icls\u2225 5 \fPublished as a conference paper at ICLR 2023 Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2014) TimesFormer ResNet3D MVCNN Models (\u2193) Temporal Prompts UCF HMDB K400 SSv2 UCF HMDB K400 SSv2 Depth Shaded 90.6 59.1 75.6 45.1 79.9 45.4 60.5 26.7 92.6 94.8 Deit-T \u0017 75.1 32.9 57.8 23.5 25.7 9.8 23.3 14.7 16.0 79.0 \u0013 64.5(-26.1) 21.9(-37.2) 51.6(-24.0) 19.8(-25.3) 22.3(-57.6) 8.9(-36.5) 19.4(-41.1) 13.9(-12.8) 15.2(-77.4) 77.4(-17.4) Deit-S \u0017 74.7 33.5 58.2 23.0 25.4 9.7 23.9 15.9 17.2 79.8 \u0013 64.0(-26.6) 20.6(-38.5) 48.5(-27.1) 19.1(-26.0) 22.6(-57.3) 8.6(-36.8) 20.6(-39.9) 13.6(-13.1) 15.9(-76.7) 78.9(-15.9) Deit-B \u0017 75.5 31.7 57.5 22.6 25.0 9.2 22.4 15.5 17.3 80.6 \u0013 64.7(-25.9) 20.0(-39.1) 48.6(-27.0) 19.3(-25.8) 22.7(-57.2) 8.1(-37.3) 19.5(-41.0) 13.4(-13.3) 17.1(-75.5) 80.9(-13.9) DINO \u0017 71.0 33.7 54.2 23.8 32.0 12.4 24.3 15.2 15.2 72.7 \u0013 60.7(-29.9) 18.8(-40.3) 46.7(-28.9) 19.4(-25.7) 21.9(-58.0) 8.3(-37.1) 19.6(-40.9) 13.9(-12.8) 15.0(-77.6) 70.7(-24.1) CLIP \u0017 79.8 36.6 62.3 26.4 38.5 15.0 29.9 17.3 15.8 81.4 \u0013 77.5(-13.1) 29.3(-29.8) 59.5(-16.1) 25.0(-20.1) 35.6(-44.3) 11.2(-34.2) 28.8(-31.7) 16.5(-10.2) 15.6(-77.0) 81.2(-13.6) Projected Gradient Decent (PGD) (Madry et al., 2018) Deit-T \u0017 84.9 40.9 68.3 32.5 69.4 32.7 44.3 18.4 37.0 90.5 \u0013 78.1(-12.5) 32.0(-27.1) 61.5(-14.1) 27.8(-17.3) 65.9(-14.0) 29.5(-15.9) 39.1(-21.4) 17.6(-9.1) 34.6(-58.0) 90.2(-4.6) Deit-S \u0017 85.1 42.0 68.6 32.5 69.9 32.7 44.8 19.4 39.4 90.9 \u0013 74.1(-16.5) 26.1(-33.0) 58.3(-17.3) 26.1(-19.0) 65.4(-14.5) 28.2(-17.2) 28.9(-31.6) 17.1(-9.6) 36.5(-56.1) 90.1(-4.7) Deit-B \u0017 85.5 41.5 67.7 33.3 69.9 32.6 44.7 18.9 37.7 91.1 \u0013 73.6(-17.0) 25.0(-34.1) 56.3(-19.3) 24.4(-20.7) 64.7(-15.2) 26.6(-18.8) 37.0(-23.5) 16.6(-10.1) 36.7(-55.9) 90.5(-4.3) DINO \u0017 81.7 38.0 64.8 30.9 68.0 31.8 42.5 18.2 33.1 89.2 \u0013 64.9(-25.7) 14.8(-44.3) 51.5(-24.1) 22.4(-22.7) 63.1(-16.8) 25.9(-19.5) 35.2(-25.3) 16.6(-10.1) 28.9(-63.5) 87.5(-7.3) CLIP \u0017 86.9 44.5 70.7 35.4 71.9 35.1 46.5 20.0 39.8 91.3 \u0013 82.6(-8.0) 32.3(-26.8) 66.9(-8.7) 32.7(-12.4) 69.7(-10.2) 28.9(-16.5) 43.2(-17.3) 18.9(-7.8) 37.6(-55.0) 90.1(-4.7) Table 1: Adversarial Transferability from image to video models: Adversarial accuracy (%) is reported at \u03f5 \u226416. When attack is optimized from an image model without temporal prompts (\u0017), then we minimize the unsupervised adversarial objective (Eq. 3) i.e., cosine similarity between the feature embedding of spatial class token of clean and adversarial frames. We use 8 frames from a given video. Attacks transferability improves by a clear margin with temporal prompts. Clean accuracy (top1 % on randomly selected validation set of 1.5k samples) is highlighted with cell color. Single step FGSM and multi-step PGD attacks perform better with our proposed temporal prompts (e.g. FGSM reduces TimesFormer generalization from 90.6% to 64.5% on UCF by exploiting dynamic cues within the DeiT-T). A similar trend exists for ResNet3D and MVCNN. In this manner, any of the existing attacks can be combined with our approach to transfer adversarial perturbations as we demonstrate in Sec. 3. 
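A minimal sketch of this objective and of a PGD-style update is shown below; `temporal_logits` and `spatial_cls_feat` are placeholder callables returning, respectively, the temporal-head logits and the refined spatial class token of the adapted surrogate.

```python
# Minimal sketch of Eq. (3): L = L_s - L_ss, maximized with respect to the adversarial input.
import torch
import torch.nn.functional as F

def attack_loss(temporal_logits, spatial_cls_feat, x_adv, x_clean, y):
    ls = F.cross_entropy(temporal_logits(x_adv), y)          # supervised term L_s (temporal head)
    with torch.no_grad():
        clean_feat = spatial_cls_feat(x_clean)                # refined clean class token (kept fixed)
    lss = F.cosine_similarity(spatial_cls_feat(x_adv), clean_feat).mean()   # self-supervised L_ss
    return ls - lss

def pgd_step(x_adv, x_clean, grad, eps=16 / 255, step=2 / 255):
    """One L-inf ascent step followed by projection onto the eps-ball and the valid pixel range
    (step size is illustrative; eps = 16/255 assumes inputs rescaled to [0, 1])."""
    x_adv = x_adv + step * grad.sign()
    x_adv = torch.min(torch.max(x_adv, x_clean - eps), x_clean + eps)
    return x_adv.clamp(0, 1)
```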
2.2.1 CHANGING SPATIAL RESOLUTION FOR REGULARIZATION An adversarial attack can easily over\ufb01t the source models meaning it can have a 100% success rate on the source model but mostly fails to fool the unknown black-box model. Different heuristics like adding momentum (Dong et al., 2018; Lin et al., 2019; Wang & He, 2021), augmentations (Naseer et al., 2021; Xie et al., 2019), ensemble of different models (Dong et al., 2018) or even self-ensemble (Naseer et al., 2022b) at a target resolution are proposed to reduce such over\ufb01tting and increase adversarial transferability. Low-resolution pre-training followed by high-resolution \ufb01ne-tuning have a regularization effect on the generalization of neural networks (Touvron et al., 2022). Similarly, we observe that \ufb01ne-tuning scale-space prompts at different low resolutions (e.g., 96\u00d796) within a model pre-trained on high resolution (224\u00d7224) also has a complimentary regularization effect on the transferability of adversarial perturbations (Tables 3 and 4). Most publicly available image models are trained at a resolution of size 224\u00d7224 and their performance at low-resolution inputs is sub-optimal (Fig. 2). Learning at low-resolution with our approach (Fig. 1) also allows to mimic a changing scene over time from static datasets like ImageNet (Fig. 2). Our low-resolution scale-space prompts (Sec. 2.1) signi\ufb01cantly improves generalization at low-resolution inputs (Fig. 2). 3 EXPERIMENTS We generate l\u221eadversarial examples from the adapted image models and study their transferability to video and image models. We followed standard adversarial transferability protocol Dong et al. (2018); Naseer et al. (2022b); the attacker is unaware of the black-box video model but has access to its train data. 
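As one way to realize this regularization, the attack loss can be averaged over surrogates adapted at several spatial resolutions while the adversarial example itself stays at the 224x224 target resolution; `models_by_res` below is a hypothetical mapping from resolution to adapted surrogate, and the interpolation mode is an illustrative choice.

```python
# Minimal sketch of a scale-space (multi-resolution) attack objective.
import torch.nn.functional as F

def multi_resolution_loss(models_by_res, x_adv, y):
    """Average cross-entropy over surrogates adapted at e.g. {56, 96, 224} pixels."""
    total = 0.0
    for res, model in models_by_res.items():
        x_r = F.interpolate(x_adv, size=(res, res), mode="bilinear", align_corners=False)
        total = total + F.cross_entropy(model(x_r), y)
    return total / len(models_by_res)   # maximized w.r.t. x_adv by the outer attack loop
```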
Our temporal cues also boost cross-task adversarial transferability with no access 6 \fPublished as a conference paper at ICLR 2023 Momentum Iterative Fast Gradient Sign Method (MIM) (Dong et al., 2018) TimesFormer ResNet3D MVCNN Models (\u2193) Temporal Prompts UCF HMDB K400 SSv2 UCF HMDB K400 SSv2 Depth Shaded 90.6 59.1 75.6 45.1 79.9 45.4 60.5 26.7 92.6 94.8 Deit-T \u0017 77.0 30.1 58.5 25.2 42.0 16.2 29.1 16.0 19.2 83.0 \u0013 66.9(-23.7) 21.4(-37.7) 51.4(-24.2) 21.5(-23.6) 41.5(-38.4) 16.1(-29.3) 26.4(-34.1) 15.3(-11.4) 19.0(-73.6) 82.9(-11.9) Deit-S \u0017 78.7 31.8 59.3 25.1 42.2 16.5 31.0 16.5 20.6 83.6 \u0013 60.8(-29.8) 16.1(-43.0) 47.6(-28.0) 21.0(-24.1) 41.9(-38.0) 15.7(-29.7) 26.8(-33.7) 14.2(-12.5) 19.3(-73.3) 83.5(-11.3) Deit-B \u0017 78.7 32.0 59.7 25.9 43.1 16.2 29.5 16.8 20.6 84.4 \u0013 61.7(-25.9) 15.6(-43.5) 45.2(-30.4) 19.1(-26.0) 40.7(-39.2) 13.6(-31.8) 24.8(-35.7) 14.5(-12.2) 20.5(-72.1) 84.2(-10.6) DINO \u0017 71.3 28.0 54.6 24.1 46.0 18.8 30.3 15.9 19.7 80.4 \u0013 47.7(-42.9) 8.40(-50.7) 38.8(-36.8) 16.4(-28.7) 39.8(-40.1) 12.7(-32.7) 24.5(-36.0) 14.3(-12.4) 19.4(-73.2) 80.7(-14.1) CLIP \u0017 81.8 35.6 63.5 29.3 53.2 21.9 35.5 17.9 20.8 86.7 \u0013 78.2(-12.4) 28.2(-30.9) 61.3(-14.3) 27.2(-17.9) 45.9(-34.0) 17.8(-27.6) 32.9(-27.6) 16.5(-10.2) 20.2(-72.4) 84.7(-10.1) MIM with Input Diversity (DIM) (Xie et al., 2019) Deit-T \u0017 75.3 28.2 59.2 24.3 41.3 16.5 29.5 16.3 20.3 84.5 \u0013 62.6(-28.0) 16.9(-42.2) 48.1(-27.5) 20.5(-24.6) 37.8(-42.1) 13.4(-32.0) 24.2(-36.3) 14.3(-12.4) 19.6(-73.0) 85.0(-9.8) Deit-S \u0017 76.3 29.4 57.5 24.1 39.8 17.3 29.8 16.2 21.8 85.9 \u0013 54.1(-36.5) 12.3(-46.8) 42.8(-32.8) 17.7(-27.4) 36.8(-43.1) 12.7(-32.7) 22.3(-38.2) 14.6(-12.1) 21.0(-71.6) 85.7(-9.1) Deit-B \u0017 75.9 28.8 57.8 24.5 41.5 16.5 30.5 15.6 22.2 85.1 \u0013 55.5(-35.1) 12.6(-46.5) 40.6(-35.0) 16.8(-28.3) 37.3(-42.6) 11.4(-34.0) 21.5(-39.0) 13.3(-13.4) 20.5(-72.1) 86.3(-8.5) DINO \u0017 64.9 23.1 51.8 22.6 43.5 16.7 27.6 13.7 19.0 80.2 \u0013 45.2(-45.4) 7.30(-51.8) 35.8(-39.8) 15.8(-29.3) 32.2(-47.7) 10.6(-34.8) 18.3(-42.2) 12.0(-14.7) 18.9(-73.7) 78.6(-16.2) CLIP \u0017 78.0 29.5 61.0 26.9 50.0 19.6 32.8 16.9 20.8 87.8 \u0013 73.4(-17.2) 22.7(-36.4) 56.7(-18.9) 25.1(-20.0) 46.5(-33.4) 13.6(-31.8) 31.5(-29.0) 15.7(-11.0) 20.5(-72.1) 87.4(-7.4) Table 2: Adversarial Transferability from image to video models: Adversarial accuracy (%) is reported at \u03f5 \u226416. When attack is optimized from an Image model without temporal prompts (\u0017), then we minimize the unsupervised adversarial objective (Eq. 3), cosine similarity, between the feature embedding of spatial class token of clean and adversarial frames. We use 8 frames from a given video. Attacks transferability improves by a clear margin with temporal prompts. Clean accuracy (top1 % on randomly selected validation set of 1.5k samples) is highlighted with cell color. We observe that a small Image model (e.g. Deit-T with only 5 Million parameters) can signi\ufb01cantly reduce the generalization of TimesFormer with the presence of our temporal prompts. The attack performance with temporal prompts increases with network capacity (e.g. Deit-T to Deit-B). to the black-box video model, its training data, or label space ( Appendix B). The maximum pixel perturbation is set to \u03f5 \u226416 for pixel range of [0, 255]. 
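For completeness, a minimal sketch of the momentum update behind the MIM results in the table above is included here; the decay factor and step size are common defaults rather than values taken from the paper, and the budget assumes inputs rescaled to [0, 1].

```python
# Minimal sketch of one MIM iteration: L1-normalized gradient accumulated into a momentum buffer.
import torch

def mim_update(x_adv, x_clean, grad, momentum, mu=1.0, eps=16 / 255, step=1.6 / 255):
    g = grad / (grad.abs().sum(dim=(1, 2, 3), keepdim=True) + 1e-12)   # per-sample L1 normalization
    momentum = mu * momentum + g
    x_adv = x_adv + step * momentum.sign()
    x_adv = torch.min(torch.max(x_adv, x_clean - eps), x_clean + eps).clamp(0, 1)
    return x_adv, momentum          # initialize momentum with torch.zeros_like(x_clean)
```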
We use standard image attacks to optimize adversarial examples through our adapted image models and boost adversarial transferability to video models by using the following protocols: Surrogate image models: We optimize adversaries from the known surrogate models. We study three types of surrogate image models trained via supervised (Touvron et al., 2020), self-supervised (Caron et al., 2021) and text-supervised (Radford et al., 2021) models. \u2212Supervised image models: We use Deit-T, DeiT-S, and DeiT-B with 5, 22, and 86 million parameters to video datasets. These models are pre-trained in a supervised manner on ImageNet. \u2212Self-Supervised image model: We use Vision Transformer trained in self-supervised fashion on ImageNet using DINO training framework (Caron et al., 2021). \u2212Text-Supervised image model: We adapt CLIP image encoder only trained on text-guided images. DeiT-B, DINO, and CLIP share the same network based on Vision Transformer. These models process images of size 3\u00d7224\u00d7224 that are divided into 196 patches with a patch size of 16. Each of these patch tokens has 768 embedding dimensions. Thus, the only difference between these models lies in their corresponding training frameworks. These models are adapted to different video datasets. Adversarial examples are then simply created using existing single and multi-step attacks (Goodfellow et al., 2014; Madry et al., 2018; Dong et al., 2018; Xie et al., 2019). Target video and image models: We transfer adversarial examples to unknown (black-box) video and image models. \u2212Target Video Models: We consider recently proposed TimesFormer (Bertasius et al., 2021) with divided space-time attention, 3D convolutional network (Tran et al., 2018), and multi-view convolutional network (Su et al., 2015). \u2212Target Image Models: We consider the same image models used in baseline by (Naseer et al., 2022b): BiT-ResNet50 (BiT50) (Beyer et al., 7 \fPublished as a conference paper at ICLR 2023 MIM with Input Diversity (DIM) (Xie et al., 2019) Convolutional Transformer Models (\u2193) Method BiT50 Res152 WRN DN201 ViT-L T2T24 TnT T2T-7 Deit-B Naseer et al. (2022b) 80.10 84.92 86.36 89.24 78.90 84.00 92.28 93.42 Deit-B Ours 86.64 90.88 92.14 93.82 95.64 95.74 98.48 94.74 DINO 84.36 93.74 95.16 96.20 96.48 89.68 94.74 96.18 CLIP 55.34 64.24 65.76 71.78 59.94 54.66 62.72 75.80 Table 3: Adversarial Transferability from image to image models: Fool Rate (%) on 5k ImageNet val. samples from Naseer et al. (2022b). We compare against the best results from Naseer et al. (2022b) that our baseline is a self-ensemble Deit-B with token re\ufb01nement blocks. To mimic the dynamics with static images, we optimize our proposed prompts at resolutions of 56\u00d756, 96\u00d794, 120\u00d7120, and 224\u00d7224. Adversaries are optimized at the target resolution of 224\u00d7224. Our method performs favorably well against self-ensemble with re\ufb01ned tokens. Models Temporal Prompts SR TimesFormer UCF HMDB DeiT-B \u0017 \u0017 75.9 28.8 \u0013 \u0017 55.5 12.6 \u0013 \u0013 53.9 11.8 Table 4: Effect of Change in Spatial Resolution (SR): Incorporating change in spatial resolution improves adversarial transferability (% accuracy, lower is better). Adversaries are created at target resolution of 224 from an ensemble of models adopted at 56\u00d756, 96\u00d796, and 224\u00d7224. 
2021), ResNet152 (Res152) (He et al., 2016), Wide-ResNet-50-2 (WRN) (Zagoruyko & Komodakis, 2016), DenseNet201 (DN201) (Huang et al., 2017) and other ViT models including Token-to-Token transformer (T2T) (Yuan et al., 2021), Transformer in Transformer (TnT) (Mao et al., 2021). Adapting image models for videos using dynamic cues: We use UCF (Soomro et al., 2012), HMDB (Kuehne et al., 2011), K400 (Kay et al., 2017), and SSv2 (Goyal et al., 2017) training sets to learn temporal prompts and adapt image models to videos via our approach (Fig.1). HMDB has the smallest validation set of 1.5k samples. For evaluating robustness, we selected all validation samples in HMDB, while randomly selected 1.5k samples from UCF, K400, and SSv2 validation sets. We also use multi-view training samples rendered for 3D ModelNet40 (depth and shaded) for image models. We use validation samples of rendered multi-views for both modalities. Adapting image models to images mimicking dynamic cues: We use ImageNet training set and learn our proposed transformation and prompts at multiple spatial scales; 56\u00d756, 96\u00d796, 120\u00d7120, and 224\u00d7224. We optimize adversarial attacks at the target resolution of 224\u00d7224 by using the ensemble of our learned models at multiple resolutions. In this manner, our approach mimic the change over time by changing the spatial scale or resolution over time. We study our attack approach using the 5k samples from ImageNet validation set proposed by (Naseer et al., 2022b). Inference and metrics: We randomly sample 8 frames from a given video for testing. We report drop in top-1 (%) accuracy on adversarial and clean videos. We report fooling rate (%) of adversarial samples for which the predicted label is \ufb02ipped w.r.t the original labels) to evaluate on image datasets. Extending image attacks to videos: We apply image attacks such as single step fast gradient sign method (FGSM) (Goodfellow et al., 2014) as well as iterative attacks with twenty iterations including PGD (Madry et al., 2018), MIM (Dong et al., 2018) and input diversity (augmentations to the inputs) (DIM) (Xie et al., 2019) attacks to adapted image models and transfer their adversarial perturbations to black-box video models. We follow the attack settings used by (Naseer et al., 2022b). Our proposed transformation (T (.)) allows to model temporal gradients during attack optimization (Fig. 3). We simply aggregates the gradients from both branches of our model to the input video frames (Fig. 1). 3.1 ADVERSARIAL TRANSFER FROM IMAGE-TO-VIDEO MODELS The generalization of image models on videos is discussed in Table 5. We observe that image solution remains preserved in ViT backbones even after training with our proposed transformation and temporal prompts. Deit models retain their top-1 (%) accuracy on ImageNet when measured through image class token, while also exhibiting decent performance on video datasets and rendered mulit-views of ModelNet40. CLIP achieves the best performance on videos. The transferability of image based attacks from our adapted image models to video and multi-view models is presented in Tables 1 and 2. Following insights emerges from our analysis: a) The attack success to video models 8 \fPublished as a conference paper at ICLR 2023 Figure 4: Adversaries (DIM (Xie et al., 2019)) from an ensemble of image models with different networks transfer well and fool the black-box video models. 
First row shows transferablity from an ensemble of different networks, while second row shows attack results from an ensemble of similar networks but pre-trained with different training schemes. ImageNet ImageNet and Videos ImageNet and ModelNet40 Models IN IN \u2013 UCF IN \u2013 HMDB IN \u2013 K400 IN \u2013 SSv2 IN \u2013 Depth IN \u2013 Shaded Deit-T 72.2 72.0 \u2013 70.0 72.3 \u2013 36.2 72.2 \u2013 44.6 72.2 \u2013 11.2 72.2 \u2013 86.0 72.1 \u2013 81.0 Deit-S 79.9 79.8 \u2013 77.2 79.8 \u2013 44.6 79.9 \u2013 53.0 79.9 \u2013 15.3 78.8 \u2013 86.6 79.8 \u2013 86.2 Deit-B 81.8 81.9 \u2013 81.4 81.7 \u2013 47.7 81.9 \u2013 57.0 81.8 \u2013 17.5 81.8 \u2013 90.1 81.4 \u2013 88.2 DINO NA NA \u2013 79.5 NA \u2013 45.1 NA \u2013 57.4 NA \u2013 17.4 NA \u2013 90.1 NA\u2013 89.8 CLIP NA NA \u2013 86.0 NA \u2013 54.6 NA \u2013 67.3 NA \u2013 19.9 NA \u2013 89.5 NA \u2013 88.9 Table 5: Spatial along with Temporal Solutions within Image Models: Our approach successfully incorporates temporal information into pre-trained and frozen image models and increases their generalization (top-1 (%) at resolution 224) to high-dimensional datasets. The original image solution remains preserved in our approach. Ours Deit-B retains 81.9% top-1 accuracy on ImageNet validation set while exhibiting 81.4% on UCF. increases with size of the image models. The larger the image model (e.g. Deit-T to DeiT-B) the higher the attack success, b) The adversarial perturbations generated using self-supervised DINO transfer better than CLIP or supervised Deit-B, c) Divided space-time attention is more robust than 3D convolution, and d) MVCNN trained on shaded rendered views is more robust than depth. 3.1.1 ENSEMBLE ADVERSARIAL TRANSFER Adversarial attacks optimized from an ensemble of models (Dong et al., 2018; Xie et al., 2019) or self-ensemble (Naseer et al., 2022b) increases transferability. We study three types of image ensembles including a) Ensemble of Different Networks: We transfer attack from three image models of the Deit family (Touvron et al., 2020) including Deit-T, Deit-S and Deit-B. These models differ in architecture, b) Ensemble of Different Training Frameworks: We transfer attack from three image models including Deit-B, DINO and CLIP. These models share similar architecture but differ in their training, and c) Ensemble of Different Spatial Resolutions: We transfer attack from three image models with the same architecture (Deit-B) but prompt adapted to different spatial resolutions. Ensemble of different training frameworks performs favorably to boost attack transferability (Fig. 4). An attacker can adapt our approach at different resolutions to enhance transferability (Table 4). We provide generalization of Deit-B at varying spatial scales for video datasets in Appendix I. 3.2 ADVERSARIAL TRANSFER FROM IMAGE-TO-IMAGE MODELS Image datasets are static but we mimic the dynamic behaviour by learning our proposed transformation at different spatial scales (Fig. 2). Our approach exhibit signi\ufb01cant gains in adversarial transferability (Table 3). Refer to appendices B-J for extensive analysis on cross-task adversarial transferability, textual bias of CLIP on adversarial transferability, visualization of attention roll-outs and latent embeddings, and effect of the number of temporal prompts on adversarial transferability. 
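For the ensemble transfer setting of Sec. 3.1.1 above, a minimal sketch is to average the attack objective over the adapted surrogates (different architectures, training frameworks, or spatial resolutions); `per_model_loss` stands in for the per-surrogate objective of Eq. (3) and is an assumption of this sketch.

```python
# Minimal sketch of an ensemble attack objective over several adapted surrogates.
def ensemble_attack_loss(surrogates, per_model_loss, x_adv, x_clean, y):
    """E.g. surrogates = [deit_b, dino, clip_vit]: same ViT architecture, different pre-training."""
    losses = [per_model_loss(model, x_adv, x_clean, y) for model in surrogates]
    return sum(losses) / len(losses)   # maximized w.r.t. x_adv by the outer attack loop
```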
4" + }, + { + "url": "http://arxiv.org/abs/2106.04169v3", + "title": "On Improving Adversarial Transferability of Vision Transformers", + "abstract": "Vision transformers (ViTs) process input images as sequences of patches via\nself-attention; a radically different architecture than convolutional neural\nnetworks (CNNs). This makes it interesting to study the adversarial feature\nspace of ViT models and their transferability. In particular, we observe that\nadversarial patterns found via conventional adversarial attacks show very\n\\emph{low} black-box transferability even for large ViT models. We show that\nthis phenomenon is only due to the sub-optimal attack procedures that do not\nleverage the true representation potential of ViTs. A deep ViT is composed of\nmultiple blocks, with a consistent architecture comprising of self-attention\nand feed-forward layers, where each block is capable of independently producing\na class token. Formulating an attack using only the last class token\n(conventional approach) does not directly leverage the discriminative\ninformation stored in the earlier tokens, leading to poor adversarial\ntransferability of ViTs. Using the compositional nature of ViT models, we\nenhance transferability of existing attacks by introducing two novel strategies\nspecific to the architecture of ViT models. (i) Self-Ensemble: We propose a\nmethod to find multiple discriminative pathways by dissecting a single ViT\nmodel into an ensemble of networks. This allows explicitly utilizing\nclass-specific information at each ViT block. (ii) Token Refinement: We then\npropose to refine the tokens to further enhance the discriminative capacity at\neach block of ViT. Our token refinement systematically combines the class\ntokens with structural information preserved within the patch tokens.", + "authors": "Muzammal Naseer, Kanchana Ranasinghe, Salman Khan, Fahad Shahbaz Khan, Fatih Porikli", + "published": "2021-06-08", + "updated": "2022-03-03", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "main_content": "INTRODUCTION Transformers compose a family of neural network architectures based on the self-attention mechanism, originally applied in natural language processing tasks achieving state-of-the-art performance (Vaswani et al., 2017; Devlin et al., 2018; Brown et al., 2020). The transformer design has been subsequently adopted for vision tasks (Dosovitskiy et al., 2020), giving rise to a number of successful vision transformer (ViT) models (Touvron et al., 2020; Yuan et al., 2021; Khan et al., 2021). Due to the lack of explicit inductive biases in their design, ViTs are inherently different from convolutional neural networks (CNNs) that encode biases e.g., spatial connectivity and translation equivariance. ViTs process an image as a sequence of patches which are re\ufb01ned through a series of self-attention mechanisms (transformer blocks), allowing the network to learn relationships between any individual parts of the input image. Such processing allows wide receptive \ufb01elds which can model global context as opposed to the limited receptive \ufb01elds of CNNs. These signi\ufb01cant differences between ViTs and CNNs give rise to a range of intriguing characteristics unique to ViTs (Caron et al., 2021; Tuli et al., 2021; Mao et al., 2021; Paul & Chen, 2021; Naseer et al., 2021b). Adversarial attacks pose a major hindrance to the successful deployment of deep neural networks in real-world applications. 
Recent success of ViTs means that adversarial properties of ViT models 1 arXiv:2106.04169v3 [cs.CV] 3 Mar 2022 \fFigure 1: Left: Conventional adversarial attacks view ViT as a single classi\ufb01er and maximize the prediction loss (e.g., cross entropy) to fool the model based on the last classi\ufb01cation token only. This leads to sub-optimal results as class tokens in previous ViT blocks only indirectly in\ufb02uence adversarial perturbations. In contrast, our approach (right) effectively utilizes the underlying ViT architecture to create a self-ensemble using class tokens produced by all blocks within ViT to design the adversarial attack. Our self-ensemble enables to use hierarchical discriminative information learned by all class tokens. Consequently, an attack based on our self-ensemble generates transferable adversaries that generalize well across different model types and vision tasks. also become an important research topic. A few recent works explore adversarial robustness of ViTs (Shao et al., 2021; Mahmood et al., 2021; Bhojanapalli et al., 2021) in different attack settings. Surprisingly, these works show that large ViT models exhibit lower transferability in black-box attack setting, despite their higher parameter capacity, stronger performance on clean images, and better generalization (Shao et al., 2021; Mahmood et al., 2021). This \ufb01nding seems to indicate that as ViT performance improves, its adversarial feature space gets weaker. In this work, we investigate whether the weak transferability of adversarial patterns from high-performing ViT models, as reported in recent works (Shao et al., 2021; Mahmood et al., 2021; Bhojanapalli et al., 2021), is a result of weak features or a weak attack. To this end, we introduce a highly transferable attack approach that augments the current adversarial attacks and increase their transferability from ViTs to the unknown models. Our proposed transferable attack leverages two key concepts, multiple discriminative pathways and token re\ufb01nement, which exploit unique characteristics of ViT models. Our approach is motivated by the modular nature of ViTs (Touvron et al., 2020; Yuan et al., 2021; Mao et al., 2021): they process a sequence of input image patches repeatedly using multiple multi-headed self-attention layers (transformer blocks) (Vaswani et al., 2017). We refer to the representation of patches at each transformer block as patch tokens. An additional randomly initialized vector (class token1) is also appended to the set of patch tokens along the network depth to distill discriminative information across patches. The collective set of tokens is passed through the multiple transformer blocks followed by passing of the class token through a linear classi\ufb01er (head) which is used to make the \ufb01nal prediction. The class token interacts with the patch tokens within each block and is trained gradually across the blocks until it is \ufb01nally utilized by the linear classi\ufb01er head to obtain class-speci\ufb01c logit values. The class token can be viewed as extracting information useful for the \ufb01nal prediction from the set of patch tokens at each block. Given the role of the class token in ViT models, we observe that class tokens can be extracted from the output of each block and each such token can be used to obtain a class-speci\ufb01c logit output using the \ufb01nal classi\ufb01er of a pretrained model. This leads us to the proposed self-ensemble of models within a single transformer (Fig. 1). 
We show that attacking such a self-ensemble (Sec. 3) containing multiple discriminative pathways signi\ufb01cantly improves adversarial transferability from ViT models, and in particular from the large ViTs. Going one step further, we study if the class information extracted from different intermediate ViT blocks (of the self-ensemble) can be enhanced to improve adversarial transferability. To this end, we introduce a novel token re\ufb01nement module directed at enhancing these multiple discriminative pathways. The token re\ufb01nement module strives to re\ufb01ne the information contained in the output of each transformer block (within a single ViT model) and aligns the class tokens produced by the intermediate blocks with the \ufb01nal classi\ufb01er in order to maximize the discriminative power of intermediate blocks. Our token re\ufb01nement exploits the structural information stored in the patch tokens and fuses it with the class token to maximize the discriminative performance of each block. Both the re\ufb01ned tokens and self-ensemble ideas are combined to design an adversarial attack that is shown to signi\ufb01cantly boost the transferability of adversarial examples, thereby bringing out the true 1Average of patch tokens can serve as a class token in our approach for ViT designs that do not use an explicit class token such as Swin transformer (Liu et al., 2021) or MLP-Mixer (Tolstikhin et al., 2021) 2 \fgeneralization of ViTs\u2019 adversarial space. Through our extensive experimentation, we empirically demonstrate favorable transfer rates across different model families (convolutional and transformer) as well as different vision tasks (classi\ufb01cation, detection and segmentation). 2 BACKGROUND AND RELATED WORK Adversarial Attack Modeling: Adversarial attack methods can be broadly categorized into two categories, white-box attacks and black-box attacks. While the white-box attack setting provides the attacker full access to the parameters of the target model, the black-box setting prevents the attacker from accessing the target model and is therefore a harder setting to study adversarial transferability. White-box Attack: Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2014) and Projected Gradient Descent (PGD) (Madry et al., 2018) are two initially proposed white-box attack methods. FGSM corrupts the clean image sample by taking a single step within a small distance (perturbation budget \u03f5) along the objective function\u2019s gradient direction. PGD corrupts the clean sample for multiple steps with a smaller step size, projecting the generated adversarial example onto the \u03f5-sphere around the clean sample after each step. Other state-of-the-art white-box attack methods include Jacobian-based saliency map attack (Papernot et al., 2016), Sparse attack (Modas et al., 2019), Onepixel attack (Su et al., 2019), Carlini and Wagner optimization (Carlini & Wagner, 2017), Elastic-net (Chen et al., 2018), Diversi\ufb01ed sampling (Tashiro et al., 2020), and more recently Auto-attack (Croce & Hein, 2020b). We apply white-box attacks on surrogate models to \ufb01nd perturbations that are then transferred to black-box target models. Black-box Attack and Transferability: Black-box attacks generally involve attacking a source model to craft adversarial signals which are then applied on the target models. 
While gradient estimation methods that estimate the gradients of the target model using black-box optimization methods such as Finite Differences (FD) (Chen et al., 2017; Bhagoji et al., 2018) or Natural Evolution Strategies (NES) (Ilyas et al., 2018; Jiang et al., 2019) exist, these methods are dependent on multiple queries to the target model which is not practical in most real-world scenarios. In the case of adversarial signal generation using source models, it is possible to directly adopt white-box methods. In our work, we adopt FGSM and PGD in such a manner. Methods like (Dong et al., 2018) incorporate a momentum term into the gradient to boost the transferability of existing white-box attacks, building attacks named MIM. In similar spirit, different directions are explored in literature to boost transferability of adversarial examples; a) Enhanced Momentum: Lin et al. (Lin et al., 2019) and Wang et al. (Wang & He, 2021) improve momentum by using Nesterov momentum and variance tuning respectively during attack iterations, b) Augmentations: Xie et al. (Xie et al., 2019) showed that applying differentiable stochastic transformations can bring diversity to the gradients and improve transferability of the existing attacks, c) Exploiting Features: Multiple suggestions are proposed in the literature to leverage the feature space for adversarial attack as well. For example, Zhou et al. (Zhou et al., 2018) incorporate the feature distortion loss during optimization. Similarly, (Inkawhich et al., 2020b;a; Huang et al., 2019) also exploit intermediate layers to enhance transferability. However, combining the intermediate feature response with \ufb01nal classi\ufb01cation loss is non-trivial as it might require optimization to \ufb01nd the best performing layers (Inkawhich et al., 2020b;a), and d) Generative Approach: Orthogonal to iterative attacks, generative methods (Poursaeed et al., 2018; Naseer et al., 2019; 2021a) train an autoencoder against the white-box model. In particular, Naseer et al. show that transferability of an adversarial generator can be increased with relativistic cross-entropy (Naseer et al., 2019) and augmentations (Naseer et al., 2021a). Ours is the \ufb01rst work to address limited transferability of ViT models. The Role of Network Architecture: Recent works exploit architectural characteristics of networks to improve the transferability of attacks. While Wu et al. (2020) exploit skip connections of models like ResNets and DenseNets to improve black-box attacks, Guo et al. (2020) build on similar ideas focused on the linearity of models. Our work similarly focuses on unique architectural characteristics of ViT models to generate more transferable adversarial perturbations with the existing white-box attacks. Robustness of ViTs: Adversarial attacks on ViT models are relatively unexplored. Shao et al. (2021) and Bhojanapalli et al. (2021) investigate adversarial attacks and robustness of ViT models studying various white-box and black-box attack techniques. The transferability of perturbations from ViT models is thoroughly explored in (Mahmood et al., 2021) and they conclude that ViT 3 \fFigure 2: Adversarial examples for ViTs have only moderate transferability. In fact transferabililty (%) of MIM (Dong et al., 2018) perturbations to target models goes down as the source model size increases such as from DeiT-T (Touvron et al., 2020) (5M parameters) to DeiT-B (Touvron et al., 2020) (86M parameters). 
However, the performance of the attack improves signi\ufb01cantly when applied on our proposed ensemble of classi\ufb01ers found within a ViT (MIME & MIMRE). models do not transfer well to CNNs, whereas we propose a methodology to solve this shortcoming. Moreover, Mahmood et al. (2021) explores the idea of an ensemble of CNN and ViT models to improve the transferability of attacks. Our proposed ensemble approach explores a different direction by converting a single ViT model into a collection of models (self-ensemble) to improve attack transferability. In essence, our proposed method can be integrated with existing attack approaches to take full advantage of the ViTs\u2019 learned features and generate transferable adversaries. 3 ENHANCING ADVERSARIAL TRANSFERABILITY OF VITS Preliminaries: Given a clean input image sample x with a label y, a source ViT model F and a target model M which is under-attack, the goal of an adversarial attack is generating an adversarial signal, x\u2032, using the information encoded within F, which can potentially change the target network\u2019s prediction (M(x\u2032)argmax \u0338= y). A set of boundary conditions are also imposed on the adversarial signal to control the level of distortion in relation to the original sample, i.e., \u2225x \u2212x\u2032\u2225p < \u03f5, for a small perturbation budget \u03f5 and a p-norm, often set to in\ufb01nity norm (\u2113\u221e). Motivation: The recent \ufb01ndings (Shao et al., 2021; Mahmood et al., 2021) demonstrate low black-box transferability of ViTs despite their higher parametric complexity and better feature generalization. Motivated by this behaviour, we set-up a simple experiment of our own to study the adversarial transferability of ViTs (see Fig. 2). We note that transferability of adversarial examples found via momentum iterative fast gradient sign method (Dong et al., 2018) (MIM) at \u2113\u221e\u226416 on DeiT (Touvron et al., 2020) does not increase with model capacity. In fact, adversarial transferability from DeiT base model (DeiT-B) on ResNet152 and large vision transformer (ViT-L (Dosovitskiy et al., 2020)) is lower than DeiT tiny model (DeiT-T). This is besides the fact that DeiT-B has richer representations and around 17\u00d7 more parameters than DeiT-T. We investigate if this behavior is inherent to ViTs or merely due to a sub-optimal attack mechanism. To this end, we exploit unique architectural characteristics of ViTs to \ufb01rst \ufb01nd an ensemble of networks within a single pretrained ViT model (self-ensemble, right Fig. 1). The class token produced by each self-attention block is processed by the \ufb01nal local norm and classi\ufb01cation MLP-head to re\ufb01ne class-speci\ufb01c information (Fig. 2). In other words, our MIME and MIMRE variants attack class information stored in the class tokens produced by all the self-attention blocks within the model and optimize for the adversarial example (Sec. 3.1 and 3.2). Exploring the adversarial space of such multiple discriminative pathways in a self-ensemble generates highly transferable adversarial examples, as we show next. 3.1 SELF-ENSEMBLE: DISCRIMINATIVE PATHWAYS OF VISION TRANSFORMER A ViT model (Dosovitskiy et al., 2020; Touvron et al., 2020), F, with n transformer blocks can be de\ufb01ned as F = (f1 \u25e6f2 \u25e6f3 \u25e6. . . fn) \u25e6g, where fi represents a single ViT block comprising of multi-head self-attention and feed-forward layers and g is the \ufb01nal classi\ufb01cation head. 
To avoid notation clutter, we assume that g consists of the \ufb01nal local norm and MLP-head (Touvron et al., 2020; Dosovitskiy et al., 2020). Self-attention layer within the vision transformer model takes a sequence of m image patches as input and outputs the processed patches. We will refer to the representations associated with the sequence of image patches as patch tokens, Pt \u2208Rm\u00d7d (where d is the dimensionality of each patch representation). Attention in ViT layers is driven by minimizing the empirical risk during training. In the case of classi\ufb01cation, patch tokens are further appended with the class token (Qt \u2208R1\u00d7d). These patch and class tokens are re\ufb01ned across multiple blocks (fi) and attention in these layers is guided such that the most discriminative information from patch tokens is preserved within the class token. The \ufb01nal class token is then projected to the number of classes by the classi\ufb01er, g. Due to the availability of class token at each transformer block, we can create an 4 \fFigure 3: Distribution of discriminative information across blocks of DeiT models. Note how multiple intermediate blocks contain features with considerable discriminative information as measured by top-1 accuracy on the ImageNet val. set. These are standard models pretrained on ImageNet with no further training. Each block (x-axis) corresponds to a classi\ufb01er Fk as de\ufb01ned in Equation 1. ensemble of classi\ufb01ers by learning a shared classi\ufb01cation head at each block along the ViT hierarchy. This provides us an ensemble of n classi\ufb01ers from a single ViT, termed as the self-ensemble: Fk = k Y i=1 fi ! \u25e6g, where k = 1, 2, . . . , n. (1) We note that the multiple classi\ufb01ers thus formed hold signi\ufb01cant discriminative information. This is validated by studying the classi\ufb01cation performance of each classi\ufb01er (Eq. 1) in terms of top-1 (%) accuracy on ImageNet validation set, as demonstrated in Fig. 3. Note that multiple intermediate layers perform well on the task, especially towards the end of the ViT processing hierarchy. For an input image x with label y, an adversarial attack can now be optimized for the ViT\u2019s selfensemble by maximizing the loss at each ViT block. However, we observe that initial blocks (1-6) for all considered DeiT models do not contain useful discriminative information as their classi\ufb01cation accuracy is almost zero (Fig. 3). During the training of ViT models (Touvron et al., 2020; Yuan et al., 2021; Mao et al., 2021), parameters are updated based on the last class token only, which means that the intermediate tokens are not directly aligned with the \ufb01nal classi\ufb01cation head, g in our self-ensemble approach (Fig. 3) leading to a moderate classi\ufb01cation performance. To resolve this, we introduce a token re\ufb01nement strategy to align the class tokens with the \ufb01nal classi\ufb01er, g, and boost their discriminative ability, which in turn helps improve attack transferability. 3.2 TOKEN REFINEMENT As mentioned above, the multiple discriminative pathways within a ViT give rise to an ensemble of classi\ufb01ers (Eq. 1). However, the class token produced by each attention layer is being processed by the \ufb01nal classi\ufb01er, g. This puts an upper bound on classi\ufb01cation accuracy for each token which is lower than or equal to the accuracy of the \ufb01nal class token. 
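For concreteness, a minimal sketch of the per-block classifiers F_k of Eq. (1) is given below, assuming a timm-style DeiT/ViT that exposes `patch_embed`, `cls_token`, `pos_embed`, `blocks`, `norm` and `head`; these attribute names are an assumption of the sketch.

```python
# Minimal sketch of the self-ensemble: every block's class token is routed through the
# shared final norm and classifier head, yielding one classifier per block (F_1 ... F_n).
import torch

def self_ensemble_logits(vit, x):
    tokens = vit.patch_embed(x)                               # (B, N, D) patch tokens
    cls = vit.cls_token.expand(x.size(0), -1, -1)             # (B, 1, D) class token
    tokens = torch.cat([cls, tokens], dim=1) + vit.pos_embed  # add positional embedding
    logits_per_block = []
    for block in vit.blocks:
        tokens = block(tokens)
        cls_k = vit.norm(tokens)[:, 0]                        # class token after block k
        logits_per_block.append(vit.head(cls_k))              # shared classifier head g
    return logits_per_block                                   # gradients flow, so usable in attacks
```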
Our objective is to push the accuracy of the class tokens in intermediate blocks towards the upper bound as de\ufb01ned by the last token. For this purpose, we introduced a token re\ufb01nement module to \ufb01ne-tune the class tokens. Our proposed token re\ufb01nement module is illustrated in Fig. 4. It acts as an intermediate layer inserted between the outputs of each block (after the shared norm layer) and the shared classi\ufb01er head. Revisiting our baseline ensemble method (Fig. 1), we note that the shared classi\ufb01er head contains weights directly trained only on the outputs of the last transformer block. While the class tokens of previous layers may be indirectly optimized to align with the \ufb01nal classi\ufb01er, there exists a potential for misalignment of these features with the classi\ufb01er: the pretrained classi\ufb01er (containing weights compatible with the last layer class token) may not extract all the useful information from the previous layers. Our proposed module aims to solve this misalignment by re\ufb01ning the class tokens in a way such that the shared (pretrained) classi\ufb01er head is able to extract all discriminative information Figure 4: Recent ViTs process 196 image patches, leading to 196 patch tokens. We rearranged these to create a 14x14 feature grid which is processed by a convolutional block to extract structural information, followed by average pooling to create a single patch token. Class token is re\ufb01ned via a MLP layer before feeding to the classi\ufb01er. Both tokens are subsequently merged. 5 \fFigure 5: Self-Ensemble for DeiT (Touvron et al., 2020): We measure the top-1 accuracy on ImageNet using the class-token of each block and compare to our re\ufb01ned tokens. These results show that \ufb01ne-tuning helps align tokens from intermediate blocks with the \ufb01nal classi\ufb01er enhancing their classi\ufb01cation performance. Thus token re\ufb01nement leads to strengthened discriminative pathways allowing more transferable adversaries. contained within the class tokens of each block. Moreover, intermediate patch tokens may contain additional information that is not at all utilized by the class tokens of those blocks, which would also be addressed by our proposed block. Therefore, we extract both patch tokens and the class token from each block and process them for re\ufb01nement, as explained next. \u2212Patch Token Re\ufb01nement: One of the inputs to the token re\ufb01nement module is the set of patch tokens output from each block. We \ufb01rst rearrange these patch tokens to regain their spatial relationships. The aim of this component within the re\ufb01nement module is to extract information relevant to spatial structure contained within the intermediate patch tokens. We believe that signi\ufb01cant discriminative information is contained within these patches. The obtained rearranged patch tokens are passed through a convolution block (standard ResNet block containing a skip connection) to obtain a spatially aware feature map, which is then average pooled to obtain a single feature vector (of same dimension as the class token). This feature vector is expected to extract all spatial information from patch tokens. \u2212Class Token Re\ufb01nement: By re\ufb01ning the class tokens of each block, we aim to remove any misalignment between the existing class tokens and the shared (pretrained) classi\ufb01er head. Also, given how the class token does not contain a spatial structure, we simply use a linear layer to re\ufb01ne it. 
We hypothesize that re\ufb01ned class token at each block would be much more aligned with the shared classi\ufb01er head allowing it to extract all discriminative information contained within those tokens. \u2212Merging Patch and Class Token: We obtain the re\ufb01ned class token and the patch feature vector (re\ufb01ned output of patch tokens) and sum them together to obtain a merged token. While we tested multiple approaches for merging, simply summing them proved suf\ufb01cient. \u2212Training: Given a ViT model containing k transformer blocks, we plugin k instances of our token re\ufb01nement module to the output of each block as illustrated in Figure 4. We obtain the pretrained model, freeze all existing weights, and train only the k token re\ufb01nement modules for only a single epoch on ImageNet training set. We used SGD optimizer with learning rate set to 0.001. Training \ufb01nishes in less than one day on a single GPU-V100 even for a large ViT model such as DeiT-B. As expected, the trained token re\ufb01nement module leads to increased discriminability of the class tokens, which we illustrate in Figure 5. Note how this leads to signi\ufb01cant boosting of discriminative power especially in the earlier blocks, solving the misalignment problem. We build on this enhanced discriminability of the ensemble members towards better transferability, as explained next. 3.3 ADVERSARIAL TRANSFER Our modi\ufb01cations to ViT models with respect to multiple discriminative pathways and token re\ufb01nement are exploited in relation to adversarial transfer. We consider black-box attack perturbations that are generated using a source (surrogate) ViT model. The source model is only pretrained on ImageNet, modi\ufb01ed according to our proposed approach and is subsequently \ufb01ne-tuned to update only the token re\ufb01nement module for a single epoch. We experiment with multiple white-box attacks, generating the adversarial examples using a joint loss over the outputs of each block. The transferability of adversarial examples is tested on a range of CNN and ViT models. Given input sample x and its label y, the adversarial object for our self-ensemble (Eq. 
1) for the untargeted attack is de\ufb01ned as, 6 \fFast Gradient Sign Method (FGSM) (Goodfellow et al., 2014) Convolutional Transformers Source (\u2193) Attack BiT50 Res152 WRN DN201 ViT-L T2T-24 TnT ViT-S T2T-7 VGG19bn FGSM 23.34 28.56 33.92 33.22 13.18 10.78 12.96 25.08 29.90 MNAS FGSM 23.16 39.82 40.10 44.34 16.60 22.56 25.82 34.10 48.96 Deit-T FGSM 29.74 37.10 38.86 42.40 44.38 35.42 50.58 73.32 57.62 FGSME 30.34 39.60 41.42 45.58 48.34 35.08 51.00 80.74 62.82 FGSMRE 30.18(+0.44) 39.82(+2.7) 41.26(+2.4) 46.06(+3.7) 46.76(+2.4) 32.68(-2.7) 48.00(-2.6) 80.10(+6.8) 63.90(+6.3) Deit-S FGSM 25.44 31.04 33.58 36.28 36.40 33.41 41.00 58.78 43.48 FGSME 30.82 38.38 41.06 46.00 47.20 39.00 51.44 78.90 56.70 FGSMRE 34.84(+9.4) 43.86(+12.8) 46.26(+12.7) 51.88(+15.6) 47.92(+11.5) 39.86(+6.5) 55.7(+14.7) 82.00(+23.2) 66.20(+22.7) Deit-B FGSM 22.54 31.58 33.86 34.96 30.50 27.84 33.08 50.24 40.50 FGSME 31.12 41.46 43.02 47.12 42.28 35.40 46.22 73.04 57.32 FGSMRE 35.12(+12.6) 45.74(+14.2) 48.46(+14.6) 52.64(+17.7) 41.68(+11.2) 36.60(+8.8) 49.60(+16.5) 74.40(+24.2) 65.92(+25.4) Projected Gradient Decent (PGD) (Madry et al., 2018) VGG19bn PGD 19.80 28.56 33.92 33.22 5.94 10.78 12.96 13.08 29.90 MNAS PGD 19.44 36.28 36.22 40.20 8.04 18.04 21.16 19.60 41.70 Deit-T PGD 14.22 23.98 24.16 26.76 35.70 21.54 44.24 86.86 53.74 PGDE 14.42 24.58 25.46 28.38 39.84 21.86 45.08 88.44 53.80 PGDRE 22.46(+8.24) 34.64(+10.7) 37.62(+13.5) 40.56(+13.8) 58.60(+22.9) 26.58(+5.0) 55.52(+11.3) 96.34(+9.5) 66.68(+12.9) Deit-S PGD 18.78 24.96 26.38 30.38 37.84 33.46 60.62 84.38 47.14 PGDE 18.98 27.72 29.54 32.90 44.30 35.40 64.76 89.82 52.76 PGDRE 28.96(+10.2) 38.92(+14.0) 42.84(+16.5) 46.82(+16.4) 60.86(+23.0) 40.30(+6.8) 76.10(+15.5) 97.32(+12.9) 71.54(+24.4) Deit-B PGD 18.68 25.56 27.90 30.24 34.08 31.98 52.76 69.82 39.80 PGDE 23.64 32.84 35.40 38.66 43.56 37.82 64.20 82.32 51.68 PGDRE 37.92(+19.2) 49.10(+23.5) 53.38(+25.5) 56.96(+26.7) 56.90(+22.8) 45.70(+13.7) 79.56(+26.8) 94.10(+24.3) 74.78(+35.0) Table 1: Fool rate (%) on 5k ImageNet val. adversarial samples at \u03f5 \u226416. Perturbations generated from our proposed self-ensemble with re\ufb01ned tokens from a vision transformer have signi\ufb01cantly higher success rate. max x\u2032 k X i=1 [ [Fk(x\u2032)argmax \u0338= y] ], s.t. \u2225x \u2212x\u2032\u2225p \u2264\u03f5, k \u2208{1, 2, . . . , n} (2) where [ [\u00b7] ] is an indicator function. In the case of target attack, the attacker optimizes the above objective towards a speci\ufb01c target class instead of an arbitrary misclassi\ufb01cation. 4 EXPERIMENTS We conduct thorough experimentation on a range of standard attack methods to establish the performance boosts obtained through our proposed transferability approach. We create \u2113\u221eadversarial attacks with \u03f5 \u226416 and observe their transferability by using the following protocols: Source (white-box) models: We mainly study three vision transformers from DeiT (Touvron et al., 2020) family due to their data ef\ufb01ciency. Speci\ufb01cally, the source models are Deit-T, DeiT-S, and DeiT-B (with 5, 22, and 86 million parameters, respectively). They are trained without CNN distillation. Adversarial examples are created on these models using an existing white-box attack (e.g., FGSM (Goodfellow et al., 2014), PGD (Madry et al., 2018) and MIM (Dong et al., 2018)) and then transferred to the black-box target models. 
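To make the joint multi-exit objective of Eq. 2 concrete, the following is a hedged PGD-style sketch of the self-ensemble attack on the source model. It assumes a hypothetical callable `model_blocks(x)` that returns a list of logits, one per block, obtained by passing each (refined) class token through the shared classifier head; cross-entropy summed over all exits is used as a differentiable surrogate for the indicator-based objective, and the perturbation is projected back into the l_inf ball after each step.

```python
import torch
import torch.nn.functional as F

def self_ensemble_pgd(model_blocks, x, y, eps=16/255, alpha=2/255, steps=10):
    """Untargeted PGD over a self-ensemble of per-block exits (illustrative helper)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits_per_block = model_blocks(x_adv)            # list of (B, num_classes) tensors
        # differentiable surrogate of Eq. 2: maximize cross-entropy summed over all exits
        loss = sum(F.cross_entropy(z, y) for z in logits_per_block)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()           # gradient ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)      # project into the l_inf ball
            x_adv = x_adv.clamp(0, 1)                     # keep a valid image
    return x_adv.detach()
```

The same structure covers the single-step FGSM case (steps=1 with step size eps) and extends to MIM/DIM by accumulating gradient momentum and applying random input transformations before each forward pass.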
Target (black-box) models: We test the black-box transferability across several vision tasks including classi\ufb01cation, detection and segmentation. We consider convolutional networks including BiT-ResNet50 (BiT50) (Beyer et al., 2021), ResNet152 (Res152) (He et al., 2016), Wide-ResNet-50-2 (WRN) (Zagoruyko & Komodakis, 2016), DenseNet201 (DN201) (Huang et al., 2017) and other ViT models including Token-to-Token transformer (T2T) (Yuan et al., 2021), Transformer in Transformer (TnT) (Mao et al., 2021), DINO (Caron et al., 2021), and Detection Transformer (DETR) (Carion et al., 2020) as the black-box target models. Datasets: We use ImageNet training set to \ufb01ne tune our proposed token re\ufb01nement modules. For evaluating robustness, we selected 5k samples from ImageNet validation set such that 5 random samples from each class that are correctly classi\ufb01ed by ResNet50 and ViT small (ViT-S) (Dosovitskiy et al., 2020) are present. In addition, we conduct experiments on COCO (Lin et al., 2014) (5k images) and PASCAL-VOC12 (Everingham et al., 2012) (around 1.2k images) validation set. 7 \fMomemtum Iterative Fast Gradient Sign Method (MIM) (Dong et al., 2018) Convolutional Transformers Source (\u2193) Attack BiT50 Res152 WRN DN201 ViT-L T2T-24 TnT ViT-S T2T-7 VGG19bn MIM 36.18 46.98 54.04 57.32 12.80 21.84 25.72 28.44 47.74 MNAS MIM 34.78 54.34 55.40 64.06 18.88 34.54 38.70 40.58 60.02 Deit-T MIM 36.22 45.56 47.86 53.26 63.84 48.44 72.52 96.44 77.66 MIME 34.92 45.58 47.98 54.50 67.16 46.38 71.02 97.74 78.02 MIMRE 42.04(+5.8) 54.02(+8.5) 58.48(+10.6) 63.00(+9.7) 79.12(+15.3) 49.86(+1.4) 77.80(+5.3) 99.14(+2.7) 85.50(+7.8) Deit-S MIM 38.32 45.06 47.90 52.66 63.38 58.86 79.56 94.22 68.00 MIME 40.66 49.52 52.98 58.40 71.78 61.06 84.42 98.12 74.58 MIMRE 53.70(+15.4) 61.72(+16.7) 65.10(+17.2) 71.74(+19.1) 84.30(+20.9) 66.32(+7.5) 92.02(+12.5) 99.42(+5.2) 89.08(+21.1) Deit-B MIM 36.98 44.66 47.98 52.14 57.48 54.40 70.84 84.74 59.34 MIME 45.30 54.30 58.34 63.32 70.42 61.84 82.80 94.46 73.66 MIMRE 61.58(+24.6) 70.18(+25.5) 74.08(+26.1) 79.12(+27.0) 81.28(+23.8) 69.6(+15.2) 92.20(+21.4) 94.10(+9.4) 89.72(+30.4) MIM with Input Diversity (DIM) (Xie et al., 2019) VGG19bn DIM 46.90 62.08 68.30 73.48 16.86 30.16 34.70 35.42 58.62 MNAS DIM 43.74 62.08 68.30 73.48 25.06 42.92 47.24 52.74 71.98 Deit-T DIM 57.56 68.30 70.06 77.18 62.00 70.16 82.68 89.16 86.18 DIME 60.14 70.06 69.84 78.00 66.38 72.30 85.98 93.72 90.78 DIMRE 62.10(+4.5) 70.78(+2.5) 70.78(+0.7) 78.40(+1.2) 67.58(+5.6) 68.56(-1.6) 84.18(+1.5) 93.36(+4.2) 91.52(+5.3) Deit-S DIM 59.00 62.12 63.42 67.30 62.62 73.84 79.50 82.32 74.20 DIME 68.82 74.44 75.34 80.14 76.22 84.10 91.92 94.92 88.42 DIMRE 76.14(+17.1) 81.30(+19.18) 82.64(+19.22) 86.98(+19.68) 78.88(+16.3) 85.26(+11.4) 93.22(+13.7) 96.56(+14.2) 93.60(+19.4) Deit-B DIM 56.24 59.14 60.64 64.44 61.38 69.54 73.96 76.32 64.44 DIME 73.04 78.36 80.28 83.70 79.06 85.10 91.84 94.38 86.96 DIMRE 80.10(+23.9) 84.92(+25.8) 86.36(+25.7) 89.24(+24.8) 78.90(+17.5) 84.00(+14.5) 92.28(+18.3) 95.26(+18.9) 93.42(+28.9) Table 2: Fool rate (%) on 5k ImageNet val. adversarial samples at \u03f5 \u226416. Perturbations generated from our proposed self-ensemble with re\ufb01ned tokens from a vision transformer have signi\ufb01cantly higher success rate. Evaluation Metrics: We report fooling rate (percentage of samples for which the predicted label is \ufb02ipped after adding adversarial perturbations) to evaluate classi\ufb01cation. 
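The classification metric just defined reduces to a few lines; the helper below is illustrative and assumes the target model returns logits.

```python
import torch

@torch.no_grad()
def fool_rate(model, clean, adv):
    """Percentage of samples whose predicted label flips after adding the perturbation."""
    pred_clean = model(clean).argmax(dim=1)
    pred_adv = model(adv).argmax(dim=1)
    return (pred_clean != pred_adv).float().mean().item() * 100.0
```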
In the case of object detection, we report the decrease in mean average precision (mAP) and for automatic segmentation, we use the popular Jaccard Index. Given the pixel masks for the prediction and the ground-truth, it calculates the ratio between the pixels belonging to intersection and the union of both masks. Baseline Attacks: We show consistent improvements for single step fast gradient sign method (FGSM) (Goodfellow et al., 2014) as well as iterative attacks including PGD (Madry et al., 2018), MIM (Dong et al., 2018) and input diversity (transformation to the inputs) (DIM) (Xie et al., 2019) attacks. Iterative attacks ran for 10 iterations and we set transformation probability for DIM to default 0.7 (Xie et al., 2019). Our approach is not limited to speci\ufb01c attack settings, but existing attacks can simply be adopted to our self-ensemble ViTs with re\ufb01ned tokens. Refer to appendices A-J for extensive analysis with more ViT designs, attacks, datasets (CIFAR10 & Flowers), computational cost comparison, and latent space visualization of our re\ufb01ned token. 4.1 CLASSIFICATION In this section, we discuss the experimental results on adversarial transferability across black-box classi\ufb01cation models. For a given attack method \u2018Attack\u2019, we refer \u2018AttackE\u2019 and \u2018AttackRE\u2019 as self-ensemble and self-ensemble with re\ufb01ned tokens, respectively, which are the two variants of our approach. We observe that adversarial transferability from ViT models to CNNs is only moderate for conventional attacks (Tables 1 & 2). For example, perturbations found via iterative attacks from DeiT-B to Res152 has even lower transfer than VGG19bn. However, the same attacks when applied using our proposed ensemble strategy (Eq. 1) with re\ufb01ned tokens consistently showed improved transferability to other convolutional as well as transformer based models. We observe that models without inductive biases that share architecture similarities show higher transfer rate of adversarial perturbations among them (e.g., from DeiT to ViT (Dosovitskiy et al., 2020)). We further observe that models trained with the same mechanism but lower parameters are more vulnerable to black-box attacks. For example, ViT-S and T2T-T are more vulnerable than their larger counterparts, ViT-L and T2T-24. Also models trained with better strategies that lead to higher generalizability are less vulnerable to black-box attacks e.g., BiT50 is more robust than ResNet152 (Tables 1 and 2). 8 \fFigure 6: Ablative Study: Fooling rate of intermediate layers under MIM (white-box) attack using our self-ensemble approach. We obtain favorable improvements for our method. Source (\u2192) DeiT-T DeiT-S DeiT-B No Attack MIM MIMRE MIM MIMRE MIM MIMRE 38.5 24.0 19.7 23.7 19.0 22.9 16.9 DIM DIMRE DIM DIMRE DIM DIMRE 20.5 13.7 20.3 12.0 19.9 11.1 Table 3: Cross-Task Transferability (classi\ufb01cation\u2192detection) Object Detector DETR (Carion et al., 2020) is fooled. mAP at [0.5:0.95] IOU on COCO val. set. Our self-ensemble approach with re\ufb01ned token (RE) signi\ufb01cantly improves cross-task transferability. (lower the better) Source (\u2192) DeiT-T DeiT-S DeiT-B No Attack MIM MIMRE MIM MIMRE MIM MIMRE 42.7 32.5 31.6 32.5 31.0 32.6 30.6 DIM DIMRE DIM DIMRE DIM DIMRE 31.9 31.4 31.7 31.3 32.0 31.0 Table 4: Cross-Task Transferability (classi\ufb01cation\u2192segmentation) DINO (Caron et al., 2021) is fooled. Jaccard index metric is used to evaluate segmentation performance. 
Best adversarial transfer results are achieved using our method. (lower the better) Clean Image Adv Image Clean Image Adv Image Clean Image Adv Image Figure 7: Visualization of DETR failure cases for our proposed DIMRE attack generated from DeiT-S source model. (best viewed in zoom) The strength of our method is also evident by blockwise fooling rate in white-box setting (Fig. 6). It is noteworthy how MIM fails to fool the initial blocks of ViT, while our approach allows the attack to be as effective in the intermediate blocks as for the last class token. This ultimately allows us to fully exploit ViT\u2019s adversarial space leading to high transfer rates for adversarial perturbations. 4.2 CROSS-TASK TRANSFERABILITY Self-attention is the core component of transformer architecture regardless of the task; classi\ufb01cation (Dosovitskiy et al., 2020; Touvron et al., 2020; Yuan et al., 2021; Mao et al., 2021), object detection (Carion et al., 2020), or unsupervised segmentation (Caron et al., 2021). We explore the effectiveness of our proposed method on two additional tasks: object detection (DETR) (Carion et al., 2020) and segmentation (DINO) (Caron et al., 2021). We select these methods considering the use of transformer modules employing the self-attention mechanism within their architectures. While the task of object detection contains multiple labels per image and involves bounding box regression, the unsupervised model DINO is trained in a self-supervised manner with no traditional image-level labels. Moreover, DINO uses attention maps of a ViT model to generate pixel-level segmentations, which means adversaries must disrupt the entire attention mechanism to degrade its performance. We generate adversarial signals on source models with a classi\ufb01cation objective using their initial predictions as the label. In evaluating attacks on detection and segmentation tasks at their optimal setting, the source ViT need to process images of different sizes (e.g., over 896\u00d7896 pix for DETR). To cater for this, we process images in parts (refer appendix G) which allows generation of stronger adversaries. The performance degradation of DETR and DINO on generated adversaries are summarised in Tables 3 & 4. For DETR, we obtain clear improvements. In the more robust DINO model, our transferability increases well with the source model capacity as compared to the baseline. 5" + }, + { + "url": "http://arxiv.org/abs/2105.10497v3", + "title": "Intriguing Properties of Vision Transformers", + "abstract": "Vision transformers (ViT) have demonstrated impressive performance across\nvarious machine vision problems. These models are based on multi-head\nself-attention mechanisms that can flexibly attend to a sequence of image\npatches to encode contextual cues. An important question is how such\nflexibility in attending image-wide context conditioned on a given patch can\nfacilitate handling nuisances in natural images e.g., severe occlusions, domain\nshifts, spatial permutations, adversarial and natural perturbations. We\nsystematically study this question via an extensive set of experiments\nencompassing three ViT families and comparisons with a high-performing\nconvolutional neural network (CNN). 
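For reference, the Jaccard index used above to score segmentation quality under attack can be computed as below; treating both the prediction and the ground truth as binary masks is an assumption about the evaluation format.

```python
import torch

@torch.no_grad()
def jaccard_index(pred_mask, gt_mask):
    """Intersection-over-union between binary prediction and ground-truth masks."""
    pred, gt = pred_mask.bool(), gt_mask.bool()
    inter = (pred & gt).float().sum()
    union = (pred | gt).float().sum()
    return (inter / union.clamp(min=1)).item()
```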
We show and analyze the following\nintriguing properties of ViT: (a) Transformers are highly robust to severe\nocclusions, perturbations and domain shifts, e.g., retain as high as 60% top-1\naccuracy on ImageNet even after randomly occluding 80% of the image content.\n(b) The robust performance to occlusions is not due to a bias towards local\ntextures, and ViTs are significantly less biased towards textures compared to\nCNNs. When properly trained to encode shape-based features, ViTs demonstrate\nshape recognition capability comparable to that of human visual system,\npreviously unmatched in the literature. (c) Using ViTs to encode shape\nrepresentation leads to an interesting consequence of accurate semantic\nsegmentation without pixel-level supervision. (d) Off-the-shelf features from a\nsingle ViT model can be combined to create a feature ensemble, leading to high\naccuracy rates across a range of classification datasets in both traditional\nand few-shot learning paradigms. We show effective features of ViTs are due to\nflexible and dynamic receptive fields possible via the self-attention\nmechanism.", + "authors": "Muzammal Naseer, Kanchana Ranasinghe, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang", + "published": "2021-05-21", + "updated": "2021-11-25", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "main_content": "Introduction As visual transformers (ViT) attract more interest [1], it becomes highly pertinent to study characteristics of their learned representations. Speci\ufb01cally, from the perspective of safety-critical applications such as autonomous cars, robots and healthcare; the learned representations need to be robust and generalizable. In this paper, we compare the performance of transformers with convolutional neural networks (CNNs) for handling nuisances (e.g., occlusions, distributional shifts, adversarial and natural perturbations) and generalization across different data distributions. Our in-depth analysis is based on three transformer families, ViT [2], DeiT [3] and T2T [4] across \ufb01fteen vision datasets. For brevity, we refer to all the transformer families as ViT, unless otherwise mentioned. We are intrigued by the fundamental differences in the operation of convolution and self-attention, that have not been extensively explored in the context of robustness and generalization. While convolutions excel at learning local interactions between elements in the input domain (e.g., edges and contour information), self-attention has been shown to effectively learn global interactions (e.g., 35th Conference on Neural Information Processing Systems (NeurIPS 2021). arXiv:2105.10497v3 [cs.CV] 25 Nov 2021 \fFigure 1: We show intriguing properties of ViT including impressive robustness to (a) severe occlusions, (b) distributional shifts (e.g., stylization to remove texture cues), (c) adversarial perturbations, and (d) patch permutations. Furthermore, our ViT models trained to focus on shape cues can segment foregrounds without any pixel-level supervision (e). Finally, off-the-shelf features from ViT models generalize better than CNNs (f). relations between distant object parts) [5, 6]. Given a query embedding, self-attention \ufb01nds its interactions with the other embeddings in the sequence, thereby conditioning on the local content while modeling global relationships [7]. In contrast, convolutions are content-independent as the same \ufb01lter weights are applied to all inputs regardless of their distinct nature. 
Given the content-dependent long-range interaction modeling capabilities, our analysis shows that ViTs can \ufb02exibly adjust their receptive \ufb01eld to cope with nuisances in data and enhance expressivity of the representations. Our systematic experiments and novel design choices lead to the following interesting \ufb01ndings: \u2022 ViTs demonstrate strong robustness against severe occlusions for foreground objects, non-salient background regions and random patch locations, when compared with state-of-the-art CNNs. For instance, with a signi\ufb01cant random occlusion of up to 80%, DeiT [3] can maintain top-1 accuracy up to \u223c60% where CNN has zero accuracy, on ImageNet [8] val. set. \u2022 When presented with texture and shape of the same object, CNN models often make decisions based on texture [9]. In contrast, ViTs perform better than CNNs and comparable to humans on shape recognition. This highlights robustness of ViTs to deal with signi\ufb01cant distribution shifts e.g., recognizing object shapes in less textured data such as paintings. \u2022 Compared to CNNs, ViTs show better robustness against other nuisance factors such as spatial patch-level permutations, adversarial perturbations and common natural corruptions (e.g., noise, blur, contrast and pixelation artefacts). However, similar to CNNs [10], a shape-focused training process renders them vulnerable against adversarial attacks and common corruptions. \u2022 Apart from their promising robustness properties, off-the-shelf ViT features from ImageNet pretrained models generalize exceptionally well to new domains e.g., few-shot learning, \ufb01negrained recognition, scene categorization and long-tail classi\ufb01cation settings. In addition to our extensive experimental analysis and new \ufb01ndings, we introduce several novel design choices to highlight the strong potential of ViTs. To this end, we propose an architectural modi\ufb01cation to DeiT to encode shape-information via a dedicated token that demonstrates how seemingly contradictory cues can be modeled with distinct tokens within the same architecture, leading to favorable implications such as automated segmentation without pixel-level supervision. Moreover, our off-the-shelf feature transfer approach utilizes an ensemble of representations derived from a single architecture to obtain state-of-the-art generalization with a pre-trained ViT (Fig. 1). 2 Related Work CNNs have shown state-of-the-art performance in independent and identically distributed (i.i.d) settings but remain highly sensitive to distributional shifts; adversarial noise [11, 12], common image corruptions [13], and domain shifts (e.g., RGB to sketches) [14]. It is natural to ask if ViT, that processes inputs based on self-attention, offers any advantages in comparison to CNN. Shao et al. [15] analyze ViTs against adversarial noise and show ViTs are more robust to high frequency changes. Similarly, Bhojanapalli et al. [16] study ViT against spatial perturbations [15] and its robustness to removal of any single layer. Since ViT processes image patches, we focus on their robustness against patch masking, localized adversarial patches [17] and common natural corruptions. A concurrent work from Paul and Chen [18] also develops similar insights on robustness of ViTs but with a somewhat different set of experiments. Geirhos et al. [9] provide evidence that CNNs mainly exploit texture to make a decision and give less importance to global shape. 
This is further con\ufb01rmed by CNN ability to only use local features 2 \f[19]. Recently, [20] quanti\ufb01es mutual information [21] between shape and texture features. Our analysis indicates that large ViT models have less texture bias and give relatively higher emphasis to shape information. ViT\u2019s shape-bias approaches human-level performance when directly trained on stylized ImageNet [9]. Our \ufb01ndings are consistent with a concurrent recent work that demonstrates the importance of this trend on human behavioural understanding and bridging the gap between human and machine vision [22]. A recent work [23] shows that self-supervised ViT can automatically segment foreground objects. In comparison, we show how shape-focused learning can impart similar capability in the image-level supervised ViT models, without any pixel-level supervision. Zeiler et al. [24] introduce a method to visualize CNN features at different layers and study the performance of off-the-shelf features. In a similar spirit, we study the generalization of off-the-shelf features of ViT in comparison to CNN. Receptive \ufb01eld is an indication of network\u2019s ability to model long range dependencies. The receptive \ufb01eld of Transformer based models covers the entire input space, a property that resembles handcrafted features [25], but ViTs have higher representative capacity. This allows ViT to model global context and preserve the structural information compared to CNN [26]. This work is an effort to demonstrate the effectiveness of \ufb02exible receptive \ufb01eld and content-based context modeling in ViTs towards robustness and generalization of the learned features. 3 Intriguing Properties of Vision Transformers 3.1 Are Vision Transformers Robust to Occlusions? The receptive \ufb01eld of a ViT spans over the entire image and it models the interaction between the sequence of image patches using self-attention [26, 27]. We study whether ViTs perform robustly in occluded scenarios, where some or most of the image content is missing. Occlusion Modeling: Consider a network f, that processes an input image x to predict a label y, where x is represented as a patch sequence with N elements, i.e., x = {xi}N i=1 [2]. While there can be multiple ways to de\ufb01ne occlusion, we adopt a simple masking strategy, where we select a subset of the total image patches, M < N, and set pixel values of these patches to zero to create an occluded image, x\u2032. We refer to this approach as PatchDrop. The objective is then to observe robustness such that f(x\u2032)argmax = y. We experiment with three variants of our occlusion approach, (a) Random PatchDrop, (b) Salient (foreground) PatchDrop, and (c) Non-salient (background) PatchDrop. Random PatchDrop: A subset of M patches is randomly selected and dropped (Fig. 2). Several recent Vision Transformers [2, 3, 4] divide an image into 196 patches belonging to a 14x14 spatial grid; i.e. an image of size 224\u00d7224\u00d73 is split into 196 patches, each of size 16\u00d716\u00d73. As an example, dropping 100 such patches from the input is equivalent to losing 51% of the image content. Salient (foreground) PatchDrop: Not all pixels have the same importance for vision tasks. Thus, it is important to study the robustness of ViTs against occlusions of highly salient regions. We leverage a self-supervised ViT model DINO [23] that is shown to effectively segment salient objects. 
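A minimal sketch of the Random PatchDrop operation described above (the salient and non-salient variants differ only in how the subset of patches is chosen, as sketched further below); the function name and interface are illustrative.

```python
import torch

def random_patch_drop(x, drop_ratio=0.5, patch=16):
    """Zero out a random subset of 16x16 patches, following the masking strategy above."""
    b, c, h, w = x.shape
    gh, gw = h // patch, w // patch                 # 14x14 grid for 224x224 inputs
    n = gh * gw
    m = int(drop_ratio * n)                         # number of patches to drop (information loss M/N)
    x = x.clone()
    for i in range(b):
        for j in torch.randperm(n)[:m].tolist():    # patches dropped for image i
            r, cidx = j // gw, j % gw
            x[i, :, r*patch:(r+1)*patch, cidx*patch:(cidx+1)*patch] = 0.0
    return x
```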
In particular, the spatial positions of information \ufb02owing into the \ufb01nal feature vector (class token) within the last attention block are exploited to locate the salient pixels. This allows to control the amount of salient information captured within the selected pixels by thresholding the quantity of attention \ufb02ow. We select the subset of patches containing the top Q% of foreground information (deterministic for \ufb01xed Q) and drop them. Note that this Q% does not always correspond to the pixel percentage, e.g., 50% of the foreground information of an image may be contained within only 10% of its pixels. Non-salient (background) PatchDrop: The least salient regions of the image are selected following the same approach as above, using [23]. The patches containing the lowest Q% of foreground information are selected and dropped here. Note this does not always correspond to the pixel percentage, e.g., 80% of the pixels may only contain 20% of the non-salient information for an image. Figure 2: An example image with its occluded versions (Random, Salient and NonSalient). The occluded images are correctly classi\ufb01ed by Deit-S [3] but misclassi\ufb01ed by ResNet50 [28]. Pixel values in occluded (black) regions are set to zero. Original Image Random PatchDrop Salient PatchDrop Non-Salient PatchDrop 3 \fFigure 3: Robustness against object occlusion in images is studied under three PatchDrop settings (see Sec 3.1). (left) We study the robustness of CNN models to occlusions, and identify ResNet50 as a strong baseline. (mid-left) We compare the DeiT model family against ResNet50 exhibiting their superior robustness to object occlusion. (mid-right) Comparison against ViT model family. (right) Comparison against T2T model family. Robust Performance of Transformers Against Occlusions: We consider visual recognition task with models pretrained on ImageNet [2]. The effect of occlusion is studied on the validation set (50k images). We de\ufb01ne information loss (IL) as the ratio of dropped and total patches (M / N). IL is varied to obtain a range of occlusion levels for each PatchDrop methodology. The results (Top-1 %) reported in Fig. 3 show signi\ufb01cantly robust performance of ViT models against CNNs. In the case of random PatchDrop, we report the mean of accuracy across 5 runs. For Salient and Non-Salient Patchdrop, we report the accuracy values over a single run, since the occlusion mask is deterministic. CNNs perform poorly when 50% of image information is randomly dropped. For example, ResNet50 (23 Million parameters) achieves 0.1% accuracy in comparison to DeiT-S (22 Million parameters) which obtains 70% accuracy when 50% of the image content is removed. An extreme example can be observed when 90% of the image information is randomly masked but Deit-B still exhibits 37% accuracy. This \ufb01nding is consistent among different ViT architectures [2, 3, 4]. Similarly, ViTs show signi\ufb01cant robustness to the foreground (salient) and background (non-salient) content removal. See Appendix A, B, C, D, and E for further results on robustness analysis. ViT Representations are Robust against Information Loss: In order to better understand model behavior against such occlusions, we visualize the attention (Fig. 4) from each head of different layers. While initial layers attend to all areas, deeper layers tend to focus more on the leftover information in non-occluded regions of an image. 
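The saliency-guided PatchDrop variants can be sketched in the same way, assuming a per-patch saliency map such as the class-token attention of DINO's last block; the paper's exact attention-flow thresholding is not reproduced here, only the idea of dropping the patches that carry the top-Q fraction of foreground mass.

```python
import torch

def salient_patch_drop(x, saliency, q=0.5, patch=16):
    """Drop patches holding the top-q fraction of saliency mass (illustrative helper).

    saliency: (B, gh, gw) per-patch scores, e.g. CLS attention from DINO's last block.
    For the non-salient variant, sort in ascending order instead.
    """
    b, c, h, w = x.shape
    gh, gw = saliency.shape[1], saliency.shape[2]
    x = x.clone()
    flat = saliency.reshape(b, -1)
    order = flat.argsort(dim=1, descending=True)        # most salient patches first
    mass = flat.gather(1, order)
    cum = mass.cumsum(dim=1) / mass.sum(dim=1, keepdim=True).clamp(min=1e-8)
    for i in range(b):
        keep = int((cum[i] <= q).sum().item()) + 1      # patches covering the top-q mass
        for j in order[i, :keep].tolist():
            r, cidx = j // gw, j % gw
            x[i, :, r*patch:(r+1)*patch, cidx*patch:(cidx+1)*patch] = 0.0
    return x
```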
We then study if such changes from initial to deeper layers lead to token invariance against occlusion which is important for classi\ufb01cation. We measure the correlation coef\ufb01cient between features/tokens of original and occluded images by using corr(u, v) = P i \u02c6 ui \u02c6 vi n , where \u02c6 ui = ui\u2212E[ui] \u03c3(ui) , E[\u00b7] and \u03c3(\u00b7) are mean and standard deviation operations [29]. In our case, random variables u and v refer to the feature maps for an original and occluded image de\ufb01ned over the entire ImageNet validation set. In the case of ResNet50, we consider features before the logit layer and for ViT models, class tokens are extracted from the last transformer block. Class tokens from transformers are signi\ufb01cantly more robust and do not suffer much information loss as compared to ResNet50 features (Table 1). Furthermore, we visualize the correlation coef\ufb01cient across the 12 selected superclasses within ImageNet hierarchy and note that the trend holds across different class types, even for relatively small object types such as insects, food items and birds (Fig. 5). See Appendix F for attention visualizations and G for the qualitative results. Given the intriguing robustness of transformer models due to dynamic receptive \ufb01elds and discriminability preserving behaviour of the learned tokens, an ensuing question is whether the learned representations in ViTs are biased towards texture or not. One can expect a biased model focusing only on texture to still perform well when the spatial structure for an object is partially lost. 4 \fFigure 4: Attention maps (averaged over the entire ImageNet val. set) relevant to each head in multiple layers of an ImageNet pre-trained DeiT-B model. All images are occluded (Random PatchDrop) with the same mask (bottom right). Observe how later layers clearly attend to non-occluded regions of images to make a decision, an evidence of the model\u2019s highly dynamic receptive \ufb01eld. Model Correlation Coef\ufb01cient: Random PatchDrop 25% Dropped 50% Dropped 75% Dropped ResNet50 0.32\u00b10.16 0.13\u00b10.11 0.07\u00b10.09 TnT-S 0.83\u00b10.08 0.67\u00b10.12 0.46\u00b10.17 ViT-L 0.92\u00b10.06 0.81\u00b10.13 0.50\u00b10.21 Deit-B 0.90\u00b10.06 0.77\u00b10.10 0.56\u00b10.15 T2T-24 0.80\u00b10.10 0.60\u00b10.15 0.31\u00b10.17 Table 1: Correlation coef\ufb01cient b/w features/\ufb01nal class tokens of original and occluded images for Random PatchDrop. Averaged across the ImageNet val. set. Figure 5: Correlation b/w features/\ufb01nal tokens of original and occluded images for 50% Random Drop. Results are averaged across classes for each superclass. 3.2 Shape vs. Texture: Can Transformer Model Both Characteristics? Geirhos et al. [9] study shape vs. texture hypothesis and propose a training framework to enhance shape-bias in CNNs. We \ufb01rst carry out similar analysis and show that ViT models preform with a shape-bias much stronger than that of a CNN, and comparably to the ability of human visual system in recognizing shapes. However, this approach results in a signi\ufb01cant drop in accuracy on the natural images. To address this issue, we introduce a shape token into the transformer architecture that learns to focus on shapes, thereby modeling both shape and texture related features within the same architecture using a distinct set of tokens. As such, we distill the shape information from a pretrained CNN model with high shape-bias [9]. 
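The correlation measure reported in Table 1 and Fig. 5 above can be written compactly as below; it assumes each sample's original and occluded features are standardized along the feature dimension and that the per-sample correlations are then averaged over the validation set, matching the definition given earlier in this subsection.

```python
import torch

@torch.no_grad()
def token_correlation(u, v, eps=1e-8):
    """Correlation between features/tokens of original (u) and occluded (v) images.

    u, v: (B, n) batches of feature vectors or final class tokens; returns one
    correlation value per sample (corr(u, v) = mean_i of standardized products).
    """
    u_hat = (u - u.mean(dim=1, keepdim=True)) / (u.std(dim=1, keepdim=True) + eps)
    v_hat = (v - v.mean(dim=1, keepdim=True)) / (v.std(dim=1, keepdim=True) + eps)
    return (u_hat * v_hat).mean(dim=1)
```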
Our distillation approach makes a balanced trade-off between high classi\ufb01cation accuracy and strong shape-bias compared to the original ViT model. We outline both approaches below. Note that the measure introduced in [9] is used to quantify shape-bias within ViT models and compare against their CNN counterparts. Training without Local Texture: In this approach, we \ufb01rst remove local texture cues from the training data by creating a stylized version of ImageNet [9] named SIN. We then train tiny and small DeiT models [3] on this dataset. Typically, ViTs use heavy data augmentations during training [3]. However, learning with SIN is a dif\ufb01cult task due to less texture details and applying further augmentations on stylized samples distorts shape information and makes the training unstable. Thus, we train models on SIN without applying any augmentation, label smoothing or mixup. We note that ViTs trained on ImageNet exhibit higher shape-bias in comparison to similar capacity CNN models e.g., DeiT-S (22-Million params) performs better than ResNet50 (23-Million params) (Fig. 6, right plot). In contrast, the SIN trained ViTs consistently perform better than CNNs. Interestingly, DeiT-S [3] reaches human-level performance when trained on a SIN (Fig. 6, left plot). 5 \fFigure 6: Shape-bias Analysis: Shape-bias is de\ufb01ned as the fraction of correct decisions based on object shape. (Left) Plot shows shape-texture tradeoff for CNN, ViT and Humans across different object classes. (Right) classmean shape-bias comparison. Overall, ViTs perform better than CNN. The shape bias increases signi\ufb01cantly when trained on stylized ImageNet (SIN). Model Distilled Token Type ImageNet top-1 (%) Shape Bias DeiT-T-SIN \u0017 cls 40.5 0.87 DeiT-T-SIN \u0013 cls 71.8 0.35 DeiT-T-SIN \u0013 shape 63.4 0.44 DeiT-S-SIN \u0017 cls 52.5 0.93 DeiT-S-SIN \u0013 cls 75.3 0.39 DeiT-S-SIN \u0013 shape 67.7 0.47 Table 3: Performance comparison of models trained on SIN. ViT produces dynamic features that can be controlled by auxiliary tokens. \u2018cls\u2019 represents the class token. During distillation cls and shape tokens converged to vastly different solution using the same features as compared to [3]. Shape Distillation: Knowledge distillation allows to compress large teacher models into smaller student models [30] as the teacher provides guidance to the student through soft labels. We introduce a new shape token and adapt attentive distillation [3] to distill shape knowledge from a CNN trained on the SIN dataset (ResNet50-SIN [9]). We observe that ViT features are dynamic in nature and can be controlled by auxiliary tokens to focus on the desired characteristics. This means that a single ViT model can exhibit both high shape and texture bias at the same time with separate tokens (Table 3). We achieve more balanced performance for classi\ufb01cation as well as shape-bias measure when the shape token is introduced (Fig. 7). To demonstrate that these distinct tokens (for classi\ufb01cation and shape) indeed model unique features, we compute cosine similarity (averaged over ImageNet val. set) between class and shape tokens of our distilled models, DeiT-T-SIN and DeiT-S-SIN, which turns out to be 0.35 and 0.68, respectively. This is signi\ufb01cantly lower than the similarity between class and distillation tokens [3]; 0.96 and 0.94 for DeiT-T and Deit-S, respectively. 
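A hedged sketch of the shape-token training signal described above: the class token keeps its standard label supervision while the new shape token is distilled from a shape-biased teacher (ResNet50 trained on SIN), in the spirit of DeiT's token-based hard-label distillation. The equal loss weighting and the hard-label form are assumptions made for illustration, not the authors' exact objective.

```python
import torch
import torch.nn.functional as F

def shape_distillation_loss(cls_logits, shape_logits, labels, images, teacher):
    """Illustrative joint objective for the class and shape tokens of a distilled ViT."""
    with torch.no_grad():
        teacher_labels = teacher(images).argmax(dim=1)   # shape-biased pseudo-labels (ResNet50-SIN)
    loss_cls = F.cross_entropy(cls_logits, labels)       # standard supervision for the class token
    loss_shape = F.cross_entropy(shape_logits, teacher_labels)  # distillation signal for the shape token
    return 0.5 * loss_cls + 0.5 * loss_shape             # equal weighting is an assumption
```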
This con\ufb01rms our hypothesis on modeling distinct features with separate tokens within ViTs, a unique capability that cannot be straightforwardly achieved with CNNs. Further, it offers other bene\ufb01ts as we explain next. Figure 7: Shape Distillation. Shape-biased ViT Offers Automated Object Segmentation: Interestingly, training without local texture or with shape distillation allows a ViT to concentrate on foreground objects in the scene and ignore the background (Table 4, Fig. 8). This offers an automated semantic segmentation for an image although the model is never shown pixel-wise object labels. That is, shape-bias can be used as self-supervision signals for the ViT model to learn distinct shape-related features that help localize the right foreground object. We note that a ViT trained without emphasis on shape does not perform well (Table 4). The above results show that properly trained ViT models offer shape-bias nearly as high as the human\u2019s ability to recognize shapes. This leads us to question if positional encoding is the key that helps ViTs achieve high performance under severe occlusions (as it can potentially allow later layers to recover the missing information with just a few image patches given their spatial ordering). This possibility is examined next. 6 \fModel Distilled Token Type Jaccard Index DeiT-T-Random \u0017 cls 19.6 DeiT-T \u0017 cls 32.2 DeiT-T-SIN \u0017 cls 29.4 DeiT-T-SIN \u0013 cls 40.0 DeiT-T-SIN \u0013 shape 42.2 DeiT-S-Random \u0017 cls 22.0 DeiT-S \u0017 cls 29.2 DeiT-S-SIN \u0017 cls 37.5 DeiT-S-SIN \u0013 cls 42.0 DeiT-S-SIN \u0013 shape 42.4 Table 4: We compute the Jaccard similarity between ground truth and masks generated from the attention maps of ViT models (similar to [23] with threshold 0.9) over the PASCAL-VOC12 validation set. Only class level ImageNet labels are used for training these models. Our results indicate that supervised ViTs can be used for automated segmentation and perform closer to the self-supervised method DINO [23]. DeiT-S DeiT-S-SIN DeiT-S-SIN (Distilled) Figure 8: Segmentation maps from ViTs. Shape distillation performs better than standard supervised models. 3.3 Does Positional Encoding Preserve the Global Image Context? Transformers\u2019 ability to process long-range sequences in parallel using self-attention [27] (instead of a sequential design in RNN [31]) is invariant to sequence ordering. For images, the order of patches represents the overall image structure and global composition. Since ViTs operate on a sequence of images patches, changing the order of sequence e.g., shuf\ufb02ing the patches can destroy the image structure. Current ViTs [2, 3, 4, 26] use positional encoding to preserve this context. Here, we analyze if the sequence order modeled by positional encoding allows ViT to excel under occlusion handling. Our analysis suggests that transformers show high permutation invariance to the patch positions, and the effect of positional encoding towards injecting structural information of images to ViT models is limited (Fig. 10). This observation is consistent with the \ufb01ndings in the language domain [32] as described below. Figure 9: An illustration of shuf\ufb02e operation applied on images used to eliminate their structural information. (best viewed zoomed-in) Sensitivity to Spatial Structure: We remove the structural information within images (spatial relationships) as illustrated in Fig. 9 by de\ufb01ning a shuf\ufb02ing operation on input image patches. Fig. 
10 shows that the DeiT models [3] retain accuracy better than their CNN counterparts when spatial structure of input images is disturbed. This also indicates that positional encoding is not absolutely crucial for right classi\ufb01cation decisions, and the model does not \u201crecover\u201d global image context using the patch sequence information preserved in the positional encodings. Without encoding, the ViT performs reasonably well and achieves better permutation invariance than a ViT using position Figure 10: Models trained on 196 image patches. Top-1 (%) accuracy over ImageNet val. set when patches are shuf\ufb02ed. Note the performance peaks when shuf\ufb02e grid size is equal to the original number of patches used during training, since it equals to only changing the position of input patch (and not disturbing the patch content). Figure 11: DeiT-T [3] trained on different number of image patches. Reducing patch size decreases the overall performance but also increases sensitivity to shuf\ufb02e grid size. 7 \fTrained with Augmentations Trained without Augmentation DeiT-B DeiT-S DeiT-T T2T-24 TnT-S Augmix ResNet50 ResNet50-SIN DeiT-T-SIN DeiT-S-SIN 48.5 54.6 71.1 49.1 53.1 65.3 76.7 77.3 94.4 84.0 Table 4: mean Corruption Error (mCE) across common corruptions [13] (lower the better). While ViTs have better robustness compared to CNNs, training to achieve a higher shape-bias makes both CNNs and ViTs more vulnerable to natural distribution shifts. All models trained with augmentations (ViT or CNN) have lower mCE in comparison to models trained without augmentations on ImageNet or SIN. Figure 12: Robustness against adversarial patch attack. ViTs even with less parameters exhibit a higher robustness than CNN. Models trained on ImageNet are more robust than the ones trained on SIN. Results are averaged across \ufb01ve runs of patch attack over ImageNet val. set. Figure 13: Robustness against sample speci\ufb01c attacks including single step, FGSM [34], and multi-step, PGD [35]. ViTs even with less parameters exhibit a higher robustness than CNN. PGD ran for 5 iterations only. Attacks are evaluated under l\u221enorm and \u03f5 represents the perturbation budget by which each pixel is changed in the input image. Results are reported over the ImageNet val. set. encoding (Fig. 10). Finally, when the patch size is varied during ViT training, the permutation invariance property is also degraded along with the accuracy on unshuf\ufb02ed natural images (Fig. 11). Overall, we attribute the permutation invariance performance of ViTs to their dynamic receptive \ufb01eld that depends on the input patch and can adjust attention with the other sequence elements such that moderately shuf\ufb02ing the elements does not degrade the performance signi\ufb01cantly. The above analysis shows that just like the texture-bias hypothesis does not apply to ViTs, the dependence on positional encodings to perform well under occlusions is also incorrect. This leads us to the conclude that ViTs robustness is due to its \ufb02exible and dynamic receptive \ufb01eld (see Fig. 4) which depends on the content of an input image. We now delve further deep into the robustness of ViT, and study its performance under adversarial perturbations and common corruptions. 3.4 Robustness of Vision Transformers to Adversarial and Natural Perturbations After analyzing the ability of ViTs to encode shape information (Sec. 3.2), one ensuing question is: Does higher shape-bias help achieve better robustness? 
In Table 4, we investigate this by calculating mean corruption error (mCE) [13] on a variety of synthetic common corruptions (e.g., rain, fog, snow and noise). A ViT with similar parameters as CNN (e.g., DeiT-S) is more robust to image corruptions than ResNet50 trained with augmentations (Augmix [33]). Interestingly, CNNs and ViTs trained without augmentations on ImageNet or SIN are more vulnerable to corruptions. These \ufb01ndings are consistent with [10], and suggest that augmentations improve robustness against common corruptions. We observe similar performance against untargeted, universal adversarial patch attack [17] and sample speci\ufb01c attacks including single step, fast gradient sign method (FGSM) [34], and multi-step projected gradient attack known as PGD [35]. Adversarial patch attack [17] is unbounded that is it can change pixel values at certain location in the input image by any amount, while sample speci\ufb01c attacks [34, 35] are bounded by l\u221enorm with a perturbation budget \u03f5, where \u03f5 represents the amount by which each pixel is changed in the entire image. ViTs and CNN trained on SIN are signi\ufb01cantly 8 \fFigure 14: A single ViT model can provide a features ensemble since class token from each block can be processed by the classi\ufb01er independently. This allows us to identify the most discriminative tokens useful for transfer learning. Figure 15: Top-1 (%) for ImageNet val. set for class tokens produced by each ViT block. Class tokens from the last few layers exhibit highest performance indicating the most discriminative tokens. Blocks Class Patch CUB Flowers iNaturalist Tokens Tokens [37] [38] [39] Only 12th (last block) \u0013 \u0017 68.16 82.58 38.28 \u0013 \u0013 70.66 86.58 41.22 From 1st to 12th \u0013 \u0017 72.90 91.38 44.03 \u0013 \u0013 73.16 91.27 43.33 From 9th to 12th \u0013 \u0017 73.58 90.00 45.15 \u0013 \u0013 73.37 90.33 45.12 Table 5: Ablative Study for off-the-shelf feature transfer on three datasets using ImageNet pretrained DeiT-S [3]. A linear classi\ufb01er is learned on only a concatenation of class tokens or the combination of class and averaged patch tokens at various blocks. We note class token from blocks 9-12 are most discriminative (Fig. 15) and have the highest transferability in terms of Top-1 (%) accuracy. more vulnerable to adversarial attack than models trained on ImageNet (Figs. 12 and 13), due to the shape-bias vs. robustness trade-off [10]. Given the strong robustness properties of ViT as well as their representation capability in terms of shape-bias, automated segmentation and \ufb02exible receptive \ufb01eld, we analyze their utility as an off-the-shelf feature extractor to replace CNNs as the default feature extraction mechanism [36]. 3.5 Effective Off-the-shelf Tokens for Vision Transformer A unique characteristic of ViT models is that each block within the model generates a class token which can be processed by the classi\ufb01cation head separately (Fig. 14). This allows us to measure the discriminative ability of each individual block of an ImageNet pre-trained ViT as shown in Fig. 15. Class tokens generated by the deeper blocks are more discriminative and we use this insight to identify an effective ensemble of blocks whose tokens have the best downstream transferability. Transfer Methodology: As illustrated in Fig. 15, we analyze the block-wise classi\ufb01cation accuracy of DeiT models and determine the discriminative information is captured within the class tokens of the last few blocks. 
As such, we conduct an ablation study for off-the-shelf transfer learning on \ufb01ne-grained classi\ufb01cation dataset CUB [37], Flowers [38] and large scale iNaturalist [39] using DeiT-S [3] as reported in Table 5. Here, we concatenate the class tokens (optionally combined with average patch tokens) from different blocks and train a linear classi\ufb01er to transfer the features to downstream tasks. Note that a patch token is generated by averaging along the patch dimension. The scheme that concatenate class tokens from the last four blocks shows the best transfer learning performance. We refer to this transfer methodology as DeiT-S (ensemble). Concatenation of both class and averaged patch tokens from all blocks helps achieve similar performance compared to the tokens from the last four blocks but requires signi\ufb01cantly large parameters to train. We \ufb01nd some exception to this on the Flower dataset [38] where using class tokens from all blocks have relatively better improvement (only 1.2%), compared to the class tokens from the last four blocks (Table 5). However, concatenating tokens from all blocks also increases the number of parameters e.g., transfer to Flowers from all tokens has 3 times more learnable parameters than using only the last four tokens. We conduct further experimentation with DeiT-S (ensemble) across a broader range of tasks to validate our hypothesis. We further compare against a pre-trained ResNet50 baseline, by using features before the logit layer. Visual Classification: We analyze the transferability of off-the-shelf features across several datasets including Aircraft [40], CUB [37], DTD [41], GTSRB [42], Fungi [43], Places365 [44] and iNaturalist [39]. These datasets are developed for \ufb01ne-grained recognition, texture classi\ufb01cation, 9 \fFigure 16: Off-the-shelf ViT features transfer better than CNNs. We explore transferability of learned representations using generic classi\ufb01cation as well as few-shot classi\ufb01cation for out-of-domain tasks. In the case of classi\ufb01cation (left), the ImageNet pre-trained ViTs transfer better than their CNN counterparts across tasks. In the case of few-shot learning (right), ImageNet pre-trained ViTs perform better on average. traf\ufb01c sign recognition, species classi\ufb01cation and scene recognition with 100, 200, 47, 43, 1394, 365 and 1010 classes respectively. We train a linear classi\ufb01er on top of the extracted features over the train split of each dataset, and evaluate the performance on their respective test splits. The ViT features show clear improvements over the CNN baseline (Fig. 16). We note that DeiT-T, which requires about 5 times fewer parameters than ResNet50, performs better among all datasets. Furthermore, the model with the proposed ensemble strategy achieves the best results across all datasets. Few-Shot Learning: We consider meta-dataset [45] designed as a large-scale few-shot learning (FSL) benchmark containing a diverse set of datasets from multiple domains. This includes letters of alphabets, hand-drawn sketches, images of textures, and \ufb01ne-grained classes making it a challenging dataset involving a domain adaption requirement as well. We follow the standard setting of training on ImageNet and testing on all other datasets which are considered as the downstream tasks. In our experiments, we use a network pre-trained for classi\ufb01cation on ImageNet dataset to extract features. 
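The DeiT-S (ensemble) transfer scheme above amounts to concatenating the class tokens of the last few blocks and fitting a linear probe on the frozen features. The sketch below assumes a DeiT-style model that exposes its transformer blocks as `model.blocks` with the class token at sequence position 0, and uses forward hooks to stay model-agnostic; applying the shared final norm to each collected token, as in the paper's ensemble, is noted as a comment.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def ensemble_class_tokens(model, x, last_k=4):
    """Concatenate class tokens from the last `last_k` blocks (illustrative helper)."""
    feats = []
    hooks = [blk.register_forward_hook(lambda m, inp, out: feats.append(out[:, 0]))
             for blk in model.blocks[-last_k:]]
    model(x)                          # forward pass fills `feats` via the hooks
    for h in hooks:
        h.remove()
    # optionally apply the shared final norm to each collected token here, as in the paper
    return torch.cat(feats, dim=1)    # (B, last_k * embed_dim)

# A linear classifier, e.g. nn.Linear(last_k * embed_dim, num_classes), is then
# trained on these frozen features for each downstream dataset.
```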
For each downstream dataset, under the FSL setting, a support set of labelled images is available for every test query. We use the extracted features to learn a linear classi\ufb01er over the support set for each query (similar to [46]), and evaluate using the standard FSL protocol de\ufb01ned in [45]. This evaluation involves a varying number of shots speci\ufb01c for each downstream dataset. On average, the ViT features transfer better across these diverse domains (Fig. 16) in comparison to the CNN baseline. Furthermore, we note that the transfer performance of ViT is further boosted using the proposed ensemble strategy. We also highlight the improvement in QuickDraw, a dataset containing hand-drawn sketches, which aligns with our \ufb01ndings on improved shape-bias of ViT models in contrast to CNN models (see Sec. 3.2 for elaborate discussion). 4 Discussion and" + }, + { + "url": "http://arxiv.org/abs/2103.14641v2", + "title": "On Generating Transferable Targeted Perturbations", + "abstract": "While the untargeted black-box transferability of adversarial perturbations\nhas been extensively studied before, changing an unseen model's decisions to a\nspecific `targeted' class remains a challenging feat. In this paper, we propose\na new generative approach for highly transferable targeted perturbations\n(\\ours). We note that the existing methods are less suitable for this task due\nto their reliance on class-boundary information that changes from one model to\nanother, thus reducing transferability. In contrast, our approach matches the\nperturbed image `distribution' with that of the target class, leading to high\ntargeted transferability rates. To this end, we propose a new objective\nfunction that not only aligns the global distributions of source and target\nimages, but also matches the local neighbourhood structure between the two\ndomains. Based on the proposed objective, we train a generator function that\ncan adaptively synthesize perturbations specific to a given input. Our\ngenerative approach is independent of the source or target domain labels, while\nconsistently performs well against state-of-the-art methods on a wide range of\nattack settings. As an example, we achieve $32.63\\%$ target transferability\nfrom (an adversarially weak) VGG19$_{BN}$ to (a strong) WideResNet on ImageNet\nval. set, which is 4$\\times$ higher than the previous best generative attack\nand 16$\\times$ better than instance-specific iterative attack. Code is\navailable at: {\\small\\url{https://github.com/Muzammal-Naseer/TTP}}.", + "authors": "Muzammal Naseer, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Fatih Porikli", + "published": "2021-03-26", + "updated": "2021-08-13", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction We study the challenging problem of targeted transferability of adversarial perturbations. In this case, given an input sample from any source category, the goal of the adversary is to change the decision of an unknown model to a speci\ufb01c target class (e.g., misclassify any painting image to Fire truck, see Fig. 1). 
This task is signi\ufb01cantly more dif\ufb01cult than merely changing the decision to a ranNatural Images Source Domain OR : Augmenter Paintings Fire Truck Target Domain OR Parachute Target Latent Space 4 < Q Q 4 : Generator < : Discriminator Maximize Distribution Agreement Adversarial Augmented Adversarial Figure 1: Attack Overview (TTP): Instead of \ufb01nding perturbations speci\ufb01c to a class-boundary information learned by a model, TTP seeks to match global distribution statistics between the source and the target domains. Speci\ufb01cally, our generator function is trained to maximize agreement between the perturbed source distribution, its augmented version and the target distribution in the feature space. Importantly, our attack can function in an unsupervised fashion and does not require source domain to be the same as target (e.g., perturbations can be learned from paintings to transfer on natural images). dom target class or any similar class (e.g., changing \u2018cat\u2019 to \u2018aeroplane\u2019 is more dif\ufb01cult than altering the decision to \u2018dog\u2019). Target transferability can therefore lead to goaldriven adversarial perturbations that provide desired control over the attacked model. However, target transferability remains challenging for the current adversarial attacks [26, 24, 4, 44, 15, 14, 13, 21, 42] that transfer adversarial noise in a black-box setting, where architecture and training mechanism of the attacked model remain unknown, and the attack is restricted within a certain perturbation budget. We observe that modest performance of existing metharXiv:2103.14641v2 [cs.CV] 13 Aug 2021 \fods on targeted transferability is due to their reliance on class-boundary information learned by the model which lacks generalizability. For example, iterative instancespeci\ufb01c attacks rely on the classi\ufb01cation score information to perturb a given sample, thereby ignoring the global classspeci\ufb01c information [26, 4, 44, 14]. Such adversarial directions also vary across different models [24], leading to poor target transferability [24, 4]. On the other hand, although universal and generative perturbations are designed to encode global noise patterns [27, 36, 32], they still exploit the class impressions learned by a neural network which alone are not fully representative of the target distribution, thereby achieving only modest black-box fooling rates [37]. Furthermore, they are dependent on the classi\ufb01cation information, necessitating a supervised pretrained model for generator\u2019s guidance and therefore cannot directly work with unsupervised features [1, 9]. Another group of techniques exploit intermediate features, but they either \ufb01nd untargeted perturbations by design [29, 38] or are limited in their capacity to transfer targeted perturbations [21, 15, 13, 14]. We introduce a novel generative training framework which maps a given source distribution to a speci\ufb01c target distribution by maximizing the mutual agreement between the two in the latent space of a pretrained discriminator. Our main contributions are: \u2022 Generative Targeted Transferability: We propose a novel generative approach to learn transferable targeted adversarial perturbations. Our unique training mechanism allows the generator to explore augmented adversarial space during training which enhances the transferability of adversarial examples during inference (Sec. 3.1). 
\u2022 Mutual Distribution Matching: Our training approach is based on maximizing the mutual agreement between the given source and the target distribution. Therefore, our method can provide targeted guidance to train the generator without the need of classi\ufb01cation boundary information. This allows an attacker to learn targeted generative perturbations from the unsupervised features [1, 9] and eliminate the cost of labelled data (Sec. 3.2). \u2022 Neighbourhood Similarity Matching: Alongside global distribution matching, we introduce batch-wise neighbourhood similarity matching objective between adversarial and target class samples to maximize the local alignment between the two distributions (Sec. 3.3). Our extensive experiments on various ImageNet splits and CNN architectures show state-of-the-art targeted transferability against naturally and adversarially trained models, stylized models and input-processing based defenses. The results demonstrate our bene\ufb01t compared to recent targeted instance-speci\ufb01c as well as other generative methods. Further, our attack demonstrates rapid convergence. 2. Related Work Iterative Instance-Speci\ufb01c Perturbations: After Szegedy et al. [41] highlighted the vulnerability of neural networks, many adversarial attacks have been introduced to study if the adversarial examples are transferable from one model to another, when a target model is unknown. Among these, iterative instance-speci\ufb01c attacks [4, 44, 5] perturb a given sample by iteratively using gradient information. Target transferability of such attacks is very poor [4, 24] (as shown in Sec. 4). Other attacks also use feature space either by maximizing the feature difference [45, 11, 22] or applying attention [43] or avoiding non-linearity while backpropagating gradients [8] or exploiting skip-connections [42]. However, these attacks are mainly designed to enhance non-targeted transferability which is an easier problem. Recently, different instance-speci\ufb01c (transferable) targeted attacks have been proposed including [21] which introduces a triplet loss to push adversarial examples towards the target label while increasing their distance from the original label. Inkawhich et al. [13, 14] proposed to exploit feature space [15] along with the classi\ufb01er information [14] to generate target adversaries that are shown to transfer relatively better than other instance-speci\ufb01c attacks. These attacks [13, 14] have the following limitations. a) They need access to a labeled dataset e.g., ImageNet [39] in order to train one-vs-all binary classi\ufb01ers for attacked target classes. b) They need to identify best performing single layer [13] or a combination of layers [14] which adds further complexity to attack optimization. c) Finally, the attack performance degrades signi\ufb01cantly with quality of features, e.g., it struggles to transfer target perturbations from VGG models [13]. Universal Perturbation: In contrast to instance-speci\ufb01c perturbations, [27] learns a single universal noise pattern which is representative of the entire data distribution and can fool a model on majority of samples. Li et al. [23] introduce gradient transformation module to \ufb01nd smooth universal patterns while [29] shows that such patterns can be found without any training data. 
Although universal perturbation based attacks [27, 28, 29, 23] are efficient (the attacker just needs to add the noise to any given sample at inference), they are limited in their capacity to yield transferable adversaries which can generalize across different data distributions and models [36, 32]. Generative Perturbations: Generative adversarial perturbations perform better than directly optimizing universal noise [38, 36, 32]. Poursaeed et al. [36] proposed the first generative approach to adapt perturbations to an input sample. Naseer et al. [32] improved this framework with a relativistic training objective which also allows cross-domain transferability. Our method belongs to the generative category and can adapt to an input sample with a single forward pass. Unlike [38, 36, 32], we seek to fool the model by matching the distributions of source and target with distribution matching and neighbourhood similarity criteria. Our proposed framework does not require labeled source or target data and can extract target perturbations from a discriminator model trained in an unsupervised manner, while previous generative methods are dependent on the class-boundary information learned by the model. Further, our method converges faster (Sec. 4) and provides improved targeted transferability owing to its novel loss and training mechanism. 3. Generating Targeted Adversaries Our goal is to craft adversarial perturbations $\delta$ that can fool a model into misclassifying any given input to a specific target class $t$. We assume access to source and target domain data represented by $P$ and $Q$, from which the source and target class samples are obtained, i.e., $x_s \sim P$, $x_t \sim Q$. The source and target domains are likely to be non-aligned, i.e., $P \neq Q$, making it challenging to achieve targeted transferability of adversarial perturbations. We also consider a perturbed source distribution $P'$ that comprises adversarially manipulated samples $x'_s \sim P'$, where $x'_s = x_s + \delta$. Here $x_s$, $x'_s$ and $x_t$ represent source, adversarial and target domain samples, while $D_\psi(x_s)$, $D_\psi(x'_s)$ and $D_\psi(x_t)$ are their corresponding latent distributions. 3.1. Generative Model We propose a generative approach to perturb the source domain samples $x_s$ to a specified target class. The framework (see Fig. 2) consists of a generator $G_\theta$ and a discriminator $D_\psi$, parameterized by $\theta$ and $\psi$, respectively. The generator function $G_\theta$ learns a mapping from the source images to the target category such that the input images are minimally changed, i.e., the adversarial noise $\delta$ is strictly constrained under a norm distance $l_\infty \leq \epsilon$. This is ensured by projecting the unbounded adversaries from $G_\theta$ within a fixed norm distance of $x_s$ using a differentiable clipping operation, $x'_s = \mathrm{clip}\big(\min(x_s + \epsilon, \max(W * G_\theta(x_s), x_s - \epsilon))\big)$, (1) where $W$ is a smoothing operator with fixed weights that reduces high frequencies without violating the $l_\infty$ distance constraint. The smooth projection in Eq. 1 (denoted by P in Fig. 2) not only tightly bounds the generator's output within the $l_\infty$ norm but also encourages avoiding redundant high frequencies [35] during the optimization process. This allows the generator to converge to a more meaningful solution.
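For illustration, a minimal PyTorch sketch of the smooth $l_\infty$ projection of Eq. 1 is given below; the 3x3 Gaussian smoothing kernel follows the ablation details reported later in the text, while the [0, 1] pixel range and the exact kernel normalization are assumptions.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel_3x3(channels: int) -> torch.Tensor:
    """Fixed 3x3 Gaussian weights, one kernel per channel (depthwise)."""
    k = torch.tensor([[1., 2., 1.],
                      [2., 4., 2.],
                      [1., 2., 1.]]) / 16.0
    return k.expand(channels, 1, 3, 3).clone()

def smooth_linf_projection(x_s: torch.Tensor,
                           gen_out: torch.Tensor,
                           eps: float = 16.0 / 255.0) -> torch.Tensor:
    """Eq. 1: x'_s = clip(min(x_s + eps, max(W * G(x_s), x_s - eps))).

    `gen_out` is the unbounded generator output; the result stays within an
    l_inf ball of radius `eps` around `x_s` and within a valid pixel range.
    """
    w = gaussian_kernel_3x3(x_s.size(1)).to(x_s)
    smoothed = F.conv2d(gen_out, w, padding=1, groups=x_s.size(1))  # W * G(x_s)
    x_adv = torch.min(x_s + eps, torch.max(smoothed, x_s - eps))    # project into the l_inf ball
    return torch.clamp(x_adv, 0.0, 1.0)                             # keep a valid image

# usage sketch: x_adv = smooth_linf_projection(x_s, generator(x_s), eps=16/255)
```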
The existing generative designs for adversarial attacks [36, 32] leverage the decision space of the discriminator to craft perturbations. In such cases, the class-boundary information learned by the discriminator is used to fool DNN models (e.g., for ImageNet, the discriminator is pretrained on 1k classes). This dependence is problematic, since an attacker must have access to a discriminator trained on a large-scale labeled dataset [3]. The attacker then tries to learn target class impressions using either cross-entropy (CE) [36] or relativistic CE [32]. Thus, the generated perturbations are directly dependent on the quality of the discriminator's classification space. Furthermore, the generated adversaries depend on the input's instance-specific features and do not model the global properties of the target distribution, resulting in only limited transferability. Algorithm 1 (Generating TTP). Require: source data $X_s$, target data $X_t$, pretrained discriminator $D_\psi$, perturbation budget $\epsilon$, loss criterion $\mathcal{L}_G$. Ensure: randomly initialize the generator $G_\theta$. 1: repeat 2: randomly sample mini-batches $x_s \sim X_s$ and $x_t \sim X_t$; 3: create an augmented copy $\tilde{x}_s$ of the source mini-batch; 4: forward-pass $x_s$ and $\tilde{x}_s$ through the generator to obtain unbounded adversaries $x'_s$, $\tilde{x}'_s$; 5: bound the adversaries using Eq. 1 such that $\|x'_s - x_s\|_\infty \leq \epsilon$ and $\|\tilde{x}'_s - \tilde{x}_s\|_\infty \leq \epsilon$; 6: forward-pass $x'_s$, $\tilde{x}'_s$ and $x_t$ through $D_\psi$; 7: compute the matching losses $\mathcal{L}$, $\mathcal{L}_{aug}$ and $\mathcal{L}_{sim}$ using Eqs. 3, 4 and 8, respectively; 8: compute the generator loss given in Eq. 9; 9: backward-pass and update $G_\theta$; 10: until $G_\theta$ converges. To address the above limitations, our generative design models the target distribution $Q$ and pushes the perturbed source distribution $P'$ closer to $Q$ using the latent space of $D_\psi$: $\|\delta\|_\infty \leq \epsilon$, s.t. $D_\psi(x'_s) \approx D_\psi(x_t)$. (2) This global objective provides two crucial benefits. First, reducing the mismatch between the perturbed and target distributions provides improved guidance to the generator. The resulting perturbations align the input samples well with the target distribution, leading to transferable adversaries. Second, the distribution-alignment task makes us independent of $D_\psi$'s classification information. In turn, our approach can function equally well with a discriminator trained in a self-supervised manner on unlabelled data [1, 9]. In our case, we simply align the feature distributions from $D_\psi$ to match $P'$ and $Q$. Thus, for a given sample $x$, $n$-dimensional features are obtained, i.e., $D_\psi(x) \in \mathbb{R}^n$. If $D_\psi$ is trained in a supervised manner on ImageNet then $n = 1000$, and if $D_\psi$ is trained in an unsupervised fashion then $n$ is equal to the output feature dimension. Figure 2: Targeted Transferable Perturbations: During training, TTP matches adversarial and augmented adversarial samples to a target domain within the discriminator's latent space for improved transferability. The adversarial samples corresponding to original and augmented images are bounded (via projection) around their source samples to explore the adversarial space around natural as well as augmented samples. 3.2. Distribution Matching We measure the mutual agreement between $P'$ and $Q$ using the Kullback-Leibler (KL) divergence defined on the discriminator features $D_\psi(x'_s)$ and $D_\psi(x_t)$: $D_{KL}(P'\|Q) = \frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{n} \sigma(D_\psi(x'^{i}_s))_j \log \frac{\sigma(D_\psi(x'^{i}_s))_j}{\sigma(D_\psi(x^{i}_t))_j}$, where $N$ represents the number of samples, $n$ is the discriminator's output dimension, and $\sigma$ denotes the softmax operation. In simple terms, the KL divergence measures the difference between two distributions in terms of the average surprise in experiencing $x_t$ when we expected to see $x'_s$. Since the KL divergence is asymmetric, i.e., $D_{KL}(P'\|Q) \neq D_{KL}(Q\|P')$, and not a valid distance measure, we define our loss function for distribution matching [20] as follows: $\mathcal{L} = D_{KL}(P'\|Q) + D_{KL}(Q\|P')$. (3) As a regularization measure, we add augmented versions of the source domain samples during distribution matching. This enables the generator to focus specifically on adding target class-specific patterns that are robust to input transformations. To this end, we randomly apply rotation, crop-resize, horizontal flip, color jittering or grayscale transformation to create augmented samples $\tilde{x}_s$ from the original $x_s$. The $\tilde{x}_s \sim \tilde{P}$ are passed through $G_\theta$, and the perturbed augmented samples $\tilde{x}'_s \sim \tilde{P}'$ are projected using Eq. 1 to stay close to the augmented samples, i.e., $\|\tilde{x}'_s - \tilde{x}_s\|_\infty \leq \epsilon$. No augmentation is applied to the target domain samples. We then pass $\tilde{x}'_s$ through the discriminator and compute the mutual agreement between $D_\psi(\tilde{x}'_s)$ and $D_\psi(x_t)$ as follows: $\mathcal{L}_{aug} = D_{KL}(\tilde{P}'\|Q) + D_{KL}(Q\|\tilde{P}')$. (4) The impact of data augmentations and their effectiveness for our proposed targeted attack is studied in Sec. 4.
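A minimal sketch of the symmetric KL matching objective of Eqs. 3-4, computed on softmax-normalized discriminator features of an adversarial (or augmented-adversarial) batch and a target batch; the use of `torch.nn.functional.kl_div` with a batchmean reduction is an implementation assumption.

```python
import torch
import torch.nn.functional as F

def symmetric_kl(feat_adv: torch.Tensor, feat_tgt: torch.Tensor) -> torch.Tensor:
    """L = KL(P'||Q) + KL(Q||P') over softmax of discriminator features.

    feat_adv: D_psi(x'_s), shape (N, n); feat_tgt: D_psi(x_t), shape (N, n).
    """
    p = F.softmax(feat_adv, dim=1)   # sigma(D_psi(x'_s))
    q = F.softmax(feat_tgt, dim=1)   # sigma(D_psi(x_t))
    # F.kl_div(input=log q, target=p) computes KL(p || q); batchmean gives the 1/N factor.
    kl_pq = F.kl_div(q.log(), p, reduction="batchmean")
    kl_qp = F.kl_div(p.log(), q, reduction="batchmean")
    return kl_pq + kl_qp

# usage sketch:
# L     = symmetric_kl(D(x_adv), D(x_t))            # Eq. 3
# L_aug = symmetric_kl(D(x_adv_augmented), D(x_t))  # Eq. 4
```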
3.3. Neighbourhood Similarity Matching The above objective promotes alignment between the distributions but does not consider the local structure, e.g., the relationship between a sample and its augmented versions. For a faithful alignment between the perturbed source samples and the target class samples, we propose to also match the neighbourhood similarity distributions between the two domains. Specifically, consider a batch of target domain samples $\{x^i_t\}_{i=1}^{N}$ and a batch of perturbed source domain samples $\{x'^{i}_s\}_{i=1}^{N}$. For the case of $x'_s$, in a given training batch, we compute a similarity matrix $S^s$ whose elements encode the cosine similarity between the original sample and its augmented version $\tilde{x}'_s$, i.e., $S^s_{i,j} = \frac{D_\psi(x'^{i}_s) \cdot D_\psi(\tilde{x}'^{j}_s)}{\|D_\psi(x'^{i}_s)\|\,\|D_\psi(\tilde{x}'^{j}_s)\|}$. (5) In contrast, for the case of $x_t$, we compute the similarity between only the original target samples (no augmentations), as we need to model the local neighbourhood connectivity in the target domain. This choice is impractical for the source domain case, where many categories co-exist, while for the target distribution we assume a single category. Thus the target similarity matrix $S^t$ is computed as $S^t_{i,j} = \frac{D_\psi(x^i_t) \cdot D_\psi(x^j_t)}{\|D_\psi(x^i_t)\|\,\|D_\psi(x^j_t)\|}$. (6) The resulting similarity matrices are normalized along the row dimension with a softmax to obtain probability estimates, $\bar{S}_{i,j} = \frac{\exp(S_{i,j})}{\sum_k \exp(S_{i,k})}$, where $S \in \{S^s, S^t\}$. (7) Here, each term shows the probability with which the two sample pairs are related to each other. Given $\bar{S}^s$ and $\bar{S}^t$, we compute the KL divergence to enforce a loss term that seeks to match the local neighbourhood patterns between the source and target domains: $\mathcal{L}_{sim} = \sum_{i,j} \bar{S}^t_{i,j} \log \frac{\bar{S}^t_{i,j}}{\bar{S}^s_{i,j}} + \sum_{i,j} \bar{S}^s_{i,j} \log \frac{\bar{S}^s_{i,j}}{\bar{S}^t_{i,j}}$. (8) 3.4. Overall Loss Function Finally, the generator parameters are updated by minimizing the following loss (Algorithm 1): $\mathcal{L}_G = \mathcal{L} + \mathcal{L}_{aug} + \mathcal{L}_{sim}$. (9) This loss encourages the generator to perturb source samples such that they not only match the global characteristics of the target distribution ($\mathcal{L} + \mathcal{L}_{aug}$) but also the local information based on neighbourhood connectivity ($\mathcal{L}_{sim}$). 4. Experiments Our generator $G_\theta$ is based on the ResNet architecture [17] and outputs an adversarial sample with the same size as the input (Fig. 3). This generator architecture is the same as in the baseline generative attacks [36, 32]. Our discriminator $D_\psi$ is pretrained in a supervised or self-supervised manner. For training $G_\theta$, we freeze $D_\psi$. We use the Adam optimizer [19] with a learning rate of $10^{-4}$ ($\beta_1 = 0.5$, $\beta_2 = 0.999$) for 20 epochs. For source domain data, we use 50k random images from the ImageNet train set. Our method is not sensitive to the choice of source samples, since it can learn transferable perturbations even from other domains, e.g., Paintings. Similar to other generative methods [36, 32], we fix the source data. For target domain data, we use 1300 images for each target, collected from the ImageNet training set (without their original labels). We used the default settings or implementations as provided by the authors of the baseline attacks. Similarly, we used open-sourced (pretrained) stylized [7], adversarial [40] and purifier (NRP) [30] models to evaluate robustness. 4.1. Evaluation Settings We perform inference on the ImageNet validation set (50k samples). No augmentations are applied at inference time. The perturbation budget is tightly bounded and clearly mentioned in each experiment, following the standard practices $l_\infty \leq 16$ [4, 15, 14] and $l_\infty \leq 32$ [32, 23]. We perturb all the ImageNet val. samples (except the target samples) to the pre-defined target class. We repeat this process for all the given targets and report Top-1 (%) accuracy averaged across all targets. We compare our method under two main settings (10-Targets and 100-Targets), as described below. 10-Targets: We further consider two settings. (a) 10-Targets (subset-source), which is consistent with [13] and has a subset of source classes at inference. (b) 10-Targets (all-source), which is a more challenging large-scale setting Naturally Trained (IN) Models Src.
Attack VGG19BN Dense121 ResNet50 ResNet152 WRN-50-2 VGG19BN PGD [26] 95.67\u2217 0.31 0.30 0.20 0.25 MIM [4] 99.91\u2217 0.92 0.68 0.36 0.47 DIM [44] 99.38\u2217 3.10 2.08 1.02 1.29 DIM-TI [5] 89.71\u2217 1.08 0.66 0.42 0.45 Po-TRIP [21] 99.40\u2217 4.61 3.21 1.78 2.01 GAP [36] 98.23\u2217 16.19 15.83 5.89 7.78 CDA [32] 98.30\u2217 16.26 16.22 5.73 8.35 Ours-P 97.38\u2217 45.53 42.90 26.72 31.00 Ours 98.54\u2217 45.77 45.87 27.18 32.63 Vens Ours 97.34\u2217 71.41 71.68 50.78 48.03 Dense121 PGD [26] 1.28 97.40\u2217 1.78 1.01 1.37 MIM [4] 1.85 99.90\u2217 2.71 1.68 1.88 DIM [44] 7.31 98.81\u2217 9.06 5.78 6.29 DIM-TI [5] 0.91 88.59\u2217 1.18 0.77 0.86 Po-TRIP [21] 8.10 99.00\u2217 11.21 7.83 8.50 GAP [36] 39.01 97.30\u2217 47.85 39.25 34.79 CDA [32] 42.77 97.22\u2217 54.28 44.11 46.01 Ours-P 57.91 97.41\u2217 71.35 55.57 53.45 Ours 58.90 97.61\u2217 68.72 57.11 56.80 Dens Ours 76.96 96.25\u2217 88.81 83.48 81.85 ResNet50 PGD [26] 0.92 1.38 93.74\u2217 1.86 1.89 MIM [4] 1.58 3.37 98.76\u2217 3.39 3.17 DIM [44] 9.14 15.47 99.01\u2217 12.45 12.61 DIM-TI [5] 0.79 2.12 88.91\u2217 1.47 1.45 Po-TRIP [21] 12.01 19.43 99.22\u2217 14.41 15.10 GAP [36] 58.47 71.72 96.81\u2217 64.89 61.82 CDA [32] 64.58 73.57 96.30\u2217 70.30 69.27 Ours-P 73.09 84.76 96.63\u2217 76.27 75.92 Ours 78.15 81.64 97.02\u2217 80.56 78.25 Rens Ours 90.43 94.39 96.67\u2217 95.48\u2217 92.63 Table 1: Target Transferability: {10-Targets (all-source)} Top1 target accuracy (%) averaged across 10 targets with 49.95K ImageNet val. samples. Perturbation budget: l\u221e\u226416. Our method outperforms previous instance-speci\ufb01c as well as generative approaches by a large margin. \u2019*\u2019 indicates white-box attack. Ours-P represents TTP trained on Paintings. as source images can come from all the ImageNet classes except the target class. For consistency and direct comparison, the ten target classes are same as in [13]. \u221210-Targets (subset-source): Following [13], for each target class, 450 source samples belonging to remaining 9 classes (except target class) become inputs to G\u03b8 to be transferred to the selected target. \u221210-Targets (all-source): For each target class, samples of all 999 source classes (except the target class) in ImageNet val. set are considered i.e., for each target class, 49,950 samples of 999 classes become inputs to G\u03b8. 100-Targets (all-source): We divide ImageNet 1k classes into 100 mutually exclusive sets. Each set contains 10 classes. We randomly sample 1 target from each set to create 100 targets (see Appendix E for more details). Generators are trained against these targets and evaluated on ImageNet val. set in 100-Targets (all-source) setting with the same protocol as described for 10-Targets (all-source). \fIndigo Bunting Cardoon Impala Wood Rabbit Crane Elephant Fire Truck Fire Truck Fire Truck Fire Truck Fire Truck Fire Truck Figure 3: Targeted adversaries produced by a TTP generator learned to maximize the agreement with \u2019Fire Truck\u2019 distribution against Dense121 ImageNet model. 1st and 2nd rows show clean images and unrestricted outputs of the adversarial generator, respectively. 3rd row shows adversaries after valid projection. See Appendix F for more qualitative examples including comparisons between targeted patterns learned by TTP from different source models of a certain family of networks. Src. 
Attack VGG19BN Dense121 ResNet50 VGG19BN AA [15] \u2013 0.8 0.6 FDA-fd [13] \u2013 3.0 2.1 FDAN [14] \u2013 6.0 5.4 CDA [32] \u2013 17.82 17.09 Ours-P \u2013 48.56 44.47 Ours \u2013 48.29 47.07 Dense121 AA [15] 0.0 \u2013 0.0 FDA-fd [13] 34.0 \u2013 34.0 FDAN [14] 42.0 \u2013 48.3 CDA [32] 44.84 \u2013 53.73 Ours-P 59.81 71.32 Ours 61.75 \u2013 69.60 ResNet50 AA [15] 1.1 2.0 \u2013 FDA-fd [13] 16.0 21.0 \u2013 FDAN [14] 32.1 48.3 \u2013 CDA [32] 68.55 75.68 \u2013 Ours-P 75.18 85.71 Ours 79.04 84.42 \u2013 Table 2: Target Transferability:{10-Targets (sub-source)} Top1 accuracy (%) across 10 targets. Our method shows signi\ufb01cant improvements in trasfering target perturbations compared to generative as well as feature based instance-speci\ufb01c method [13, 14]. Perturbation budget:l\u221e\u226416. Only black-box attack results are shown. Ours-P represents TTP trained on Paintings. 4.2. Attack Protocols and Results We evaluate black-box target transferability in the following scenarios. (a) Unknown Target Model: Attacker has access to a pretrained discriminator trained on labeled data but has no knowledge about the architecture of the target model. (b) Unknown Decision Space: Attacker has access to the pre-trained discriminator trained on unlabeled data in an unsupervised manner but does not know about the architecture and the class-boundary information learned by the target model. (c) Unknown Defense: Attacker is unaware of the type of defense deployed at the target model, or if any defense is applied at all, e.g., the defense can be an input processing approach or a robust training mechanism such as adversarial training. 4.2.1 Unknown Target Model Natural Training: We evaluate naturally trained ImageNet models and show strong empirical results in Tables 1, 2 & 3 demonstrating that generative methods are far superior than sample-speci\ufb01c targeted attacks based on boundary information [21, 4, 44] or feature exploitation [15, 13, 14]. Our approach has signi\ufb01cantly higher target transferability rates than previous generative methods [32, 36]. To highlight an example from Table 2, our method achieves 47.07% transferability from VGG19BN to ResNet50 which is 175% and 771% better than the previous best generative [32] and sample-speci\ufb01c [14] target attacks, respectively. Ensemble Effect: We also train generators with our algorithm on the ensembles of same-family discriminators. Speci\ufb01cally, we de\ufb01ne the following ensembles: Vens:VGG{11,13,16,19}BN,Rens:ResNet{18,50,101, 152}, and Dens:DenseNet{121,161,169,201}. The purpose of such ensembles is to understand if the combination of weak individual models from the same family can provide strong learning for the target distributions. From Table 1, we observe that modeling target distribution from an ensemble provides signi\ufb01cantly better tranferability than any individual discriminator (see Appendix A for more analysis). This signi\ufb01es that an attacker can use multiple variants of the same network to boost the attack. Target Transferability and Model Disparity: We note that within a speci\ufb01c family, transferring targeted perturbations from a smaller model to a larger one (e.g. ResNet18 \u2192ResNet152 or VGG11BN \u2192VGG19) is dif\ufb01cult as we increase the size discrepancy. Interestingly, this trend remains the same even from larger to smaller models i.e., the attack strength will increase with the disparity between models rather than only depending upon the strength of target model. 
For example, target transferability ResNet152 → ResNet50 is higher than ResNet152 → ResNet18 even though ResNet18 is weaker than ResNet50 (Fig. 4). Similar behaviour can be observed within cross-family models, i.e., target transferability from ResNet50 to Dense121 and vice versa is higher than for VGG19BN, as both models share skip connections (Table 1). See Appendix B for the vulnerability of models with and without batch-norm [16]. Figure 4: Within Family Target Transferability {10-Targets (all-source) settings}: These results indicate that our approach boosts target transferability within different models of the same family, with or without batch-norm, and favorably beats the previous generative approaches (GAP [36], CDA [32]) by a large margin. Each value is averaged across 10 targets (Sec. 4) with 49.95k ImageNet val. samples for each target. Perturbation budget is set to $l_\infty = 16$. Figure 5: Target Transferability of Unsupervised Features {10-Targets (all-source) settings}: Our approach, when applied to unsupervised features (MoCo [9]), surpasses GAP [36] and CDA [32], which are dependent on the classification layer by design. Perturbation budget is $l_\infty = 16$. 4.2.2 Unknown Decision Space Here we investigate the question, "Can unsupervised features provide targeted adversarial perturbations?" A unique property of our proposed approach is that it can be applied to the feature space without any class-boundary information to achieve a targeted adversarial direction. This allows an attacker to benefit from recently proposed unsupervised feature learning methods [9, 1]. Rather than using a discriminator trained on large-scale labelled data, the attack can be learned and launched from the features of a discriminator trained purely in an unsupervised fashion. Therefore, our attack can eliminate the cost of label annotations. Results in Fig. 5 demonstrate that our method learned from unsupervised features, MoCo [9], not only provides target transferability but surpasses the previous generative methods which are dependent on a discriminator trained on labelled data. 4.2.3 Unknown Defense Mechanisms Input Processing as a Defense: We evaluate the robustness of different input processing based adversarial defense methods in Fig. 6.
We consider the following four representative defenses: a) JPEG with compression quality set to 50% [2], b) DNN-Oriented JPEG compression [25], c) Median Blur with window size set to 5\u00d75 [31], and d) Neural representaAttack VGG19BN Dense121 ResNet-152 WRN-50-2 SIN [7] GAP [36] 47.87 58.10 54.72 49.65 7.1 CDA [32] 53.41 60.34 57.67 51.23 7.6 Ours 69.55 77.48 75.74 74.61 31.0 Table 3: Target Transferability: {100-Targets (all-source)} Top1 target accuracy (%) averaged across 100 targets with 49.95K ImageNet val. samples per target . Generators are trained against ResNet50. Perturbation budget is l\u221e\u226416. \u03f5 Attack Augmix Stylized [7] Adversarial [40] [10] SIN-IN SIN l\u221e l2 \u03f5=.5 \u03f5=1 \u03f5=.1 \u03f5=.5 16 GAP [36] 51.57 76.92 12.96 1.88 0.34 23.41 0.92 CDA [32] 59.79 75.93 9.21 2.10 0.39 23.89 1.18 Ours 73.09 87.40 30.17 4.63 0.56 45.40 1.99 Oursens 88.79 92.96 57.75 14.23 1.24 74.95 7.62 32 GAP [36] 54.86 81.15 28.07 26.32 6.36 59.04 16.53 CDA [32] 63.18 76.81 19.65 27.60 6.74 57.54 16.07 Ours 78.66 91.27 41.52 46.82 16.35 75.97 30.94 Oursens 89.96 94.15 70.70 70.22 34.21 90.42 58.25 Table 4: Target Transferability: {10-Targets (all source) settings} Top-1 (%) target accuracy. Generators are trained against naturally trained ResNet50 or ResNet ensemble. Perturbation are then transferred to ResNet50 trained using different methods including Augmix [10], Stylized [7] or adversarial [40]. tion puri\ufb01er (NRP) [30] which is a state-of-the-art defense. Generators are trained against naturally trained ResNet50 and target perturbations are then transferred to VGG19BN and Dense121 which are protected by the input processing defenses. We observe (Fig. 6) that JPEG is the least effec\f16 20 24 28 32 Pertubation Budget ( ) 30 40 50 60 70 Target Accuracy (%) ResNet50 JPEG: VGG19BN GAP CDA Ours 16 20 24 28 32 Perturbation Budget ( ) 20 30 40 50 60 Target Accuracy (%) ResNet50 JPEG-DNN: VGG19BN GAP CDA Ours 16 20 24 28 32 Pertubation Budget ( ) 15 20 25 30 35 40 45 Target Accuracy (%) ResNet50 Blur: VGG19BN GAP CDA Ours 16 20 24 28 32 Pertubation Budget ( ) 0 10 20 30 40 50 60 Target Accuracy (%) ResNet50 NRP: VGG19BN GAP CDA Ours 16 20 24 28 32 Pertubation Budget ( ) 30 40 50 60 70 80 Target Accuracy (%) ResNet50 JPEG: Dense121 GAP CDA Ours 16 20 24 28 32 Perturbation Budget ( ) 20 25 30 35 40 45 50 55 60 Target Accuracy (%) ResNet50 JPEG-DNN: Dense121 GAP CDA Ours 16 20 24 28 32 Pertubation Budget ( ) 10 15 20 25 30 35 40 45 Target Accuracy (%) ResNet50 Blur: Dense121 GAP CDA Ours 16 20 24 28 32 Pertubation Budget ( ) 0 10 20 30 40 50 60 70 Target Accuracy (%) ResNet50 NRP: Dense121 GAP CDA Ours Figure 6: Target Transferability against Input Processing Defenses: {10-Targets (allsource) settings} Input processing including NRP [30] are broken under targeted blackbox attacks. Our method outperforms GAP [36] and CDA [32] on all the considered defenses including JPEG, JPEG-DNN [25], Median Blur and NRP [30]. Each point is an averaged across 10 targets (Sec. 4) with 49.95k ImageNet val. samples for each target. Generators are trained against ResNet50. tive method against target attacks while JEPG-DNN [25] performs relatively better than JPEG. Compared to JPEG, JPEG-DNN and Median blur, NRP shows better resistance to target attacks at l\u221e\u226416 but quickly breaks as perturbation is increased. Median blur shows more resistance than JPEG, JPEG-DNN and NRP at higher perturbation rates (l\u221e\u226432)1. 
Success rate of our method is much better than previous generative attacks [36, 32] even when the target model and the input processing remain unknown (Fig. 6). Robust Training Mechanism: Here we study the transferability of our approach against various robust training methods (augmented vs. stylized vs. adversarial) based defense strategies. Augmentation based training can make the model robust to natural corruptions [10] while training on stylized ImageNet [7] improves shape bias and training on adversarial examples can improve robustness against adversarial attacks at the cost of computation, clean accuracy, and generalization to global changes [6]. We evaluate the vulnerability of these training methods in Table 4. Generators are trained against naturally trained ResNet50 or ResNet ensemble and adversarial perturbations are then transferred to ResNet50 trained using Augmix [10], Stylized ImageNet (SIN) [7], mixture of Stylized and natural ImageNet (SININ) and adversarial examples [40]. Target transferability can easily be achieved against models trained on mixture (SIN-IN), however, the model trained on stylized images (SIN) shows higher resistance but remains vulnerable as our target attack (ensemble) achieves \u224871% success at perturbation of l\u221e= 32 (Table 4). Adversarially trained models using Madry\u2019s method [26] are more robust to target attacks. 4.3. Ablative Analysis In order to understand the effect of each component of our approach, we present an ablative study in Fig. 7. Target perturbations are transferred from ResNet50 to VGG16 (SIN) trained on stylized ImageNet which is a much harder task than transferring to naturally trained VGG16. We observe that training TTP on only distribution matching loss (Eq. 3) increases the transferability by more than 100% in 1Blur defense causes large drop in clean accuracy (see Appendix D). GAP CDA Ours Ours Ours Ours Attack 0 5 10 15 20 25 30 Target Accuracy (%) ResNet50 VGG16 (SIN) CE RCE without with + aug + aug + sim 1 3 5 7 9 11 13 15 17 19 Epochs 0 5 10 15 20 25 30 Target Accuracy (%) ResNet50 VGG16 (SIN) GAP CDA Ours Figure 7: Ablation: We dissect effect of each component of our method including novel losses, augmentation, smooth projection and epochs. Results are presented with 10-Target (all source) settings. Perturbation budget is set to l\u221e= 16. comparison to GAP [36] (with cross-entropy) or CDA [32] (with relativistic cross-entropy). Adding smoothing operator W enhances the ef\ufb01ciency of TTP. W is a differentiable Gaussian kernel with size 3\u00d73. We then noticed a signi\ufb01cant jump in transferability when augmentations are introduced and TTP is trained using both distribution matching losses (Eq. 3 & 4) which is further complemented by neighbor similarity loss (Eq. 8). Our generator trained for only one epoch outperforms GAP and CDA trained for 20 epochs (Fig. 7) which highlights our rapid convergence rate. 5." + }, + { + "url": "http://arxiv.org/abs/2006.04924v1", + "title": "A Self-supervised Approach for Adversarial Robustness", + "abstract": "Adversarial examples can cause catastrophic mistakes in Deep Neural Network\n(DNNs) based vision systems e.g., for classification, segmentation and object\ndetection. The vulnerability of DNNs against such attacks can prove a major\nroadblock towards their real-world deployment. 
Transferability of adversarial\nexamples demand generalizable defenses that can provide cross-task protection.\nAdversarial training that enhances robustness by modifying target model's\nparameters lacks such generalizability. On the other hand, different input\nprocessing based defenses fall short in the face of continuously evolving\nattacks. In this paper, we take the first step to combine the benefits of both\napproaches and propose a self-supervised adversarial training mechanism in the\ninput space. By design, our defense is a generalizable approach and provides\nsignificant robustness against the \\textbf{unseen} adversarial attacks (\\eg by\nreducing the success rate of translation-invariant \\textbf{ensemble} attack\nfrom 82.6\\% to 31.9\\% in comparison to previous state-of-the-art). It can be\ndeployed as a plug-and-play solution to protect a variety of vision systems, as\nwe demonstrate for the case of classification, segmentation and detection. Code\nis available at: {\\small\\url{https://github.com/Muzammal-Naseer/NRP}}.", + "authors": "Muzammal Naseer, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Fatih Porikli", + "published": "2020-06-08", + "updated": "2020-06-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Adversarial training (AT) has shown great potential to safeguard neural networks from adversarial attacks [33, 40]. So far in literature, AT is performed in the model space i.e., a model\u2019s parameters are modi\ufb01ed by minimizing empirical risk for a given data distribution as well as the perturbed images. Such AT strategy results in the following challenges. (a) Task dependency: AT is task-dependent e.g. robust classi\ufb01cation models cannot directly be incorporated into an object detection or a segmentation pipeline, since the overall system would still require further training Purifier Network Until Converge Perturbed Image Clean Image Purified Image Perceptual Feature Space Maximize Feature Distortion \u2206G Minimize Feature Distortion \u2206H \u2206G SSP \u2206H Figure 1: Our main idea is to train a Puri\ufb01er Network in a selfsupervised manner. We generate perturbed images using our proposed Self-supervised Perturbation (SSP) attack that disrupts the deep perceptual features. The Puri\ufb01er Network projects back the perturbed images close to the perceptual space of clean images. This creates a training loop independent of the task or label space. with modi\ufb01ed task-dependant loss functions. (b) Computational cost: AT is computationally expensive [33] which restricts its applicability to high-dimensional and large-scale datasets such as ImageNet [39]. (c) Accuracy drop: models trained with AT lose signi\ufb01cant accuracy on the original distribution e.g. ResNet50 [19] accuracy on ImageNet validation set drops from 76% to 64% when robusti\ufb01ed against PGD attack [33] at a perturbation budget of only \u03f5 \u22642 (i.e. maximum change in each pixel can be 2/255). (d) Label leakage: supervised AT suffers from label leakage [26] which allows the model to over\ufb01t on perturbations thus affecting model generalization to unseen adversaries [56]. In comparison to AT, input processing methods [16, 50] for adversarial defense are scalable and can work across different tasks. However, they have been broken in white-box 1 arXiv:2006.04924v1 [cs.CV] 8 Jun 2020 \fsettings [2] and shown to be least effective in black-box settings. 
For example, [10] successfully transfer their attack against multiple input processing based defenses even when the backbone architecture is adversarially trained using [48]. Furthermore, input transformations (e.g., Gaussian smoothing and JPEG compression) can maximize the attack strength instead of minimizing it [37, 10]. Motivated by the complementary strengths of AT and input processing methods, we propose a self-supervised AT mechanism in the input space. Our approach (Fig. 1) uses a min-max (saddle point) formulation to learn an optimal input processing function that enhances model robustness. In this way, our optimization rule implicitly performs AT. The main advantage of our approach is its generalization ability, once trained on a dataset, it can be applied offthe-shelf to safeguard a completely different model. This makes it a more attractive solution compared to popular AT approaches that are computationally expensive (and thus less scalable to large-scale datasets). Furthermore, in comparison to previous pre-processing based defenses that are found to be vulnerable towards recent attacks, our defense demonstrates better robustness. Our main contributions are: \u2022 Task Generalizability: To ensure a task independent AT mechanism, we propose to adversarially train a purifying model named Neural Representation Puri\ufb01er (NRP). Once trained, NRP can be deployed to safeguard across different tasks, e.g., classi\ufb01cation, detection and segmentation, without any additional training (Sec. 3). \u2022 Self-Supervision: The supervisory signal used for AT should be self-supervised to make it independent of label space. To this end, we propose an algorithm to train NRP on adversaries found in the feature space in random directions to avoid any label leakage (Sec. 3.1). \u2022 Defense against strong perturbations: Attacks are continuously evolving. In order for NRP to generalize, it should be trained on worst-case perturbations that are transferable across different tasks. We propose to \ufb01nd highly transferable perceptual adversaries (Sec. 4.3). \u2022 Maintaining Accuracy: A strong defense must concurrently maintain accuracy on the original data distribution. We propose to train the NRP with an additional discriminator to bring adversarial examples close to original samples by recovering the \ufb01ne texture details (Sec. 4.2). 2. Related Work Defenses: A major class of adversarial defenses processes the input images to achieve robustness against adversarial patterns. For example, [16] used JPEG compression to remove high-frequency components that are less important to human vision using discrete cosine transform. A compressed sensing approach called Total Variation Minimization (TVM) was proposed in [16] to remove the small localized changes caused by adversarial perturbations. Xie et al. [51] introduced the process of Random Resizing and Padding (R&P) as a pre-processing step to mitigate the adversarial effect. A High-level representation Guided Denoiser (HGD) [29] framework was used as a pre-processing step to remove perturbations. NeurIPS 2017 Defense Competition Rank-3 (NeurIPS-r3) approach [47] introduced a two step prep-processing pipeline where the images \ufb01rst undergo a series of transformations (JPEG, rotation, zoom, shift and sheer) and then passed through an ensemble of adversarially trained models to obtain the weighted output response as a prediction. [41] proposed to recover adversaries using GAN and [35] super-resolve images to minimize adversarial effect. 
As compared to the above defenses, we design an input processing model that derives a selfsupervised signal from the deep feature space to adversarially train the defense model. Our results show signi\ufb01cantly superior performance to all so-far developed input processing based defenses. Attacks: The self-supervised perturbation signal obtained to adversarially train our proposed approach can also be used as an adversarial attack. Since the seminal work of Szegedy et al. [46], many adversarial attack algorithms [14, 15, 3, 9] have been proposed to show the vulnerability of neural networks against imperceptible changes to inputs. A single-step attack, called Fast Gradient Sign Method (FGSM), was proposed in [14]. In a follow-up work, Kurakin et al. [15] proposed a robust multi-step attack, called Iterative Fast Gradient Sign Method (I-FGSM) that iteratively searches the loss surface of a network under a given metric norm. To improve transferability, a variant of I-FGSM, called momentum iterative fast gradient sign method (MI-FGSM), was introduced [9], which signi\ufb01cantly enhanced the transferability of untargeted attacks on ImageNet dataset [39] under a l\u221enorm budget. More recently, [53] proposed a data augmentation technique named input diversity method (DIM) to further boost the transferability of these attack methods. In contrast to our selfsupervised attack approach, all of these methods are supervised adversarial attacks that rely on cross-entropy loss to \ufb01nd the deceptive gradient direction. 3. Neural Representation Puri\ufb01er Our defense aims to combine the bene\ufb01ts of adversarial training and input processing methods in a single framework that is computationally ef\ufb01cient, generalizable across different tasks and retains the clean image accuracy. The basic intuition behind our defense mechanism is to effectively use information contained in the feature space of deep networks to obtain an automatic supervisory signal. To this end, we design a Neural Representation Puri\ufb01er (NRP) model that learns to clean adversarially perturbed images based on the automatically derived (self) supervision. The objective is to recover the original benign image x \fFigure 2: Neural Representation Puri\ufb01er. Using a self-supervision signal, the proposed defense learns to purify perturbed images, such that their corresponding perceptual representation in deep feature space becomes close to clean natural images. given an input adversarial image x\u2032. We wish to remove the adversarial patterns by training a neural network P\u03b8 parameterized by \u03b8, which we refer as the puri\ufb01er network. The main objective is to be independent of the task-speci\ufb01c objective function, such that once trained, the proposed defense is transferable to other models (even across tasks). Towards this end, the network P\u03b8 is trained in an adversarial manner by playing a game with the critic network C\u03c6, and a feature extractor F\u03c8 (see Fig. 2). The function of the puri\ufb01er and critic networks is similar to generator and discriminator in a traditional Generative Adversarial Network (GAN) framework, with the key difference that in our case, P\u03b8 performs image restoration instead of image generation. The feature extractor, F\u03c8, is pretrained on ImageNet and remains \ufb01xed, while the other two networks are optimized during training. 
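To make the roles of the three networks concrete, a minimal PyTorch sketch of how they could be wired is shown below; the `purifier` and `critic` modules are user-supplied placeholders, and the VGG-16 backbone truncated around conv3.3 follows the feature-extractor description and layer analysis given later in the text (torchvision >= 0.13 assumed).

```python
import torch.nn as nn
from torchvision.models import vgg16

def build_nrp_components(purifier: nn.Module, critic: nn.Module):
    """Wire the three players of NRP: trainable purifier P_theta, trainable
    critic C_phi, and a frozen ImageNet-pretrained VGG feature extractor F_psi.

    `purifier` and `critic` are assumed, user-supplied modules; only they are
    optimized during training, while the feature extractor stays fixed.
    """
    features = vgg16(weights="IMAGENET1K_V1").features[:16].eval()  # up to ~conv3.3 (assumed cut)
    for p in features.parameters():
        p.requires_grad_(False)  # F_psi provides the self-supervision signal, never trained
    return purifier, critic, features
```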
Adversarial examples x\u2032 are created by maximizing the F\u03c8\u2019s response in random directions de\ufb01ned by a distance measure (Algorithm 1), while at minimization step, P\u03b8 tries to recover the original sample x by minimizing the same distance (Algorithm 2). 3.1. Self-Supervision The automatic supervision signal to train NRP defense is obtained via a loss-agnostic attack approach. Below, we \ufb01rst outline why such a Self-Supervised Perturbation (SSP) is needed and then describe our approach. Motivation: Strong white-box attacks [15, 6], that are generally used for AT, consider already-known network parameters \u03b8 and perturb the inputs to create x\u2032, such that they are misclassi\ufb01ed by the target model, i.e. T (x\u2032; \u03b8) \u0338= y. Since the perturbations are calculated using gradient directions speci\ufb01c to \u03b8, the resulting perturbed images x\u2032 do not generalize well to other networks [9, 43, 9, 53, 58]. This dependency limits these attacks to a speci\ufb01c network and task. In contrast, our goal is to design a self-supervised perturbation mechanism that can generalize across networks and tasks, thus enabling a transferable defense approach. 2 3 4 5 6 7 8 9 10 Number of iterations 0 10 20 30 40 50 60 I-FGSM: Fooling Rate I-FGSM: Feature Distortion MI-FGSM: Fooling Rate MI-FGSM: Feature Distortion Figure 3: Fooling rate of Inc-v4 and average feature distortion is shown for adversaries generated on Inc-v3 (black-box setting) by I-FGSM and MI-FGSM. As the number of iterations increases, fooling rate of I-FGSM decreases along with its feature distortion while MI-FGSM maintains its distortion as iterations increase. The self-supervised perturbation is based on the concept of \u2018feature distortion\u2019, introduced next. Feature Distortion: Given a clean image x and its perturbed counterpart x\u2032 that is crafted to fool the target model T (\u00b7), the feature distortion refers to the change that x\u2032 causes to the internal representations of a neural network F(\u00b7) relative to x. This can be represented by, \u2206(x, x\u2032) = d \u0000F(x; \u03b8)|n, F(x\u2032; \u03b8)|n \u0001 , (1) where, F(x; \u03b8)|n denotes the internal representation obtained from the nth layer of a pretrained deep network F(\u00b7) and d(\u00b7) is a distance metric which can be \u2113p [14], Wasserstein distance [1] or cosine similarity between the features of the original and perturbed sample. The reason why we base our self-supervised perturbation on feature distortion is its direct impact on the perturbation transferability. To show this, we conduct a proofof-concept experiment by generating adversarial examples \fAlgorithm 1 SSP: Self-Supervised Perturbation Require: A feature extractor F\u03c8, batch of clean samples x, input transformation R, perturbation budget \u03f5, step-size \u03ba, and number of iterations T. Ensure: Perturbed sample x\u2032 with \u2225x\u2032 \u2212x\u2225\u221e\u2264\u03f5. 1: g0 = 0; x\u2032 = R(x); 2: for t = 1 to T do 3: Forward pass x\u2032 t to F\u03c8 and compute \u2206using Eq. 1; 4: Compute gradients gt = \u2207x \u2206(xt, x\u2032); 5: Generate adversaries using; x\u2032 t+1 = x\u2032 t + \u03ba \u00b7 sign(gt); (2) 6: Project adversaries in the vicinity of x x\u2032 t+1 = clip(x\u2032 t+1, x \u2212\u03f5, x + \u03f5); (3) 7: end for 8: return x\u2032 = x\u2032 T . on ImageNet-NeurIPS [7]. 
We consider two popular attack methods, MI-FGSM [9] and I-FGSM [15], among which MI-FGSM has higher transferability compared to IFGSM. Interestingly, feature distortion strength of I-FGSM decreases as the number of attack iterations increases, compared to MI-FGSM (Fig. 3). MI-FGSM maintains its perturbation strength with increasing number of iterations. This indicates that feature distortion has a direct impact on transferability and therefore maximizing the objective in Eq. 1 (signifying feature-space distortion) can boost the transferability of adversarial examples without using any decision boundary information. Based on this observation, our proposed perturbation generation approach directly maximizes the distortion in deep feature space to create strong, highly generalizable and task-independent adversarial examples. Self-supervised Perturbation: Conventional black-box attacks operate in the logit-space of deep networks. The objective of \u2018logit-based\u2019 adversarial attacks is to change the target model\u2019s prediction for a clean image T (x) \u0338= T (x\u2032) such that x\u2032 is bounded: \u2225x \u2212x\u2032\u2225\u2264\u03f5. In contrast to these methods, we propose to \ufb01nd adversaries by maximizing the feature loss (Sec. 3.2) of neural networks. Our approach does not rely on decision-boundary information since our \u2018representation-based\u2019 attack directly perturbs the feature space by solving the following optimization problem: max x\u2032 \u2206(x, x\u2032) subject to: \u2225x \u2212x\u2032\u2225\u221e\u2264\u03f5, (4) Our proposed method to maximize feature distortion for a given input sample is summarized in Algorithm 1. We apply a transformation R to input x at the \ufb01rst iteration (Algorithm 1) to create a neural representation difference between an adversarial and benign example and then maximize the difference within a given perturbation budget. There can be different choices for R but in this work, R simply adds random noise to the input sample, i.e. our algorithm takes a random step at the \ufb01rst iteration. Algorithm 2 NRP: Neural Representation Puri\ufb01cation via Self-Supervised Adversarial Training Require: Training data D, Puri\ufb01er P\u03b8, feature extractor F\u03c8, critic network C\u03c6, perturbation budget \u03f5 and loss criteria L. Ensure: Randomly initialize P\u03b8 and C\u03c6. 1: repeat 2: Sample mini-batch of data, x, from the training set. 3: Find adversaries, x\u2032, at a given perturbation budget \u03f5 by maximizing distance, \u2206(Eq. 1), using Algorithm 1. 4: Forward-pass x\u2032 through P\u03b8 and calculate LP\u03b8 (Eq. 8). 5: Back-pass and update \u03b8 to minimize LP\u03b8 (Eq. 8). 6: Update C\u03c6 to classify x from P\u03b8(x\u2032). 7: until P\u03b8 converges. 3.2. NRP Loss functions We propose a hybrid loss function that is used to train the puri\ufb01er network (see Algorithm 2). This loss function consists of three terms that we explain below: Feature loss: The Self-supervised Perturbation (SSP) generated by Algorithm 1 is the direct result of increasing the feature loss function, \u2206, de\ufb01ned on the feature extractor F\u03c8. In order to learn the puri\ufb01er network, we must decrease this distance as follows: Lfeat = \u2206 \u0000F\u03c8(x), F\u03c8(P\u03b8(x\u2032)) \u0001 , (5) where, \u2206is formally de\ufb01ned in Eq. 1, and the distance measure used to compute \u2206is the mean absolute error (MAE). 
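A minimal PyTorch sketch of the SSP attack of Algorithm 1, which maximizes the feature distortion of Eq. 1 under an l_inf budget; the MAE distance and the random-noise transform R follow the text, while the step size, iteration count and choice of feature layer are assumptions.

```python
import torch

def ssp_attack(x, feature_extractor, eps=16 / 255, step=2 / 255, iters=100):
    """Self-Supervised Perturbation: maximize feature distortion w.r.t. the clean x."""
    feats_clean = feature_extractor(x).detach()
    # R: a random step inside the l_inf ball (the text's R simply adds random noise)
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        distortion = (feature_extractor(x_adv) - feats_clean).abs().mean()  # Delta of Eq. 1 (MAE)
        grad = torch.autograd.grad(distortion, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()                      # ascent step (Eq. 2)
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # project near x (Eq. 3)
            x_adv = x_adv.clamp(0, 1)                               # valid pixel range (assumed [0, 1])
    return x_adv.detach()
```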
We empirically observe that removing Lfeat loss leads to a network that does not converge to a meaningful state and produces weaker defense (see Fig. 5). Pixel loss: Smoothing images can help in mitigating the adversarial effect since the perturbation patterns resemble to that of noise. Therefore, in order to encourage smoothness, we apply l2 loss in the image pixel space, Limg = \u2225P\u03b8(x\u2032) \u2212x\u22252. (6) Adversarial loss: Instead of using vanilla GAN objective, we use relativistic average GAN which has shown better convergence properties [23, 37]. For a given batch of original, x, and adversarial examples, x\u2032, the relativistic loss for the puri\ufb01er network P\u03b8 is given as: Ladv = \u2212log \u0000\u03c3 \u0000C\u03c6(P\u03b8(x\u2032)) \u2212C\u03c6(x) \u0001 \u0001 , (7) where \u03c3 represents the sigmoid layer. The overall loss objective for P\u03b8 is the combination of losses de\ufb01ned on pixel and feature spaces as well as the relativistic loss: LP\u03b8 = \u03b1 \u00b7 Ladv | {z } Adversarial loss + \u03b3 \u00b7 Limg | {z } P ixel loss + \u03bb \u00b7 Lfeat | {z } F eature loss . (8) The pixel and feature losses focus on restoring image content and style, while adversarial loss restores texture details. \fBlock3-Conv3 Fire8-Conv3 Before-Classifier 30 40 50 60 70 80 90 Fooling Rate Model-wise Comparison VGG16 SqueezeNet AlexNet Block2-Conv2 Block3-Conv3 Block4-Conv3 Block5-Conv3 30 40 50 60 70 80 90 Fooling Rate Layer-wise Comparison: VGG16 Figure 4: Fooling rate for Inc-v3 [26] on ImageNet-NeurIPS dataset. Adversaries are created by applying SSP (Algorithm 1) at different layers and best results for each model is selected. Perceptual adversaries found in VGG space has the highest transferability (further analysis is in supplementary material). 3.3. NRP Architecture Here, we outline the architecture of generator, feature extractor and discriminator blocks. Generator (P\u03b8): Our generator architecture is inspired by [27, 49]. It consists of a convolution layer followed by multiple \u201cbasic blocks\u201d. Each basic block is composed of 3 \u201cdense blocks\u201d and each dense block contains \ufb01ve convolutional layers followed by leaky-relu [54] and \ufb01nally a convolutional layer that has output with same dimension as input. Generally, adding a skip connection from input to generator\u2019s output helps in restoration tasks e.g., image super resolution [27] and deblurring [25]. However, in our case an important design criteria is to avoid such skip connection since our objective is to remove adversarial noise and a direct skip connection can potentially reintroduce harmful noise patterns. Feature Extractor (F\u03c8): It is a VGG [42] network pretrained on ImageNet. During training, F\u03c8 remains \ufb01xed while its response is maximized in random directions (adversary generation process) and minimized (puri\ufb01cation process) using a prede\ufb01ned distance metric. In our experiments, we demonstrate the effectiveness of VGG space for creating strong adversaries as compared to other deep architectures. Discriminator (C\u03c6): Our discriminator architecture is also based on VGG network [42]. It consists of \ufb01ve convolutional blocks containing convolutional layers followed by batch-norm and leaky-relu and then a fully connected layer. 3.4. 
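A hedged sketch of one training step of Algorithm 2: SSP adversaries are crafted on the fly, the purifier is updated on the hybrid loss of Eq. 8 (with the feature term of Eq. 5 shown explicitly; the pixel and relativistic adversarial terms are detailed in the next subsections), and the critic is updated to separate clean from purified images. The `ssp_attack` routine and the network modules are assumed from the earlier sketches; the loss weights follow the training details reported later, and the critic objective here is a relativistic-style simplification.

```python
import torch
import torch.nn.functional as F

def nrp_train_step(x, purifier, critic, features, ssp_attack,
                   opt_p, opt_c, eps=16 / 255,
                   alpha=5e-3, gamma=1e-2, lam=1.0):
    """One step of Algorithm 2 (a sketch, not the authors' implementation)."""
    # Step 3: craft self-supervised adversaries within the l_inf budget.
    x_adv = ssp_attack(x, features, eps=eps)

    # Steps 4-5: update the purifier P_theta on the hybrid loss (Eq. 8).
    x_pur = purifier(x_adv)
    l_feat = (features(x_pur) - features(x)).abs().mean()              # Eq. 5: MAE on F_psi features
    l_img = F.mse_loss(x_pur, x)                                       # pixel-space l2-type smoothness term
    l_adv = F.softplus(-(critic(x_pur) - critic(x.detach()))).mean()   # -log(sigmoid(C(P(x')) - C(x)))
    loss_p = alpha * l_adv + gamma * l_img + lam * l_feat
    opt_p.zero_grad(); loss_p.backward(); opt_p.step()

    # Step 6: update the critic C_phi to tell clean x apart from purified x.
    real, fake = critic(x), critic(x_pur.detach())
    loss_c = F.softplus(-(real - fake)).mean()                         # assumed relativistic-style critic loss
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()
    return loss_p.item(), loss_c.item()
```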
On Suitable Perceptual Adversaries The intuition to train NRP on boundary-agnostic perceptual adversaries is based on the extensive study [57] that found correlation of deep features with human perception. Speci\ufb01cally, [57] compares three models i.e. VGG [42], AlexNet [24] and SqueezeNet [21]. Following [57], we study these models from adversarial perspective by applying feature distortion at different layers in Fig. 4. Our \ufb01ndings are as follows: (a) VGG\u2019s perceptual adversaries are more transferable than AlexNet and SqueezeNet (a detailed transferability analysis on seen/unseen perturbations of VGG is in supplementary material), (b) under same feature distortion settings, adversaries found at different layers are not equally transferable e.g. conv3.3 (block 3, layer 3) features offer better adversarial transferability than the rest of the network. We believe this is because the initial VGG layers learn low-level features while the deeper ones become too speci\ufb01c to the label space. Further, we found that increasing the representation loss at multiple network layers does not notably increase attack success rate and adds a signi\ufb01cant computational overhead. Since NRP training process is agnostic to the label-space of the source model i.e., it neither depends on a particular task-speci\ufb01c loss function (e.g., cross entropy) nor on the ground-truth labels, this makes it a generic algorithm, which can defend a totally unseen model. Furthermore, we demonstrate that perturbations discovered with our SSP approach offer high transferability across models trained on different datasets and tasks. 4. Experiments 4.1. Training Details Training is done on randomly selected 25k images from MS-COCO data set. These images are resized to 480 \u00d7 480 \u00d7 3. Adversaries created using SSP are fed as inputs to NRP with their corresponding clean images used as target labels. During training, we randomly crop images of 128 \u00d7 128 \u00d7 3. Batch size is set to 16 and training is done on four Tesla v100 GPUs. Learning rates for generator and discriminator are set to 10\u22124, with the value of \u03b1 = 5 \u00d7 10\u22123, \u03b3 = 1 \u00d7 10\u22122 and \u03bb = 1. We study eight models trained on the ImageNet [39]. Five of these models are naturally trained. These include Inceptionv3 (Inc-v3) [45], Inceptionv4 (Inc-v4), Inception Resnet v2 (IncRes-v2) [44], Resnet v2-152 (Res-152) [20] and VGG-19 [42]. The other three models including Adv-v3 [26], Inc-v3ens3 and IncRes-v2ens [48] are adversarially trained. The speci\ufb01c details about these models can be found in [26, 48]. 4.2. Defense Results and Insights (a) Generalizability Across Attacks: Figs. 6, 7 & 8 demonstrate generalization ability of NRP to recover images from strong adversarial noise. Quantitative analysis in Table 1 shows that compared to previously broken defenses [10], NRP achieves strong robustness against stateof-the-art attacks [53, 10], bringing down the effectiveness of the ensemble translation-invariant attack with input diversity (DIMT I) [10] from 79.8% to 31.9%. (b) NRP as Cross-task Defense: In order to measure the cross-task defense capabilities, we deploy NRP against cross-domain attack (CDA) [37], a state-of-the-art attack that generates diverse cross-domain adversarial perturbations. 
Results in Table 2 demonstrate that NRP successfully removes all unseen perturbations and proves a generic cross-task defense for classi\ufb01cation, object detection and in\fTable 1: Robustness of different defense methods against stateof-the-art black-box attacks (lower is better). IncRes-v2ens is used as backbone model following [10]. NRP signi\ufb01cantly reduces the attack success rate. Adversaries (\u03f5 \u226416) are created against Incv3, Inc-v4, IncRes-v2, Res-v2-152 and Ensemble. Defenses Attacks FGSM FGSMT I MIFGSM MIFGSMT I DIM DIMT I Inc-v3 JPEG [17] 19.9 25.5 20.3 28.2 30.7 37.0 TVM [17] 18.8 30.7 19.4 34.9 24.4 44.2 NIPS-r3 [47] 9.8 24.5 12.9 26.7 18.0 41.4 R&P [50] 6.5 19.8 8.7 23.9 13.3 36.8 HGD [28] 2.1 18.4 6.9 25.7 9.7 38.3 APE-GAN [41] 19.6 28.0 17.9 30.4 23.6 38.6 SR [35] 23.0 36.7 23.6 38.3 32.5 49.0 NRP 3.2 4.8 4.5 9.1 5.1 11.0 Inc-v4 JPEG [17] 21.8 27.9 26.0 31.6 38.6 43.5 TVM [17] 19.9 31.8 24.8 38.4 29.1 45.6 NIPS-r3 [47] 11.5 24.6 15.6 29.5 14.1 41.9 R&P [50] 7.9 21.6 12.1 28.0 17.2 39.3 HGD [28] 2.6 18.1 9.6 27.8 32.4 58.7 APE-GAN [41] 21.1 28.8 20.7 32.8 25.0 39.0 SR [35] 25.3 34.1 29.2 42.3 39.3 52.3 NRP 3.1 4.4 4.8 10.3 5.2 12.5 IncRes-v2 JPEG [17] 24.7 32.4 31.6 45.9 47.2 55.7 TVM [17] 23.4 38.5 34.4 55.4 41.7 66.2 NIPS-r3 [47] 13.3 31.4 22.7 46.2 37.6 61.5 R&P [50] 9.9 28.1 18.6 45.2 30.2 61.4 HGD [28] 3.9 25.4 19.6 45.1 32.4 58.7 APE-GAN [41] 24.7 36.8 30.4 50.5 36.3 60.5 SR [35] 27.6 42.4 42.6 62.1 54.3 72.2 NRP 3.5 6.9 7.6 18.7 7.5 20.8 Res-v2-152 JPEG [17] 24.0 32.7 31.2 38.3 42.4 50.8 TVM [17] 22.0 38.1 24.5 41.2 36.8 55.7 NIPS-r3 [47] 12.5 30.1 18.0 34.4 34.4 52.9 R&P [50] 8.6 27.4 14.6 31.1 26.4 50.4 HGD [28] 3.6 24.4 15.1 31.8 32.6 51.8 APE-GAN [41] 24.3 37.1 23.2 38.6 34.3 53.8 SR [35] 26.3 41.8 30.2 49.2 48.4 63.9 NRP 3.4 6.5 5.8 11.9 6.3 17.8 Ensemble JPEG [17] 38.1 43.3 67.7 77.2 82.5 83.4 TVM [17] 30.0 39.8 50.1 72.1 64.1 79.8 NIPS-r3 [47] 19.8 33.9 43.9 71.4 63.7 83.1 R&P [50] 13.8 31.2 32.8 68.3 51.7 81.4 HGD [28] 4.9 29.9 38.6 73.3 57.7 82.6 APE-GAN [41] 32.0 42.1 44.6 69.3 59.6 74.5 SR [35] 38.1 45.8 65.2 79.9 79.3 84.9 NRP 3.7 7.9 10.1 27.8 11.4 31.9 stance level segmentation against CDA. (c) Ablation: Fig. 5 thoroughly investigates the impact of different training mechanisms in combination with our defense, and provides the following insights: (i) Relativistic GAN loss offers a more robust solution than vanilla GAN, (ii) NRP performance decreases slightly without pixel loss, (iii) NRP without feature loss loses supervisory signal de\ufb01ned by perceptual-space boundary, hence the generator Table 2: NRP generalizability across different adversarial attacks. Classi\ufb01cation model is defended against CDA trained against Incv3 while detection and segmentation models are defended against CDA trained against Res-v2-152 (higher is better). 
(q=quantity, w=weights, win=window size) Classi\ufb01cation: Defending IncRes-v2ens [48] against CDA [37] Method No ImageNet Comics Paintings Attack l\u221e\u22648 l\u221e\u226416 l\u221e\u22648 l\u221e\u226416 l\u221e\u22648 l\u221e\u226416 No Defense 97.8 83.0 30.9 94.0 56.6 71.6 23.7 JPEG (q=75) 97.6 74.9 18.6 90.1 42.6 68.0 18.0 JPEG (q=50) 96.2 74.2 19.0 90.1 43.4 66.0 19.2 JPEG (q=20) 94.1 73.4 21.7 87.0 51.3 62.7 18.8 TVM (w=10) 93.1 82.3 30.2 91.0 77.2 72.7 27.4 TVM (w=30) 96.0 81.1 27.3 93.4 66.4 70.6 24.1 MF (win=3) 95.4 77.3 27.7 92.4 66.8 65.0 22.1 NRP 95.6 95.7 96.0 95.4 94.2 95.3 94.1 Detection: Defending Mask-RCNN [18] against CDA [37] No Defense 59.9 35.2 8.1 40.5 16.8 41.7 14.8 JPEG (q=75) 57.6 41.3 11.9 41.6 19.4 44.5 18.3 JPEG (q=50) 54.6 41.7 14.5 39.5 18.5 47.7 19.9 JPEG (q=20) 39.7 30.7 15.1 28.2 14.7 30.5 15.3 TVM (w=10) 54.1 32.1 14.3 40.5 28.9 37.6 21.5 TVM (w=30) 58.0 39.9 10.1 46.8 21.0 45.4 17.2 MF (win=3) 54.7 32.1 9.0 41.1 20.4 37.6 15.2 NRP 54.4 51.5 50.3 53.5 53.7 53.2 54.3 Segmentation: Mask-RCNN [18] defense against CDA [37] No Defense 56.8 32.4 7.3 37.6 15.5 39.1 13.8 JPEG (q=75) 54.4 38.5 11 38.5 17.8 41.7 16.9 JPEG (q=50) 51.5 38.9 13.4 36.6 17.3 40 18.2 JPEG (q=20) 37.1 28.8 14.0 26.3 13.8 28.3 14.3 TVM (w=10) 50.8 29.8 13.2 37.6 26.6 34.9 19.8 TVM (w=30) 54.4 37.1 9.3 43.7 19.3 42.3 15.9 MF (win=3) 51.5 29.8 8.3 36.0 18.8 34.9 13.9 NRP 51.3 48.4 47.3 50.3 50.8 50.2 51.4 Clean FGSMTI MIFGSMTI DIMTI 40 50 60 70 80 90 100 Accuray NRP (Proposed) NRP without Pixel Loss NRP with GAN Loss FGSP GNP NRP without Feature Loss Figure 5: Ablation. Proposed NRP is able to recover input samples from the strong black-box ensemble attack [10] as compared to GNP and FGSP. NRP trained without Lfeat performs poorly indicating the importance of perceptual loss. Top-1 accuracy (higher is better) is reported for IncRes-v2ens [48] on ImageNet-NeurIPS. does not converge to a meaningful state, (iv) Gaussian smoothing (Gaussian noise data augmentation) proves to be useful in reducing adversarial vulnerability of classi\ufb01er [8, 55]. Training NRP as a Gaussian denoiser, named Gaus\fTable 3: Success rate (lower is better) of BPDA [6] and DIMT I [10] attacks against NRP. Res-v2-152 [20] is combined with other puri\ufb01er networks (ResG [27], UNet [38]). Adversaries are then transferred to the naturally and adversarially trained models. NRP protects the backbone network even when the attacker tries to bypass using BPDA technique. (attack iterations: 10, \u03f5 \u226416) Source Attack NRP Inc-v3 Inc-v4 IncRes-v2 Adv-v3 Inc-v3ens3 IncRes-v2ens Res-v2-152 DIMT I \u0017 77.4 77.9 74.2 51.2 56.2 47.7 ResG \u2295Res-v2-152 DIMT I \u2295BPDA \u0013 29.7 26.2 19.6 22.3 22.1 16.1 UNet \u2295Res-v2-152 DIMT I \u2295BPDA \u0013 29.0 27.1 19.5 26.9 27.7 18.8 Afghan Hound (0.73, \u0017) Porcupine (0.64, \u0017) Erythrocebus Patas (0.53, \u0017) Guenon Monkey (0.77, \u0017) Crane (0.55, \u0017) Monarch Butter\ufb02y (0.65, \u0013) Dung Beetle (0.90, \u0013) Lycaenid (0.94, \u0013) Lorikeet (0.94, \u0013) Flamingo (0.90, \u0013) Figure 6: A visual illustration of NRP generalizability to different adversaries (\u03f5 \u226416) (top: attacked; bottom: puri\ufb01ed). Our method can clean challenging adversarial patterns resulting from SSP applied to adversarially robust model [12]. Previous denoising methods are not designed for this type of structured noise. IncRes-v2ens backbone is used here. 
(see supplementary material for more examples) sian Noise Puri\ufb01er (GNP) does not prove effective against translation-invariant attacks [10], and (v) Training NRP to stabilize FGSM adversaries (termed FGSP in Fig. 5) performs relatively better than GNP. (d) What if Attacker knows about the Defense: We study this dif\ufb01cult scenario with the following criteria: (i) attacker knows that the defense is deployed and has access to its training data and training mechanism, and (ii) attacker trains a local defense similar to NRP, and then uses BPDA [6] to bypass the defense. To simulate this attack, we train residual generator (ResG) [27] and UNet [38] with the same training mechanise as described in Sec. 4.1. We then combine BPDA [2] with translation-invariant attack to bypass NRP. Under these challenging settings, NRP shows a relative gain of 74% and 66% respectively for IncRes-v2, IncRes-v2ens (see Table 3). 4.3. Self Supervised Perturbation as an Attack Next, we evaluate the strength of SSP as an attack for the tasks of classi\ufb01cation, detection and segmentation. Classi\ufb01cation: Table 5 compares SSP with FGSM [14], RFGSM [48], I-FGSM [15], MI-FGSM [9], TAP [58] and DIM [53] using their standard hyper-parameters (see supTable 4: Cross-task SSP Attack: Pixel-level accuracy is shown for SegNet-Basic [4] on Camvid testset [5], while mAP (with IoU = 0.5) is reported for Mask-RCNN. Problem Method No Attack SSP (l\u221e\u22648) SSP (l\u221e\u226416) Semantic Seg. SegNet [4] 79.70 52.48 32.59 Instance Seg. Mask-RCNN [18] 56.8 29.4 8.8 Object Det. RetinaNet [30] 53.78 22.75 5.16 Mask-RCNN [18] 59.50 31.8 9.7 plementary material). The results in Table 5 provide the following insights. (i) SSP consistently demonstrates a strong black-box adversarial transferability on both naturally and adversarially trained models, bringing down top-1 accuracy of IncRes-v2 [44] from 100.0% to 14.1%, (ii) While MIFGSM [9] and DIM [53] perform slightly better on adversarially trained ensemble models [48] in terms of top-1 accuracy, SSP shows comparable top-1 rate and surpasses in terms of top-5 accuracy, and (iii) These results indicate that decision-boundary based attacks \ufb02ip the label of input sample to the near-by class category, while SSP being agnostic to decision-level information pushes the adversaries far from the original input category. Cross-task Adversarial \fDIM [53]: Welsh Springer (0.52, \u0017) Puri\ufb01ed: Pomeranian (0.88, \u0013) DIMT I [10]: Cocker (0.71, \u0017) Puri\ufb01ed: Pomeranian (0.86, \u0013) Figure 7: NRP successfully recovers diverse patterns from strongest black-box attacks (l\u221e\u226416). IncRes-v2ens used as backbone. CDA [37]: Adversarial Prediction for Adversarial Puri\ufb01ed Prediction for Puri\ufb01ed Figure 8: NRP successfully removes perturbation generated by CDA[37] (\u03f5 \u226416) and stabilizes Mask-RCNN [18] predictions. Table 5: SSP as an attack for Classi\ufb01cation. Top-1 (T-1) and Top-5 (T-5) accuracies are reported under untargeted l\u221e adversarial attacks on ImageNet-NIPS with perturbation budget l\u221e\u226416. \u2018\u2217\u2019 indicates white-box attacks. Naturally Trained Adv. 
Trained Attack Inc-v3 Inc-v4 Res-152 IncRes-v2 VGG-19 Adv-v3 Inc-v3ens3 IncRes-v2ens T-1 T-5 T-1 T-5 T-1 T-5 T-1 T-5 T-1 T-5 T-1 T-5 T-1 T-5 T-1 T-5 Res-152 FGSM [14] 55.1 81.1 62.6 85.1 18.9\u2217 44.7\u2217 65.0 86.5 43.9 70.4 64.6 85.8 76.9 93.5 87.9 98.2 R-FGSM [48] 60.8 84.3 68.4 88.1 14.6\u2217 40.3\u2217 71.9 90.3 55.8 71.4 74.8 92.3 81.1 96.0 87.1 97.5 I-FGSM [15] 80.9 96.7 85.3 97.8 0.9\u2217 10.8\u2217 93.1 98.8 75.9 94.8 89.2 99.2 90.5 97.9 94.6 99.5 MI-FGSM [9] 38.9 72.7 44.8 76.5 0.6\u2217 2.9\u2217 47.7 79.6 42.1 71.8 67.0 89.9 69.4 93.3 81.5 96.4 TAP [58] 48.2 55.7 7.6\u2217 55.2 49.2 57.8 64.1 DIM [53] 15.9 44.0 17.3 48.4 0.8\u2217 3.0\u2217 20.0 50.2 25.6 56.3 55.8 82.8 54.9 84.2 71.5 93.1 VGG16 FGSM [14] 32.6 58.6 38.4 62.6 38.5 66.3 44.5 68.5 8.8 25.1 51.7 75.3 54.9 81.7 70.8 90.7 R-FGSM [48] 44.4 69.5 47.6 75.1 51.1 78.8 56.4 78.8 11.2 31.8 65.5 87.4 66.7 89.2 77.5 93.6 I-FGSM [15] 69.2 93.0 75.2 93.7 79.0 96.2 85.6 96.8 14.4 49.3 83.5 97.7 83.9 96.7 92.1 98.8 MI-FGSM [9] 20.4 45.0 19.7 43.2 25.2 53.8 26.8 53.8 1.5 12.1 43.0 70.9 42.0 72.7 62.0 86.8 TAP [58] 23.9 28.1 23.9 32.3 38.8 41.9 63.8 DIM [53] 14.7 38.8 16.6 39.0 21.0 48.0 21.5 45.7 0.6 7.6 35.8 65.8 31.8 60.8 53.7 79.5 FFF [34] 61.7 80.7 60.8 78.7 72.8 90.1 76.1 90.1 44.0 68.0 79.6 93.1 83.1 93.1 92.8 98.5 SSP 5.3 11.0 5.9 11.9 16.5 29.5 14.1 25.5 2.7 6.8 25.9 43.2 40.2 58.3 58.0 75.0 Attack: Since SSP is loss-agnostic, it enables attacks on altogether different tasks. Table 4 explores SSP for object detection and image segmentation. For Segmentation, the self-supervised perturbations created on CAMVID [5] in VGG-16 feature space are able to bring down the per pixel accuracy of Segnet-Basic by 47.11% within l\u221e\u226416. For object detection, on MS-COCO validation set [31], mean Average Precision (mAP) with 0.5 intersection over union (IOU) of RetinaNet [30] and Mask-RCNN [18] drop from 53.78% to 5.16% and 59.5% to 9.7%, respectively, under l\u221e\u226416. 5." + }, + { + "url": "http://arxiv.org/abs/1905.11736v5", + "title": "Cross-Domain Transferability of Adversarial Perturbations", + "abstract": "Adversarial examples reveal the blind spots of deep neural networks (DNNs)\nand represent a major concern for security-critical applications. The\ntransferability of adversarial examples makes real-world attacks possible in\nblack-box settings, where the attacker is forbidden to access the internal\nparameters of the model. The underlying assumption in most adversary generation\nmethods, whether learning an instance-specific or an instance-agnostic\nperturbation, is the direct or indirect reliance on the original\ndomain-specific data distribution. In this work, for the first time, we\ndemonstrate the existence of domain-invariant adversaries, thereby showing\ncommon adversarial space among different datasets and models. To this end, we\npropose a framework capable of launching highly transferable attacks that\ncrafts adversarial patterns to mislead networks trained on wholly different\ndomains. For instance, an adversarial function learned on Paintings, Cartoons\nor Medical images can successfully perturb ImageNet samples to fool the\nclassifier, with success rates as high as $\\sim$99\\% ($\\ell_{\\infty} \\le 10$).\nThe core of our proposed adversarial function is a generative network that is\ntrained using a relativistic supervisory signal that enables domain-invariant\nperturbations. Our approach sets the new state-of-the-art for fooling rates,\nboth under the white-box and black-box scenarios. 
Furthermore, despite being an\ninstance-agnostic perturbation function, our attack outperforms the\nconventionally much stronger instance-specific attack methods.", + "authors": "Muzammal Naseer, Salman H. Khan, Harris Khan, Fahad Shahbaz Khan, Fatih Porikli", + "published": "2019-05-28", + "updated": "2019-10-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Albeit displaying remarkable performance across a range of tasks, Deep Neural Networks (DNNs) are highly vulnerable to adversarial examples, which are carefully crafted examples generated by adding a certain degree of noise (a.k.a. perturbations) to the corresponding original images, typically appearing quasi-imperceptible to humans [1]. Importantly, these adversarial examples are transferable from one network to another, even when the other network fashions a different architecture and possibly trained on a different subset of training data [2, 3]. Transferability permits an adversarial attack, without knowing the internals of the target network, posing serious security concerns on the practical deployment of these models. Adversarial perturbations are either instance-speci\ufb01c or instance-agnostic. The instance-speci\ufb01c attacks iteratively optimize a perturbation pattern speci\ufb01c to an input sample (e.g., [4, 5, 6, 7, 8, 9, 10, 11]). In comparison, the instance-agnostic attacks learn a universal perturbation or a function that \ufb01nds adversarial patterns on a data distribution instead of a single sample. For example, [12] 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. arXiv:1905.11736v5 [cs.CV] 14 Oct 2019 \fFigure 1: Transferable Generative Adversarial Perturbation: We demonstrate that common adversaries exist across different image domains and introduce a highly transferable attack approach that carefully crafts adversarial patterns to fool classi\ufb01ers trained on totally different domains. Our generative scheme learns to reconstruct adversaries on paintings or comics (left) that can successfully fool natural image classi\ufb01ers with high fooling rates at the inference time (right). proposed universal adversarial perturbations that can fool a model on the majority of the source dataset images. To reduce dependency on the input data samples, [13] maximizes layer activations of the source network while [14] extracts deluding perturbations using class impressions relying on the source label space. To enhance the transferability of instance-agnostic approaches, recent generative models attempt to directly craft perturbations using an adversarially trained function [15, 16]. We observe that most prior works on crafting adversarial attacks suffer from two pivotal limitations that restrict their transferability to real-world scenarios. (a) Existing attacks rely directly or indirectly on the source (training) data, which hampers their transferability to other domains. From a practical standpoint, source domain can be unknown, or the domain-speci\ufb01c data may be unavailable to the attacker. Therefore, a true \"black-box\" attack must be able to fool learned models across different target domains without ever being explicitly trained on those data domains. (b) Instance-agnostic attacks, compared with their counterparts, are far more scalable to large datasets as they avoid expensive per-instance iterative optimization. However, they demonstrate weaker transferability rates than the instance-speci\ufb01c attacks. 
Consequently, the design of highly transferable instance-agnostic attacks that also generalize across unseen domains is largely an unsolved problem. In this work, we introduce \u2018domain-agnostic\u2019 generation of adversarial examples, with the aim of relaxing the source data reliance assumption. In particular, we propose a \ufb02exible framework capable of launching vastly transferable adversarial attacks, e.g., perturbations found on paintings, comics or medical images are shown to trick natural image classi\ufb01ers trained on ImageNet dataset with high fooling rates. A distinguishing feature of our approach is the introduction of relativistic loss that explicitly enforces learning of domain-invariant adversarial patterns. Our attack algorithm is highly scalable to large-scale datasets since it learns a universal adversarial function that avoids expensive iterative optimization from instance-speci\ufb01c attacks. While enjoying the ef\ufb01cient inference time of instance-agnostic methods, our algorithm outperforms all existing attack methods (both instance-speci\ufb01c and agnostic) by a signi\ufb01cant margin (\u223c86.46% average increase in fooling rate from naturally trained Inception-v3 to adversarially trained models in comparison to state-of-the-art [10]) and sets the new state-of-the-art under both white-box and black-box settings. Figure 1 provides an overview of our approach. 2 Related Work Image-dependent Perturbations: Several approaches target creation of image-dependent perturbations. [17] noticed that despite exhibiting impressive performance, neural networks can be fooled through maliciously crafted perturbations that appear quasi-imperceptible to humans. Following this 2 \f\ufb01nding, many approaches [4, 5, 6, 7, 8, 9] investigate the existence of these perturbations. They either apply gradient ascent in the pixel space or solve complex optimizations. Recently, a few methods [18, 10] propose input or gradient transformation modules to improve the transferability of adversarial examples. A common characteristic of the aforementioned approaches is their data-dependence; the perturbations are computed for each data-point separately in a mutually exclusive way. Further, these approaches render inef\ufb01ciently at inference time since they iterate on the input multiple times. In contrast, we resort to a data-independent approach based on a generator, demonstrating improved inference-time ef\ufb01ciency along with high transferability rates. Universal Adversarial Perturbation: Seminal work of [12] introduces the existence of Universal Adversarial Perturbation (UAP). It is a single noise vector which when added to a data-point can fool a pretrained model. [12] crafts UAP in an iterative fashion utilizing target data-points that is capable of \ufb02ipping their labels. Though it can generate image-agnostic UAP, the success ratio of their attack is proportional to the number of training samples used for crafting UAP. [13] proposes a so-called data-independent algorithm by maximizing the product of mean activations at multiple layers given a universal perturbation as input. This method crafts a so-called data-independent perturbation, however, the attack success ratio is not comparable to [12]. Instead, we propose a fully distribution-agnostic approach that crafts adversarial examples directly from a learned generator, as opposed to \ufb01rst generating perturbations followed by their addition to images. 
Generator-oriented Perturbations: Another branch of attacks leverage generative models to craft adversaries. [15] learns a generator network to perturb images, however, the unbounded perturbation magnitude in their case might render perceptible perturbations at test time. [33] trains conditional generators to learn original data manifold and searches the latent space conditioned on the human recognizable target class that is mis-classi\ufb01ed by a target classier. [19] apply generative adversarial networks to craft visually realistic perturbations and build distilled network to perform black-box attack. Similarly, [16, 14] train generators to create adversaries to launch attacks; the former uses target data directly and the latter relies on class impressions. A common trait of prior work is that they either rely directly (or indirectly) upon the data distribution and/or entail access to its label space for creating adversarial examples (Table 1). In contrast, we propose a \ufb02exible, distribution-agnostic approach inculcating relativistic loss to craft adversarial examples that achieves state-of-the-art results both under white-box and black-box attack settings. Method Data Type Transfer Label Cross-domain Strength Agnostic Attack FFF [13] Pretrained-net/data Low \u0013 \u0017 AAA [14] Class Impressions Medium \u0017 \u0017 UAP [12] ImageNet Low \u0017 \u0017 GAP [16] ImageNet Medium \u0017 \u0017 RHP [11] ImageNet Medium \u0017 \u0017 Ours Arbitrary (Paintings, Comics, Medical scans etc.) High \u0013 \u0013 Table 1: A comparison of different attack methods based on their dependency on data distribution and labels. 3 Cross-Domain Transferable Perturbations Our proposed approach is based on a generative model that is trained using an adversarial mechanism. Assume we have an input image xs belonging to a source domain Xs \u2208Rs. We aim to train a universal function that learns to add a perturbation pattern \u03b4 on the source domain which can successfully fool a network trained on source Xs as well as any target domain Xt \u2282Rt when fed with perturbed inputs x\u2032 t = xt + \u03b4. Importantly, our training is only performed on the unlabelled source domain dataset with ns samples: {xi s}ns i=1 and the target domain is not used at all during training. For brevity, in the following discussion, we will only refer the input and perturbed images using x and x\u2032 respectively and the domain will be clear from the context. The proposed framework consists of a generator G\u03b8(x) and a discriminator D\u03c8(x) parameterized by \u03b8 and \u03c8. In our case, we initialize discriminator with a pretrained network and the parameters \u03c8 are remained \ufb01xed while the G\u03b8 is learned. The output of G\u03b8 is scaled to have a \ufb01xed norm and it lies within a bound; x\u2032 = clip \u0000min(x + \u03f5, max(G\u03b8(x), x \u2212\u03f5)) \u0001 . The perturbed images x\u2032 as well as 3 \fFigure 2: The proposed generative framework seeks to maximize the \u2018fooling gap\u2019 that helps in achieving very high transferability rates across domains. The orange dashed line shows the \ufb02ow of gradients, notably only the generator is tuned in the whole pipeline to fool the pretrained discriminator. the real images x are passed through the discriminator. The output of the discriminator denotes the class probabilities D\u03c8(x, x\u2032) \u2208[0, 1]c, where c is the number of classes. 
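As a rough PyTorch-style sketch (netG, netD, the [0, 1] pixel range and the final clamp to that range are assumptions, not the authors' exact implementation), the bounded adversary and the classifier-style discriminator output described above could be computed as follows.

```python
# Illustrative sketch of the bounded adversary and the discriminator output.
# `netG` is the perturbation generator and `netD` a frozen pretrained
# classifier; images are assumed to lie in [0, 1].
import torch
import torch.nn.functional as F

def bounded_adversary(netG, x, eps):
    # x' = clip(min(x + eps, max(G(x), x - eps))): generator output projected
    # into the l_inf ball around x, then clipped to the valid pixel range.
    adv = netG(x)
    adv = torch.min(torch.max(adv, x - eps), x + eps)
    return adv.clamp(0.0, 1.0)

def class_probs(netD, x):
    # The discriminator is a frozen pretrained classifier, so it returns a
    # distribution over c classes rather than a single real/fake score.
    return F.softmax(netD(x), dim=1)
```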
This is different from the traditional GAN framework where a discriminator only estimate whether an input is real or fake. For an adversarial attack, the goal is to fool a network on most examples by making minor changes to its inputs, i.e., \u2225\u03b4 \u2225\u221e\u2264\u03f5, s.t., P \u0000argmaxj(D\u03c8(x\u2032)j) \u0338= argmaxj(D\u03c8(x)j) \u0001 > fr, (1) where, fr is the fooling ratio, y is the ground-truth label for the example x and the predictions on clean images x are given by, y = argmaxj(D\u03c8(x)j). Note that we do not necessarily require the ground-truth labels of source domain images to craft a successful attack. In the case of adversarial attacks based on a traditional GAN framework, the following objective is maximized for the generator to achieve the maximal fooling rate: \u03b8\u2217\u2190argmax \u03b8 CROSSENTROPY(D\u03c8(x\u2032), 1y), (2) where 1y is the one-hot encoded label vector for an input example x. The above objective seeks to maximize the discriminator error on the perturbed images that are output from the generator network. We argue that the objective given by Eq. 2 does not directly enforce transferability for the generated perturbations \u03b4. This is primarily due to the reason that the discriminator\u2019s response for clean examples is totally ignored in the conventional generative attacks. Here, inspired by the generative adversarial network in [20], we propose a relativistic adversarial perturbation (RAP) generation approach that explicitly takes in to account the discriminator\u2019s predictions on clean images. Alongside reducing the classi\ufb01er\u2019s con\ufb01dence on perturbed images, the attack algorithm also forces the discriminator to maintain a high con\ufb01dence scores for the clean samples. The proposed relativistic objective is given by: \u03b8\u2217\u2190argmax \u03b8 CROSSENTROPY(D\u03c8(x\u2032) \u2212D\u03c8(x), 1y). (3) The cross entropy loss would be higher when the perturbed image is scored signi\ufb01cantly lower than the clean image response for the ground-truth class i.e., D\u03c8(x\u2032)y < < D\u03c8(x)y. The discriminator basically seeks to increase the \u2018fooling gap\u2019 (D\u03c8(x\u2032)y \u2212D\u03c8(x)y) between the true and perturbed samples. Through such relative discrimination, we not only report better transferability rates across networks trained on the same domain, but most importantly show excellent cross-domain transfer rates for the instance-agnostic perturbations. We attribute this behaviour to the fact that once a perturbation pattern is optimized using the proposed loss on a source distribution (e.g., paintings, cartoon images), the generator learns a \"contrastive\" signal that is agnostic to the underlying distribution. As a result, when the same perturbation pattern is applied to networks trained on totally different domain (e.g., natural images), it still achieves the state-of-the-art attack transferability rates. Table 2 shows the gain in transferability when using relativistic cross-entropy (Eq. 3) in comparison to simple cross-entropy loss (Eq. 2). For an untargeted attack, the above mentioned objective in Eq. 2 and 3 suf\ufb01ces, however, for a targeted adversarial attack, the prediction for the perturbed image must match a given target class y\u2032 i.e., argmaxj(D\u03c8(x\u2032)j) = y\u2032 \u0338= y. For such a case, we employ the following loss function: \u03b8\u2217\u2190argmin \u03b8 CROSSENTROPY(D\u03c8(x\u2032), 1y\u2032) + CROSSENTROPY(D\u03c8(x), 1y). 
(4) The overall training scheme for the generative network is given in Algorithm 1. 4 \fAlgorithm 1 Generator Training for Relativistic Adversarial Perturbations 1: A pretrained classi\ufb01er D\u03c8, arbitrary training data distribution X, perturbation budget \u03f5, loss criteria L. 2: Randomly initialize generator network G\u03b8 3: repeat 4: Sample mini-batch of data from the training set. 5: Use the current state of the generator, G\u03b8, to generate unbounded adversaries. 6: Project adversaries, G\u03b8(x), within a valid perturbation budget to obtain x\u2032 such that \u2225x\u2032 \u2212x\u2225\u221e\u2264\u03f5. 7: Forward pass x\u2032 to D\u03c8 and compute loss given in Eq. (3)/Eq. (4) for targeted/untargeted attack. 8: Backward pass and update the generator, G\u03b8, parameters to maximize the loss. 9: until model convergence. 1 2 3 4 5 6 7 8 9 10 Number of iterations 0 1 2 3 4 5 Loss Loss Trend Over Iterations 1 2 3 4 5 6 7 8 9 10 Number of iterations 0 1 2 3 4 5 Taxicab Norm Gradients Trend Over Iterations Figure 3: Loss and gradients trend for CE and RCE loss functions. Results are reported with VGG16 network on 100 random images for MI-FGSM attack. Trends are shown in log scale. 4 Gradient Perspective of Relativistic Cross-Entropy Adversarial perturbations are crafted via loss function gradients. An effective loss function helps in the generation of perturbations by back-propagating stronger gradients. Below, we show that Relativistic Cross-Entropy (RCE) ensures this requisite and thus leads to better performance than regular Cross-Entropy (CE) loss. Suppose, the logit-space outputs from the discriminator (pretrained classi\ufb01er) corresponding to a clean image (x) and a perturbed image (x\u2019) are denoted by a and a\u2032, respectively. Then, CE(a\u2032, y)=\u2212log \u0000ea\u2032 y/P k ea\u2032 k\u0001 is the cross-entropy loss for a perturbed input x\u2032. For clarity, we de\ufb01ne p\u2032 y = ea\u2032 y/P k ea\u2032 k. The derivative of p\u2032 y w.r.t a\u2032 i is \u2202p\u2032 y/\u2202a\u2032 i = p\u2032 y([ [i=y] ] \u2212p\u2032 i). Using chain rule, the derivative of cross-entropy loss is given by: \u2202CE \u2202a\u2032 i = p\u2032 i \u2212[ [i=y] ]. (5) For the relativistic loss formulated as RCE(a\u2032, a, y)=\u2212log \u0000ea\u2032 y\u2212ay/P k ea\u2032 k\u2212ak\u0001 , we de\ufb01ne ry= \u0000ea\u2032 y\u2212ay/P k ea\u2032 k\u2212ak\u0001 . The derivative of ry w.r.t a\u2032 i is \u2202ry/\u2202a\u2032 i = ri([ [i=y] ] \u2212ry). From chain rule, RCE derivative w.r.t to a\u2032 i is given by: \u2202RCE \u2202a\u2032 i = ri \u2212[ [i=y] ]. (6) In light of above relation, RCE has three important properties: 1. Comparing (Eq.5) with (Eq.6) shows that RCE gradient is a function of \u2018difference\u2019 (a\u2032 y\u2212ay) as opposed to only scores a\u2032 y in CE loss. Thus, it measures the relative change in prediction as an explicit objective during optimization. 2. RCE loss back-propagates larger gradients compared to CE, resulting in ef\ufb01cient training and stronger adversaries (see Figure 3 for empirical evidence). Sketch Proof: We can factorize the denominator in (Eq. 6) as follows: \u2202RCE/\u2202a\u2032 i = \u0000ea\u2032 y\u2212ay/(ea\u2032 y\u2212ay + P k\u0338=y ea\u2032 k\u2212ak) \u0001 \u2212 [ [i=y] ]. Consider the fact that maximization of RCE is only possible when e(a\u2032 y\u2212ay) decreases 5 \fand P k\u0338=y e(a\u2032 k\u2212ak) increases. 
Generally, ay \u226bak\u0338=y for the score generated by a pretrained model and a\u2032 y \u226aa\u2032 k\u0338=y (here k denotes an incorrectly predicted class). Thus, \u2202RCE/\u2202a\u2032 i > \u2202CE/\u2202a\u2032 i since e(a\u2032 y\u2212ay) < e(a\u2032 y) and P k\u0338=y e(a\u2032 k\u2212ak) > P k\u0338=y e(a\u2032 k). In simple words, the gradient strength of RCE is higher than CE. 3. In case x is misclassi\ufb01ed by F(\u00b7), the gradient strength of RCE is still higher than CE (here noise update with the CE loss will be weaker since adversary\u2019s goal is already achieved i.e., x is misclassi\ufb01ed). Loss VGG-16 VGG-19 Squeeze-v1.1 Dense-121 Cross Entropy (CE) 79.21 78.96 69.32 66.45 Relativistic CE 86.95 85.88 77.81 75.21 Table 2: Effect of Relativistic loss on transferability in terms of fooling rate (%) on ImageNet val-set. Generator is trained against ResNet-152 on Paintings dataset. 5 Experiments 5.1 Rules of the Game We report results using following three different attack settings in our experiments: (a) White-box. Attacker has access to the original model (both architecture and parameters) and the training data distribution. (b) Black-box. Attacker has access to a pretrained model on the same distribution but without any knowledge of the target architecture and target data distribution. (c) Cross-domain Black-box. Attacker has neither access to (any) pretrained model, nor to its label space and its training data distribution. It then has to seek a transferable adversarial function that is learned from a model pretrained on a possibly different distribution than the original. Hence, this setting is relatively far more challenging than the plain black-box setting. Perturbation Attack VGG-19 ResNet-50 Dense-121 Fool Rate (\u2191) Top-1 (\u2193) Fool Rate (\u2191) Top-1 (\u2193) Fool Rate (\u2191) Top-1 (\u2193) l\u221e\u226410 Gaussian Noise 23.59 64.65 18.06 70.74 17.05 70.30 Ours-Paintings 47.12 46.68 31.52 60.77 29.00 62.0 Ours-Comics 48.47 45.78 33.69 59.26 31.81 60.40 Ours-ChestX 40.81 50.11 22.00 67.72 20.53 67.63 l\u221e\u226416 Gaussian Noise 33.80 57.92 25.76 66.07 23.30 66.70 Ours-Paintings 66.52 30.21 47.51 47.62 44.50 49.76 Ours-Comics 67.75 29.25 51.78 43.91 50.37 45.17 Ours-ChestX 62.14 33.95 34.49 58.6 31.81 59.75 l\u221e\u226432 Gaussian Noise 61.07 35.48 47.21 48.40 39.90 54.37 Ours-Paintings 87.08 11.96 69.05 28.77 63.78 33.46 Ours-Comics 87.90 11.17 71.91 26.12 71.85 26.18 Ours-ChestX 88.12 10.92 62.17 34.85 59.49 36.98 Table 3: Cross-Domain Black-box: Untargeted attack success (%) in terms of fooling rate on ImageNet val-set. Adversarial generators are trained against ChexNet on Paintings, Comics and ChestX datasets. Perturbation budget, l\u221e\u226410/16/32, is chosen as per the standard practice. Even without the knowledge of targeted model, its label space and its training data distribution, the transferability rate is much higher than the Gaussian noise. 5.2 Experimental Settings Generator Architecture. We chose ResNet architecture introduced in [21] as the generator network G\u03b8; it consists of downsampling, residual and upsampling blocks. For training, we used Adam optimizer [22] with a learning rate of 1e-4 and values of exponential decay rate for \ufb01rst and second moments set to 0.5 and 0.999, respectively. Generators are learned against the four pretrained ImageNet models including VGG-16, VGG-19 [23], Inception (Inc-v3) [24], ResNet-152 [25] and ChexNet (which is a Dense-121 [26] network trained to diagnose pneumonia) [27]. 
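To make Eq. (3) and Algorithm 1 concrete, a minimal PyTorch-style sketch of the relativistic loss and a single generator update is shown below; netG, netD, the optimizer handle and the [0, 1] pixel range are illustrative assumptions rather than the exact released training code. Consistent with the label-free formulation above, the labels are taken from the discriminator's own predictions on clean inputs.

```python
# Sketch of the relativistic objective (Eq. 3) and one generator update from
# Algorithm 1. `netG`, `netD`, `opt_G` and the [0, 1] pixel range are
# illustrative assumptions; netD stays frozen throughout.
import torch
import torch.nn.functional as F

def relativistic_ce(netD, x_adv, x_clean, labels):
    # Cross-entropy on the difference of logits: large when the perturbed
    # image scores far below the clean image on class `labels`.
    return F.cross_entropy(netD(x_adv) - netD(x_clean), labels)

def generator_step(netG, netD, opt_G, x, eps=10 / 255):
    with torch.no_grad():
        labels = netD(x).argmax(dim=1)             # y = argmax D(x); no ground truth required
    adv = netG(x)
    adv = torch.min(torch.max(adv, x - eps), x + eps).clamp(0, 1)  # project into the budget
    loss = -relativistic_ce(netD, adv, x, labels)  # maximize the fooling gap
    opt_G.zero_grad()
    loss.backward()
    opt_G.step()                                   # only the generator is updated
    return -loss.item()
```

In the setup described above, Adam with a learning rate of 1e-4 would serve as opt_G.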
Datasets. We consider the following datasets for generator training namely Paintings [28], Comics [29], ImageNet and a subset of ChestX-ray (ChestX) [27]. There are approximately 80k samples in Paintings, 50k in Comics, 1.2 million in ImageNet training set and 10k in ChestX. 6 \fBee Eater Cardoon Impala Anemone Fish Crane Jigsaw Puzzle Jigsaw Puzzle Jigsaw Puzzle Jigsaw Puzzle Jigsaw Puzzle Figure 4: Untargeted adversaries produced by generator trained against Inception-v3 on Paintings dataset. 1st row shows original images while 2nd row shows unrestricted outputs of adversarial generator and 3rd row are adversaries after valid projection. Perturbation budget is set to l\u221e\u226410. Figure 5: Illustration of attention shift. We use [31] to visualize attention maps of clean (1st row) and adversarial (2nd row) images. Adversarial images are obtained by training generator against VGG-16 on Paintings dataset. Inference: Inference is performed on ImageNet validation set (val-set) (50k samples), a subset (5k samples) of ImageNet proposed by [11] and ImageNet-NeurIPS [30] (1k samples) dataset. Evaluation Metrics: We use the fooling rate (percentage of input samples for which predicted label is \ufb02ipped after adding adversarial perturbations), top-1 accuracy and % increase in error rate (the difference between error rate of adversarial and clean images) to evaluate our proposed approach. 5.2.1 Results Table 3 shows the cross-domain black-box setting results, where attacker have no access to model architecture, parameters, its training distribution or label space. Note that ChestX [27] does not have much texture, an important feature to deceive ImageNet models [32], yet the transferability rate of perturbations learned against ChexNet is much better than the Gaussian noise. Tables 4 and 5 show the comparison of our method against different universal methods on both naturally and adversarially trained models [34] (Inc-v3, Inc-v4 and IncRes-v2). Our attack success rate is much higher both in white-box and black-box settings. Notably, for the case of adversarially trained models, Gaussian smoothing on top of our approach leads to signi\ufb01cant increase in transferability. We provide further comparison with GAP [16] in the supplementary material. Figures 4 and 5 show the model\u2019s output and attention shift on example adversaries. 7 \fModel Attack VGG-16 VGG-19 ResNet-152 VGG-16 FFF 47.10\u2217 41.98 27.82 AAA 71.59\u2217 65.64 45.33 UAP 78.30\u2217 73.10 63.40 Ours-Paintings 99.58\u2217 98.97 47.90 Ours-Comics 99.83\u2217 99.56 58.18 Ours-ImageNet 99.75\u2217 99.44 52.64 VGG-19 FFF 38.19 43.60\u2217 26.34 AAA 69.45 72.84\u2217 51.74 UAP 73.50 77.80\u2217 58.00 Ours-Paintings 98.90 99.61\u2217 40.98 Ours-Comics 99.29 99.76\u2217 42.61 Ours-ImageNet 99.19 99.80\u2217 53.02 ResNet-152 FFF 19.23 17.15 29.78\u2217 AAA 47.21 48.78 60.72\u2217 UAP 47.00 45.5 84.0\u2217 Ours-Paintings 86.95 85.88 98.03\u2217 Ours-Comics 88.94 88.84 94.18\u2217 Ours-ImageNet 95.40 93.26 99.02\u2217 Table 4: White & Black-box Setting: Fool rate (%) of untargeted attack on ImageNet val-set. Perturbation budget is l\u221e\u226410. * indicates white-box attack. Our attack\u2019s transferability from ResNet-152 to VGG16/19 is even higher than other white-box attacks. 
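For reference, the fooling-rate metric quoted throughout these tables (the percentage of samples whose predicted label flips once the perturbation is added) can be computed as in the hedged sketch below; the paired clean/adversarial loader is a placeholder.

```python
# Sketch of the fooling-rate metric: the fraction of inputs whose prediction
# changes after the adversarial perturbation is applied.
import torch

@torch.no_grad()
def fooling_rate(model, paired_batches):
    flipped, total = 0, 0
    for x_clean, x_adv in paired_batches:          # paired clean/adversarial batches
        pred_clean = model(x_clean).argmax(dim=1)
        pred_adv = model(x_adv).argmax(dim=1)
        flipped += (pred_clean != pred_adv).sum().item()
        total += x_clean.size(0)
    return 100.0 * flipped / total                 # reported as a percentage
```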
Model Attack Inc-v3ens3 Inc-v3ens4 IncRes-v2ens Inc-v3 UAP 1.00/7.82 1.80/5.60 1.88/5.60 GAP 5.48/33.3 4.14/29.4 3.76/22.5 RHP 32.5/60.8 31.6/58.7 24.6/57.0 Inc-v4 UAP 2.08/7.68 1.94/6.92 2.34/6.78 RHP 27.5/60.3 26.7/62.5 21.2/58.5 IncRes-v2 UAP 1.88/8.28 1.74/7.22 1.96/8.18 RHP 29.7/62.3 29.8/63.3 26.8/62.8 Ours-Paintings 33.92/72.46 38.94/71.4 33.24/69.66 Ours-gs-Paintings 47.78/73.06 48.18/72.68 42.86/73.3 Ours-Comics 21.06/67.5 24.1/68.72 12.82/54.72 Ours-gs-Comics 34.52/70.3 56.54/69.9 23.58/68.02 Ours-ImageNet 28.34/71.3 29.9/66.72 19.84/60.88 Ours-gs-ImageNet 41.06/71.96 42.68/71.58 37.4/72.86 Table 5: Black-box Setting: Transferability comparison in terms of % increase in error rate after attack. Results are reported on subset of ImageNet (5k) with perturbation budget of l\u221e\u226416/32. Our generators are trained against naturally trained Inc-v3 only. \u2018gs\u2019 represents Gaussian smoothing applied to generator output before projection that enhances our attack strength. 5.2.2 Comparison with State-of-the-Art Finally, we compare our method with recently proposed instance-speci\ufb01c attack method [10] that exhibits high transferability to adversarially trained models. For the very \ufb01rst time in literature, we showed that a universal function like ours can attain much higher transferability rate, outperforming the state-of-the-art instance-speci\ufb01c translation invariant method [10] by a large average absolute gain of 46.6% and 86.5% (in fooling rates) on both naturally and adversarially trained models, respectively, as reported in Table 6. The naturally trained models are Inception-v3 (Inc-v3) [24], Inception-v4 (Inc-v4), Inception Resnet v2 (IncRes-v2) [35] and Resnet v2-152 (Res-152) [36]). The adversarially trained models are from [34]. Attack Naturally Trained Adversarially Trained Inc-v3 Inc-v4 IncRes-v2 Res-152 Inc-v3ens3 Inc-v3ens4 IncRes-v2ens Inc-v3 FGSM 79.6\u2217 35.9 30.6 30.2 15.6 14.7 7.0 TI-FGSM 75.5\u2217 37.3 32.1 34.1 28.2 28.9 22.3 MI-FGSM 97.8\u2217 47.1 46.4 38.7 20.5 17.4 9.5 TI-MI-FGSM 97.9\u2217 52.4 47.9 41.1 35.8 35.1 25.8 DIM 98.3\u2217 73.8 67.8 58.4 24.2 24.3 13.0 TI-DIM 98.5\u2217 75.2 69.2 59.2 46.9 47.1 37.4 IncRes-v2 FGSM 44.3 36.1 64.3\u2217 31.9 18.0 17.2 10.2 TI-FGSM 49.7 41.5 63.7\u2217 40.1 34.6 34.5 27.8 MI-FGSM 74.8 64.8 100.0\u2217 54.5 25.1 23.7 13.3 TI-MI-FGSM 76.1 69.5 100.0\u2217 59.6 50.7 51.7 49.3 DIM 86.1 83.5 99.1\u2217 73.5 41.2 40.0 27.9 TI-DIM 86.4 85.5 98.8\u2217 76.3 61.3 60.1 59.5 Res-152 FGSM 40.1 34.0 30.3 81.3\u2217 20.2 17.7 9.9 TI-FGSM 46.4 39.3 33.4 78.9\u2217 34.6 34.5 27.8 MI-FGSM 54.2 48.1 44.3 97.5\u2217 25.1 23.7 13.3 TI-MI-FGSM 55.6 50.9 45.1 97.4\u2217 39.9 37.7 32.8 DIM 77.0 77.8 73.5 97.4\u2217 40.5 36.0 24.1 TI-DIM 77.0 73.9 73.2 97.2\u2217 60.3 58.8 42.8 Ours-Paintings 100.0\u2217 99.7 99.8 98.9 69.3 74.6 64.8 Ours-gs-Paintings 99.9\u2217 98.5 97.6 93.6 85.2 83.9 75.9 Ours-Comics 99.9\u2217 99.8 99.8 98.7 39.3 46.8 23.3 Ours-gs-Comics 99.9\u2217 97.0 93.4 87.7 60.3 58.8 42.8 Ours-ImageNet 99.8\u2217 99.1 97.5 98.1 55.4 60.5 36.4 Ours-gs-ImageNet 98.9\u2217 95.4 90.5 91.8 78.6 78.4 68.9 Table 6: White-box and Black-box: Transferability comparisons. Success rate on ImageNetNeurIPS validation set (1k images) is reported by creating adversaries within the perturbation budget of l\u221e \u2264 16, as per the standard practice [10]. Our generators are learned against naturally trained Inceptionv3 only. \u2217indicates white-box attack. 
'gs' is Gaussian smoothing applied to the generator output before projection. Smoothing leads to a slight decrease in transferability on naturally trained models but a significant increase against adversarially trained models. Figure 6: Effect of Gaussian kernel size and number of training epochs on the transferability (in % fool rate) of adversarial examples. The generator is trained against Inception-v3 on Paintings, while inference is performed on ImageNet-NeurIPS. Panels (a) and (b) plot fool rate against the number of training epochs for naturally and adversarially trained IncRes-v2, respectively; panels (c) and (d) plot fool rate against Gaussian kernel size for the same two models. Firstly, as the number of epochs increases, transferability against naturally trained IncRes-v2 increases while it decreases against its adversarially trained version. Secondly, as the size of the Gaussian kernel increases, transferability against both naturally and adversarially trained IncRes-v2 decreases. Applying a kernel of size 3 leads to optimal results against the adversarially trained model. Perturbation is set to l∞ ≤ 16. 5.3 Transferability: Naturally Trained vs. Adversarially Trained Furthermore, we study the impact of training iterations and Gaussian smoothing [10] on the transferability of our generative adversarial examples. We report results using the naturally and adversarially trained IncRes-v2 models [35], as other models exhibit similar behaviour. Figure 6 displays the transferability (in % fool rate) as a function of the number of training epochs (a-b) and various kernel sizes for Gaussian smoothing (c-d). Firstly, we observe a gradual increase in the transferability of the generator against the naturally trained model as the training epochs advance. In contrast, the transferability deteriorates against the adversarially trained model. Therefore, when targeting naturally trained models, we train for ten epochs on the Paintings, Comics, and ChestX datasets (although we anticipate better performance for more epochs). When targeting adversarially trained models, we deploy an early stopping criterion to obtain the best trained generator, since performance on such models drops as the number of epochs increases. This fundamentally shows the reliance of naturally and adversarially trained models on different sets of features. Our results clearly demonstrate that the adversarial solution space is shared across different architectures and even across distinct data domains. Since we train our generator against naturally trained models only, it converges to a solution space on which an adversarially trained model has already been trained. As a result, our perturbations gradually become weaker against adversarially trained models as training progresses. A visual demonstration is provided in the supplementary material. Secondly, the application of Gaussian smoothing reveals different results on naturally trained and adversarially trained models. After applying smoothing, adversaries become stronger for adversarially trained models and weaker for naturally trained models. We achieve optimal results with a kernel size of 3 and σ = 1 for adversarially trained models and use these settings consistently in our experiments.
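A hedged sketch of this 'gs' variant is given below: the unrestricted generator output is smoothed with a small Gaussian kernel (size 3, σ = 1, as above) before the l∞ projection. The depthwise-convolution implementation is an assumption rather than the authors' exact code.

```python
# Sketch of Gaussian smoothing of the generator output prior to projection.
import torch
import torch.nn.functional as F

def gaussian_kernel2d(ksize=3, sigma=1.0, channels=3):
    ax = torch.arange(ksize, dtype=torch.float32) - (ksize - 1) / 2.0
    g = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    k2d = torch.outer(g, g)
    k2d = k2d / k2d.sum()                                  # normalize to sum to 1
    return k2d.expand(channels, 1, ksize, ksize).clone()   # one kernel per channel

def smooth_then_project(gen_out, x, eps=16 / 255, ksize=3, sigma=1.0):
    c = gen_out.size(1)
    kernel = gaussian_kernel2d(ksize, sigma, channels=c).to(gen_out.device)
    smoothed = F.conv2d(gen_out, kernel, padding=ksize // 2, groups=c)  # depthwise blur
    adv = torch.min(torch.max(smoothed, x - eps), x + eps)              # l_inf projection
    return adv.clamp(0.0, 1.0)
```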
We apply Gaussian kernel on the unrestricted generator\u2019s output, therefore as the kernel size is increased, generator\u2019s output becomes very smooth and after projection within valid l\u221erange, adversaries become weaker. 6" + }, + { + "url": "http://arxiv.org/abs/1811.09020v3", + "title": "Task-generalizable Adversarial Attack based on Perceptual Metric", + "abstract": "Deep neural networks (DNNs) can be easily fooled by adding human\nimperceptible perturbations to the images. These perturbed images are known as\n`adversarial examples' and pose a serious threat to security and safety\ncritical systems. A litmus test for the strength of adversarial examples is\ntheir transferability across different DNN models in a black box setting (i.e.\nwhen the target model's architecture and parameters are not known to attacker).\nCurrent attack algorithms that seek to enhance adversarial transferability work\non the decision level i.e. generate perturbations that alter the network\ndecisions. This leads to two key limitations: (a) An attack is dependent on the\ntask-specific loss function (e.g. softmax cross-entropy for object recognition)\nand therefore does not generalize beyond its original task. (b) The adversarial\nexamples are specific to the network architecture and demonstrate poor\ntransferability to other network architectures. We propose a novel approach to\ncreate adversarial examples that can broadly fool different networks on\nmultiple tasks. Our approach is based on the following intuition: \"Perpetual\nmetrics based on neural network features are highly generalizable and show\nexcellent performance in measuring and stabilizing input distortions. Therefore\nan ideal attack that creates maximum distortions in the network feature space\nshould realize highly transferable examples\". We report extensive experiments\nto show how adversarial examples generalize across multiple networks for\nclassification, object detection and segmentation tasks.", + "authors": "Muzammal Naseer, Salman H. Khan, Shafin Rahman, Fatih Porikli", + "published": "2018-11-22", + "updated": "2019-03-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Transferability is a phenomenon where adversarial examples created for one network can fool others. The transferability of adversarial examples makes it challenging to deploy deep neural networks in security critical environments. This is of high concern because it gives attackers the \ufb02exibility to train a local network and transfer its attack against an already deployed network, without knowing its architecture or parameters (\u2018black-box attacks\u2019). Current Image Classification Adversary + Adversarial Noise Deep Neural Network Classification Detection Segmentation Segmentation Detection Zebra Brain Coral Off-the-shelf Deep Features Figure 1: Similar to off-the-shelf deep features that are employed to boost the performance of different computer vision tasks, adversarial noise patterns found in deep features space are transferable across different tasks. (Noise pattern is magni\ufb01ed for better visualization) attack algorithms [8, 5] perform well when the network architecture and parameters are known (\u2018white-box setting\u2019); however, their strength signi\ufb01cantly decreases in the black box setting, as shown in [21]. Recent attempts on enhancing the transferability in black-box settings have been reported in [6, 26, 27]. 
Nevertheless, their dependency on a taskspeci\ufb01c loss function make them non-transferable across different tasks. For example, to fool classi\ufb01cation models, the attacker starts from the softmax cross-entropy to \ufb01nd a gradient direction that increases the model loss for a given sample. Examples found in this way are speci\ufb01c and do not generalize beyond their original task. We propose a novel approach to generate high strength adversarial examples that are transferable across different network architectures and, most importantly, across different vision tasks (e.g., image segmentation, classi\ufb01cation and object detection). Our approach is based on the following intuitions: (a) neural networks trained on ImageNet [18] (or other suf\ufb01ciently large image datasets) learn generic internal representations that are transferable to new tasks and datasets [10, 19]. As a result, it is common practice to use pre-trained classi\ufb01cation networks as the basic building block (network backbone) for a variety of different tasks 1 arXiv:1811.09020v3 [cs.CV] 26 Mar 2019 \f[13, 15] and, (b) a perceptual metric based on VGG internal representations aligns well with human perception [?] and can be used not only to measure the input distortion but also to stabilize it [11]. We hypothesize that adversarial examples based on perceptual distortion, under a given bound, e.g. l\u221e\u2264\u03f5, in the deep features space, are ideally suited to fool any deep network, whether designed for classi\ufb01cation, object detection, segmentation or other vision tasks. We present the \ufb01rst such algorithm, which creates adversarial examples by distorting the deep neural activations. This not only generates high-strength perturbations but also provides \ufb02exibility to work with any task, as the proposed attack does not use any task-dependent loss function. To the best of our knowledge, the closest to our approach is a decision-boundary free attack (called FFF) [17]. The idea is to train a single perturbation within a given metric norm to maximize the activation response of the network\u2019s internal layers. After training, the perturbation is added to the input images to make them adversarial. The problem with this approach is that it optimizes adversarial noise in a way that is independent of the data sample; hence noise, severely over\ufb01ts the network and has very low transferability. In contrast, we do not optimize a single noise pattern, instead we directly maximize the distortions in the network\u2019s internal representations for a given input sample. Zhou et al. [27] also proposed to maximize representation loss. However, their approach is speci\ufb01c to only the classi\ufb01cation task, since the gradient direction in their approach is dependent on cross-entropy loss, which requires labels for the task at hand. Thus, their attack algorithm is essentially a supervised adversarial attack. In contrast, we don\u2019t use any task dependent loss in our objective, so our attack method does not rely on any labels. Thus, it is an unsupervised adversarial attack. Furthermore, we focus on VGG networks for high-dimensional datasets. Remarkably, other networks (Inception/Resnet) do not offer enough distortion in a constrained optimization scenario to carry out this type of attack. One intriguing aspect of our approach is its simplicity and ef\ufb01ciency. 
For instance, we only use features from a single layer (conv3.3) of VGG-16 [20] (instead of multiple layers as in [17]) and calculate the mean squared difference between the original and adversarial examples to represent neural representation distortion (NRD). NRD is fully differentiable and its minimization can help in image restoration problems [11]. Here, we propose to maximize the NRD to construct adversarial examples. Finding adversarial examples based on feature representation makes our attack generalizable across different architectures for different tasks. Speci\ufb01cally, we show high inter-task and intra-task transferability for our approach on large-scale datasets, including ImageNet [18], MS-COCO [14] and CAMVID [4]. Our method is not restricted to the original backbone models trained on a speci\ufb01c benchmark. Most backbone models are \ufb01ne-tuned with additional training datasets to a speci\ufb01c task. As we elaborate in Sec. 6, our method can successfully be applied to any network that is pretrained on one benchmark, then \ufb01ne-tuned on another, e.g. RetinaNet [13] and SegNet [3]. Contributions: We study and highlight the importance of a neural network\u2019s internal representations (Fig. 1) in the context of adversarial attacks. Our major contributions are: \u2022 We propose a generalizable, black-box, untargeted adversarial attack algorithm on a neural network\u2019s internal representation. \u2022 We leverage generic representations learned by models (e.g. VGG-16 [20]) trained on large image datasets (e.g. ImageNet [18]) to construct transferable adversarial examples. \u2022 Our attack algorithm does not rely on a task-speci\ufb01c loss function or a speci\ufb01c set of input labels, therefore it demonstrates cross-network, cross-dataset, and cross-task transferability. \u2022 We provide state-of-the-art results for classi\ufb01cation networks and provide a robust benchmark to measure the robustness of any neural network based vision system against generic adversarial examples. 2. Related Work Since the seminal work of Szegedy et al. [24] many adversarial attack algorithms [7, 8, 2, 6] have been proposed to show the vulnerability of neural networks against imperceptible changes to inputs. A single-step attack, called fast gradient sign method (FGSM), was proposed by [7]. In a follow-up work, Kurakin et al. [8] proposed a robust multi-step attack, called iterative fast gradient sign methods (I-FGSM) that iteratively searches the loss surface of a network under a given metric norm. To improve transferability, a variant of I-FGSM, called momentum iterative fast gradient sign method (MI-FGSM), was introduced [6], which signi\ufb01cantly enhances the transferability of untargeted attacks on ImageNet [18] under a perturbation budget of l\u221e\u226416. Authors [6] associated the transferability of MI-FGSM with its ability to break local maxima as the number of attack iterations increase. Recently, [26] proposed a data augmentation technique to further boost the transferability of these attack methods. In contrast to ours, all of these methods are supervised adversarial attacks dependent on cross-entropy loss to \ufb01nd the harmful gradient direction. Interestingly, NRD of I-FGSM decreases as the number of attack iterations increases as compared to MI-FGSM as shown in Fig. 2. We generate adversarial examples on ImageNet [18] subset provided by the NIPS security challenge 2017. As can be seen, MI-FGSM maintains its NRD with increasing number of iterations. 
This also indicates that di2 \f2 3 4 5 6 7 8 9 10 Number of iterations 0 10 20 30 40 50 60 70 80 90 Accuracy and NRD I-FGSM: Accuracy I-FGSM: NRD MI-FGSM: Accuracy MI-FGSM: NRD Figure 2: Accuracy of Inc-v4 and NRD is shown for adversarial examples generated on Inc-v3 by I-FGSM and MIFGSM. NRD is averaged over all examples. As the number of iterations increases, the accuracy of Inc-v4 on adversarial example found by I-FGSM increases, i.e., the transferability of I-FGSM decreases along with its NRD. rectly maximizing the NRD can boost the transferability of adversarial examples. 3. Adversarial Attacks In this section, we \ufb01rst provide our problem setting, followed by a brief background to adversarial attacks. We explain how popular attack mechanisms, such as FGSM [7], I-FGSM [8] and MI-FGSM [6], differ from each other. This background will form the basis of our proposed attack in Sec. 4. Problem Setting: In this paper, we speci\ufb01cally consider the transferability of untargeted attacks under the l\u221enorm constraint on perturbation strength. Untargeted attacks are considered because they have higher transferability compared to targeted attacks [6, 26]. Furthermore, to make sure that the benign and adversarial examples are close to each other, an attacker is constrained under a metric norm like l\u221e\u2264\u03f5, i.e., in the case of images the attacker can change each pixel intensity value by at maximum \u03f5 amount. 3.1. FGSM Adversarial examples can be formulated as a constrained optimization problem. Suppose we are given a classi\ufb01er function F that maps an input x to its ground-truth class y, a cost function J(x, y) that is used to train the classi\ufb01er and an allowed perturbation budget \u2018\u03f5\u2019. FGSM [7] \ufb01nds an adversarial example x\u2032 that satis\ufb01es \u2225x\u2032 \u2212x \u2225\u221e\u2264\u03f5 using the following formulation: x\u2032 = x + \u03f5 \u00b7 sign(\u2207xJ(x, y)), (1) where \u2207xJ(x, y) represent the gradient of the cost function w.r.t input x. A common choice for J is the cross-entropy loss. The problem with FGSM is that it is a single-step attack, which reduces the attack success rate due to under\ufb01tting the threat model. To overcome this dif\ufb01culty, an iterative version of FGSM was proposed [8]. 3.2. I-FGSM I-FGSM [8] iteratively applies FGSM with a small step size \u03b1 for a given number of iterations T. The step size \u03b1 can be calculated by dividing the perturbation budget \u03f5 with the number of iterations T, i.e., \u03b1 = \u03f5/T. I-FGSM can be represented as follows for steps t \u2208[1, T]: x\u2032 0 = x, x\u2032 t+1 = x\u2032 t + \u03b1 \u00b7 sign(\u2207xJ(x\u2032 t, y)). (2) The problem with I-FGSM is that it over\ufb01ts the threat model, reducing model accuracy to even 0%, while producing a small neural representation distortion (NRD) (See Fig. 2 for empirical evidence). One side effect of having low NRD is the reduced transferability of adversarial examples. This is what Dong et al. [6] built upon, proposing an attack algorithm that \ufb01nds adversarial examples iteratively, while maintaining the transferability rate. 3.3. MI-FGSM The work in [6] added momentum into the optimization objective of I-FGSM. It can be expressed as follows: x\u2032 0 =x, x\u2032 t+1 = x\u2032 t + \u03b1 \u00b7 sign(gt+1), t \u2208[1, T] gt+1 = \u00b5 \u00b7 gt + \u2207xJ(x\u2032 t, y) \u2225\u2207xJ(x\u2032 t, y)\u22251 . 
(3) The strength of MI-FGSM can be described by two of its control parameters, number of iterations and momentum. The number of attack iterations makes it strong in whitebox settings (like I-FGSM), while momentum allows it to maintain NRD, enhancing the attack success rate in blackbox settings. Based on the above observations, we build our framework and propose to enhance the NRD directly to create strong adversarial examples for black-box attacks. 4. Neural Representation Distortion The Problem: Strong white-box attack algorithms [8, 5] consider already-known network parameters \u03b8 and perturb the input to create x\u2032, such that the example is misclassi\ufb01ed, i.e., F(x\u2032; \u03b8) \u0338= y. Since the perturbations are calculated using gradient directions that are speci\ufb01c to \u03b8, the resulting perturbed images x\u2032 do not generalize well to other networks [6, 21]. The attacks presented in [6, 26, 27] show relatively better transferability, however, these attacks also perturb input images along gradient directions \u2207xJ that are dependent on the ground-truth label y and the de\ufb01nition of the loss function J. This dependency limits the crossnetwork and cross-task transferability of these attacks. 3 \fOriginal (a) Sports Car l\u221e\u226416 (b) Racer l\u221e\u226416 (c) Racer l\u221e\u226416 (d) Loggerhead l\u221e\u226416 (e) Quilt Figure 3: VGG-16 output is shown for sample images. (a) represents benign example, while (b), (c), (d) and (e) show adversarial examples generated by FGSM, MI-FGSM, DIM and NRDM, respectively, against VGG-16 . All adversarial examples have distance l\u221e\u226416 from the original seed (a). Our Solution: In this paper, we propose to directly maximize the perceptual metric based on representation loss of deep feature activations by solving the following optimization problem: max x\u2032 F(x\u2032)|k \u2212F(x)|k subject to: \u2225x \u2212x\u2032\u2225\u221e\u2264\u03f5, (4) where F is DNN based classi\ufb01er, k is the internal representation layer and \u03f5 is the allowed perturbation budget. We apply a transformation T to input x at the \ufb01rst iteration (Algorithm 1) to create a neural representation difference of an adversarial w.r.t a benign example and then maximize the mean squared error of this difference with in a given perturbation budget. There can be different choices for T but in this work T simply adds random noise to the input sample, i.e our algorithm takes a random step at the \ufb01rst iteration. Random noise is convenient to attain a difference at the starting point of our algorithm and it is preferable to heuristic transformations that may cause methodical bias. We use the VGG-16 [20] conv3.3 feature map as the neural representation distortion. This choice is based on observations, reported in the recent study [21], that adversarial examples found in VGG space have high transferability. This is also evident in our experimentation (Table 4). Increasing the representation loss at multiple network layers did not notably increase attack success and adds a signi\ufb01cant computational overhead. Our attack algorithm does not rely on the cross-entropy loss or input labels. This makes it a generic algorithm, which can be used to attack any system using off-the-shelf features in their pipeline. This makes several popular computer vision tasks vulnerable to adversarial attacks, e.g., object detection and segmentation. 
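For concreteness, a minimal PyTorch-style sketch of the label-dependent baselines in Eqs. (1)-(3) is given below; the classifier handle and the [0, 1] input range are assumptions, and setting µ = 0 recovers I-FGSM. The explicit dependence on the label y and the loss J is exactly what the attack introduced next avoids.

```python
# Sketch of the MI-FGSM baseline (Eqs. 2-3); mu = 0 reduces it to I-FGSM.
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=16 / 255, steps=10, mu=1.0):
    alpha = eps / steps                           # step size alpha = eps / T
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)                       # accumulated momentum gradient
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)   # task-specific loss J(x', y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        g = mu * g + grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True)  # l1-normalized update
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)  # stay within the budget
    return x_adv.detach()
```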
Furthermore, our proposed approach is complementary to recent best-performing attack methods, such as MI-FGSM [6] and DIM [26]. Therefore, we demonstrate that it can be used alongside them, which further boosts the strength of adversaries. Our proposed method to maximize NRD for a given input sample is summarized in Algorithm 1. Algorithm 1: Neural Representation Distortion Method. Input: a classifier $F$, an input sample $x$, an input transformation $\mathcal{T}$, an internal network layer $k$, a perturbation budget $\epsilon$ and a number of iterations $T$. Output: an adversarial example $x'$ with $\|x' - x\|_\infty \le \epsilon$. 1: $g_0 = 0$; $x' = x$; 2: for $t = 0$ to $T-1$ do 3: if $t = 0$ then 4: $x' = \mathcal{T}(x)$ 5: end if 6: forward pass $x'_t$ through $F$ and compute the loss $L = \|F(x')|_k - F(x)|_k\|_2$; (5) 7: compute the gradients $g_t = \nabla_x L(x'_t, x)$; 8: take a signed gradient step $x'_{t+1} = x'_t + \epsilon \cdot \mathrm{sign}(g_t)$; (6) 9: project the adversary into the vicinity of $x$: $x'_{t+1} = \mathrm{clip}(x'_{t+1}, x - \epsilon, x + \epsilon)$; (7) 10: end for 11: return $x' = x'_T$. 5. Experiments 5.1. Evaluation Protocol In this section, we describe the datasets used for evaluation, the network architectures under attack, and the parameter settings for each attack algorithm. 5.1.1 Datasets We use the MNIST and CIFAR10 test sets and the ImageNet [18] subset provided by the NIPS 2017 security challenge (ImageNet-NIPS) to validate the effectiveness of the proposed attack against classification models. The MNIST and CIFAR10 test sets contain 10k samples each, while ImageNet-NIPS contains 1k image samples. For object detection, we use the MS-COCO [14] validation set, which contains 40.5k images. This is a multi-task dataset popular for image segmentation, object detection and image captioning tasks. We report adversarial attack performance against object detection; however, adversarial examples found on this dataset can be used to fool other related tasks, e.g., visual question answering. For segmentation, we use the CAMVID [4] test set to measure segmentation robustness against NRDM (Algorithm 1). This dataset contains 233 image samples extracted from video sequences of driving scenes. Table 1: Architectures of naturally trained convolutional networks for MNIST (model-m) and CIFAR10 (model-c). model-m: conv2d(32, 3x3), maxpool(2x2), conv2d(64, 3x3), maxpool(2x2), conv2d(64, 3x3), fc(64), softmax(10). model-c: 2*{conv2d(96, 3x3)}, conv2d(96, 3x3, s=2), 2*{conv2d(192, 3x3)}, conv2d(96, 3x3, s=2), 2*{conv2d(192, 3x3)}, conv2d(10, 3x3), avg-pool, softmax(10). '*' indicates the number of times a layer is repeated and 's' represents stride. Each convolutional layer is followed by a ReLU activation. Batch-norm is used after each convolutional layer in model-c. Layers whose outputs are used by NRDM are highlighted in bold. 5.1.2 Network Architectures Classification: We study eight models trained on the ImageNet dataset [18]. These models can be grouped into two categories. (a) Naturally trained: five of these models are trained only on benign examples. These include Inception-v3 (Inc-v3) [23], Inception-v4 (Inc-v4), Inception ResNet v2 (IncRes-v2) [22], ResNet v2-152 (Res-152) [9] and VGG-19 [20]. (b) Adversarially trained: the other three models, including Adv-v3 [12], Inc-v3ens3 and IncRes-v2ens [25], are adversarially trained and made publicly available. The specific details about these models can be found in [12, 25].
Attacks are created for naturally trained models, while tested against all of them. For classi\ufb01cation on smaller datasets, we study three models each for MNIST and CIFAR10. Among these models, two are naturally trained and one is adversarially trained using saddle point optimization [16]. Adversarial examples are created for naturally trained models, named model-m and model-c for MNIST and CIFAR10, respectively (see Table 1). These examples are subsequently transferred to adversarially trained Madry\u2019s models [16] and naturally trained ResNet models, named res-m and res-m res-c conv2d(16, 3x3) conv2d(16, 3x3) 1\u2217rb ( conv2d(16, 3x3) conv2d(16, 3x3) 3\u2217rb ( conv2d(16, 3x3) conv2d(16, 3x3) 1\u2217rb ( conv2d(32, 3x3) conv2d(32, 3x3, s=2) 3\u2217rb ( conv2d(32, 3x3) conv2d(32, 3x3, s=2) 1\u2217rb ( conv2d(64, 3x3) conv2d(64, 3x3, s=2) 3\u2217rb ( conv2d(64, 3x3) conv2d(64, 3x3, s=2) softmax(10) avg-pool(8x8) softmax(10) Table 2: Architectures of naturally trained residual networks for MNIST (res-m) and CIFAR10 (res-c). \u2018\u2217\u2019 indicates the number of times a layer is repeated. \u2018s\u2019 and \u2018rb\u2019 represent stride and residual block respectively. Each convolutional layer is followed by a ReLU activation. Batchnorm is used after each convolutional layer in res-c. res-c for MNIST and CIFAR10 respectively (see Table 2). Object Detection: To demonstrate cross-task and cross-dataset transferability, we study naturally trained RetinaNet [13] performance against adversarial examples found by the NRDM approach (Algorithm 1) on the MS-COCO validation set. Segmentation: We evaluate the robustness of naturally trained SegNet-basic [3] against adversarial examples generated by the NRDM approach (Algorithm 1) on the CAMVID [4] test set. 5.1.3 Attack Parameters FGSM is a single-step attack. Its step size is set to 16. In the case of R-FGSM, we take a step of size \u03b1=16/3 in a random direction and then a gradient step of size 16\u2212\u03b1 to maximize model loss. The attack methods, I-FGSM, MI-FGSM and DIM, are run for ten iterations. The step size for these attacks is set to 1.6, as per standard practice. The momentum decay factor for MI-FGSM is set to one. This means that attack accumulates all the previous gradient information to perform the current update and is shown to have the best success rate [6]. For DIM, the transformation probability is set to 0.7. In the case of FFF [17], we train the adversarial noise for 10K iterations to maximize the response at the activation layers of VGG-16 [20]. For the NRDM (Algorithm 1), we used the VGG-16 [20] conv3-3 feature map as the representation loss. Since NRDM maximizes loss w.r.t a benign example, it does not suffer from the over\ufb01tting problem. We run NRDM for the maximum number of 100 iterations. The transferability of different attacks is compared against the number of iterations in Fig. 4. MIFGSM and DIM quickly reach to their full potential within ten iterations. The strength of I-FGSM strength decreases, 5 \fAccuracy Naturally Trained Adv. Trained Inc-v3 Inc-v4 Res-152 IncRes-v2 VGG-19 Adv-v3 Inc-v3ens3 IncRes-v2ens T-1 95.3 97.7 96.1 100.0 85.5 94.3 90.2 96.9 T-5 99.8 99.8 99.9 100.0 96.7 99.4 95.5 99.8 Table 3: Model accuracies are reported on original data set ImageNet-NIPS containing benign examples only. T-1: top-1 and T-5: top-5 accuracies. Best and second best performances are colorized. Naturally Trained Adv. 
Trained Attack Inc-v3 Inc-v4 Res-152 IncRes-v2 VGG-19 Adv-v3 Inc-v3ens3 IncRes-v2ens T-1 T-5 T-1 T-5 T-1 T-5 T-1 T-5 T-1 T-5 T-1 T-5 T-1 T-5 T-1 T-5 Inc-v3 FGSM [7] 22.0\u2217 45.7\u2217 62.5 84.7 64.6 85.8 65.9 85.9 49.9 75.7 69.1 88.1 77.2 90.9 90.8 98.3 R-FGSM [25] 16.7\u2217 38.0\u2217 65.8 86.0 69.5 89.7 68.8 88.7 61.4 83.8 76.4 90.9 77.9 91.2 88.8 97.6 I-FGSM [8] 0.0\u2217 1.7\u2217 82.0 97.6 86.5 98.6 90.6 99.1 76.7 95.0 88.5 98.7 84.9 94.4 94.6 99.5 MI-FGSM [6] 0.0\u2217 1.5\u2217 47.1 78.8 47.1 84.5 52.5 81.9 47.3 76.7 71.6 89.8 73.8 90.7 88.3 98.0 TAP [27] 0.0\u2217 22.1 46.9 24.7 52.5 60.9 68.8 DIM [26] 0.2\u2217 1.3\u2217 27.8 63.1 42.1 75.2 34.6 65.4 40.2 71.4 65.2 87.9 68.3 89.6 86.3 97.5 Res-152 FGSM [7] 54.1 79.3 61.2 84.2 16.5\u2217 41.0\u2217 62.5 85.6 46.0 72.7 67.3 87.4 74.0 89.4 88.4 97.7 R-FGSM [25] 58.5 83.4 64.9 86.6 12.9\u2217 35.2\u2217 69.1 88.5 56.1 80.8 74.5 90.6 75.5 90.4 86.5 96.5 I-FGSM [8] 80.0 96.6 84.1 98.4 0.9\u2217 6.2\u2217 92.5 99.1 75.7 94.9 87.4 99.0 85.5 94.8 93.4 99.3 MI-FGSM [6] 43.5 76.8 49.9 79.2 0.9\u2217 5.1\u2217 54.8 82.4 46.8 76.0 72.6 90.7 71.1 90.1 86.0 97.5 TAP [27] 48.2 55.7 7.6\u2217 55.2 49.2 57.8 64.1 DIM [26] 20.1 51.2 22.0 54.6 0.6\u2217 4.2\u2217 24.6 57.3 33.3 62.6 53.5 82.5 55.2 83.1 74.4 94.1 IncRes-v2 FGSM [7] 61.7 83.8 69.6 87.6 68.4 89.6 50.1\u2217 73.9\u2217 52.3 76.5 72.0 89.6 79.0 91.6 90.0 97.7 R-FGSM [25] 66.6 87.0 71.8 89.4 73.5 91.5 46.1\u2217 71.3\u2217 62.9 84.1 75.5 91.2 79.3 91.5 87.4 97.3 I-FGSM [8] 62.8 88.4 68.3 91.9 77.2 94.8 1.1\u2217 2.6\u2217 71.4 91.7 85.6 97.5 83.8 95.6 89.8 98.4 MI-FGSM [6] 36.0 67.5 42.4 73.2 49.3 82.2 1.0\u2217 2.4\u2217 51.3 76.8 70.0 90.1 71.5 92.2 81.8 96.3 TAP [27] 25.9 33.2 53.5 4.8\u2217 60.5 79.1 87.8 DIM [26] 21.4 49.8 23.5 53.4 32.3 64.3 4.8\u2217 13.7\u2217 39.7 69.2 54.9 81.4 57.5 85.9 73.5 94.4 VGG16 FGSM [7] 30.1 56.0 34.0 58.0 36.6 65.2 42.2 66.1 9.1 27.9 48.8 72.6 53.5 79.5 72.8 91.1 R-FGSM [25] 41.5 67.9 45.1 72.5 49.2 78.4 54.9 77.7 12.9 35.8 63.9 86.2 63.5 85.3 77.1 93.0 I-FGSM [8] 69.4 93.0 75.3 94.5 79.5 95.7 87.2 97.9 18.3 56.1 82.2 97.5 80.9 93.7 91.5 99.1 MI-FGSM [6] 16.9 42.0 18.7 40.1 24.9 51.6 26.1 52.5 2.0 14.4 38.8 68.1 42.5 72.4 64.2 87.7 TAP [27] 23.9 28.1 23.9 32.3 38.8 41.9 63.8 DIM [26] 12.9 35.5 15.2 35.8 20.6 45.7 19.7 43.8 0.6 8.8 31.6 59.0 32.1 61.0 56.3 81.0 FFF [17] 61.7 80.7 60.8 78.7 72.8 90.1 76.1 90.1 44.0 68.0 79.6 93.1 83.1 93.1 92.8 98.5 NRDM 5.1 10.2 6.2 12.4 15.6 27.6 13.6 23.0 4.5 14.2 27.7 46.8 54.2 75.4 75.3 89.8 NRDM-DIM 4.9 10.5 5.7 12.0 16.0 28.6 12.7 22.6 5.0 14.0 28.7 45.7 52.9 73.8 74.0 89.5 Table 4: Model accuracies are reported under untargeted l\u221eadversarial attacks on ImageNet-NIPS with perturbation budget l\u221e\u226416 for pixel space [0-255]. T-1 and T-2 represent top-1 and top-5 accuracies, respectively. NRDM shows higher or competitive success rates for black-box models than FGSM [7], I-FGSM [8], MI-FGSM [6], TAP [27], DIM [26] and FFF [17]. NRDM-DIM combines input diversity as well as momentum with NRDM. \u2018\u2217\u2019 indicates the white-box attacks. Best and second best black-box attacks are colorized. while NRDM strength increases, with the number of attack iterations. 5.2. Input Transformations Different input transformations have been proposed to mitigate the adversarial effect but they can be easily broken in a white-box scenario. This is because an attacker can be adaptive and incorporate transformations into the adversary generation process. 
Even non-differentiable transformations can be by-passed by approximating them with an identity function [1]. However in a black-box scenario, the attacker does not have any knowledge of the transformation function along with the network architecture and its parameters. We test the strength of our adversarial attack against well studied transformations, including JPEG, total variation minimization (TVM) and median \ufb01ltering. We report our experimental results using the above-mentioned network architectures and input transformations in the following section. 6. Results Classi\ufb01cation: We report the performance of our attack against a number of CNN architectures on the ImageNetNIPS dataset in Table 4. The following insights can be drawn from our results. (1) In comparison to other stateof-the-art attacks, our approach consistently demonstrates a much higher transferability rate for naturally trained images. Speci\ufb01cally, NRDM attack have much higher trans6 \f5 10 15 20 25 30 35 40 45 50 100 Number of iterations 10 20 30 40 50 60 70 80 90 Accuracy I-FGSM MI-FGSM DIM NRDM Figure 4: Accuracy of Inc-v3 for adversarial examples generated on VGG-16 by I-FGSM and MI-FGSM, DIM and NRDM. NRDM\u2019s strength increases with number of iterations, in contrast to MI-FGSM and DIM. Datasets \u2193 Attack \u2193 Naturally Trained Adv. Trained model-m res-m Madry-M MNIST FGSM 42.28\u2217 53.15 95.96 I-FGSM 40.66\u2217 51.04 96.64 MI-FGSM 40.66\u2217 48.19 95.96 NRDM 4.39\u2217 23.54\u2217 97.56 model-c res-c Madry-C CIFAR10 FGSM 5.47\u2217 24.19 85.54 I-FGSM 2.52\u2217 36.81 87.00 MI-FGSM 2.52\u2217 16.56 85.71 NRDM 11.92\u2217 23.98 86.99 Table 5: Model accuracies under untargeted l\u221eadversarial attacks on MNIST and CIFAR10 with perturbation budget l\u221e\u226476.5 and l\u221e\u22648, respectively, for pixel space [0255], as per standard practice [16]. NRDM shows higher or competitive success rates for black-box models compared to FGSM, I-FGSM and MI-FGSM. \u2018\u2217\u2019 indicates the whitebox attacks. Best and second best attacks are colorized. Dataset \u2193 Metric \u2193 Naturally Trained Adv. Trained model-m res-m Madry-M MNIST Accuracy 99.30 98.88 98.40 model-c res-c Madry-C CIFAR10 Accuracy 85.44 80.56 87.62 Table 6: Model accuracies on original test datasets for MNIST and CIFAR10 containing benign examples only. Best and second best performances are colorized. ferability on naturally trained models, bringing down top-1 accuracy of IncRes-v2 [22] from 100.0% (see Table 3) to 12.7% (see Table 4). (2) In comparison, MI-FGSM [6] and DIM [26] perform slightly better on adversarially trained ensemble models [25], with NRDM showing competitive success rate. This is because the MI-FGSM and DIM methods use decision boundary information while, NRDM is agnostic to decision-level information about the classi\ufb01er. Method No Attack NRDM l\u221e\u22648 l\u221e\u226416 No Defense 79.70 52.48 32.59 JPEG (quality=75) 77.25 51.76 32.44 JPEG (quality=50) 75.27 52.45 33.16 JPEG (quality=20) 68.82 53.08 35.54 TVM (weights=30) 73.70 55.54 34.21 TVM (weights=10) 70.38 59.52 34.57 MF (window=3) 75.65 49.18 30.52 Table 7: Segnet-Basic accuracies on CAMVID test set with and without input transformations against NRDM. Best and second best performances are colorized. 
Method No Attack NRDM l\u221e\u22648 l\u221e\u226416 No Defense 53.78 22.75 5.16 JPEG (quality=75) 49.57 20.73 4.7 JPEG (quality=50) 46.36 19.89 4.33 JPEG (quality=20) 40.04 19.13 4.58 TVM (weights=30) 47.06 27.63 6.36 TVM (weights=10) 42.79 32.21 9.56 MF (window=3) 43.48 19.59 5.05 Table 8: mAP (with IoU = 0.5) of RetinaNet is reported on the MS-COCO validation set with and without input transformations against NRDM. Best and second best performances are colorized. (3) We also test with adversarial examples found using different network architectures (i.e., Inc-v3, Res-152, IncRes-v2, VGG16). Overall, we conclude that the adversarial examples found in VGG-16 [20] space have very high transferability. Figure 3 shows a visual comparison of adversaries found by different attack algorithms. On small datasets (MNIST and CIFAR10), similar to other attacks, the NRDM becomes ineffective against adversarially trained Madry models [16] (see Tables 6 and 5) in blackbox settings. This shows that \ufb01nding better methods for adversarial training is a way forward to defend against these attacks. Input transformations can somewhat help to mitigate the adversarial effect in black-box settings (see Table 9). TVM is the most effective against all the attacks, while median \ufb01ltering perform better against DIM [26]. JPEG is the least effective against untargeted adversarial attacks. Segmentation: The NRDM attack created on CAMVID [4] in VGG-16 feature space is able to bring down the per pixel accuracy of Segnet-Basic by 47.11% within l\u221e\u226416 (see Table 7 and Fig 5). JPEG and TVM transformations are slightly effective but only at the cost of accuracy on benign examples. Object Detection: RetinaNet [13] collapses in the presence of adversaries found by NRDM on the MS-COCO validation set using the VGG-16 [20] feature space. Its mean average precision (mAP) with 0.5 intersection over union (IOU) drops from 53.78% to 5.16% under perturba7 \fNo Attack FGSM [7] R-FGSM [25] I-FGSM [8] MI-FGSM [6] DIM [26] NRDM T-1 T-5 T-1 T-5 T-1 T-5 T-1 T-5 T-1 T-5 T-1 T-5 T-1 T-5 No Defense 95.3 99.8 30.1 56.0 41.5 67.9 69.4 93.0 16.9 42.0 12.9 35.5 5.1 10.2 JPEG (quality=75) 93.9 99.5 30.4 55.4 41.8 67.0 69.7 92.3 18.4 42.0 13.5 33.3 5.4 12.6 JPEG (quality=50) 91.3 99.3 31.0 55.4 40.5 65.3 68.7 91.8 18.1 42.1 13.1 34.4 6.5 12.8 JPEG (quality=20) 86.0 97.6 29.9 53.9 38.0 64.6 69.8 90.9 18.4 42.1 14.1 34.2 8.4 18.7 TVM (weights=30) 93.1 99.4 30.6 56.2 41.7 67.7 73.7 94.5 17.2 42.1 14.9 33.5 9.8 18.5 TVM (weights=10) 88.8 97.6 32.1 57.3 43.6 69.4 73.9 93.4 19.8 45.7 15.8 37.1 24.0 40.5 MF (window=3) 93.2 99.1 24.3 45.5 36.1 62.3 62.8 89.9 16.2 36.8 18.8 42.1 9.9 17.9 Table 9: Inc-v3 accuracy is reported with and without input transformations. Adversarial examples are generated for VGG-16 in white-box setting by FGSM, R-FGSM, I-FGSM, MI-FGSM, DIM and NRDM under perturbation budget l\u221e\u226416 and then transferred to Inc-v3. T-1 and T-2 represent top-1 and top-5 accuracies, respectively. Best and second best performances are colorized. (a) Original (b) Prediction for Original l\u221e\u226416 (c) Adversarial (d) Prediction for Adversarial Figure 5: Segnet-Basic output is shown for different images. (a) is the original image, while (b) shows predictions for the original image. (c) is the adversary found by NRDM algorithm 1, while (d) shows predictions for the adversarial image. The perturbation budget is written on the top of adversarial image. 
(a) Original l\u221e\u22648 (b) Adversarial (c) Original l\u221e\u226416 (d) Adversarial Figure 6: RetinaNet detection results are shown for different images. (a) and (c) show detection for the original images, while (b) and (d) show detection for adversaries found using NRDM algorithm 1. The perturbation budget is written on the top of each adversarial image. tion budget l\u221e\u226416 (see Table 8 and Fig 6). TVM is relatively more effective compared to other transforms against the NRDM attack. 7." + }, + { + "url": "http://arxiv.org/abs/1807.01216v2", + "title": "Local Gradients Smoothing: Defense against localized adversarial attacks", + "abstract": "Deep neural networks (DNNs) have shown vulnerability to adversarial attacks,\ni.e., carefully perturbed inputs designed to mislead the network at inference\ntime. Recently introduced localized attacks, Localized and Visible Adversarial\nNoise (LaVAN) and Adversarial patch, pose a new challenge to deep learning\nsecurity by adding adversarial noise only within a specific region without\naffecting the salient objects in an image. Driven by the observation that such\nattacks introduce concentrated high-frequency changes at a particular image\nlocation, we have developed an effective method to estimate noise location in\ngradient domain and transform those high activation regions caused by\nadversarial noise in image domain while having minimal effect on the salient\nobject that is important for correct classification. Our proposed Local\nGradients Smoothing (LGS) scheme achieves this by regularizing gradients in the\nestimated noisy region before feeding the image to DNN for inference. We have\nshown the effectiveness of our method in comparison to other defense methods\nincluding Digital Watermarking, JPEG compression, Total Variance Minimization\n(TVM) and Feature squeezing on ImageNet dataset. In addition, we systematically\nstudy the robustness of the proposed defense mechanism against Back Pass\nDifferentiable Approximation (BPDA), a state of the art attack recently\ndeveloped to break defenses that transform an input sample to minimize the\nadversarial effect. Compared to other defense mechanisms, LGS is by far the\nmost resistant to BPDA in localized adversarial attack setting.", + "authors": "Muzammal Naseer, Salman H. Khan, Fatih Porikli", + "published": "2018-07-03", + "updated": "2018-11-19", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Deep neural network architectures achieve remarkable performance on critical applications of machine learning including sensitive areas such as face detection [16], malware detection [17] and autonomous driving [11]. However, the vulnerability of DNNs to adversarial examples limit their wide adoption in security critical applications [1]. It has been shown that adversarial examples can be created by minimally modifying the original input samples such that a DNN mis-classi\ufb01es them with high con\ufb01dence. DNNs are often criticized as black-box models; adversarial examples raise further concerns by highlighting blind spots of DNNs. At the same time, adversarial phenomena provide an opportunity to understand DNN\u2019s behavior to minor perturbations in visual inputs. Methods that generate adversarial examples either modify each image pixel by a small amount [24, 8, 14, 13] often imperceptible to human vision or few image pixels by a large visible amounts [20, 22, 4, 12, 7]. 
Pixel attack [22] changes few image pixels, but it requires small images (e.g., 32\u00d732) and does not provide control over noise location. Small noise patches were introduced by [20] in the form of glasses to cover human face to deceive face recognition systems. Similarly, Evtimov et al. [7] added noise patches as rectangular patterns on top of traf\ufb01c signs to cause misclassi\ufb01cation. Very recently, localized adversarial attacks, i.e., Adversarial patch [4] and LaVAN [12] have been introduced that can be optimized for triplets (misclassi\ufb01cation con\ufb01dence, target class, perturbed location). These practical attacks have demonstrated high strength and can easily bypass existing defense approaches. Therefore they present a signi\ufb01cant challenge for existing deep learning systems. Contributions: In this work, we study the behavior of localized adversarial attacks and propose an effective mechanism to defend against them (see Fig. 1). LaVAN and Adversarial patch add adversarial noise without affecting the original object in the image, and to some extent, they are complementary to each other. In an effort towards a strong defense against these attacks, this paper contributes as follows: \u2022 Motivated by the observation that localized adversarial attacks introduce high-frequency noise, we proarXiv:1807.01216v2 [cs.CV] 19 Nov 2018 \f(a) Impala (94%) (b) Ice Lolly (99%) (c) Impala (94%) (d) Squirrel Monkey (58%) (e) Toaster (91%) (f) Squirrel Monkey (57%) Figure 1: Inception v3 [23] con\ufb01dence scores are shown for example images. (a) and (d) represent benign examples from ImageNet [18], (b) and (e) are adversarial examples generated by LaVAN [12] and Adversarial patch [4] respectively, (c) and (f) show transformed adversarial images using our proposed LGS. As illustrated, LGS restores correct class con\ufb01dences. pose a transformation called Local Gradient Smoothing (LGS). LGS \ufb01rst estimates region of interest in an image with the highest probability of adversarial noise and then performs gradient smoothing in only those regions. \u2022 We show that by its design, LGS signi\ufb01cantly reduces gradient activity in the targeted attack region and thereby showing the most resistance to BPDA [2], an attack speci\ufb01cally designed to bypass transformation based defense mechanisms. \u2022 Our proposed defense outperforms other state-of-theart methods such as Digital watermarking, TVM, JPEG compression, and Feature squeezing in localized adversarial attacks setting [12, 4]. 2. Related Work Among the recent localized adversarial attacks, the focus of adversarial patch [4] is to create a scene independent physical-world attack that is agnostic to camera angles, lighting conditions and even the type of classi\ufb01er. The result is an image independent universal noise patch that can be printed and placed in the classi\ufb01er\u2019s \ufb01eld of view in a white box (when deep network model is known) or black box (when deep network model is unknown) setting. However, the size of the adversarial patch should be 10% of the image for the attack to be successful in about 90% cases [12]. This limitation was addressed by Karmoon et al. [12], who focused on creating localized attack covering as little as 2% of the image area instead of generating a universal noise patch. In both of these attacks [4, 12], there is no constraint on noise, and it can take any value within image domain, i.e., [0, 255] or [0, 1]. 
Defense mechanisms against adversarial attacks can be divided into two main categories: (a) methods that modify the DNN by using adversarial training [25] or gradient masking [15], and (b) techniques that modify the input sample with a smoothing function to reduce the adversarial effect without changing the DNN [6, 5, 9, 26]. For example, JPEG compression was first presented as a defense by [6] and recently studied extensively by [5, 19]. [26] presented feature squeezing methods, including bit-depth reduction, median filtering and Gaussian filtering, to detect and defend against adversarial attacks. Guo et al. [9] considered smoothing input samples by total variance minimization along with JPEG compression and image quilting to reduce the adversarial effect. Our work falls into the second category, as we also transform the input sample to defend against localized adversarial attacks. However, as we will demonstrate through experiments, the proposed defense mechanism provides better defense against localized attacks compared to previous techniques. The paper is organized as follows: Section 3 discusses the localized adversarial attacks, LaVAN and Adversarial patch, in detail. Section 4 presents our defense approach (LGS) against these attacks. We discuss other related defense methods in Section 5.2. Section 5 demonstrates the effectiveness of the proposed method, LGS, in comparison to other defense methods against LaVAN and adversarial patch attacks. Section 5.3 discusses BPDA and the resilience of different defense methods against it. Section 6 concludes the draft by discussing possible future directions. 3. Adversarial Attacks In this section, we provide a brief background on adversarial attacks and explain how LaVAN [12] and Adversarial patch [4] differ from traditional attacks. 3.1. Traditional Attacks The search for adversarial examples can be formulated as a constrained optimization problem. Given a discriminative classifier $F(y \mid x)$, an input sample $x \in \mathbb{R}^n$, a target class $\bar{y}$ and a perturbation budget $\epsilon$, an attacker seeks to find a modified input $x' = x + \delta \in \mathbb{R}^n$ with adversarial noise $\delta$ that increases the likelihood of the target class $\bar{y}$ by solving the following optimization problem: $\max_{x'} \; F(y = \bar{y} \mid x') \quad \text{subject to:} \; \|x - x'\|_p \le \epsilon$. (1) This formulation produces well-camouflaged adversarial examples but changes each pixel in the image. Defense methods such as JPEG compression [6, 5], total variance minimization [9] and feature squeezing [26] are effective against such attacks, especially when the perturbation budget $\epsilon$ is not too high. 3.2. LaVAN LaVAN [12] differs from the formulation presented in Eq. 1 as it confines the adversarial noise $\delta$ to a small region, usually away from the salient object in the image. It uses the following spatial mask to replace a small area with noise, as opposed to the noise addition performed in traditional attacks: $x' = (1 - m) \odot x + m \odot \delta, \quad m \in \mathbb{R}^n$, (2) where $\odot$ is the Hadamard product, $m$ is the spatial mask and $\delta$ represents the adversarial noise. They also introduce a new objective function where, at each iteration, the optimization algorithm takes a step away from the source class and towards the target class simultaneously: $\max_{x'} \; F(\bar{y} \mid x') - F(y \mid x') \quad \text{subject to:} \; \|x - x'\|_\infty \le \epsilon, \; 0 \le \epsilon \le 1$, (3) where $x'$ is given by Eq. 2.
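For illustration, the masking step of Eq. (2) can be written as below; the (C, H, W) tensor layout, the 299x299 input size and the 42x42 border-placed patch are assumptions borrowed from the experimental setup described later, not part of the attack definition itself.

```python
import torch


def apply_localized_noise(x, delta, top, left, size):
    """Eq. (2): x' = (1 - m) * x + m * delta, with a spatial mask m that is
    1 only on a size-by-size patch at (top, left) and 0 elsewhere."""
    m = torch.zeros_like(x)
    m[:, top:top + size, left:left + size] = 1.0
    return (1 - m) * x + m * delta


# illustrative usage: a 42x42 noise patch near the border of a 299x299 image
x = torch.rand(3, 299, 299)
delta = torch.rand_like(x)      # unconstrained noise values within [0, 1]
x_adv = apply_localized_noise(x, delta, top=5, left=5, size=42)
```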
3.3. Adversarial Patch Adversarial examples created using the methodology presented in Eq. 1 cannot be used in physical-world attacks because the adversarial noise loses its effect under different camera angles, rotations and lighting conditions. Athalye et al. [3] introduced an Expectation over Transformation (EoT) attack to create robust adversarial examples that are invariant to a chosen set of transformations. Brown et al. [4] built upon Athalye's work and used EoT to create a scene-independent, robust noise patch confined to a small region that can be printed and placed in the classifier's field of view to cause misclassification. To generate an adversarial patch $p'$, [4] proposed a patch operator $A(p, x, l, t)$ for a given image $x$, patch $p$, location $l$ and set of transformations $t$. During optimization, the patch operator $A$ applies a set of transformations to the patch $p$ and then projects it onto the image $x$ at a location $l$ to increase the likelihood of the target class $\bar{y}$: $p' = \max_{p} \; \mathbb{E}_{x \sim X,\, t \sim T,\, l \sim L}\left[F(\bar{y} \mid A(p, x, l, t))\right]$, (4) where $X$ represents the training images, $T$ is a distribution over transformations, and $L$ is a distribution over locations in the image. 4. Defense: Local Gradients Smoothing Both of the above-discussed attacks [12, 4] introduce high-frequency noise concentrated at a particular image location, and the strength of such noise becomes very prominent in the image gradient domain. We propose that the effect of such adversarial noise can be reduced significantly by suppressing high-frequency regions without affecting the low-frequency image areas that are important for classification. An efficient way to achieve this is to project a scaled, normalized gradient magnitude map onto the image to directly suppress high-activation regions. To this end, we first compute the magnitude of the first-order local image gradients as follows: $\|\nabla x(a, b)\| = \sqrt{\left(\frac{\partial x}{\partial a}\right)^2 + \left(\frac{\partial x}{\partial b}\right)^2}$, (5) where $a, b$ denote the horizontal and vertical directions in the image plane. The range of the gradient magnitude calculated using the above equation is normalized for consistency across an image as follows: $g(x) = \frac{\|\nabla x(a, b)\| - \|\nabla x(a, b)\|_{\min}}{\|\nabla x(a, b)\|_{\max} - \|\nabla x(a, b)\|_{\min}}$. (6)
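A small sketch of Eqs. (5)-(6) follows; NumPy's central-difference gradient is used as one possible realization of the first-order image gradient, a choice the text does not pin down, and a single-channel image is assumed.

```python
import numpy as np


def normalized_gradient_magnitude(x):
    """Eqs. (5)-(6): first-order gradient magnitude of a 2-D grayscale image,
    min-max normalized to [0, 1]."""
    dx = np.gradient(x, axis=1)          # derivative along a (horizontal)
    dy = np.gradient(x, axis=0)          # derivative along b (vertical)
    mag = np.sqrt(dx ** 2 + dy ** 2)     # Eq. (5)
    return (mag - mag.min()) / (mag.max() - mag.min() + 1e-12)  # Eq. (6)
```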
Applying this operation at a global image level, however, introduces image structural loss that causes a drop in classi\ufb01er\u2019s accuracy on benign examples. To minimize this effect, we design a block-wise approach where gradient intensity is evaluated within a local window. To this end, we \ufb01rst divide the gradient magnitude map into a total of K overlapping blocks of same size (\u03c4) and then \ufb01lter these blocks based on a threshold (\u03b3) to estimate highest activation regions which also have the highest likelihood of adversarial noise. This step can be represented as follows: g\u2032 h,w = W(g(x), h, w, \u03c4, o) \u2208R\u03c4, \u02c6 gh,w = ( g\u2032 h,w, if 1 |g\u2032 h,w| P i P j g\u2032 h,w(i, j) > \u03b3 0, otherwise. (8) where | \u00b7 | denotes the cardinality of each patch, o denotes the patch overlap, W(\u00b7) represent the windowing operation, h, w denote the vertical and horizontal components of the top left corner of the extracted window, respectively. We set the block size \u03c4 = 15\u00d715 with 5\u00d75 overlap and threshold is 0.1 in all of our experiments. The updated gradient blocks represented as \u02c6 gh,w are then collated to recreate the full gradient image: \u00af g = W\u22121({\u02c6 gh,w}K 1 ). Figure 2 shows the effect of windowing search on gradients magnitude maps. We further demonstrated LGS ef\ufb01ciency on challenging images in supplementary material. 5. Experiments 5.1. Protocol and Results Overview We used Inception v3 model [23] to experiment with various attack and defense mechanisms in all of our experiments. All attacks are carried out in white-box settings. We consider the validation set available with Imagenet-2012 dataset in our experiments. This set consists of a total of 50k images. We report top-1 accuracy of classi\ufb01er. Results are summarized in tables 1, 2 and 3. LaVAN [12] can be optimized for triplets (target, con\ufb01dence, location) but it is highly sensitive to noise location. Adversary loses its effect with even a small change to the pixel location. To reduce the computational burden and conduct experiments on a large scale, we randomly chose noise location along border areas of the image because they have the least probability to cover the salient object. 
We ran \fNo Attack 42x42 noise patch covering \u223c2% of image 52x52 noise patch covering \u223c3% of image 60x60 noise patch covering \u223c4% of image No Defense 75.61% 11.00% 2.79% 0.78% LGS [lambda=2.3] 71.05% 70.90% 69.84% 69.37% LGS [lambda=2.1] 71.50% 70.80% 69.54% 68.56% LGS [lambda=1.9] 71.84% 70.40% 68.84% 66.98% LGS [lambda=1.7] 72.30% 69.55% 67.32% 63.38% LGS [lambda=1.5] 72.72% 67.68% 64.13% 55.67% DW 52.77% 67.70% 66.19% 64.57% MF [window=3] 70.59% 63.90% 62.15% 59.81% GF [window=5] 61.75% 59.52% 57.68% 55.29% BF [window=5] 65.70% 61.53% 58.70% 55.59% JPEG [quality=80] 74.35% 18.14% 6.23% 2.06% JPEG [quality=60] 72.71% 25.69% 11.86% 4.85% JPEG [quality=40] 71.20% 37.10% 23.26% 12.73% JPEG [quality=30] 70.04% 45.00% 33.72% 22.04% JPEG [quality=20] 67.51% 52.84% 46.25% 37.19% JPEG [quality=10] 60.25% 53.10% 48.73% 43.59% TMV [weights=10] 70.21% 14.48% 4.64% 1.73% TMV [weights=20] 72.85% 13.24% 3.78% 1.17% TMV [weights=30] 73.85% 12.79% 3.53% 1.04% BR [depth=1] 39.85% 25.93% 15.14% 9.73% BR [depth=2] 64.61% 16.32% 6.15% 2.68% BR [depth=3] 72.83% 13.4% 3.89% 1.25% Table 1: Summary of Inception v3 performance against LaVAN attack on ImageNet validation set with and without defenses including local gradient smoothing (LGS), digital watermarking (DW), median \ufb01ltering (MF), Gaussian \ufb01ltering (GF), bilateral \ufb01ltering (BF), JPEG compression, total variance minimization (TVM) and bit-depth reduction (BR). Bold numbers represent the best accuracy of a certain defense against LAVAN attack. 1000 iterations of attack optimization per image. We terminate the optimization early if classi\ufb01er mis-classify with con\ufb01dence above than or equal to 99% or we let it run for at max 1000 iterations and attack is considered to be successful if the image label is changed to a random target (not equal to the true object class). Inceptionv3 model accepts 299x299 image as an input. Three adversarial noise masks with size 42x42 (\u223c2% of the image), 52x52 (\u223c3% of the image) and 60x60 (\u223c4% of the image) were applied. Table 1 presents summary of all the results. For the case of adversarial patch [4] attack, placing a patch of size 95x95 ( 10% of the image) randomly on all Imagenet validation set was not possible because it would cover most of salient objects details in an image. So we carefully created 1000 adversarial examples that model misclassi\ufb01ed as a toaster with a con\ufb01dence score at least 90%. We then applied all the defense techniques and reported results in Table 2. Figure 3 shows runtime of defense methods to process ImageNet [18] validation set. We used optimized python implementations. Speci\ufb01cally, we employed JPEG from Pillow, Total variance minimization (TVM), and Bilateral \ufb01ltering (BF) from scikit-image, Median \ufb01ltering (MF) and Gaussian \ufb01ltering (GF) from scipy, and LGS and Bit Depth Reduction (BR) are written in python 3.6 as well. All experiments were conducted on desktop windows computer equipped with Intel i7-7700k quad-core CPU clocked at 4.20GHz and 32GB RAM. Defense None LGS DW MF JPEG TVM BR Adversarial Patch 0% 90.5% 80% 49.10% 45% 1% 0% Table 2: Accuracy of Inception v3 against adversarial patch attack with and without defense. The size of adversarial noise is 95x95 covering \u223c10% of image. LGS is used with \u03bb = 2.3, DW in blind defense scenario, MF with window equal to 3, JPEG compression with quality equal to 30, TVM with weights equal to 10 and BR with depth 3. 
This hyperparameter choice was made for fair comparison such that the performance on benign examples from ImageNet is approximately the same (\ufb01rst column of Table 1). Results are reported for 1000 adversarial examples misclassi\ufb01ed as toaster with con\ufb01dence above than 90%. 5.2. Comparison with Related Defenses In this section, we report comparisons of our approach with other recent defense methods that transform the input sample to successfully reduce the adversarial effect. The compared methods include both global and local techniques. Note that our method processes image locally so it \f735 4172 1550 270 11640 170 4235 70 Time (in seconds) Defense Methods Defense Run Time Comparison LGS DW MF GF BF JPEG TVM BR Figure 3: Computational cost comparison of defense methods to process 50k ImageNet validation images. Graph is shown in log scale for better visualization with actual processing times written on the top of each bar in seconds. has advantage over other defenses like JPEG, MF, TVM and BR that process image globally. First, we provide a brief description of the competing defenses which will allow us to elaborate further on the performance trends in Tables 1, 2 and 3. 5.2.1 Digital Watermarking Hayes et.al [10] presented two, non-blind and blind, defense strategies to tackle the challenge of localized attacks [12, 4]. Non-blind defense considers a scenario, where defender has the knowledge of adversarial mask location. This is unlikely scenario in the context of adversarial attacks because threat is over immediately, once the adversary provides the mask location. Localized attacks have the ability to change the attention of classi\ufb01er from the original object to adversarial mask. In their blind defense, authors [10] exploited the attention mechanism by \ufb01rst \ufb01nding the mask location using saliency map and then processing that area before inference. Using saliency map to detect adversarial mask location is the strength of this defense but at the same time its also the weakness of defense because on benign examples, saliency map will give the location of original object and hence processing original object will decrease the performance on clean examples. Authors [10] reported blind defense performance to protect VGG19 [21] on only 400 randomly selected images with 12% accuracy drop on clean images. We have tested this defense on imagenet validation set [18] (50k images). This method has the second best accuracy on adversarial examples after LGS but its accuracy on clean examples expectedly dropped by a large margin (22.8%). Tables 1, 2 and 3 summarizes the performance of digital watermarking [10]. 5.2.2 JPEG Compression [6, 5, 19] extensively studied JPEG compression to defend against adversarial attacks. This way high-frequency components are removed that are less important to human vision by using Discrete Cosine Transform (DCT). JPEG performs compression as follows: \u2022 Convert an RGB image Y CbCr color space, where Y and Cb, Cr represent luminance and chrominance respectively. \u2022 Down-sample the chrominance channels and apply DCT to 8 \u00d7 8 blocks for each channel. \u2022 Perform quantization of frequency amplitudes by dividing with a constant and rounding off to the nearest integer. As illustrated in Table 1, image quality decreases as the degree of compression increases which in turn decreases accuracy on benign examples. JPEG compression is not very effective against localized attacks, and its defending ability decreases a lot against BPDA. 
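As a reference for the JPEG baseline just described, the re-compression round trip (the runtime comparison above notes that Pillow was used for JPEG) looks roughly like this; the uint8 RGB array format is an assumption, and the default quality of 30 is taken from the reported settings.

```python
import io

import numpy as np
from PIL import Image


def jpeg_defense(x, quality=30):
    """JPEG re-compression as an input transformation: an encode/decode round
    trip through Pillow. x is assumed to be a uint8 RGB array of shape (H, W, 3)."""
    buf = io.BytesIO()
    Image.fromarray(x).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf).convert("RGB"))
```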
JPEG performance comparison is shown in Tables 1, 2 and 3 and Figure 4. 5.2.3 Feature Squeezing The main idea of feature squeezing [26] is to limit the explorable adversarial space by reducing resolution either by using bit depth reduction or smoothing \ufb01lters. We found that bit reduction is not effective against localized attacks, however smoothing \ufb01lter including Gaussian \ufb01lter, median \ufb01lter, and bilateral \ufb01lter reduces localized adversarial effect with reasonable accuracy drop on benign examples. Among smoothing \ufb01lters, median \ufb01lter outperforms Gaussian and bilateral \ufb01lters. Feature squeezing performance is shown in Tables 1, 2 and 3 and Figure 4. 5.2.4 Total Variance Minimization (TVM) Guo et al. [9] considered smoothing adversarial images using TVM along with JPEG compression and image quilting. TVM has the ability to measure small variations in the image, and hence it proved effective in removing small perturbations. As illustrated in Table 1, TVM becomes ineffective against large concentrated variations introduced by the localized attacks. Further comparisons are shown in Tables 2 and 3 and Figure 4. 5.3. Resilience to BPDA BPDA [2] is built on the intuition that transformed images by JPEG or TVM should look similar to original images, that is, T (x) \u2248x. BPDA approximate gradients for non-differentiable operators with combined forward propagation through operator and DNN while ignoring operator \f(a) Dragon\ufb02y (99%) (c) Cardoon (94%) (e) Cardoon (91%) (g) Cardoon (89%) (i) Dragon\ufb02y (70%) (k) Dragon\ufb02y (98%) (m) Dragon\ufb02y (99%) (b) Toaster (94%) (d) Sandpiper (89%) (f) Sandpiper (45%) (h) Sandpiper (55%) (j) Sandpiper (28%) (l) Toaster (90%) (n) Toaster (92%) Figure 4: Inception v3 con\ufb01dence score is shown on example images. (a,b) represent adversarial examples generated by LaVAN and adversarial patch respectively, (c,d) show transformed adversarial images using LGS with lambda equal to 2.3 respectively, (e,f) show transformed adversarial images using DW processing method respectively, (g,h) show transformed adversarial images using median \ufb01lter with window size 3 respectively, (i,j) show transformed adversarial images using JPEG with quality 30 respectively, (k,l) show transformed adversarial images using TVM with weights equal to 10 respectively, and (m,n) show transformed adversarial images using BR with depth 3. during the backward pass. This strategy allows BPDA to approximate true gradients and thus bypassing the defense. In the traditional attack setting like Projected Gradient Descent (PGD) [13], the explorable space available to BPDA is Rn because it can change each pixel in the image. In localized attack setting explorable space reduces to Rm<