diff --git "a/abs_29K_G/test_abstract_long_2405.04781v1.json" "b/abs_29K_G/test_abstract_long_2405.04781v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.04781v1.json" @@ -0,0 +1,340 @@ +{ + "url": "http://arxiv.org/abs/2405.04781v1", + "title": "CourseGPT-zh: an Educational Large Language Model Based on Knowledge Distillation Incorporating Prompt Optimization", + "abstract": "Large language models (LLMs) have demonstrated astonishing capabilities in\nnatural language processing (NLP) tasks, sparking interest in their application\nto professional domains with higher specialized requirements. However,\nrestricted access to closed-source LLMs via APIs and the difficulty in\ncollecting massive high-quality datasets pose obstacles to the development of\nlarge language models in education fields of various courses. Given these\nchallenges, we propose CourseGPT-zh, a course-oriented education LLM that\nsupports customization and low-cost deployment. To address the\ncomprehensiveness and diversity requirements of course-specific corpora, we\ndesign a high-quality question-answering corpus distillation framework\nincorporating prompt optimization, which effectively mines textbook knowledge\nand enhances its diversity. Moreover, considering the alignment of LLM\nresponses with user needs, a novel method for discrete prompt optimization\nbased on LLM-as-Judge is introduced. During optimization, this framework\nleverages the LLM's ability to reflect on and exploit error feedback and\npatterns, allowing for prompts that meet user needs and preferences while\nsaving response length. Lastly, we obtain CourseGPT-zh based on the open-source\nLLM using parameter-efficient fine-tuning. 
Experimental results show that our\ndiscrete prompt optimization framework effectively improves the response\nquality of ChatGPT, and CourseGPT-zh exhibits strong professional capabilities\nin specialized knowledge question-answering, significantly outperforming\ncomparable open-source models.", + "authors": "Zheyan Qu, Lu Yin, Zitong Yu, Wenbo Wang, Xing zhang", + "published": "2024-05-08", + "updated": "2024-05-08", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "Parameter AND Efficient AND Fine AND Tuning", + "gt": "Large language models (LLMs) have demonstrated astonishing capabilities in\nnatural language processing (NLP) tasks, sparking interest in their application\nto professional domains with higher specialized requirements. However,\nrestricted access to closed-source LLMs via APIs and the difficulty in\ncollecting massive high-quality datasets pose obstacles to the development of\nlarge language models in education fields of various courses. Given these\nchallenges, we propose CourseGPT-zh, a course-oriented education LLM that\nsupports customization and low-cost deployment. To address the\ncomprehensiveness and diversity requirements of course-specific corpora, we\ndesign a high-quality question-answering corpus distillation framework\nincorporating prompt optimization, which effectively mines textbook knowledge\nand enhances its diversity. Moreover, considering the alignment of LLM\nresponses with user needs, a novel method for discrete prompt optimization\nbased on LLM-as-Judge is introduced. During optimization, this framework\nleverages the LLM's ability to reflect on and exploit error feedback and\npatterns, allowing for prompts that meet user needs and preferences while\nsaving response length. Lastly, we obtain CourseGPT-zh based on the open-source\nLLM using parameter-efficient fine-tuning. 
Experimental results show that our\ndiscrete prompt optimization framework effectively improves the response\nquality of ChatGPT, and CourseGPT-zh exhibits strong professional capabilities\nin specialized knowledge question-answering, significantly outperforming\ncomparable open-source models.", + "main_content": "Introduction Large language models, such as ChatGPT [1], GPT4 [2], LLaMA [3], and ChatGLM [4], have demonstrated remarkable performance and generalization capabilities across various NLP tasks, significantly expanding the boundaries of language applications. With the increase in model parameters and pretraining corpus size, capabilities such as logical reasoning, instruction following, and In-Context Learning [5],[6],[7] have emerged. Based on these breakthroughs, the latest LLMs have shown profound understanding and professionalism in various fields, such as virtual assistants, text generation, and code annotation. Utilizing LLMs to disrupt industries has become an inevitable trend, including the field of education[8],[9]. Recently, there has been a desire to leverage the extensive knowledge of large language models to construct domainspecific LLMs in various vertical fields, which require greater expertise and accuracy. To address the issue that general-purpose LLMs cannot meet specific domain requirements, a variety of methods have been proposed. For instance, steering foundation models through role-playing or prompt engineering have been used to tap into the knowledge learned during the pre-training phase, which can unleash their deep-seated expert capabilities [10],[11]. Other approaches involve pretraining or continual pre-training with domain-specific corpus to incorporate domainspecific knowledge into large language models [8],[12],[13],[14]. In addition, to reduce the hallucination during the response generation, retrieval augmentation has also been applied to provide reliable references [8],[15]. 
(\u2217Xing zhang is the corresponding author. arXiv:2405.04781v1 [cs.CL] 8 May 2024) Based on these approaches, successful implementations such as MedAgents [10], ChatLaw [15], EduChat [8], and FinGPT [16] have demonstrated the potential of LLMs to provide professional responses and insights in various vertical fields, including healthcare, law, finance, and education. However, constructing domain-specific large language models is still labor-intensive and expensive. To begin with, for closed-source large language models like ChatGPT, the high costs of text generation and fine-tuning services are often prohibitive. As for open-source LLMs, there is a significant gap in parameter size and pre-training corpus compared to closed-source LLMs, resulting in significantly weaker general capabilities such as reasoning and domain-specific knowledge extraction [9],[17],[18],[19]. Faced with complex professional terminology, open-source large language models often fail to meet user requirements for domain knowledge. In this context, a large amount of in-domain pre-training corpus or expert-curated data is often required to enhance professionalism in vertical fields. Although various existing works have developed specialized datasets and evaluation criteria for fields such as philosophy, medicine, and law, as well as for scenarios including network operation and geospatial semantics [17],[18],[19],[20],[21], there is still a considerable demand for manual effort in constructing datasets for courses or privatized scenarios that are not covered by these datasets. This challenge is particularly pronounced when accessible corpora in the field are scarce, making it extremely difficult to construct tens of thousands of specialized instruction examples. Furthermore, the majority of models are primarily pre-trained on English corpora, which may lead to a degradation in their performance in other languages [22],[23].
In addition to the challenges of constructing specialized corpora, the high cost of inference incurred by open-source large language models cannot be overlooked. Compared to the concise responses provided by humans, the responses generated by large language models, while more comprehensive, also include a significant amount of redundant information, resulting in unnecessary inference overhead. Typically, to further align the responses of large language models with specific preferences, methods such as RLHF (Reinforcement Learning from Human Feedback)[24] are introduced for fine-tuning models. However, this approach still requires a substantial amount of human-labeled preference data. Consequently, promoting alignment between the responses and human preferences, as well as reducing inference costs, is also a key factor in fostering the widespread adoption of open-source large models in specialized vertical domains. Targeted at these issues, we propose CourseGPT-zh, an open-source education large language model, and design a pipeline for constructing high-quality question-answer pairs through mining textbook knowledge. By utilizing the constructed diverse question-answer pairs, we perform parameter-efficient fine-tuning on the open-source model to mitigate the resource constraints required for deployment. In addition, in the data construction process, we incorporate LLM-as-Judge and utilize discrete prompt optimization to generate optimal prompts, steering ChatGPT to produce high-quality training data aligned with human preferences. Through this method, we ensure high-quality responses while reducing the deployment costs associated with response length. Our main contributions can be summarized as: \u2022 In this paper, we propose CourseGPT-zh, an open-source education large language model, with a pipeline for constructing high-quality and diverse question-answer pairs. 
Based on textbooks, we guide the model to conduct thorough exploration and questioning of textbooks, extracting knowledge from both closed-source large language models and specialized texts. Additionally, we employ a method inspired by self-instruct to guide the large language models in generating related questions, further enhancing the diversity. \u2022 Considering that although large language models can generate comprehensive answers, some content may be redundant or incorrect. Therefore, we employ prompt engineering to guide ChatGPT in generating responses that align with human preferences. To obtain the optimal prompts, we have designed an iterative discrete prompt optimization framework, which incorporates LLM-as-Judge to facilitate automatic evaluation of the quality of responses guided by prompts. Furthermore, the optimized prompt allows the large language model to achieve a balance between the quality of responses and their length, achieving information compression in responses. \u2022 A parameter-efficient fine-tuning method of the ChatGLM3 model is conducted based on constructed highquality question-answering data, resulting in the CourseGPT-zh. Experimental evidence has shown that CourseGPT-zh exhibits improved alignment with human responses, and delivers more concise answers while maintaining a high level of response quality. On various NLP task evaluation metrics, CourseGPT-zh significantly outperforms other open-source large models. 2 \f2 Related-work With fierce competition and rapid development, large language models ranging from billions to trillions of parameters have achieved remarkable performance across various NLP tasks after being pre-trained on massive amounts of text. Represented by LLMs such as ChatGPT, GPT4, and GPT4-Turbo, the OpenAI model family has successively reset the benchmarks for NLP tasks, being regarded as one of the greatest inventions in history. 
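The iterative, LLM-as-Judge-scored prompt optimization described in the contributions above can be sketched in miniature. This is an illustrative toy, not the paper's implementation: the responder and judge below are deterministic stubs standing in for ChatGPT calls, and the candidate prompts, scoring weights, and function names are all invented for this sketch.

```python
# Toy sketch of an iterative discrete prompt optimization loop scored by an
# LLM-as-Judge. Real ChatGPT calls are replaced by deterministic stubs; every
# constant here is an illustrative assumption.
CANDIDATE_PROMPTS = [
    "Answer accurately.",
    "Answer accurately and concisely.",
    "Answer accurately, concisely, and cite the textbook concept.",
]

def generate_response(prompt: str, question: str) -> str:
    # Stub responder: more specific prompts yield shorter, more focused answers.
    padding = " filler" * max(0, 6 - len(prompt.split()))
    return f"Reply to '{question}'" + padding

def judge(response: str) -> float:
    # Stub LLM-as-Judge: reward on-topic responses and penalize length,
    # mirroring the paper's trade-off between response quality and length.
    quality = 1.0 if response.startswith("Reply") else 0.0
    return quality - 0.05 * len(response.split())

def optimize_prompt(question: str, rounds: int = 3) -> str:
    best_prompt, best_score = CANDIDATE_PROMPTS[0], float("-inf")
    for _ in range(rounds):
        for candidate in CANDIDATE_PROMPTS:
            score = judge(generate_response(candidate, question))
            if score > best_score:
                best_prompt, best_score = candidate, score
        # The real framework would also reflect on low-scoring responses to
        # propose new candidates; omitted to keep the stub deterministic.
    return best_prompt

if __name__ == "__main__":
    print(optimize_prompt("What is OFDM?"))
```

With these stubs, the most specific candidate wins because its response is shortest at equal quality; in the real framework both candidate generation and scoring are LLM calls.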
Concurrently, a multitude of open-source large language models, including llama-2-13b, ChatGLM3-6b, and Mistral-8x7B-MoE[25], have also shown astonishing improvements, even surpassing the level of ChatGPT on some dimensions. More importantly, they can be deployed on a single to several GPUs and can be flexibly customized through fine-tuning. 2.1 Domain-specific LLMs Although general-purpose large language models have achieved exceptional performance on generic NLP tasks, they often fall short in vertical domains that necessitate extensive specialized knowledge and high accuracy requirements. The performance of zero-shot large language models in these domains is typically inadequate, thereby granting domainspecific LLMs significant attention. Closed-source large language models, while exhibiting superior performance across various capabilities, present challenges for continual pre-training and fine-tuning with private corpora. Therefore, the construction of domain-specific models based on closed-source LLMs frequently leverages role-playing or collaboration abilities to extract knowledge in the specialized field during the pre-training phase. In contrast, open-source LLMs can be further pre-trained or fine-tuned with extensive high-quality domain-specific data, and they have achieved multiple successful applications in fields such as medicine, law, education, finance, etc. HuatuoGPT [26] employs a mixed dataset comprising distilled data from ChatGPT and real-world data provided by physicians\u2019 medical advice to fine-tune an open-source model. Furthermore, it aligns the model\u2019s response with human preferences through RLAIF (Reinforcement Learning from Artificial Intelligence Feedback). By learning from the response styles of real-world doctor-patient interactions, the fine-tuned model can engage with users in a human-like manner and significantly surpasses other models at a similar level across various metrics. 
MedChatZH [12] has developed a dialogue model specifically designed for Traditional Chinese Medicine, incorporating extensive Chinese medical literature for continual pre-training. After fine-tuning millions of question-answer data from the Internet and various Chinese hospitals, the model achieves state-of-the-art performance in the field of Chinese medicine. ChatLaw [15], targeting the legal domain, not only provides professional responses concerning legal knowledge but also acquires problem-solving abilities through training on multiple-choice question data. Furthermore, it employs a method combining vector database retrieval with keyword search, effectively reducing the hallucination in responses. EduChat [8] offers a range of functionalities, including open-ended question answering, paper assessment, and Socratic teaching, enhancing various skills through fine-tuning and the integration of tools. The model gains interdisciplinary knowledge through continual pre-training and strengthens its question-answering and instruction-following capabilities with large-scale instruction and open-domain dialogue datasets. FinGPT [16] adopts a data-centric approach, focusing on automated data management pipelines and lightweight adaptive technologies, establishing a comprehensive framework from data processing to feature engineering and application, while also enhancing the transparency of the overall framework. One of its strengths lies in its ability to integrate seamlessly with both open-source and closed-source large language models without the need for further training. 2.2 Discrete prompt engineering Prompt engineering aims to guide large language models to fully leverage their potential through the meticulous design of prompts. Extensive research has demonstrated that well-crafted prompts can significantly enhance the ability of large language models to improve their performance across various NLP tasks [27],[28]. 
Prompt engineering encompasses continuous prompt learning and discrete prompt optimization. Continuous prompt learning aims to adapt large language models to various tasks by incorporating learnable parameters within the prompts [29], [30]. However, continuous prompt learning typically requires access to the gradient vectors of the LLMs, which restricts its application in closed-source models that are accessed only through APIs. For discrete prompts, traditional methods often rely on meticulous manual design, which not only demands considerable human effort but also may not necessarily maximize the model\u2019s performance. Consequently, numerous methods for automatically generating optimal discrete prompts have been explored, leveraging the large model itself as an optimizer to autonomously enhance its performance in NLP tasks. Recently, several leading automated discrete prompt optimization frameworks have been proposed. EVOPROMPT[31] draws on the principles of evolutionary algorithms (EAs) to iteratively guide LLMs to generate new prompts through evolutionary operators. It does not require any gradient information from LLMs and can achieve a balance between exploration and exploitation. Experiments on nine datasets have shown that optimized prompts can significantly improve task performance. 
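EVOPROMPT's evolutionary loop can be illustrated with a small self-contained toy. The LLM-driven variation operator is replaced here by simple word splicing, and the keyword-based fitness function stands in for measured dev-set accuracy; both are assumptions made only for this sketch.

```python
import random

def splice_crossover(p1: str, p2: str) -> str:
    # Stand-in for the LLM-performed crossover/mutation in EVOPROMPT: combine
    # the first half of one parent prompt with the second half of the other.
    w1, w2 = p1.split(), p2.split()
    return " ".join(w1[: len(w1) // 2] + w2[len(w2) // 2 :])

def fitness(prompt: str) -> int:
    # Stand-in for task performance on a development set.
    return sum(kw in prompt.lower() for kw in ("step", "concise", "expert"))

def evolve(population: list[str], generations: int = 5, seed: int = 0) -> str:
    rng = random.Random(seed)
    for _ in range(generations):
        parent1, parent2 = rng.sample(population, 2)
        child = splice_crossover(parent1, parent2)
        worst = min(population, key=fitness)
        if fitness(child) >= fitness(worst):  # keep the fitter prompt
            population[population.index(worst)] = child
    return max(population, key=fitness)

if __name__ == "__main__":
    seeds = ["You are an expert teacher.", "Think step by step.", "Give a concise answer."]
    print(evolve(seeds))
```

The selection rule (replace the worst member only when the child scores at least as well) gives the exploration/exploitation balance the text describes, without any gradient access to the LLM.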
APE[32], inspired by program synthesis, represents discrete prompting optimization as [text truncated at a page break; figure placeholder: pipeline from an open-source pre-trained model to a course-oriented chat model, with evaluation dimensions (factual accuracy, user satisfaction, clarity, condensability) and optimization operations (paragraphs, reflection, resample)]",
 + "additional_graph_info": {
 + "graph": [
 + [
 + "Zheyan Qu",
 + "Zitong Yu"
 + ],
 + [
 + "Zitong Yu",
 + "Guoying Zhao"
 + ],
 + [
 + "Zitong Yu",
 + "Xiaobai Li"
 + ],
 + [
 + "Zitong Yu",
 + "Chenxu Zhao"
 + ],
 + [
 + "Zitong Yu",
 + "Yunxiao Qin"
 + ]
 + ],
 + "node_feat": {
 + "Zheyan Qu": [
 + {
 + "url": "http://arxiv.org/abs/2405.04781v1",
 + "title": "CourseGPT-zh: an Educational Large Language Model Based on Knowledge Distillation Incorporating Prompt Optimization",
 + "abstract": "Large language models (LLMs) have demonstrated astonishing capabilities in\nnatural language processing (NLP) tasks, sparking interest in their application\nto professional domains with higher specialized requirements. However,\nrestricted access to closed-source LLMs via APIs and the difficulty in\ncollecting massive high-quality datasets pose obstacles to the development of\nlarge language models in education fields of various courses. Given these\nchallenges, we propose CourseGPT-zh, a course-oriented education LLM that\nsupports customization and low-cost deployment. To address the\ncomprehensiveness and diversity requirements of course-specific corpora, we\ndesign a high-quality question-answering corpus distillation framework\nincorporating prompt optimization, which effectively mines textbook knowledge\nand enhances its diversity. Moreover, considering the alignment of LLM\nresponses with user needs, a novel method for discrete prompt optimization\nbased on LLM-as-Judge is introduced. During optimization, this framework\nleverages the LLM's ability to reflect on and exploit error feedback and\npatterns, allowing for prompts that meet user needs and preferences while\nsaving response length.
Lastly, we obtain CourseGPT-zh based on the open-source\nLLM using parameter-efficient fine-tuning. Experimental results show that our\ndiscrete prompt optimization framework effectively improves the response\nquality of ChatGPT, and CourseGPT-zh exhibits strong professional capabilities\nin specialized knowledge question-answering, significantly outperforming\ncomparable open-source models.", + "authors": "Zheyan Qu, Lu Yin, Zitong Yu, Wenbo Wang, Xing zhang", + "published": "2024-05-08", + "updated": "2024-05-08", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction Large language models, such as ChatGPT [1], GPT4 [2], LLaMA [3], and ChatGLM [4], have demonstrated remarkable performance and generalization capabilities across various NLP tasks, significantly expanding the boundaries of language applications. With the increase in model parameters and pretraining corpus size, capabilities such as logical reasoning, instruction following, and In-Context Learning [5],[6],[7] have emerged. Based on these breakthroughs, the latest LLMs have shown profound understanding and professionalism in various fields, such as virtual assistants, text generation, and code annotation. Utilizing LLMs to disrupt industries has become an inevitable trend, including the field of education[8],[9]. Recently, there has been a desire to leverage the extensive knowledge of large language models to construct domainspecific LLMs in various vertical fields, which require greater expertise and accuracy. To address the issue that general-purpose LLMs cannot meet specific domain requirements, a variety of methods have been proposed. For instance, steering foundation models through role-playing or prompt engineering have been used to tap into the knowledge learned during the pre-training phase, which can unleash their deep-seated expert capabilities [10],[11]. 
Other approaches involve pretraining or continual pre-training with domain-specific corpus to incorporate domainspecific knowledge into large language models [8],[12],[13],[14]. In addition, to reduce the hallucination during the response generation, retrieval augmentation has also been applied to provide reliable references [8],[15]. Based on these \u2217Xing zhang is the corresponding author. arXiv:2405.04781v1 [cs.CL] 8 May 2024 \fapproaches, successful implementations such as MedAgents [10], ChatLaw [15], EduChat [8], and FinGPT [16] have demonstrated the potential of LLMs to provide professional responses and insights in various vertical fields, including healthcare, law, finance, and education. However, constructing domain-specific large language models is still labor-consuming and expensive. To begin with, for closed-source large language models like ChatGPT, the high costs of text generation and fine-tuning services are often prohibitive. As for open-source LLMs, there is a significant gap in parameter size and pre-training corpus compared to closed-source LLMs, resulting in significantly weaker general capabilities such as reasoning, and domain-specific knowledge extraction [9],[17],[18],[19]. Faced with complex professional terminology, open-source large language models often fail to meet user requirements for domain knowledge. In this context, it often requires a large amount of in-domain pre-training corpus or expertise datasets to enhance professionalism in vertical fields. Although various existing works have developed specialized datasets and evaluation criteria for various fields such as philosophy, medicine, and law, as well as for scenarios including network operation and geospatial semantics [17],[18],[19],[20],[21], there is still a considerable demand for manual effort in constructing datasets for courses or privatized scenarios that are not covered by these datasets. 
This challenge is particularly pronounced when accessible corpora in the field are scarce, making it extremely difficult to construct tens of thousands of specialized instruction data. Furthermore, the majority of models are primarily pre-trained on English corpora, which may lead to a degradation in their performance in other languages [22],[23]. In addition to the challenges of constructing specialized corpora, the high cost of inference incurred by open-source large language models cannot be overlooked. Compared to the concise responses provided by humans, the responses generated by large language models, while more comprehensive, also include a significant amount of redundant information, resulting in unnecessary inference overhead. Typically, to further align the responses of large language models with specific preferences, methods such as RLHF (Reinforcement Learning from Human Feedback)[24] are introduced for fine-tuning models. However, this approach still requires a substantial amount of human-labeled preference data. Consequently, promoting alignment between the responses and human preferences, as well as reducing inference costs, is also a key factor in fostering the widespread adoption of open-source large models in specialized vertical domains. Targeted at these issues, we propose CourseGPT-zh, an open-source education large language model, and design a pipeline for constructing high-quality question-answer pairs through mining textbook knowledge. By utilizing the constructed diverse question-answer pairs, we perform parameter-efficient fine-tuning on the open-source model to mitigate the resource constraints required for deployment. In addition, in the data construction process, we incorporate LLM-as-Judge and utilize discrete prompt optimization to generate optimal prompts, steering ChatGPT to produce high-quality training data aligned with human preferences. 
Through this method, we ensure high-quality responses while reducing the deployment costs associated with response length. Our main contributions can be summarized as: \u2022 In this paper, we propose CourseGPT-zh, an open-source education large language model, with a pipeline for constructing high-quality and diverse question-answer pairs. Based on textbooks, we guide the model to conduct thorough exploration and questioning of textbooks, extracting knowledge from both closed-source large language models and specialized texts. Additionally, we employ a method inspired by self-instruct to guide the large language models in generating related questions, further enhancing the diversity. \u2022 Considering that although large language models can generate comprehensive answers, some content may be redundant or incorrect. Therefore, we employ prompt engineering to guide ChatGPT in generating responses that align with human preferences. To obtain the optimal prompts, we have designed an iterative discrete prompt optimization framework, which incorporates LLM-as-Judge to facilitate automatic evaluation of the quality of responses guided by prompts. Furthermore, the optimized prompt allows the large language model to achieve a balance between the quality of responses and their length, achieving information compression in responses. \u2022 A parameter-efficient fine-tuning method of the ChatGLM3 model is conducted based on constructed highquality question-answering data, resulting in the CourseGPT-zh. Experimental evidence has shown that CourseGPT-zh exhibits improved alignment with human responses, and delivers more concise answers while maintaining a high level of response quality. On various NLP task evaluation metrics, CourseGPT-zh significantly outperforms other open-source large models. 
2 \f2 Related-work With fierce competition and rapid development, large language models ranging from billions to trillions of parameters have achieved remarkable performance across various NLP tasks after being pre-trained on massive amounts of text. Represented by LLMs such as ChatGPT, GPT4, and GPT4-Turbo, the OpenAI model family has successively reset the benchmarks for NLP tasks, being regarded as one of the greatest inventions in history. Concurrently, a multitude of open-source large language models, including llama-2-13b, ChatGLM3-6b, and Mistral-8x7B-MoE[25], have also shown astonishing improvements, even surpassing the level of ChatGPT on some dimensions. More importantly, they can be deployed on a single to several GPUs and can be flexibly customized through fine-tuning. 2.1 Domain-specific LLMs Although general-purpose large language models have achieved exceptional performance on generic NLP tasks, they often fall short in vertical domains that necessitate extensive specialized knowledge and high accuracy requirements. The performance of zero-shot large language models in these domains is typically inadequate, thereby granting domainspecific LLMs significant attention. Closed-source large language models, while exhibiting superior performance across various capabilities, present challenges for continual pre-training and fine-tuning with private corpora. Therefore, the construction of domain-specific models based on closed-source LLMs frequently leverages role-playing or collaboration abilities to extract knowledge in the specialized field during the pre-training phase. In contrast, open-source LLMs can be further pre-trained or fine-tuned with extensive high-quality domain-specific data, and they have achieved multiple successful applications in fields such as medicine, law, education, finance, etc. 
HuatuoGPT [26] employs a mixed dataset comprising distilled data from ChatGPT and real-world data provided by physicians\u2019 medical advice to fine-tune an open-source model. Furthermore, it aligns the model\u2019s response with human preferences through RLAIF (Reinforcement Learning from Artificial Intelligence Feedback). By learning from the response styles of real-world doctor-patient interactions, the fine-tuned model can engage with users in a human-like manner and significantly surpasses other models at a similar level across various metrics. MedChatZH [12] has developed a dialogue model specifically designed for Traditional Chinese Medicine, incorporating extensive Chinese medical literature for continual pre-training. After fine-tuning millions of question-answer data from the Internet and various Chinese hospitals, the model achieves state-of-the-art performance in the field of Chinese medicine. ChatLaw [15], targeting the legal domain, not only provides professional responses concerning legal knowledge but also acquires problem-solving abilities through training on multiple-choice question data. Furthermore, it employs a method combining vector database retrieval with keyword search, effectively reducing the hallucination in responses. EduChat [8] offers a range of functionalities, including open-ended question answering, paper assessment, and Socratic teaching, enhancing various skills through fine-tuning and the integration of tools. The model gains interdisciplinary knowledge through continual pre-training and strengthens its question-answering and instruction-following capabilities with large-scale instruction and open-domain dialogue datasets. FinGPT [16] adopts a data-centric approach, focusing on automated data management pipelines and lightweight adaptive technologies, establishing a comprehensive framework from data processing to feature engineering and application, while also enhancing the transparency of the overall framework. 
One of its strengths lies in its ability to integrate seamlessly with both open-source and closed-source large language models without the need for further training. 2.2 Discrete prompt engineering Prompt engineering aims to guide large language models to fully leverage their potential through the meticulous design of prompts. Extensive research has demonstrated that well-crafted prompts can significantly enhance the ability of large language models to improve their performance across various NLP tasks [27],[28]. Prompt engineering encompasses continuous prompt learning and discrete prompt optimization. Continuous prompt learning aims to adapt large language models to various tasks by incorporating learnable parameters within the prompts [29], [30]. However, continuous prompt learning typically requires access to the gradient vectors of the LLMs, which restricts its application in closed-source models that are accessed only through APIs. For discrete prompts, traditional methods often rely on meticulous manual design, which not only demands considerable human effort but also may not necessarily maximize the model\u2019s performance. Consequently, numerous methods for automatically generating optimal discrete prompts have been explored, leveraging the large model itself as an optimizer to autonomously enhance its performance in NLP tasks. Recently, several leading automated discrete prompt optimization frameworks have been proposed. EVOPROMPT[31] draws on the principles of evolutionary algorithms (EAs) to iteratively guide LLMs to generate new prompts through evolutionary operators. It does not require any gradient information from LLMs and can achieve a balance between exploration and exploitation. Experiments on nine datasets have shown that optimized prompts can significantly improve task performance. 
APE[32], inspired by program synthesis, represents discrete prompting optimization as [text truncated at a page break; figure placeholder: pipeline from an open-source pre-trained model to a course-oriented chat model, with evaluation dimensions (factual accuracy, user satisfaction, clarity, condensability) and optimization operations (paragraphs, reflection, resample)]"
 + }
 + ],
 + "Zitong Yu": [
 + {
 + "url": "http://arxiv.org/abs/2303.00197v2",
 + "title": "Development and task-based evaluation of a scatter-window projection and deep learning-based transmission-less attenuation compensation method for myocardial perfusion SPECT",
 + "abstract": "Attenuation compensation (AC) is beneficial for visual interpretation tasks\nin single-photon emission computed tomography (SPECT) myocardial perfusion\nimaging (MPI). However, traditional AC methods require the availability of a\ntransmission scan, most often a CT scan. This approach has the disadvantages of\nincreased radiation dose, increased scanner cost, and the possibility of\ninaccurate diagnosis in cases of misregistration between the SPECT and CT\nimages. Further, many SPECT systems do not include a CT component. To address\nthese issues, we developed a Scatter-window projection and deep Learning-based\nAC (SLAC) method to perform AC without a separate transmission scan. To\ninvestigate the clinical efficacy of this method, we then objectively evaluated\nthe performance of this method on the clinical task of detecting perfusion\ndefects on MPI in a retrospective study with anonymized clinical SPECT/CT\nstress MPI images. The proposed method was compared with CT-based AC (CTAC) and\nno-AC (NAC) methods. Our results showed that the SLAC method yielded an almost\noverlapping receiver operating characteristic (ROC) plot and a similar area\nunder the ROC (AUC) to the CTAC method on this task. These results demonstrate\nthe capability of the SLAC method for transmission-less AC in SPECT and\nmotivate further clinical evaluation.",
 + "authors": "Zitong Yu, Md Ashequr Rahman, Craig K. Abbey, Barry A. Siegel, Abhinav K.
Jha", + "published": "2023-03-01", + "updated": "2023-03-19", + "primary_cat": "physics.med-ph", + "cats": [ + "physics.med-ph", + "eess.IV" + ], + "main_content": "INTRODUCTION Attenuation of photons is a major image-degrading effect that adversely impacts image quality in single-photon emission computed tomography (SPECT). Multiple studies have shown that attenuation compensation (AC) is beneficial for clinical interpretations of SPECT myocardial perfusion images.1,2 Conventional AC methods typically require an attenuation map, now most commonly obtained from a separate CT scan. However, these CT-based AC (CTAC) methods have multiple disadvantages, such as increased radiation dose, higher scanner costs, and possible misregistration between the SPECT and CT images potentially leading to inaccurate diagnosis.3–6 Further, many SPECT systems often do not have a CT component. For example, SPECT systems in smaller community hospitals and physician offices, as well as mobile SPECT systems facilitating use in remote locations, are often SPECT-only. The emerging solid-state-detector-based SPECT systems, which provide higher sensitivity, energy, temporal, and spatial resolution compared to conventional SPECT systems, often do not have CT imaging capability either.7,8 For these reasons, there is an important need to develop transmission-less AC (Tx-less AC) methods for SPECT. Given this high significance, multiple Tx-less AC methods have been proposed, including methods that use SPECT emission data to estimate attenuation maps9–11 and methods that operate on the iterative inversion of the forward mathematical models of SPECT systems.12,13 (Further author information: send correspondence to Abhinav K. Jha, E-mail: a.jha@wustl.edu.) More recently, deep learning (DL)-based methods have shown significant promise for Tx-less AC.14–19 Shi et al.
recently reported promising performance of a conditional generative adversarial network for Tx-less AC for myocardial perfusion SPECT (MPS).18 Chen et al. developed strategies for generating attenuation maps using emission data for dedicated cardiac SPECT with a small field-of-view and found that their strategies outperformed the method that directly predicts AC images from non-attenuation-corrected images.19 While the performance of these methods is promising, these DL-based methods have typically been evaluated using figures of merit (FoM) that measure the fidelity between the images reconstructed using the DL-based approach and a reference standard, which is typically the image reconstructed with the CT-based AC method. Medical images are acquired for specific clinical tasks. Thus, clinical translation of these Tx-less AC methods requires that they be evaluated in reference to the clinical task.20–23 However, studies have shown that evaluation using fidelity-based FoMs may not correlate with performance on clinical tasks in myocardial perfusion imaging (MPI).24–26 Thus, it is crucial to evaluate these methods on the specific clinical tasks for which the images are acquired. We have shown that scatter-window data in SPECT contains information to estimate the attenuation distribution.27 Based on this premise, we had proposed a DL-based Tx-less AC method for SPECT.28 In this paper, we advance upon this idea to propose a Scatter-window projection and DL-based Tx-less AC (SLAC) method for myocardial perfusion SPECT (MPS) that uses only SPECT emission data in the photopeak and scatter windows. We objectively evaluate the method on the clinical task of myocardial defect detection in a retrospective study. 2. METHODS 2.1 Proposed method The overall framework of the SLAC method is shown in Fig. 1. The probability of scatter at a certain location is proportional to the attenuation distribution at that location.
It is expected that a reconstruction of the scatter-window projection would show the contrast between regions with different attenuation coefficients. A previous study has shown promising performance of a DL-based method that estimates the attenuation map from a reconstruction of the scatter-window projection.18 Thus, the scatter-window projection was reconstructed using an ordered-subsets expectation maximization (OSEM)-based approach, yielding an initial estimate of the attenuation map.29,30 Then, in this study, we used a DL-based technique to segment the initial estimate of the attenuation maps. U-Net-based approaches have shown promise in biomedical image segmentation problems.31–33 Thus, we used a U-Net-based approach, namely, the multi-channel input and multi-encoder U-Net (McEUN), to segment the initial estimate of the attenuation maps. The McEUN was trained to segment the initial estimate of the attenuation maps into six regions: skin and subcutaneous adipose, muscles and organs, lungs, bones, patient table, and background. The McEUN mainly consists of two components: an encoder with multi-channel input and an assembly of six decoders. To stabilize the network training and leverage salient regions, skip connections with attention gates (AG) were implemented between the output of layers in the encoder and each decoder.34 Dropout was applied to prevent overfitting.35 The network was designed to input the whole 3-D image and maximize the amount of information learned in a global sense. The network was trained to minimize the cross entropy between the estimated and true segmentations. 2.2 Evaluation We evaluated the SLAC method on the clinical task of detecting cardiac perfusion defects in a retrospective Institutional Review Board-approved evaluation study with anonymized stress SPECT MPI data.
We compared the performance of our method to activity maps reconstructed using a CT-based AC (CTAC) method and to images obtained without AC, referred to as the no-AC (NAC) approach. We followed the recently proposed Recommendations for EvaLuation of AI in NuClear-medicinE (RELAINCE) guidelines to lend rigor to our evaluation.23 The description of the evaluation study consists of four parts: data collection and curation, network training and method implementation, the process to extract task-specific information, and the figures of merit used. Figure 1. The overall framework of the SLAC method. 2.2.1 Data collection and curation The dataset used in this study consisted of N = 648 anonymized clinical SPECT/CT stress MPI studies scanned between January 2016 and July 2021, with SPECT projection data and CT images along with clinical reports. As per the clinical reports, we categorized patients diagnosed with normal rest and stress myocardial perfusion function as healthy patients, while categorizing patients diagnosed with ischemia in a left ventricular wall as diseased patients. MPI scans were acquired on a GE Discovery NM/CT 670 system after the injection of 99mTc-tetrofosmin. SPECT emission data were collected in photopeak (126-154 keV) and scatter windows (114-126 keV). CT images were acquired at 120 kVp on a GE Optima CT 540 system integrated with the GE Discovery NM/CT 670. To avoid misalignment between CT and SPECT scans, CT images were registered to the SPECT space using MIM Maestro (MIM Software Inc, Cleveland, OH). CT-defined attenuation maps were calculated from the CT scans using a bi-linear model.36 For evaluating the SLAC method on the clinical task of detecting cardiac defects, knowledge of the existence and location of the defects was needed. The clinical records have limitations in providing this information.
To address this issue, we implemented a strategy that introduces synthetic cardiac defects in healthy patient images.37 We designed 27 types of clinically realistic defects with three radial extents, three severities, and three locations. A summary of the defect types is shown in Table 1. Table 1. Defect parameters. Extent: 30, 60, and 90 degrees around the left ventricular (LV) wall. Severity: 10%, 25%, and 50% less activity than the normal myocardium. Location: anterior, inferior, and lateral LV walls. The whole dataset was divided into a training dataset (N = 508) and a testing dataset (N = 140). Because we needed the ground truth of defects in the test dataset for our evaluation study, the testing dataset consisted only of patients who were diagnosed as healthy according to the clinical records, and synthetic defects were introduced in 71 of these 140 healthy test patients, referred to as defect-present samples. The remaining 69 healthy test patients were referred to as defect-absent samples. We generated 27×71 = 1917 defect-present samples in both photopeak and scatter energy windows. We also generated 27×69 = 1863 defect-absent samples, although they were identical if from the same patient. As mentioned in Sec. 2.1, the scatter-window projection was reconstructed using an OSEM-based reconstruction method without AC, yielding the initial estimate of the attenuation map. Also, the photopeak-window projection was reconstructed using the same strategy, yielding an initial estimate of the activity map. The CT-based attenuation maps for network training were segmented into skin and subcutaneous adipose, muscles and organs, lungs, bones, patient table, and background, using a Markov random field-based method.38 The average attenuation coefficients of each region were calculated and served as the predefined attenuation coefficients.
2.2.2 Network training and method implementation A total of N = 508 samples were used for the network training. The kernel weights of the McEUN were initialized using the Glorot normal initializer.39 Biases were initialized to a constant of 0.03. The McEUN was trained to minimize a weighted cross entropy loss between predicted and CT-based segmentations using the Adam optimizer.40 We optimized the weight parameters to yield the best segmentation performance. Five-fold cross-validation was implemented to prevent overfitting. The training and validation were performed using Keras 2.2.4 on two TITAN RTX GPUs with 23 GB memory each. As mentioned in Sec. 2.2.1, there were a total of 27×140 = 3780 samples in the test dataset. The trained McEUN yielded segmented masks of the initial estimates of attenuation maps. Predefined attenuation coefficients were assigned to each region, yielding the final estimate of the attenuation maps. Next, the photopeak-window projections were reconstructed using an OSEM-based reconstruction method, which accounts for attenuation and collimator-detector response.30 The final estimates of the attenuation maps were used for AC. The reconstructed activity images had a size of 64×64×64 with a voxel size of 0.68 cm. Following the clinical protocols, the reconstructed activity images were reoriented into short-axis slices and filtered by a Butterworth filter with an order of 5 and cutoff frequency 0.44 cm^-1. CTAC-based images and NAC-based images were obtained using the same OSEM-based reconstruction approach and post-processing procedures as used in the SLAC method but with different AC approaches. 2.2.3 Process to extract task-specific information We objectively evaluated the performance of the SLAC method on the task of detecting myocardial perfusion defects in an observer study. While ideally such evaluation should be performed with human observers, this is time-consuming and tedious.
Model observers provide an easy-to-use in silico approach to perform such evaluation and identify methods for evaluation with human observers. Thus, multiple studies use model observers to evaluate imaging systems and methods.21,41–43 In our evaluation study, the location of the defect was chosen to be in the inferior left ventricular wall in the test dataset. Then, we had a patient population where the defect extent and severity were varying, but the location of the defect was the same in the entire population in the test dataset. Therefore, there were 9×71 = 639 defect-present samples and 9×69 = 621 defect-absent samples in the evaluation study. Previous studies have shown that the channelized Hotelling observer (CHO) with rotationally symmetric frequency channels can emulate human-observer performance on the task of detecting perfusion defects from MPS images in this setting.44,45 Thus, in this study, we used this model observer. We extracted a 32×32 region from the middle 2-D slice of the short-axis images that had the defect centroid at the center and applied the CHO to this region to yield test statistics. These statistics were calculated for each defect-present and defect-absent image in the test set using a leave-one-out strategy. 2.2.4 Figures of merit The test statistics generated were compared to a threshold to classify the image into the defect-present or defect-absent class. By varying the threshold, the ROC curve was plotted46,47 using the LABROC4 program.48 The AUC measures the performance of methods on the task of defect detection. A higher AUC indicates better performance. We calculated the AUC with 95% confidence intervals (CIs) for the SLAC, CTAC, and NAC methods.
To assess the performance of our method using visual fidelity-based criteria, we calculated the root mean-square error (RMSE) and structural similarity index (SSIM) with CIs between images obtained using the SLAC and the CTAC method, as well as between images obtained using the NAC and the CTAC method. 3. RESULTS 3.1 Comparing to other AC methods Fig. 2 shows ROC curves obtained by the SLAC, CTAC, and NAC methods, along with the corresponding 95% CIs. The ROC curve obtained by the SLAC method almost overlapped that obtained by the CTAC method and outperformed the NAC method. Fig. 3 shows the AUC values with CIs obtained by the three methods. Figure 2. ROC curves obtained by CTAC, SLAC, and NAC methods. Shadows indicate 95% confidence intervals. Figure 3. AUC obtained by CTAC, SLAC, and NAC methods. We observed that the AUC obtained by SLAC was similar to that obtained by the CTAC method and significantly outperformed (p < 0.05) the NAC method. 3.2 Representative examples Fig. 4 shows examples of SPECT images and corresponding attenuation maps estimated by SLAC, compared with those from CTAC. We found that the attenuation maps obtained using the SLAC method were close to those obtained from CT images. Further, the short-axis SPECT images obtained using the SLAC method were similar to those obtained using the CTAC method. Fig. 4b shows that the same defect appeared in both CTAC- and SLAC-based images. 3.3 Evaluation using fidelity-based figures of merit Table 2 shows the RMSE and SSIM between images obtained using the SLAC and the CTAC method, as well as between the NAC and the CTAC method. We found that the SLAC method significantly outperformed the NAC method based on these metrics. 4.
DISCUSSIONS AND" + }, + { + "url": "http://arxiv.org/abs/2302.05744v1", + "title": "Rethinking Vision Transformer and Masked Autoencoder in Multimodal Face Anti-Spoofing", + "abstract": "Recently, vision transformer (ViT) based multimodal learning methods have\nbeen proposed to improve the robustness of face anti-spoofing (FAS) systems.\nHowever, there are still no works to explore the fundamental natures\n(\\textit{e.g.}, modality-aware inputs, suitable multimodal pre-training, and\nefficient finetuning) in vanilla ViT for multimodal FAS. In this paper, we\ninvestigate three key factors (i.e., inputs, pre-training, and finetuning) in\nViT for multimodal FAS with RGB, Infrared (IR), and Depth. First, in terms of\nthe ViT inputs, we find that leveraging local feature descriptors benefits the\nViT on IR modality but not RGB or Depth modalities. Second, in observation of\nthe inefficiency on direct finetuning the whole or partial ViT, we design an\nadaptive multimodal adapter (AMA), which can efficiently aggregate local\nmultimodal features while freezing majority of ViT parameters. Finally, in\nconsideration of the task (FAS vs. generic object classification) and modality\n(multimodal vs. unimodal) gaps, ImageNet pre-trained models might be\nsub-optimal for the multimodal FAS task. To bridge these gaps, we propose the\nmodality-asymmetric masked autoencoder (M$^{2}$A$^{2}$E) for multimodal FAS\nself-supervised pre-training without costly annotated labels. Compared with the\nprevious modality-symmetric autoencoder, the proposed M$^{2}$A$^{2}$E is able\nto learn more intrinsic task-aware representation and compatible with\nmodality-agnostic (e.g., unimodal, bimodal, and trimodal) downstream settings.\nExtensive experiments with both unimodal (RGB, Depth, IR) and multimodal\n(RGB+Depth, RGB+IR, Depth+IR, RGB+Depth+IR) settings conducted on multimodal\nFAS benchmarks demonstrate the superior performance of the proposed methods. 
We\nhope these findings and solutions can facilitate the future research for\nViT-based multimodal FAS.", + "authors": "Zitong Yu, Rizhao Cai, Yawen Cui, Xin Liu, Yongjian Hu, Alex Kot", + "published": "2023-02-11", + "updated": "2023-02-11", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Face recognition technology has been widely used in many intelligent systems due to its convenience and remarkable accuracy. However, face recognition systems are still vulnerable to presentation attacks (PAs), including print, replay, and 3D-mask attacks. Therefore, both academia and industry have recognized the critical role of face anti-spoofing (FAS) in securing face recognition systems. In the past decade, plenty of hand-crafted-feature-based [6,7,23,37] and deep-learning-based [2,12,13,31,38,45] methods have been proposed for unimodal FAS. Despite satisfactory performance against seen attacks and environments, unimodal methods generalize poorly to emerging novel attacks and unseen deployment conditions. Thanks to advanced sensors with various modalities (e.g., RGB, Infrared (IR), Depth, Thermal) [17], multimodal methods facilitate FAS applications in high-security scenarios with low false acceptance errors (e.g., face payment and vault entrance guard). Recently, due to their strong long-range and cross-modal representation capacity, vision transformer (ViT) [11] based methods [15, 26] have been proposed to improve the robustness of FAS systems. However, these methods focus on directly finetuning ViTs [15] or modifying ViTs with complex and powerful modules [26], which cannot provide enough insight into the fundamental natures (e.g., modality-aware inputs, suitable multimodal pre-training, and efficient finetuning) of ViT in multimodal FAS.
Despite mature exploration and findings [3,18,44] on ViT in other computer vision communities (e.g., generic object classification [8]), this knowledge might not transfer fully to multimodal FAS due to the task and modality gaps. Compared with CNNs, ViT usually aggregates coarse intra-patch information at a very early stage and then propagates inter-patch global attentional features. In other words, it neglects the local detailed clues of each modality. According to prior evidence from MM-CDCN [48], local fine-grained features from multiple levels benefit the live/spoof clue representation in convolutional neural networks (CNNs) across different modalities. Whether local descriptors/features can improve ViT-based multimodal FAS systems is worth exploring. Compared with CNNs, ViTs usually have many more parameters to train, and thus easily overfit on the FAS task with limited data amount and diversity. Existing works show that directly finetuning the last classification head [15] or training extra lightweight adapters [20] can achieve better performance than full finetuning. However, all these observations are based on unimodal RGB inputs; it is unclear how different ViT-based transfer learning techniques perform in 1) other unimodal scenarios (IR or Depth modality); and 2) multimodal scenarios (e.g., RGB+IR+Depth). Moreover, the design of more efficient transfer learning modules for ViT-based multimodal FAS should be considered. Existing multimodal FAS works usually finetune ImageNet pre-trained models, which might be sub-optimal due to the huge task (FAS vs. generic object classification) and modality (multimodal vs. unimodal) gaps. Meanwhile, in consideration of the costly collection of large-scale annotated live/spoof data, self-supervised pre-training without labels [34] is a promising option for model initialization in multimodal FAS.
Although a few self-supervised pre-training methods (e.g., masked image modeling (MIM) [3,9] and contrastive learning [1]) have been developed for multimodal (e.g., vision-language) applications, there are still no self-supervised pre-trained models specifically for multimodal FAS. Investigating the discrimination and generalization capacity of pre-trained models and designing advanced self-supervision strategies are crucial for ViT-based multimodal FAS. Motivated by the discussions above, in this paper we rethink ViT-based multimodal FAS from three aspects, i.e., modality-aware inputs, suitable multimodal pre-training, and efficient finetuning. Besides the elaborate investigations, we also provide corresponding elegant solutions to 1) establish powerful inputs with local descriptors [5, 10] for the IR modality; 2) efficiently finetune multimodal ViTs via adaptive multimodal adapters; and 3) pre-train a generalized multimodal model via a modality-asymmetric masked autoencoder. Our contributions include: • We are the first to investigate three key factors (i.e., inputs, pre-training, and finetuning) for ViT-based multimodal FAS. We find that 1) leveraging local feature descriptors benefits the ViT on the IR modality; 2) partially finetuning or using adapters can achieve reasonable performance for ViT-based multimodal FAS but is still far from satisfactory; and 3) masked autoencoder [3, 18] pre-training cannot provide better finetuning performance compared with ImageNet pre-trained models. • We design the adaptive multimodal adapter (AMA) for ViT-based multimodal FAS, which can efficiently aggregate local multimodal features while freezing the majority of ViT parameters. • We propose the modality-asymmetric masked autoencoder (M2A2E) for multimodal FAS self-supervised pre-training.
Compared with modality-symmetric autoencoders [3,18], the proposed M2A2E is able to learn more intrinsic task-aware representations and is compatible with modality-agnostic downstream settings. To the best of our knowledge, this is the first attempt to design an MIM framework for generalized multimodal FAS. • Our proposed methods achieve state-of-the-art performance in most of the modality settings on both intra- as well as cross-dataset testing. 2. Related Work Multimodal face anti-spoofing. With multimodal inputs (e.g., RGB, IR, Depth, and Thermal), a few multimodal FAS works consider input-level [14, 30, 35] and decision-level [54] fusion. Besides, mainstream FAS methods extract complementary multi-modal features using feature-level fusion [25, 26, 28, 41, 48, 56] strategies. As there is redundancy across multi-modal features, direct feature concatenation [48] easily results in high-dimensional features and overfitting. To alleviate this issue, Zhang et al. [55, 56] propose a feature re-weighting mechanism to select the informative and discard the redundant channel features among the RGB, IR, and Depth modalities. Shen et al. [39] design a Modal Feature Erasing operation to randomly drop out partial-modal features to prevent modality-aware overfitting. George and Marcel [16] present a cross-modal focal loss to modulate the loss contribution of each modality, which benefits the model in learning complementary information among modalities. Transformer for vision tasks. The transformer was proposed in [40] to model sequential data in the field of NLP. ViT [11] was then proposed, feeding a transformer with sequences of image patches for image classification. In consideration of the data-hungry characteristic of ViT, directly training ViTs from scratch would result in severe overfitting.
On the one hand, fast transfer learning (e.g., adapter [8,19,22] and prompt [57] tuning) while keeping most pre-trained parameters fixed is usually efficient for downstream tasks. On the other hand, self-supervised masked image modeling (MIM) methods (e.g., BEiT [4] and MAE [3,18]) benefit representation learning, which improves the finetuning performance in downstream tasks. Meanwhile, a few works introduce vision transformers for FAS [15,26,33,42,43,47]. On the one hand, ViT is adopted in the spatial domain [15, 33, 42] to explore live/spoof relations among local patches. On the other hand, global temporal features such as temporal abnormity [43] or physiological periodicity [47] are extracted by applying ViT in the temporal domain. Recently, Liu and Liang [26] develop modality-agnostic transformer blocks to supplement liveness features for multimodal FAS. Despite convincing performance via a modified ViT with complex customized modal-disentangled and cross-modal attention modules [26], there are still no works to explore the fundamental natures (e.g., modality-aware inputs, suitable multimodal pre-training, and efficient finetuning) in vanilla ViT for multimodal FAS. Figure 1. Framework of the ViT finetuning with adaptive multimodal adapters (AMA). The AMA and classification head are trainable while the linear projection and vanilla transformer blocks are fixed with the pre-trained parameters. 'MHSA', 'FFN', and 'GAP' are short for the multi-head self-attention, feed-forward network, and global average pooling, respectively. Figure 2. Visualization of three classical local descriptors (i.e., LBP [36], HOG [10], and PLGF [5]) and their compositions. 3. Methodology To benefit the exploration of the fundamental natures of ViT for multimodal FAS, here we adopt the simple, elegant, and unified ViT framework as the baseline. As illustrated in the left part (without 'AMA') of Fig. 1, the vanilla ViT consists of a patch tokenizer E_patch via linear projection, N transformer blocks E^i_trans (i = 1, ..., N), and a classification head E_head. The unimodal (X_RGB, X_IR, X_Depth) or multimodal (X_RGB+IR, X_RGB+Depth, X_IR+Depth, X_RGB+IR+Depth) inputs are passed through E_patch to generate the visual tokens T_Vis, which are concatenated with a learnable class token T_Cls and added with position embeddings. Then all patch tokens T_All = [T_Vis, T_Cls] are forwarded through E_trans. Finally, T_Cls is sent to E_head for binary live/spoof classification. We will first briefly introduce different local-descriptor-based inputs in Sec. 3.1, then introduce the efficient ViT finetuning with AMA in Sec. 3.2, and at last present the generalized multimodal pre-training via M2A2E in Sec. 3.3. 3.1. Local Descriptors for Multimodal ViT Besides the raw multimodal inputs, we consider three local features and their compositions for multimodal ViT. The motivation is that the vanilla ViT with raw inputs is able to model rich cross-patch semantic contexts but is sensitive to illumination and neglects the local fine-grained spoof clues. Explicitly leveraging local descriptors as inputs might help the multimodal ViT mine more discriminative fine-grained spoof clues [46,48,49,51] as well as illumination-robust live/spoof features [25]. Local binary pattern (LBP). LBP [36] computes a binary pattern by thresholding the central difference among neighborhood pixels.
Fine-grained textures and illumination invariance make LBP robust for generalized FAS [24]. For a center pixel I_c and neighboring pixels I_i (i = 1, 2, ..., p), LBP can be formalized as follows: $\\mathrm{LBP} = \\sum_{i=1}^{p} F(I_i - I_c) \\times 2^{i-1}$, where $F(I) = 1$ if $I \\geq 0$ and $F(I) = 0$ otherwise. (1) Typical LBP maps are shown in the second column of Fig. 2. Histograms of oriented gradients (HOG). HOG [10] describes the distribution of gradient orientations or edge directions within a local subregion. It is implemented by first computing the magnitudes and orientations of gradients at each pixel; then the gradients within each small local subregion are accumulated into orientation histogram vectors of several bins, voted by gradient magnitudes. Due to the partial invariance to geometric and photometric changes, HOG features might be robust for illumination-sensitive modalities like RGB and IR. The visualization results are shown in the third column of Fig. 2. Pattern of local gravitational force (PLGF). Inspired by the Law of Universal Gravitation, PLGF [5] describes image interest regions via the local gravitational force magnitude, which is useful to reduce the impact of illumination/noise variation while preserving edge-based low-level clues. It can be formulated as: $\\mathrm{PLGF} = \\arctan\\left(\\sqrt{\\left(\\frac{I \\ast M_x}{I}\\right)^2 + \\left(\\frac{I \\ast M_y}{I}\\right)^2}\\right)$, with $M_x(m, n) = \\frac{\\cos(\\arctan(m/n))}{m^2+n^2}$ if $(m^2 + n^2) > 0$ and $0$ otherwise, and $M_y(m, n) = \\frac{\\sin(\\arctan(m/n))}{m^2+n^2}$ if $(m^2 + n^2) > 0$ and $0$ otherwise, (2) where I is the raw image, M_x and M_y are two filter masks for the gravitational force calculation, m and n are indexes denoting the relative position to the center, and $\\ast$ is the convolution operation sliding along all pixels. The visualization of PLGF maps is shown in the fourth column of Fig. 2.
For example, 'GRAY_HOG_PLGF' denotes three-channel inputs (raw gray-scale channel + HOG + PLGF), which is visualized in the last column of Fig. 2.

3.2. Adaptive Multimodal Adapter

Recent studies have verified that introducing adapters [20] with fully connected (FC) layers can improve FAS performance when training data is not adequate. However, an FC-based adapter focuses on intra-token feature refinement but neglects 1) contextual features from local neighbor tokens; and 2) multimodal features from cross-modal tokens. To tackle these issues, we extend the convolutional adapter (ConvAdapter) [22] into a multimodal version for multimodal FAS. As illustrated in Fig. 1, instead of directly finetuning the transformer blocks $E_{trans}$, we fix all the pre-trained parameters from $E_{patch}$ and $E_{trans}$ while training only the adaptive multimodal adapters (AMA) and $E_{head}$. An AMA module consists of four parts: 1) a 1×1 convolution with GELU, $\Theta_{\downarrow}$, for dimension reduction from the original channels $D$ to a hidden dimension $D'$; 2) a 3×3 2D convolution $\Theta_{2D}$ mapping channels $D' \times K$ to $D'$ for multimodal local feature aggregation, where $K$ denotes the number of modalities; 3) an adaptive modality weight $(w_1, ..., w_K)$ generator cascading global average pooling (GAP), a 1×1 convolution $\Theta_{Ada}$ projecting channels from $D' \times K$ to $K$, and the sigmoid function $\sigma$; and 4) a 1×1 convolution with GELU, $\Theta_{\uparrow}$, for dimension expansion back to $D$. As features from different modalities are already spatially aligned, we restore the 2D structure for each modality after the channel squeezing. Similarly, the 2D structure is flattened back into 1D tokens before the channel expanding. The AMA can be formulated as

$$T^{Vis}_{Kmodal} = \mathrm{Concat}[\Theta_{\downarrow}(T^{Vis}_{RGB}), \Theta_{\downarrow}(T^{Vis}_{IR}), \Theta_{\downarrow}(T^{Vis}_{Depth})],$$
$$(w_{RGB}, w_{IR}, w_{Depth}) = \sigma(\Theta_{Ada}(\mathrm{GAP}(T^{Vis}_{Kmodal}))),$$
$$T^{Vis}_{Kmodal} = \Theta_{2D}(T^{Vis}_{Kmodal}),$$
$$T^{Vis}_{Kmodal} = \mathrm{Concat}[w_{RGB} \cdot T^{Vis}_{Kmodal}, w_{IR} \cdot T^{Vis}_{Kmodal}, w_{Depth} \cdot T^{Vis}_{Kmodal}],$$
$$\mathrm{AMA} = \mathrm{Concat}[\Theta_{\uparrow}(\Theta_{\downarrow}(T^{Cls})), \Theta_{\uparrow}(T^{Vis}_{Kmodal})]. \qquad (3)$$

Here we show an example for $K$=3 (i.e., RGB+IR+Depth) in Eq. (3); AMA is flexible for arbitrary modalities (e.g., RGB+IR). Note that AMA is equivalent to the vanilla ConvAdapter [22] in the unimodal setting when $K$=1.

3.3. Modality-Asymmetric Masked Autoencoder

Existing multimodal FAS works usually finetune ImageNet pre-trained models, which might be sub-optimal due to the huge task and modality gaps. Meanwhile, considering the costly collection of large-scale annotated live/spoof data, self-supervised pre-training without labels [34] is promising for model initialization in multimodal FAS. Here we propose the modality-asymmetric masked autoencoder (M2A2E) for multimodal FAS self-supervised pre-training.

Figure 3. The framework of the modality-asymmetric masked autoencoder (M2A2E). Different from the previous multimodal MAE [3] masking all modalities as inputs, our M2A2E randomly selects a unimodal masked input for multimodal reconstruction.

As shown in Fig. 3, given a multimodal face sample $(X_{RGB}, X_{IR}, X_{Depth})$, M2A2E randomly selects a unimodal input $X_i$ ($i \in \{RGB, IR, Depth\}$) among all modalities. Then the random sampling strategy [18] is used to mask out a proportion $p$ of the visual tokens in $X_i$. Only the unmasked visible tokens are forwarded through the ViT encoder, while both visible and masked tokens are fed into unshared ViT decoders. In terms of the reconstruction target, given a masked input $X_i$ of the $i$-th modality, M2A2E aims to predict the pixel values with a mean squared error (MSE) loss for 1) each masked patch of $X_i$, and 2) the whole input images of the other modalities $X_j$ ($j \neq i$; $j \in \{RGB, IR, Depth\}$).
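The asymmetric input preparation described above can be sketched as follows (a simplified illustration rather than the authors' code; the function name, the dictionary-of-tokens representation, and the return signature are our assumptions):

```python
import numpy as np

def m2a2e_masking(tokens_by_modality, mask_ratio=0.4, rng=None):
    """Sketch of M2A2E input preparation: randomly pick ONE modality,
    then randomly mask `mask_ratio` of its patch tokens. Only the visible
    tokens would go to the ViT encoder; reconstruction targets cover the
    masked patches of the chosen modality plus the full images of the
    remaining modalities (handled by unshared decoders)."""
    rng = rng or np.random.default_rng()
    modalities = list(tokens_by_modality)
    chosen = modalities[rng.integers(len(modalities))]
    tokens = tokens_by_modality[chosen]          # (N, D) patch tokens
    n = tokens.shape[0]
    n_masked = int(n * mask_ratio)
    perm = rng.permutation(n)                    # random sampling strategy
    masked_idx, visible_idx = perm[:n_masked], perm[n_masked:]
    visible = tokens[visible_idx]
    cross_targets = [m for m in modalities if m != chosen]
    return chosen, visible, masked_idx, cross_targets
```

With the default ratio of 40% used in the paper, 10 patch tokens would yield 4 masked and 6 visible tokens, and the two unselected modalities become whole-image reconstruction targets.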
The motivation behind M2A2E is that, with the multimodal reconstruction target, the self-supervised pre-trained ViTs are able to model 1) task-aware contextual semantics (e.g., moiré patterns and color distortion) via masked patch prediction; and 2) intrinsic physical features (e.g., 2D attacks without facial depth) via cross-modality translation.

Relation to modality-symmetric autoencoders [3, 18]. Compared with the vanilla MAE [18], M2A2E adopts the same masking strategy in the unimodal ViT encoder but targets multimodal reconstruction with multiple unshared ViT decoders. Besides, M2A2E is similar to the multimodal MAE [3] only when partial tokens from a single modality are visible while all tokens from the other modalities are masked.

Table 1. ACER(%) results of protocols 'seen' and 'unseen' on WMCA. The ACER(%) values reported on the testing sets are obtained with thresholds computed for BPCER=1% on the development sets. Best results are marked in bold. 'ConvA' indicates the ConvAdapter [22].

| Method | Seen | Flexiblemask | Replay | Fakehead | Prints | Glasses | Papermask | Rigidmask | mean±std |
|---|---|---|---|---|---|---|---|---|---|
| **Modality: RGB** | | | | | | | | | |
| MC-CNN [17] | 32.82 | 22.80 | 31.40 | 1.90 | 30.00 | 50.00 | 4.80 | 18.30 | 22.74±15.33 |
| CCL (ResNet50) [29] | 30.69 | 4.76 | 15.37 | 24.67 | 19.03 | 16.80 | 9.51 | 17.62 | 15.39±6.51 |
| CCL (CDCN) [29] | 27.14 | 7.18 | 11.79 | 21.82 | 20.53 | 35.13 | 18.91 | 15.10 | 18.64±8.91 |
| ViT [11] | 9.84 | 18.4 | 19.94 | 13.67 | 1.92 | 24.58 | 5.59 | 9.59 | 13.38±8.17 |
| ViT+ConvA (Ours) | 4.72 | 18.64 | 10.07 | 7.98 | 0.43 | 20.14 | 6.38 | 2.32 | 9.42±7.56 |
| ViT+ConvA+M2A2E (Ours) | 8.95 | 23.68 | 11.42 | 2.28 | 0.29 | 27.98 | 7.20 | 7.65 | 11.5±10.52 |
| **Modality: IR** | | | | | | | | | |
| RDWT-Haralick [17] | 6.26 | | | | | | | | |
| MC-CNN [17] | 2.51 | | | | | | | | |
| ViT [11] | 7.74 | 14.96 | 1.85 | 2.57 | 0.00 | 45.63 | 1.19 | 1.98 | 9.74±16.61 |
| ViT+ConvA (Ours) | 4.35 | 9.67 | 0.00 | 1.30 | 0.51 | 45.63 | 0.70 | 0.43 | 8.32±16.80 |
| ViT+ConvA+M2A2E (Ours) | 3.34 | 13.24 | 0.14 | 5.12 | 0.29 | 31.24 | 0.61 | 0.00 | 7.23±11.63 |
| **Modality: Depth** | | | | | | | | | |
| MC-CNN [17] | 6.04 | | | | | | | | |
| ViT [11] | 7.78 | 21.04 | 0.14 | 2.86 | 0.87 | 37.92 | 1.05 | 9.06 | 10.42±14.22 |
| ViT+ConvA (Ours) | 5.73 | 26.33 | 0.29 | 2.57 | 0.29 | 36.73 | 0.94 | 6.25 | 10.49±14.83 |
| ViT+ConvA+M2A2E (Ours) | 5.31 | 20.27 | 0.00 | 2.57 | 0.00 | 36.55 | 0.43 | 7.31 | 9.59±13.92 |
| **Modality: RGB+IR** | | | | | | | | | |
| MA-Net [28] | 6.85 | 25.33 | 3.16 | 2.05 | 0.28 | 36.72 | 0.86 | 9.82 | 11.18±14.30 |
| ViT [11] | 4.02 | 15.76 | 19.15 | 6.42 | 1.45 | 23.25 | 2.19 | 3.44 | 10.23±8.96 |
| ViT+AMA (Ours) | 1.27 | 15.49 | 1.16 | 1.74 | 0.43 | 28.16 | 1.01 | 0.77 | 6.97±11.09 |
| ViT+AMA+M2A2E (Ours) | 1.35 | 8.63 | 0.29 | 0.43 | 0.00 | 29.47 | 2.75 | 0.97 | 6.08±10.75 |
| **Modality: RGB+Depth** | | | | | | | | | |
| MC-PixBiS [13] | 1.80 | 49.70 | 3.70 | 0.70 | 0.10 | 16.00 | 0.20 | 3.40 | 10.50±16.70 |
| CMFL [16] | 1.70 | 12.40 | 1.00 | 2.50 | 0.70 | 33.50 | 1.80 | 1.70 | 7.60±11.20 |
| MA-ViT [26] | 1.45 | 9.76 | 0.93 | 0.55 | 0.00 | 14.00 | 0.00 | 1.46 | 3.81±5.67 |
| ViT [11] | 3.10 | 18.67 | 8.87 | 5.00 | 0.72 | 22.52 | 0.58 | 3.93 | 8.61±8.72 |
| ViT+AMA (Ours) | 1.19 | 18.67 | 1.01 | 2.03 | 0.00 | 16.88 | 1.16 | 0.72 | 5.78±8.23 |
| ViT+AMA+M2A2E (Ours) | 2.53 | 14.45 | 0.29 | 0.00 | 0.00 | 19.91 | 3.91 | 2.19 | 5.82±8.04 |
| **Modality: IR+Depth** | | | | | | | | | |
| ViT [11] | 5.47 | 15.64 | 0.00 | 1.71 | 0.00 | 41.03 | 0.67 | 2.38 | 8.78±15.26 |
| ViT+AMA (Ours) | 1.88 | 10.45 | 0.00 | 3.92 | 0.00 | 41.00 | 0.38 | 1.55 | 8.19±14.94 |
| ViT+AMA+M2A2E (Ours) | 1.96 | 11.51 | 0.00 | 0.98 | 0.00 | 34.06 | 0.00 | 0.00 | 6.65±12.81 |
| **Modality: RGB+IR+Depth** (* indicates with RGB+IR+Depth+Thermal modalities) | | | | | | | | | |
| IQM+LBP* [17] | 7.54 | 28.58 | 0.84 | 2.38 | 2.30 | 50.86 | 16.34 | 14.27 | 16.29±18.36 |
| MC-CNN* [17] | 1.04 | 2.52 | 0.12 | 0.00 | 0.00 | 42.14 | 0.35 | 0.75 | 6.55±15.72 |
| ViT [11] | 2.52 | 15.88 | 5.46 | 0.58 | 0.14 | 19.99 | 0.90 | 2.32 | 6.46±8.12 |
| ViT+AMA (Ours) | 0.92 | 15.39 | 0.64 | 1.99 | 0.87 | 18.37 | 0.87 | 0.77 | 5.56±7.80 |
| ViT+AMA+M2A2E (Ours) | 1.39 | 9.02 | 0.00 | 0.00 | 0.00 | 17.99 | 0.00 | 0.00 | 3.86±7.08 |

4. Experimental Evaluation

4.1. Datasets and Performance Metrics

Three commonly used multimodal FAS datasets are used for experiments, including WMCA [17], CASIA-SURF (MmFA) [56], and CASIA-SURF CeFA (CeFA) [27]. WMCA contains a wide variety of 2D and 3D PAs with four modalities and introduces two protocols: the 'seen' protocol, which emulates the seen attack scenario, and the 'unseen' attack protocol, which evaluates the generalization to an unseen attack.
MmFA consists of 1000 subjects with 21000 videos, where each sample has 3 modalities, and it has an official intra-testing protocol. CeFA is the largest multimodal FAS dataset, covering 3 ethnicities, 3 modalities, 1607 subjects, and 34200 videos. We conduct intra- and cross-dataset testings on the WMCA and MmFA datasets, and leave the large-scale CeFA for self-supervised pre-training. In terms of evaluation metrics, the Attack Presentation Classification Error Rate (APCER), Bonafide Presentation Classification Error Rate (BPCER), and ACER [21] are used. The ACER on the testing set is determined by the Equal Error Rate (EER) threshold on the dev sets for MmFA, and by the BPCER=1% threshold for WMCA. True Positive Rate (TPR)@False Positive Rate (FPR)=10^-4 [56] is also provided for MmFA. For cross-testing experiments, the Half Total Error Rate (HTER) is adopted.

4.2. Implementation Details

We crop the face frames using the MTCNN [53] face detector. The local descriptors are extracted from gray-scale images with: 1) 3×3 neighbors for LBP [36]; 2) 9 orientations, 8×8 pixels per cell, and 2×2 cells per block for HOG [10]; and 3) a mask size of 5 for PLGF [5]. The composition input 'GRAY_HOG_PLGF' is adopted in unimodal and multimodal experiments for the IR modality, while raw inputs are utilized for the RGB and Depth modalities. ViT-Base [11] supervised by a binary cross-entropy loss is used as the default architecture. For direct finetuning, only the last transformer block and the classification head are trainable. For AMA and ConvAdapter [22] finetuning, the original and hidden channels are D=768 and D′=64, respectively. For M2A2E, a mask ratio of p=40% is used, while the decoder depth and width are 4 and 512, respectively. The experiments are implemented with PyTorch on one NVIDIA A100 GPU.
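For reference, the APCER/BPCER/ACER metrics used above follow the standard presentation attack detection definitions (ACER is the average of APCER and BPCER at a given threshold). A simplified single-attack-type sketch, with function and variable names of our own choosing:

```python
def fas_metrics(scores, labels, threshold):
    """Compute APCER, BPCER, and ACER at a given decision threshold.
    `labels`: 1 = bonafide, 0 = attack; higher score = more likely bonafide."""
    attacks = [s for s, l in zip(scores, labels) if l == 0]
    bonafides = [s for s, l in zip(scores, labels) if l == 1]
    # APCER: attack presentations wrongly accepted as bonafide
    apcer = sum(s >= threshold for s in attacks) / len(attacks)
    # BPCER: bonafide presentations wrongly rejected as attacks
    bpcer = sum(s < threshold for s in bonafides) / len(bonafides)
    return apcer, bpcer, (apcer + bpcer) / 2
```

The protocols above differ only in how the threshold is chosen on the development set (EER threshold for MmFA, BPCER=1% threshold for WMCA) before these rates are computed on the test set.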
For the self-supervised pre-training on CeFA with RGB+IR+Depth modalities, we use the AdamW [32] optimizer with learning rate (lr) 1.5e-4, weight decay (wd) 0.05, and batch size 64 at the training stage. ImageNet pre-trained weights are used for our encoder. We train the M2A2E for 400 epochs, warming up for the first 40 epochs and then performing cosine decay. For the supervised unimodal and multimodal experiments on WMCA and MmFA, we use the Adam optimizer with a fixed lr=2e-4, wd=5e-3, and batch size 16 at the training stage. We finetune models for a maximum of 30 epochs based on the ImageNet or M2A2E pre-trained weights.

4.3. Intra-dataset Testing

Intra testing on WMCA. The unimodal and multimodal results of protocols 'seen' and 'unseen' on WMCA [17] are shown in Table 1. On the one hand, compared with the direct finetuning results of 'ViT', ViT+AMA/ConvAdapter achieves significantly lower ACER in all modality settings and in both 'seen' and 'unseen' protocols. This indicates that the proposed AMA efficiently leverages the unimodal/multimodal local inductive cues to boost the original ViT's global contextual features. On the other hand, when replacing the ImageNet pre-trained ViT with the self-supervised M2A2E from CeFA, the generalization for unseen attack detection improves noticeably with the modalities 'IR', 'Depth', 'RGB+IR', 'IR+Depth', and 'RGB+IR+Depth', indicating its excellent transferability for downstream modality-agnostic tasks. It is surprising to find in the last block that the proposed methods with RGB+IR+Depth modalities perform even better than 'MC-CNN' [17] with four modalities in both 'seen' and 'unseen' protocols.
Although 'MA-ViT' [26], with its complex and specialized modules, outperforms the proposed methods with RGB+Depth modalities in the 'unseen' protocol by 2.01% ACER, the proposed AMA and M2A2E could potentially be plugged into 'MA-ViT' for further performance improvement.

Intra testing on MmFA. For MmFA, we compare with three well-known multimodal methods: 'SEF' [56], 'MS-SEF' [55], and 'MA-ViT' [26].

Table 2. The results on MmFA. Larger TPR and lower ACER values indicate better performance. Best results are marked in bold.

| Method | APCER(%) | BPCER(%) | ACER(%) | TPR(%)@FPR=10^-4 |
|---|---|---|---|---|
| **Modality: RGB** | | | | |
| SEF [56] | 8.0 | 14.5 | 11.3 | 6.8 |
| MS-SEF [55] | 40.3 | 1.6 | 21.0 | 14.6 |
| ViT [11] | 18.91 | 15.83 | 17.37 | 16.72 |
| ViT+ConvA (Ours) | 10.70 | 9.33 | 10.02 | 20.22 |
| ViT+ConvA+M2A2E (Ours) | 6.62 | 6.17 | 6.40 | 23.77 |
| **Modality: IR** | | | | |
| SEF [56] | 15.0 | 1.2 | 8.1 | 10.9 |
| MS-SEF [55] | 38.6 | 0.4 | 19.4 | 15.9 |
| ViT [11] | 18.74 | 19.11 | 18.92 | 10.72 |
| ViT+ConvA (Ours) | 16.30 | 13.00 | 14.65 | 17.36 |
| ViT+ConvA+M2A2E (Ours) | 12.73 | 11.44 | 12.09 | 19.94 |
| **Modality: Depth** | | | | |
| SEF [56] | 5.1 | 4.8 | 5.0 | 14.1 |
| MS-SEF [55] | 6.0 | 1.2 | 3.6 | 67.3 |
| ViT [11] | 2.67 | 2.22 | 2.44 | 36.56 |
| ViT+ConvA (Ours) | 1.19 | 2.61 | 1.90 | 64.11 |
| ViT+ConvA+M2A2E (Ours) | 1.68 | 2.61 | 2.15 | 51.39 |
| **Modality: RGB+IR** | | | | |
| SEF [56] | 14.4 | 1.6 | 8.0 | 26.1 |
| MS-SEF [55] | 36.5 | 0.005 | 18.3 | 37.0 |
| ViT [11] | 17.18 | 18.94 | 18.06 | 24.67 |
| ViT+AMA (Ours) | 16.83 | 11.87 | 14.35 | 36.67 |
| ViT+AMA+M2A2E (Ours) | 11.38 | 11.44 | 11.41 | 40.94 |
| **Modality: RGB+Depth** | | | | |
| SEF [56] | 4.3 | 5.6 | 5.0 | 10.6 |
| MS-SEF [55] | 5.8 | 0.8 | 3.3 | 71.1 |
| ViT [11] | 5.13 | 4.06 | 4.60 | 36.22 |
| ViT+AMA (Ours) | 1.29 | 2.39 | 1.84 | 67.89 |
| ViT+AMA+M2A2E (Ours) | 1.25 | 2.06 | 1.65 | 75.06 |
| **Modality: IR+Depth** | | | | |
| SEF [56] | 1.5 | 8.4 | 4.9 | 24.3 |
| MS-SEF [55] | 2.0 | 0.3 | 1.1 | 81.2 |
| ViT [11] | 2.08 | 3.28 | 2.68 | 40.39 |
| ViT+AMA (Ours) | 1.56 | 1.78 | 1.67 | 59.72 |
| ViT+AMA+M2A2E (Ours) | 1.48 | 0.83 | 1.16 | 67.33 |
| **Modality: RGB+IR+Depth** | | | | |
| SEF [56] | 3.8 | 1.0 | 2.4 | 56.8 |
| MS-SEF [55] | 1.9 | 0.1 | 1.0 | 92.4 |
| MA-ViT [26] | 0.78 | 0.83 | 0.80 | 82.83 |
| ViT [11] | 2.10 | 1.78 | 1.94 | 66.61 |
| ViT+AMA (Ours) | 2.22 | 0.49 | 1.36 | 78.94 |
| ViT+AMA+M2A2E (Ours) | 0.81 | 0.42 | 0.62 | 85.23 |
From Table 2, we can observe that the performance of 'ViT' is usually worse than 'MS-SEF' in multimodal settings due to the limited modality fusion ability. When equipped with AMA and M2A2E-based self-supervised pre-training, 'ViT+AMA+M2A2E' outperforms 'MS-SEF' by a large margin in most modality settings ('RGB', 'Depth', 'RGB+IR', 'RGB+Depth', 'RGB+IR+Depth') in terms of the ACER metric. Thanks to the powerful multimodal representation capacity of the M2A2E pre-trained model, the proposed method surpasses the dedicated 'MA-ViT' with 'RGB+IR+Depth' modalities.

4.4. Cross-dataset Testing

To evaluate the unimodal and multimodal generalization, we conduct cross-testing experiments between models trained on MmFA and WMCA with Protocol 'seen'. We also introduce 'MM-CDCN' [48] and 'MA-ViT' [26] as baselines.

Figure 4. Impacts of inputs with local feature descriptors (e.g., LBP, HOG, PLGF) for ViT using direct finetuning and ConvAdapter strategies on (a) RGB, (b) IR, and (c) Depth modalities. More results on multimodal settings can be found in Appendix A.

Table 3. The HTER (%) values from the cross-testing between WMCA and MmFA datasets. Best results are marked in bold.

| Method | Train on WMCA, Test on MmFA | Train on MmFA, Test on WMCA |
|---|---|---|
| **Modality: RGB** | | |
| CDCN [51] | 27.47 | 34.66 |
| ViT [11] | 25.18 | 50.26 |
| ViT+ConvA (Ours) | 21.31 | 35.29 |
| ViT+ConvA+M2A2E (Ours) | 23.56 | 35.69 |
| **Modality: IR** | | |
| CDCN [51] | 44.11 | 31.19 |
| ViT [11] | 35.22 | 37.88 |
| ViT+ConvA (Ours) | 30.61 | 31.78 |
| ViT+ConvA+M2A2E (Ours) | 26.06 | 26.50 |
| **Modality: Depth** | | |
| CDCN [51] | 31.16 | 32.11 |
| ViT [11] | 27.94 | 29.53 |
| ViT+ConvA (Ours) | 25.04 | 26.75 |
| ViT+ConvA+M2A2E (Ours) | 23.12 | 23.71 |
| **Modality: RGB+IR** | | |
| MM-CDCN [48] | 24.60 | 27.86 |
| ViT [11] | 27.07 | 30.63 |
| ViT+AMA (Ours) | 19.77 | 27.80 |
| ViT+AMA+M2A2E (Ours) | 17.82 | 25.67 |
| **Modality: RGB+Depth** | | |
| MM-CDCN [48] | 22.38 | 25.46 |
| ViT [11] | 23.72 | 29.95 |
| ViT+AMA (Ours) | 19.17 | 24.99 |
| ViT+AMA+M2A2E (Ours) | 18.34 | 25.82 |
| **Modality: IR+Depth** | | |
| MM-CDCN [48] | 29.5 | 30.22 |
| ViT [11] | 31.84 | 34.99 |
| ViT+AMA (Ours) | 25.49 | 28.04 |
| ViT+AMA+M2A2E (Ours) | 21.43 | 26.35 |
| **Modality: RGB+IR+Depth** | | |
| Aux.(Depth) [31] | 12.35 | 24.54 |
| MM-CDCN [48] | 21.25 | 21.83 |
| MA-ViT [26] | 10.41 | 20.63 |
| ViT [11] | 19.19 | 23.21 |
| ViT+AMA (Ours) | 13.99 | 20.22 |
| ViT+AMA+M2A2E (Ours) | 8.60 | 18.83 |

Table 3 lists the HTER of all methods trained on one dataset and tested on the other. From these results, the proposed 'ViT+AMA+M2A2E' outperforms 'MM-CDCN' in most modality settings and 'MA-ViT' with 'RGB+IR+Depth' on both cross-testing protocols, indicating that the learned multimodal features are robust to sensors, resolutions, and attack types. Specifically, directly finetuning ImageNet pre-trained ViTs (see the results of 'ViT') usually generalizes worse than 'MM-CDCN' in multimodal settings. When assembled with AMA and M2A2E, the HTER can be further reduced by 9.25%/5.38%/10.41%/10.59% and 4.96%/4.13%/8.64%/4.38% for 'RGB+IR'/'RGB+Depth'/'IR+Depth'/'RGB+IR+Depth' when tested on MmFA and WMCA, respectively.

4.5. Ablation Study

We also provide the results of ablation studies for inputs with local descriptors and AMA on the 'seen' protocol of WMCA, and studies for M2A2E on the 'unseen' protocol of WMCA and cross-testing from WMCA to MmFA.

Impact of inputs with local descriptors. In the default setting of ViT inputs, the composition input 'GRAY_HOG_PLGF' is adopted for the IR modality, while raw inputs are utilized for the RGB and Depth modalities. In this ablation, we consider three local descriptors ('LBP' [36], 'HOG' [10], 'PLGF' [5]) and their compositions ('HOG_PLGF', 'LBP_HOG_PLGF', 'GRAY_HOG_PLGF'). It can be seen from Fig. 4 that the 'LBP' input usually performs worse than the other features for all three modalities. In contrast, the 'PLGF' input achieves reasonable performance (even better than the raw input for the IR modality via direct finetuning). It is clear that raw inputs are good enough for all modalities via ConvAdapter. One highlight is that the composition input 'GRAY_HOG_PLGF' performs the best for the IR modality via both direct finetuning and ConvAdapter, indicating the importance of locally detailed and illumination-invariant cues in IR feature representation.

Impact of adapter types. Here we discuss five possible adapter types for efficient multimodal learning, including the FC-based 'vanilla adapter' [20], the independent-modal 'ConvAdapter' [22], the 'multimodal ConvAdapter' with $\Theta_{2D}$ mapping channels $D' \times K$ to $D'$, the 'multimodal ConvAdapter (huge)' with $\Theta_{2D}$ mapping channels $D' \times K$ to $D' \times K$, and the adaptive multimodal ConvAdapter ('AMA').

Figure 5. Ablation of the adapter types in transformer blocks.

As shown in Fig. 5, the ConvAdapter-based modules perform significantly better than the vanilla adapter in multimodal settings, indicating that local inductive biases benefit ViT-based FAS. Moreover, compared with 'ConvAdapter', 'multimodal ConvAdapter' reduces ACER by more than 0.5% in all multimodal settings via aggregating multimodal local features. In contrast, we cannot see any performance improvement from 'multimodal ConvAdapter (huge)'. In other words, directly learning high-dimensional ($D' \times K$) convolutional features for all $K$ modalities results in serious overfitting. Compared with 'multimodal ConvAdapter', AMA enhances the diversity of features for different modalities via adaptively weighting the shared low-dimensional ($D'$) convolutional features, which decreases ACER by 0.52%, 0.31%, 1.07%, and 0.07% for 'RGB+Depth', 'RGB+IR', 'IR+Depth', and 'RGB+IR+Depth', respectively.

Figure 6. Ablation of the hidden dimensions in AMA.
Figure 7. Ablation of the AMA positions in transformer blocks.

Impact of dimension and position of AMA. Here we study the hidden dimension $D'$ in AMA and the impact of AMA positions in transformer blocks. It can be seen from Fig. 6 that, despite being more lightweight, lower dimensions (16 and 32) cannot achieve satisfactory performance due to weak representation capacity. The best performance is achieved with $D'$=64 in all multimodal settings. In terms of AMA positions, it is interesting to find from Fig. 7 that plugging AMA along the FFN performs better than along the MHSA in multimodal settings. This might be because the multimodal local features complement the limitation of the point-wise receptive field in the FFN. Besides, it is reasonable that applying AMA on MHSA+FFN performs the best.

Figure 8. Ablation of the (a) mask ratio; (b) self-supervision training epochs; and (c) decoder depth in M2A2E.

Impact of mask ratio in M2A2E. Fig. 8(a) illustrates the generalization of the M2A2E pre-trained ViT when finetuning on the 'unseen' protocol of WMCA and cross-testing from MmFA to WMCA. Different from the conclusions in [3, 18] using very large mask ratios (e.g., 75% and 83%), we find that mask ratios ranging from 30% to 50% are suitable for multimodal FAS, and the best generalization performance on both testing protocols is achieved when the mask ratio equals 40%. In other words, extremely high mask ratios (e.g., 70% to 90%) might force the model to learn overly semantic features while ignoring some useful low/mid-level live/spoof cues.

Impact of training epochs and decoder depth in M2A2E. We also investigate how the training epochs and decoder depth influence M2A2E. As shown in Fig. 8(b) and Fig. 8(c), training M2A2E for 400 epochs with a decoder of 4 transformer blocks generalizes the best. More training iterations and a deeper decoder are not always helpful due to severe overfitting on the reconstruction targets.

Figure 9. Results of the multimodal MAE [3] and our M2A2E.

Comparison between multimodal MAE [3] and M2A2E. We also compare M2A2E with the symmetric multimodal MAE [3] when finetuning on all downstream modality settings. It can be seen from Fig. 9 that, with the more challenging reconstruction target (from masked unimodal inputs to multimodal prediction), M2A2E outperforms the best settings of the multimodal MAE [3] on most modalities ('RGB', 'IR', 'RGB+IR', 'RGB+Depth', 'RGB+IR+Depth'), indicating its excellent downstream modality-agnostic capacity.

5."
+ }, + { + "url": "http://arxiv.org/abs/2302.03548v1", + "title": "PhysFormer++: Facial Video-based Physiological Measurement with SlowFast Temporal Difference Transformer", + "abstract": "Remote photoplethysmography (rPPG), which aims at measuring heart activities\nand physiological signals from facial video without any contact, has great\npotential in many applications (e.g., remote healthcare and affective\ncomputing). Recent deep learning approaches focus on mining subtle rPPG clues\nusing convolutional neural networks with limited spatio-temporal receptive\nfields, which neglect the long-range spatio-temporal perception and interaction\nfor rPPG modeling. In this paper, we propose two end-to-end video transformer\nbased architectures, namely PhysFormer and PhysFormer++, to adaptively\naggregate both local and global spatio-temporal features for rPPG\nrepresentation enhancement. As key modules in PhysFormer, the temporal\ndifference transformers first enhance the quasi-periodic rPPG features with\ntemporal difference guided global attention, and then refine the local\nspatio-temporal representation against interference. To better exploit the\ntemporal contextual and periodic rPPG clues, we also extend the PhysFormer to\nthe two-pathway SlowFast based PhysFormer++ with temporal difference periodic\nand cross-attention transformers. Furthermore, we propose the label\ndistribution learning and a curriculum learning inspired dynamic constraint in\nfrequency domain, which provide elaborate supervisions for PhysFormer and\nPhysFormer++ and alleviate overfitting. Comprehensive experiments are performed\non four benchmark datasets to show our superior performance on both intra- and\ncross-dataset testings. 
Unlike most transformer networks needed pretraining\nfrom large-scale datasets, the proposed PhysFormer family can be easily trained\nfrom scratch on rPPG datasets, which makes it promising as a novel transformer\nbaseline for the rPPG community.", + "authors": "Zitong Yu, Yuming Shen, Jingang Shi, Hengshuang Zhao, Yawen Cui, Jiehua Zhang, Philip Torr, Guoying Zhao", + "published": "2023-02-07", + "updated": "2023-02-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Physiological signals such as heart rate (HR), respiration frequency (RF), and heart rate variability (HRV) are important vital signs to be measured in many circumstances, especially for healthcare or medical purposes. Traditionally, the electrocardiography (ECG) and photoplethysmograph (PPG) or blood volume pulse (BVP) are the two most common ways for measuring heart activities and corresponding physiological signals. However, both ECG and PPG/BVP sensors need to be attached to body parts, which may 1 arXiv:2302.03548v1 [cs.CV] 7 Feb 2023 \fSpringer Nature 2021 L AT EX template rPPG signals t1 t2 t3 Direction Query Key interaction Fig. 1 The trajectories of rPPG signals around t1, t2, and t3 share similar properties (e.g., trends with rising edge \ufb01rst then falling edge later, and relatively high magnitudes) induced by skin color changes. It inspires the long-range spatio-temporal attention (e.g., blue tube around t1 interacted with red tubes from intraand inter-frames) according to their local temporal di\ufb00erence features for quasi-periodic rPPG enhancement. Here \u2018tube\u2019 indicates the same regions across short-time consecutive frames. cause discomfort and are inconvenient for longterm monitoring. To counter this issue, remote photoplethysmography (rPPG) [Yu et al., 2021b, Chen et al., 2018, Liu et al., 2021c] methods are developing fast in recent years, which aim to measure heart activity remotely without any contact. 
In earlier studies of facial rPPG measurement, most methods analyze subtle color changes on facial regions of interest (ROI) with classical signal processing approaches [Verkruysse et al., 2008, Poh et al., 2010b, Poh et al., 2010a, Li et al., 2014, Tulyakov et al., 2016, Magdalena Nowara et al., 2018]. Besides, there are a few color subspace transformation methods [De Haan and Jeanne, 2013, Wang et al., 2017] which utilize all skin pixels for rPPG measurement. Based on the prior knowledge from traditional methods, a few learning based approaches [Hsu et al., 2017, Qiu et al., 2018, Niu et al., 2018, Niu et al., 2019a] are designed in non-end-to-end fashions. ROI-based preprocessed signal representations (e.g., the time-frequency map [Hsu et al., 2017] and spatio-temporal map [Niu et al., 2018, Niu et al., 2019a]) are generated first, and learnable models then capture rPPG features from these maps. However, these methods require a strict preprocessing procedure and neglect the global contextual clues outside the pre-defined ROIs. Meanwhile, more and more end-to-end deep learning based rPPG methods [Špetlík et al., 2018, Chen and McDuff, 2018, Yu et al., 2019a, Yu et al., 2019b, Liu et al., 2020] have been developed, which treat facial video frames as input and predict rPPG and other physiological signals directly. However, pure end-to-end methods are easily influenced by complex scenarios (e.g., with head movement and various illumination conditions), and rPPG-unrelated features cannot be ruled out during learning, resulting in huge performance drops [Yu et al., 2020] on realistic datasets (e.g., VIPL-HR [Niu et al., 2019a]).
Recently, due to its excellent long-range attentional modeling capacity in solving sequence-to-sequence problems, the transformer [Lin et al., 2021a, Han et al., 2020] has been successfully applied to many artificial intelligence tasks such as natural language processing (NLP) [Vaswani et al., 2017], image [Dosovitskiy et al., 2021] and video [Bertasius et al., 2021] analysis. Similarly, rPPG measurement from facial videos can be treated as a video-sequence-to-signal-sequence problem, where long-range contextual clues should be exploited for semantic modeling. As shown in Fig. 1, rPPG clues from different skin regions and temporal locations (e.g., signal trajectories around t1, t2, and t3) share similar properties (e.g., trends with a rising edge first and a falling edge later, and relatively high magnitudes), which can be utilized for long-range feature modeling and enhancement. However, different from most video tasks aiming at semantic motion representation, facial rPPG measurement focuses on capturing subtle skin color changes, which makes global spatio-temporal perception challenging. Besides, the rPPG measurement task usually relies on periodic hidden visual dynamics, and existing deep end-to-end models are weak in representing such clues. Furthermore, video-based rPPG measurement is usually a long-time monitoring task, and it is challenging to design and train transformers with long video sequence inputs. Motivated by the discussions above, we propose two end-to-end video transformer based architectures, namely PhysFormer and PhysFormer++, for remote physiological measurement. On the one hand, the cascaded temporal difference transformer blocks in PhysFormer benefit the rPPG feature enhancement via global spatio-temporal attention based on fine-grained temporal skin color differences.
Furthermore, the two-pathway SlowFast temporal difference transformer based PhysFormer++ with periodic- and cross-attention is able to efficiently capture the temporal contextual and periodic rPPG clues from facial videos. On the other hand, to alleviate the interference-induced overfitting issue and complement the weak temporal supervision signals, elaborate supervision in the frequency domain is designed, which helps the PhysFormer family learn more intrinsic rPPG-aware features. This paper is an extended version of our prior work [Yu et al., 2022] accepted by CVPR 2022. The main differences from the conference version are as follows: 1) besides the temporal difference transformer based PhysFormer, we propose the novel SlowFast video transformer architecture PhysFormer++ for the rPPG measurement task; 2) based on the temporal difference transformer, the temporal difference periodic transformer and temporal difference cross-attention transformer are proposed to enhance the rPPG periodic perception and cross-tempo rPPG dynamics, respectively; 3) a detailed overview of the traditional, non-end-to-end learning based, and end-to-end learning based rPPG measurement methods is discussed in the related work; 4) more elaborate experimental results, visualizations, and efficiency analysis are given for the PhysFormer family. To sum up, the main contributions of this paper are listed:

• We propose the PhysFormer family, i.e., PhysFormer and PhysFormer++, which mainly consists of a powerful video temporal difference transformer backbone. To the best of our knowledge, this is the first time the long-range spatio-temporal relationship is explored for reliable rPPG measurement.
Besides, the proposed temporal difference transformer has potential for broader fine-grained or periodic video understanding tasks in computer vision (e.g., video action recognition and repetition counting) due to its excellent spatio-temporal representation capacity with local temporal difference description and global spatio-temporal modeling.

• We propose the two-pathway SlowFast architecture for PhysFormer++ to efficiently leverage both fine-grained and semantic tempo rPPG clues. Specifically, the temporal difference periodic and cross-attention transformers are respectively designed for the Slow and Fast pathways to enhance the representation capacity of the periodic rPPG dynamics.

• We propose an elaborate recipe to supervise PhysFormer with label distribution learning and a curriculum learning guided dynamic loss in the frequency domain to learn efficiently and alleviate overfitting. Such a curriculum learning guided dynamic strategy could benefit not only the rPPG measurement task but also general deep learning tasks such as multi-task learning and multi-loss adjusting.

• We conduct intra- and cross-dataset testings and show that the proposed PhysFormer achieves superior or on-par state-of-the-art performance without pretraining on large-scale datasets like ImageNet-21K.

In the rest of the paper, Section 2 provides the related work about rPPG measurement and vision transformers. Section 3 first introduces the detailed architectures of PhysFormer and PhysFormer++, and then formulates the label distribution learning and curriculum learning guided dynamic supervision for rPPG measurement. Section 4 introduces the four rPPG benchmark datasets and evaluation metrics, provides rigorous ablation studies and visualizations, and evaluates the performance of the proposed models. Finally, a conclusion is given in Section 5.
2 Related Work
In this section, we provide a brief discussion of related facial rPPG measurement approaches. As shown in Table 1, these approaches can be generally categorized into traditional, non-end-to-end learning, and end-to-end learning based methods. We also briefly review transformer architectures for vision tasks.

Table 1 Summary of the representative rPPG measurement methods in terms of traditional, non-end-to-end learning, and end-to-end learning categories.

Method | Venue | Feature | Backbone | Loss Function

Traditional:
Poh2010 [Poh et al., 2010a] | IEEE Trans. Biomed. Eng. | ROI selection, ICA decomposition | — | —
CHROM [De Haan and Jeanne, 2013] | IEEE Trans. Biomed. Eng. | Mapping on the color difference (chrominance) subspace | — | —
Li2014 [Li et al., 2014] | CVPR | ROI selection, tracking, illumination rectification, non-rigid motion elimination | — | —
RandomPatch [Lam and Kuno, 2015] | ICCV | ICA decomposition on random patches, majority voting | — | —
Tulyakov2016 [Tulyakov et al., 2016] | CVPR | Chrominance features from multiple ROIs, low-rank factorization via self-adaptive matrix completion | — | —
POS [Wang et al., 2017] | IEEE Trans. Biomed. Eng. | Mapping on the projection plane orthogonal to the skin tone | — | —

Non-end-to-end learning:
SynRhythm [Niu et al., 2018] | ICPR | Spatio-temporal map representation, pretrained on synthetic rhythms | ResNet18 | L1 regression loss
EVM-CNN [Qiu et al., 2018] | IEEE TMM | ROI tracking, feature image via spatial decomposition and temporal filtering | Shallow CNN | L2 regression loss
RhythmNet [Niu et al., 2019a] | IEEE TIP | Spatio-temporal map in YUV color space, temporal contextual modeling via RNN | ResNet18+GRU | L1 regression loss, temporal smooth loss
ST-Attention [Niu et al., 2019b] | IEEE FG | Spatio-temporal map in YUV color space, channel and spatio-temporal attention, temporal augmentation | ResNet18 | L1 regression loss
NAS-HR [Lu and Han, 2021] | VRIH | Spatio-temporal map with POS signals, NAS for efficient architecture search | Lightweight NAS | L1 regression loss
CVD [Niu et al., 2020] | ECCV | Multi-scale spatio-temporal map on both RGB and YUV color spaces, cross-verified feature disentangling, multi-task learning | ResNet18 with shallow decoder | L1 regression loss, CE loss, NegPearson loss, reconstruction loss
Dual-GAN [Lu et al., 2021] | CVPR | Spatio-temporal map, dual GAN for BVP signal and noise modeling, respectively | Customized CNN | L1 regression loss, CE loss, NegPearson loss, GAN loss

End-to-end learning:
DeepPhys [Chen and McDuff, 2018] | ECCV | Motion representation with normalized frame difference, attention mechanism using appearance to guide motion | Two-branch CNN | MSE loss
HR-CNN [Špetlík et al., 2018] | BMVC | Two-stage CNN to measure the rPPG signals first, and then estimate the HR value | Shallow CNN | SNR loss, L1 regression loss
PhysNet [Yu et al., 2019a] | BMVC | End-to-end spatio-temporal networks | 3DCNN | NegPearson loss
rPPGNet [Yu et al., 2019b] | ICCV | Two-stage framework to enhance video quality first, and then extract rPPG signals | 3DCNN | NegPearson loss, reconstruction loss, skin segmentation loss
AutoHR [Yu et al., 2020] | IEEE SPL | NAS with temporal difference convolution, spatio-temporal augmentation | NAS 3DCNN | NegPearson loss, CE loss
TS-CAN [Liu et al., 2020] | NeurIPS | Multi-task temporal shift convolutional attention networks, mobile-level real-time rPPG and respiratory measurement | Two-branch temporal shift CNN | MSE loss on pulse and respiration
EfficientPhys [Liu et al., 2021b] | arXiv | Temporal difference normalization, self-attention-shifted networks | Temporal shift Swin Transformer | MSE loss
PhysFormer / PhysFormer++ (Ours) | 2022 | Temporal difference transformers supervised by the label distribution learning and curriculum learning strategy | Temporal difference video transformer | NegPearson loss, CE loss, label distribution loss

2.1 rPPG measurement
Traditional approaches. An early study of rPPG-based physiological measurement was reported in [Verkruysse et al., 2008]. Plenty of traditional hand-crafted approaches have been developed in this field since then. Compared with coarsely averaging an arbitrary color channel from the detected full face region, selectively merging information from different color channels [Poh et al., 2010b, Poh et al., 2010a] and from different ROIs [Lam and Kuno, 2015, Li et al., 2014], together with adaptive temporal filtering [Li et al., 2014], is proven to be more efficient for subtle rPPG signal recovery. To improve the signal-to-noise ratio of the recovered rPPG signals, several signal decomposition methods such as independent component analysis (ICA) [Poh et al., 2010b, Poh et al., 2010a, Lam and Kuno, 2015] and matrix completion [Tulyakov et al., 2016] are also proposed. To alleviate the impacts of skin tone and head motion, several color space projection methods (e.g., chrominance subspace [De Haan and Jeanne, 2013] and skin-orthogonal space [Wang et al., 2017]) are developed. Despite remarkable early-stage progress, these approaches have the following limitations: 1) they require empirical knowledge to design the components (e.g., hyperparameters in signal processing filtering); 2) there is a lack of supervised learning models to counter data variations, especially in challenging environments with serious interference.
Non-end-to-end learning approaches. In recent years, deep learning based approaches dominate the field of rPPG measurement due to their strong spatio-temporal representation capabilities.
One representative framework is to learn robust rPPG features from the facial ROI-based spatio-temporal signal map (STmap). The STmap [Niu et al., 2018, Niu et al., 2019b] or its variants (e.g., multi-scale STmap [Niu et al., 2020, Lu et al., 2021] and chrominance STmap [Lu and Han, 2021]) are first extracted from predefined facial ROIs on different color spaces, and then a classical convolutional neural network (CNN) (e.g., ResNet [He et al., 2016]) and recurrent neural network (RNN) (e.g., GRU [Cho et al., 2014]) are cascaded for rPPG feature representation. The STmap-based non-end-to-end learning framework focuses on learning an underlying mapping from the input feature maps to the target rPPG signals. With dense raw rPPG information and fewer irrelevant elements (e.g., face-shape attributes), these methods usually converge faster and achieve reasonable performance against head movement, but need explicit and exhaustive preprocessing.
End-to-end learning approaches. Besides learning upon handcrafted STmaps, end-to-end learning directly from facial sequences is also popular. Both spatial 2DCNN networks [Špetlík et al., 2018, Chen and McDuff, 2018] and spatio-temporal models [Yu et al., 2019a, Yu et al., 2019b, Yu et al., 2020, Liu et al., 2020, Liu et al., 2021b, Nowara et al., 2021, Gideon and Stent, 2021] are developed for rPPG feature representation. Yu et al. [Yu et al., 2019a] investigate recurrent methods (PhysNet-LSTM, PhysNet-ConvLSTM) for rPPG measurement. However, such CNN+LSTM based architectures are good at long-range sequential modeling via LSTM but fail to explore the long-range intra-frame spatial relationship using CNNs with local convolutions. In contrast, with a spatial transformer backbone and temporal shift module, EfficientPhys [Liu et al., 2021b] is able to explore long-range spatial but only short-term temporal relationships.
In other words, existing end-to-end methods only consider the spatio-temporal rPPG features from local neighbors and adjacent frames but neglect the long-range relationship among quasi-periodic rPPG features. Compared with the non-end-to-end learning based methods, end-to-end approaches are less dependent on task-related prior knowledge and handcrafted engineering (e.g., STmap generation) but rely on diverse and large-scale data to alleviate the problem of overfitting. To enhance the long-range contextual spatio-temporal representation capacities and alleviate the data-hungry requirement of deep rPPG models, we propose the PhysFormer and PhysFormer++ architectures, which can be easily trained from scratch on rPPG datasets with the elaborate supervision recipe.
2.2 Transformer for vision tasks
Due to the powerful self-attention based long-range modeling capacity, the transformer [Lin et al., 2021a, Vaswani et al., 2017] has been successfully applied in the field of NLP to model the contextual relationship for sequential data. The vision transformer (ViT) [Dosovitskiy et al., 2021] was then proposed by feeding a transformer with sequences of image patches for image classification. Many other ViT variants [Han et al., 2020, Khan et al., 2021, Touvron et al., 2021, Liu et al., 2021e, Yuan et al., 2021, Wang et al., 2021b, Han et al., 2021, Chen et al., 2021a, Ding et al., 2021] have been proposed since then, which achieve promising performance compared with their counterpart CNNs for image analysis tasks [Carion et al., 2020, Zheng et al., 2021, He et al., 2021].
Recently, some works introduce vision transformers for video understanding tasks such as action recognition [Arnab et al., 2021, Fan et al., 2021, Neimark et al., 2021, Girdhar et al., 2019, Liu et al., 2021f, Bulat et al., 2021, Bertasius et al., 2021], action detection [Zhao et al., 2021, Liu et al., 2021d, Wang et al., 2021a, Xu et al., 2021], video super-resolution [Cao et al., 2021], video inpainting [Zeng et al., 2020, Liu et al., 2021a], and 3D animation [Chen et al., 2022, Chen et al., 2021b]. Some works [Neimark et al., 2021, Girdhar et al., 2019] conduct temporal contextual modeling with a transformer based on single-frame features from pretrained 2D networks, while other works [Bertasius et al., 2021, Arnab et al., 2021, Liu et al., 2021f, Bulat et al., 2021, Fan et al., 2021] mine the spatio-temporal attentions via video transformers directly. Most of these works are incompatible with the long-video-sequence (>150 frames) signal regression task. There are two related works [Yu et al., 2021a, Liu et al., 2021b] using ViT for rPPG feature representation. TransRPPG [Yu et al., 2021a] extracts rPPG features from the preprocessed signal maps via ViT for face 3D mask presentation attack detection [Yu et al., 2021c]. Based on the temporal shift networks [Liu et al., 2020, Lin et al., 2019], EfficientPhys-T [Liu et al., 2021b] adds several Swin Transformer [Liu et al., 2021e] layers for global spatial attention. Different from these two works, the proposed PhysFormer and PhysFormer++ are end-to-end video transformers, which are able to capture long-range spatio-temporal attentional rPPG features from facial video directly.

Fig. 2 Framework of the PhysFormer. It consists of a shallow stem, a tube tokenizer, several temporal difference transformers, and an rPPG predictor head. The temporal difference transformer is formed from the Temporal Difference Multi-head Self-attention (TD-MHSA) and Spatio-temporal Feed-forward (ST-FF) modules, which enhance the global and local spatio-temporal representation, respectively. 'TDC' is short for the temporal difference convolution [Yu et al., 2020, Yu et al., 2021d].

3 Methodology
We will first introduce the architectures of PhysFormer and PhysFormer++ in Sec. 3.1 and 3.2, respectively. Then we will introduce label distribution learning for rPPG measurement in Sec. 3.3, and at last present the curriculum learning guided dynamic supervision in Sec. 3.4.
3.1 PhysFormer
As illustrated in Fig. 2, PhysFormer consists of a shallow stem E_stem, a tube tokenizer E_tube, N temporal difference transformer blocks E_trans^i (i = 1, ..., N) and an rPPG predictor head. Inspired by the study in [Xiao et al., 2021], we adopt a shallow stem to extract coarse local spatio-temporal features, which benefits fast convergence and clearer subsequent global self-attention. Specifically, the stem is formed by three convolutional blocks with kernel sizes (1x5x5), (3x3x3) and (3x3x3), respectively. Each convolution operator is cascaded with a batch normalization (BN), ReLU and MaxPool. The pooling layers only halve the spatial dimensions. Therefore, given an RGB facial video input X ∈ R^(3×T×H×W), the stem output X_stem = E_stem(X), where X_stem ∈ R^(D×T×H/8×W/8), and D, T, W, H indicate channel, sequence length, width, and height, respectively.
Then X_stem is partitioned into spatio-temporal tube tokens X_tube ∈ R^(D×T′×H′×W′) via the tube tokenizer E_tube. Subsequently, the tube tokens are forwarded through the N temporal difference transformer blocks to obtain the global-local refined rPPG features X_trans, which have the same dimensions as X_tube. Finally, the rPPG predictor head temporally upsamples, spatially averages, and projects the features X_trans to the 1D signal Y ∈ R^T.
Tube tokenization. Here the coarse feature X_stem is partitioned into non-overlapping tube tokens via E_tube(X_stem), which aggregates the spatio-temporal neighbor semantics within the tube region and reduces computational costs for the subsequent transformers. Specifically, the tube tokenizer consists of a learnable 3D convolution with the same kernel size and stride (non-overlapping setting) as the targeted tube size Ts × Hs × Ws. Thus, the expected tube token map X_tube ∈ R^(D×T′×H′×W′) has length, height and width

T′ = ⌊T/Ts⌋,  H′ = ⌊(H/8)/Hs⌋,  W′ = ⌊(W/8)/Ws⌋.   (1)

Please note that there are no positional embeddings after the tube tokenization, as the stem with cascaded convolutions and poolings at the early stage already captures relative spatio-temporal positional information [Hassani et al., 2021].
Temporal difference multi-head self-attention (TD-MHSA). In the self-attention mechanism [Vaswani et al., 2017, Dosovitskiy et al., 2021], the relationship between the tokens is modeled by the similarity between the projected query-key pairs, yielding the attention score. Instead of point-wise linear projection, we utilize temporal difference convolution (TDC) [Yu et al., 2020, Yu et al., 2021d] for the query (Q) and key (K) projection, which could capture fine-grained local temporal difference features for subtle color change description.
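As a quick sanity check of Eq. (1), the token-map dimensions can be computed directly. A minimal sketch; the clip size 160x128x128 and tube size 4x4x4 follow the implementation details reported later in Sec. 4.2:

```python
def tube_token_dims(T, H, W, Ts=4, Hs=4, Ws=4):
    """Token-map size after the stem (spatial /8) and tube tokenizer, per Eq. (1)."""
    return T // Ts, (H // 8) // Hs, (W // 8) // Ws

# A 160x128x128 clip yields a 40x4x4 token map.
print(tube_token_dims(160, 128, 128))  # -> (40, 4, 4)
```

The non-overlapping tokenizer makes these floor divisions exact whenever the clip size is a multiple of the tube size, as in the default configuration.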
TDC with learnable weights w can be formulated as

TDC(x) = Σ_{p_n ∈ R} w(p_n) · x(p_0 + p_n)  [vanilla 3D convolution]  +  θ · (−x(p_0) · Σ_{p_n ∈ R′} w(p_n))  [temporal difference term],   (2)

where p_0 = (0,0,0) indicates the current spatio-temporal location. R = {(−1,−1,−1), (−1,−1,0), ..., (0,1,1), (1,1,1)} indicates the sampled local (3x3x3) spatio-temporal receptive field cube for 3D convolution in both the current (t_0) and adjacent time steps (t_−1 and t_1), while R′ only indicates the local spatial regions in the adjacent time steps (t_−1 and t_1). The hyperparameter θ ∈ [0, 1] trades off the contribution of the temporal difference. A higher value of θ places more importance on temporal difference information (e.g., trends of the skin color changes). In particular, TDC degrades to vanilla 3D convolution when θ = 0. Then the query and key are projected via unshared TDC and BN as

Q = BN(TDC(X_tube)),  K = BN(TDC(X_tube)).   (3)

For the value (V) projection, point-wise linear projection without BN is utilized. Then Q, K, V ∈ R^(D×T′×H′×W′) are flattened into sequences and separated into h heads (D_h = D/h for each head). For the i-th head (i ≤ h), the self-attention (SA) can be formulated as

SA_i = Softmax(Q_i K_i^T / τ) V_i,   (4)

where τ controls the sparsity. We find that the default setting τ = √D_h in [Vaswani et al., 2017, Dosovitskiy et al., 2021] performs poorly for rPPG measurement. According to the periodicity of rPPG features, we use a smaller τ value to obtain sparser attention activations. The corresponding study can be found in Table 7. The output of TD-MHSA is the concatenation of SA from all heads followed by a linear projection U ∈ R^(D×D):

TD-MHSA = Concat(SA_1; SA_2; ...; SA_h) U.   (5)

As illustrated in Fig. 2, residual connection and layer normalization (LN) are conducted after TD-MHSA.
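Eq. (2) can be implemented without explicitly looping over R′: the temporal-difference term equals a 1x1x1 convolution of x whose weights are the 3x3x3 kernel weights summed over the two adjacent-time slices. A minimal PyTorch sketch under that observation (the module name `TDC3d` and bias-free structure are our own; only the arithmetic follows Eq. (2)):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TDC3d(nn.Module):
    """Temporal difference convolution (Eq. (2)): vanilla 3D conv
    minus theta * x(p0) * (sum of kernel weights over R')."""
    def __init__(self, c_in, c_out, theta=0.7):
        super().__init__()
        self.conv = nn.Conv3d(c_in, c_out, kernel_size=3, padding=1, bias=False)
        self.theta = theta

    def forward(self, x):
        out = self.conv(x)                   # vanilla 3D convolution term
        if self.theta == 0:
            return out                       # degrades to vanilla 3D conv
        w = self.conv.weight                 # shape (c_out, c_in, 3, 3, 3)
        # R': all 3x3 spatial positions in the adjacent time slices t-1 and t+1
        w_sum = w[:, :, [0, 2]].sum(dim=(2, 3, 4))          # (c_out, c_in)
        diff = F.conv3d(x, w_sum[:, :, None, None, None])   # x(p0) * sum_{R'} w
        return out - self.theta * diff

tdc = TDC3d(3, 8, theta=0.7)
y = tdc(torch.randn(1, 3, 16, 8, 8))  # output shape (1, 8, 16, 8, 8)
```

With theta=0 the module reduces exactly to its internal vanilla convolution, matching the degenerate case stated after Eq. (2).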
Spatio-temporal feed-forward (ST-FF). The vanilla feed-forward network consists of two linear transformation layers, where the hidden dimension D′ between the two layers is expanded to learn a richer feature representation. In contrast, we introduce a depthwise 3D convolution (with BN and nonlinear activation) between these two layers, with slight extra computational cost but remarkable performance improvement. The benefits are two-fold: 1) as a complement to TD-MHSA, ST-FF could refine the local inconsistency and parts of the noisy features; 2) richer locality provides TD-MHSA sufficient relative position cues.
3.2 PhysFormer++
In PhysFormer, the temporal length Ts of the tube token map is fixed. However, a fixed value of Ts might be sub-optimal for robust rPPG feature representation, as a larger Ts reduces the temporal redundancy but loses fine-grained temporal clues, and vice versa for a smaller Ts. To alleviate this issue, we design the temporally enhanced version PhysFormer++ (see Fig. 3), consisting of two-stream SlowFast pathways with large and small Ts, respectively.

Fig. 3 Framework of the PhysFormer++ with two-stream SlowFast pathways. Different from the PhysFormer using only the slow pathway, PhysFormer++ extracts and fuses attentional features from the slow and fast pathways. Moreover, temporal difference periodic transformer blocks are used in the slow pathway. The information flow between the two pathways interacts via temporal difference cross-attention transformer blocks and a lateral connection.

Similar to the SlowFast concept in [Feichtenhofer et al., 2019, Kazakos et al., 2021], the Slow pathway has high channel capacity with low framerates and reduces the temporal redundancy. In contrast, the Fast pathway operates at a fine-grained temporal resolution with high framerates. Furthermore, two novel transformer blocks, the temporal difference periodic transformer and the temporal difference cross-attention transformer, are designed for the slow and fast pathways, respectively. The former encodes contextual rPPG periodicity clues for the slow pathway, while the latter introduces efficient SlowFast interactive attentions for the fast pathway. The SlowFast architecture is able to adaptively mine richer temporal rPPG contexts for robust rPPG measurement.
As illustrated in Fig. 3 and the detailed architecture in Fig. 4, different from the PhysFormer using a single tube tokenizer, two tube tokenizers E_tube^fast and E_tube^slow are adopted in PhysFormer++ to form the spatio-temporal tube tokens X_tube^fast ∈ R^(D_fast×T_fast×H′×W′) and X_tube^slow ∈ R^(D_slow×T_slow×H′×W′), respectively. The default settings D_slow = D = 2·D_fast and T_fast = 2T′ = 2·T_slow are used for a computational tradeoff. Here we set the temporal scale to two by considering that there are many low-framerate videos in the VIPL-HR dataset [Niu et al., 2019a]. Higher scales would result in pulse rhythm incompletion/artifacts for high HR values (e.g., >120 bpm). We will investigate more scales for higher-framerate videos in the future. Subsequently, the tube tokens from the slow pathway are forwarded through N = 3N′ temporal difference periodic transformer blocks, while the tube tokens from the fast pathway pass through N′ temporal difference transformer and 2N′ temporal difference cross-attention transformer blocks.

Fig. 4 Architectures of PhysFormer++. Inside the brackets are the filter sizes and feature dimensionalities. 'Conv' denotes the vanilla 3D convolution. All convolutional layers (except the tokenizers) are with stride=1 and are followed by a BN-ReLU layer, while 'MaxPool' layers are with stride=1x2x2.

Specifically, the feature interactions between the SlowFast pathways are two-fold: 1) all semantic mid- and high-level features from the slow path are cross-attentive with those from the fast path; and 2) the last mid-level features from the two pathways, X_tube^fast-mid and X_tube^slow-mid, are laterally connected and then aggregated for the high-level propagation in the slow pathway. The lateral connection and aggregation can be formulated as

X_tube^slow-mid = Conv2(Concat(X_tube^slow-mid, Conv1(X_tube^fast-mid))),   (6)

where Conv1 is the temporal convolution with size=3x1x1, stride=2x1x1, padding=1x0x0, while Conv2 denotes the point-wise convolution with D output channels. The lateral connection adaptively transfers the mid-level fine-grained rPPG clues from the Fast pathway to the Slow pathway, and provides complementary temporal details for the Slow pathway to alleviate information loss, especially for high-HR scenarios (e.g., after exercise). Finally, the refined high-level rPPG features from the fast and slow (upsampled) pathways are concatenated and forwarded to the rPPG predictor head with temporal aggregation, upsampling, spatial averaging, and projection to the 1D signal Ŷ ∈ R^T.
Temporal difference multi-head cross- and self-attention. Compared with the slow pathway, the fast pathway has more fine-grained features but conducts inefficient and inaccurate self-attention due to the temporal redundancy/artifacts.
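Under the default dimensions above (D_slow = 96, D_fast = 48, T_fast = 2·T_slow), Eq. (6) can be sketched in PyTorch as follows. The module and variable names are ours, not from any released code; only the kernel/stride/padding of Conv1 and the point-wise Conv2 follow the text:

```python
import torch
import torch.nn as nn

class LateralConnection(nn.Module):
    """Eq. (6) sketch: temporally downsample the fast-path features,
    concatenate with the slow-path features, then fuse point-wise."""
    def __init__(self, d_slow=96, d_fast=48):
        super().__init__()
        # Conv1: temporal conv, kernel 3x1x1, stride 2x1x1, padding 1x0x0
        self.conv1 = nn.Conv3d(d_fast, d_fast, (3, 1, 1),
                               stride=(2, 1, 1), padding=(1, 0, 0))
        # Conv2: point-wise conv back to D (= d_slow) output channels
        self.conv2 = nn.Conv3d(d_slow + d_fast, d_slow, kernel_size=1)

    def forward(self, x_slow, x_fast):
        return self.conv2(torch.cat([x_slow, self.conv1(x_fast)], dim=1))

lat = LateralConnection()
out = lat(torch.randn(1, 96, 40, 4, 4), torch.randn(1, 48, 80, 4, 4))
print(out.shape)  # torch.Size([1, 96, 40, 4, 4])
```

The stride-2 temporal convolution halves T_fast to T_slow, so the concatenation along the channel dimension is shape-compatible without any cropping.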
To alleviate the weak self-attention issue in the fast pathway, we propose the temporal difference multi-head cross- and self-attention (TD-MHCSA) module, which can be cascaded with the ST-FF module to form the temporal difference cross-attention transformer. With TD-MHCSA, the features in the fast pathway can be refined not only by their own self-attention but also by the cross-attention between the SlowFast pathways. The structure of the TD-MHCSA is illustrated in Fig. 5. The features from the fast pathway X_tube^fast are first projected to query and key via

Q_fast = BN(TDC(X_tube^fast)),  K_fast = BN(TDC(X_tube^fast)).   (7)

For the value (V_fast) projection, point-wise linear projection without BN is utilized. Then Q_fast, K_fast, V_fast ∈ R^(D_fast×T_fast×H′×W′) are flattened into sequences and separated into h heads (D_h^fast = D_fast/h for each head). For the i-th head (i ≤ h), the self-attention can be formulated as

SA_i^fast = Softmax(Q_i^fast (K_i^fast)^T / τ) V_i^fast.   (8)

Similarly, the features from the slow pathway X_tube^slow are projected to the key K_slow via BN(TDC(X_tube^slow)), as well as to the value (V_slow) using point-wise linear projection.

Fig. 5 Illustration of the temporal difference multi-head cross- and self-attention (TD-MHCSA) module.

Then K_slow, V_slow ∈ R^(D_slow×T_slow×H′×W′) are flattened into sequences and separated into h heads. For the i-th head (i ≤ h), the cross-attention (CA) can be formulated as

CA_i = Softmax(Q_i^fast (K_i^slow)^T / τ) V_i^slow.   (9)

Thus, the combined cross- and self-attention (CSA) is formulated as CSA_i = CA_i + SA_i^fast. The output of TD-MHCSA is the concatenation of CSA from all heads followed by a linear projection U_fast ∈ R^(D_fast×D_fast):

TD-MHCSA = Concat(CSA_1; CSA_2; ...; CSA_h) U_fast.   (10)

Finally, residual connection and an LN layer are conducted after TD-MHCSA.
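Per head, the combination of Eqs. (8)-(10) reduces to two scaled-softmax attention products sharing the fast-path queries. A NumPy sketch, under the assumption that the slow-path keys/values have already been projected to the same per-head dimension as the fast path (the paper does not spell this detail out):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def csa_head(q_fast, k_fast, v_fast, k_slow, v_slow, tau=2.0):
    """CSA_i = SA_i^fast + CA_i (Eqs. (8)-(9)) for a single head."""
    sa = softmax(q_fast @ k_fast.T / tau) @ v_fast   # self-attention within fast path
    ca = softmax(q_fast @ k_slow.T / tau) @ v_slow   # cross-attention onto slow path
    return sa + ca

rng = np.random.default_rng(0)
nf, ns, d = 80, 40, 12   # fast/slow token counts (T_fast = 2*T_slow), head dim
out = csa_head(rng.standard_normal((nf, d)), rng.standard_normal((nf, d)),
               rng.standard_normal((nf, d)), rng.standard_normal((ns, d)),
               rng.standard_normal((ns, d)))
print(out.shape)  # (80, 12)
```

Note that the output keeps the fast-path token count: the cross term only borrows slow-path keys/values, so the fast pathway's fine temporal resolution is preserved.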
Temporal difference multi-head periodic- and self-attention. Inspired by the music transformer [Huang et al., 2019], which uses relative attention [Shaw et al., 2018, Wu et al., 2021] to mine richer positional relationships (e.g., periodicity in music signals), we propose the temporal difference multi-head periodic- and self-attention (TD-MHPSA), which extends the TD-MHSA (in Sec. 3.1) with a learnable rPPG-aware positional contextual periodicity representation. Specifically, as shown in Fig. 6, the learnable contextual periodicity encoding R ∈ R^(T′H′W′×T′H′W′×D) contains the spatio-temporal positional clues, and modulates the query vector Q into the periodic attention S = QR^T. In consideration of the multi-head setting h, for the i-th head, the joint contextual periodicity (CP) and self-attention (SA) can be formulated as

CPSA_i = Softmax((Q_i K_i^T + λ · S_i) / τ) V_i,   (11)

where λ trades off the CP and SA. Here we follow the memory-efficient implementation in [Huang et al., 2019] for the S calculation.

Fig. 6 Illustration of the temporal difference multi-head periodic- and self-attention (TD-MHPSA) module.

Despite richer positional periodicity clues, the predicted periodic attention S might be easily influenced by some rPPG-unrelated clues (e.g., light changes and dynamic noise). To alleviate this issue, we propose a periodicity constraint to supervise the periodic representation S. As shown in the top left of Fig. 6, the approximate peak map PM can be obtained by 1) first extracting the binary peak signal P ∈ R^T from the ground truth BVP signal Y ∈ R^T via

P_t = 1 if Y_t ∈ R_peak,  P_t = 0 if Y_t ∉ R_peak,  t ∈ T,   (12)

where R_peak denotes the 1D region of peak locations; and then 2) calculating the auto-correlation of the peak signal P via PM = PP^T. Finally, the periodic-attention loss L_atten can be calculated with the binary cross-entropy (BCE) loss between the adaptive-spatial-pooled periodic attention maps S′ ∈ R^(T′×T′) (from each head and each TD-MHPSA module) and the subsampled binary peak maps PM′ ∈ R^(T′×T′). It can be formulated as

L_atten = (1/(h × N)) · Σ_{i∈h, j∈N} BCE(S′, PM′).   (13)

We also tried supervision with an L1 regression loss instead of the BCE loss, but with poorer performance.
Relationship between PhysFormer and PhysFormer++. PhysFormer++ can be treated as an upgraded version of PhysFormer towards excellent performance but with more computational cost. With similar temporal difference transformers, PhysFormer can be seen as a slow-pathway-only version of PhysFormer++, which is more lightweight and efficient. In contrast, PhysFormer++ is designed based on a dual-pathway SlowFast architecture with complex cross-tempo interactions, which is more robust to head motions and less sensitive to the video framerate, but with heavier computational cost (see Table 11 for efficiency analysis).
3.3 Label Distribution Learning
Similar to the facial age estimation task [Geng et al., 2013, Gao et al., 2018], where faces at close ages look quite similar, facial rPPG signals with close HR values usually have similar periodicity. Inspired by this observation, instead of considering each facial video as an instance with one label (HR), we regard each facial video as an instance associated with a label distribution.
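The peak-map target of Eq. (12) and the auto-correlation PM = PP^T can be sketched as follows. The ±3-frame peak extension follows the setting later reported in Sec. 4.2; the simple local-maximum peak finder here only stands in for Matlab's 'findpeaks()':

```python
import numpy as np

def peak_map(bvp, neighbor=3):
    """Binary peak signal P (peaks extended by +/- `neighbor` frames, Eq. (12)),
    then its auto-correlation map PM = P P^T."""
    p = np.zeros_like(bvp)
    for t in range(1, len(bvp) - 1):
        if bvp[t] > bvp[t - 1] and bvp[t] >= bvp[t + 1]:  # local maximum
            p[max(0, t - neighbor): t + neighbor + 1] = 1.0
    return np.outer(p, p)

t = np.arange(160)
pm = peak_map(np.sin(2 * np.pi * t / 30))  # 1 Hz pulse (60 bpm) at 30 fps
print(pm.shape)  # (160, 160)
```

The resulting binary map is symmetric with a periodic grid of ones whose spacing equals the pulse period, which is exactly the pattern the periodic attention S′ is pushed towards by L_atten.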
The label distribution covers a certain number of class labels, representing the degree to which each label describes the instance. In this way, one facial video can contribute to both the targeted HR value and its adjacent HRs. To consider the similarity information among HR classes during the training stage, we model the rPPG-based HR estimation problem as a specific L-class multi-label classification problem, where L = 139 in our case (each integer HR value within [42, 180] bpm as a class). A label distribution p = {p_1, p_2, ..., p_L} ∈ R^L is assigned to each facial video X. It is assumed that each entry of p is a real value in the range [0, 1] such that Σ_{k=1}^L p_k = 1. We consider the Gaussian distribution function, centered at the ground truth HR label Y_HR with standard deviation σ, to construct the corresponding label distribution p:

p_k = (1 / (√(2π)·σ)) · exp(−(k − (Y_HR − 41))² / (2σ²)).   (14)

The label distribution loss can be formulated as L_LD = KL(p, Softmax(p̂)), where the divergence measure KL(·) denotes the Kullback-Leibler (KL) divergence [Gao et al., 2017], and p̂ is the power spectral density (PSD) of the predicted rPPG signals. Please note that the previous work [Niu et al., 2017] also considers distribution learning for HR estimation. However, it is totally different from our work: 1) the motivation in [Niu et al., 2017] is to smooth the temporal HR outliers caused by facial movements across continuous video clips, while our work is more generic, aiming at efficient feature learning across adjacent labels under limited-scale training data; 2) the technique in [Niu et al., 2017] is a post-processing step after HR estimation from the handcrafted rPPG signals, while our work designs a reasonable supervision signal L_LD for the PhysFormer family.
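Eq. (14) with σ = 1.0 (the setting later given in Sec. 4.2) can be sketched as follows; the explicit renormalization to sum to 1 is our assumption, implied by the constraint Σ_k p_k = 1 on a truncated Gaussian:

```python
import numpy as np

def hr_label_distribution(hr, sigma=1.0, lo=42, hi=180):
    """Gaussian label distribution over the L=139 integer HR classes (Eq. (14))."""
    k = np.arange(1, hi - lo + 2)        # class indices 1..139
    center = hr - (lo - 1)               # HR 42 maps to class 1
    p = np.exp(-(k - center) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
    return p / p.sum()                   # enforce sum(p) = 1 over the truncated range

p = hr_label_distribution(75.0)
print(len(p), int(np.argmax(p)) + 42)  # 139 classes, peak at 75 bpm
```

With σ = 1.0 almost all mass falls within ±3 bpm of the ground truth, so neighboring HR classes receive soft supervision instead of a one-hot target.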
3.4 Curriculum Learning Guided Dynamic Loss
Curriculum learning [Bengio et al., 2009], a major machine learning regime with the philosophy of an easy-to-hard curriculum, is utilized to train PhysFormer. In the rPPG measurement task, the supervision signals from the temporal domain (e.g., mean square error loss [Chen and McDuff, 2018], negative Pearson loss [Yu et al., 2019a, Yu et al., 2019b]) and the frequency domain (e.g., cross-entropy loss [Niu et al., 2020, Yu et al., 2020], signal-to-noise ratio loss [Špetlík et al., 2018]) provide different extents of constraints for model learning. The former gives signal-trend-level constraints, which are straightforward for model convergence but prone to overfitting afterwards. In contrast, the latter places strong constraints on the frequency domain, enforcing the model to learn periodic features within the target frequency bands, which is hard to converge well due to realistic rPPG-irrelevant noise. Inspired by curriculum learning, we propose the dynamic supervision to gradually enlarge the frequency constraints, which alleviates the overfitting issue and gradually benefits the intrinsic rPPG-aware feature learning. Specifically, an exponential increment strategy is adopted, and a comparison with other dynamic strategies (e.g., linear increment) is shown in Table 10. The dynamic loss L_overall can be formulated as

L_overall = α · L_time [temporal] + β · (L_CE + L_LD) [frequency] + L_atten,
β = β_0 · η^((Epoch_current − 1) / Epoch_total),   (15)

where the hyperparameters α, β_0 and η equal 0.1, 1.0 and 5.0, respectively. The negative Pearson loss [Yu et al., 2019a, Yu et al., 2019b] and the frequency cross-entropy loss [Niu et al., 2020, Yu et al., 2020] are adopted as L_time and L_CE, respectively.
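With β_0 = 1, η = 5 and 25 training epochs (the setting in Sec. 4.2), the exponential schedule of Eq. (15) keeps the frequency-loss weight close to 1 early and pushes it towards 5 by the last epoch:

```python
def beta_schedule(epoch, total_epochs=25, beta0=1.0, eta=5.0):
    """Frequency-loss weight beta of Eq. (15), increasing exponentially over epochs."""
    return beta0 * eta ** ((epoch - 1) / total_epochs)

for e in (1, 13, 25):
    print(e, round(beta_schedule(e), 3))  # 1 -> 1.0, 13 -> ~2.17, 25 -> ~4.69
```

This realizes the easy-to-hard curriculum: the temporal signal-trend loss dominates during warm-up, and the harder frequency-domain constraints are weighted up gradually rather than switched on at full strength.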
With the dynamic supervision, PhysFormer and PhysFormer++ can perceive the signal trend better at the beginning, and such warming up facilitates the gradually stronger frequency knowledge learning later.
4 Experimental Evaluation
In this section, experiments of rPPG-based physiological measurement for three types of physiological signals, i.e., heart rate (HR), heart rate variability (HRV), and respiration frequency (RF), are conducted on four benchmark datasets (VIPL-HR [Niu et al., 2019a], MAHNOB-HCI [Soleymani et al., 2011], MMSE-HR [Tulyakov et al., 2016], and OBF [Li et al., 2018]). Besides, comprehensive ablations of PhysFormer and PhysFormer++ are investigated on the VIPL-HR dataset.
4.1 Datasets and Performance Metrics
VIPL-HR [Niu et al., 2019a] is a large-scale dataset for remote physiological measurement under less-constrained scenarios. It contains 2,378 RGB videos of 107 subjects recorded with different head movements, lighting conditions and acquisition devices. MAHNOB-HCI [Soleymani et al., 2011] is one of the most widely used benchmarks for remote HR measurement evaluation. It includes 527 facial videos with 61 fps framerate and 780x580 resolution from 27 subjects. MMSE-HR [Tulyakov et al., 2016] is a dataset including 102 RGB videos from 40 subjects, and the raw resolution of each video is 1040x1392. OBF [Li et al., 2018] is a high-quality dataset for remote physiological signal measurement. It contains 200 five-minute-long RGB videos with 60 fps framerate recorded from 100 healthy adults.

Fig. 7 Example video frames from datasets (a) VIPL-HR [Niu et al., 2019a]; (b) MAHNOB-HCI [Soleymani et al., 2011]; (c) MMSE-HR [Tulyakov et al., 2016]; and (d) OBF [Li et al., 2018].

The example video frames from these four rPPG datasets are illustrated in Fig. 7.
For MAHNOB-HCI, as there is no available BVP ground truth, we first smooth the sharp ECG signals (with a 10-point averaging strategy) into pseudo BVP signals as ground truth. Specifically, to alleviate the incorrect synchronization between videos and ground-truth signals in the MAHNOB-HCI, OBF, and VIPL-HR datasets, we first extract coarse green-channel signals by averaging the segmented facial skin in each frame. Then, we calculate the cross-correlation between the coarse green rPPG signals and the (pseudo) BVP signals, and use the maximum-correlation phase to calibrate/compensate the phase bias. Furthermore, we remove the samples with HR>180 in the VIPL-HR and MMSE-HR datasets because the ground truths of these samples are unreliable due to poor sensor contact (resulting in very noisy and fluctuating HRs).

In terms of evaluation metrics, the average HR estimation task is evaluated on all four datasets, while the HRV and RF estimation tasks are evaluated on the high-quality OBF [Li et al., 2018] dataset. Specifically, we follow existing methods [Yu et al., 2019b, Niu et al., 2020, Lu et al., 2021] and report low frequency (LF), high frequency (HF), and the LF/HF ratio for HRV and RF estimation. We report the most commonly used performance metrics for evaluation, including the standard deviation (SD), mean absolute error (MAE), root mean square error (RMSE), and Pearson's correlation coefficient (r).

4.2 Implementation Details

Both PhysFormer and PhysFormer++ are implemented with PyTorch. For each video clip, the MTCNN face detector [Zhang et al., 2016] is used to crop the enlarged face area in the first frame, and the region is fixed through the following frames. The videos in MAHNOB-HCI and OBF are downsampled to 30 fps for efficiency.
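The phase-calibration step above can be sketched with NumPy. A minimal illustration, assuming the maximum-correlation lag is compensated by a circular shift (a simplification); `calibrate_phase` is an illustrative name, not from the released code:

```python
import numpy as np

def calibrate_phase(green_trace: np.ndarray, bvp: np.ndarray) -> np.ndarray:
    """Shift the (pseudo) BVP so its maximum-correlation lag against the
    coarse green-channel rPPG trace becomes zero."""
    g = green_trace - green_trace.mean()
    r = bvp - bvp.mean()
    corr = np.correlate(g, r, mode="full")      # lags -(N-1) .. N-1
    lag = int(np.argmax(corr)) - (len(r) - 1)   # maximum-correlation phase
    return np.roll(bvp, lag)                    # compensate the phase bias
```

In practice a plain (non-circular) shift with trimming would be used at the clip boundaries; the circular roll keeps the sketch short.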
The numbers of temporal difference transformer blocks N=12, transformer heads h=4, channel dimension D=96, and hidden dimension in ST-FF D'=144 are used for PhysFormer, while the temporal difference coefficient θ=0.7 and attention sparsity τ=2.0 are used for TD-MHSA. λ=0.5 is utilized in the TD-MHPSA. The targeted tube size Ts×Hs×Ws equals 4×4×4. For the Rpeak calculation in Eq. (12), the function 'findpeaks()' in Matlab is used for BVP peak detection, and the detected peak locations are then extended with their ±3 successive neighbors.

In the training stage, we randomly sample RGB face clips of size 160×128×128 (T×H×W) as model inputs. Random horizontal flipping and temporal up/downsampling [Yu et al., 2020] are used for data augmentation. PhysFormer is trained with the Adam optimizer, and the initial learning rate and weight decay are 1e-4 and 5e-5, respectively. We cannot find obvious performance improvement using the AdamW optimizer. We train models for 25 epochs with the fixed setting α=0.1 for the temporal loss and an exponentially increased parameter β ∈ [1, 5] for the frequency losses. We set the standard deviation σ=1.0 for label distribution learning. The batch size is 4 on one V100 GPU. In the testing stage, similar to [Niu et al., 2019a], we uniformly separate 30-second videos into three 10-second short clips, and the video-level HR is calculated by averaging the HRs from the three short clips.

Table 2 Intra-dataset testing results on the VIPL-HR dataset. The symbols ▲, ♦ and ⋆ denote traditional, non-end-to-end learning based and end-to-end learning based methods, respectively. Best results are marked in bold and second best in underline.
| Method | SD ↓ (bpm) | MAE ↓ (bpm) | RMSE ↓ (bpm) | r ↑ |
|---|---|---|---|---|
| Tulyakov2016 [Tulyakov et al., 2016] ▲ | 18.0 | 15.9 | 21.0 | 0.11 |
| POS [Wang et al., 2017] ▲ | 15.3 | 11.5 | 17.2 | 0.30 |
| CHROM [De Haan and Jeanne, 2013] ▲ | 15.1 | 11.4 | 16.9 | 0.28 |
| RhythmNet [Niu et al., 2019a] ♦ | 8.11 | 5.30 | 8.14 | 0.76 |
| ST-Attention [Niu et al., 2019b] ♦ | 7.99 | 5.40 | 7.99 | 0.66 |
| NAS-HR [Lu and Han, 2021] ♦ | 8.10 | 5.12 | 8.01 | 0.79 |
| CVD [Niu et al., 2020] ♦ | 7.92 | 5.02 | 7.97 | 0.79 |
| Dual-GAN [Lu et al., 2021] ♦ | 7.63 | 4.93 | 7.68 | 0.81 |
| I3D [Carreira and Zisserman, 2017] ⋆ | 15.9 | 12.0 | 15.9 | 0.07 |
| PhysNet [Yu et al., 2019a] ⋆ | 14.9 | 10.8 | 14.8 | 0.20 |
| DeepPhys [Chen and McDuff, 2018] ⋆ | 13.6 | 11.0 | 13.8 | 0.11 |
| VideoTransformer [Revanur et al., 2022] ⋆ | 13.5 | 10.4 | 13.2 | 0.16 |
| AutoHR [Yu et al., 2020] ⋆ | 8.48 | 5.68 | 8.68 | 0.72 |
| PhysFormer (Ours) ⋆ | 7.74 | 4.97 | 7.79 | 0.78 |
| PhysFormer++ (Ours) ⋆ | 7.65 | 4.88 | 7.62 | 0.80 |

4.3 Intra-dataset Testing

In this subsection, two datasets (VIPL-HR and MAHNOB-HCI) are used for intra-dataset testing on HR estimation, while the OBF dataset is used for intra-dataset HR, HRV and RF estimation.

HR estimation on VIPL-HR. Here we follow [Niu et al., 2019a] and use a subject-exclusive 5-fold cross-validation protocol on VIPL-HR. As shown in Table 2, all three traditional methods (Tulyakov2016 [Tulyakov et al., 2016], POS [Wang et al., 2017] and CHROM [De Haan and Jeanne, 2013]) perform poorly due to the complex scenarios (e.g., large head movement and various illumination) in the VIPL-HR dataset. In terms of deep learning based methods, the existing end-to-end learning based methods (e.g., PhysNet [Yu et al., 2019a], DeepPhys [Chen and McDuff, 2018], and AutoHR [Yu et al., 2020]) predict less reliable HR values with larger RMSE compared with non-end-to-end learning approaches (e.g., RhythmNet [Niu et al., 2019a], ST-Attention [Niu et al., 2019b], NAS-HR [Lu and Han, 2021], CVD [Niu et al., 2020], and Dual-GAN [Lu et al., 2021]).
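The SD/MAE/RMSE/r columns reported in these tables can be computed from per-video predicted and ground-truth HRs. A minimal sketch, assuming SD is the standard deviation of the estimation error (the common convention in this literature):

```python
import numpy as np

def hr_metrics(hr_pred, hr_gt):
    """Return (SD, MAE, RMSE, Pearson r) for per-video HR estimates in bpm."""
    pred = np.asarray(hr_pred, dtype=float)
    gt = np.asarray(hr_gt, dtype=float)
    err = pred - gt
    sd = float(np.std(err, ddof=1))            # std of the estimation error
    mae = float(np.mean(np.abs(err)))          # mean absolute error
    rmse = float(np.sqrt(np.mean(err ** 2)))   # root mean square error
    r = float(np.corrcoef(pred, gt)[0, 1])     # Pearson correlation coefficient
    return sd, mae, rmse, r
```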
Such a large performance margin might be caused by the coarse and overfitted rPPG features extracted from the end-to-end models. In contrast, all five non-end-to-end methods first extract fine-grained signal maps from multiple facial ROIs, from which more dedicated rPPG clues are then extracted via the cascaded models. Without the strict and heavy preprocessing procedure in [Niu et al., 2019a, Niu et al., 2019b, Lu and Han, 2021, Niu et al., 2020, Lu et al., 2021], the proposed PhysFormer and PhysFormer++ can be trained from scratch on facial videos directly, and achieve better or on-par performance compared with the state-of-the-art non-end-to-end learning based method Dual-GAN [Lu et al., 2021]. This indicates that PhysFormer and PhysFormer++ are able to learn intrinsic and periodic rPPG-aware features automatically. It can also be seen from Table 2 that the proposed PhysFormer family outperforms the VideoTransformer [Revanur et al., 2022] by a large margin, indicating the importance of local and global spatio-temporal physiological propagation.

Table 3 Intra-dataset results on the MAHNOB-HCI dataset.

| Method | SD ↓ (bpm) | MAE ↓ (bpm) | RMSE ↓ (bpm) | r ↑ |
|---|---|---|---|---|
| Poh2010 [Poh et al., 2010a] ▲ | 13.5 | – | 13.6 | 0.36 |
| CHROM [De Haan and Jeanne, 2013] ▲ | – | 13.49 | 22.36 | 0.21 |
| Li2014 [Li et al., 2014] ▲ | 6.88 | – | 7.62 | 0.81 |
| Tulyakov2016 [Tulyakov et al., 2016] ▲ | 5.81 | 4.96 | 6.23 | 0.83 |
| SynRhythm [Niu et al., 2018] ♦ | 10.88 | – | 11.08 | – |
| RhythmNet [Niu et al., 2019a] ♦ | 3.99 | – | 3.99 | 0.87 |
| HR-CNN [Špetlík et al., 2018] ⋆ | – | 7.25 | 9.24 | 0.51 |
| rPPGNet [Yu et al., 2019b] ⋆ | 7.82 | 5.51 | 7.82 | 0.78 |
| DeepPhys [Chen and McDuff, 2018] ⋆ | – | 4.57 | – | – |
| AutoHR [Yu et al., 2020] ⋆ | 4.73 | 3.78 | 5.10 | 0.86 |
| Meta-rPPG [Lee et al., 2020] ⋆ | 4.9 | 3.01 | 3.68 | 0.85 |
| PhysFormer (Ours) ⋆ | 3.87 | 3.25 | 3.97 | 0.87 |
| PhysFormer++ (Ours) ⋆ | 3.90 | 3.23 | 3.88 | 0.87 |
In order to further check the correlations between the predicted HRs and the ground-truth HRs, we plot the HR estimation results against the ground truths in Fig. 8(a). From the figure we can see that the predicted HRs from PhysFormer++ and the ground-truth HRs are well correlated over a wide HR range from 47 bpm to 147 bpm.

HR estimation on MAHNOB-HCI. For the HR estimation task on MAHNOB-HCI, similar to [Yu et al., 2019b], a subject-independent 9-fold cross-validation protocol is adopted. In consideration of the convergence difficulty due to the low-illumination and highly compressed videos in MAHNOB-HCI, we finetune the VIPL-HR pretrained models on MAHNOB-HCI for a further 15 epochs. The HR estimation results are shown in Table 3. The proposed PhysFormer and PhysFormer++ achieve the lowest SD (3.87 bpm) and highest r (0.87) among the traditional, non-end-to-end learning, and end-to-end learning

Table 4 Performance comparison of HR and RF measurement as well as HRV analysis on the OBF dataset.

| Method | HR RMSE (bpm) | HR r | RF RMSE (Hz) | RF r | LF RMSE (u.n.) | LF r | HF RMSE (u.n.) | HF r | LF/HF RMSE | LF/HF r |
|---|---|---|---|---|---|---|---|---|---|---|
| ROI green [Li et al., 2018] ▲ | 2.162 | 0.99 | 0.084 | 0.321 | 0.24 | 0.573 | 0.24 | 0.573 | 0.832 | 0.571 |
| CHROM [De Haan and Jeanne, 2013] ▲ | 2.733 | 0.98 | 0.081 | 0.224 | 0.206 | 0.524 | 0.206 | 0.524 | 0.863 | 0.459 |
| POS [Wang et al., 2017] ▲ | 1.906 | 0.991 | 0.07 | 0.44 | 0.158 | 0.727 | 0.158 | 0.727 | 0.679 | 0.687 |
| CVD [Niu et al., 2020] ♦ | 1.26 | 0.996 | 0.058 | 0.606 | 0.09 | 0.914 | 0.09 | 0.914 | 0.453 | 0.877 |
| rPPGNet [Yu et al., 2019b] ⋆ | 1.8 | 0.992 | 0.064 | 0.53 | 0.135 | 0.804 | 0.135 | 0.804 | 0.589 | 0.773 |
| PhysFormer (Ours) ⋆ | 0.804 | 0.998 | 0.054 | 0.661 | 0.086 | 0.912 | 0.086 | 0.912 | 0.39 | 0.896 |
| PhysFormer++ (Ours) ⋆ | 0.765 | 0.998 | 0.052 | 0.686 | 0.083 | 0.921 | 0.083 | 0.921 | 0.368 | 0.908 |

Table 5 Cross-dataset results on the MMSE-HR dataset.
| Method | SD ↓ (bpm) | MAE ↓ (bpm) | RMSE ↓ (bpm) | r ↑ |
|---|---|---|---|---|
| Li2014 [Li et al., 2014] ▲ | 20.02 | – | 19.95 | 0.38 |
| CHROM [De Haan and Jeanne, 2013] ▲ | 14.08 | – | 13.97 | 0.55 |
| Tulyakov2016 [Tulyakov et al., 2016] ▲ | 12.24 | – | 11.37 | 0.71 |
| ST-Attention [Niu et al., 2019b] ♦ | 9.66 | – | 10.10 | 0.64 |
| RhythmNet [Niu et al., 2019a] ♦ | 6.98 | – | 7.33 | 0.78 |
| CVD [Niu et al., 2020] ♦ | 6.06 | – | 6.04 | 0.84 |
| PhysNet [Yu et al., 2019a] ⋆ | 12.76 | – | 13.25 | 0.44 |
| TS-CAN [Liu et al., 2020] ⋆ | – | 3.85 | 7.21 | 0.86 |
| AutoHR [Yu et al., 2020] ⋆ | – | 5.71 | 5.87 | 0.89 |
| EfficientPhys-C [Liu et al., 2021b] ⋆ | – | 2.91 | 5.43 | 0.92 |
| EfficientPhys-T1 [Liu et al., 2021b] ⋆ | – | 3.48 | 7.21 | 0.86 |
| PhysFormer (Ours) ⋆ | 5.22 | 2.84 | 5.36 | 0.92 |
| PhysFormer++ (Ours) ⋆ | 5.09 | 2.71 | 5.15 | 0.93 |

methods, which indicates the reliability of the rPPG features learned by the PhysFormer family under sufficient supervision. Our performance is on par with the latest end-to-end learning method Meta-rPPG [Lee et al., 2020] without transductive adaptation from target frames.

HR, HRV and RF estimation on OBF. Besides HR estimation, we also conduct experiments on three types of physiological signals, i.e., HR, RF, and HRV measurement, on the OBF [Li et al., 2018] dataset. Following [Yu et al., 2019b, Niu et al., 2020], we use a 10-fold subject-exclusive protocol for all experiments. All results are shown in Table 4. It is clear that the proposed PhysFormer and PhysFormer++ outperform the existing state-of-the-art traditional (ROI green [Li et al., 2018], CHROM [De Haan and Jeanne, 2013], POS [Wang et al., 2017]) and end-to-end learning (rPPGNet [Yu et al., 2019b]) methods by a large margin on all evaluation metrics for HR, RF and all HRV features. The proposed PhysFormer and PhysFormer++ give more accurate estimates of HR, RF, and LF/HF compared with the preprocessed-signal-map based non-end-to-end learning method CVD [Niu et al., 2020].
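The subject-exclusive protocol used here (and on VIPL-HR) guarantees that all videos of a given subject land in a single fold, so train/test subjects never overlap. A minimal sketch; the round-robin subject assignment is an assumption for illustration, and the exact partition used by the authors may differ:

```python
def subject_exclusive_folds(video_subjects, k):
    """Partition video indices into k folds so that every video of a subject
    falls into the same fold (train/test subjects never overlap)."""
    subjects = sorted(set(video_subjects))
    folds = [[] for _ in range(k)]
    for i, subj in enumerate(subjects):  # assign whole subjects round-robin
        folds[i % k].extend(
            idx for idx, s in enumerate(video_subjects) if s == subj
        )
    return folds
```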
These results indicate that the PhysFormer family can not only handle the average HR estimation task but also give promising predictions of the rPPG signal for RF measurement and HRV analysis, which shows its potential in many healthcare applications. We also check the short-time HR estimation performance in the after-exercise scenario on OBF, in which the subject's HR decreases rapidly. Two examples are given in Fig. 8(b). It can be seen that PhysFormer++ follows the trend of HR changes well, which indicates the proposed model is robust in scenarios with significant HR changes. We further check the rPPG signals predicted by PhysFormer++ for these two examples in Fig. 8(c). From the results, we can see that the proposed method gives an accurate prediction of the interbeat intervals (IBIs), and thus can give a robust estimation of RF and HRV features.

4.4 Cross-dataset Testing

Besides the intra-dataset testing on the VIPL-HR, MAHNOB-HCI, and OBF datasets, we also conduct cross-dataset testing on MMSE-HR [Tulyakov et al., 2016] following the protocol of [Niu et al., 2019a]. The models trained on VIPL-HR are directly tested on MMSE-HR. All results of the proposed PhysFormer family and the state-of-the-art methods are shown in Table 11. It is clear that PhysFormer and PhysFormer++ generalize well to unseen domains (e.g., skin tone and lighting conditions). It is worth noting that PhysFormer++ achieves the lowest SD (5.09 bpm), MAE (2.71 bpm), and RMSE (5.15 bpm) as well as the highest r (0.93) among the traditional, non-end-to-end learning and end-to-end

Fig. 8 (a) The scatter plot of the ground-truth HRgt and the predicted HRpre via PhysFormer++ for all the face videos on the VIPL-HR dataset. (b) Two examples of the short-time HR estimation from PhysFormer++ for face videos with significantly decreased HR.
(c) Two example curves of the predicted rPPG signals from PhysFormer++ and the ground-truth ECG signals used to calculate the HRV features.

learning based methods, indicating 1) the predicted HRs are highly correlated with the ground-truth HRs, and 2) the model learns domain-invariant intrinsic rPPG-aware features. Compared with the spatio-temporal transformer based EfficientPhys-T1 [Liu et al., 2021b], our proposed PhysFormer and PhysFormer++ are able to predict more accurate physiological signals, which indicates the effectiveness of the long-range spatio-temporal attention.

4.5 Ablation Study

Here we provide the results of ablation studies for HR estimation on Fold-1 of the VIPL-HR [Niu et al., 2019a] dataset. Specifically, we first evaluate the impacts of the architecture configurations for PhysFormer in terms of 'Tube Tokenization', 'TD-MHSA' and 'ST-FF'. Then, based on the optimal configuration of PhysFormer, the impacts of the architecture configurations of PhysFormer++ with 'TD-MHPSA' and 'SlowFast architecture' are studied. Finally, we study the transformer configurations ('θ in TDC' and 'layer/head numbers') and the training recipes ('label distribution learning' and 'dynamic supervision') for the whole PhysFormer family (i.e., PhysFormer and PhysFormer++).

Impact of tube tokenization in PhysFormer. In the default setting of PhysFormer, a shallow stem cascaded with tube tokenization is used. In this ablation, we consider four other tokenization configurations with or without the stem. It can be seen from the first row in Table 6 that the stem helps PhysFormer see better [Xiao et al., 2021], and the RMSE increases dramatically (+3.06 bpm) without the stem. Then we investigate the impacts of the spatial and temporal domains in tube tokenization.
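The 'Token Numbers' column in Table 6 follows directly from dividing the stem output size by the tube size in each dimension; a quick check:

```python
def token_numbers(feature_size, tube_size):
    """Tokens per dimension for non-overlapping tube tokenization
    (feature and tube sizes are length x height x width)."""
    t, h, w = feature_size
    ts, hs, ws = tube_size
    return (t // ts, h // hs, w // ws)
```

For example, a 160×16×16 feature volume with 4×4×4 tubes yields 40×4×4 tokens, and halving the temporal tube size to 2×4×4 doubles the temporal token count to 80, as in the last row of Table 6.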
It is clear that the result in the fourth row with full spatial projection is quite poor (RMSE=10.61 bpm), indicating the necessity of spatial attention. In contrast, tokenization with a smaller temporal tube size (e.g., [2x4x4]) or smaller spatial inputs (e.g., 160x96x96) reduces performance only slightly. Based on these observations, tokenizations with [4x4x4] and [2x4x4] are adopted as the default settings of the slow and fast pathways in PhysFormer++, respectively.

Impact of TD-MHSA and ST-FF in PhysFormer. As shown in Table 7, both TD-MHSA and ST-FF play vital roles in PhysFormer. The result in the first row shows that the performance degrades sharply without spatio-temporal attention. Moreover, it can be seen from the last two rows that, without TD-MHSA/ST-FF, PhysFormer with vanilla MHSA/FF obtains 10.43/8.27 bpm RMSE. Thus, we can draw the conclusion that the key transformer element 'vanilla MHSA' cannot provide rPPG performance gain

Table 6 Ablation of tube tokenization in PhysFormer. The three dimensions in tensors indicate length×height×width.

| Inputs | Stem | Feature Size | Tube Size | Token Numbers | RMSE ↓ (bpm) |
|---|---|---|---|---|---|
| 160×128×128 | ✗ | 160×128×128 | 4×32×32 | 40×4×4 | 10.62 |
| 160×128×128 | ✓ | 160×16×16 | 4×4×4 | 40×4×4 | 7.56 |
| 160×96×96 | ✓ | 160×12×12 | 4×4×4 | 40×3×3 | 8.03 |
| 160×128×128 | ✓ | 160×16×16 | 4×16×16 | 40×1×1 | 10.61 |
| 160×128×128 | ✓ | 160×16×16 | 2×4×4 | 80×4×4 | 7.81 |

Table 7 Ablation of TD-MHSA and ST-FF in PhysFormer.
| MHSA | τ | Feed-forward | RMSE ↓ (bpm) |
|---|---|---|---|
| – | – | ST-FF | 9.81 |
| TD-MHSA | √Dh ≈ 4.9 | ST-FF | 9.51 |
| TD-MHSA | 2.0 | ST-FF | 7.56 |
| vanilla MHSA | 2.0 | ST-FF | 10.43 |
| TD-MHSA | 2.0 | vanilla FF | 8.27 |

although it captures the long-term global spatio-temporal physiological features. In contrast, the proposed TD-MHSA benefits rPPG measurement via local spatio-temporal physiological clue guided long-term global spatio-temporal physiological aggregation. One important finding of this research is that the temperature τ influences the MHSA a lot. When τ = √Dh, as in previous ViT works [Dosovitskiy et al., 2021, Arnab et al., 2021], the predicted rPPG signals are unsatisfactory (RMSE=9.51 bpm). Regularizing τ with a smaller value enforces sparser spatio-temporal attention, which is effective for the quasi-periodic rPPG task.

Impact of TD-MHPSA for different pathways in PhysFormer++. Based on the TD-MHSA in PhysFormer, PhysFormer++ further extends the slow pathway with the more periodic TD-MHPSA modules. Table 8 shows the results of the TD-MHPSA for the single-pathway configuration. It is interesting to find that, compared with TD-MHSA, the performance even drops for both the slow and fast pathways when assembling TD-MHPSA without the explicit attention supervision Latten.

Table 8 Ablation of TD-MHPSA for the single-pathway configuration in PhysFormer++.

| Pathway | MHSA | Latten | RMSE ↓ (bpm) |
|---|---|---|---|
| Slow | TD-MHSA | – | 7.56 |
| Slow | TD-MHPSA | – | 7.69 |
| Slow | TD-MHPSA | ✓ | 7.43 |
| Fast | TD-MHSA | – | 7.81 |
| Fast | TD-MHPSA | – | 8.12 |
| Fast | TD-MHPSA | ✓ | 7.85 |

Table 9 Ablation of the SlowFast two-pathway based architecture in PhysFormer++.

| TD-MHPSA | Lateral Connect | TD-MHCSA | RMSE |
|---|---|---|---|
| – | – | – | 7.78 |
| Slow Pathway | – | – | 7.58 |
| Slow Pathway | High-level | – | 7.34 |
| Slow Pathway | Mid&High-level | – | 7.38 |
| Slow Pathway | High-level | High-level | 7.28 |
| Slow Pathway | High-level | Mid&High-level | 7.16 |
| Slow Pathway | High-level | Low&Mid&High-level | 7.24 |
When training TD-MHPSA with Latten, the RMSE is decreased by 0.26 and 0.27 bpm for the slow and fast pathways, respectively. This indicates the importance of explicit rPPG-aware periodicity supervision. Some visualizations with and without Latten can be found in Sec. 4.7. From the results in Table 8 we can see that TD-MHPSA with Latten benefits the periodic rPPG clue mining in the slow pathway while having limited effect on the fast pathway. This may be because the attention loss calculated from periodic maps with the higher temporal resolution of the fast pathway is inefficient at back-propagating the rPPG-aware information. Thus, we only apply TD-MHPSA in the slow pathway as the default setting for PhysFormer++.

Impact of the SlowFast architecture in PhysFormer++. Table 9 illustrates the ablations of the SlowFast two-pathway based architecture in PhysFormer++. From the results in the first two rows we can see that such SlowFast rPPG models even achieve inferior performance (7.78/7.58 vs. 7.56 bpm RMSE) compared with the single-pathway based PhysFormer. These unsatisfactory results might be caused by the lack of efficient rPPG feature interaction between the two pathways. We also conduct experiments with lateral connections at different levels and cross-attention based TD-MHCSA in the fast pathway. From Table 9 we can clearly find that both the lateral connections and TD-MHCSA improve the performance remarkably. This is because the former brings more temporally fine-grained clues back to the slow pathway to alleviate rPPG information loss, while the latter leverages the cross-attention features to refine the redundant rPPG features in the fast pathway.

Fig. 9 Impacts of the (a) σ in label distribution learning for PhysFormer and PhysFormer++ and (b) θ in TD-MHSA, TD-MHCSA, and TD-MHPSA.
Fig. 10 Ablation of the (a) layers and (b) heads in PhysFormer and PhysFormer++.

Impact of θ and layer/head numbers in the PhysFormer family. The hyperparameter θ trades off the contribution of local temporal gradient information. As illustrated in Fig. 9(b), PhysFormer achieves smaller RMSE when θ=0.4 and 0.7, while PhysFormer++ obtains the best performance when θ=0.7, indicating the importance of the normalized local temporal difference features for global spatio-temporal attention. We also investigate how the layer and head numbers influence the performance of PhysFormer and PhysFormer++. As shown in Fig. 10(a), with deeper temporal transformer blocks, the RMSE is reduced progressively despite the heavier computational cost. In terms of the impact of head numbers, it is clear from Fig. 10(b) that the PhysFormer family with four heads performs the best, while fewer heads lead to sharp performance drops.

Table 10 Ablation of the dynamic loss in the frequency domain for PhysFormer and PhysFormer++. The temporal loss Ltime is with fixed α=0.1 here. 'CE' and 'LD' denote cross-entropy and label distribution, respectively.

| Model | Frequency loss | β | Strategy | RMSE ↓ (bpm) |
|---|---|---|---|---|
| PhysFormer | LCE + LLD | 1.0 | fixed | 8.48 |
| PhysFormer | LCE + LLD | 5.0 | fixed | 8.86 |
| PhysFormer | LCE + LLD | [1.0, 5.0] | linear | 8.37 |
| PhysFormer | LCE + LLD | [1.0, 5.0] | exponential | 7.56 |
| PhysFormer | LCE | [1.0, 5.0] | exponential | 8.09 |
| PhysFormer | LLD | [1.0, 5.0] | exponential | 8.21 |
| PhysFormer | LLD (real distribution) | [1.0, 5.0] | exponential | 8.72 |
| PhysFormer++ | LCE + LLD | 1.0 | fixed | 7.98 |
| PhysFormer++ | LCE + LLD | 5.0 | fixed | 8.54 |
| PhysFormer++ | LCE + LLD | [1.0, 5.0] | linear | 8.13 |
| PhysFormer++ | LCE + LLD | [1.0, 5.0] | exponential | 7.16 |
| PhysFormer++ | LCE | [1.0, 5.0] | exponential | 7.76 |
| PhysFormer++ | LLD | [1.0, 5.0] | exponential | 7.89 |
| PhysFormer++ | LLD (real distribution) | [1.0, 5.0] | exponential | 8.67 |

Impact of label distribution learning for the PhysFormer family.
Besides the temporal loss Ltime and the frequency cross-entropy loss LCE, ablations with and without the label distribution loss LLD are shown in the last four rows of Table 10. Although LLD alone performs slightly worse than LCE (+0.12 and +0.13 bpm RMSE for PhysFormer and PhysFormer++, respectively), the best performance is achieved using both losses, indicating the effectiveness of explicit distribution constraints for alleviating extreme-frequency interference and propagating adjacent-label knowledge. It is interesting to find from the last two rows for both PhysFormer and PhysFormer++ that using the real PSD distribution from the ground-truth PPG signals as p gives inferior performance, due to the lack of an obvious peak in the distribution and partial noise. We can also find from Fig. 9(a) that σ values ranging from 0.9 to 1.2 for LLD are suitable for achieving good performance.

Impact of dynamic supervision for the PhysFormer family. Fig. 11 illustrates the testing performance of PhysFormer and PhysFormer++ on Fold-1 of VIPL-HR when training with fixed and dynamic supervision. It is clear that with the exponentially increased frequency loss, the models in the blue curves converge faster and achieve smaller RMSE. We also compare several fixed and dynamic strategies in Table 10. The results in the first four rows indicate 1) using a fixed higher β leads to poorer performance caused by the convergence difficulty; and 2) models with exponentially increased β perform better than those using linear increment.

Fig. 11 Testing results of fixed and dynamic frequency supervisions for (a) PhysFormer and (b) PhysFormer++ on Fold-1 of VIPL-HR.

4.6 Efficiency Analysis

Here we also investigate the computational cost compared with the baselines. The numbers of parameters and multiply-accumulate operations (MACs) are shown in Table 11.
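Returning briefly to the label distribution loss above: the scalar ground-truth HR is softened into a Gaussian distribution over discrete HR bins with standard deviation σ. A minimal sketch; the 40-180 bpm bin range is an assumption for illustration:

```python
import numpy as np

def hr_label_distribution(hr_gt: float, sigma: float = 1.0):
    """Gaussian label distribution p over HR bins (assumed 40-180 bpm)."""
    bins = np.arange(40, 181, dtype=float)
    p = np.exp(-((bins - hr_gt) ** 2) / (2.0 * sigma ** 2))
    return p / p.sum()  # normalize to a valid probability distribution
```

Smaller σ concentrates the mass on the ground-truth bin; σ around 0.9 to 1.2 keeps a small amount of mass on adjacent HR bins, matching the range found suitable in Fig. 9(a).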
Despite the larger number of parameters, PhysFormer and PhysFormer++ have smaller MACs than the baselines PhysNet, TS-CAN, and AutoHR. Compared with PhysFormer, PhysFormer++ introduces an extra 2.76M parameters and 1.16G MACs. The inference time for one face clip of 3x160x128x128 (CxTxHxW) on one V100 GPU is 29 ms for PhysFormer and 40 ms for PhysFormer++. Despite being slightly heavier, PhysFormer++ predicts more accurate rPPG signals in both intra-dataset (-0.17 bpm RMSE on VIPL-HR) and cross-dataset (-0.21 bpm RMSE on MMSE-HR) testing.

Table 11 Cross-dataset results with computational cost on MMSE-HR. The FLOPs are calculated with the video input size 3×160×128×128 (C×T×H×W) for PhysNet/AutoHR/PhysFormer/PhysFormer++ and 3×160×96×96 for TS-CAN/EfficientPhys. The computational cost is measured with https://pypi.org/project/thop/.

| Method | #Param. (M) | MACs (G) | RMSE ↓ (bpm) |
|---|---|---|---|
| PhysNet [Yu et al., 2019a] | 0.73 | 65.19 | 13.25 |
| TS-CAN [Liu et al., 2020] | 3.91 | 61.96 | 7.21 |
| AutoHR [Yu et al., 2020] | 0.99 | 189.22 | 5.87 |
| EfficientPhys-C [Liu et al., 2021b] | 3.84 | 31.32 | 5.43 |
| PhysFormer (Ours) | 7.03 | 47.01 | 5.36 |
| PhysFormer++ (Ours) | 9.79 | 49.85 | 5.15 |

Towards efficient mobile-level rPPG applications, the computational cost of the proposed PhysFormer family is still unsatisfactory. One potential future direction is to design a more lightweight PhysFormer with advanced network quantization [Lin et al., 2021b] and binarization [Qin et al., 2022] techniques.

4.7 Visualization and Discussion

Visualization of the self-attention map. We visualize the attention maps from the last TD-MHSA module of PhysFormer (left) and the last TD-MHCSA module in the fast pathway of PhysFormer++ (right) in Fig. 12. The x and y axes of the attention map indicate the attention confidence from key and query tube tokens, respectively. From the attention maps activated for the video sample with limited head movement in Fig.
12(a), we can easily find periodic or quasi-periodic responses along both axes, indicating the periodicity of the intrinsic rPPG features from PhysFormer and PhysFormer++. To be specific, given as a query the 530th tube token (in blue), at forehead (spatial face domain) and peak (temporal signal domain) locations, the corresponding key responses are illustrated by the blue line in the attention map. On the one hand, it can be seen from the key responses that the dominant spatial attention focuses on the facial skin regions and discards the unrelated background. On the other hand, the temporal localizations of the key responses are around peak positions in the predicted rPPG signals. All these patterns are reasonable: 1) the forehead and cheek regions [Verkruysse et al., 2008] have richer blood volume for rPPG measurement and are also reliable since these regions are less affected by facial muscle movements due to, e.g.,

Fig. 12 Visualization of the attention maps from (left) the 1st head in the last TD-MHSA module of PhysFormer and (right) the 1st head in the last TD-MHCSA module of the fast pathway in PhysFormer++.
Given the 530th and 276th tube tokens in blue as the query for the video samples with (a) limited head movement and (b) serious head movement, representative key responses are illustrated (the brighter, the more attentive). The predicted downsampled rPPG signals as well as the ground-truth BVP signals are shown for temporal attention understanding.

Fig. 13 Visualization of the periodic attention maps from the 1st head in the last TD-MHPSA module of the slow pathway in PhysFormer++. The top row shows the periodic attention map from the facial video with limited head movement, and the bottom one with serious head movement.

facial expressions, talking; and 2) rPPG signals from healthy people are usually periodic. We also visualize the attention maps from another video sample with serious head movement in Fig. 12(b). It can be observed from the left subfigure that the attentional response of PhysFormer is inaccurate (e.g., focusing on the neck region) when the head moves to the left. Another issue is that, due to the large temporal token size (Ts=4) in the tokenization stage, the temporal rPPG clues might be partially discarded, resulting in sensitivity to head movement and biased rPPG prediction (i.e., huge IBI gaps between the predicted rPPG and ground-truth BVP signals). In contrast, it can be seen from the right subfigure in Fig. 12(b) that the attentional response and the predicted rPPG signal from PhysFormer++ are reliable, indicating the effectiveness of the SlowFast architecture and the advanced attention modules. Overall, two limitations of the spatio-temporal attention can be concluded from Fig. 12. First, there are still some unexpected responses (e.g., continuous query tokens with similar key responses) in the attention map, which might introduce task-irrelevant noise and damage performance.
Second, the temporal attentions are not accurate under serious head movement scenarios, and some are coarse with phase shifts.

Fig. 14 HR results with different (a) compression bitrates on OBF, and (b) resolutions on VIPL-HR.

Visualization of the periodic attention map. We also visualize the periodic attention map from the last TD-MHPSA module of PhysFormer++ in Fig. 13. It is interesting to find that the periodic attention maps from PhysFormer++ 1) trained without Latten are more arbitrary and easily influenced by large head movements; and 2) trained with Latten are more regular and keep their periodicity even in scenarios with serious head movement. In other words, the proposed TD-MHPSA with the attention loss Latten enforces PhysFormer++ to learn more periodic and robust attentional features from the face videos.

Evaluation under serious motion, video compression, and low resolution. In real-world scenarios, large head movement, high video compression rates and low face resolution usually introduce serious motion noise, compression artifacts and blurriness, respectively. All these corruptions and quality degradations make rPPG measurement challenging. Here we evaluate the performance under these challenging scenarios. First, we evaluate the PhysFormer family under scenarios of large head movement (i.e., the 'v2' and 'v9' samples) on the VIPL-HR dataset. PhysFormer and PhysFormer++ achieve RMSEs of 11.46 bpm and 10.25 bpm, respectively. In other words, with richer temporally contextual rPPG clues, the two-pathway SlowFast architecture in PhysFormer++ is more motion-robust. Note that there is still a performance gap compared with non-end-to-end methods (e.g., RhythmNet [Niu et al., 2019a] with RMSE=9.4 bpm).
Second, we evaluate the PhysFormer family on OBF with high compression rates (250/500/1000 kb/s) using the x264 codec. Table 12 HR results (RMSE (bpm)) when training with different proportions of samples on VIPL-HR. Method 10% 50% 100%: AutoHR [Yu et al., 2020] 15.77 10.27 8.68; PhysFormer (Ours) 14.84 11.18 7.79; PhysFormer++ (Ours) 13.92 10.29 7.62. The corresponding HR measurement results are illustrated in Fig. 14(a). Compared with rPPGNet [Yu et al., 2019b], the PhysFormer family performs significantly better when bitrates equal 500 and 1000 kb/s. This might be because the spatio-temporal self-attention mechanism helps filter out the compression artifacts. However, all three methods perform poorly under the extremely high compression situation (i.e., bitrate=250 kb/s). Finally, we evaluate the PhysFormer family on VIPL-HR with different low-resolution settings to mimic the long-distance rPPG monitoring scenario. Specifically, bilinear interpolation is used to downsample the face frames to sizes 16x16/32x32/64x64 first, and then upsample them back to 128x128. The HR measurement results are illustrated in Fig. 14(b). Despite performance drops at lower face resolutions for both AutoHR [Yu et al., 2020] and the PhysFormer family, PhysFormer++ still achieves RMSE=9.58 bpm with the lowest (16x16) resolution setting. Training with fewer samples. Since end-to-end deep models (e.g., CNNs and transformers) are data-hungry, here we investigate three methods (AutoHR [Yu et al., 2020], PhysFormer, and PhysFormer++) under conditions of fewer training samples. As shown in Table 12, when training with only 10% or 50% of samples, all three methods obtain poor RMSE performance (>10 bpm). Another observation is that, compared with the pure CNN-based AutoHR, the proposed PhysFormer++ still achieves on-par or better performance with fewer training samples.
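The low-resolution protocol above (downsample frames to 16x16/32x32/64x64, then upsample back to 128x128) can be mimicked as below. The paper uses bilinear interpolation; this dependency-free sketch substitutes block averaging and nearest-neighbour upsampling as a stand-in:

```python
import numpy as np

def simulate_low_resolution(frame, low=16, full=128):
    """Mimic long-distance capture: pool a full-res face crop down to
    low x low, then expand back to full x full.  (The paper uses bilinear
    interpolation; block-average + nearest-neighbour is a rough proxy.)"""
    assert frame.shape == (full, full) and full % low == 0
    k = full // low
    pooled = frame.reshape(low, k, low, k).mean(axis=(1, 3))   # downsample
    return np.repeat(np.repeat(pooled, k, axis=0), k, axis=1)  # upsample

face = np.arange(128 * 128, dtype=float).reshape(128, 128)     # stand-in frame
degraded = simulate_low_resolution(face, low=16)
print(degraded.shape)   # (128, 128)
```

Applying this per frame before inference reproduces the kind of blurriness evaluated in Fig. 14(b).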
It indicates that the proposed transformer architectures can learn CNN-comparable rPPG representation even with limited data." }, { "url": "http://arxiv.org/abs/2203.01918v1", "title": "Investigating the limited performance of a deep-learning-based SPECT denoising approach: An observer-study-based characterization", "abstract": "Multiple studies based on objective assessment of image quality have reported\nthat several deep-learning-based denoising methods show limited performance on\nsignal-detection tasks. Our goal was to investigate the reasons for this\nlimited performance. To achieve this goal, we conducted a task-based\ncharacterization of a DL-based denoising approach for individual signal\nproperties. We conducted this study in the context of evaluating a DL-based\napproach for denoising SPECT images. The training data consisted of signals of\ndifferent sizes and shapes within a clustered-lumpy background, imaged with a\n2D parallel-hole-collimator SPECT system. The projections were generated at\nnormal and 20% low count levels, both of which were reconstructed using an OSEM\nalgorithm. A CNN-based denoiser was trained to process the low-count images.\nThe performance of this CNN was characterized for five different signal sizes\nand four different SBRs by designing each evaluation as an SKE/BKS\nsignal-detection task. Performance on this task was evaluated using an\nanthropomorphic CHO. As in previous studies, we observed that the DL-based\ndenoising method did not improve performance on signal-detection tasks.\nEvaluation using the idea of observer-study-based characterization demonstrated\nthat the DL-based denoising approach did not improve performance on the\nsignal-detection task for any of the signal types. Overall, these results\nprovide new insights into the performance of the DL-based denoising approach as a\nfunction of signal size and contrast. 
More generally, the observer-study-based\ncharacterization provides a mechanism to evaluate the sensitivity of the method\nto specific object properties and may be explored as analogous to\ncharacterizations such as the modulation transfer function for linear systems.\nFinally, this work underscores the need for objective task-based evaluation of\nDL-based denoising approaches.", "authors": "Zitong Yu, Md Ashequr Rahman, Abhinav K. Jha", "published": "2022-03-03", "updated": "2022-03-03", "primary_cat": "physics.med-ph", "cats": [ "physics.med-ph", "cs.CV", "eess.IV" ], "main_content": "INTRODUCTION Deep-learning (DL)-based methods are attracting significant interest in medical imaging, and in particular, in applications such as denoising.1\u20133 Typically, these methods are evaluated using figures of merit (FoMs) such as the structural similarity index (SSIM) and root mean square error (RMSE).4\u20136 These fidelity-based FoMs measure the difference between the image obtained using DL-based approaches and a certain reference image. However, medical images are acquired for specific clinical tasks such as signal detection and quantification.7,8 Thus, ideally, DL-based denoising approaches should be evaluated based on relevant clinical tasks. Recently, multiple studies have shown that several DL-based denoising methods may yield limited performance on signal-detection tasks.9,10 For example, it was observed that when evaluated on the task of detecting cardiac perfusion defects, a DL-based denoising approach developed for myocardial perfusion SPECT yielded performance similar to or worse than that obtained without applying denoising, even though the fidelity-based FoMs suggested that denoising was improving performance.9 (Further author information: send correspondence to Abhinav K. Jha. Abhinav K. Jha: a.jha@wustl.edu; Zitong Yu: yu.zitong@wustl.edu.) Given the promise and wide application of DL-based denoising approaches, these studies motivate further investigation into the reasons for the limited task performance of DL-based denoising approaches. The goal of this study is to investigate reasons for the limited task performance of a commonly used DL-based denoising approach. For this purpose, we developed an approach to characterize the performance of DL-based methods. Generally, tools such as Fourier analysis, singular value decomposition,11 the modulation transfer function,12 and the Fourier cross-talk matrix8,13 are used to characterize new imaging methods. However, these tools assume linearity of the underlying imaging system. In contrast, DL-based methods are typically highly nonlinear and shift variant. Thus, these tools may have limited applicability in the analysis of DL-based methods. An additional issue is that the characterization provided by these tools may not directly relate to task performance, although there are ongoing efforts to address this challenge. To address these issues, we propose an observer-study-based characterization of DL-based methods that quantifies the performance of these methods for specific signal properties. We then use this approach to characterize a DL-based denoising method. This characterization then provides insights into the reasons for the limited performance of this method. The paper is organized as follows. In Sec. 2.1.4, we describe the details of the DL-based denoising approach. Next, we describe the components of the observer-study-based characterization in Sec. 2.1. Related results are shown in Sec. 3, followed by the conclusion and discussions in Sec. 4. 2. METHOD We conducted this study in the context of denoising SPECT images acquired at low counts.
This was a simulation-based study, in which objects with a circular signal within a clustered-lumpy background (CLB) and a 2-D parallel-hole-collimator SPECT system were simulated. A commonly used DL-based denoising approach was evaluated both on fidelity-based FoMs, including RMSE and SSIM, and on the task of detecting the signal. We then characterized the performance of the DL-based denoising approach using an observer-study-based characterization. In this section, we describe the individual components of our study. 2.1 Observer-study-based characterization The observer-study-based characterization is rooted in principles of objective assessment of image quality but, instead of quantifying the performance of the method over a population, quantifies performance for specific signal properties. The characterization consists of the following components. 2.1.1 Definition of task As mentioned above, we objectively evaluated the DL-based denoising approach on the task of detecting a signal in the images. We designed signals with five signal sizes and four signal-to-background ratio (SBR) values, a total of 20 signal types, as described in more detail in the next sub-section. We designed the signal-detection task such that, across the different realizations, the signal properties (size, location, extent) were fixed while the background varied. In other words, for each signal type, we had the equivalent of a signal-known-exactly/background-known-statistically (SKE/BKS) task. Since there were 20 signal types, there were a total of 20 SKE/BKS studies. The signal-detection performance of the DL-based denoising approach was characterized by all 20 SKE/BKS studies. 2.1.2 Object model The objects were divided into two categories, namely the signal-absent case, denoted by H0, and the signal-present case, denoted by H1. Let f, fb, and fs denote the object, background, and signal, respectively.
Then, the images under the two hypotheses are given by f = fb if f \u2208 H0, and f = fb + fs if f \u2208 H1. (1) The background fb was generated from a clustered lumpy background (CLB) model14 with parameters shown in Table 1. Table 1. Parameters of the CLB model used in this study: mean number of clusters = 150; mean number of blobs = 20; Lx = 5 pixels; Ly = 2 pixels; \u03b1 = 1.25; \u03b2 = 0.5; \u03c3\u03c6 = 24 pixels. The signal fs was designed to be a circular signal located at the center of the image and characterized by a specific signal size and signal-to-background ratio (SBR). The objects were pixelated into 256 \u00d7 256 pixel images with a pixel size of 0.2 cm. We evaluated the performance of the DL-based denoising approach for five signal sizes uniformly ranging from 10 mm to 30 mm, and four SBR values of 1.4:1, 1.5:1, 1.8:1, and 2:1. We generated 200 signal-present objects and 200 signal-absent objects for each signal type for testing. An example of an object with 10 mm signal size and 1.4:1 SBR is shown in Fig. 1(a). 2.1.3 Simulating the image-formation process A 2D parallel-hole collimator SPECT system was simulated in this study. The system resolution of the SPECT system was 7 mm at 10 cm depth. Projections were acquired at 120 angular positions spaced uniformly over 180 degrees, modeling a constant orbit. Generated projections were collapsed to 0.4 cm projection bins in a 120 \u00d7 128 projection image matrix. Then, the projections were scaled to count levels of 200,000 (referred to as normal counts) and 40,000 (referred to as low counts). Poisson noise was added to both the normal- and low-count projections. The projection data were reconstructed using an ordered-subsets expectation-maximization (OSEM)-based approach with two iterations and four subsets. For each signal type (i.e., each SKE/BKS study), we generated a total of 4,800 reconstructed images ((1,200 signal-present images + 1,200 signal-absent images) \u00d7 2 count levels).
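The count-level scaling and Poisson-noise step described above can be sketched as follows (the random stand-in sinogram is only for illustration; a real noiseless projection from the simulated SPECT system would be used instead):

```python
import numpy as np

def scale_and_add_poisson(projection, target_counts, rng):
    """Scale a noiseless projection so its total equals a target count
    level (e.g. 200,000 normal / 40,000 low), then add Poisson noise."""
    scaled = projection * (target_counts / projection.sum())
    return rng.poisson(scaled).astype(float)

rng = np.random.default_rng(0)
proj = rng.random((120, 128)) + 0.1           # stand-in 120 x 128 sinogram
normal = scale_and_add_poisson(proj, 200_000, rng)
low = scale_and_add_poisson(proj, 40_000, rng)
print(int(normal.sum()), int(low.sum()))      # each close to its target
```

The resulting noisy projections would then be fed to the OSEM reconstruction (two iterations, four subsets) as described.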
The reconstructed images were of size 128 \u00d7 128, with a pixel size of 0.4 cm. Examples of reconstructed images are shown in Fig. 1(b) and Fig. 1(c). The low-count images were then denoised using a commonly used DL-based denoising approach described in Sec. 2.1.4. Figure 1. (a) An example of an object with 10 mm signal size and 1.4:1 SBR. (b) The same object reconstructed at the normal-count level. (c) The same object reconstructed at the 20% low-count level. 2.1.4 DL-based Denoising Approach The DL-based denoising approach that we investigated9 is one commonly used in medical imaging.2,15\u201317 To develop this DL-based denoising approach, we followed best practices that have been recently outlined for developing AI-based methods for nuclear medicine.18 The approach was based on a convolutional neural network (CNN). The network has an architecture similar to a 2D U-Net,19 i.e., an encoder-decoder architecture with skip connections. The last convolutional layer was activated using a leaky ReLU activation function, yielding the output of the network. The CNN was trained on 4,000 pairs of low-count and normal-count images by minimizing a mean-square-error loss function quantifying the error between the denoised and normal-count images via the Adam optimization algorithm.20 In the training dataset, we included both signal-absent and signal-present images with all signal types. We conducted rigorous five-fold cross-validation to ensure the network had not over-fitted. The CNN was trained using Keras with TensorFlow 1.10.0 on an NVIDIA V100 GPU with 32 GB of memory. In the testing dataset, we had another 200 low-count signal-absent and 200 low-count signal-present images for each signal type. We refer to this CNN-based approach as the DL-based denoising approach. 2.1.5 Extracting task-specific information We evaluated the DL-based denoising approach on the task of signal detection in an observer study using an anthropomorphic model observer.
More specifically, a channelized Hotelling observer (CHO) with anthropomorphic channels was used, where the anthropomorphic channels were six rotationally symmetric frequency channels with a starting frequency and channel width of 1/64 cycle per pixel.21 Each subsequent channel was adjacent to the previous channel and had double the frequency width of the previous channel. The channels were normalized to be orthonormal to each other. This led to a 16384 \u00d7 6 channel matrix. To apply this anthropomorphic CHO, the reconstructed images were scaled to have values in the range [0, 255]. The test statistic of each test image was calculated using the leave-one-out strategy.22 2.1.6 Figures of merit The test statistic of each test image was compared to a threshold, by which the image was classified into the signal-present or signal-absent class. By varying the threshold, we plotted the receiver operating characteristic (ROC) curve and calculated the area under the ROC curve (AUC) using the LABROC4 program. 95% confidence intervals of the AUC values were also calculated. We used the AUC values to quantify the performance of the DL-based denoising approach on the signal-detection task; a higher AUC value corresponds to improved performance. We compared the AUC values obtained with the normal-count images and with the low-count images prior to and after applying denoising. We conducted this observer study for each signal type, yielding a characterization that directly quantifies task performance for individual signal properties. Further, we calculated the pixel-wise RMSE and SSIM between the images obtained by those two approaches and the images reconstructed at the normal-count level. 3. RESULTS The evaluation using conventional fidelity-based FoMs showed that the DL-based denoising approach provided improved performance compared to the images prior to denoising over all signal types, as shown in Table 2.
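A minimal sketch of the CHO pipeline described in Secs. 2.1.5-2.1.6: rotationally symmetric frequency channels (start frequency and first width 1/64 cycle/pixel, widths doubling), a Hotelling template built from channelized data, and an empirical AUC in place of the LABROC4 fit. The leave-one-out strategy is omitted, and the toy white-noise/disc images are illustrative, not the study's CLB/SPECT data:

```python
import numpy as np

def frequency_channels(n=128, f0=1.0 / 64, n_ch=6):
    """Six rotationally symmetric band-pass channels; the first band starts
    at f0 with width f0, and each later band doubles the previous width.
    Returns an (n*n, n_ch) orthonormal channel matrix (16384 x 6 for n=128)."""
    fx = np.fft.fftfreq(n)
    rho = np.sqrt(fx[:, None] ** 2 + fx[None, :] ** 2)   # radial frequency
    cols, lo, w = [], f0, f0
    for _ in range(n_ch):
        band = ((rho >= lo) & (rho < lo + w)).astype(float)
        u = np.real(np.fft.fftshift(np.fft.ifft2(band))).ravel()  # centered profile
        cols.append(u / np.linalg.norm(u))
        lo, w = lo + w, 2.0 * w
    return np.column_stack(cols)

def cho_test_statistics(sig_imgs, bkg_imgs, U):
    """Channelize both classes, form the Hotelling template from the
    intra-class channel covariance, and return the test statistics."""
    vs = sig_imgs.reshape(len(sig_imgs), -1) @ U
    vb = bkg_imgs.reshape(len(bkg_imgs), -1) @ U
    S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vb, rowvar=False))
    w = np.linalg.solve(S, vs.mean(0) - vb.mean(0))      # Hotelling template
    return vs @ w, vb @ w

def auc(t_sig, t_bkg):
    """Empirical AUC (Mann-Whitney statistic); ties count one half."""
    d = t_sig[:, None] - t_bkg[None, :]
    return float((d > 0).mean() + 0.5 * (d == 0).mean())

# Toy SKE/BKS check: white-noise backgrounds, a centered disc signal.
rng = np.random.default_rng(0)
n = 128
U = frequency_channels(n)
yy, xx = np.mgrid[:n, :n]
disc = ((yy - n // 2) ** 2 + (xx - n // 2) ** 2 <= 6 ** 2).astype(float)
bkg = rng.standard_normal((200, n, n))
sig = rng.standard_normal((200, n, n)) + 2.0 * disc
t_sig, t_bkg = cho_test_statistics(sig, bkg, U)
print(round(auc(t_sig, t_bkg), 3))
```

Sweeping the decision threshold over these test statistics traces the ROC curve whose area is the figure of merit used in the study.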
However, when evaluated on the signal-detection task, the AUC values showed that the DL-based denoising approach did not improve performance for any of the signal types. In fact, the performance only worsened after applying the denoising operation. These results are similar to results in previous studies9,10,23\u201325 and again directly contradict the fidelity-based findings. Table 2. The RMSE, SSIM, and AUC values associated with the images prior to and after denoising. The confidence intervals are reported in brackets as lower and upper limits. Images prior to denoising: RMSE 0.0562 [0.0561, 0.0563], SSIM 0.4017 [0.4007, 0.4028], AUC 0.8050 [0.7957, 0.8144]. Images after denoising: RMSE 0.0302 [0.0301, 0.0304], SSIM 0.7743 [0.7738, 0.7748], AUC 0.6645 [0.6529, 0.6762]. Given these results, we used the proposed observer-study-based characterization to investigate the limited performance of the DL-based denoising approach. Fig. 2 shows the signal-detection performance of the DL-based denoising approach evaluated via the observer study for each signal type. AUC values obtained with the normal-count images and the low-count images prior to denoising are also shown. We observed that performing the DL-based denoising operation does not yield superior performance on the defect-detection task for any of the signal types. This shows that the observations of limited performance of the CNN seen in previous studies9,10,23\u201325 are valid not just for a specific population, but across a range of signal sizes and contrasts. Further, we observed that for an SBR of 2:1, as the size of the signal increased, the difference in the AUC values before and after applying denoising reduced. This observation suggests that the DL-based denoising approach may be acting as a low-pass filter that suppresses high-frequency components, thus providing poor performance when the signal size is small. Figure 2.
AUC values achieved by the DL-based denoising approach, along with those achieved by the normal-count images and the low-count images prior to denoising, for each signal type. 4. DISCUSSIONS AND" }, { "url": "http://arxiv.org/abs/2202.08192v3", "title": "Flexible-Modal Face Anti-Spoofing: A Benchmark", "abstract": "Face anti-spoofing (FAS) plays a vital role in securing face recognition\nsystems from presentation attacks. Benefiting from maturing camera sensors,\nsingle-modal (RGB) and multi-modal (e.g., RGB+Depth) FAS has been applied in\nvarious scenarios with different configurations of sensors/modalities. Existing\nsingle- and multi-modal FAS methods usually separately train and deploy models\nfor each possible modality scenario, which might be redundant and inefficient.\nCan we train a unified model, and flexibly deploy it under various modality\nscenarios? In this paper, we establish the first flexible-modal FAS benchmark\nwith the principle `train one for all'. To be specific, with trained\nmulti-modal (RGB+Depth+IR) FAS models, both intra- and cross-dataset testings\nare conducted on four flexible-modal sub-protocols (RGB, RGB+Depth, RGB+IR, and\nRGB+Depth+IR). We also investigate prevalent deep models and feature fusion\nstrategies for flexible-modal FAS. We hope this new benchmark will facilitate\nthe future research of multi-modal FAS. The protocols and codes are\navailable at https://github.com/ZitongYu/Flex-Modal-FAS.", "authors": "Zitong Yu, Ajian Liu, Chenxu Zhao, Kevin H. M. Cheng, Xu Cheng, Guoying Zhao", "published": "2022-02-16", "updated": "2023-03-16", "primary_cat": "cs.CV", "cats": [ "cs.CV" ], "main_content": "Introduction Face recognition has been widely used in many interactive artificial intelligence systems for its convenience (e.g., access control and face payment). However, vulnerability to presentation attacks (e.g., print, video replay, and 3D masks) curtails its reliable deployment.
For the reliable use of face recognition systems, face anti-spoofing (FAS) methods [25,28] are important to detect such presentation attacks (PAs). In recent years, plenty of hand-crafted-feature-based [1, 2, 26] and deep-learning-based [3, 4, 15\u201318, 21, 24, 29\u201331] methods have been proposed for RGB-based single-modal FAS. Figure 1. The training and deployment frameworks of the (a) traditional single- and multi-modal FAS; and (b) flexible-modal FAS. The former aims at separately training and deploying powerful models for each possible modality scenario, while the latter focuses on training a unified model for all real-world modality scenarios. On one hand, some hand-crafted descriptors with facial color texture [1] and physiological signal [26] feature representations are designed based on crucial live/spoof clues (e.g., moir\u00e9 patterns, noise artifacts, and bio-signal liveness), and thus are robust for live/spoof discrimination. On the other hand, deep convolutional neural networks (CNN) [9] and vision transformers (ViT) [5, 7, 26] have become mainstream in FAS due to their strong semantic representation capacities to distinguish the bonafide from PAs. With the development of hardware manufacture and integration technology, multi-modal FAS systems with acceptable costs are increasingly used in real-world applications. Figure 2. Feature fusion modules. (a) Direct concatenation [27]. (b) Squeeze-and-excitation (SE) fusion [32]. (c) Cross-attention fusion. Meanwhile, a few large-scale multi-modal FAS datasets [8, 13, 32] as well as multi-modal deep-learning-based FAS methods [6, 14, 19, 20, 27] have been proposed. In terms of multi-modal FAS datasets, CASIA-SURF [32] and CeFA [13] contain three modalities (RGB, Depth, and Infra-red (IR)), while WMCA [8] contains four modalities (RGB, Depth, IR, and Thermal). To learn intrinsic live/spoof features from multiple modalities, feature-level fusion strategies [12,19,22,23,27] are used. To better leverage contextual modality information and eliminate redundancy, spatial [20] and channel [20,32] attention is applied in multi-modal fusion. Existing single- and multi-modal FAS methods usually separately train and deploy models for each possible modality scenario (see Fig. 1(a)), which might be redundant and inefficient. A few natural questions occur: Can we train a unified model, and flexibly deploy it under various modality scenarios? How about the performance and efficiency gaps among separate and unified single- and multi-modal models? To explore the questions above, we establish the first flexible-modal FAS benchmark with the principle \u2018train one for all\u2019, focusing on training a unified model for multiple real-world modality scenarios (see Fig. 1(b)). Our contributions include: \u2022 We establish the first flexible-modal FAS benchmark with both intra- and cross-dataset testings under four evaluation modality scenarios (RGB, RGB+Depth, RGB+IR, and RGB+Depth+IR). \u2022 We propose an elegant cross-attention fusion module to efficiently mine cross-modal clues for flexible-modal deployment. The proposed cross-attention module significantly benefits the ViT [5] in both flexible intra- and cross-testings.
\u2022 We also investigate prevalent deep models (CDCN [31], ResNet [9], ViT [5]) and feature fusion strategies for flexible-modal FAS. We find that the modality dropout strategy [19] works well in flex-modal intra-testings but poorly in flex-modal cross-testings. 2. Multi-Modal Fusion Baselines For the RGB-based single-modal FAS task, given a face input XRGB, the corresponding deep features/descriptors FRGB can be extracted. Then a prediction head h is cascaded for binary live/spoof classification. For the RGB+Depth+IR multi-modal FAS task, independent-modality features FRGB, FDepth, and FIR can be captured from the face inputs XRGB, XDepth, and XIR, respectively. All these features are first fused to form Ffuse, which is then forwarded to the prediction head h. In this paper, we focus on feature-level fusion strategies, but there are alternatives for multi-modal scenarios, e.g., decision-level fusion (a late fusion strategy). Here we discuss three feature-level fusion modules under the RGB+Depth+IR scenario; they are easily extended to scenarios with fewer or more modalities. Direct concatenation fusion. Despite coarse alignment in the spatial domain, the features (FRGB, FDepth, and FIR) have heterogeneous representations in the channel domain. As illustrated in Fig. 2(a), one classical solution is to first concatenate these three features in the channel domain [27], and then aggregate the multi-modal heterogeneous features with a lightweight fusion operator (e.g., convolution). The direct concatenation fusion can be formulated as Ffuse = ReLU(BN(Conv(Concat(FRGB, FDepth, FIR)))). (1) Squeeze-and-excitation fusion. To alleviate the feature misalignment among modalities, a squeeze-and-excitation (SE) module [10,32] is first utilized in each independent modality branch. With the channel-wise self-calibration via the SE module, the refined features (F^SE_RGB, F^SE_Depth, and F^SE_IR) are then concatenated and aggregated.
The framework of SE fusion is shown in Fig. 2(b). The SE fusion can be formulated as (where \u03c3 denotes the Sigmoid function): F^SE_RGB = FRGB \u00b7 \u03c3(FC(ReLU(FC(AvgPool(FRGB))))), F^SE_Depth = FDepth \u00b7 \u03c3(FC(ReLU(FC(AvgPool(FDepth))))), F^SE_IR = FIR \u00b7 \u03c3(FC(ReLU(FC(AvgPool(FIR))))), Ffuse = ReLU(BN(Conv(Concat(F^SE_RGB, F^SE_Depth, F^SE_IR)))). (2) Cross-attention fusion. Besides fusion via multi-modal feature concatenation, we also explore feature addition in a homogeneous space. To this end, we calculate the relationship maps between FRGB and FDepth/FIR via cross-attention (CA), and the normalized modality-interacted maps are then multiplied by FRGB to form the cross-attentioned features F^CA_Depth and F^CA_IR. Finally, the original RGB feature and the cross-attentioned features are added and aggregated with an extra convolution. The framework of CA fusion is shown in Fig. 2(c). The CA fusion can be formulated as \u00afF^CA_Depth = Softmax(\u00afF_Depth(\u00afF_RGB)^T)\u00afF_RGB, \u00afF^CA_IR = Softmax(\u00afF_IR(\u00afF_RGB)^T)\u00afF_RGB, Ffuse = ReLU(BN(Conv(FRGB + F^CA_Depth + F^CA_IR))), (3) where F and \u00afF denote the spatial features and vectorized features, respectively. 3. Flexible-Modal FAS Benchmark In this section, we introduce the flexible-modal FAS benchmark in terms of datasets, modality-aware protocols, and evaluation metrics. A statistical description is given in Table 1. Datasets. Three large-scale multi-modal datasets are used in the flexible-modal FAS benchmark. CASIA-SURF [32] consists of 1000 subjects with 21000 videos (7000 for the RGB, Depth, and IR modalities, respectively). There are two kinds of presentation attack instruments (PAI), i.e., print and cut print attacks, in CASIA-SURF. CeFA [13] is a cross-ethnicity FAS dataset, covering three ethnicities (Africa, East Asia, and Central Asia), three modalities (RGB, Depth, and IR), and 1607 subjects with 7846 videos for each modality.
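The cross-attention fusion of Eq. (3) can be sketched as below, operating on vectorized per-modality feature maps; the trailing Conv+BN+ReLU aggregation is omitted, and the token/channel sizes are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(f_rgb, f_depth, f_ir):
    """Eq. (3) sketch: relate Depth/IR features to the RGB feature via
    cross-attention, then fuse by addition.  The final Conv+BN+ReLU
    aggregation of Eq. (3) is left out for brevity."""
    att_d = softmax(f_depth @ f_rgb.T)   # (tokens, tokens) Depth-to-RGB map
    att_i = softmax(f_ir @ f_rgb.T)      # (tokens, tokens) IR-to-RGB map
    return f_rgb + att_d @ f_rgb + att_i @ f_rgb

rng = np.random.default_rng(0)
f_rgb, f_depth, f_ir = (rng.standard_normal((64, 32)) for _ in range(3))
fused = cross_attention_fuse(f_rgb, f_depth, f_ir)
print(fused.shape)   # (64, 32)
```

Because both attended terms are re-expressed in the RGB feature space, the addition stays in one homogeneous space, which is the design motivation stated above.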
In terms of PAIs, it consists of print, replay, 3D print, and 3D silica gel mask attacks. WMCA [8] consists of 1941 short video recordings of both bonafide samples and PAs from 72 different identities. Each video is recorded from several spectral channels including RGB, depth, IR, and thermal. In addition, there are seven PAIs (i.e., glasses, fake head, print, replay, rigid mask, flexible mask, and paper mask) in WMCA. Protocols. Towards the principle \u2018train one for all\u2019, four flexible-modal protocols are established. Specifically, after being trained on CASIA-SURF and CeFA with RGB+Depth+IR modalities, the unified multi-modal model is evaluated on both intra (CASIA-SURF and CeFA) and cross (WMCA) datasets under RGB-based (Protocol 1) single-modal, RGB+Depth-based (Protocol 2) or RGB+IR-based (Protocol 3) bi-modal, and RGB+Depth+IR-based (Protocol 4) tri-modal scenarios. We also compare it with the traditional separate-training framework in terms of performance and efficiency. Table 1: Statistics of the flexible-modal FAS benchmark. \u2018C.-S.\u2019 and \u2018D\u2019 are short for \u2018CASIA-SURF\u2019 and \u2018Depth\u2019, respectively. \u2018#Video\u2019 indicates the video number of each modality. Intra-dataset training (C.-S.: 300 subjects, 2100 videos, 2 PAIs; CeFA: 600 subjects, 2400 videos, 2 PAIs); intra-dataset validation (C.-S.: 100 subjects, 700 videos, 2 PAIs; CeFA: 300 subjects, 1200 videos, 2 PAIs); intra-dataset testing (C.-S.: 600 subjects, 4200 videos, 2 PAIs; CeFA: 699 subjects, 4246 videos, 4 PAIs); cross-dataset testing (WMCA: 72 subjects, 1679 videos, 7 PAIs). Modalities: all protocols train with RGB+D+IR; validation and (intra/cross) testing use RGB for Protocol 1, RGB+D for Protocol 2, RGB+IR for Protocol 3, and RGB+D+IR for Protocol 4. Evaluation metrics. For all experiments, ACER [11] and True Positive Rate (TPR)@False Positive Rate (FPR) are used as evaluation metrics. ACER calculates the mean of the Attack Presentation Classification Error Rate (APCER) and the Bona Fide Presentation Classification Error Rate (BPCER).
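The ACER and TPR@FPR metrics just defined can be sketched as follows; the score convention (higher = more likely live) and the synthetic score distributions are assumptions for illustration only:

```python
import numpy as np

def acer(scores_attack, scores_bonafide, thr):
    """ACER = (APCER + BPCER) / 2 at a given threshold."""
    apcer = np.mean(scores_attack >= thr)    # attacks accepted as bona fide
    bpcer = np.mean(scores_bonafide < thr)   # bona fide rejected as attack
    return 0.5 * (apcer + bpcer)

def tpr_at_fpr(scores_attack, scores_bonafide, fpr=0.01):
    """Bona fide acceptance rate at a fixed attack false-positive rate."""
    thr = np.quantile(scores_attack, 1.0 - fpr)
    return np.mean(scores_bonafide >= thr)

rng = np.random.default_rng(0)
live = rng.normal(0.8, 0.15, 5000)           # hypothetical live scores
spoof = rng.normal(0.3, 0.15, 5000)          # hypothetical spoof scores
print(round(acer(spoof, live, 0.5), 3), round(tpr_at_fpr(spoof, live, 0.01), 3))
```

Sweeping `thr` to the EER point on the validation set, as described next in the text, fixes the operating threshold used for the intra-dataset ACER numbers.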
The thresholds are determined by the Equal Error Rate (EER) threshold on validation set and set as 0.5 for intraand cross-dataset testings, respectively. Besides, TPR@FPR=0.1% and TPR@FPR=1% are also utilized for fair comparison. 4. Experiment 4.1. Implementation Details The experiments are implemented with Pytorch on one NVIDIA V100 GPU. Three deep models (CDCN [31], ResNet-50 [9] and ViT-Base [5]) are used with batch size 16, 64 and 64, respectively. We use the Adam optimizer with learning rate (lr) of 1e-4 for CDCN and ResNet while AdamW optimizer with lr=1e-5 for ViT-Base. CDCN is trained from scratch with 60 epochs while lr halves in the 30th epoch. Instead of supervision with pseudo depth maps [31], we follow [27] to supervise CDCN with simple binary maps. In contrast, ResNet50/ViT-Base is \ufb01netuned based on the ImageNet/ImageNet-21K pre-trained models with 30 epochs while lr halves in the 20th epoch. Direct concatenation is adopted as the default fusion method. For the SE fusion, the intermediate channel numbers are reduced to one eighth of original channels. The missing modalities are simply blocked as zeros in the testing phase of Protocols 2,3, and 4. To mimic such scenarios in the training phase, similar to [19], we randomly dropout the \fTable 2: Results of intra-dataset testings on CASIA-SURF and CeFA datasets with \u2018Separate\u2019 and \u2018Uni\ufb01ed\u2019 settings. 
Protocol 1 Protocol 2 Protocol 3 Protocol 4 Method ACER(%) \u2193 TPR(%)@FPR=0.1% \u2191 ACER(%) \u2193 TPR(%)@FPR=0.1% \u2191 ACER(%) \u2193 TPR(%)@FPR=0.1% \u2191 ACER(%) \u2193 TPR(%)@FPR=0.1% \u2191 CDCN [31] 32.49 4.7 6.22 47.4 43.98 0.93 6.46 55.43 ResNet50 [9] 10.41 45.93 1.7 88.23 41.26 3.13 3.07 62.33 Separate ViT-Base [5] 10.81 31.33 1.44 91.27 26.34 6.5 3.82 80.87 CDCN 32.69 2.43 5.08 68.17 41.32 0.6 6.46 55.43 CDCN w/ DropModal 36.49 1.47 7.91 40.83 35.89 0.97 11.36 30.1 CDCN SE 35.13 1.03 4.6 61 37.09 0.3 8.81 46.8 CDCN SE w/ DropModal 46.5 0.3 25.88 19.6 46.8 0.23 31.15 16.4 CDCN CA 34.99 1.93 31.55 2.37 35.18 1.83 34.33 2.2 CDCN CA w/ DropModal 40.58 1.23 38.9 1.43 39.38 1.2 40.94 1.2 ResNet50 27.03 15.37 2.67 80.93 34.17 2.63 3.07 62.33 ResNet50 w/ DropModal 14.24 23.23 9.32 61.5 18.18 22.77 8.1 56.27 ResNet50 SE 20.59 6.87 2.05 78.37 27.67 2.3 2.48 55.3 ResNet50 SE w/ DropModal 14.57 32.6 3.66 82.9 13.58 31.77 5.29 77.57 ResNet50 CA 28.46 4.87 13.95 18.47 28.33 4.07 14.75 9.83 ResNet50 CA w/ DropModal 18 9.8 13.72 26.1 16 7.47 13.21 20.37 ViT-Base 20.33 4.07 2.5 84.27 29.51 3.1 3.82 80.87 ViT-Base w/ DropModal 7.87 40.73 2.59 80.97 9.06 30.37 4.97 79.43 ViT-Base SE 23.28 1.5 1.87 92.67 37.38 1.93 2.7 88.2 ViT-Base SE w/ DropModal 8.58 36.6 3.01 77.73 10.18 30.8 3.03 78.6 ViT-Base CA 19.4 13.27 1.75 87.63 14.84 15.07 2.43 78.75 Uni\ufb01ed ViT-Base CA w/ DropModal 6.04 57.53 3.85 73.3 5.97 57.5 3.91 71.17 Table 3: Results of cross-dataset testings on WMCA when trained on CASIA-SURF and CeFA. 
Each cell reports ACER(%)↓ / TPR(%)@FPR=1%↑ for Protocols 1-4.

Separate:
  CDCN [31]:                 34.35 / 9.61    27.58 / 8.17    40.62 / 3.27    28.28 / 4.71
  ResNet50 [9]:              36.57 / 16.14   29.98 / 12.01   50 / 0.77       30.72 / 13.74
  ViT-Base [5]:              39.38 / 5.96    36.81 / 11.53   49.34 / 2.88    33.08 / 3.07
Unified:
  CDCN:                      50 / 7.11       27.06 / 6.63    50 / 2.93       28.28 / 4.71
  CDCN w/ DropModal:         32.25 / 7.78    25.44 / 13.83   32.95 / 6.24    26.92 / 11.82
  CDCN SE:                   43.66 / 3.8     30.11 / 4.8     43.3 / 2.4      28.18 / 4.51
  CDCN SE w/ DropModal:      36.99 / 7.1     43.55 / 1.73    42.86 / 4.29    42.6 / 2.02
  CDCN CA:                   40.61 / 1.06    32.73 / 3.5     41.26 / 1.06    40.74 / 2.5
  CDCN CA w/ DropModal:      35.26 / 6.53    34.49 / 7.2     35.56 / 7.2     34.44 / 5.48
  ResNet50:                  46.02 / 4.71    35.81 / 7.49    42.22 / 2.21    30.72 / 13.74
  ResNet50 w/ DropModal:     46.45 / 11.53   25.36 / 19.79   46.53 / 10.28   19.43 / 19.79
  ResNet50 SE:               49.52 / 1.34    28.86 / 7.88    43.58 / 0.67    32.55 / 7.49
  ResNet50 SE w/ DropModal:  45.31 / 13.26   30.88 / 13.26   42.22 / 17.58   27.41 / 15.95
  ResNet50 CA:               39.15 / 10.66   34 / 11.24      36.9 / 8.45     34.76 / 7.3
  ResNet50 CA w/ DropModal:  43.32 / 4.42    31.94 / 7.3     47.65 / 5       37.17 / 8.17
  ViT-Base:                  48.52 / 9.03    44.42 / 7.59    50 / 0.38       33.08 / 3.07
  ViT-Base w/ DropModal:     39.06 / 8.26    30.37 / 15.95   40.61 / 8.36    29.51 / 17.2
  ViT-Base SE:               49.95 / 3.75    30.64 / 5.09    50 / 1.83       41.03 / 5.48
  ViT-Base SE w/ DropModal:  33.48 / 9.51    30.67 / 9.41    35.96 / 5.19    31.33 / 8.55
  ViT-Base CA:               35.07 / 26.13   10.07 / 50.05   24.57 / 16.04   20.87 / 36.5
  ViT-Base CA w/ DropModal:  42.38 / 6.15    33.87 / 8.17    37.81 / 6.72    33.59 / 7.88

Depth and IR inputs (called DropModal).

4.2. Intra Testing

The experimental results of flexible-modal intra-dataset testing on the CASIA-SURF and CeFA datasets are shown in Table 1.
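The 'DropModal' augmentation referenced above (randomly blocking the Depth and/or IR inputs as zeros during training, mirroring how missing modalities are blocked at test time) can be sketched as follows. This is a minimal NumPy illustration of the idea, not the authors' code; the per-modality drop probability `p` is an assumption:

```python
import numpy as np

def drop_modal(rgb, depth, ir, p=0.3, rng=None):
    """Randomly block the Depth and/or IR inputs of one training sample.

    Dropped modalities are replaced with zeros, matching the way absent
    modalities are blocked in the testing phase of Protocols 2, 3 and 4.
    The RGB input is always kept.
    """
    rng = rng or np.random.default_rng()
    if rng.random() < p:                  # decide independently per modality
        depth = np.zeros_like(depth)
    if rng.random() < p:
        ir = np.zeros_like(ir)
    return rgb, depth, ir
```

Training with such randomly blanked branches exposes the fusion layers to the same zeroed inputs they will see when a sensor is missing at deployment.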
We can see from the first block and the first rows of the last three blocks that 1) the separately trained ResNet50 and ViT models perform obviously better than their unified counterparts, while CDCN performs the opposite; and 2) ViT has higher TPR@FPR=0.1% than ResNet50 and CDCN on Protocols 2, 3, and 4 under both separate and unified settings, indicating excellent multi-modal modeling capacity based on global self-attention features.

Impact of fusion modules. As can be seen from the 'Unified' block in Table 1, compared with direct concatenation fusion, the SE fusion [32] brings no gains for CDCN, ResNet50, and ViT-Base. In contrast, we can find from the results of 'ViT-Base' and 'ViT-Base CA' that the proposed CA module improves ViT-Base remarkably (with gains of 9.2%, 3.36% and 11.93% TPR@FPR=0.1% for Protocols 1, 2 and 3, respectively). Despite the benefits of CA for the ViT-Base backbone, the CA module still generalizes poorly across other architectures (e.g., CDCN and ResNet50). It remains an open question to design architecture-agnostic fusion methods for the flexible-modal FAS benchmark.

Impact of DropModal. Multi-modal learning is easily dominated by partial-modal features (e.g., the Depth modality) while neglecting other modalities with relatively weak clues (e.g., the IR modality). The results of all variants of ResNet50 and ViT-Base on Protocols 1 and 3 are sharply improved with 'DropModal', indicating that augmentation with random modality dropout [19] alleviates the modality overfitting issue in intra testings.

4.3. Cross Testing

Table 3 shows the results of flexible-modal cross-dataset testing on WMCA. Due to the domain shifts (e.g., from sensors and sessions) and unseen PAIs, the performance of both separate and unified models is unsatisfactory (ACER>10%).

Impact of fusion modules.
Similar to the intra-dataset testings, SE fusion [32] cannot bring obvious benefits for CDCN, ResNet50, and ViT-Base on cross-dataset testings. As can be seen from the results of

Table 4: Results of 'shared' and 'unshared' multi-modal settings on intra-dataset testings on CASIA-SURF and CeFA. The first four columns report TPR(%)@FPR=0.1%↑ for Protocols 1-4; the last two report the overall #Param. (M) and #FLOPs (G).

Separate:
  ViT-Base:             31.33   91.27   6.5    80.87   362.01   132.31
  ViT-Base (unshared):  43.4    91.73   1.7    88.63   688.58   132.31
Unified:
  ViT-Base:             4.07    84.27   3.1    80.87   96.83    199.92
  ViT-Base (unshared):  0.03    89.83   0.37   88.63   260.12   199.92

'ResNet50 SE', 'ResNet50 CA', 'ViT-Base SE', and 'ViT-Base CA' in Table 3, compared with SE fusion, the proposed CA is highly compatible with the multi-modal ViT and ResNet50 architectures (especially on Protocols 1, 2, and 3), and improves the cross-dataset testing results dramatically.

Impact of DropModal. It is reasonable to find in Table 3 that 'DropModal' benefits the cross-testing performance of direct and SE concatenation fusions for all three kinds of models. However, the results of 'ResNet50 CA w/ DropModal' and 'ViT-Base CA w/ DropModal' indicate that 'DropModal' degrades the cross-testing performance of CA fusion for both ResNet50 and ViT-Base backbones. This indicates that training CA with dropped-modality features limits the overall domain generalization capacity.

4.4. Efficiency Analysis

Both performance and efficiency are important in the flexible-modal benchmark. Here we analyze the efficiency based on ViT-Base with two kinds of settings: separate/unified models and shared/unshared modality branches.
As shown in the right part of Table 4, compared with separate models, the unified models save more than 50% of the parameters but require a bit more FLOPs (due to the fixed tri-modal branch setting in the testing phase) over all 4 protocols. Besides, using an unshared backbone for each independent modality branch usually brings a slight performance improvement but introduces a huge number of extra parameters. Overall, it will be a good tradeoff if the unified models with modality-shared backbones can achieve satisfactory performance. 5."
  },
  {
    "url": "http://arxiv.org/abs/2111.12082v2",
    "title": "PhysFormer: Facial Video-based Physiological Measurement with Temporal Difference Transformer",
    "abstract": "Remote photoplethysmography (rPPG), which aims at measuring heart activities\nand physiological signals from facial video without any contact, has great\npotential in many applications (e.g., remote healthcare and affective\ncomputing). Recent deep learning approaches focus on mining subtle rPPG clues\nusing convolutional neural networks with limited spatio-temporal receptive\nfields, which neglect the long-range spatio-temporal perception and interaction\nfor rPPG modeling. In this paper, we propose the PhysFormer, an end-to-end\nvideo transformer based architecture, to adaptively aggregate both local and\nglobal spatio-temporal features for rPPG representation enhancement. As key\nmodules in PhysFormer, the temporal difference transformers first enhance the\nquasi-periodic rPPG features with temporal difference guided global attention,\nand then refine the local spatio-temporal representation against interference.\nFurthermore, we also propose the label distribution learning and a curriculum\nlearning inspired dynamic constraint in frequency domain, which provide\nelaborate supervisions for PhysFormer and alleviate overfitting. Comprehensive\nexperiments are performed on four benchmark datasets to show our superior\nperformance on both intra- and cross-dataset testings. 
One highlight is that,\nunlike most transformer networks that need pretraining from large-scale datasets,\nthe proposed PhysFormer can be easily trained from scratch on rPPG datasets,\nwhich makes it promising as a novel transformer baseline for the rPPG\ncommunity. The codes will be released at\nhttps://github.com/ZitongYu/PhysFormer.",
    "authors": "Zitong Yu, Yuming Shen, Jingang Shi, Hengshuang Zhao, Philip Torr, Guoying Zhao",
    "published": "2021-11-23",
    "updated": "2022-05-23",
    "primary_cat": "cs.CV",
    "cats": [
        "cs.CV"
    ],
    "main_content": "Introduction

Physiological signals such as heart rate (HR), respiration frequency (RF), and heart rate variability (HRV) are important vital signs to be measured in many circumstances, especially for healthcare or medical purposes. Traditionally, Electrocardiography (ECG) and Photoplethysmography (PPG) are the two most common ways for measuring heart

(*Corresponding author)

Figure 1. The trajectories of rPPG signals around t1, t2, and t3 share similar properties (e.g., trends with a rising edge first and then a falling edge, and relatively high magnitudes) induced by skin color changes. This inspires the long-range spatio-temporal attention (e.g., the blue tube around t1 interacting with red tubes from intra- and inter-frames) according to their local temporal difference features for quasi-periodic rPPG enhancement. Here 'tube' indicates the same regions across short-time consecutive frames.

activities and corresponding physiological signals. However, both ECG and PPG sensors need to be attached to body parts, which may cause discomfort and are inconvenient for long-term monitoring. To counter this issue, remote photoplethysmography (rPPG) [12,36,66] methods have been developing fast in recent years, aiming to measure heart activity remotely without any contact.
In earlier studies of facial rPPG measurement, most methods analyze subtle color changes on facial regions of interest (ROI) with classical signal processing approaches [30, 49, 50, 55, 57]. Besides, there are a few color subspace transformation methods [13, 59] which utilize all skin pixels for rPPG measurement. Based on the prior knowledge from traditional methods, a few learning based approaches [25, 44, 45, 51] are designed in non-end-to-end fashions. ROI based preprocessed signal representations (e.g., the time-frequency map [25] and spatio-temporal map [44, 45]) are generated first, and then learnable models capture rPPG features from these maps. However, these methods need a strict preprocessing procedure and neglect the global contextual clues outside the pre-defined ROIs. Meanwhile, more and more end-to-end deep learning based rPPG methods [11, 34, 53, 65, 67] are developed, which treat facial video frames as input and predict rPPG and other physiological signals directly. However, pure end-to-end methods are easily influenced by complex scenarios (e.g., head movement and various illumination conditions), and rPPG-unrelated features cannot be ruled out in learning, resulting in a large performance decrease [63] on realistic datasets (e.g., VIPL-HR [45]).

Recently, due to its excellent long-range attentional modeling capacity in solving sequence-to-sequence problems, the transformer [22,32] has been successfully applied in many artificial intelligence tasks such as natural language processing (NLP) [56] as well as image [15] and video [3] analysis. Similarly, rPPG measurement from facial videos can be treated as a video-sequence-to-signal-sequence problem, where long-range contextual clues should be exploited for semantic modeling. As shown in Fig.
1, rPPG clues from different skin regions and temporal locations (e.g., signal trajectories around t1, t2, and t3) share similar properties (e.g., trends with a rising edge first and then a falling edge, and relatively high magnitudes), which can be utilized for long-range feature modeling and enhancement. However, different from most video tasks that aim at representing large motions, facial rPPG measurement focuses on capturing subtle skin color changes, which makes global spatio-temporal perception challenging. Furthermore, video-based rPPG measurement is usually a long-time monitoring task, and it is challenging to design and train transformers with long video sequence inputs.

Motivated by the discussions above, we propose an end-to-end video transformer architecture, namely PhysFormer, for remote physiological measurement. On one hand, the cascaded temporal difference transformer blocks in PhysFormer benefit rPPG feature enhancement via global spatio-temporal attention based on fine-grained temporal skin color differences. On the other hand, to alleviate the interference-induced overfitting issue and complement the weak temporal supervision signals, elaborate supervision in the frequency domain is designed, which helps PhysFormer learn more intrinsic rPPG-aware features. The contributions of this work are as follows:

• We propose the PhysFormer, which mainly consists of a powerful video temporal difference transformer backbone. To the best of our knowledge, this is the first work to explore the long-range spatio-temporal relationship for reliable rPPG measurement.

• We propose an elaborate recipe to supervise PhysFormer with label distribution learning and a curriculum learning guided dynamic loss in the frequency domain, to learn efficiently and alleviate overfitting.
• We conduct intra- and cross-dataset testings and show that the proposed PhysFormer achieves performance superior or on par with the state of the art without pretraining on large-scale datasets like ImageNet-21K.

2. Related Work

Remote physiological measurement. An early study of rPPG-based physiological measurement was reported in [57]. Plenty of traditional hand-crafted approaches have been developed in this field since then. Selectively merging information from different color channels [30,49,50] or different ROIs [27, 30] is proven to be efficient for subtle rPPG signal recovery. To improve the signal-to-noise ratio of the recovered rPPG signals, several signal decomposition methods such as independent component analysis (ICA) [27, 49, 50] and matrix completion [55] have also been proposed. In recent years, deep learning based approaches have come to dominate the field of rPPG measurement due to their strong spatio-temporal representation capabilities. On one hand, facial ROI based spatio-temporal signal maps [40, 41, 44, 46, 47] are developed, which alleviate the interference from non-skin regions. Based on these signal maps, 2D-CNNs are utilized for rPPG feature extraction. On the other hand, end-to-end spatial networks [11, 53] and spatio-temporal models [20,34,35,48,63,65,67] are developed, which recover rPPG signals from the facial video directly. However, previous methods only consider the spatio-temporal rPPG features from adjacent frames and neglect the long-range relationship among quasi-periodic rPPG features.

Transformer for vision tasks. The transformer [32] was proposed in [56] to model sequential data in the field of NLP. The vision transformer (ViT) [15] was then proposed by feeding a transformer with sequences of image patches for image classification. Many other ViT variants [8, 14, 22, 23, 26, 38, 54, 60, 70] have been proposed since then, which achieve promising performance compared with their counterpart CNNs on image analysis tasks [6, 24, 74].
Recently, some works have introduced vision transformers for video understanding tasks such as action recognition [1, 3, 4, 16, 21, 39, 42], action detection [37, 58, 62, 73], video super-resolution [5], video inpainting [33, 71], and 3D animation [9,10]. Some works [21,42] conduct temporal contextual modeling with transformers based on single-frame features from pretrained 2D networks, while other works [1, 3, 4, 16, 39] mine spatio-temporal attention via video transformers directly. Most of these works are incompatible with the long-video-sequence (>150 frames) signal regression task. There are two related works [35, 64] using ViT for rPPG feature representation. TransRPPG [64] extracts rPPG features from preprocessed signal maps via ViT for face 3D mask presentation attack detection [68]. Based on the temporal shift networks [31, 34], EfficientPhys-T [35] adds several Swin transformer [38] layers for global spatial attention. Different from these two works, the proposed PhysFormer is an end-to-end video transformer, which is able to capture long-range spatio-temporal attentional rPPG features from facial video directly.

Figure 2. Framework of the PhysFormer. It consists of a shallow stem, a tube tokenizer, several temporal difference transformers, and an rPPG predictor head. The temporal difference transformer is formed from the Temporal Difference Multi-head Self-attention (TD-MHSA) and Spatio-temporal Feed-forward (ST-FF) modules, which enhance the global and local spatio-temporal representation, respectively. 'TDC' is short for the temporal difference convolution [63,69].

3.
Methodology

We first introduce the architecture of PhysFormer in Sec. 3.1, then introduce label distribution learning for rPPG measurement in Sec. 3.2, and finally present the curriculum learning guided dynamic supervision in Sec. 3.3.

3.1. PhysFormer

As illustrated in Fig. 2, PhysFormer consists of a shallow stem E_stem, a tube tokenizer E_tube, N temporal difference transformer blocks E_trans^i (i = 1, ..., N) and an rPPG predictor head. Inspired by the study in [61], we adopt a shallow stem to extract coarse local spatio-temporal features, which benefits fast convergence and clearer subsequent global self-attention. Specifically, the stem is formed by three convolutional blocks with kernel sizes (1x5x5), (3x3x3) and (3x3x3), respectively. Each convolution operator is cascaded with batch normalization (BN), ReLU and MaxPool. The pooling layers only halve the spatial dimensions. Therefore, given an RGB facial video input X ∈ R^(3×T×H×W), the stem outputs X_stem = E_stem(X), where X_stem ∈ R^(D×T×H/8×W/8), and D, T, W, H indicate channel, sequence length, width and height, respectively. Then X_stem is partitioned into spatio-temporal tube tokens X_tube ∈ R^(D×T'×H'×W') via the tube tokenizer E_tube. Subsequently, the tube tokens are forwarded through the N temporal difference transformer blocks to obtain the global-local refined rPPG features X_trans, which have the same dimensions as X_tube. Finally, the rPPG predictor head temporally upsamples, spatially averages, and projects the features X_trans to a 1D signal Y ∈ R^T.

Tube tokenization. The coarse feature X_stem is partitioned into non-overlapping tube tokens via E_tube(X_stem), which aggregates spatio-temporal neighbor semantics and reduces computational costs for the subsequent transformers.
Specifically, with the targeted tube size T_s × H_s × W_s (the same as the partition step size in the non-overlapping setting), the tube token map X_tube ∈ R^(D×T'×H'×W') has length, height and width

T' = \left\lfloor \frac{T}{T_s} \right\rfloor, \quad H' = \left\lfloor \frac{H/8}{H_s} \right\rfloor, \quad W' = \left\lfloor \frac{W/8}{W_s} \right\rfloor. \qquad (1)

Please note that there are no position embeddings after the tube tokenization, as the stem at the early stage already captures relative spatio-temporal positions.

Temporal difference multi-head self-attention. In the self-attention mechanism [15, 56], the relationship between tokens is modeled by the similarity between the projected query-key pairs, yielding the attention score. Instead of point-wise linear projection, we utilize temporal difference convolution (TDC) [63,69] for the query (Q) and key (K) projections, which can capture fine-grained local temporal difference features to describe subtle color changes. TDC with learnable weights w can be formulated as

\mathrm{TDC}(x) = \underbrace{\sum_{p_n \in \mathcal{R}} w(p_n) \cdot x(p_0 + p_n)}_{\text{vanilla 3D convolution}} + \theta \cdot \underbrace{\Big( -x(p_0) \cdot \sum_{p_n \in \mathcal{R}'} w(p_n) \Big)}_{\text{temporal difference term}}, \qquad (2)

where p_0, \mathcal{R} and \mathcal{R}' indicate the current spatio-temporal location, the sampled local (3x3x3) neighborhood and the sampled adjacent neighborhood, respectively. Then the query and key are projected as

Q = \mathrm{BN}(\mathrm{TDC}(X_{\text{tube}})), \quad K = \mathrm{BN}(\mathrm{TDC}(X_{\text{tube}})). \qquad (3)

For the value (V) projection, point-wise linear projection without BN is utilized. Then Q, K, V ∈ R^(D×T'×H'×W') are flattened into sequences and separated into h heads (D_h = D/h for each head).
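The temporal difference term in Eq. (2) can be illustrated with a simplified 1D (temporal-only) analogue. The real TDC [63,69] operates on 3D (3x3x3) neighborhoods, so the size-3 kernel and zero padding below are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

def tdc_1d(x, w, theta=0.7):
    """Simplified 1D analogue of the temporal difference convolution (Eq. 2).

    out[t] = sum_k w[k] * x[t+k]                      # vanilla term over R
             - theta * x[t] * sum_{k != center} w[k]  # difference term over R'

    With theta = 0 this reduces to a vanilla convolution; with theta = 1 and a
    locally constant signal, the adjacent-neighbor contribution cancels on the
    interior, so the output is driven by temporal *changes* of the signal.
    """
    c = len(w) // 2                        # center index of the kernel
    xp = np.pad(x, c)                      # zero padding keeps the output length
    vanilla = np.array([np.dot(w, xp[t:t + len(w)]) for t in range(len(x))])
    w_adjacent = w.sum() - w[c]            # kernel weights over the adjacent set R'
    return vanilla - theta * x * w_adjacent
```

This bias toward temporal change is what makes the Q/K projections sensitive to the subtle frame-to-frame color dynamics that carry the rPPG signal.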
For the i-th head (i ≤ h), the self-attention (SA) can be formulated as

\mathrm{SA}_i = \mathrm{Softmax}(Q_i K_i^T / \tau) V_i, \qquad (4)

where τ controls the sparsity. We find that the default setting τ = \sqrt{D_h} in [15, 56] performs poorly for rPPG measurement. In line with the periodicity of rPPG features, we use a smaller τ value to obtain sparser attention activations. The corresponding study can be found in Table 6. The output of TD-MHSA is the concatenation of the SA from all heads, followed by a linear projection U ∈ R^(D×D):

\text{TD-MHSA} = \mathrm{Concat}(\mathrm{SA}_1; \mathrm{SA}_2; ...; \mathrm{SA}_h)U. \qquad (5)

As illustrated in Fig. 2, a residual connection and layer normalization (LN) are applied after TD-MHSA.

Spatio-temporal feed-forward. The vanilla feed-forward network consists of two linear transformation layers, where the hidden dimension D' between the two layers is expanded to learn a richer feature representation. In contrast, we introduce a depthwise 3D convolution (with BN and a nonlinear activation) between these two layers, at slight extra computational cost but with remarkable performance improvement. The benefits are two-fold: 1) as a complement to TD-MHSA, ST-FF can refine local inconsistency and part of the noisy features; 2) the richer locality provides TD-MHSA with sufficient relative position cues.

3.2. Label Distribution Learning

Similar to the facial age estimation task [18, 19], in which faces at close ages look quite similar, facial rPPG signals with close HR values usually have similar periodicity. Inspired by this observation, instead of considering each facial video as an instance with one label (HR), we regard each facial video as an instance associated with a label distribution. The label distribution covers a certain number of class labels, representing the degree to which each label describes the instance.
In this way, one facial video can contribute to both the targeted HR value and its adjacent HRs. To exploit the similarity information among HR classes during the training stage, we model rPPG-based HR estimation as a specific L-class multi-label classification problem, where L=139 in our case (each integer HR value within [42, 180] bpm is one class). A label distribution p = {p_1, p_2, ..., p_L} ∈ R^L is assigned to each facial video X. It is assumed that each entry of p is a real value in the range [0,1] such that \sum_{k=1}^{L} p_k = 1. We use the Gaussian distribution function, centered at the ground truth HR label Y_HR with standard deviation σ, to construct the corresponding label distribution p:

p_k = \frac{1}{\sqrt{2\pi}\sigma} \exp\left( -\frac{(k - (Y_{HR} - 41))^2}{2\sigma^2} \right). \qquad (6)

The label distribution loss can be formulated as L_LD = KL(p, Softmax(\hat{p})), where KL(·) denotes the Kullback-Leibler (KL) divergence [17], and \hat{p} is the power spectral density (PSD) of the predicted rPPG signal. Please note that the previous work [43] also considers distribution learning for HR estimation. However, it is totally different from our work: 1) the motivation in [43] is to smooth the temporal HR outliers caused by facial movements across continuous video clips, while our work is more generic, aiming at efficient feature learning across adjacent labels under limited-scale training data; 2) the technique in [43] is a post-processing step applied after HR estimation from handcrafted rPPG signals, while our work designs a reasonable supervision signal L_LD for PhysFormer.

3.3. Curriculum Learning Guided Dynamic Loss

Curriculum learning [2], a major machine learning regime with the philosophy of an easy-to-hard curriculum, is utilized to train PhysFormer.
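The label distribution construction in Eq. (6) above can be made concrete with a small NumPy sketch (our own illustration, not the authors' code), using the paper's L=139 classes covering integer HRs of 42-180 bpm and σ=1.0:

```python
import numpy as np

def hr_label_distribution(y_hr, sigma=1.0, n_classes=139):
    """Gaussian label distribution over HR classes (Eq. 6).

    Class k (k = 1..139) corresponds to the integer HR k + 41 bpm,
    so the distribution peaks at class k = y_hr - 41.
    """
    k = np.arange(1, n_classes + 1)
    p = np.exp(-((k - (y_hr - 41)) ** 2) / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
    return p / p.sum()  # renormalize so the truncated distribution sums to 1

def label_distribution_loss(p, p_hat):
    """L_LD = KL(p || softmax(p_hat)), where p_hat plays the role of the PSD logits."""
    q = np.exp(p_hat - p_hat.max())
    q /= q.sum()
    return float((p * np.log((p + 1e-12) / (q + 1e-12))).sum())
```

A video labeled 100 bpm thus also supervises the 99 and 101 bpm classes with symmetric, smaller weights, which is the intended sharing across adjacent labels.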
In the rPPG measurement task, supervision signals from the temporal domain (e.g., mean square error loss [11], negative Pearson loss [65, 67]) and the frequency domain (e.g., cross-entropy loss [46,63], signal-to-noise ratio loss [53]) provide different extents of constraints for model learning. The former gives signal-trend-level constraints, which are straightforward and easy for model convergence but prone to overfitting afterwards. In contrast, the latter, with strong constraints in the frequency domain, enforces the model to learn periodic features within the target frequency bands, which is hard to converge well due to realistic rPPG-irrelevant noise. Inspired by curriculum learning, we propose a dynamic supervision that gradually enlarges the frequency constraints, which alleviates the overfitting issue and gradually benefits the intrinsic rPPG-aware feature learning. Specifically, an exponential increment strategy is adopted, and a comparison with other dynamic strategies (e.g., linear increment) is shown in Table 7. The dynamic loss L_overall can be formulated as

\mathcal{L}_{\text{overall}} = \underbrace{\alpha \cdot \mathcal{L}_{\text{time}}}_{\text{temporal}} + \underbrace{\beta \cdot (\mathcal{L}_{\text{CE}} + \mathcal{L}_{\text{LD}})}_{\text{frequency}}, \quad \beta = \beta_0 \cdot \eta^{(\text{Epoch}_{\text{current}} - 1)/\text{Epoch}_{\text{total}}}, \qquad (7)

where the hyperparameters α, β_0 and η equal 0.1, 1.0 and 5.0, respectively. The negative Pearson loss [65, 67] and the frequency cross-entropy loss [46,63] are adopted as L_time and L_CE, respectively. With the dynamic supervision, PhysFormer can perceive the signal trend better at the beginning, and this warm-up facilitates gradually stronger frequency knowledge learning later.

4.
Experimental Evaluation

Experiments of rPPG-based physiological measurement for three types of physiological signals, i.e., heart rate (HR), heart rate variability (HRV), and respiration frequency (RF), are conducted on four benchmark datasets (VIPL-HR [45], MAHNOB-HCI [52], MMSE-HR [55], and OBF [29]).

4.1. Datasets and Performance Metrics

VIPL-HR [45] is a large-scale dataset for remote physiological measurement under less-constrained scenarios. It contains 2,378 RGB videos of 107 subjects recorded with different head movements, lighting conditions and acquisition devices. MAHNOB-HCI [52] is one of the most widely used benchmarks for remote HR measurement evaluation. It includes 527 facial videos with 61 fps frame rate and 780x580 resolution from 27 subjects. MMSE-HR [55] is a dataset including 102 RGB videos from 40 subjects, and the raw resolution of each video is 1040x1392. OBF [29] is a high-quality dataset for remote physiological signal measurement. It contains 200 five-minute-long RGB videos with 60 fps frame rate recorded from 100 healthy adults. The average HR estimation task is evaluated on all four datasets, while the HRV and RF estimation tasks are evaluated on the high-quality OBF [29] dataset. Specifically, we follow existing methods [41, 46, 67] and report low frequency (LF), high frequency (HF), and the LF/HF ratio for HRV and RF estimation. We report the most commonly used performance metrics for evaluation, including the standard deviation (SD), mean absolute error (MAE), root mean square error (RMSE), and Pearson's correlation coefficient (r).

4.2. Implementation Details

Our proposed method is implemented with PyTorch. For each video clip, we use the MTCNN face detector [72] to crop the enlarged face area in the first frame and fix the region throughout the following frames. The videos in MAHNOB-HCI and OBF are downsampled to 30 fps for efficiency. The settings N=12, h=4, D=96, D'=144 are used for PhysFormer, while θ=0.7 and τ=2.0 for TD-MHSA.
The targeted tube size T_s × H_s × W_s is 4×4×4. In the training stage, we randomly sample RGB face clips of size 160×128×128 (T×H×W) as model inputs. Random horizontal flipping and temporal up/down-sampling [63] are used for data augmentation. PhysFormer is trained with the Adam optimizer, and the initial learning rate and weight decay are 1e-4 and 5e-5, respectively. We find no obvious performance improvement using the AdamW optimizer. We train models for 25 epochs with the fixed setting α=0.1

Table 1. Intra-dataset testing results on VIPL-HR [45]. The symbols ▲, ♦ and ⋆ denote traditional, non-end-to-end learning based and end-to-end learning based methods, respectively. Best results are marked in bold and second best in underline. Columns: SD↓ (bpm), MAE↓ (bpm), RMSE↓ (bpm), r↑.

  Tulyakov2016 [55]▲:   18.0   15.9   21.0   0.11
  POS [59]▲:            15.3   11.5   17.2   0.30
  CHROM [13]▲:          15.1   11.4   16.9   0.28
  RhythmNet [45]♦:      8.11   5.30   8.14   0.76
  ST-Attention [47]♦:   7.99   5.40   7.99   0.66
  NAS-HR [40]♦:         8.10   5.12   8.01   0.79
  CVD [46]♦:            7.92   5.02   7.97   0.79
  Dual-GAN [41]♦:       7.63   4.93   7.68   0.81
  I3D [7]⋆:             15.9   12.0   15.9   0.07
  PhysNet [65]⋆:        14.9   10.8   14.8   0.20
  DeepPhys [11]⋆:       13.6   11.0   13.8   0.11
  AutoHR [63]⋆:         8.48   5.68   8.68   0.72
  PhysFormer (Ours)⋆:   7.74   4.97   7.79   0.78

for the temporal loss, while the exponentially increasing parameter β ∈ [1, 5] is used for the frequency losses. We set σ=1.0 for label distribution learning. The batch size is 4 on one 32G V100 GPU. In the testing stage, similar to [45], we uniformly separate 30-second videos into three 10-second clips, and the video-level HR is calculated by averaging the HRs of the three short clips.

4.3. Intra-dataset Testing

HR estimation on VIPL-HR. In these experiments, we follow [45] and use a subject-exclusive 5-fold cross-validation protocol on VIPL-HR.
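The clip-level evaluation above turns a predicted rPPG signal into an average HR. A common way to do this is to take the spectral peak inside the plausible HR band; the following is a minimal sketch (our own, using a plain FFT periodogram and the paper's 42-180 bpm class range), not necessarily the exact estimator used by the authors:

```python
import numpy as np

def hr_from_rppg(signal, fps=30.0, lo_bpm=42.0, hi_bpm=180.0):
    """Estimate average HR (bpm) from the PSD peak of one rPPG clip."""
    signal = signal - signal.mean()                    # remove the DC component
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)  # frequency bins in Hz
    power = np.abs(np.fft.rfft(signal)) ** 2           # periodogram
    band = (freqs * 60.0 >= lo_bpm) & (freqs * 60.0 <= hi_bpm)
    return 60.0 * freqs[band][np.argmax(power[band])]

# A 10-second clip at 30 fps with a 1.2 Hz (72 bpm) pulse plus mild noise:
t = np.arange(300) / 30.0
clip = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.default_rng(0).standard_normal(300)
```

The video-level HR then averages such per-clip estimates over the three 10-second clips.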
As shown in Table 1, all three traditional methods (Tulyakov2016 [55], POS [59] and CHROM [13]) perform poorly due to the complex scenarios (e.g., large head movements and various illumination) in the VIPL-HR dataset. Similarly, the existing end-to-end learning based methods (e.g., PhysNet [65], DeepPhys [11], and AutoHR [63]) predict unreliable HR values with large RMSE compared with non-end-to-end learning approaches (e.g., RhythmNet [45], ST-Attention [47], NAS-HR [40], CVD [46], and Dual-GAN [41]). Such a large performance margin might be caused by the coarse and overfitted rPPG features extracted by the end-to-end models. In contrast, all five non-end-to-end methods first extract fine-grained signal maps from multiple facial ROIs, and then more dedicated rPPG clues are extracted by the cascaded models. Without the strict and heavy preprocessing procedure in [40,41,45-47], our proposed PhysFormer can be trained from scratch on facial videos directly, and achieves performance comparable to the state-of-the-art non-end-to-end learning based method Dual-GAN [41]. This indicates that PhysFormer is able to learn intrinsic and periodic rPPG-aware features automatically.

Table 2. Performance comparison of HR and RF measurement as well as HRV analysis on OBF [29].
Each triple reports SD / RMSE / r for HR (bpm), RF (Hz), LF (u.n.), HF (u.n.) and LF/HF, in that order.

  ROI green [29]▲:      2.159 / 2.162 / 0.99    0.078 / 0.084 / 0.321   0.22 / 0.24 / 0.573    0.22 / 0.24 / 0.573    0.819 / 0.832 / 0.571
  CHROM [13]▲:          2.73 / 2.733 / 0.98     0.081 / 0.081 / 0.224   0.199 / 0.206 / 0.524  0.199 / 0.206 / 0.524  0.83 / 0.863 / 0.459
  POS [59]▲:            1.899 / 1.906 / 0.991   0.07 / 0.07 / 0.44      0.155 / 0.158 / 0.727  0.155 / 0.158 / 0.727  0.663 / 0.679 / 0.687
  CVD [46]♦:            1.257 / 1.26 / 0.996    0.058 / 0.058 / 0.606   0.09 / 0.09 / 0.914    0.09 / 0.09 / 0.914    0.453 / 0.453 / 0.877
  rPPGNet [67]⋆:        1.756 / 1.8 / 0.992     0.064 / 0.064 / 0.53    0.133 / 0.135 / 0.804  0.133 / 0.135 / 0.804  0.58 / 0.589 / 0.773
  PhysFormer (Ours)⋆:   0.804 / 0.804 / 0.998   0.054 / 0.054 / 0.661   0.085 / 0.086 / 0.912  0.085 / 0.086 / 0.912  0.389 / 0.39 / 0.896

Table 3. Intra-dataset results on MAHNOB-HCI [52]. Columns: SD↓ (bpm), MAE↓ (bpm), RMSE↓ (bpm), r↑; '-' marks entries not reported for that method.

  Poh2011 [49]▲:        13.5   -      13.6   0.36
  CHROM [13]▲:          13.49  -      22.36  0.21
  Li2014 [30]▲:         6.88   -      7.62   0.81
  Tulyakov2016 [55]▲:   5.81   4.96   6.23   0.83
  SynRhythm [44]♦:      10.88  -      11.08  -
  RhythmNet [45]♦:      3.99   -      3.99   0.87
  HR-CNN [53]⋆:         -      7.25   9.24   0.51
  rPPGNet [67]⋆:        7.82   5.51   7.82   0.78
  DeepPhys [11]⋆:       -      4.57   -      -
  AutoHR [63]⋆:         4.73   3.78   5.10   0.86
  Meta-rPPG [28]⋆:      4.9    3.01   3.68   0.85
  PhysFormer (Ours)⋆:   3.87   3.25   3.97   0.87

HR estimation on MAHNOB-HCI. For the HR estimation task on MAHNOB-HCI, similar to [67], a subject-independent 9-fold cross-validation protocol is adopted. In consideration of the convergence difficulty due to the low-illumination and highly compressed videos in MAHNOB-HCI, we finetune the VIPL-HR pretrained model on MAHNOB-HCI for a further 15 epochs. The HR estimation results are shown in Table 3. The proposed PhysFormer achieves the lowest SD (3.87 bpm) and highest r (0.87) among the traditional, non-end-to-end learning, and end-to-end learning methods, which indicates the reliability of the rPPG features learned by PhysFormer under sufficient supervision.
Our performance is on par with the latest end-toend learning method Meta-rPPG [28] without transductive adaptation from target frames. HR, HRV and RF estimation on OBF. We also conduct experiments for three types of physiological signals, i.e., HR, RF, and HRV measurement on the OBF [29] dataset. Following [46,67], we use a 10-fold subject-exclusive protocol for all experiments. All the results are shown in Table 2. From the results, we can see that the proposed approach outperforms the existing state-of-the-art traditional (ROI green [29], CHROM [13], POS [59]) and end-to-end learning (rPPGNet [67]) methods by a large margin on all evaluation metrics for HR, RF and all HRV features. The proposed PhysFormer also gives more accurate estimation Table 4. Cross-dataset results on MMSE-HR [55]. Method SD \u2193 (bpm) MAE \u2193 (bpm) RMSE \u2193 (bpm) r \u2191 Li2014 [30]\u25b2 20.02 19.95 0.38 CHROM [13]\u25b2 14.08 13.97 0.55 Tulyakov2016 [55]\u25b2 12.24 11.37 0.71 ST-Attention [47]\u2666 9.66 10.10 0.64 RhythmNet [45]\u2666 6.98 7.33 0.78 CVD [46]\u2666 6.06 6.04 0.84 PhysNet [65]\u22c6 12.76 13.25 0.44 TS-CAN [34]\u22c6 3.85 7.21 0.86 AutoHR [63]\u22c6 5.71 5.87 0.89 EfficientPhys-C [35]\u22c6 2.91 5.43 0.92 EfficientPhys-T1 [35]\u22c6 3.48 7.21 0.86 PhysFormer (Ours)\u22c6 5.22 2.84 5.36 0.92 in terms of HR, RF, and LF/HF compared with the preprocessed signal map based non-end-to-end learning method CVD [46]. These results indicate that PhysFormer could not only handle the average HR estimation task but also give a promising prediction of the rPPG signal for RF measurement and HRV analysis, which shows its potential in many healthcare applications. 4.4. Cross-dataset Testing Besides of the intra-dataset testings on the VIPL-HR, MAHNOB-HCI, and OBF datasets, we also conduct crossdataset testing on MMSE-HR [55] following the protocol of [45]. The models trained on VIPL-HR are directly tested on MMSE-HR. 
All the results of the proposed approach and the state-of-the-art methods are shown in Table 4. It is clear that the proposed PhysFormer generalizes well in unseen domain. It is worth noting that PhysFormer achieves the lowest SD (5.22 bpm), MAE (2.84 bpm), RMSE (5.36 bpm) as well as the highest r (0.92) among the traditional, nonend-to-end learning and end-to-end learning based methods, indicating 1) the predicted HRs are highly correlated with the ground truth HRs, and 2) the model learns domaininvariant intrinsic rPPG-aware features. Compared with the spatio-temporal transformer based EfficientPhys-T1 [35], our proposed PhysFormer is able to predict more accurate physiological signals, which indicates the effectiveness of the long-range spatio-temporal attention. \fTable 5. Ablation of Tube Tokenization of PhysFormer. The three dimensions in tensors indicate length\u00d7 height\u00d7width. Inputs [Stem] Feature Size [Tube Size] Token Numbers RMSE \u2193 (bpm) 160 \u00d7 128 \u00d7 128 [\u00d7 ] 160 \u00d7 128 \u00d7 128 [4 \u00d7 32 \u00d7 32] 40 \u00d7 4 \u00d7 4 10.62 160 \u00d7 128 \u00d7 128 [\u221a] 160 \u00d7 16 \u00d7 16 [4 \u00d7 4 \u00d7 4] 40 \u00d7 4 \u00d7 4 7.56 160 \u00d7 96 \u00d7 96 [\u221a] 160 \u00d7 12 \u00d7 12 [4 \u00d7 4 \u00d7 4] 40 \u00d7 3 \u00d7 3 8.03 160 \u00d7 128 \u00d7 128 [\u221a] 160 \u00d7 16 \u00d7 16 [4 \u00d7 16 \u00d7 16] 40 \u00d7 1 \u00d7 1 10.61 160 \u00d7 128 \u00d7 128 [\u221a] 160 \u00d7 16 \u00d7 16 [2 \u00d7 4 \u00d7 4] 80 \u00d7 4 \u00d7 4 7.81 Table 6. Ablation of TD-MHSA and ST-FF in PhysFormer. MHSA \u03c4 Feed-forward RMSE (bpm) \u2193 ST-FF 9.81 TD-MHSA \u221aDh \u22484.9 ST-FF 9.51 TD-MHSA 2.0 ST-FF 7.56 vanilla MHSA 2.0 ST-FF 10.43 TD-MHSA 2.0 vanilla FF 8.27 Table 7. Ablation of dynamic loss in the frequency domain. The temporal loss Ltime is with fixed \u03b1=0.1 here. \u2018CE\u2019 and \u2018LD\u2019 denote cross-entropy and label distribution, respectively. 
Frequency loss \u03b2 Strategy RMSE (bpm) \u2193 LCE + LLD 1.0 fixed 8.48 LCE + LLD 5.0 fixed 8.86 LCE + LLD [1.0, 5.0] linear 8.37 LCE + LLD [1.0, 5.0] exponential 7.56 LCE [1.0, 5.0] exponential 8.09 LLD [1.0, 5.0] exponential 8.21 LLD (real distribution) [1.0, 5.0] exponential 8.72 4.5. Ablation Study We also provide the results of ablation studies for HR estimation on the Fold-1 of the VIPL-HR [45] dataset. Impact of tube tokenization. In the default setting of PhysFormer, a shallow stem cascaded with a tube tokenization is used. In this ablation, we consider other four tokenization configurations with or w/o stem. It can be seen from the first row in Table 5 that the stem helps the PhysFormer see better [61], and the RMSE increases dramatically (+3.06 bpm) when w/o the stem. Then we investigate the impacts of the spatial and temporal domains in tube tokenization. It is clear that the result in the fourth row with full spatial projection is quite poor (RMSE=10.61 bpm), indicating the necessity of the spatial attention. In contrast, tokenization with smaller tempos (e.g., [2x4x4]) or spatial inputs (e.g., 160x96x96) reduces performance slightly. Impact of TD-MHSA and ST-FF. As shown in Table 6, both the TD-MHSA and ST-FF play vital roles in PhysFormer. The result in the first row shows that the performance degrades sharply without spatio-temporal attention. Moreover, it can be seen from the last two rows Figure 3. Testing results of fixed and dynamic frequency supervisions on the Fold-1 of VIPL-HR. that without TD-MHSA/ST-FF, PhysFormer with vanilla MHSA/FF obtains 10.43/8.27 bpm RMSE. One important finding in this research is that, the temperature \u03c4 influences the MHSA a lot. When the \u03c4 = \u221aDh like previous ViT [1, 15], the predicted rPPG signals are unsatisfied (RMSE=9.51 bpm). Regularizing the \u03c4 with smaller value enforces sparser spatio-temporal attention, which is effective for the quasi-periodic rPPG task. 
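The effect of the attention temperature τ discussed above can be made concrete with a minimal self-contained sketch (a NumPy stand-in with illustrative names, not the authors' code): TD-MHSA divides the query-key logits by a fixed τ = 2.0 instead of the usual ViT choice √Dh, which sharpens (sparsifies) each row of the attention map.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_attention(q, k, v, tau):
    # Self-attention with an explicit temperature tau; smaller tau
    # spreads the logits further apart, so the softmax rows get peakier.
    attn = softmax(q @ k.T / tau)
    return attn @ v, attn

rng = np.random.default_rng(0)
q = rng.standard_normal((640, 64))   # e.g. 40x4x4 = 640 tube tokens, Dh = 64
k = rng.standard_normal((640, 64))
v = rng.standard_normal((640, 64))

_, attn_default = scaled_attention(q, k, v, tau=np.sqrt(64))  # tau = sqrt(Dh) ~ 8
_, attn_sharp = scaled_attention(q, k, v, tau=2.0)            # PhysFormer setting

# The smaller temperature concentrates each row's probability mass,
# i.e. sparser spatio-temporal attention.
print(bool(attn_sharp.max() > attn_default.max()))  # True
```

The comparison at the end mirrors the ablation's finding: the fixed small temperature enforces sparser attention than the √Dh default.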
Impact of label distribution learning. Besides the temporal loss Ltime and the frequency cross-entropy loss LCE, the ablations w/ and w/o the label distribution loss LLD are shown in the last four rows of Table 7. Although LLD alone performs slightly worse (+0.12 bpm RMSE) than LCE, the best performance is achieved using both losses, indicating the effectiveness of explicit distribution constraints for alleviating extreme-frequency interference and propagating adjacent label knowledge. It is interesting to find from the last two rows that when using the real PSD distribution from ground-truth PPG signals as p, the performance is inferior due to the lack of an obvious peak and partial noise. We can also find from Fig. 4(a) that \u03c3 values ranging from 0.9 to 1.2 for LLD are suitable for achieving good performance. Impact of dynamic supervision. Fig. 3 illustrates the testing performance on Fold-1 VIPL-HR when training with fixed and dynamic supervision. It is clear that with the exponentially increased frequency loss, models in the blue curve converge faster and achieve smaller RMSE. We also compare several kinds of fixed and dynamic strategies in Table 7. The results in the first four rows indicate that 1) using a fixed higher \u03b2 leads to poorer performance caused by convergence difficulty; and 2) models with the exponentially increased \u03b2 perform better than those using linear increments. Impact of \u03b8 and layer/head numbers. The hyperparameter \u03b8 trades off the contribution of local temporal gradient information. As illustrated in Fig. 4(b), PhysFormer achieves smaller RMSE when \u03b8=0.4 and 0.7, indicating the importance of the normalized local temporal difference features for global spatio-temporal attention. We also investigate how the layer and head numbers influence the performance. As shown in Fig. 5(a), with deeper temporal transformer blocks, the RMSE is reduced progressively despite the heavier computational cost. In terms of the impact of head numbers, it is clear from Fig. 5(b) that PhysFormer with four heads performs the best, while fewer heads lead to sharp performance drops. Figure 4. Impacts of the (a) \u03c3 in label distribution learning and (b) \u03b8 in TD-MHSA. Figure 5. Ablation of the (a) layers and (b) heads in PhysFormer. 4.6. Visualization and Discussion We visualize the attention map from the last TD-MHSA module as well as one example of the query-key interaction in Fig. 6. The x and y axes indicate the attention confidence from key and query tube tokens, respectively. From the attention map, we can easily find periodic or quasi-periodic responses along both axes, indicating the periodicity of the intrinsic rPPG features from PhysFormer. To be specific, given the 530th tube token (in blue) from the forehead (spatial face domain) and peak (temporal signal domain) locations as a query, the corresponding key responses are illustrated at the blue line in the attention map. On one hand, it can be seen from the key responses that dominant spatial attentions focus on the facial skin regions and discard the unrelated background. On the other hand, the temporal localizations of the key responses are around peak positions in the predicted rPPG signals. All these patterns are reasonable: 1) the forehead and cheek regions [57] have richer blood volume for rPPG measurement and are also reliable since these regions are less affected by facial muscle movements caused by, e.g., facial expressions and talking; and 2) rPPG signals from healthy people are usually periodic. However, we also find two limitations of the spatio-temporal attention from Fig. 6.
First, there are still some unexpected responses (e.g., continuous query tokens with similar key responses) in the attention map, which might introduce task-irrelevant noise and harm the performance. Second, the temporal attentions are not always accurate, and some are coarse with phase shifts (e.g., the first vertical dotted line of the rPPG signals at the bottom of Fig. 6). Figure 6. Visualization of the attention map from the 1st head in the last TD-MHSA module. Given the 530th tube token in blue as a query, representative key responses are illustrated (the brighter, the more attentive). The predicted downsampled rPPG signals are shown for temporal attention understanding. 5." + }, + { + "url": "http://arxiv.org/abs/2004.08388v1", + "title": "Multi-Modal Face Anti-Spoofing Based on Central Difference Networks", + "abstract": "Face anti-spoofing (FAS) plays a vital role in securing face recognition\nsystems from presentation attacks. Existing multi-modal FAS methods rely on\nstacked vanilla convolutions, which are weak in describing detailed intrinsic\ninformation from modalities and easily become ineffective when the domain shifts\n(e.g., cross attack and cross ethnicity). In this paper, we extend the central\ndifference convolutional networks (CDCN) \\cite{yu2020searching} to a\nmulti-modal version, intending to capture intrinsic spoofing patterns among\nthree modalities (RGB, depth and infrared). Meanwhile, we also give an\nelaborate study about single-modal based CDCN. Our approach won the first place\nin \"Track Multi-Modal\" as well as the second place in \"Track Single-Modal\n(RGB)\" of ChaLearn Face Anti-spoofing Attack Detection Challenge@CVPR2020\n\\cite{liu2020cross}.
Our final submission obtains 1.02$\\pm$0.59\\% and\n4.84$\\pm$1.79\\% ACER in \"Track Multi-Modal\" and \"Track Single-Modal (RGB)\",\nrespectively. The codes are available at{https://github.com/ZitongYu/CDCN}.", + "authors": "Zitong Yu, Yunxiao Qin, Xiaobai Li, Zezheng Wang, Chenxu Zhao, Zhen Lei, Guoying Zhao", + "published": "2020-04-17", + "updated": "2020-04-17", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Face recognition has been widely used in many interactive arti\ufb01cial intelligence systems for its convenience (e.g., access control, face payment and device unlock). However, vulnerability to presentation attacks (PAs) curtails its reliable deployment. Merely presenting printed images or videos to the biometric sensor could fool face recognition systems. Typical examples of presentation attacks are print, video replay, and 3D masks. For the reliable use of face recognition systems, face anti-spoo\ufb01ng (FAS) methods are important to detect such presentation attacks. In recent years, several hand-crafted feature based [3, 4, 7, 15, 28, 27] and deep learning based [38, 33, 29, 22, 12, 34, 2, 8, 9] methods have been proposed for presentation attack detection (PAD). On one hand, the classical hand\u2217denotes corresponding author Living 1 Living 2 RGB Depth IR RGB Depth IR Spoo\ufb01ng 1 Spoo\ufb01ng 2 RGB Depth IR RGB Depth IR Spoo\ufb01ng 3 Spoo\ufb01ng 4 RGB Depth IR RGB Depth IR Figure 1. Examples of living and spoo\ufb01ng faces from CASIASURF CeFA dataset [21]. crafted descriptors (e.g., local binary pattern (LBP) [3]) leverage local relationship among the neighbours as the discriminative features, which is robust for describing the detailed invariant information (e.g., color texture, moir\u00b4 e pattern and noise artifacts) between the living and spoo\ufb01ng faces. 
On the other hand, due to the stacked convolution operations with nonlinear activation, the convolutional neural networks (CNN) hold strong representation abilities to distinguish the bona \ufb01de from PAs. However, CNN based methods focus on the deeper semantic features, which are weak in describing detailed intrinsic information between living and spoo\ufb01ng faces and easily being ineffective when acquisition conditions varies (e.g., light illumination and camera type). In order to solve this issue, central difference convolutional networks (CDCN) is developed [39] for single-modal (RGB) FAS task and achieves state-of-theart performance on several benchmark datasets. Although the state-of-the-art single-modal FAS methods are robust in some existing testing protocols, it is still challenging when arXiv:2004.08388v1 [cs.CV] 17 Apr 2020 \fencountering new kinds of domain shift (e.g., cross ethnicity). Recently, a large-scale cross-ethnicity face anti-spoo\ufb01ng dataset, the CASIA-SURF CeFA [21], is established, which covers three ethnicities, three modalities, 1607 subjects, and 2D plus 3D attack types. Some typical examples are shown in Fig. 1. The most challenging protocol 4 (simultaneously cross-attack and cross-ethnicity) is utilized for ChaLearn Face Anti-spoo\ufb01ng Attack Detection Challenge@CVPR2020 [20]. The baseline results in CASIASURF CeFA dataset [21] indicate: 1) multiple modalities (i.e., RGB, depth and infrared (IR)) fusion is more robust than using an arbitrary single modal, and 2) the multi-modal result, only 31.8\u00b110.0% ACER in protocol 4, is barely satisfactory. Hence it is necessary to explore more effective multi-modal FAS methods for cross-attack and crossethnicity testing. Motivated by the discussions above, we \ufb01rst analyze how different modality in\ufb02uences the performance of CDCN. Then we extend CDCN to a multi-modal version, intending to capture intrinsic spoo\ufb01ng patterns among various modalities. 
Our contributions include: \u2022 We are the \ufb01rst to utilize CDCN for depth and infrared modalities based FAS and analyze how CDCN performs with these two modalities. Besides considering CDCN as a single-modal network, we extend it to a multi-modal version, which captures rich discriminative clues among modalities and represents invariant intrinsic patterns across ethnicities and attacks. \u2022 Our approach won the \ufb01rst place in Track MultiModal1 as well as the second place in Track SingleModal (RGB)2 of ChaLearn Face Anti-spoo\ufb01ng Attack Detection Challenge@CVPR2020 [20]. 2. Related Work In this section, we \ufb01rst introduce some recent progress in the single-modal FAS community; and then demonstrate few recent works about multi-modal FAS. Finally, classical convolution operators for vision tasks are presented. Single-Modal Face Anti-Spoo\ufb01ng. Traditional singlemodal face anti-spoo\ufb01ng methods usually extract handcrafted features from the RGB facial images to capture the spoo\ufb01ng patterns. Several classical local descriptors such as LBP [3, 7], SIFT [27], SURF [5], HOG [15] and DoG [28] are utilized to extract frame level features while video level methods usually capture dynamic clues like dynamic texture [14], micro-motion [32] and eye blinking [24]. More recently, a few deep learning based methods are proposed for both frame level and video level face anti-spoo\ufb01ng. For 1https://competitions.codalab.org/competitions/23318 2https://competitions.codalab.org/competitions/22151 frame level methods [39, 29, 16, 26, 9, 12], deep CNN models are utilized to extract features in a binary-classi\ufb01cation setting. In contrast, auxiliary depth supervised FAS methods [2, 22] are introduced to learn more detailed information effectively. On the other hand, several video level CNN methods are presented to exploit the dynamic spatiotemporal [33, 34, 19] or rPPG [17, 22, 18, 36, 37, 31] features for PAD. 
Despite achieving state-of-the-art performance, single-modal methods are easily in\ufb02uenced by unseen domain shift (e.g., cross ethnicity and cross attack types) and not robust for challenging cases (e.g., harsh environment and realistic attacks). Multi-Modal Face Anti-Spoo\ufb01ng. There are also few works for multi-modal face anti-spoo\ufb01ng. Zhang et al. [40] take ResNet18 as the backbone and propose a three-stream network, where the input of each stream is RGB, Depth and IR face images, respectively. Then, these features are concatenated and passed to the last two residual blocks. Aleksandr et al. [25] also consider the similar fusion network with three streams. ResNet34 is chosen as the backbone and multi-scale features are fused at all residual blocks. Tao et al. [30] present a multi-stream CNN architecture called FaceBagNet. In order to enhance the local detailed representation ability, patch-level images are adopted as inputs. Moreover, modality feature erasing operation is designed to prevent over\ufb01tting and obtain more robust modal-fused features. All previous methods just consider standard backbone (ResNet) with stacked vanilla convolutions for multiple modalities, which might be weak in representing the intrinsic features between living and spoo\ufb01ng faces. Convolution Operators. The convolution operator is commonly used in extracting basic visual features in deep learning framework. Recently extensions to the vanilla convolution operator have been proposed. In one direction, classical local descriptors (e.g., LBP [1] and Gabor \ufb01lters [11]) are considered into convolution design. Representative works include Local Binary Convolution [13] and Gabor Convolution [23], which are proposed for saving computational cost and enhancing the resistance to the spatial changes, respectively. Recently, Yu et al. 
propose Central Difference Convolution (CDC) [39], which is suitable for FAS task because of its excellent representation ability for detailed intrinsic patterns. Another direction is to modify the spatial scope for aggregation. Two related works are dialated convolution [35] and deformable convolution [6]. However, these convolution operators are always designed for RGB modality, it is still unknown how they perform for depth and IR modalities. In order to overcome the above-mentioned drawbacks and \ufb01ll in the blank, we extend the state-of-the-art singlemodal network CDCN to a multi-modal version for challenging cross-ethnicity and cross-attack FAS task. \f3. Methodology In this section, we will \ufb01rst introduce CDC [39] as a preliminary in Section 3.1, then demonstrate our singlemodal and multi-modal neural architectures in Section 3.2 and Section 3.3, respectively. At last the supervision signals and loss functions are presented in Section 3.4. 3.1. Preliminary: CDC The feature maps and convolution can be represented in 3D shape (2D spatial domain and extra channel dimension) in modern deep learning frameworks. For simplicity, all convolutions in this paper are described in 2D while extension to 3D is straightforward. Vanilla Convolution. There are two main steps in the 2D spatial convolution: 1) sampling local receptive \ufb01eld region R over the input feature map x; 2) aggregation of sampled values via weighted summation. Hence, the output feature map y can be formulated as y(p0) = X pn\u2208R w(pn) \u00b7 x(p0 + pn), (1) where p0 denotes current location on both input and output feature maps while pn enumerates the locations in R. For instance, local receptive \ufb01eld region for convolution operation with 3\u00d73 kernel and dilation 1 is R = {(\u22121, \u22121), (\u22121, 0), \u00b7 \u00b7 \u00b7 , (0, 1), (1, 1)}. Central Difference Convolution. 
For FAS task, the discriminative and robust features indicate \ufb01ne-grained living/spoo\ufb01ng patterns and environment invariant clues, respectively. Local gradient operator (e.g., basic element in local binary pattern (LBP) [3]), as a residual and difference term, is able to capture rich detailed patterns and not easily affected by external changes. Inspired by LBP [3], we introduce central difference context into vanilla convolution to enhance its representation and generalization capacity. Similar to vanilla convolution, central difference convolution also consists of two steps, i.e., sampling and aggregation. The sampling step is similar to that in vanilla convolution while the aggregation step is different: central difference convolution prefers to aggregate the center-oriented gradient of sampled values. Thus Eq. (1) becomes y(p0) = X pn\u2208R w(pn) \u00b7 (x(p0 + pn) \u2212x(p0)). (2) When pn = (0, 0), the gradient value always equals to zero with respect to the central location p0 itself. As both the intensity-level semantic information and gradient-level detailed message are crucial for distinguishing the living and spoo\ufb01ng faces, which indicates that combining vanilla convolution with central difference convolution might be a feasible manner to provide more robust Vanilla Convolution\u00a0\u00a0 Central Difference Convolution Generalized Central\u00a0 Difference Convolution (CDC) expand Figure 2. Generalized central difference convolution (CDC). modeling capacity. As illustrated in Fig. 2, we generalize central difference convolution as y(p0) = \u03b8 \u00b7 X pn\u2208R w(pn) \u00b7 (x(p0 + pn) \u2212x(p0)) | {z } central difference convolution +(1 \u2212\u03b8) \u00b7 X pn\u2208R w(pn) \u00b7 x(p0 + pn) | {z } vanilla convolution , (3) where hyperparameter \u03b8 \u2208[0, 1] tradeoffs the contribution between intensity-level and gradient-level information. 
The higher value of \u03b8 means the more importance of central difference gradient information. Similar to [39], we refer to this generalized central difference convolution as CDC. 3.2. Single-Modal CDCN We follow the similar con\ufb01guration \u2018CDCN++\u2019 [39] as our single-modal backbone, including low-mid-high level cells and Multiscale Attention Fusion Module (MAFM). In the consideration of the large-scale training data in CASIASURF CeFA dataset, we set the initial channel number as 80 instead of 64. The speci\ufb01c network is shown in Fig. 3(a). Single-modal face image with size 256\u00d7256\u00d73 is taken as the network input and the output is the predicted 32\u00d732 grayscale mask. 3.3. Multi-Modal CDCN We adopt the con\ufb01guration \u2018CDCN\u2019 [39] as the backbone of each modality branch as we \ufb01nd the MAFM would drop the performance when using multi-modal fusion. As illustrated in Fig. 3(b), the backbone network of each modality branch is not shared. Thus each branch is able to learn modality-aware features independently. The multilevel features from each modality branch are fused via concatenation. Finally, the two head layers aggregate the multimodal features and predict the grayscale mask. 
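The generalized CDC in Eq. (3) can be rearranged into a simple identity, y = vanilla(x, w) \u2212 \u03b8 \u00b7 w.sum() \u00b7 x(p0), since the central-difference term is the vanilla output minus the summed kernel times the center value. A minimal single-channel NumPy sketch of this (illustrative code, not the released implementation):

```python
import numpy as np

def conv2d(x, w):
    # 'valid' vanilla 2D convolution (single channel, stride 1).
    kh, kw = w.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + kh, j:j + kw] * w).sum()
    return out

def cdc2d(x, w, theta=0.7):
    """Generalized central difference convolution, Eq. (3):
    y = theta * sum_n w_n (x_n - x_0) + (1 - theta) * sum_n w_n x_n
      = vanilla(x, w) - theta * w.sum() * x_0
    theta = 0 recovers vanilla convolution; theta = 1 is pure CDC.
    """
    kh, kw = w.shape
    center = x[kh // 2: x.shape[0] - kh // 2, kw // 2: x.shape[1] - kw // 2]
    return conv2d(x, w) - theta * w.sum() * center

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
w = rng.standard_normal((3, 3))

print(np.allclose(cdc2d(x, w, theta=0.0), conv2d(x, w)))  # True: theta=0 is vanilla
```

The rearranged form is what makes CDC cheap in practice: it costs one extra elementwise term on top of a standard convolution rather than an explicit difference at every kernel position.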
As the feature-level fusion strategy might not be optimal for all protocols, we also try two other fusion strategies: 1) input-level fusion via concatenating the three-modal inputs to 256\u00d7256\u00d79 directly, and 2) score-level fusion via weighting the predicted score from each modality. For these two fusion strategies, the architecture of the single-modal CDCN (see Fig. 3(a)) is used. The corresponding ablation study will be shown in Section 4.4. Figure 3. The architecture of (a) single-modal and (b) multi-modal CDCN. The red thin rectangle denotes a max pool layer with stride 2. \u2018CDC 2 r\u2019 means using two stacked CDC to increase the channel number with ratio r first and then decrease it back to the original channel size. 3.4. Supervision Compared with traditional guidance from a binary scalar score, pixel-wise supervision [9] helps to learn more discriminative patterns between living and spoofing faces. As a result, our network prefers to predict a 32\u00d732 grayscale mask instead of a traditional scalar score. In terms of the ground truth label, we generate the binary mask by simply setting the non-zero pixel values to \u20181\u2019, because the intensity values of the non-face background are already \u20180\u2019 in the CASIA-SURF CeFA dataset.
For the loss function, the mean square error loss LMSE is utilized for pixel-wise supervision, which is formulated as LMSE = (1/(H \u00d7 W)) \u03a3_{i\u2208H, j\u2208W} (Bpre(i,j) \u2212 Bgt(i,j))\u00b2, (4) where H and W denote the height and width of the binary mask, respectively, and Bpre and Bgt denote the predicted grayscale mask and the ground truth binary mask, respectively. Figure 4. The kernel KCDL_n in contrastive depth loss. Moreover, for the sake of the fine-grained supervision needs in the FAS task, the contrastive depth loss (CDL) LCDL [33] is considered to help the networks learn more detailed features. Table 1. Ablation study of the hyperparameter \u03b8 with RGB modality. Single-Modal CDCN Protocol 4@1 Protocol 4@2 Protocol 4@3 Overall APCER(%) BPCER(%) ACER(%) APCER(%) BPCER(%) ACER(%) APCER(%) BPCER(%) ACER(%) ACER(%) \u03b8=0.5 12.61 4.0 8.31 6.67 2.0 4.33 4.56 8.5 6.53 6.39 \u03b8=0.6 11.67 8.0 9.83 10.56 3.0 6.78 3.89 5.0 4.44 7.02 \u03b8=0.7 12.83 1.25 7.04 13.33 2.0 7.67 3.72 3.0 3.36 6.02 \u03b8=0.8 14.33 1.5 7.92 10.0 6.25 8.13 3.83 7.25 5.54 7.19 \u03b8=0.9 11.17 2.5 6.83 21.33 5.75 13.54 3.56 7.5 5.53 8.63 Table 2. Results of Single-Modal CDCN (\u03b8=0.7) with different modalities. Modality Protocol 4@1 Protocol 4@2 Protocol 4@3 Overall APCER(%) BPCER(%) ACER(%) APCER(%) BPCER(%) ACER(%) APCER(%) BPCER(%) ACER(%) ACER(%) RGB 12.83 1.25 7.04 13.33 2.0 7.67 3.72 3.0 3.36 6.02 Depth 5.22 1.25 3.24 2.72 0.5 1.61 4.94 1.75 3.35 2.73 IR 1.56 1.0 1.28 27.72 0.25 13.99 29.56 0.5 15.03 10.1 Table 3. Best submission result in Track Single-Modal (RGB).
Method Protocol 4@1 Protocol 4@2 Protocol 4@3 Overall APCER(%) BPCER(%) ACER(%) APCER(%) BPCER(%) ACER(%) APCER(%) BPCER(%) ACER(%) ACER(%) SD-Net [21] 35.2\u00b15.8 Ours (Single-Modal) 11.17 2.5 6.83 6.67 2.0 4.33 3.72 3.0 3.36 4.84\u00b11.79 CDL can be formulated as LCDL = (1/(H \u00d7 W \u00d7 N)) \u03a3_{i\u2208H, j\u2208W, n\u2208N} (KCDL_n \u2299 Bpre(i,j) \u2212 KCDL_n \u2299 Bgt(i,j))\u00b2, (5) where KCDL_n is the n-th contrastive convolution kernel, and N denotes the number of kernels. The details of the kernels (N = 8) can be found in Fig. 4. Finally, the overall loss Loverall can be formulated as Loverall = LMSE + LCDL. 4. Experiments In this section, extensive experiments are performed to demonstrate the effectiveness of our method. In the following, we sequentially describe the employed datasets & metrics (Sec. 4.1), implementation details (Sec. 4.2), results (Sec. 4.3, 4.4) and visualization (Sec. 4.5). 4.1. Datasets and Metrics CASIA-SURF CeFA Dataset [21]. CASIA-SURF CeFA aims to provide the largest up-to-date face anti-spoofing dataset to allow for the evaluation of generalization performance across ethnicities and attacks. It consists of 2D and 3D attack subsets. The 2D attack subset includes print and video-replay attacks, and three ethnicities (African, East Asian and Central Asian) with two attacks (print face from cloth and video-replay). Each ethnicity has 500 subjects. Each subject has one real sample, two fake samples of print attack captured indoors and outdoors, and one fake sample of video-replay. In total, there are 18000 videos (6000 per modality). There are four evaluation protocols in CASIA-SURF CeFA for cross-ethnicity, cross-attack, cross-modality, and cross-ethnicity & cross-attack testing. In this paper, our experiments are all conducted on the most challenging protocol 4 (cross-ethnicity & cross-attack), which has been utilized for the ChaLearn Face Anti-spoofing Attack Detection Challenge@CVPR2020. Performance Metrics.
Three metrics, i.e., Attack Presentation Classification Error Rate (APCER), Bona Fide Presentation Classification Error Rate (BPCER), and Average Classification Error Rate (ACER) [10], are utilized for performance comparison. They can be formulated as APCER = FP/(TN + FP), BPCER = FN/(FN + TP), ACER = (APCER + BPCER)/2, (6) where FP, FN, TN and TP denote the false positive, false negative, true negative and true positive sample numbers, respectively. ACER is used to determine the final ranking in the ChaLearn Face Anti-spoofing Attack Detection Challenge@CVPR2020. 4.2. Implementation Details Our proposed method is implemented with PyTorch. In the training stage, models are trained with the Adam optimizer, and the initial learning rate and weight decay are 1e-4 and 5e-5, respectively. We train models for 50 epochs, while the learning rate halves every 20 epochs. The batch size is 8 on a P100 GPU. In the testing stage, we calculate the mean value of the predicted grayscale map as the final score. 4.3. Single-Modal Testing In this subsection, we first give an ablation study of the hyperparameter \u03b8 with the RGB modality. Then, based on the optimal \u03b8 for CDCN, we test the depth and IR modalities. Table 4. Ablation study of fusion strategies for multi-modal CDCN. We only report the results tried in the FAS challenge. Modality Protocol 4@1 Protocol 4@2 Protocol 4@3 APCER(%) BPCER(%) ACER(%) APCER(%) BPCER(%) ACER(%) APCER(%) BPCER(%) ACER(%) Feature-level fusion 0.33 0.5 0.42 5.89 3.25 4.57 4.22 3.25 3.74 Input-level fusion 0.5 3.75 2.13 5.67 1.5 3.58 2.61 3.25 2.93 Score-level fusion 1.39 0.75 1.07 1.44 1.75 1.6 Table 5. Best submission result in Track Multi-Modal.
Method Protocol 4@1 Protocol 4@2 Protocol 4@3 Overall APCER(%) BPCER(%) ACER(%) APCER(%) BPCER(%) ACER(%) APCER(%) BPCER(%) ACER(%) ACER(%) PSMM-Net [21] 33.3 15.8 24.5 78.2 8.3 43.2 50.0 5.5 27.7 31.8\u00b110.0 Ours (Multi-Modal) 0.33 0.5 0.42 1.39 0.75 1.07 1.44 1.75 1.6 1.02\u00b10.59 Finally, we summarize our best submission results in Track Single Modal (RGB) on ChaLearn Face Anti-spoo\ufb01ng Attack Detection Challenge@CVPR2020. Impact of \u03b8 with RGB modality. As shown in Table 1, the best overall performance (ACER=6.02%) is achieved when \u03b8 = 0.7, which is consistent with the evidence in [39]. As for the sub-protocols, \u03b8 = 0.9, \u03b8 = 0.5 and \u03b8 = 0.7 obtain the lowest ACER in protocol 4@1 (6.83%), 4@2 (4.33%) and 4@3 (3.36%), respectively. Results of Depth and IR modalities. Table 2 shows the results of different modalities using single-modal CDCN when \u03b8 = 0.7. It is surprising that the performance varies a lot across modalities. The IR modality performs the best in protocol 4@1 (testing without Africa) but the worst in protocol 4@2 and 4@3 (testing with Africa), indicating that the IR modality generalizes poorly for unseen Africa ethnicity. Compared with RGB and IR modalities, the depth modality is more robust and discriminative in most cases (e.g., print attacks in testing stage) because the 3D depth shape is quite distinguishable between living and print faces. The excellent overall performance indicates central difference convolution is not only suitable for RGB modality, but also for IR and depth modalities. Best Submission Result in Track Single-Modal (RGB). Our best submission result (4.84\u00b11.79% ACER) is shown in Table 3, which wins the second place in Track Single-Modal (RGB) on ChaLearn Face Anti-spoo\ufb01ng Attack Detection Challenge@CVPR2020. This \ufb01nal result is combined with the best sub-protocols results (i.e., \u03b8 =0.9, 0.5 and 0.7, respectively). 4.4. 
Multi-Modal Testing
In this subsection, three fusion strategies are studied in multi-modal testing. Then the best submission result in Track Multi-Modal is presented.

Multi-Modal Fusion Strategies. As shown in Table 4, our proposed multi-modal CDCN (i.e., feature-level fusion with three modalities) achieves the lowest ACER (0.42%) in protocol 4@1. When using the concatenated inputs of three modalities (input-level fusion), the CDCN obtains performance comparable with the single-modal results in Table 2. However, it still causes performance drops compared with the best single-modal results (i.e., the IR modality for protocol 4@1, the depth modality for protocols 4@2 and 4@3). This also reflects an issue shared by feature- and input-level fusion: simple fusion with concatenation might be sub-optimal because it is weak in representing and selecting the importance of each modality. It is worth exploring more effective fusion methods (e.g., attention mechanisms over modalities) in the future.

Based on the prior results in Table 2, we average the scores of the RGB and depth modalities as the score-level fusion (i.e., fusion_score = 0.5 * RGB_score + 0.5 * depth_score). As shown in Table 4 (the third row), this simple ensemble strategy helps to boost the performance significantly. Compared with the single depth modality, score-level fusion gives 0.54% and 1.13% ACER improvements for protocols 4@2 and 4@3, respectively.

Best Submission Result in Track Multi-Modal. Table 5 shows our best submission result (1.02±0.59% ACER), which wins first place in Track Multi-Modal of the ChaLearn FAS Attack Detection Challenge@CVPR2020. This final result combines the best sub-protocol results (i.e., feature-level fusion for protocol 4@1 and score-level fusion for protocols 4@2 and 4@3).

4.5. Feature Visualization
The visualizations of CDCN with three modalities are shown in Fig. 5.
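As a concrete illustration, the metrics of Eq. (6) and the score-level fusion rule above can be written in a few lines of Python (a minimal sketch; the function names are ours, not from the challenge code):

```python
def apcer_bpcer_acer(tp, tn, fp, fn):
    """Eq. (6): APCER = FP/(TN+FP), BPCER = FN/(FN+TP), ACER = their mean."""
    apcer = fp / (tn + fp)
    bpcer = fn / (fn + tp)
    acer = (apcer + bpcer) / 2
    return apcer, bpcer, acer

def score_level_fusion(rgb_score, depth_score):
    """Score-level fusion: equal-weight average of the two modality scores."""
    return 0.5 * rgb_score + 0.5 * depth_score
```

For example, 5 false positives against 95 true negatives and 10 false negatives against 90 true positives give APCER = 0.05, BPCER = 0.10 and ACER = 0.075.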
On one hand, it is clear that the low-level, mid-level and high-level features in CDCN are distinguishable between living and spoofing faces in all three modalities. In terms of low-level features, the living faces have more detailed texture (especially in the IR modality). As for the high-level features, the living face regions are purer and plainer, while the spoofing ones carry more spoofing/noise patterns. On the other hand, the depth and IR modalities are complementary to the RGB modality and helpful for robust liveness detection. We can see from the last row in Fig. 5 that CDCN fails to detect spoofing face 1 using only the RGB input, while it could be accurately detected from the depth or IR inputs.

Figure 5. Visualization of CDCN with three modalities."
  },
  {
    "url": "http://arxiv.org/abs/2003.04092v1",
    "title": "Searching Central Difference Convolutional Networks for Face Anti-Spoofing",
    "abstract": "Face anti-spoofing (FAS) plays a vital role in face recognition systems. Most\nstate-of-the-art FAS methods 1) rely on stacked convolutions and\nexpert-designed network, which is weak in describing detailed fine-grained\ninformation and easily being ineffective when the environment varies (e.g.,\ndifferent illumination), and 2) prefer to use long sequence as input to extract\ndynamic features, making them difficult to deploy into scenarios which need\nquick response. Here we propose a novel frame level FAS method based on Central\nDifference Convolution (CDC), which is able to capture intrinsic detailed\npatterns via aggregating both intensity and gradient information. A network\nbuilt with CDC, called the Central Difference Convolutional Network (CDCN), is\nable to provide more robust modeling capacity than its counterpart built with\nvanilla convolution.
Furthermore, over a specifically designed CDC search\nspace, Neural Architecture Search (NAS) is utilized to discover a more powerful\nnetwork structure (CDCN++), which can be assembled with Multiscale Attention\nFusion Module (MAFM) for further boosting performance. Comprehensive\nexperiments are performed on six benchmark datasets to show that 1) the\nproposed method not only achieves superior performance on intra-dataset testing\n(especially 0.2% ACER in Protocol-1 of OULU-NPU dataset), 2) it also\ngeneralizes well on cross-dataset testing (particularly 6.5% HTER from\nCASIA-MFSD to Replay-Attack datasets). The codes are available at\n\href{https://github.com/ZitongYu/CDCN}{https://github.com/ZitongYu/CDCN}.",
    "authors": "Zitong Yu, Chenxu Zhao, Zezheng Wang, Yunxiao Qin, Zhuo Su, Xiaobai Li, Feng Zhou, Guoying Zhao",
    "published": "2020-03-09",
    "updated": "2020-03-09",
    "primary_cat": "cs.CV",
    "cats": [
        "cs.CV"
    ],
    "main_content": "1. Introduction
Face recognition has been widely used in many interactive artificial intelligence systems for its convenience. However, vulnerability to presentation attacks (PA) curtails its reliable deployment. Merely presenting printed images or videos to the biometric sensor could fool face recognition systems. Typical examples of presentation attacks are print, video replay, and 3D masks.

Figure 1. Feature response of vanilla convolution (VanillaConv) and central difference convolution (CDC) for spoofing faces in shifted domains (illumination & input camera). VanillaConv fails to capture the consistent spoofing pattern while CDC is able to extract the invariant detailed spoofing features, e.g., lattice artifacts.
For the reliable use of face recognition systems, face anti-spoofing (FAS) methods are important to detect such presentation attacks. In recent years, several hand-crafted-feature-based [7, 8, 15, 29, 45, 44] and deep-learning-based [49, 64, 36, 26, 62, 4, 19, 20] methods have been proposed for presentation attack detection (PAD). On one hand, the classical hand-crafted descriptors (e.g., local binary pattern (LBP) [7]) leverage local relationships among the neighbours as the discriminative features, which is robust for describing the detailed invariant information (e.g., color texture, moiré pattern and noise artifacts) between living and spoofing faces. On the other hand, due to the stacked convolution operations with nonlinear activation, convolutional neural networks (CNN) hold strong representation abilities to distinguish the bona fide and PA. However, CNN-based methods focus on deeper semantic features, which are weak in describing detailed fine-grained information between living and spoofing faces and easily become ineffective when the environment varies (e.g., different light illumination). How to integrate local descriptors with the convolution operation for robust feature representation is worth exploring.

Most recent deep-learning-based FAS methods are usually built upon image-classification backbones [61, 62, 20], such as VGG [54], ResNet [22] and DenseNet [23]. The networks are usually supervised by binary cross-entropy loss, which easily learns arbitrary patterns such as the screen bezel instead of the nature of the spoofing patterns. In order to solve this issue, several depth-supervised FAS methods [4, 36], which utilize the pseudo depth map label as an auxiliary supervision signal, have been developed. However, all these network architectures are carefully designed by human experts, which might not be optimal for the FAS task.
Hence, automatically discovering best-suited networks for the FAS task with auxiliary depth supervision should be considered.

Most existing state-of-the-art FAS methods [36, 56, 62, 32] need multiple frames as input to extract dynamic spatio-temporal features (e.g., motion [36, 56] and rPPG [62, 32]) for PAD. However, long video sequences may not be suitable for specific deployment conditions where the decision needs to be made quickly. Hence, frame-level PAD approaches are advantageous from the usability point of view despite inferior performance compared with video-level methods. Designing high-performing frame-level methods is crucial for real-world FAS applications.

Motivated by the discussions above, we propose a novel convolution operator called Central Difference Convolution (CDC), which is good at describing fine-grained invariant information. As shown in Fig. 1, CDC is more likely than vanilla convolution to extract intrinsic spoofing patterns (e.g., lattice artifacts) in diverse environments. Furthermore, over a specifically designed CDC search space, Neural Architecture Search (NAS) is utilized to discover excellent frame-level networks for the depth-supervised face anti-spoofing task. Our contributions include:

• We design a novel convolution operator called Central Difference Convolution (CDC), which is suitable for the FAS task due to its remarkable representation ability for invariant fine-grained features in diverse environments. Without introducing any extra parameters, CDC can replace the vanilla convolution and plug and play in existing neural networks to form Central Difference Convolutional Networks (CDCN) with more robust modeling capacity.

• We propose CDCN++, an extended version of CDCN, consisting of the searched backbone network and a Multiscale Attention Fusion Module (MAFM) for aggregating the multi-level CDC features effectively.
• To our best knowledge, this is the first approach that searches neural architectures for the FAS task. Different from previous classification-task-based NAS supervised by softmax loss, we search well-suited frame-level networks for the depth-supervised FAS task over a specifically designed CDC search space.

• Our proposed method achieves state-of-the-art performance on all six benchmark datasets with both intra- as well as cross-dataset testing.

2. Related Work
Face Anti-Spoofing. Traditional face anti-spoofing methods usually extract hand-crafted features from the facial images to capture the spoofing patterns. Several classical local descriptors such as LBP [7, 15], SIFT [44], SURF [9], HOG [29] and DoG [45] are utilized to extract frame-level features, while video-level methods usually capture dynamic clues like dynamic texture [28], micro-motion [53] and eye blinking [41]. More recently, a few deep-learning-based methods have been proposed for both frame-level and video-level face anti-spoofing. For frame-level methods [30, 43, 20, 26], pre-trained deep CNN models are fine-tuned to extract features in a binary-classification setting. In contrast, auxiliary depth-supervised FAS methods [4, 36] are introduced to learn more detailed information effectively. On the other hand, several video-level CNN methods are presented to exploit the dynamic spatio-temporal [56, 62, 33] or rPPG [31, 36, 32] features for PAD. Despite achieving state-of-the-art performance, video-level deep-learning-based methods need long sequences as input. In addition, compared with traditional descriptors, CNNs overfit easily and are hard to generalize well to unseen scenes.

Convolution Operators. The convolution operator is commonly used for extracting basic visual features in deep learning frameworks. Recently, extensions to the vanilla convolution operator have been proposed.
In one direction, classical local descriptors (e.g., LBP [2] and Gabor filters [25]) are considered in convolution design. Representative works include Local Binary Convolution [27] and Gabor Convolution [38], which are proposed for saving computational cost and enhancing the resistance to spatial changes, respectively. Another direction is to modify the spatial scope for aggregation. Two related works are dilated convolution [63] and deformable convolution [14]. However, these convolution operators may not be suitable for the FAS task because of their limited representation capacity for invariant fine-grained features.

Neural Architecture Search. Our work is motivated by recent research on NAS [11, 17, 35, 47, 68, 69, 60], while we focus on searching for a depth-supervised model with high performance instead of a binary classification model for the face anti-spoofing task. There are three main categories of existing NAS methods: 1) reinforcement-learning based [68, 69], 2) evolution-algorithm based [51, 52], and 3) gradient based [35, 60, 12]. Most NAS approaches search networks on a small proxy task and transfer the found architecture to another large target task. From the perspective of computer vision applications, NAS has been developed for face recognition [67], action recognition [46], person ReID [50], object detection [21] and segmentation [65] tasks. To the best of our knowledge, no NAS-based method has ever been proposed for the face anti-spoofing task. In order to overcome the above-mentioned drawbacks and fill in this blank, we search frame-level CNNs over a specially designed search space with the newly proposed convolution operator for the depth-supervised FAS task.

3.
Methodology
In this section, we will first introduce our Central Difference Convolution in Section 3.1, then introduce the Central Difference Convolutional Networks (CDCN) for face anti-spoofing in Section 3.2, and at last present the searched networks with attention mechanism (CDCN++) in Section 3.3.

3.1. Central Difference Convolution
In modern deep learning frameworks, the feature maps and convolutions are represented in 3D shape (2D spatial domain plus an extra channel dimension). As the convolution operation remains the same across the channel dimension, for simplicity, in this subsection the convolutions are described in 2D, while the extension to 3D is straightforward.

Vanilla Convolution. As 2D spatial convolution is the basic operation in CNNs for vision tasks, here we denote it as vanilla convolution and review it shortly first. There are two main steps in the 2D convolution: 1) sampling the local receptive field region R over the input feature map x; 2) aggregation of the sampled values via weighted summation. Hence, the output feature map y can be formulated as

y(p_0) = Σ_{p_n ∈ R} w(p_n) · x(p_0 + p_n),  (1)

where p_0 denotes the current location on both input and output feature maps while p_n enumerates the locations in R. For instance, the local receptive field region for a convolution operation with 3×3 kernel and dilation 1 is R = {(−1, −1), (−1, 0), ..., (0, 1), (1, 1)}.

Vanilla Convolution Meets Central Difference. Inspired by the famous local binary pattern (LBP) [7], which describes local relations in a binary central difference way, we also introduce central difference into vanilla convolution to enhance its representation and generalization capacity. Similarly, central difference convolution also consists of two steps, i.e., sampling and aggregation. The sampling step is similar to that in vanilla convolution, while the aggregation step is different: as illustrated in Fig.
2, central difference convolution prefers to aggregate the center-oriented gradient of the sampled values.

Figure 2. Central difference convolution.

Eq. (1) becomes

y(p_0) = Σ_{p_n ∈ R} w(p_n) · (x(p_0 + p_n) − x(p_0)).  (2)

When p_n = (0, 0), the gradient value always equals zero with respect to the central location p_0 itself.

For the face anti-spoofing task, both the intensity-level semantic information and the gradient-level detailed message are crucial for distinguishing living and spoofing faces, which indicates that combining vanilla convolution with central difference convolution might be a feasible manner to provide more robust modeling capacity. Therefore we generalize central difference convolution as

y(p_0) = θ · Σ_{p_n ∈ R} w(p_n) · (x(p_0 + p_n) − x(p_0))  [central difference convolution]
       + (1 − θ) · Σ_{p_n ∈ R} w(p_n) · x(p_0 + p_n)  [vanilla convolution],  (3)

where the hyperparameter θ ∈ [0, 1] trades off the contribution between intensity-level and gradient-level information. A higher value of θ places more importance on the central difference gradient information. We will henceforth refer to this generalized Central Difference Convolution as CDC, which should be easy to identify according to its context.

Implementation for CDC. In order to efficiently implement CDC in modern deep learning frameworks, we decompose and merge Eq. (3) into the vanilla convolution with an additional central difference term:

y(p_0) = Σ_{p_n ∈ R} w(p_n) · x(p_0 + p_n)  [vanilla convolution]
       + θ · (−x(p_0) · Σ_{p_n ∈ R} w(p_n))  [central difference term].  (4)

According to Eq. (4), CDC can be easily implemented by a few lines of code in PyTorch [42] and TensorFlow [1]. The derivation of Eq. (4) and the code based on PyTorch are shown in Appendix A.

Relation to Prior Work.
Here we discuss the relations between CDC and vanilla convolution, local binary convolution [27] and Gabor convolution [38], which share a similar design philosophy but with different focuses. The ablation study in Section 4.3 shows the superior performance of CDC for the face anti-spoofing task.

Relation to Vanilla Convolution. CDC is more generalized. It can be seen from Eq. (3) that vanilla convolution is a special case of CDC when θ = 0, i.e., aggregating local intensity information without the gradient message.

Relation to Local Binary Convolution [27]. Local binary convolution (LBConv) focuses on computational reduction, so its modeling capacity is limited. CDC focuses on enhancing rich detailed feature representation capacity without any additional parameters. On the other side, LBConv uses pre-defined filters to describe the local feature relation, while CDC can learn these filters automatically.

Relation to Gabor Convolution [38]. Gabor convolution (GaborConv) is devoted to enhancing the representation capacity for spatial transformations (i.e., orientation and scale changes), while CDC focuses more on representing fine-grained robust features in diverse environments.

3.2. CDCN
Depth-supervised face anti-spoofing methods [36, 4] take advantage of the discrimination between spoofing and living faces based on 3D shape, and provide pixel-wise detailed information for the FAS model to capture spoofing cues. Motivated by this, a similar depth-supervised network [36] called "DepthNet" is built up as the baseline in this paper. In order to extract more fine-grained and robust features for estimating the facial depth map, CDC is introduced to form the Central Difference Convolutional Networks (CDCN). Note that DepthNet is the special case of the proposed CDCN when θ = 0 for all CDC operators. The details of CDCN are shown in Table 1.
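Eq. (4) shows why CDC is cheap to implement: it is a vanilla convolution plus one extra term. The following pure-Python sketch for a single 3×3 location (our illustration only; the authors' actual PyTorch implementation is in their Appendix A) checks that the decomposed form of Eq. (4) matches the direct form of Eq. (3):

```python
def cdc_output(patch, w, theta=0.7):
    """CDC response at one location p0, direct form of Eq. (3):
    theta-weighted central difference aggregation plus a (1 - theta)-weighted
    vanilla aggregation. patch and w are 3x3 lists; patch is the input
    neighborhood centered at p0."""
    center = patch[1][1]
    vanilla = sum(w[i][j] * patch[i][j] for i in range(3) for j in range(3))
    central = sum(w[i][j] * (patch[i][j] - center)
                  for i in range(3) for j in range(3))
    return theta * central + (1 - theta) * vanilla

def cdc_output_decomposed(patch, w, theta=0.7):
    """Equivalent decomposed form of Eq. (4): vanilla convolution plus the
    central difference term theta * (-x(p0) * sum of kernel weights).
    Expanding Eq. (3) recombines the two vanilla parts, so no (1 - theta)
    factor remains here."""
    center = patch[1][1]
    vanilla = sum(w[i][j] * patch[i][j] for i in range(3) for j in range(3))
    w_sum = sum(w[i][j] for i in range(3) for j in range(3))
    return vanilla + theta * (-center * w_sum)
```

With theta = 0 both forms reduce to the vanilla convolution of Eq. (1), consistent with DepthNet being the special case of CDCN.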
Given a single RGB facial image with size 3 × 256 × 256, multi-level (low-level, mid-level and high-level) fused features are extracted for predicting the grayscale facial depth with size 32 × 32. We use θ = 0.7 as the default setting, and an ablation study about θ is shown in Section 4.3. For the loss function, mean square error loss L_MSE is utilized for pixel-wise supervision. Moreover, for the sake of the fine-grained supervision needs of the FAS task, the contrastive depth loss L_CDL [56] is considered to help the networks learn more detailed features. So the overall loss can be formulated as L_overall = L_MSE + L_CDL.

Table 1. Architecture of DepthNet and CDCN. Inside the brackets are the filter sizes and feature dimensionalities. "conv" and "CDC" suggest vanilla and central difference convolution, respectively. All convolutional layers are with stride=1 and followed by a BN-ReLU layer, while max pool layers are with stride=2.

Level    | Output    | DepthNet [36]                                                       | CDCN (θ = 0.7)
         | 256 × 256 | 3 × 3 conv, 64                                                      | 3 × 3 CDC, 64
Low      | 128 × 128 | [3 × 3 conv, 128; 3 × 3 conv, 196; 3 × 3 conv, 128; 3 × 3 max pool] | [3 × 3 CDC, 128; 3 × 3 CDC, 196; 3 × 3 CDC, 128; 3 × 3 max pool]
Mid      | 64 × 64   | [3 × 3 conv, 128; 3 × 3 conv, 196; 3 × 3 conv, 128; 3 × 3 max pool] | [3 × 3 CDC, 128; 3 × 3 CDC, 196; 3 × 3 CDC, 128; 3 × 3 max pool]
High     | 32 × 32   | [3 × 3 conv, 128; 3 × 3 conv, 196; 3 × 3 conv, 128; 3 × 3 max pool] | [3 × 3 CDC, 128; 3 × 3 CDC, 196; 3 × 3 CDC, 128; 3 × 3 max pool]
         | 32 × 32   | [concat (Low, Mid, High), 384]                                      | [concat (Low, Mid, High), 384]
         | 32 × 32   | [3 × 3 conv, 128; 3 × 3 conv, 64; 3 × 3 conv, 1]                    | [3 × 3 CDC, 128; 3 × 3 CDC, 64; 3 × 3 CDC, 1]
# params |           | 2.25 × 10^6                                                         | 2.25 × 10^6

3.3. CDCN++
It can be seen from Table 1 that the architecture of CDCN is designed coarsely (e.g., simply repeating the same block structure for different levels), which might be sub-optimal for the face anti-spoofing task. Inspired by the classical visual object understanding models [40], we propose an extended version, CDCN++ (see Fig. 5), which consists of a NAS-based backbone and a Multiscale Attention Fusion Module (MAFM) with selective attention capacity.

Search Backbone for FAS task. Our searching algorithm is based on two gradient-based NAS methods [35, 60], and more technical details can be found in the original papers. Here we mainly state the new contributions about searching a backbone for the FAS task. As illustrated in Fig. 3(a), the goal is to search for cells in three levels (low-level, mid-level and high-level) to form a network backbone for the FAS task. Inspired by the dedicated neurons for hierarchical organization in the human visual system [40], we prefer to search these multi-level cells freely (i.e., cells with varied structures), which is more flexible and generalized. We name this configuration "Varied Cells" and will study its impact in Sec. 4.3 (see Tab. 2). Different from previous works [35, 60], we adopt only one output of the latest incoming cell as the input of the current cell. As for the cell-level structure, Fig. 3(b) shows that each cell is represented as a directed acyclic graph (DAG) of N nodes {x_i}_{i=0}^{N-1}, where each node represents a network layer. We denote the operation space as O, and Fig. 3(c) shows eight designed candidate operations (none, skip-connect and CDCs).
Each edge (i, j) of the DAG represents the information flow from node x_i to node x_j, which consists of the candidate operations weighted by the architecture parameter α^(i,j). Specially, each edge (i, j) can be formulated by a function õ^(i,j), where õ^(i,j)(x_i) = Σ_{o ∈ O} η_o^(i,j) · o(x_i). The softmax function is utilized to relax the architecture parameter α^(i,j) into the operation weights η_o^(i,j), o ∈ O, that is η_o^(i,j) = exp(α_o^(i,j)) / Σ_{o′ ∈ O} exp(α_{o′}^(i,j)). The intermediate node can be denoted as x_j = Σ_{i<j} õ^(i,j)(x_i).
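The continuous relaxation above (operation weights η obtained by a softmax over the architecture parameters α, then a weighted sum of the candidate operations on each edge) can be sketched as follows; the toy three-operation set stands in for the paper's eight candidates and is our illustration only:

```python
import math

def softmax(alphas):
    """eta_o = exp(alpha_o) / sum over o' of exp(alpha_o')."""
    exps = [math.exp(a) for a in alphas]
    total = sum(exps)
    return [e / total for e in exps]

def mixed_operation(x, alphas, ops):
    """Edge function o~(x): softmax-weighted sum of candidate operations o(x)."""
    return sum(eta * op(x) for eta, op in zip(softmax(alphas), ops))
```

For example, with ops = [zero, identity, a stand-in for a CDC layer] and equal α, every operation contributes a third of the edge output; as one α dominates during optimization, the edge collapses toward a single operation.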