diff --git "a/abs_29K_G/test_abstract_long_2405.01280v1.json" "b/abs_29K_G/test_abstract_long_2405.01280v1.json"
new file mode 100644
--- /dev/null
+++ "b/abs_29K_G/test_abstract_long_2405.01280v1.json"
@@ -0,0 +1,430 @@
+{ + "url": "http://arxiv.org/abs/2405.01280v1", + "title": "Reinforcement Learning for Edit-Based Non-Autoregressive Neural Machine Translation", + "abstract": "Non-autoregressive (NAR) language models are known for their low latency in\nneural machine translation (NMT). However, a performance gap exists between NAR\nand autoregressive models due to the large decoding space and difficulty in\ncapturing dependency between target words accurately. Compounding this,\npreparing appropriate training data for NAR models is a non-trivial task, often\nexacerbating exposure bias. To address these challenges, we apply reinforcement\nlearning (RL) to Levenshtein Transformer, a representative edit-based NAR\nmodel, demonstrating that RL with self-generated data can enhance the\nperformance of edit-based NAR models. We explore two RL approaches: stepwise\nreward maximization and episodic reward maximization. We discuss the respective\npros and cons of these two approaches and empirically verify them. Moreover, we\nexperimentally investigate the impact of temperature setting on performance,\nconfirming the importance of proper temperature setting for NAR models'\ntraining.", + "authors": "Hao Wang, Tetsuro Morimura, Ukyo Honda, Daisuke Kawahara", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "Model AND Based AND Reinforcement AND Learning", + "gt": "Non-autoregressive (NAR) language models are known for their low latency in\nneural machine translation (NMT). However, a performance gap exists between NAR\nand autoregressive models due to the large decoding space and difficulty in\ncapturing dependency between target words accurately. Compounding this,\npreparing appropriate training data for NAR models is a non-trivial task, often\nexacerbating exposure bias. To address these challenges, we apply reinforcement\nlearning (RL) to Levenshtein Transformer, a representative edit-based NAR\nmodel, demonstrating that RL with self-generated data can enhance the\nperformance of edit-based NAR models. We explore two RL approaches: stepwise\nreward maximization and episodic reward maximization. We discuss the respective\npros and cons of these two approaches and empirically verify them. Moreover, we\nexperimentally investigate the impact of temperature setting on performance,\nconfirming the importance of proper temperature setting for NAR models'\ntraining.", + "main_content": "Introduction Non-autoregressive (NAR) language models (Gu et al., 2018) generate translations in parallel, enabling faster inference and having the potential for real-time translation applications. However, despite their computational efficiency, NAR models have been observed to underperform autoregressive (AR) models due to the challenges posed by the large decoding space and difficulty in capturing dependency between target words accurately (Gu et al., 2018). To bridge the performance gap, many NAR architectures and training methods have been proposed, including edit-based models like Insertion Transformer (Stern et al., 2019) and Levenshtein Transformer (Gu et al., 2019). Prior research has also explored knowledge distillation
(Ghazvininejad et al., 2019), which is effective but introduces additional complexity. (*Work done during internship at CyberAgent AI Lab.) Unlike AR models, preparing teacher data and designing appropriate training objectives have always been challenging for NAR models (Li et al., 2023). Teacher forcing with inappropriate teacher data may exacerbate the exposure bias problem (Ranzato et al., 2016), affecting model performance. Reinforcement learning (RL) is known for its ability to tackle the exposure bias (Ranzato et al., 2016) and alleviate the objective mismatch issue (Ding and Soricut, 2017). Despite its importance, explorations of RL for NAR are still scarce. Shao et al. (2021) proposed a method for reducing the estimation variance. However, this method is only applicable to NAR models with a fixed output length, which is unsuitable for edit-based models. In this paper, we empirically analyze conditions for performance improvement in applying RL to edit-based NAR models in neural machine translation (NMT). Specifically, we focus on Levenshtein Transformer (LevT) (Gu et al., 2019), a prominent edit-based NAR architecture that has shown promise in reducing decoding latency and allowing flexible length adjustment. We demonstrate that RL with self-generated data significantly improves LevT\u2019s performance. Importantly, our methods are orthogonal to existing research on NAR architectures, indicating potential for widespread applicability. We explore two RL approaches: stepwise reward maximization, which computes rewards after each edit operation, and episodic reward maximization, which only computes rewards after all generations are completed. We analyze these two approaches\u2019 respective advantages and disadvantages and empirically verify them. Furthermore, through a series of experiments, we investigate the impact of temperature settings on softmax sampling, aiming to identify the optimal temperature that strikes a balance between exploration and exploitation during the RL training process. 2 Background Reinforcement Learning Reinforcement learning has been widely applied to improve the performance of AR NMT models (Ranzato et al., 2016; Bahdanau et al., 2016; Wu et al., 2016) because of its ability to train models to optimize non-differentiable score functions and tackle the exposure bias problem (Ranzato et al., 2016). In practice, REINFORCE (Williams, 1992) with a baseline is commonly used for estimating the policy gradient, which can be computed as follows: $\\nabla_{\\theta} L(\\theta) \\approx -(r(y) - b(s)) \\nabla_{\\theta} \\log \\pi_{\\theta}(y \\mid s)$, (1) where r is the reward function, b is the baseline, and y is a sample from policy \u03c0\u03b8 at state s. Softmax with Temperature In the domain of RL, we need to consider the exploration-exploitation trade-off (Sutton and Barto, 2018), where the temperature \u03c4 is an important parameter. \u03c4 is used to control the softness of the softmax distribution, $p_i = \\frac{\\exp(y_i/\\tau)}{\\sum_{j} \\exp(y_j/\\tau)}$. (2) A larger \u03c4 leads to a more uniform distribution, promoting exploration, while a smaller \u03c4 creates a more peaky distribution, emphasizing exploitation. Kiegeland and Kreutzer (2021) show that training with an increased temperature can mitigate the peakiness effect caused by RL (Choshen et al., 2020), indicating that a suitable temperature is significant for RL training in NMT.
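To make the role of \u03c4 concrete, the following is a minimal sketch of temperature-scaled softmax sampling as in Equation 2. It is our own illustration (PyTorch, batched logits), not code from the paper:

```python
import torch

def sample_with_temperature(logits: torch.Tensor, tau: float) -> torch.Tensor:
    """Sample token ids from a temperature-scaled softmax (Equation 2).

    logits: (batch, vocab) unnormalized scores y_i.
    tau:    temperature; a larger tau flattens the distribution (more
            exploration), a smaller tau sharpens it (more exploitation).
    """
    probs = torch.softmax(logits / tau, dim=-1)  # p_i = exp(y_i/tau) / sum_j exp(y_j/tau)
    return torch.multinomial(probs, num_samples=1).squeeze(-1)
```

The temperature schedules discussed later in the Approaches section simply vary the tau argument over training steps.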
RL for NAR Compared to AR methods, studies of reinforcement learning for NAR remain underexplored. Shao et al. (2021) proposed a method to reduce the estimation variance of REINFORCE by fixing the predicted word at position t and sampling the words at other positions n times. However, this method is only applicable to models with a fixed length, which is unsuitable for edit-based models. Levenshtein Transformer Levenshtein Transformer (Gu et al., 2019) is an NAR model based on three edit operations: delete tokens, insert placeholders, and replace placeholders with new tokens. It uses a supervised dual-policy learning algorithm to minimize the Levenshtein distance (Levenshtein, 1965) for training and greedy sampling for decoding. The decoding stops when two consecutive refinement iterations return the same output or a maximum number of iterations (set to 10) is reached. We illustrate the decoding process in Figure 1. [Figure 1: The illustration of Levenshtein Transformer\u2019s decoding process (Gu et al., 2019). In each decoding iteration, three edit operations are performed sequentially: delete tokens, insert placeholders, and replace placeholders with new tokens.] LevT\u2019s dual-policy learning generates teacher data by corrupting the ground truth and reconstructing it with its adversary policy. This mechanism not only offers a unique approach to data generation but also underscores the inherent difficulty in preparing teacher data. It raises concerns regarding exposure bias, particularly whether the training process can stay consistent with the text the model actually produces during decoding. To address this issue, we employ RL approaches that use self-generated data for training. 3 Approaches In this section, we present our reinforcement learning approaches in detail. We train a Levenshtein Transformer model as our baseline using the dual-policy learning algorithm. Based on it, we introduce two distinct RL approaches within the REINFORCE framework: stepwise reward maximization and episodic reward maximization. Moreover, we present our methods for temperature control. [Figure 2: The illustration of the two RL approaches. (A) is the stepwise reward maximization, which randomly samples from a previous node for each edit operation and calculates BLEU and the RL gradient after each edit operation (except for the insert operation, since it is not easy to calculate BLEU after inserting placeholders). (B) is the episodic reward maximization, where each sample is edited multiple times in a linear fashion, without branching into different paths, and BLEU and the RL gradient are calculated only after the completion of all edit operations. At every orange node, we sample k times from this node (in this example, the sample size k is 2).] Stepwise Reward Maximization General RL training methods for AR NMT models are all episodic (in this context, \u201cepisodic\u201d denotes training based on entirely generated sequences), as it is difficult to calculate BLEU (Papineni et al., 2002) when the sentence is not fully generated. In contrast, NAR models can calculate BLEU on the outputs at each decoding step. From the perspective of estimating a more accurate gradient, we propose stepwise reward maximization, which calculates a reward for each edit operation (in practice, since it is not easy to calculate BLEU after inserting placeholders, we consider placeholder insertion and token replacement as one edit operation) using the score difference from one edit earlier. Since every step\u2019s reward is calculated separately, this approach should be easier to learn than episodic approaches (Sutton and Barto, 2018). However, it is also more prone to learning bias since the editing process is inherently multi-step.
This drawback should not be overstated, since maximizing the reward at each step will likely also maximize the episodic reward in NAR models\u2019 training. We use a leave-one-out baseline (Luo, 2020) for b(s) in Equation 1 instead of the greedy baseline proposed in SCST (Rennie et al., 2017), because greedy decoding is too strong in LevT, which makes gaining positive rewards in SCST difficult and may reduce learning efficiency. For each edit, we sample k actions from the policy at this point. Then, we calculate the baseline as follows: $b_i(s) = \\frac{1}{k-1} \\sum_{j \\neq i} r(y_j)$, (3) where $y_j$ is the jth sample from the current policy. The final RL gradient estimation becomes $\\nabla_{\\theta} L(\\theta) \\approx -(r(y_i) - b_i(s)) \\nabla_{\\theta} \\log \\pi_{\\theta}(y_i \\mid s)$. (4) In a straightforward implementation, one might consider applying sampling again to all k samples from the last edit. However, this would cause a combinatorial explosion as the number of edit operations increases. Practically, we randomly choose one sample from the previous edit to perform the subsequent operations. We show an illustration of the sampling process in (A) of Figure 2 and pseudocode of our algorithm in Appendix A. Episodic Reward Maximization We also introduce episodic reward maximization, which calculates rewards only once for each sample and gives all actions the same weight. It is the more traditional way to train NMT models with RL. It allows unbiased learning but may not be as efficient. We use the leave-one-out baseline for the episodic reward as well as the stepwise reward. We sample k samples from the initial input. Each sample is edited multiple times without branching. After the final edit, we calculate the rewards and baselines. We show an illustration of the sampling process in (B) of Figure 2 and pseudocode of our algorithm in Appendix B.
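Because the full pseudocode lives in the appendices, here is a minimal self-contained sketch of the leave-one-out baseline and the resulting REINFORCE loss (Equations 3 and 4). The signature and the assumption that sentence-level BLEU rewards are precomputed are ours, not the authors' Fairseq implementation:

```python
import torch

def leave_one_out_reinforce_loss(rewards: torch.Tensor,
                                 log_probs: torch.Tensor) -> torch.Tensor:
    """REINFORCE with a leave-one-out baseline (Equations 3 and 4).

    rewards:   (k,) sentence-level BLEU of the k samples drawn from the
               current policy at this state.
    log_probs: (k,) sum of log pi_theta over the tokens/edits of each sample.
    """
    k = rewards.size(0)
    # b_i = mean of the other k - 1 rewards (Equation 3).
    baselines = (rewards.sum() - rewards) / (k - 1)
    advantages = rewards - baselines
    # Minimizing this loss follows the gradient estimate in Equation 4.
    return -(advantages.detach() * log_probs).mean()
```

In the stepwise variant this would be invoked once per edit operation with per-step BLEU differences as rewards; in the episodic variant it is invoked once with the final BLEU scores.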
We can apply an inverted annealing schedule to gradually increase the temperature from 0.1 to 1, guaranteeing stable training in the early stages and gradually increasing the exploration space for efficient training. The temperature function can be written as follows: \u03c4i+1 = min(\u03c4i/exp(\u2212log(\u03c4T /\u03c40) T ), \u03c4T ). (6) In each decoding iteration, multiple edit operations occur, and each operation has a different decoding space size. It may be beneficial to optimize this by using varying temperatures for each operation in every iteration. This is a complicated research question and we leave this exploration to future work. 4 Experiments 4.1 Experimental Setup Data & Evaluation We use WMT\u201914 EnglishGerman (EN-DE) (Bojar et al., 2014) and WAT\u201917 English-Japanese (EN-JA) Small-NMT datasets (Nakazawa et al., 2017) for experiments. We use BPE token-based BLEU scores for evaluations. Data preprocessing follows Gu et al. (2019). Baseline We use Levenshtein Transformer as our baseline. Following Gu et al. (2019), we trained a LevT with 300K steps and a max batch size of 65,536 tokens per step. However, like Reid et al. (2023), we cannot reproduce the results of Gu et al. (2019). We use our results in this paper. RL According to Gu et al. (2019), most decodings are gotten in 1-4 iterations, and the average number of decoding iterations is 2.43. To minimize the gap between the training and decoding states, we start with a null string and conduct 3 iterations (8 edits) for each sample during RL training. We set the total training steps T to 50,000, with a max batch size of 4,096 tokens per step. To prevent the out-of-memory issue, we limit the decoding space of placeholder insertion from 256 to 64. The sample size k of the baseline is set to 5. Our implementation is based on Fairseq3. Computational Cost The pre-training phase of LevT on a GCP VM instance with A100x4 GPUs requires roughly 3 days, while the subsequent RL fine-tuning process takes approximately 1 day to complete. 4.2 Results We show the BLEU scores of our approaches in Table 1. The episodic reward model4 showed notable improvement over the baseline. The score is even close to the distillation model, which requires a heavy pre-training5 of AR models. However, the stepwise reward model showed only limited improvement. To explain this, we focus on the advantage, r(y) \u2212b(s), included in the policy gradient (Equation 1), as a larger value of the advantage can increase the policy gradient\u2019s magnitude. A higher standard deviation (SD) of the advantages indicates larger fluctuations in policy gradients. Table 2 shows the SDs of the advantages of the stepwise reward model, with notably higher values in the early stages of edit operations compared to later stages. This suggests that the stepwise reward model disproportionately focuses on early operations, potentially leading to uneven learning and reduced performance. In contrast, the episodic reward model applies the same rewards and advantages across all operations, facilitating more uniform learning and improved performance. 3https://github.com/facebookresearch/fairseq 4The term \u201cepisode/stepwise reward model\u201d specifically refers to the model trained using the \u201cepisode/stepwise reward maximization\u201d approach. 5To produce a distillation model, we need to train an autoregressive Transformer first, which needs additional 3 days of training on our machine. 
\fModel EN-DE EN-JA LevT 24.03 31.76 LevT + distillation 26.49 LevT + RL (stepwise) 24.29 31.73 LevT + RL (episodic) 25.72 32.75 Table 1: The BLEU scores of our approaches and the baseline. Temperatures are set to 1. Due to the limited computational resources, we only trained the distillation model for the EN-DE dataset using the ready-made distillation dataset. Iteration Edit Operation EN-DE EN-JA 1 Insert + Replace 9.99 8.59 2 Delete 2.05 1.35 Insert + Replace 3.28 2.48 3 Delete 1.67 1.29 Insert + Replace 3.04 1.60 Table 2: Stepwise reward model\u2019s standard deviation (SD) of the advantage in each edit operation. Insertion and replacement share the same reward. We only report scores of applying RL to the model without distillation since we found that RL significantly improved the model without distillation (max 1.69 points) compared to when distillation was applied (max 0.5 point). Moreover, when confronted with distillation models, it raises questions such as which data we should use for RL training, the original or the distillation one. We leave these research questions to future work. We show the BLEU scores of different temperature settings in Table 3. Model performance varies significantly with temperature settings (max 1.01 points in EN-JA). Among constant setting models, the model with a temperature of 0.5 performed best in EN-DE, and the model with a temperature of 0.1 performed best in EN-JA, indicating that too large temperature harms RL training. The two models using annealing schedules performed great in both tasks, showing the effectiveness of the annealing algorithms for improving learning efficiency. However, the annealing models did not always outperform the constant models, which suggests the difficulty of seeking the optimal temperature setting for NAR models\u2019 RL training. Also, we found the inverted annealing model (\u03c4=0.1\u21921) begins dropping performance after 10,000 steps training in EN-JA, indicating that the speed of annealing will significantly affect the model training quality. Temperature EN-DE EN-JA Constant (\u03c4 = 1) 25.72 32.75 Constant (\u03c4 = 0.5) 25.98 33.45 Constant (\u03c4 = 0.1) 25.76 33.60 Annealing (\u03c4 = 1 \u21920.1) 25.83 33.76 Annealing (\u03c4 = 0.1 \u21921) 25.90 33.43 Table 3: The BLEU scores of episodic reward models using different temperature settings. We also quickly surveyed the relationship between performance and the number of decoding iterations in RL. The model performance dropped when we reduced the number of iterations to 2 during training and remained flat when we increased it to 4, indicating that our setting is reasonable. 5", + "additional_graph_info": { + "graph": [ + [ + "Hao Wang", + "Jianwei Li" + ], + [ + "Hao Wang", + "Zhengyu Li" + ], + [ + "Jianwei Li", + "Dongkuan Xu" + ], + [ + "Jianwei Li", + "Tianchi Zhang" + ], + [ + "Jianwei Li", + "Sheng Liu" + ], + [ + "Zhengyu Li", + "Hong Guo" + ], + [ + "Zhengyu Li", + "Curtis Bright" + ], + [ + "Zhengyu Li", + "Vijay Ganesh" + ] + ], + "node_feat": { + "Hao Wang": [ + { + "url": "http://arxiv.org/abs/2405.01280v1", + "title": "Reinforcement Learning for Edit-Based Non-Autoregressive Neural Machine Translation", + "abstract": "Non-autoregressive (NAR) language models are known for their low latency in\nneural machine translation (NMT). However, a performance gap exists between NAR\nand autoregressive models due to the large decoding space and difficulty in\ncapturing dependency between target words accurately. 
Compounding this,\npreparing appropriate training data for NAR models is a non-trivial task, often\nexacerbating exposure bias. To address these challenges, we apply reinforcement\nlearning (RL) to Levenshtein Transformer, a representative edit-based NAR\nmodel, demonstrating that RL with self-generated data can enhance the\nperformance of edit-based NAR models. We explore two RL approaches: stepwise\nreward maximization and episodic reward maximization. We discuss the respective\npros and cons of these two approaches and empirically verify them. Moreover, we\nexperimentally investigate the impact of temperature setting on performance,\nconfirming the importance of proper temperature setting for NAR models'\ntraining.", "authors": "Hao Wang, Tetsuro Morimura, Ukyo Honda, Daisuke Kawahara", "published": "2024-05-02", "updated": "2024-05-02", "primary_cat": "cs.CL", "cats": [ "cs.CL" ]
Furthermore, through a series of experiments, we investigate the impact of temperature settings on softmax sampling, aiming to identify the optimal temperature that strikes a balance between exploration and exploitation during the RL training process. arXiv:2405.01280v1 [cs.CL] 2 May 2024 \f2 Background Reinforcement Learning Reinforcement learning has been widely applied to improve the performance of AR NMT models (Ranzato et al., 2016; Bahdanau et al., 2016; Wu et al., 2016) because its ability to train models to optimize nondifferentiable score functions and tackle the exposure bias problem (Ranzato et al., 2016). In practice, REINFORCE (Williams, 1992) with a baseline is commonly used for estimating the policy gradient, which can be computed as follows: \u25bd\u03b8L(\u03b8) \u2248\u2212(r(y) \u2212b(s)) \u25bd\u03b8 log\u03c0\u03b8(y|s), (1) where r is the reward function, b is the baseline, y is a sample from policy \u03c0\u03b8 and state s. Softmax with Temperature In the domain of RL, we need to consider the explorationexploitation trade-off (Sutton and Barto, 2018), where temperature \u03c4 is an important parameter. \u03c4 is used to control the softness of the softmax distribution, pi = exp(yi/\u03c4) P i exp(yi/\u03c4). (2) A larger \u03c4 leads to a more uniform distribution, promoting exploration, while a smaller \u03c4 creates a more peaky distribution, emphasizing exploitation. Kiegeland and Kreutzer (2021) shows that training with an increased temperature can mitigate the peakiness effect due to RL (Choshen et al., 2020), indicating that a suitable temperature is significant for RL training in NMT. RL for NAR Compared to AR methods, studies of reinforcement learning for NAR remain unexplored. Shao et al. (2021) proposed a method to reduce the estimation variance of REINFORCE by fixing the predicted word at position t and sampling words of other positions for n times. However, this method is only applicable to models with a fixed length, which is unsuitable for edit-based models. Levenshtein Transformer Levenshtein Transformer (Gu et al., 2019) is an NAR model based on three edit operations: delete tokens, insert placeholders, and replace placeholders with new tokens. It uses a supervised dual-policy learning algorithm to minimize the Levenshtein distance (Levenshtein, 1965) for training and greedy sampling for decoding. The decoding stops when two consecutive refinement iterations return the same output or a maxFigure 1: The illustration of Levenshtein Transformer\u2019s decoding process (Gu et al., 2019). In each decoding iteration, three edit operations are performed sequentially: delete tokens, insert placeholders, and replace placeholders with new tokens. imum number of iterations (set to 10) is reached. We illustrate the decoding process in Figure 1. LevT\u2019s dual-policy learning generates teacher data by corrupting the ground truth and reconstructing it with its adversary policy. This mechanism not only offers a unique approach to data generation but also underscores the inherent difficulty in preparing teacher data. This introduces concerns regarding the exposure bias, particularly whether the training process can maintain consistency with the text during decoding. To address this issue, we employ RL approaches that use self-generated data for training. 3 Approaches In this section, we present our reinforcement learning approaches in detail. We train a Levenshtein Transformer model as our baseline using the dualpolicy learning algorithm. 
Based on it, we introduce two distinct RL approaches within the REINFORCE framework: stepwise reward maximization and episodic reward maximization. Moreover, we present our methods for temperature control. Stepwise Reward Maximization General RL training methods for AR NMT models are all episodic1, as it is difficult to calculate BLEU (Papineni et al., 2002) when the sentence is not fully generated. In contrast, NAR models can calculate BLEU on outputs at each decoding step. From the perspective of estimating a more accurate gradient, we propose stepwise reward maximization, which 1In this context, \u201cepisodic\u201d denotes training based on entirely generated sequences \fFigure 2: The illustration of the two RL approaches. (A) is the stepwise reward maximization, which randomly samples from a previous node for each edit operation and calculates BLEU and RL gradient after each edit operation (except for the insert operation, since it is not easy to calculate BLEU after inserting placeholders). (B) is the episodic reward maximization, where each sample is edited multiple times in a linear fashion, without branching into different paths, and BLEU and RL gradient are calculated only after the completion of all edit operations. At every orange node, we sample k times from this node (in this example, the sample size k is 2). calculates reward for each edit operation2 using score differences from one previous edit. Since every step\u2019s reward is calculated separately, this approach should be easier to learn than episodic approaches (Sutton and Barto, 2018). However, it is also more prone to learning bias since the editing process is inherently multi-step. This drawback should not be emphasized since maximizing the reward for each step will likely maximize the episodic reward in NAR models\u2019 training. We use a leave-one-out baseline (Luo, 2020) for b(s) in Equation 1 instead of the greedy baseline proposed in SCST (Rennie et al., 2017) because the greedy decoding is too strong in LevT, which makes gaining positive rewards in SCST difficult and may reduce learning efficiency. For each edit, we sample k actions from the policy at this point. Then, we calculate the baseline as follows: bi(s) = 1 k \u22121 X j\u0338=i r(yj), (3) where yj is the jth sample from the current policy. The final RL gradient estimation becomes \u25bd\u03b8L(\u03b8) \u2248\u2212(r(yi) \u2212bi(s)) \u25bd\u03b8 log\u03c0\u03b8(yi|s). (4) In a straightforward implementation, one might consider applying sampling again to all k samples 2In practice, since it is not easy to calculate BLEU after inserting placeholders, we consider placeholder insertion and token replacement as one edit operation. from the last edit. However, this will cause a combination explosion when the number of edit operations increases. Practically, we randomly choose a sample from the previous edit to perform the subsequent operations. We show an illustration of the sampling process in (A) of Figure 2 and pseudo code of our algorithm in Appendix A. Episodic Reward Maximization We also introduce episodic reward maximization, which calculates rewards only once for each sample and gives all actions the same weight. It is a more traditional way to train NMT models in RL. It allows unbiased learning but may not be efficient. We use the leave-one-out baseline for the episodic reward as well as the stepwise reward. We sample k samples from the initial input. Each sample will be edited multiple times without a branch. 
After the final edit, we calculate the rewards and baselines. We show an illustration of the sampling process in (B) of Figure 2 and pseudo code of our algorithm in Appendix B. Temperature Control Applying RL to NAR differs significantly from AR because there could be various types of actions rather than just predicting the next token, like deletion and insertion. Due to this difficulty, NAR may need more fine-grained temperature control during training. To investigate the impact of exploration and exploitation in the training process, we explore five different settings of the temperature. Due to the large decoding space \fof Levenshtein Transformer, default temperature 1 may result in poor rewards, and too small temperature may result in peaky distribution, which are both harmful to learning. We use three constant temperature settings set to 0.1, 0.5, and 1 to verify the effect of temperature magnitude. An annealing schedule is known for balancing the trade-off between model accuracy and variance during training (Jang et al., 2016). There are two ways of thinking here. First, to reduce the exposure bias, we want to get close to the decoding scenario, which is greedy decoding in our experiments. Thus, we can apply a regular annealing schedule to gradually reduce the temperature from 1 to 0.1 during training. The temperature function can be written as follows: \u03c4i+1 = max(\u03c4i \u2217exp(\u2212log(\u03c40/\u03c4T ) T ), \u03c4T ), (5) where T is the number of total training steps, and \u03c40 and \u03c4T are the initial and the target temperatures. Second, using high temperatures in the early stages of training may lead to poor rewards and result in low learning efficiency. We can apply an inverted annealing schedule to gradually increase the temperature from 0.1 to 1, guaranteeing stable training in the early stages and gradually increasing the exploration space for efficient training. The temperature function can be written as follows: \u03c4i+1 = min(\u03c4i/exp(\u2212log(\u03c4T /\u03c40) T ), \u03c4T ). (6) In each decoding iteration, multiple edit operations occur, and each operation has a different decoding space size. It may be beneficial to optimize this by using varying temperatures for each operation in every iteration. This is a complicated research question and we leave this exploration to future work. 4 Experiments 4.1 Experimental Setup Data & Evaluation We use WMT\u201914 EnglishGerman (EN-DE) (Bojar et al., 2014) and WAT\u201917 English-Japanese (EN-JA) Small-NMT datasets (Nakazawa et al., 2017) for experiments. We use BPE token-based BLEU scores for evaluations. Data preprocessing follows Gu et al. (2019). Baseline We use Levenshtein Transformer as our baseline. Following Gu et al. (2019), we trained a LevT with 300K steps and a max batch size of 65,536 tokens per step. However, like Reid et al. (2023), we cannot reproduce the results of Gu et al. (2019). We use our results in this paper. RL According to Gu et al. (2019), most decodings are gotten in 1-4 iterations, and the average number of decoding iterations is 2.43. To minimize the gap between the training and decoding states, we start with a null string and conduct 3 iterations (8 edits) for each sample during RL training. We set the total training steps T to 50,000, with a max batch size of 4,096 tokens per step. To prevent the out-of-memory issue, we limit the decoding space of placeholder insertion from 256 to 64. The sample size k of the baseline is set to 5. Our implementation is based on Fairseq3. 
Computational Cost The pre-training phase of LevT on a GCP VM instance with A100x4 GPUs requires roughly 3 days, while the subsequent RL fine-tuning process takes approximately 1 day to complete. 4.2 Results We show the BLEU scores of our approaches in Table 1. The episodic reward model4 showed notable improvement over the baseline. The score is even close to the distillation model, which requires a heavy pre-training5 of AR models. However, the stepwise reward model showed only limited improvement. To explain this, we focus on the advantage, r(y) \u2212b(s), included in the policy gradient (Equation 1), as a larger value of the advantage can increase the policy gradient\u2019s magnitude. A higher standard deviation (SD) of the advantages indicates larger fluctuations in policy gradients. Table 2 shows the SDs of the advantages of the stepwise reward model, with notably higher values in the early stages of edit operations compared to later stages. This suggests that the stepwise reward model disproportionately focuses on early operations, potentially leading to uneven learning and reduced performance. In contrast, the episodic reward model applies the same rewards and advantages across all operations, facilitating more uniform learning and improved performance. 3https://github.com/facebookresearch/fairseq 4The term \u201cepisode/stepwise reward model\u201d specifically refers to the model trained using the \u201cepisode/stepwise reward maximization\u201d approach. 5To produce a distillation model, we need to train an autoregressive Transformer first, which needs additional 3 days of training on our machine. \fModel EN-DE EN-JA LevT 24.03 31.76 LevT + distillation 26.49 LevT + RL (stepwise) 24.29 31.73 LevT + RL (episodic) 25.72 32.75 Table 1: The BLEU scores of our approaches and the baseline. Temperatures are set to 1. Due to the limited computational resources, we only trained the distillation model for the EN-DE dataset using the ready-made distillation dataset. Iteration Edit Operation EN-DE EN-JA 1 Insert + Replace 9.99 8.59 2 Delete 2.05 1.35 Insert + Replace 3.28 2.48 3 Delete 1.67 1.29 Insert + Replace 3.04 1.60 Table 2: Stepwise reward model\u2019s standard deviation (SD) of the advantage in each edit operation. Insertion and replacement share the same reward. We only report scores of applying RL to the model without distillation since we found that RL significantly improved the model without distillation (max 1.69 points) compared to when distillation was applied (max 0.5 point). Moreover, when confronted with distillation models, it raises questions such as which data we should use for RL training, the original or the distillation one. We leave these research questions to future work. We show the BLEU scores of different temperature settings in Table 3. Model performance varies significantly with temperature settings (max 1.01 points in EN-JA). Among constant setting models, the model with a temperature of 0.5 performed best in EN-DE, and the model with a temperature of 0.1 performed best in EN-JA, indicating that too large temperature harms RL training. The two models using annealing schedules performed great in both tasks, showing the effectiveness of the annealing algorithms for improving learning efficiency. However, the annealing models did not always outperform the constant models, which suggests the difficulty of seeking the optimal temperature setting for NAR models\u2019 RL training. 
Also, we found the inverted annealing model (\u03c4=0.1\u21921) begins dropping performance after 10,000 steps training in EN-JA, indicating that the speed of annealing will significantly affect the model training quality. Temperature EN-DE EN-JA Constant (\u03c4 = 1) 25.72 32.75 Constant (\u03c4 = 0.5) 25.98 33.45 Constant (\u03c4 = 0.1) 25.76 33.60 Annealing (\u03c4 = 1 \u21920.1) 25.83 33.76 Annealing (\u03c4 = 0.1 \u21921) 25.90 33.43 Table 3: The BLEU scores of episodic reward models using different temperature settings. We also quickly surveyed the relationship between performance and the number of decoding iterations in RL. The model performance dropped when we reduced the number of iterations to 2 during training and remained flat when we increased it to 4, indicating that our setting is reasonable. 5" + } + ], + "Jianwei Li": [ + { + "url": "http://arxiv.org/abs/2312.05725v2", + "title": "FP8-BERT: Post-Training Quantization for Transformer", + "abstract": "Transformer-based models, such as BERT, have been widely applied in a wide\nrange of natural language processing tasks. However, one inevitable side effect\nis that they require massive memory storage and inference cost when deployed in\nproduction. Quantization is one of the popularized ways to alleviate the cost.\nHowever, the previous 8-bit quantization strategy based on INT8 data format\neither suffers from the degradation of accuracy in a Post-Training Quantization\n(PTQ) fashion or requires an expensive Quantization-Aware Training (QAT)\nprocess. Recently, a new numeric format FP8 (i.e. floating-point of 8-bits) has\nbeen proposed and supported in commercial AI computing platforms such as H100.\nIn this paper, we empirically validate the effectiveness of FP8 as a way to do\nPost-Training Quantization without significant loss of accuracy, with a simple\ncalibration and format conversion process. We adopt the FP8 standard proposed\nby NVIDIA Corp. (2022) in our extensive experiments of BERT variants on GLUE\nand SQuAD v1.1 datasets, and show that PTQ with FP8 can significantly improve\nthe accuracy upon that with INT8, to the extent of the full-precision model.", + "authors": "Jianwei Li, Tianchi Zhang, Ian En-Hsu Yen, Dongkuan Xu", + "published": "2023-12-10", + "updated": "2023-12-12", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "main_content": "Introduction A number of large-scale neural network (NN) architectures have been proposed in recent years and achieved remarkable performance in a wide range of tasks. However, large-scale models require colossal training memory, enormous storage space, and expensive inference costs, making them hard to deploy in production. For example, GPT-3, leading in many natural language processing tasks, has 175 billion parameters to evaluate (Brown et al. 2020). Quantization is one of the most popularized ways to reduce the model size and decrease deployment costs. The core operation is to map a set of continuous real-valued numbers into a fixed discrete set of numbers to reduce the number of bits required for neural networks. This way, computation with low-precision formats can be executed *Accepted by DCAA@AAAI 2023 \u2020North Carolina State University, Email: jli265@ncsu.edu \u2021University of Michigan, Email: tonyztc@umich.edu \u00a7Moffett AI, Email: ian@moffett.ai \u00b6North Carolina State University, Email: dxu27@ncsu.edu Copyright \u00a9 2023, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. 
with a smaller circuit area, fewer clock cycles, and less energy, making model inference quick and environment-friendly (Horowitz 2014; Burgess et al. 2019). [Figure 1: INT8 Uniform Quantization vs. FP8 Non-Uniform Quantization. [\u2206i, \u2206i+1] refers to the discrete quantization interval.] Moving from full-precision floating-point representations to low-precision fixed integer values represented in eight bits or less is currently the most widely accepted quantization method. For example, Han, Mao, and Dally (2016); Jacob et al. (2018); Banner et al. (2018) implement quantization with the INT8 format and achieve low accuracy loss on CNN-based models. This kind of method primarily trains the model in full precision first and then performs quantization without any finetuning, also known as Post-Training Quantization (PTQ). Therefore, the overhead of PTQ can be negligible. However, it has been shown that PTQ with INT8 is unreliable because it may significantly decrease the performance of Transformer-based models (Anderson et al. 2021). For example, Kuzmin et al. (2022) discover that the accuracy of BERT on the GLUE datasets drops significantly after quantization. To resolve the problem above, Kim et al. (2021); Zafrir et al. (2019); Bhandare et al. (2019) finetune the model after quantization to recover the accuracy of the original full-precision model, also known as Quantization-Aware Training (QAT). However, its complexity and expensive retraining cost hinder the widespread adoption of QAT. This paper proposes quantizing the full-precision numbers
\u2022 We provide an empirical FP8 quantization guideline for future research. The rest of the paper is structured as follows. First, we introduce related work in the next section. Then we describe our Post-Training Quantization strategy in the methodology section. Finally, in the experiment section, we compare the results of FP8 quantization strategy and INT8 strategy on Transformer-based and CNN-based models, followed by analysis and conclusion. Related Work Quantization is a method to represent full or higher precision values by a discrete set of numbers to minimize the number of bits required and maximize the accuracy simultaneously. For a machine learning model using single-precision (FP32) format, encoding it with a lower-bit format, like IEEE half-precision format (FP16) or Brain floating point (BF16), can significantly reduce the model size and accelerate model training and inference (Courbariaux, Bengio, and David 2014; Gupta et al. 2015; Micikevicius et al. 2017). Quantization with 8-bit integer (INT8) is a popular trend since some edge devices only support integer arithmetic (Arm Ltd. 2020). INT8 Quantization has been well explored on many CNN-based models and achieves almost the same accuracy with full-precision versions (Han, Mao, and Dally 2016; Jacob et al. 2018; Banner et al. 2018). In parallel, previous works also find that INT8 Quantization with Transformer-based models requires more effort to keep accuracy (Anderson et al. 2021). Bhandare et al. (2019) and Zafrir et al. (2019) quantize BERT with mixed precision and achieve low accuracy loss. Kim et al. (2021) further enables the BERT model to perform integer-only inference by replacing sensitive operators with specific functions. However, both of these methods require expensive extra model retraining. In this paper, we try to overcome this deficiency with FP8. Recently, new hardware supporting 8-bit floating numbers (FP8) has been successively released (NVIDIA Corp. 2022; Habana Labs Ltd. 2022), which inspire us to explore 8-bit quantization with FP8. Early works like Sun et al. (2019); Wang et al. (2018); Cambier et al. (2020) has demonstrated FP8\u2019s potential ability to maintain accuracy on some CNNbased models compared with FP32. Furthermore, the parallel work (Kuzmin et al. 2022) also indicates that FP8 brings significant improvement of accuracy in the quantization of Transformer-based models, which is consistent with our conclusion. Methodology This section introduces the FP8 specifications we adopt and then demonstrates the FP8 quantization methods applied in our experiments. Generally, we directly borrow the strategies from INT8 quantization. FP8 Specifications The industry has released two FP8 specifications in 2022. One is supported by Graphcore, AMD, and Qualcomm, while NVIDIA, Arm, and Intel propose the other. In this paper, we adopt the standard from NVIDIA Corp. (2022), and verify the performance of two kinds of encoding: E4M3 (1 sign bit, 4 exponents bits, 3 mantissa bits) and E5M2 (1 sign bit, 5 exponents bits, 2 mantissa bits). FP8 Simulation Due to the lack of ubiquitous hardware and software supporting FP8 arithmetic, we simulate this low-precision logic with full-precision numbers. Specifically, we implement the C++ library to support the conversion between the FP32 and FP8 numbers. The converting process is rule-based: (1) the exponent values in two formats should be aligned. (2) the mantissa value in the casting process should always keep bits with high precision. 
Finally, we can evaluate FP8 results on general hardware (such as CPUs). In addition, we also evaluate the FP8 results on Moffett\u2019s next-generation chip (Photon), which natively supports FP8 arithmetic. Non-Uniform FP8 Quantization FP8 quantization requires defining the function used to quantize neural network (NN) weights and activations into the FP8 format. The core functionality of this function is to map the real values in FP32 precision into a low-precision range. For example, the integer range for INT8 is [-128, 127]; the float range for FP8 E4M3 is [-448.0, 448.0]; the float range of FP8 E5M2 is [-57344.0, 57344.0]. In this paper, we borrow the idea from INT8 quantization and design Equation 1 to do this work: $Q_{fp8}(r) = X_i, \\ \\text{if } \\frac{r}{S} \\in [\\delta_i, \\delta_{i+1})$, (1) where $Q_{fp8}$ is the quantization operator, r is the original float number, S is the scaling factor, $X_i$ denotes the discrete FP8 quantized levels, and $\\delta_i$ represents the FP8 quantization steps. In this paper, we obtain the value of $X_i$ by simply casting r/S from the FP32 format to the FP8 format. Therefore, we rewrite the function as Equation 2: $Q_{fp8}(r) = \\mathrm{Cast}_{fp32 \\to fp8}\\left(\\frac{r}{S}\\right) - Z$, (2) where Z is the offset for the zero point. Note that our resulting FP8 quantized values are non-uniformly spaced across the 256 8-bit representations because of the nature of floating-point numbers. In contrast, the INT8 quantized values are uniformly spaced under the same mapping function. Symmetric FP8 Quantization According to the definition of Equation 1, the scaling factor S decides the final FP8 quantized values. Therefore, we borrow the idea from INT8 Symmetric Quantization and design Equation 3 to obtain the scaling factor: $S = \\frac{\\beta - \\alpha}{\\mathrm{Max}_{fp8}}; \\quad \\beta = -\\alpha$, (3) where [\u03b1, \u03b2] is the clipping range of the real values. Moreover, the Z in Equation 1 is set to 0 because of the symmetric property. Clipping Range Calibration In order to derive the optimal scaling factor, we first need to choose the optimal clipping range. Many methods have been proposed to handle this problem. For example, Wu et al. (2020); McKinstry et al. (2019) use the min/max values of NN weights or activations to define the clipping range; Migacz (2017) calibrates the clipping range by minimizing the KL divergence between the real values and the FP8 quantized values. This paper adopts the most straightforward method: the min/max signal. Therefore, Equation 3 can be rewritten as Equation 4: $S = \\frac{\\max(|r_{max}|, |r_{min}|)}{\\mathrm{Max}_{fp8}}$. (4) Static FP8 Quantization At the inference phase, NN activations change as the inputs change, so we could generate a scaling factor for each input to reduce the quantization error. This method of quantization is known as Dynamic Quantization. However, computing the clipping range dynamically can be expensive. Therefore, we choose static scaling factors for NN activations and weights in this paper.
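Putting Equations 2-4 together, a minimal sketch of the resulting symmetric, min/max-calibrated quantizer could look as follows; it is an illustration under our assumptions, reusing the cast_fp32_to_e4m3 helper sketched above in place of the real C++ kernel:

```python
import torch

E4M3_MAX = 448.0  # largest magnitude representable in FP8 E4M3

def symmetric_fp8_quantize(w: torch.Tensor):
    """Simulated symmetric FP8 quantization with min/max calibration.

    Computes the scaling factor S from the min/max signal (Equation 4)
    and casts w / S to E4M3 (Equation 2); Z is 0 by symmetry (Equation 3).
    """
    s = max(w.max().abs().item(), w.min().abs().item()) / E4M3_MAX  # Equation 4
    s = s if s > 0 else 1.0  # guard against an all-zero tensor
    q = torch.tensor([cast_fp32_to_e4m3(v / s) for v in w.flatten().tolist()])
    return q.view_as(w), s   # dequantized values are q * s
```

For channelwise quantization, introduced next, the same routine is applied once per weight channel so that each channel receives its own scale S.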
Channelwise and Layerwise FP8 Quantization NN weights have multiple dimensions, where the channel dimension is relatively independent. For CNN-based models, each channel corresponds to a number of filters; for Transformer-based models, each channel corresponds to a sequence of token embeddings. The filters and token embeddings from different channels mostly have different ranges of values. Therefore, the granularity of choosing the clipping range [\u03b1, \u03b2] for the NN weights of each channel is crucial for the final FP8 quantization performance. Currently, Channelwise Quantization is the standard approach used for the NN weights in INT8 quantization, so we adopt the same strategy in FP8 quantization. Regarding the NN activations, we choose the basic Layerwise Quantization strategy. Post-Training FP8 Quantization For INT8 quantization, previous works mostly require model retraining to recover the accuracy, especially for Transformer-based models. This kind of method is known as Quantization-Aware Training. In this paper, our FP8 quantization generally does not require model retraining, and is thus known as Post-Training Quantization. Mixed-Precision FP8 Quantization Some non-linear operators (such as Softmax and Gelu) and LayerNorm are sensitive to quantization. As a result, it is easy to see significant accuracy degradation when we quantize them. Therefore, we adopt the half-precision format (BF16) for these special operators in our strategy. Experiments In this section, we aim to empirically verify the FP8 quantization results on a wide range of tasks, models, and datasets with the basic quantization strategies introduced in the Methodology section. We also compare our results with INT8 quantization under the same experimental settings. Finally, we explore and analyze the experimental results and the underlying reasons. Baselines and Setup We validate the effect of FP8 quantization with BERT-base and BERT-large on the GLUE dev datasets for the natural language understanding task. In addition, we also check their quantization results on SQuAD v1.1 for the question-answering task. Moreover, we run experiments on ResNet18 and ResNet50 with the CIFAR10, CIFAR100, and ImageNet datasets for the image classification task. All the models are first trained and evaluated in the FP32 format and then used as our first baseline. After that, the second baseline is derived from INT8 quantization. Note that the two quantization settings are identical: the INT8 quantization strategy in the baseline and our FP8 quantization strategy are both Post-Training Quantization, which does not require retraining. In this paper, we only test the quantization results of one FP8 encoding: E4M3. Table 1: BERT: FP32 vs. INT8 Quantization vs. FP8 Quantization on the GLUE dev set. Note that our quantization results (INT8 and FP8) are both from Post-Training Quantization. We only quantize the general matrix multiply (GEMM) operators. Models | MNLI-m Acc | QNLI Acc | QQP Acc | MRPC Acc | SST-2 Acc | CoLA Mcc | RTE Acc | STS-B Spearman; BERT-base FP32 | 84.58 | 91.4 | 90.91 | 87.6 | 92.43 | 56.3 | 72.3 | 89.0; BERT-base INT8 | 34.39 | 51.62 | 63.16 | 31.62 | 52.64 | 9.0 | 47.29 | 27.6; BERT-base FP8 (E4M3) | 84.63 | 91.58 | 90.96 | 87.2 | 92.32 | 56.0 | 72.2 | 89.0; BERT-large FP32 | 86.6 | 92.3 | 91.3 | 89.1 | 93.2 | 60.6 | 74.8 | 90; BERT-large INT8 | 43.25 | 52.12 | 67.33 | 34.52 | 59.33 | 9.2 | 50.12 | 29.32; BERT-large FP8 (E4M3) | 85.9 | 92.2 | 91.3 | 88.9 | 93.3 | 60.0 | 75.1 | 89.8. Table 2: BERT: FP32 vs. INT8 Quantization vs. FP8 Quantization on SQuAD v1.1. Model | SQuAD v1.1 EM / F1; BERT-base FP32 | 80.8 / 88.5; BERT-base INT8 | 0.18 / 4.42; BERT-base FP8 (E4M3) | 78.89 / 86.44; BERT-large FP32 | 84.1 / 90.9; BERT-large INT8 | 0.24 / 6.4; BERT-large FP8 (E4M3) | 82.87 / 88.9. Natural Language Understanding Task For Transformer-based models (such as BERT-base and BERT-large), we follow the previous literature (Bhandare et al. 2019; Zafrir et al. 2019) and only quantize the general matrix multiply (GEMM) operators, since they are the bottleneck of the execution efficiency of quantized models. In contrast, we use the half-precision format (BF16) to compute the rest of the operators (such as LayerNorm, Gelu, and Softmax).
This kind of quantization is also known as Mixed-Precision Quantization. Finally, we verify the models\u2019 accuracy on the GLUE dev set, and Table 1 reports the FP8 quantization results. Question Answering Task In addition, we also check the FP8 quantization results of Transformer-based models on SQuAD v1.1. Different from GLUE, SQuAD is more challenging because it is a question-answering task. We show the FP8 quantization results in Table 2. Result Analysis Based on the results in Tables 1-2, we find that Post-Training FP8 Quantization can significantly improve the accuracy of 8-bit quantized Transformer-based models, bringing it close to that of the original full-precision models. We conjecture that the weights and activations of Transformer-based models contain a large number of outlier values, which is unfriendly to Post-Training INT8 Quantization. In contrast, FP8 is natively non-uniform and represents a more extensive range of values, which is ideal for the Post-Training Quantization of Transformer-based models. Therefore, our FP8 quantization method is a reliable Post-Training Quantization strategy, and the results of our experiments can provide an empirical guideline for future FP8 quantization research." }, { "url": "http://arxiv.org/abs/2312.05720v4", "title": "Beyond Gradient and Priors in Privacy Attacks: Leveraging Pooler Layer Inputs of Language Models in Federated Learning", "abstract": "Language models trained via federated learning (FL) demonstrate impressive\ncapabilities in handling complex tasks while protecting user privacy. Recent\nstudies indicate that leveraging gradient information and prior knowledge can\npotentially reveal training samples within FL setting. However, these\ninvestigations have overlooked the potential privacy risks tied to the\nintrinsic architecture of the models. This paper presents a two-stage privacy\nattack strategy that targets the vulnerabilities in the architecture of\ncontemporary language models, significantly enhancing attack performance by\ninitially recovering certain feature directions as additional supervisory\nsignals. Our comparative experiments demonstrate superior attack performance\nacross various datasets and scenarios, highlighting the privacy leakage risk\nassociated with the increasingly complex architectures of language models. We\ncall for the community to recognize and address these potential privacy risks\nin designing large language models.", "authors": "Jianwei Li, Sheng Liu, Qi Lei", "published": "2023-12-10", "updated": "2024-03-15", "primary_cat": "cs.LG", "cats": [ "cs.LG", "cs.AI", "cs.CL", "cs.CR" ], "main_content": "Introduction Language models trained under the Federated Learning paradigm play a pivotal role in diverse applications such as next-word predictions on mobile devices and electronic health record analysis in hospitals [Ramaswamy et al., 2019, Li et al., 2020]. This training paradigm prioritizes user privacy by restricting raw data access to local devices and centralizing only the model\u2019s updates, such as gradients and parameters [McMahan et al., 2017]. While the FL framework is created to protect user privacy, vulnerabilities still persist. Previous works [Geiping et al., 2020, Yin et al., 2021, Jeon et al., 2021] proved that attackers can almost perfectly recover image data with gradients, which also highlighted the potential risks in the realm of textual data.
Many studies have investigated vulnerabilities of private data in FL when applied to language models [Zhu et al., 2019, Deng et al., 2021, Balunovic et al., 2022, Gupta et al., 2022]. Zhu et al. [2019] and Deng et al. [2021] leverage gradient information and well-designed objective functions to build an optimization-based pipeline that can recover certain training textual data at minimal batch sizes. Balunovic et al. [2022] and Gupta et al. [2022] further improve the recovery rate by incorporating prior knowledge embedded in LLMs to provide additional optimization or retrieval signals. All of these works mainly focus on algorithm design and external information but rarely notice the inherent privacy leakage risks embedded in the language models themselves. In contrast to these approaches, Fowl et al. [2022] and Boenisch et al. [2023] assume a malicious central server, designing specific malicious parameters and architectures for language models to enhance attack performance. To a certain degree, their research is connected to the privacy vulnerabilities of specific module designs. However, their methods depend on matched tampered parameters, such as identity weight matrices. These approaches also violate the training objective, generating useless gradients and parameter updates. Figure 1: Schematic illustration of the two-stage privacy attack method proposed in this study. 1) The first stage involves the analytics-based reconstruction of the feature information associated with the specific Pooler layer in Transformer-based language models. 2) The second stage utilizes the reconstructed feature information, combined with gradient inversion and prior knowledge, to guide the recovery of training data. This figure highlights the approach of intermediate feature recovery while exposing the inherent privacy risks in contemporary language model architectures. Recently, Wang et al. [2023] provide a theoretical analysis that recovers training samples from gradient information for a two-layer fully connected network. A major limitation of their approach is its reliance on a randomly initialized network instead of an actual, pre-trained network. This reliance poses challenges in applying their insights to the prevalent paradigm of fine-tuning pre-trained models. When applied to deeper networks, their method also relies on identity modules and other transparently detectable weight manipulations, limiting its practical utility. However, building upon this foundation, we have found that certain prevalent modules in contemporary language model architectures possess intrinsic privacy vulnerabilities. This paper focuses on Transformer-based language models featuring a distinctive Pooler layer and proposes a two-stage privacy attack method that exploits vulnerabilities in this specific module. Specifically, in the first stage, we employ an analytics-based reconstruction method to initially recover the direction of the features associated with this module. This feature information is not averaged over the number of samples in the same batch or over the sequence length, enabling the extraction of more unique information for each token. Subsequently, we utilize this feature information as an additional (beyond gradients and priors) supervisory signal to guide the recovery of training data.
This research differentiates itself in the following ways: i) Different from Wang et al. [2023], the research is solidly based on textual data and deep language models in real-world scenarios. ii) Different from Fowl et al. [2022] and Boenisch et al. [2023], this research does not depend on trap weights, such as the Identity module, and maintains adherence to an effective training roadmap. iii) In contrast to the honest-but-curious works [Zhu et al., 2019, Deng et al., 2021, Balunovic et al., 2022, Gupta et al., 2022], this research provides feature-level supervisory signals that differ from conventional gradients and priors. The contributions of this research are outlined as follows: 1) Instead of directly recovering the training samples of the entire model, this paper proposes a two-stage attack method that first approximates the intermediate feature information and then recovers the real input. 2) We design a strategic weight initialization method coupled with a flexible tuning approach, thereby empowering an analytics-based method to accurately and efficiently deduce the direction of intermediate features in a specific module. 3) When integrated with gradient inversion and prior knowledge, our method consistently surpasses previous methods in attack performance across a variety of benchmark datasets and scenarios. We also propose using the distance between semantic embeddings as a more comprehensive evaluation metric. 4) This research brings to light the inherent privacy leakage risks embedded within the design of contemporary language model architectures. 2 Preliminaries In this section, we describe the relevant background on federated learning, gradient inversion, prior knowledge, and two-layer-network-based reconstruction, as well as the threat model of our proposed attack method. 2.1 Federated Learning Introduced by McMahan et al. [2017], federated learning addresses data privacy concerns by promoting decentralized model training. In this approach, models are refined using local updates from individual clients, which are merged at a central server [Konečný et al., 2015, 2016, 2017]. This field has attracted significant attention due to its potential business applications, underlining its promise in academia and industry [Ramaswamy et al., 2019, Li et al., 2020]. 2.2 Gradient Inversion Gradient inversion is a significant technique that can potentially breach privacy in federated learning [Zhu et al., 2019, Zhao et al., 2020]. Although federated learning is designed to provide a decentralized training mechanism ensuring local data privacy, gradient inversion shows that this privacy may not be infallible. Problem definition: Consider the supervised learning framework wherein a neural network $f(\cdot\,; \Theta): \mathbb{R}^d \to \mathbb{R}$ is trained using the objective: $$\min_{\Theta} \sum_{(x,y) \in \mathcal{D}} \ell(f(x; \Theta), y) \quad (1)$$ where $\ell$ is the loss function and $\mathcal{D}$ denotes the dataset of input-output pairs. In the federated paradigm, during the communication between the central server and the clients, each node reports an average gradient over its local batch $S$ [McMahan et al., 2017, Konečný et al., 2015]. This is mathematically formulated as: $$G := \frac{1}{B} \nabla_{\Theta} \sum_{i=1}^{B} \ell(f(x_i; \Theta), y_i) \quad (2)$$ where $B$ is the batch size of $S$.
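For concreteness, here is a short PyTorch sketch of the quantity $G$ in Eq. (2) — the averaged gradient a client reports and an attacker observes. The helper name and the assumption that `loss_fn` already averages over the batch are our own illustrative choices.

```python
import torch

def reported_gradient(model, loss_fn, batch_x, batch_y):
    """The quantity G in Eq. (2): the loss gradient averaged over the client's local batch S."""
    model.zero_grad()
    loss = loss_fn(model(batch_x), batch_y)  # assumes loss_fn performs the (1/B) batch average
    loss.backward()
    return [p.grad.detach().clone() for p in model.parameters()]
```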
Given the above definition, the challenge posed by gradient inversion becomes apparent: with access to a once-queried gradient, a known model, and its loss function, is it possible to reconstruct the input training data? Objective of Gradient Inversion: Based on the above problem, the objective of gradient inversion can be represented as: $$\min_{\{\hat{x}_i, \hat{y}_i\}_{i=1}^{B}} \; d\!\left( \frac{1}{B} \sum_{i=1}^{B} \nabla_{\Theta} \ell(f(\hat{x}_i; \Theta), \hat{y}_i),\; G \right) \quad (3)$$ Here, $d(\cdot, \cdot)$ quantifies the difference between the provided and deduced gradients, and $(\hat{x}_i, \hat{y}_i)$ refers to the estimated input and its label. Prominent works have leveraged this objective to attempt the retrieval of private data [Zhu et al., 2019, Zhao et al., 2020, Geiping et al., 2020]. 2.3 Prior Knowledge Relying solely on gradient inversion to recover textual data often proves challenging, especially when handling larger batch sizes and long sequences. To address this, researchers often seek to exploit the prior knowledge encapsulated in pre-trained language models like GPT-2 [Radford et al., 2019]. These models are adept at predicting the probability of the next token based on the preceding sequence. This property aids in evaluating the quality of text found through gradient inversion. Specifically, perplexity is introduced as an evaluation metric to guide the optimization by pinpointing optimal starting or intermediate points of the attack [Balunovic et al., 2022, Gupta et al., 2022]. 2.4 Two-layer Network-based Reconstruction Wang et al. [2023] identified a gap in the existing literature regarding the capability of gradient information to unveil training data. Their study demonstrates that it might be possible to reconstruct training data solely from gradient information using a theoretical approach within a two-layer neural network. Consider a two-layer neural network $f(x; \Theta) = \sum_{j=1}^{m} a_j \sigma(w_j \cdot x)$ with parameters $\Theta = (a_1, \dots, a_m, w_1, \dots, w_m)$, where $m$ represents the hidden dimension. The objective function is $L(\Theta) = \sum_{i=1}^{B} (y_i - f(x_i; \Theta))^2$. A notable finding is that the gradient for $a_j$ is solely influenced by $w_j$, making it independent of the other parameters. This gradient is represented as: $$g_j := \nabla_{a_j} L(\Theta) = \sum_{i=1}^{B} r_i \, \sigma(w_j^\top x_i) \quad (4)$$ where the residual $r_i$ is given by $r_i = f(x_i; \Theta) - y_i$. For wide neural networks randomly initialized from a standard normal distribution, the residuals $r_i$ concentrate to constants $r_i^*$. By setting $g(w) := \sum_{i=1}^{B} r_i^* \sigma(w^\top x_i)$, $g_j$ can be expressed as $g_j = g(w_j) + \epsilon$, where $\epsilon$ represents noise. The third derivative of $g(w)$ is then: $$\nabla^3 g(w) = \sum_{i=1}^{B} r_i^* \, \sigma^{(3)}(w^\top x_i) \, x_i^{\otimes 3} \quad (5)$$ Here, $x_i^{\otimes 3}$ signifies the tensor product of the vector $x_i$ with itself three times. The researchers postulated that if $\nabla^3 g(w)$ can be estimated accurately, it is possible to determine $\{x_i\}_{i=1}^{B}$ using tensor decomposition techniques, especially when these features are independent. They used Stein's Lemma, expressed as $\mathbb{E}[g(X) H_p(X)] = \mathbb{E}[g^{(p)}(X)]$, to approximate $\nabla^3 g(w)$ as: $$T = \mathbb{E}_W[\nabla_W^3 g(W)] = \mathbb{E}_{W \sim \mathcal{N}(0, I)}[g(W) H_3(W)] \approx \frac{1}{m} \sum_{j=1}^{m} g(w_j) H_3(w_j) = \hat{T} \quad (6)$$ where $H_3(w_j)$ is the third Hermite polynomial (tensor) of $w_j$. By leveraging this approach, they successfully reconstructed each unique $x_i$.
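The following self-contained NumPy sketch numerically checks the Monte-Carlo estimator $\hat{T}$ of Eq. (6). All concrete choices here are our simplifications for illustration, not the paper's setup: we pick $\sigma(z) = z^3$ so that $\sigma^{(3)} \equiv 6$, set the residuals $r_i^*$ to 1, and use toy dimensions. Tensor decomposition of $\hat{T}$ (e.g., by power iteration) would then return the directions of the $x_i$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, B = 6, 200_000, 2
X = rng.standard_normal((B, d)) / np.sqrt(d)   # inputs x_i to be recovered (roughly unit norm)
r = np.ones(B)                                 # residuals r_i*, assumed concentrated

W = rng.standard_normal((m, d))                # rows w_j ~ N(0, I)
g = ((W @ X.T) ** 3) @ r                       # g(w_j) = sum_i r_i * sigma(w_j . x_i)

# T_hat = (1/m) sum_j g(w_j) H3(w_j), with H3(w) = w^(x)3 - sym(w (x) I)
T_cubic = np.einsum("j,ja,jb,jc->abc", g, W, W, W) / m
s = (g[:, None] * W).mean(axis=0)              # (1/m) sum_j g(w_j) w_j
eye = np.eye(d)
T_hat = T_cubic - (np.einsum("a,bc->abc", s, eye)
                   + np.einsum("b,ac->abc", s, eye)
                   + np.einsum("c,ab->abc", s, eye))

# For sigma(z) = z^3, sigma''' = 6, so Eq. (5) gives T = 6 * sum_i r_i x_i^(x)3
T_true = 6 * np.einsum("i,ia,ib,ic->abc", r, X, X, X)
rel_err = np.linalg.norm(T_hat - T_true) / np.linalg.norm(T_true)
print(rel_err)  # Monte-Carlo error; shrinks as m grows
```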
Their approach is primarily theoretical, focusing on two-layer fully connected networks and largely confined to randomly initialized networks. When applied to deeper networks, their method uses identity modules and other transparently detectable weight manipulations, which also limits its practical use. Inspired by Wang et al. [2023], we have identified a vulnerable module commonly found in current language model architectures, such as BERT and RoBERTa [Devlin et al., 2018, Liu et al., 2019]. The module, comprising the Pooler and Classifier layers, typically forms the final stage of these networks. Given that this module has no deeper topological connections, it can be treated as an independent two-layer network. Leveraging this insight, we have designed specific techniques to enhance the two-layer-network-based reconstruction method, enabling it to recover the feature information associated with this special module in pre-trained language models. Further details of this methodology are discussed in Section 3.2. 2.5 Threat Model To facilitate comparisons with previous works, our threat model is based on the following principles: 1) We opt to freeze the gradients of both the token and positional embeddings. This is because it is relatively easy to deduce the training text tokens or the maximum sequence length without this restriction, since only the tokens and positions in the current training batch receive gradient updates on the embedding matrix. By freezing the updates of these embeddings, we create a more challenging attack scenario. 2) We ensure that the training process maintains effective gradient aggregation and consistently aims to minimize the training loss. This principle is crucial for preserving the integrity and efficiency of the training process, ensuring it remains free from suspicion. 3) We refrain from using trap weights, such as the Identity module, in our approach. However, instead of using pre-trained weights for initialization, we may opt to randomly initialize parts of the weights of certain layers. 3 Methodology Gradient inversion seeks to reconstruct the original training data by harnessing the gradients of a known network. A closer look at this method reveals several challenges. Central to these is the nonconvexity of the problem, marked by the presence of numerous local minima that complicate the pursuit of the global optimum. Additionally, the problem is over-determined because it has more equations to resolve than unknown parameters. While these equations remain consistent, they complicate the optimization process. This complexity persists even when reduced to a single-sample scenario. As a result, gradient inversion remains an NP-complete problem, implying that procuring an exact solution within a feasible time frame is difficult [Wang et al., 2023]. From a broader perspective, it is crucial to recognize that text tokens represent discrete data, different from the continuous nature of images. Additionally, the gradients of language models are not just averaged over the number of samples in the same batch, but also across the sequence length of the longest sentence in that batch. These properties make the recovery of textual training data more challenging than that of image data. To overcome these challenges, this paper endeavors to find more valuable information to guide the recovery or retrieval process. Instead of focusing solely on gradients and external information (priors), we shift our attention toward the inner workings of the model itself.
3.1 Vulnerable Module Identification Building upon the work of Wang et al. [2023], which demonstrated the feasibility of recovering training data using gradient information from a two-layer fully connected network, we are prompted to scrutinize similar vulnerabilities embedded in contemporary language model architectures. However, the increasing complexity and depth of these models render it nearly impossible to apply a similar strategy without resorting to trap weights, such as an identity module. This leads us to a critical observation when examining the architecture of widely used Transformer-based language models like BERT and RoBERTa. Notably, these models contain a module consisting of a Pooler layer and a Classifier layer, with a non-linear activation function between them. By analyzing the topological order of operations, we discern that this module can be treated as an independent two-layer fully connected network, given its position at the top of the language model, implying no deeper logical operations. With this realization, instead of attempting to recover the training samples of the entire model in one step, we propose first recovering the input information for this special module using an analytics-based method. This additional information then aids the optimization-based method (gradient inversion) with extra supervisory signals, thereby enhancing privacy attack performance. In this way, we have unearthed a novel source of valuable information, distinct from conventional gradients and priors, embedded within an internal module of the model itself. We identify this module as a vulnerable module, which harbors inherent privacy leakage risks. Nonetheless, Wang et al. [2023]'s reconstruction method, as described in Section 2.4, comes with its own set of assumptions and limitations. Most critically, it does not recover the actual features but their directions within the feature space. To ensure the efficacy of this analytics-based method, especially in its application to real pre-trained language models, we have designed several critical techniques to relax or remove these limitations. 3.2 First-stage Analytics-based Attack In the first-stage attack, we aim to recover the feature information fed into the identified vulnerable module with an enhanced analytics-based method. To better elucidate our tailored design, let us first establish the notation used in this context: let $X \in \mathbb{R}^{B \times d}$ be the input to this vulnerable module, where $B$ is the batch size and $d$ is the feature dimension. Let $W_1 \in \mathbb{R}^{d \times v}$ and $W_2 \in \mathbb{R}^{v \times n}$ denote the weights of the Pooler layer and the Classifier layer, respectively, with $v$ being the pooler dimension and $n$ the number of classes. Let $\sigma$ represent the non-linear activation function positioned after the Pooler layer. Enlarge Pooler Dimension: The initial configuration of language models often sets the pooler dimension $v$ to match the hidden dimension $d$ (for BERT-base, 768) [Devlin et al., 2018]. This setting is insufficient to guarantee the accuracy of the tensor decomposition when applying the analytics-based reconstruction method. To address this limitation, we temporarily expand the pooler dimension to match the vocabulary size of BERT; to be explicit, we set $v = |V|$. The rationale behind this change is grounded in enhancing the model's expressiveness while ensuring our modification is subtle. It is important to note that, given the expanding width of state-of-the-art language models such as GPT-3 [Brown et al., 2020], the hidden dimension has become sufficiently large to assure the accuracy of our method. Consequently, we can bypass this step, which would otherwise be necessary for BERT.
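As a schematic, the module treated as a standalone two-layer network looks as follows; the BERT-base sizes and all names are illustrative, and the widened pooler follows the text above.

```python
import torch.nn as nn

d = 768        # hidden dimension of BERT-base
v = 30522      # pooler width temporarily enlarged to the vocabulary size |V|
n = 2          # number of classes

vulnerable_module = nn.Sequential(
    nn.Linear(d, v),   # Pooler layer, widened from d x d to d x |V|
    nn.Tanh(),         # original activation; Section 5 also tries ReLU, SeLU, x^3 + x^2
    nn.Linear(v, n),   # Classifier layer
)
# The gradient of the second layer's weights depends only on sigma(W1 x), which is
# what the analytics-based stage exploits to recover the direction of x.
```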
Strategic Weight Initialization: As mentioned in Section 2.4, $m$ signifies the intermediate dimension of the two-layer network. In our setting, we should have $m = v$. However, during our computation of $\hat{T}$ as outlined in Equation 6, we noticed an anomaly in $g_j$: because $W_1$ is, by requirement, randomly initialized, a substantial portion of the gradients $g_j$ approached values close to 0. This side effect impacts the subsequent decomposition procedure. To address this issue, rather than setting $m = v$, we set $m = v - d$. This approach ensures the original pre-trained weights $W_1' \in \mathbb{R}^{d \times d}$ are retained in the new weight matrix $W_1 \in \mathbb{R}^{d \times v}$, allowing us to obtain optimal gradients for $W_1$ and $W_2$. Simultaneously, the remaining dimensions $W_1'' \in \mathbb{R}^{d \times (v-d)}$ are randomly initialized and are adequate to guarantee the accuracy of the tensor decomposition. For the Classifier layer, we utilize a strategy similar to that of the Pooler layer, setting the remaining dimensions to a constant ($i/m$, where $i$ represents the class index for the classification task). More details are in Appendix B.1. From one perspective, it may appear that we have altered the parameters. However, it is important to clarify that we have not assigned any special properties to these weights. Our approach involves initializing part of the weights randomly, which is a standard operation in model initialization. Furthermore, this random initialization is confined only to the identified vulnerable module, allowing the rest of the language model to use pre-trained weights for initialization. Consequently, this approach avoids creating trap weights and preserves the normal training roadmap. Flexible Tuning Framework: Wang et al. [2023] suggest significantly expanding the pooler dimension $m$ relative to the input dimension $d$ to reduce the tensor decomposition error. In our setting, the precise relationship between the recovery dimension $d$ and the pooler dimension $v = m + d$ remains undetermined. Recognizing these constraints, we keep $m$ constant and design an alternative method to tweak $d$. Specifically, instead of attempting to recover the full dimension $d$, our strategy focuses on recovering a dimension $d'$ with $d' \le d$. This approach sets the sub-weights $(d{:}, d'{:})$ of $W_1$ to zero. The gradient $g_j$ in Equation 6 then remains functional but is exclusively tied to the sub-weights $({:}, {:}d')$ of $W_1$. As a result, we embrace a more flexible and efficient methodology by centering our reconstruction on the feature subset $X \in \mathbb{R}^{B \times d'}$. More details can be found in Appendix B.1. Reorder Feature Information: When applying tensor decomposition techniques to retrieve features from $\hat{T}$, a significant issue arises when the batch size exceeds one: the exact order of the recovered features remains uncertain. Under adversarial conditions, one might try every conceivable permutation as a reference. However, we simplify the procedure by sequentially comparing each recovered feature to the actual input features with cosine similarity until the best order is discerned. In certain cases, a single recovered feature displayed a notably high cosine similarity with multiple actual inputs simultaneously. Interestingly, although a 1-m greedy relationship might exhibit a high correlation, it did not exceed the attack performance of a straightforward 1-1 match in the final outcome. Consequently, we adopted the 1-1 relationship to achieve the best attack result; a sketch of this matching step follows.
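Below is a hedged PyTorch sketch of the greedy 1-1 matching described above: repeatedly take the highest-cosine pair and retire both the recovered feature and the reference. The function name and return convention are our own; as in the text, the reference features are the actual inputs used for evaluation.

```python
import torch
import torch.nn.functional as F

def greedy_one_to_one_match(recovered, reference):
    """Greedy 1-1 ordering of recovered feature directions against reference features.
    Both tensors have shape (B, d); returns order[i] = reference index matched to recovered i."""
    sim = F.normalize(recovered, dim=-1) @ F.normalize(reference, dim=-1).T  # (B, B) cosines
    order = [-1] * recovered.size(0)
    for _ in range(recovered.size(0)):
        i, j = divmod(int(sim.argmax()), sim.size(1))
        order[i] = j
        sim[i, :] = -2.0   # retire the matched recovered feature...
        sim[:, j] = -2.0   # ...and the matched reference
    return order
```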
Activation Function Exploration: Our empirical observations indicate that different activation functions affect information retrieval differently. Consequently, rather than limiting our focus to the original activation function, we experiment with a range of activation functions to assess their impact on the final attack performance. Currently, we include activation functions such as Tanh, ReLU, SeLU, and a custom function defined as $\sigma(x) = x^3 + x^2$. A more detailed discussion of this design is presented in Section 5. 3.3 Second-stage Optimization-based Attack In the second-stage attack, we aim to recover the real input of the entire language model with an optimization-based method. Specifically, following Balunovic et al. [2022], we divide the optimization process into three phases: Initialization, Training, and Token Swap. In the initialization and token-swap phases, we leverage certain metrics to identify optimal starting or intermediary points for the subsequent training phase; this stage is commonly recognized as discrete optimization. In this setting, we choose a mix of metrics to guide the selection, including the gradient-matching loss and the perplexity obtained from pre-trained language models. In the training phase, we optimize the embeddings derived from the input IDs to minimize the gradient-matching loss and the cosine distance between the input of the Pooler layer and the feature information recovered in our first-stage attack; this phase falls under continuous optimization. We alternate between the continuous and discrete optimization phases to bolster the final attack performance. More details can be found in Appendix B.1.
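To illustrate the continuous phase, here is a hedged sketch of a combined objective: a gradient-matching term plus a cosine term pulling the Pooler input toward the stage-1 feature directions. The Hugging-Face-style model interface, the weighting `lam`, and all names are illustrative assumptions rather than the paper's exact implementation, and `observed_grads` is assumed to be ordered like `model.parameters()`.

```python
import torch
import torch.nn.functional as F

def stage2_loss(model, dummy_embeds, labels, observed_grads, recovered_feats, lam=0.1):
    """Gradient matching (Eq. 3 with cosine distance) + feature matching on the Pooler input."""
    out = model(inputs_embeds=dummy_embeds, labels=labels, output_hidden_states=True)
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(out.loss, params, create_graph=True)
    grad_loss = sum(1 - F.cosine_similarity(g.flatten(), t.flatten(), dim=0)
                    for g, t in zip(grads, observed_grads))
    pooler_in = out.hidden_states[-1][:, 0]   # [CLS] state that feeds the Pooler
    feat_loss = (1 - F.cosine_similarity(pooler_in, recovered_feats, dim=-1)).mean()
    return grad_loss + lam * feat_loss
```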
4 Experiments This section first presents the fundamental setup of our experiments. Subsequently, we report the results of experiments in various settings and provide an in-depth analysis from multiple perspectives. 4.1 Setup Datasets: Following previous work [Balunovic et al., 2022], our experimental design incorporates three binary text classification datasets to ensure a comprehensive evaluation. Specifically, we utilize CoLA and SST-2 from the GLUE benchmark [Warstadt et al., 2018, Socher et al., 2013, Wang et al., 2019], with sequences predominantly ranging between 5-9 and 3-13 words, respectively. Additionally, the RottenTomatoes dataset presents a more complex scenario, with sequence lengths between 14 and 27 words [Pang and Lee, 2005]. More details can be found in Appendix C.3. Within the scope of our experiments, we use a subset of 100 randomly selected sequences from the training sets of these datasets as our evaluation benchmark, a method also adopted by Balunovic et al. [2022]. Models: Experiments are primarily based on the BERT-base architecture [Devlin et al., 2018]. Consistent with Balunovic et al. [2022], we use models that have been fine-tuned on the downstream tasks for two epochs. To ensure a fair comparison, we adopt the same fine-tuned models as Balunovic et al. [2022]. As for the auxiliary language model employed to extract prior knowledge, we choose GPT-2 [Radford et al., 2019], a choice also made by Balunovic et al. [2022]. Metrics: Following Deng et al. [2021] and Balunovic et al. [2022], we evaluate attack performance with the ROUGE metric suite [Lin, 2004]. Specifically, we present the F-scores of ROUGE-1, ROUGE-2, and ROUGE-L. These metrics respectively assess the retrieval of unigrams, bigrams, and the proportion of the longest continuous matching subsequence relative to the entire sequence. We omit all padding tokens in the reconstruction and evaluation phases. Baselines: We benchmark our approach against three baselines — DLG, TAG, and LAMP — under the threat model described in Section 2.5. Among them, LAMP represents the state of the art. We employ the open-source implementation from LAMP, which encompasses the implementations of all three baselines [Deng et al., 2021, Zhu et al., 2019, Balunovic et al., 2022]. Following previous work, we assume the lengths of the sequences are known for both the baselines and our attacks, as an adversary can run the attack for all possible lengths [Balunovic et al., 2022]. Implementation: Our method is implemented on top of LAMP's framework. To ensure a fair comparison, we standardized the experimental conditions and settings when comparing our approach with the baselines. We adopt all of LAMP's hyperparameters, including the optimizer, learning rate, learning-rate schedule, regularization coefficient, and number of optimization steps. For hyperparameters unique to our method, we made selections using a grid search on BERT-base and shared them across settings. We also assume prior knowledge of the input labels. This is typical in text classification tasks with limited categories, because one can easily iterate over all possibilities. More details can be found in Appendix B.1. Table 1: Analysis of text privacy attacks on BERT-base: impact of various batch sizes and datasets. The terms R-1, R-2, and R-L represent ROUGE-1, ROUGE-2, and ROUGE-L scores, respectively. We calculate the average change relative to the baseline LAMP_COS. The symbols ↓ and ↑ denote degradation and improvement, respectively. Bold highlights the most effective attack performance under identical settings. The symbol ⋆ marks the model's original activation function.
--- CoLA ---
Method        B=1 (R-1/R-2/R-L)   B=2 (R-1/R-2/R-L)   B=4 (R-1/R-2/R-L)   B=8 (R-1/R-2/R-L)   Avg Δ (R-1/R-2/R-L)
DLG           59.3 /  7.7 / 46.2  36.9 /  2.6 / 31.4  35.3 /  1.4 / 31.9  16.5 / 0.8 /  7.9   ↓17.7 / ↓16.8 / ↓19.0
TAG           78.9 / 10.2 / 53.3  45.6 /  4.6 / 36.9  35.3 /  1.6 / 31.3  33.3 / 1.6 / 30.4   ↓6.4 / ↓15.4 / ↓10.4
LAMP_COS      84.8 / 46.2 / 73.1  57.2 / 21.9 / 49.8  40.4 /  6.4 / 36.2  36.4 / 5.1 / 34.4   — / — / —
Ours Tanh⋆    84.5 / 46.1 / 72.8  56.9 / 22.0 / 49.6  41.2 /  7.8 / 40.1  37.2 / 5.2 / 34.4   ↑0.25 / ↑0.4 / ↑0.9
Ours ReLU     84.5 / 45.9 / 72.6  57.3 / 19.3 / 49.8  42.3 /  8.4 / 40.1  37.6 / 5.6 / 34.5   ↑0.7 / ↓0.1 / ↑0.9
Ours SeLU     86.6 / 51.5 / 76.7  69.5 / 31.2 / 60.6  50.5 / 11.8 / 43.9  40.8 / 8.3 / 38.1   ↑7.1 / ↑5.8 / ↑6.5
Ours x3+x2    84.6 / 45.2 / 72.4  57.3 / 19.2 / 49.8  43.9 / 11.4 / 40.1  37.8 / 5.9 / 34.8   ↑1.2 / ↑0.5 / ↑1.0
--- SST-2 ---
DLG           57.7 / 11.7 / 48.2  39.1 /  7.6 / 37.2  38.7 /  6.5 / 36.4  36.6 /  4.7 / 35.5  ↓16.0 / ↓19.3 / ↓14.1
TAG           71.8 / 16.1 / 54.4  46.1 / 10.9 / 41.6  44.5 /  9.1 / 40.1  41.4 /  6.7 / 38.9  ↓8.0 / ↓16.2 / ↓9.7
LAMP_COS      87.7 / 54.1 / 76.4  59.6 / 26.5 / 53.8  48.9 / 17.1 / 45.4  39.7 / 10.0 / 38.2  — / — / —
Ours Tanh⋆    88.5 / 56.9 / 77.3  66.4 / 33.2 / 61.2  49.9 / 15.6 / 46.1  43.5 / 10.9 / 40.5  ↑3.1 / ↑2.2 / ↑2.8
Ours ReLU     88.5 / 56.1 / 77.1  67.3 / 32.6 / 60.8  50.4 / 14.1 / 46.2  43.5 / 11.2 / 40.8  ↑3.4 / ↑1.6 / ↑2.8
Ours SeLU     90.3 / 59.0 / 78.2  71.0 / 35.3 / 63.4  58.6 / 26.3 / 54.2  45.4 / 11.5 / 43.2  ↑7.3 / ↑6.1 / ↑6.3
Ours x3+x2    93.1 / 61.6 / 81.5  78.3 / 40.9 / 67.9  60.6 / 23.1 / 54.9  49.5 / 16.5 / 47.3  ↑11.4 / ↑8.6 / ↑9.5
--- Rotten Tomatoes ---
DLG           20.1 /  0.4 / 15.2  18.9 / 0.6 / 15.4   18.7 /  0.4 / 15.7  20.0 / 0.3 / 16.9   ↓17.4 / ↓5.4 / ↓11.5
TAG           31.7 /  2.5 / 20.1  26.9 / 1.0 / 19.1   27.9 /  0.9 / 20.2  22.6 / 0.8 / 18.5   ↓9.5 / ↓4.5 / ↓7.8
LAMP_COS      63.4 / 13.8 / 42.6  38.4 / 6.4 / 28.8   24.6 /  2.3 / 20.0  20.7 / 0.7 / 17.7   — / — / —
Ours Tanh⋆    64.2 / 15.5 / 43.8  38.8 / 5.8 / 28.9   28.3 /  2.4 / 21.2  22.4 / 1.1 / 18.9   ↑1.7 / ↑0.4 / ↑1.0
Ours ReLU     64.1 / 15.7 / 44.2  40.2 / 5.4 / 28.8   31.1 /  2.6 / 23.6  22.8 / 1.3 / 18.8   ↑2.8 / ↑0.5 / ↑1.6
Ours SeLU     71.9 / 19.2 / 48.7  48.1 / 8.2 / 34.2   33.0 / 4.23 / 25.3  24.6 / 2.0 / 20.6   ↑7.6 / ↑2.6 / ↑4.9
Ours x3+x2    72.2 / 21.0 / 49.3  44.6 / 7.0 / 31.8   29.9 /  3.5 / 24.3  23.6 / 1.7 / 19.8   ↑5.8 / ↑2.5 / ↑4.0

4.2 Results and Analysis We present the experimental results in Table 1. These findings demonstrate that our approach outperforms all baselines (DLG, TAG, and LAMP) across various datasets and batch sizes. Examining the impact of batch-size variations, we notice that launching an attack becomes more challenging as the batch size increases: all attack methods, including ours, exhibit a decline in attack performance. However, our method brings a more noticeable improvement at batch sizes 2 and 4 than at batch sizes 1 and 8. We posit that for a batch size of 1, where the gradient is averaged solely over tokens, the benefit of incorporating the feature information is less evident because the gradient information still plays the leading role in the optimization process. For a batch size of 8, the scale of improvement is also not pronounced; we explore the underlying reason in Section 5.
Figure 2: Cosine similarity between recovered features and ground truth for BERT-base on SST-2 across varying dimensions (50-750 in 50-step intervals) and batch sizes (1, 2, 4). Turning our attention to variations in sequence length across datasets, we notice a clear trend: as sequences get longer, the benefit from feature information at a batch size of 1 becomes more pronounced. Specifically, for the CoLA dataset, with token counts between 5 and 9, we see an average improvement in ROUGE metrics of 3%. This improvement grows to 5% for the SST-2 dataset, with token counts from 2 to 13. For the Rotten Tomatoes dataset, which features even longer sequences with token counts ranging from 14 to 27, the average ROUGE improvement further increases to 8%. This suggests a correlation between sequence length and the extent of the improvement observed. Moreover, when the batch size exceeds one, the benefits observed on these three datasets are consistently notable. Recall that gradient averaging occurs only over tokens at a batch size of 1; this implies that with longer sentences, the gradient information becomes less effective, leading to greater benefits from feature-level supervisory signals. When batch sizes are larger than 1, averaging happens over the number of tokens and sentences simultaneously, and our method consistently yields pronounced benefits across sequences of different lengths. Our findings further reinforce the idea that relying exclusively on gradient information diminishes in efficacy with larger batch sizes and longer sequences. In short, the experiments consistently show that the use of feature information significantly enhances the success of privacy attacks on language models. This evidence further confirms that current language models, particularly their Pooler and Classifier layers, inherently possess vulnerabilities that pose risks of privacy leakage. 5 Discussion Impact of Activation Function: As outlined in Section 3.2, we replaced the original Tanh activation function with ReLU, SeLU, and a custom activation function $\sigma(x) = x^3 + x^2$ to investigate how different activation functions affect attack performance in our strategy. Table 1 presents the performance of these attacks under various settings. SeLU and $\sigma(x) = x^3 + x^2$ consistently yield significant improvements in attack performance, while the enhancements seen with Tanh and ReLU are relatively less pronounced. We speculate that the n-th derivatives of the latter activation functions (Tanh and ReLU) lead to a zero expectation ($\mathbb{E}_{Z \sim \mathcal{N}(0,1)}[\sigma^{(n)}(Z)] = 0$), thereby affecting the estimation of $T$ as explained in Equation 6. In contrast, the former activation functions, whose n-th derivatives are neither odd nor even, do not exhibit this issue, potentially resulting in a more pronounced risk of privacy leakage. Another interesting phenomenon is that on the CoLA dataset, all attack variants show only minor improvements, except for those utilizing SeLU. This suggests the existence of an unknown correlation between the datasets and the feature information recovered under different activation functions. Further details and discussion are provided in Appendix B.3. Impact of Recovery Dimension: In Section 3.2, we propose fixing $m$ and adjusting $d'$ to identify the optimal mapping between $d'$ (where $d' < d$) and $m$.
Accordingly, we conduct experiments using BERT-base with various batch sizes to investigate the quality of the recovered feature information by calculating its cosine similarity with the ground truth. The results are illustrated in Figure 2. Our findings suggest that when the batch size is 1, the recovered quality gradually degrades as the recovery dimension $d'$ increases, yet it remains as high as 0.99 across all configurations. However, this pattern does not hold when the batch size exceeds 1. We also observe that the recovered quality consistently declines as the batch size increases. We hypothesize that multiple inputs might exhibit some undisclosed dependencies, particularly among features within the deeper layers of language models, thereby affecting the efficacy of the tensor decomposition. For simplicity, we set $d' = 100$ across all experiments; under adversarial conditions, however, attackers might experiment with various $d'$ settings to enhance their attack performance. Impact on Other Models: To demonstrate the effectiveness of our attack method on various model architectures, we also apply it to RoBERTa [Liu et al., 2019]. While RoBERTa shares similarities with BERT, it distinguishes itself through unique training configurations and datasets. Notably, unlike BERT-base, RoBERTa does not have a Pooler layer; instead, it employs a classification head composed of two linear layers. In our experiments, we treat the first layer as an analogous Pooler layer and endeavor to reconstruct its input first. All the models used in this experiment are from Hugging Face, contributed by TextAttack [Morris et al., 2020]. As for the auxiliary model, we employ RoBERTa itself due to a specific challenge: we could not locate another generative model that uses the same tokenizer as RoBERTa. It is essential to note, however, that we use exactly the same settings for the baselines and our method. We present the experimental results in Table 2. While the overall attack performance decreases significantly due to the auxiliary masked language model, our approach still outperforms the baseline. Furthermore, in numerous instances (as illustrated in Table 2), our method appears to restore the essence of the reference sample almost flawlessly; however, owing to the limitations of the evaluation metrics, such reconstructions may receive equal or even worse scores than clearly inferior ones. Hence, we also employ the cosine similarity between the SBERT embeddings of the reference and recovered texts to assess attack performance [Reimers and Gurevych, 2019]. Table 2: Text privacy attack on RoBERTa-base. R-1, R-2, and R-L are the same as in Table 1. CosS indicates the average cosine similarity between references and recovered samples.
Dataset | Method | R-1  | R-2 | R-L  | CosS | Recovered sample
CoLA (reference sample: The box contains the ball)
        | LAMP   | 15.5 | 2.6 | 14.4 | 0.36 | likeTHETw box contains divPORa
        | Ours   | 17.4 | 3.8 | 15.9 | 0.41 | like Mess box contains contains balls
SST2 (reference sample: slightly disappointed)
        | LAMP   | 20.1 | 2.2 | 15.9 | 0.56 | likesmlightly disappointed a
        | Ours   | 19.7 | 2.1 | 16.8 | 0.59 | like lightly disappointed a
Toma (reference sample: vaguely interesting, but it's just too too much)
        | LAMP   | 19.9 | 1.6 | 15.1 | 0.48 | vagueLY', interestingtooMuchbuttoojusta
        | Ours   | 21.5 | 1.8 | 16.0 | 0.51 | vagueLY, interestingBut seemsMuch Toolaughs

6 Related Work While federated learning is designed for data privacy, recent studies show that model updates (gradients and parameters) can be intentionally leveraged to uncover sensitive data [Phong et al., 2017, Zhao et al., 2020, Zhu and Blaschko, 2020, Zhu et al., 2019]. This susceptibility is especially pronounced in the field of CV [Huang et al., 2021, Geiping et al., 2020, Yin et al., 2021, Jeon et al., 2021]. Textual data poses unique challenges in the context of private data attacks, especially given the prevalence of Transformer architectures. In Transformers, gradients are averaged across sequences and tokens, which inherently masks specific token details. Furthermore, the inputs, expressed as discrete token IDs, stand in stark contrast to the continuous features found in image data. Nonetheless, numerous studies have highlighted the risks associated with textual information. Fowl et al. [2021, 2022] and Boenisch et al. [2023] distribute networks with embedded backdoors or trap parameters that facilitate easy reconstruction of training data. However, one can employ prefixed, recognized architectures to counter the former attack and guard against potential backdoor threats; for the latter, consistently monitoring the statistics of features and weights across different layers can help detect malicious parameters [Balunovic et al., 2022]. Some approaches assume a trustworthy central server. Even then, the shared parameters and gradients can still be leveraged to extract private data [Zhu et al., 2019]. For example, the methods introduced by Zhu et al. [2019] and Deng et al. [2021] employ optimization-based strategies using finely tuned objective functions for data retrieval, and Balunovic et al. [2022] leverage the prior knowledge of large language models for data recovery. However, due to the self-imposed limitation that the server is benign and makes no changes to the model, these methods tend to be less effective with larger batch sizes. Notably, the method introduced by Gupta et al. [2022] remains effective even with considerable batch sizes; nevertheless, this vulnerability can easily be defended against by suspending updates of the embedding matrix." }, { "url": "http://arxiv.org/abs/2310.13191v3", "title": "Towards Robust Pruning: An Adaptive Knowledge-Retention Pruning Strategy for Language Models", "abstract": "The pruning objective has recently extended beyond accuracy and sparsity to\nrobustness in language models. Despite this, existing methods struggle to\nenhance robustness against adversarial attacks when continually increasing\nmodel sparsity and require a retraining process. As humans step into the era of\nlarge language models, these issues become increasingly prominent. This paper\nproposes that the robustness of language models is proportional to the extent\nof pre-trained knowledge they encompass.
Accordingly, we introduce a\npost-training pruning strategy designed to faithfully replicate the embedding\nspace and feature space of dense language models, aiming to conserve more\npre-trained knowledge during the pruning process. In this setup, each layer's\nreconstruction error not only originates from itself but also includes\ncumulative error from preceding layers, followed by an adaptive rectification.\nCompared to other state-of-the-art baselines, our approach demonstrates a superior\nbalance between accuracy, sparsity, robustness, and pruning cost with BERT on\nthe SST2, IMDB, and AGNews datasets, marking a significant stride towards robust\npruning in language models.", "authors": "Jianwei Li, Qi Lei, Wei Cheng, Dongkuan Xu", "published": "2023-10-19", "updated": "2024-01-11", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.AI" ], "main_content": "Introduction Pruning is a widely recognized compression method employed to decrease model size and accelerate model inference (Frankle and Carbin, 2018; Chen et al., 2020; Prasanna et al., 2020; Chen et al., 2021). In the age of large language models (Andrew and Gao, 2007; Brown et al., 2020; Chowdhery et al., 2022; OpenAI, 2023; Touvron et al., 2023; Ouyang et al., 2022; Smith et al., 2022), the necessity of pruning has increased because it greatly reduces deployment costs (Frantar and Alistarh, 2023). In addition to the significant computational cost, the robustness of language models has emerged as a crucial factor that demands attention, primarily because models need to remain resilient against adversarial attacks even in challenging real-world circumstances (Tran et al., 2022; Wang et al., 2023). Therefore, exploring robust pruning strategies against adversarial attacks in language models could potentially yield a substantial impact (Xu et al., 2021; Du et al., 2023). Recent research has extended the pruning of language models beyond accuracy and sparsity, with an emphasis on the trade-off between accuracy, sparsity, robustness, and cost (Du et al., 2023; Xu et al., 2021; Liang et al., 2021; Xi et al., 2022). Zheng et al. (2022) propose a joint optimization objective to guide pruning and adversarial training simultaneously. Their approach views the identified subnetworks as robust tickets, which can be trained as normal and offer enhanced robustness. Despite achieving state-of-the-art results on target datasets, these methods still display vulnerabilities, as evidenced by a significant gap between clean accuracy (i.e., accuracy without adversarial attacks) and accuracy under attack. Moreover, performance also declines rapidly when sparsity exceeds a moderate level. Expanding on their work, Xi et al. (2022) propose using robust early-bird tickets to reduce the computational cost of adversarial training. However, they face similar challenges regarding the trade-off between robustness and sparsity. In summary, existing robust pruning works often demonstrate limited sparsity, insufficient robustness, and expensive cost, indicating the ongoing challenge of balancing accuracy against the other three aspects. To address this challenge, this paper investigates why language models are susceptible to adversarial attacks (Wang et al., 2021; Garg and Ramakrishnan, 2020; Jin et al., 2020). Previous studies have indicated that language models frequently capitalize on biases and artifacts inherent in datasets as predictive shortcuts, which impedes reasoning ability and the development of advanced semantic comprehension
(Du et al., 2021; Niven and Kao, 2019; McCoy et al., 2020; Du et al., 2023). This reliance leads to a more severe loss of pre-trained knowledge during the pruning process. Furthermore, adversarial samples in Natural Language Processing (NLP) are crafted by replacing components of sentences with semantically similar counterparts, thereby retaining high semantic similarity for the entire sentence (Li et al., 2020a; Ren et al., 2019; Jin et al., 2020). Consequently, language models that depend on spurious features from particular words cannot defend against adversarial attacks constructed by replacing those words with semantically similar alternatives. To put it more plainly, this primarily stems from the fact that, without pre-trained knowledge, the sparse language model treats a substitute word simply as an integer identifier. Based on the above observations, we explore the following questions in this paper: Question 1. What is the key for sparse language models to defend against adversarial attacks? This paper proposes that the robustness of sparse language models is directly proportional to the amount of pre-trained knowledge retained after pruning. Intuitively, the robustness of a sparse language model is fundamentally tied to its capability to distill advanced semantic features from input sentences. This capability is largely established during the pre-training phase of dense language models, emphasizing the pivotal role of the acquired semantic knowledge. Extensive experiments support this statement. Question 2. How can we efficiently prevent the loss of pre-trained knowledge during pruning to preserve or even enhance robustness? Previous research has demonstrated that pruning exacerbates a model's dependency on spurious features (Xu et al., 2021; Du et al., 2023). We further confirm that traditional pruning methods lead to a considerable loss of pre-trained knowledge and poor robustness. To prevent this, we propose a pruning approach that minimizes damage to the embedding space and feature space of dense language models, striving to replicate the features of each layer completely. Specifically, for each layer, we iteratively eliminate a single weight at a time and counterbalance the loss by updating the remaining weights based on the Hessian Matrix. In this setup, the reconstruction error at each layer arises not only from the layer itself but also incorporates the accumulated error from preceding layers; this is achieved by adaptively updating the pruning-dependent information in accordance with the sparse output generated by previous layers, while continually correcting these errors collectively. Moreover, our method, being a post-training approach, is cost-effective for current language models, as it circumvents rigorous retraining processes. Extensive experiments show that our approach achieves a better trade-off between accuracy, sparsity, robustness, and pruning cost on SST2, AGNews, and IMDB compared with other state-of-the-art methods. 2 Related Work Textual Adversarial Attacks and Defense. Textual adversarial attacks pose a significant challenge to the robustness of language models. These attacks, formulated by carefully altering certain segments of sentences with semantically similar counterparts, aim to fool language models (Jin et al., 2020; Li et al., 2020a).
To enhance the robustness of language models and defend against adversarial attacks, a range of potent defensive strategies, such as adversarial training, has been proposed (Madry et al., 2017; Zhu et al., 2019; Li and Qiu, 2021). Different from this line of research, which focuses on dense models, we explore robustness in the context of language model pruning. Robust Model Pruning. Prior studies indicate that sparse models tend to underperform on Compression Identified Exemplars (CIE), suggesting that the pruning process exacerbates the inherent algorithmic biases hidden within datasets (Hooker et al., 2020). In Computer Vision (CV), simultaneous optimization of model pruning and adversarial training has been advocated as an effective solution to this issue (Gui et al., 2019; Ye et al., 2019; Sehwag et al., 2020; Vemparala et al., 2021). In NLP, Du et al. (2023) propose preventing model overfitting on easy samples by leveraging sample difficulty in the context of pruning. Concurrently, Xu et al. (2021) suggest generating robust subnetworks through Knowledge Distillation and Post-training Quantization. Taking a different approach, Liang et al. (2021) strive to enhance model generalizability by extracting super tickets, while Zheng et al. (2022) and Xi et al. (2022) seek to identify robust tickets. Despite recent advancements, achieving enhanced robustness alongside increased sparsity remains a challenge. This paper significantly promotes a better trade-off among accuracy, robustness, sparsity, and pruning cost. 3 Preliminaries 3.1 Shortcut Learning and Mitigation Recent studies provide evidence that language models are inclined to capitalize on inherent biases and spurious features present in datasets, using these as convenient predictive shortcuts (Niven and Kao, 2019; Du et al., 2021; McCoy et al., 2020). This tendency impedes the development of the more advanced semantic understanding and reasoning capacity necessary for NLU tasks. Various preliminary studies have begun to address this bias issue through techniques such as adversarial training and posterior regularization (Stacey et al., 2020; Chen et al., 2021). From a unique perspective, we defend language models against adversarial attacks by mitigating this shortcut issue through weight averaging. This method is elaborated further in Section 4.2. 3.2 Pruning with Hessian Matrix Drawing inspiration from LeCun et al. (1989) and Hassibi et al. (1993), previous work has provided mathematical formulations for effectively eliminating a single weight from a layer and updating the remaining weights to correct the resulting error according to information from the Hessian Matrix (Frantar and Alistarh, 2022). The equations are presented below: $$w_p = \arg\min_{w_p} \frac{w_p^2}{[H^{-1}]_{pp}}, \qquad w_r \leftarrow w_r - \frac{w_p}{[H^{-1}]_{pp}} \cdot H^{-1}_{:,p} \quad (1)$$ where $H$ is the Hessian Matrix, $w_p$ represents the single weight to be pruned, and $w_r$ denotes the remaining weights to be updated. The notation $[H^{-1}]_{pp}$ refers to the $p$th diagonal entry of the inverse Hessian Matrix, and $H^{-1}_{:,p}$ represents its $p$th column. However, the inverse of the Hessian Matrix must be updated at each weight removal, which is exceedingly costly. Frantar and Alistarh (2022) observe that Hessian values across different rows of the weight matrix are independent, as the removal of a single weight only affects the output of its respective row.
Accordingly, they simplify the calculation of the Hessian Matrix $H$ and leverage Gaussian elimination to accelerate the update of $H^{-1}$, as described below: $$H = XX^\top, \qquad H^{-1}_{-p} = \left( H^{-1} - \frac{1}{[H^{-1}]_{pp}} H^{-1}_{:,p} H^{-1}_{p,:} \right)_{-p} \quad (2)$$ Here, $-p$ denotes the removal of the single weight at index $p$. A more detailed explanation can be found in the Appendix.
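A minimal NumPy sketch of one step of this removal-and-update rule (Eqs. 1-2) for a single weight row follows. It assumes $H = XX^\top$ has been built from calibration inputs with a small diagonal damping so that it is invertible; the function name and the row-at-a-time framing are our own, exploiting the per-row independence noted above.

```python
import numpy as np

def prune_one_weight(w, H_inv):
    """One Eq. (1)/(2) step for a single row w: remove the cheapest weight,
    compensate the remaining ones, and downdate the inverse Hessian."""
    scores = w ** 2 / np.diag(H_inv)             # Eq. (1): loss of removing each weight
    p = int(np.argmin(scores))
    w = w - (w[p] / H_inv[p, p]) * H_inv[:, p]   # Eq. (1): compensate remaining weights (w[p] -> 0)
    H_inv = H_inv - np.outer(H_inv[:, p], H_inv[p, :]) / H_inv[p, p]  # Eq. (2): Gaussian elimination
    keep = np.arange(w.size) != p                # the "-p" operation: drop index p
    return w[keep], H_inv[np.ix_(keep, keep)]
```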
B: Second, 3 we apply our adaptive pruning method to generate robust and sparse language models in a layer-wise setting. Specifically, we optimize the 1 original independent pruning process of each layer to 2 an adaptive way. This requires subsequent layers to update the Hessian Matrix and the optimal dense weight according to the sparse outputs of preceding layers, thereby inheriting and correcting the accumulated error together. rather than capturing sophisticated semantic features. Thus, when sparse language models fail to defend against adversarial attacks, it becomes challenging to determine whether the failure stems from the pruning methods or inherent issues within the dense model. We circumvents this risk by constructing a robust and dense model before pruning. Inspired by Croce et al. (2023) and Wortsman et al. (2022), we generate a robust language model via weight averaging. The key idea is to train multiple models with different hyperparameters and settings, allowing each model to capture distinct nuances of the data and generalize in diverse ways. By averaging their weights, we can create a robust model that benefits from collective knowledge. Specifically, we order these models in descending order based on the accuracy under attack. Then, we selectively average the weights that contribute to the final robustness. Finally, we obtain a robust and dense model as the foundation of subsequent operations. This approach ensures that any detected vulnerabilities in sparse language models result from the pruning process, eliminating the possibility of them arising from spurious features. More details can be found in Algorithm 3. 4.3 Ada-Pruning for Robust Sparse Model 4.3.1 Notation To accurately replicate the dense model\u2019s behavior regarding embedding space and feature space of each layer, we use the method described in Section 3.2 as the backbone. However, its layer-wise setting, which treats each layer as an independent pruning problem, introduces limitations in realizing a globally optimal solution. To elaborate, let\u2019s consider a single layer as an example in the following sections. We\u2019ll use Xl, Wl, and Yl to represent the input, weight, and output of the layer, respectively, with the subscript l indicating lth layer. The use of a hat, as seen in \u02c6 Xl, \u02c6 Wl, or \u02c6 Yl, represents the input, weight, or output within a sparse context. 4.3.2 Adaptive Hessian Matrix After completing the pruning of the lth layer, a certain amount of error stemming from the sparse matrix operation inevitably arises. 
No matter how minor this error might be, it is important to realize that the output of this layer, denoted as $\hat{Y}_l$, influences the input of the subsequent layer, denoted as $\hat{X}_{l+1}$. As a result, the initial Hessian Matrix for the $(l+1)$th layer, defined as $H_{l+1} = X_{l+1} X_{l+1}^\top$, becomes outdated. It is thus crucial to recalculate the Hessian Matrix to obtain more precise pruning-dependent information. We therefore suggest adaptively updating the Hessian Matrix for the subsequent layer after pruning the preceding layers. 4.3.3 Adaptive Dense Weight We also note that the loss incurred by removing a single weight depends on the current weight $W_l$ of the corresponding layer, as derived from Equation 1. However, the original dense weight $W_l$ is inevitably no longer optimal for the expected dense output $Y_l$ once the preceding layers ($0$th, ..., $(l-1)$th) have been pruned. Given that the input $X_l$ has been altered to $\hat{X}_l$ due to the accumulated error, it would be suboptimal to continue using the original weight $W_l$ to calculate the pruning loss for the current layer. To be clear, the result of $\hat{X}_l W_l$ could substantially deviate from the original output $Y_l$, which is incompatible with our goal of producing an output $\hat{Y}_l$ identical to the original $Y_l$ during pruning. Thus, it is essential to update the dense weight so that $\hat{X}_l \bar{W}_l$ approximates the original output $Y_l$ more closely. Here, $\bar{W}_l$ denotes the updated dense weight, derived as follows: $$\bar{W}_l = (\hat{X}_l^\top \hat{X}_l)^{-1} \hat{X}_l^\top Y_l \quad (5)$$ where $\top$ denotes the transpose and $-1$ the matrix inverse. To ensure that $\hat{X}_l^\top \hat{X}_l$ is invertible, we also add a regularization term, such as $10^{-4}$, to the diagonal entries of the matrix. Finally, we can compute the pruning loss more accurately with the updated weight $\bar{W}_l$. We also calibrate the optimal weights for non-pruned layers (such as the pooler layer and classification layer in BERT) with Equation 5, aligning the output of these dense layers with the altered input. Algorithm 1 provides the detailed steps of the implementation, offering a comprehensive overview of our methodology, and a toy sketch of the Equation 5 calibration follows it. We also provide a comprehensive analysis of the computational complexity of our method in the Appendix.

Algorithm 1: Prune linear layers {l1..ln} of BERT with target sparsity s and calibration data X
Require: collect the original Xi, Wi, Yi for each layer li
procedure LAYERWISEPRUNING({l1..ln})
  for i = 1 to n do
    Wi, Xi, Yi <- li
    # adaptive update
    Hi^-1 <- (Xi Xi^T)^-1
    if i != 1 then
      Wi <- Hi^-1 Xi^T Yi                        # Equation 5: refit the dense weight
    end if
    # pruning with the Hessian Matrix
    din <- input dimension
    k <- int(din * s)
    for j = 1 to k do                            # parallel across rows
      p <- argmin_{p in din} [Wi]p^2 / [Hi^-1]pp
      Wi <- Wi - [Hi^-1]:,p [Wi]p / [Hi^-1]pp
      Hi^-1 <- Hi^-1 - [Hi^-1]:,p [Hi^-1]p,: / [Hi^-1]pp
      remove [Wi]p from Wi
    end for
    # adaptive update
    Yi <- Wi Xi
    Xi+1 <- post-process(Yi)
  end for
  return {W1..Wn}
end procedure
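The following NumPy sketch shows the Equation 5 calibration as a ridge-regularized least squares; the function name is ours, the row layout (rows = calibration tokens) is an assumption, and `eps` matches the $10^{-4}$ damping mentioned above.

```python
import numpy as np

def calibrate_dense_weight(X_sparse, Y_dense, eps=1e-4):
    """Eq. (5): refit the dense weight so the perturbed input X_hat still reproduces Y."""
    d = X_sparse.shape[1]
    gram = X_sparse.T @ X_sparse + eps * np.eye(d)   # X_hat^T X_hat, made invertible
    return np.linalg.solve(gram, X_sparse.T @ Y_dense)
```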
5 Experiments
We first compare our method against several baselines, assessing accuracy, robustness, sparsity, and cost. Then, an ablation study is performed to elucidate the contribution of each part of our method. Finally, we augment our core findings with additional experiments and analyses to further illuminate our method.

5.1 Baselines and Datasets
Consistent with previous works (Devlin et al., 2018; Du et al., 2023; Xu et al., 2021; Zheng et al., 2022; Xi et al., 2022), BERT_base serves as the foundational model for all our experiments. We compare our approach with various baselines, including: RobustT (Zheng et al., 2022), which optimizes the pruning mask and input perturbation simultaneously for robust tickets; Bag-of-Tricks (Xu et al., 2021), which improves sparse model robustness via knowledge distillation and post-training quantization; RMC (Du et al., 2023), a technique preventing sparse language models from overfitting on easy samples using sample difficulty; and SuperTicket (Liang et al., 2021), which identifies a super mask during pruning to reduce variance while preserving bias. Our evaluation primarily involves three text classification datasets: Internet Movie Database (IMDB, Maas et al. 2011), AG News Corpus (AGNEWS, Zhang et al. 2016), and Stanford Sentiment Treebank for binary classification (SST-2, Socher et al. 2013).

Methods | #Param | Re-T | SST2 (Acc / Aua / Asr) | AGNEWS (Acc / Aua / Asr) | IMDB (Acc / Aua / Asr)
Fine-tune | 85M | Y | 92.3 / 12.7 / 86.2 | 94.7 / 19.1 / 80.0 | 95.1 / 7.4 / 92.2
FreeLB | 85M | Y | 91.5 / 28.3 / 69.1 | 94.8 / 37.8 / 60.1 | 94.3 / 36.2 / 61.6
Weight Average | 85M | Y | 91.4 / 30.4 / 66.75 | 94.4 / 48.5 / 48.6 | 95.2 / 44.4 / 53.4
sparsity <= 30%
SuperTicket | 72M | Y | 93.2 / 14.3 / 84.7 | 94.8 / 9.7 / 89.8 | 95.0 / 17.3 / 81.8
Bag-of-Tricks | 60M | N | 86.3 / 25.7 / 70.3 | 87.3 / 31.8 / 63.6 | 85.4 / 24.6 / 71.2
RMC | 60M | Y | 91.2 / 17.6 / 80.7 | 94.2 / 21.4 / 77.3 | 93.9 / 22.3 / 76.3
RobustT | 60M | Y | 90.8 / 28.9 / 68.2 | 94.9 / 33.4 / 64.8 | 92.1 / 55.7 / 39.5
Ours | 60M | N | 90.2 / 42.3 / 53.1 | 93.8 / 48.6 / 48.2 | 94.6 / 57.3 / 39.4
sparsity = 50%
Bag-of-Tricks | 43M | N | 87.2 / 21.6 / 75.2 | 90.6 / 33.5 / 63.0 | 91.3 / 21.2 / 76.8
RMC | 43M | Y | 90.8 / 9.7 / 89.3 | 94.1 / 21.2 / 77.5 | 94.1 / 14.7 / 84.4
RobustT | 43M | Y | 90.5 / 24.8 / 73.9 | 94.8 / 28.8 / 69.7 | 93.2 / 31.5 / 66.2
Ours | 43M | N | 88.31 / 43.1 / 51.2 | 93.4 / 48.5 / 48.1 | 94.2 / 53.2 / 43.6
sparsity = 87.5%
Bag-of-Tricks | 11M | N | 85.9 / 17.8 / 85.7 | 89.4 / 11.3 / 87.4 | 87.7 / 8.9 / 89.9
RMC | 11M | Y | 86.3 / 3.6 / 95.8 | 92.1 / 4.5 / 95.5 | 91.3 / 11.2 / 87.7
RobustT | 11M | Y | 85.2 / 7.8 / 90.8 | 91.8 / 8.3 / 91.0 | 89.2 / 6.5 / 92.7
Ours | 11M | N | 85.6 / 37.6 / 56.1 | 92.4 / 41.3 / 55.3 | 91.6 / 35.6 / 61.1

Table 1: Summary of Adversarial Robustness Assessment on BERT_base. The entry highlighted with an orange background denotes our robust and dense model, which serves as the initialization for all robust pruning methods except RobustT (which is generated from the pre-trained weights). Our method consistently outperforms all baselines in terms of the Aua% and Asr% metrics. Regarding Acc%, there is a minor decrease in our method's performance at lower sparsity levels, yet it regains superiority at higher sparsity levels. The highest performance is highlighted in bold. The column Re-T indicates whether the method necessitates model retraining. Consistent with previous research, we exclude embedding matrices from the calculation of the parameter count.

5.2 Robustness Evaluation
We assess our model's effectiveness against adversarial attacks using TextFooler, which substitutes crucial words in sentences with semantically similar synonyms (Jin et al., 2020). Following previous works (Zheng et al., 2022; Xi et al., 2022), our evaluations utilize key metrics such as Clean Accuracy Acc% (accuracy on clean test data), Accuracy Under Attack Aua% (accuracy when subjected to adversarial attacks), and Attack Success Rate Asr% (ratio of successful text perturbations to total attempts). A robust method is expected to show higher clean accuracy and accuracy under attack, coupled with a lower attack success rate. We also evaluate more attack methods in the Appendix.
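Given per-example outcomes from an attack run, the three metrics reduce to simple ratios; the sketch below assumes boolean bookkeeping lists, since the exact output format depends on the attack toolkit.

    def robustness_metrics(clean_correct, attacked_correct):
        """Compute Acc%, Aua%, and Asr% from per-example outcomes.

        clean_correct[i]: model classifies example i correctly on clean text.
        attacked_correct[i]: model is still correct after the attack
            (attacks are attempted only on correctly classified examples).
        """
        n = len(clean_correct)
        acc = 100.0 * sum(clean_correct) / n
        aua = 100.0 * sum(attacked_correct) / n
        # Asr: fraction of initially correct examples the attacker flips.
        attempts = sum(clean_correct)
        flips = sum(c and not a
                    for c, a in zip(clean_correct, attacked_correct))
        asr = 100.0 * flips / max(attempts, 1)
        return acc, aua, asr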
5.3 Implementation Details
We first employ the technique described in Section 4.2 to generate a robust language model for each dataset. Subsequently, we use our method to prune these robust language models with a small calibration dataset. All experimental results are averaged over five trials, each initiated with a different seed. Furthermore, we assess performance under three different levels of sparsity: 30%, 50%, and 87.5%. Additional implementation details can be found in the Appendix.

5.4 Main Result on Robustness Evaluation
Table 1 provides a comprehensive comparison of various robust pruning methods, evaluated across three distinct datasets (SST2, AGNEWS, and IMDB) and under varying degrees of model sparsity. Key observations are as follows: 1) Our strategy can even enhance the robustness of language models after pruning; we believe this enhancement stems from the regularization effect of the sparse architecture. 2) Our strategy distinguishes itself by consistently surpassing other methods in Aua% and Asr%, regardless of the dataset or the level of sparsity. These results imply that our strategy effectively maintains robustness during the pruning of language models. 3) Impressively, our method achieves higher robustness even with fewer parameters than several other approaches, which further underscores the effectiveness of our robust pruning method. 4) Although the Acc% of our method is generally lower than other baselines at lower sparsity levels, the improvement in robustness (reflected in Aua% and Asr%) far outweighs the degree of accuracy degradation. 5) At higher levels of sparsity, our method outperforms the other baselines across all metrics. 6) Our method does not require model retraining, confirming that our approach offers a better trade-off between accuracy, robustness, sparsity, and pruning cost.

Figure 2: Attention Score Visualisation in BERT_base. We select an adversarial sample ("it's a bewitching and often repercussions journey.") from SST2 and visualize the attention scores in the robust and dense model (2b, 2e), the sparse language model generated with IMP+FreeLB (2a, 2d), and the sparse language model created using our method (2c, 2f). Figures 2a, 2b, and 2c depict the attention scores from the first transformer block of BERT_base, while Figures 2d, 2e, and 2f show scores from the last transformer block. Evidently, the attention scores produced by our method align more closely with those from the robust and dense model.

Methods | #Param | Re-T | SST2 (Acc / Aua / Asr) | AGNEWS (Acc / Aua / Asr) | IMDB (Acc / Aua / Asr)
Fine-tune | 85M | Y | 92.3 / 12.7 / 86.2 | 94.7 / 19.1 / 80.0 | 95.1 / 7.4 / 92.2
Weight Average | 85M | Y | 91.4 / 30.4 / 66.75 | 94.4 / 48.5 / 48.6 | 95.2 / 44.4 / 53.4
IMP | 43M | Y | 92.6 / 4.8 / 94.8 | 94.9 / 7.1 / 92.5 | 94.1 / 7.7 / 91.8
IMP + FreeLB | 43M | Y | 92.4 / 7.9 / 91.5 | 94.3 / 9.2 / 90.2 | 93.8 / 14.3 / 84.8
LTH | 43M | Y | 91.6 / 2.8 / 96.9 | 93.5 / 10.1 / 89.2 | 93.2 / 4.6 / 95.1
LTH + FreeLB | 43M | Y | 91.7 / 9.8 / 89.3 | 93.2 / 12.3 / 86.8 | 93.1 / 9.5 / 89.8
Ours | 43M | N | 88.31 / 43.1 / 51.2 | 93.4 / 48.5 / 48.1 | 94.2 / 53.2 / 43.6

Table 2: Ablation Study with Pruning Method Replacement. We replace our pruning method with the most popular alternatives (IMP and LTH), supplemented with adversarial training (FreeLB). As in Table 1, the orange entry is used for model initialization. Once again, our method outperforms the others in preserving or even enhancing robustness.
Beyond BERT_base, our methodology also extends to BERT_large, a model with 330M parameters. The resulting performance, presented in Table 3, reaffirms the superiority of our method over the baselines. Moreover, we explore the effectiveness of our method in a structured pruning context, and once again our approach outperforms the state-of-the-art method EarlyRobust (Xi et al., 2022). More details can be found in the Appendix.

Methods | #Param | Re-T | SST2 (Acc / Aua / Asr) | AGNEWS (Acc / Aua / Asr) | IMDB (Acc / Aua / Asr)
Weight Average | 309M | Y | 93.5 / 36.4 / 61.1 | 96.2 / 56.5 / 41.3 | 95.9 / 48.4 / 49.6
Bag-of-Tricks | 155M | N | 90.3 / 27.6 / 69.4 | 93.1 / 35.5 / 61.9 | 93.4 / 29.3 / 68.6
RMC | 155M | Y | 92.6 / 14.7 / 84.1 | 95.4 / 19.2 / 79.9 | 95.8 / 16.7 / 82.6
RobustT | 155M | Y | 92.1 / 29.8 / 67.7 | 95.1 / 32.8 / 65.6 | 95.2 / 31.9 / 66.5
Ours | 155M | N | 91.7 / 47.1 / 48.6 | 95.5 / 53.5 / 44.0 | 95.3 / 55.8 / 41.4

Table 3: Summary of Adversarial Robustness Assessment on BERT_large. Similarly, the entry highlighted with an orange background is used for model initialization. Once again, our method consistently outperforms all baselines in terms of the Aua% and Asr% metrics.

5.5 Ablation Study
To elucidate the contribution of each part of our approach, we conduct an ablation study with the following settings: we replace our pruning technique with the methods known as LTH and IMP (Frankle et al., 2020; Frankle and Carbin, 2018) and supplement them with the additional adversarial training method FreeLB (Zhu et al., 2019). The results are presented in Table 2, from which we make the following key observations: 1) Sparse language models generated by traditional pruning methods perform even worse than the vanilla fine-tuned dense model, which highlights the challenges associated with robust pruning. 2) Our approach consistently generates more robust sparse language models than conventional pruning methods, even when the latter are supplemented with adversarial training. 3) We conjecture that the limited effect of adversarial training here stems from the discrete nature of word tokens and the substantial loss of pre-trained knowledge during pruning.

5.6 Discussion
In this section, we design additional experiments to further illustrate our robust pruning method.

5.6.1 Pretrained Knowledge Detection
To demonstrate the effectiveness of our robust pruning mechanism in preserving pre-trained knowledge, we choose adversarial samples that are successfully defended by our method but not by others, and visualize their attention scores in Figure 2. Our method demonstrates superior performance, as evidenced by more reasonable attention scores that align more closely with those from the robust and dense model. In addition, we visualize the distance between the sentence representations of sparse language models and their dense counterparts in the feature space. As shown in Table 4 and Figure 5, our method yields smaller distances between the dense and sparse representations. These findings indicate the superior ability of our robust pruning method to preserve semantic knowledge; in other words, our method outperforms others in maintaining robustness during pruning.

Table 4: Quantitative Analysis of Distance between Sentence Embeddings. We compare the distances between sentence embeddings derived from various layers of dense and sparse language models. Our findings reveal that our method aligns better with the dense model, regardless of whether the original or the adversarial sentence is used. Refer to Figure 5 for a visualization of these sentence embeddings.
Layer | Distance to dense: IMP + ADT (2x) vs. Ours (2x) | Data
1 | 0.0086 > 0.0000 | Ori
1 | 0.0086 > 0.0000 | Adv
2 | 0.0144 > 0.0015 | Ori
2 | 0.0142 > 0.0015 | Adv
3 | 0.0156 > 0.0014 | Ori
3 | 0.0258 > 0.0012 | Adv
4 | 0.0193 > 0.0017 | Ori
4 | 0.0407 > 0.0017 | Adv
5 | 0.0324 > 0.0067 | Ori
5 | 0.1319 > 0.0069 | Adv
6 | 0.0763 > 0.0255 | Ori
6 | 0.0967 > 0.0253 | Adv
7 | 0.1299 > 0.0496 | Ori
7 | 0.1478 > 0.0501 | Adv
8 | 0.2530 > 0.1308 | Ori
8 | 0.2547 > 0.1078 | Adv
9 | 0.1880 > 0.0958 | Ori
9 | 0.2767 > 0.0749 | Adv
10 | 0.2804 > 0.1254 | Ori
10 | 0.3909 > 0.1049 | Adv
11 | 0.4932 > 0.2322 | Ori
11 | 0.7317 > 0.0625 | Adv
12 | 0.6872 > 0.2231 | Ori
12 | 0.6903 > 0.0349 | Adv

5.6.2 Impact of Calibration Data
The calibration data is crucial to our methodology because it directly affects the computation of the Hessian matrix: as outlined in Algorithm 1, the Hessian matrix can be derived from $H = X^T X$. To further explore the impact of the number of data points, we design experiments that gradually increase the number of data points used in our strategy. The results of these experiments are detailed in Figure 3. Our observations indicate that as the number of used data points increases, the robustness and accuracy of the sparse language models increase, but only up to a certain threshold. We hypothesize that the model can initially retain more general knowledge as data points increase. However, once a threshold is crossed beyond which the new data cannot provide additional information about general features, adding more data points from a similar distribution no longer contributes to model robustness and accuracy.
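A sketch of how this Hessian statistic can be accumulated from calibration batches under the $H = X^T X$ convention used above; layer_inputs is a hypothetical iterator over activations captured with forward hooks, and the damping term mirrors the diagonal regularization used elsewhere in the method.

    import torch

    def calibration_hessian_inverse(layer_inputs, eps=1e-4):
        """Accumulate H = X^T X over calibration batches and invert it.

        layer_inputs: iterable of activation batches of shape
            (batch_tokens, d_in) for one linear layer.
        """
        H, n = None, 0
        for X in layer_inputs:
            X = X.float()
            H = X.T @ X if H is None else H + X.T @ X
            n += X.shape[0]
        H /= n  # average so the scale does not depend on the batch count
        H += eps * torch.eye(H.shape[0], device=H.device)  # damping
        return torch.linalg.inv(H)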
5.6.3 Impact of Sparsity
As illustrated in Figure 4, we explore the robustness and accuracy of our sparse language models across a range of sparsity levels. In a departure from previous studies (Zheng et al., 2022), our observations indicate that as sparsity increases, robustness decreases at a pace similar to that of accuracy. This trend suggests that the impact of increasing sparsity on model robustness might be less severe than previously assumed; this disparate pattern may stem from the post-training nature of our method. Furthermore, our observations regarding the trend in robustness align with the findings of Zheng et al. (2022) and Liang et al. (2021): the robustness of our sparse language models initially improves as sparsity increases up to a certain threshold, after which it begins to decline. However, it sustains a level of robustness higher than the peak values observed for other models and does not collapse even at 10x compression. This finding further highlights the outstanding performance of our method in robust pruning.

Figure 3: Impact of the Number of Calibration Data Points from SST2.
" + }, + { + "url": "http://arxiv.org/abs/2310.13183v2", + "title": "Breaking through Deterministic Barriers: Randomized Pruning Mask Generation and Selection", + "abstract": "It is widely acknowledged that large and sparse models have higher accuracy than small and dense models under the same model size constraints. This motivates us to train a large model and then remove its redundant neurons or weights by pruning. Most existing works prune the networks in a deterministic way, the performance of which depends solely on a single pruning criterion and thus lacks variety. Instead, in this paper, we propose a model pruning strategy that first generates several pruning masks in a designed random way. Subsequently, along with an effective mask-selection rule, the optimal mask is chosen from the pool of mask candidates. To further enhance efficiency, we introduce an early mask evaluation strategy, mitigating the overhead associated with training multiple masks. Our extensive experiments demonstrate that this approach achieves state-of-the-art performance across eight datasets from GLUE, particularly excelling at high levels of sparsity.", + "authors": "Jianwei Li, Weizhi Gao, Qi Lei, Dongkuan Xu", + "published": "2023-10-19", + "updated": "2024-01-11", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CL" + ], + "main_content": "Introduction One of the main challenges in deploying large neural networks (such as BERT (Devlin et al., 2019) and GPT-3 (Brown et al., 2020)) in production is their huge memory footprint and computational cost. Meanwhile, studies show that large and sparse models often yield higher accuracy than small but dense models (Gomez et al., 2019). As a result, pruning has been popularized to dramatically reduce memory size and computational power consumption with little to no performance degradation (Hoefler et al., 2021; Glorot et al., 2011; Kaplan et al., 2020; Li et al., 2020; Mhaskar and Poggio, 2016; Brutzkus et al., 2017; Du et al., 2018). Pruning aims to eliminate redundant weights, neurons, and even layers in models. Many works focus on magnitude-based pruning (Hagiwara, 1993; Gale et al., 2019; Thimm and Fiesler; Han et al., 2015; Zhu and Gupta, 2017; Cuadros et al., 2020), namely removing the elements with the smallest magnitude. Here the magnitude refers not only to the weights themselves but also to the output sensitivity, gradients, or Hessian matrices of the training loss (Luo et al., 2017; Yu et al., 2018; He et al., 2019; Lis et al., 2019; Molchanov et al., 2019; Singh and Alistarh, 2020; Dong et al., 2017).

Figure 1: Weight Distribution in a Feedforward Layer of BERT_base at Sparsity Levels 0.52 and 0.83, Corresponding to Pruning Thresholds τ = 0.027 and τ = 0.055. Notably, around 29% of the weights lie within the range [2τ/3, 4τ/3]. This observation puts into question the efficacy of magnitude-based pruning, as these weights, despite their proximity to the threshold, might play a crucial role in maintaining the model's accuracy. This suggests that directly eliminating weights with smaller magnitudes could potentially lead to a suboptimal pruning strategy.

While magnitude-based pruning can generate state-of-the-art results in a wide range of tasks, its pruning strategy is deterministic and depends solely on a single criterion, which lacks variety (we demonstrate this more thoroughly in the next paragraph). Furthermore, magnitude-based pruning is proven not to be optimal at high-level sparsity (Sanh et al., 2020). To further improve pruning performance, Zhuang et al. (2020); Ge et al. (2011); Savarese et al. (2020); Verdenius et al. (2020); Azarian et al. (2020) try to enlarge the search space of sparse architectures with regularization-based methods, which are non-deterministic. They add carefully designed L0 or L1 penalty terms to the loss function; in this way, the model proactively shrinks some of the weights until they no longer contribute to the final loss.
Regularization-based methods can achieve noticeably better results than magnitude-based methods, especially at high-level sparsity (Sanh et al., 2020). However, this line of work often suffers from a non-convex loss landscape and is challenging to optimize, with extra hyperparameters. In parallel, Su et al. (2020) and Liu et al. (2022) adopt a more aggressive strategy and prune elements in a completely random way. Their methods show competitive results on small datasets (such as CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009)) but fail on large datasets (such as ImageNet (Deng et al., 2009)). Different from these works, this paper introduces a mildly random pruning method that brings a controllable degree of randomness into the pruning mask generation procedure.

We demonstrate the weakness of magnitude-based pruning in Figure 1, which presents the weight distribution of a feedforward layer of BERT (Devlin et al., 2019). Define τ as the pruning boundary. We consider two scenarios, τ = 0.027 and τ = 0.055, leading to sparsity levels of 0.52 and 0.83, respectively. As shown in Figure 1, a large portion of the weights (about 29%) falls into the range [2τ/3, 4τ/3], which cannot be overlooked because it is unclear whether the pruned weights close to the threshold contribute less to the final accuracy than the kept weights. Weights with smaller magnitudes can still be crucial, especially when dealing with edge cases or infrequent situations, and the proximity between weights intensifies the decision-making challenge. This is why the direct removal of weights with smaller magnitudes is suboptimal, as also demonstrated in Gomez et al. (2019). Based on the above observations, we investigate the following questions in this paper:

Question 1. Which is better for pruning: a deterministic way or a randomized way? Previous literature has not reached a consistent conclusion. While Su et al. (2020) and Liu et al. (2022) have provided evidence that random pruning can yield competitive or better results than deterministic methods, this finding does not consistently hold on larger datasets, nor has it been extended to language models. We conjecture that their methods introduce unbridled randomness without providing any effective negative feedback. Moreover, how much randomness to introduce has not been explored in a principled way in the previous literature. In this paper, we study and extend the above question systematically.

Question 2. Can we design a consistently effective randomized pruning method? This paper answers the above question with the following contributions. First, we propose a randomized pruning mask generation strategy that introduces controllable randomness in a principled way. Second, we design the Mask Candidate Selection Strategy (MCSS) to choose the optimal mask from a pool of mask candidates, ensuring that the introduced randomness always guides pruning in a beneficial direction. Third, to further enhance efficiency, we introduce the Early Mask Evaluation Pipeline (EMEP) to mitigate the overhead associated with training under multiple pruning masks. Last, we offer empirical guidance for randomized pruning on BERT_base and BERT_large. Our results show a consistent accuracy boost (0.1%~2.6%) on the GLUE benchmark, outperforming other state-of-the-art pruning techniques at a 16x compression rate.
Notably, our approach shows even more significant enhancements (2%~4%) at extreme sparsity levels such as a 100x compression rate.

2 Preliminaries
2.1 Pruning
Iterative Magnitude Pruning. Iterative Magnitude Pruning (IMP) is the most well-known pruning strategy because it yields state-of-the-art results compared with other strategies (Frankle and Carbin, 2019; Frankle et al., 2020), such as Single-shot Network Pruning (SNIP) (Lee et al., 2018). Specifically, the pruning process is divided into multiple stages by gradually increasing the sparsity. At each stage, pruning finds and eliminates the parameters or neurons that are redundant at that time. The most intuitive approach is to assign an importance score to each element and keep only the top-k elements. The score used to rank elements can be the absolute value of the weights, the output sensitivity, the gradients, or other carefully designed metrics (Hagiwara, 1993; Gale et al., 2019; Thimm and Fiesler; Han et al., 2015; Zhu and Gupta, 2017; Cuadros et al., 2020; Luo et al., 2017). In this work, different from the traditional deterministic way, we extend IMP in a random way.

2.2 Knowledge Distillation
Knowledge Distillation (KD) (Hinton et al., 2015) is another compression technique that tries to transfer the knowledge from a well-trained large model T to a small model S. Many previous works have shown that pruning with KD can significantly reduce the accuracy loss of Transformer-based models (Xu et al., 2021; Xia et al., 2022). Our experiments evaluate the pruning methods on BERT (Devlin et al., 2019), and we apply KD to both the baseline and our strategy. Specifically, we distill the knowledge from the hidden state of each transformer block and the attention scores of each self-attention layer. Figure 6 illustrates the distillation strategy in our setting.

2.3 Multinomial Distribution
In probability theory, a multinomial distribution describes the probability distribution of n (n > 2) sampling trials from elements of k (k > 2) categories, which is a generalization of the binomial distribution (Ross, 2010). In our setting, the number of categories equals the number of elements, and the target sparsity together with the total number of elements determines the number of trials. Note that the sampling process in this paper is done without replacement; this kind of sampling is also referred to as sampling from a multivariate hypergeometric distribution (Berkopec, 2007).

3 Methodology
In this section, we first rethink the traditional deterministic pruning method and introduce our basic idea. Following that, we elaborate on the details of our randomized pruning mask generation and selection strategy. The architecture of our strategy is depicted in Figure 2, and the detailed procedure is outlined step by step in Algorithm 1.

3.1 Rethink Iterative Magnitude Pruning
Traditional IMP divides pruning into multiple stages and generates a deterministic pruning mask at each stage by retaining the top-k elements. This process is based on the assumption that the top-k elements contribute more than the removed part. However, given the complex topology of the model architecture and the observations in Figure 1, it is difficult to draw such a conclusion. In this paper, we aim to introduce a certain degree of randomness into the pruning mask generation, thereby expanding the search space for locally optimal pruning masks at each stage. Specifically, we propose a strategy for generating and selecting randomized pruning masks at each pruning stage.
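As a point of reference for what the randomized variant departs from, a deterministic IMP-style mask for one layer reduces to a top-k selection on weight magnitudes; the sketch below assumes PyTorch tensors and is illustrative rather than the authors' code.

    import torch

    def magnitude_mask(weight, sparsity):
        """Deterministic IMP-style mask: keep the top-k weights by magnitude.

        Returns a binary mask shaped like `weight`; exactly
        int(weight.numel() * sparsity) entries are zeroed out.
        """
        flat = weight.abs().flatten()
        k_keep = flat.numel() - int(flat.numel() * sparsity)
        mask = torch.zeros_like(flat)
        mask[torch.topk(flat, k_keep).indices] = 1.0
        return mask.view_as(weight)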
3.2 Randomized Pruning Mask Generation
3.2.1 Mask Sampling
In contrast to deterministic mask generation, we seek to infuse controllable randomness into this process. In essence, our approach samples the retained elements from a multinomial distribution without replacement. Specifically, the first step is to derive a probability distribution by normalizing the magnitudes of the elements; within our framework, magnitude is defined as the absolute value of the weight. Subsequently, we sample k indices from this distribution, where k is the number of retained elements. Finally, we generate a binary mask whose locations corresponding to these indices are set to one, effectively outlining the sparse architecture of the model. This approach offers a refreshing departure from the deterministic way and creates a larger optimization space for model pruning.

3.2.2 Controllable Randomness
We have proposed a random method to generate pruning masks. However, for current models with several million parameters per layer, a single round of sampling introduces considerable randomness, because the post-normalization probabilities are minute. To quantify this randomness, we propose ir (introduced randomness) in Equation 1:

$ir = (C \cdot \text{sparsity} - C_s) / C_s$   (1)

Here, $C$ and $C_s$ represent the total count of weights and the count of weights pruned by both the deterministic and the random approach, respectively. A small value of ir indicates that the sampled mask resembles the deterministic one; conversely, a larger value suggests a noticeable departure from the deterministic method. We assess the introduced randomness with ir and simultaneously strive to regulate its quantity. Drawing inspiration from the concept of model soups (Wortsman et al., 2022), we manage the randomness by sampling M masks and adding them element-wise to craft an ensemble mask. This mask has its top-k values set to 1, with the remainder set to 0, thus yielding a mask with controllable randomness (k is the number of kept elements). Importantly, the introduced randomness shrinks as M grows: the more masks we ensemble, the closer the result is to the deterministic top-k mask.

Figure 2: Main Architecture of Our Strategy. We replace the deterministic mask generation of IMP with our randomized method. Specifically, we first introduce a degree of randomness into the mask generation process in a principled way, and then employ a specific mask selection rule, paired with an efficient evaluation pipeline, to distinguish the optimal mask from a pool of candidates.

3.2.3 Accelerated Mask Sampling
Controlling randomness solely by increasing the number of sampled masks can be time-intensive. To address this, we suggest deriving the sampling probability distribution from $w^T$, where $w$ is the weight of the corresponding layer. In this scenario, the exponent $T$ is used to control the variance of the sampling probability.
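Combining Sections 3.2.1-3.2.3, the sampling procedure can be sketched as below; the default exponent T = 5 mirrors the p^5 choice in Algorithm 1, and the second function implements Equation 1 against the deterministic top-k mask. This is an illustrative reconstruction, not the authors' released code.

    import torch

    def randomized_mask(weight, sparsity, M, T=5):
        """Ensemble M sampled masks drawn from p ~ |w|^T.

        Larger M pushes the ensemble toward the deterministic top-k
        mask, i.e., it reduces the introduced randomness; larger T
        sharpens the sampling distribution and has a similar effect.
        """
        flat = weight.abs().flatten()
        k_keep = flat.numel() - int(flat.numel() * sparsity)
        probs = flat.pow(T)
        probs = probs / probs.sum()
        votes = torch.zeros_like(flat)
        for _ in range(M):
            # Sample k_keep indices without replacement (multivariate
            # hypergeometric-style sampling, cf. Section 2.3).
            kept = torch.multinomial(probs, k_keep, replacement=False)
            votes[kept] += 1.0
        mask = torch.zeros_like(flat)
        mask[torch.topk(votes, k_keep).indices] = 1.0  # vote-based top-k
        return mask.view_as(weight)

    def introduced_randomness(mask, weight, sparsity):
        """ir = (C * sparsity - Cs) / Cs, Equation 1."""
        n_pruned = int(weight.numel() * sparsity)
        flat = weight.abs().flatten()
        det_mask = torch.zeros_like(flat)
        det_mask[torch.topk(flat, flat.numel() - n_pruned).indices] = 1.0
        # Cs: weights pruned by both the deterministic and random masks.
        cs = ((1.0 - det_mask.view_as(weight)) * (1.0 - mask)).sum().item()
        return (n_pruned - cs) / max(cs, 1.0)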
As T increases, the sampling probabilities of larger and smaller magnitudes diverge more, allowing Mask Sampling to curtail the introduced randomness swiftly. Moreover, in line with our motivation of introducing randomness into mask generation, we only sample weights whose magnitudes are close to the pruning boundary τ. More details are given in the Appendix.

3.3 Randomized Pruning Mask Selection
3.3.1 Mask Candidate Selection Strategy
Our sampling approach expands the search space for locally optimal masks compared to the deterministic way. However, it inadvertently introduces undesired noise and can lead to poor model accuracy, because the randomness is introduced without any effective negative feedback for the model optimization. To address this, we propose the Mask Candidate Selection Strategy (MCSS) to ensure that the introduced randomness always guides the model optimization in a beneficial direction. Specifically, at each pruning stage, we generate N candidate masks and select the best one for the next pruning stage. To ensure robustness in our approach, we adopt the deterministic mask as one of the candidates; by doing so, we are not relying solely on random or heuristic methods and always have a reliable fallback.

3.3.2 Early Mask Evaluation Pipeline
To accelerate mask selection, we design the Early Mask Evaluation Pipeline (EMEP) to reduce computational costs. Specifically, for each candidate mask we fine-tune the model for only one epoch with a large learning rate. The candidate that achieves the best early-stop evaluation metric on the validation dataset is deemed the winner. We crafted this strategy based on the findings of Li et al. (2019) and You et al. (2019), which suggest that a high learning rate during the earlier optimization iterations yields a good approximation of the sparse network structure. Once the winner has been chosen, we revert the weights and the learning rate to their state before the last pruning step. Subsequently, the winning candidate mask is employed for continued regular training.

Algorithm 1: Randomized Pruning Mask Generation and Selection
Input: w                       # weight of one layer
Result: w, M                   # pruned weight and mask
s <- [s1, s2, ...]             # pruning schedule
sr <- 0.00005                  # sampling ratio
n <- 8                         # number of candidates
train w                        # dense model
foreach st in s do
    for i <- 0 to n do
        p <- |w| / sum(|w|)
        p <- p^5               # sampling probability
        x <- st * num_w        # x is the number of zeros in w after pruning
        k <- num_w - x         # k is the number of non-zeros in w after pruning
        M_i <- zeros_like(w); m <- zeros_like(w)
        y <- int(sr * x)
        for _ <- 0 to y do
            m <- 0
            pos <- sample k positions from p
            m[pos] <- 1
            M_i <- M_i + m
        end
        M_i[top-k] <- 1; otherwise 0
        w <- w * M_i
        fine-tune one epoch with a large learning rate
        metric_i <- evaluate on the validation dataset
    end
    select M with the best metric
    rewind w and the learning rate
    w <- w * M
    fine-tune w                # sparse model
end
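The selection loop at the end of Algorithm 1 (MCSS with EMEP) can be sketched as follows; train_one_epoch and validate stand in for the task-specific fine-tuning (with a large learning rate) and dev-set scoring, and masks are assumed to be dictionaries keyed by parameter name.

    import copy

    def select_mask(model, candidates, train_one_epoch, validate):
        """Pick the best of N candidate masks via one-epoch early evaluation.

        candidates should include the deterministic mask as a fallback.
        """
        snapshot = copy.deepcopy(model.state_dict())  # rewind point
        best_mask, best_score = None, float("-inf")
        for mask in candidates:
            model.load_state_dict(snapshot)  # revert weights for a fair trial
            for name, param in model.named_parameters():
                if name in mask:
                    param.data.mul_(mask[name])  # w = w * mask
            train_one_epoch(model)  # one epoch, large learning rate (EMEP)
            score = validate(model)
            if score > best_score:
                best_mask, best_score = mask, score
        # Rewind once more and commit the winner for continued training.
        model.load_state_dict(snapshot)
        for name, param in model.named_parameters():
            if name in best_mask:
                param.data.mul_(best_mask[name])
        return best_mask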
4 Experiments
We evaluate the effectiveness of our pruning strategy on a wide range of natural language understanding tasks. Following previous work, we use BERT as our backbone and then apply different pruning methods to compare their performance.

4.1 Baselines
BERT_base and BERT_large are first chosen as our baseline models. Based on them, we apply IMP to generate 16x sparse models, which serve as our main baselines. In addition, we compare our strategy with previous works that have reported results on the same datasets, including: BERT-PKD (Sun et al., 2019), Stru-Pruning-Roberta (Wang et al., 2020), SNIP (Lin et al., 2020), EBERT (Liu et al., 2021), BERT-of-Theseus (Xu et al., 2020), EfficientBERT (Dong et al., 2021), SparseBERT (Xu et al., 2021), RPP (Guo et al., 2019), Pretrained Ticket (Chen et al., 2020), Lottery Ticket (Prasanna et al., 2020), Prune at Pretraining (Gordon et al., 2020), Movement Pruning (Sanh et al., 2020), DistillBert6 (Sanh et al., 2019), and TinyBert6 (Jiao et al., 2020).

4.2 Datasets and Data Augmentation
Following previous works, we select eight tasks from the GLUE benchmark (excluding WNLI) to evaluate the effectiveness of our pruning strategy (Wang et al., 2018). We also follow the data augmentation method from TinyBert (Jiao et al., 2020). More details can be found in the Appendix.

4.3 Setup
We follow the strategy from SparseBert (Xu et al., 2021) to perform pruning and knowledge distillation simultaneously on the downstream tasks. We also imitate the setting of (Frankle and Carbin, 2019) and adopt a simple pruning schedule in our experiments: only 4-9 pruning stages are used to gradually increase the sparsity (such as 0.54, 0.83, 0.91, 0.9375). Furthermore, after choosing a decision mask at each pruning stage, the number of epochs in the fine-tuning phase is no longer limited until the model converges. We apply exactly the same setting to the IMP baseline and our approach. For more details about hyperparameters, please refer to the Appendix.

4.4 Main Results and Analysis
We report the results on the dev sets of 8 datasets from GLUE and summarize them in Tables 1-2. We also compare our results with distillation-based methods and describe those results in the Appendix. From the above results, we can easily observe that our strategy consistently yields better performance than the main baseline IMP in an identical setting. Moreover, the results of our method are also optimal in similar settings compared with other pruning techniques. These findings demonstrate the superiority of our random mask generation over the deterministic approach and confirm that our mask selection rule can effectively navigate the optimization after randomness is introduced. In other words, our method successfully increases the probability of finding better pruning masks by introducing randomness in a principled way.

Methods | #params | MNLI-m Acc | QNLI Acc | QQP F1/Acc | MRPC F1 | SST-2 Acc | CoLA Mcc | RTE Acc | STS-B Spear
BERTBase | 110M | 84.5 | 91.4 | 89.59/91.0 | 90.1 | 92.5 | 56.3 | 69.3 | 89.0
left #params >= 50%
BERT-PKD | 50% | 81.3 | 88.4 | -/88.4 | 85.7 | 91.3 | 45.5 | 66.5 | 86.2
Stru Pruning | 73% | - | 89.05 | 88.61 | - | 92.09 | - | - | 88.18
SNIP | 50% | 82.4 | 89.5 | 88.1 | - | 91.8 | - | - | -
EBERT | 60% | 83.1 | 90.2 | 87.5/90.8 | - | 92.2 | - | - | -
BERT-of-Theseus | 50% | 82.3 | 89.5 | -/89.6 | 89.0 | 91.5 | 51.1 | 68.2 | 88.7
Pretrained Ticket | 50%-90% | 82.6 | 88.9 | -/90.0 | 84.9 | 91.9 | 53.8 | 66.0 | 88.2
Lottery Ticket | 38%-51% | 84.0 | 91.0 | -/91.0 | 84.0 | 92.0 | 54.0 | 61.0 | 88.0
IMP | 50% | 84.6 | 91.3 | 88.0/91.0 | 90.8 | 92.8 | 53.1 | 72.0 | 89.4
Ours | 50% | 84.7 | 91.5 | 88.1/91.1 | 91.5 | 93.0 | 54.3 | 72.3 | 89.5
left #params <= 10%
RPP | 10% | 78 | 87 | 88.0/80.0 | 89 | - | - | - | -
Movement Pruning | 10% | 80.7 | - | 87.1/90.5 | - | - | - | - | -
EfficientBERT | 9% | 81.7 | 89.3 | 86.7/90.1 | - | 90.1 | 39.1 | 63.2 | 79.9
SparseBERT | 5% | - | 90.6 | - | 88.5 | - | 52.1 | 69.1 | -
IMP | 6% | 83.3 | 90.5 | 87.6/90.8 | 90.2 | 92.2 | 53.1 | 66.7 | 87.0
Ours | 6% | 83.4 | 90.9 | 87.9/90.9 | 91.5 | 92.7 | 53.4 | 69.3 | 87.5

Table 1: Main Comparison Results between Our Strategy and Other Pruning Baselines with BERT_base on the dev Sets of 8 Datasets from the GLUE Benchmark. Note that the pruning results of IMP and Ours are achieved by ourselves under exactly the same setting, while the others are from the corresponding literature; dashes denote results not reported there.
We also notice that the potential improvement in performance may be limited by the magnitude used to derive the sampling probability. In our setting, we use the absolute value of the weights to decide the importance of each neuron connection; thus our pruning results cannot surpass the theoretical optimum (upper bound) of pruning with the absolute value of weights. This reminds us that our method can easily be transplanted to other magnitude-based pruning methods, such as gradient-based methods, and may produce the same effect of helping find better pruning masks. Furthermore, we realize that the effect of our strategy is not uniform across datasets: we obtain a more noticeable improvement in accuracy on small datasets. We argue that small datasets have more local minima in the loss surface, and therefore our strategy can more easily help find better pruning masks.

4.5 Ablation Study
We try different ablation settings to figure out the functionality of each part of our strategy and analyze why they are effective for model pruning.

4.5.1 Impact of Randomness and Schedule
Prior studies have demonstrated that there is no loss in accuracy at lower levels of sparsity, particularly when sparsity is less than 50%. This suggests that the model remains robust with the architectures identified in the early stages of pruning; we conjecture that there is a high level of redundancy in the weights at the early pruning stages. As such, introducing more randomness could potentially expand the search space for sparse architectures in the early stages without hurting accuracy. However, previous works also argue the concept of early model adaptation and emphasize the importance of the early searched architecture for the final pruning target, so introducing too much randomness at the beginning may not be a good idea. In the last pruning stage, the closely matched magnitudes of the remaining weights significantly impact accuracy, and the elements to be pruned require careful selection: too much randomness could corrupt the model, hindering recovery from the last pruning step, while too little might restrict the search for potentially optimal masks. Hence, deciding the best schedule of randomness at each pruning stage is crucial.

Methods | #params | MNLI-m Acc | QNLI Acc | QQP F1 | MRPC F1 | SST-2 Acc | CoLA Mcc | RTE Acc | STS-B Spear
BERTLarge | 330M | 86.6 | 92.3 | 91.3 | 89.1 | 93.2 | 60.6 | 74.8 | 90.0
IMP | 20% | 85.2 | 91.6 | 90.8 | 90.9 | 92.8 | 59.0 | 73.2 | 89.1
Ours | 20% | 86.2 | 91.8 | 91.1 | 91.9 | 93.7 | 60.9 | 75.5 | 89.9

Table 2: Comparison between Our Strategy and IMP with BERT_large on the GLUE dev Sets.

Figure 3: Comparing the Impact of Randomness in Two Different Schedules with a Deterministic Approach (IMP), which features zero randomness; panels (a) MRPC 16x, (b) RTE 16x, (c) SST-2 16x. The horizontal axis presents the logarithmic outputs of ir, with larger ir indicating a greater amount of total introduced randomness; the vertical axis signifies the model's accuracy.

To investigate the impact of randomness, we introduce a hyperparameter sr that controls the number of sampled masks M, and thereby controls the total introduced randomness.
We also propose two simple randomness schedules over the pruning stages: (1) Decrease, where we reduce the introduced randomness by increasing the number of sampled masks as the count of pruned weights increases ($M = sr \cdot C_{pruned}$), and (2) Increase, where we enhance the introduced randomness by decreasing the number of sampled masks as the count of pruned weights increases ($M = sr \cdot (C - C_{pruned})$). We conduct experiments comparing these two schedules against our primary baseline (IMP) under different values of sr. The results are displayed in Figure 3, leading us to the following observations: 1) Excessive randomness makes our strategy perform even worse than the deterministic method (the region where the blue and orange lines fall below the green line); in this region, the Decrease strategy outperforms the Increase strategy. 2) There exists a threshold below which both randomness schedules outperform IMP, highlighting the superiority of our random approach over the deterministic way. 3) There exists another threshold above which the performances of the two schedules become virtually identical. 4) The Decrease strategy consistently equals or outperforms the Increase strategy, which shows that model accuracy is not sensitive to randomness in the early pruning stages and gradually becomes sensitive as pruning approaches the target sparsity.

4.5.2 Impact of MCSS
We assess the role of MCSS by removing it from our strategy and comparing the results with our main findings; the results are summarized in Figure 4. We make the following observations: 1) With MCSS, there is a certain threshold of randomness below which our strategy significantly outperforms the deterministic way. 2) Without MCSS, the model's performance against the deterministic approach is inconsistent and lacks a clear trend or pattern. This demonstrates precisely that MCSS ensures the introduced randomness consistently guides the model optimization in a beneficial direction; in other words, MCSS effectively raises the lower bound of accuracy in our experiments.

4.5.3 Impact of Sparsity
We examine our strategy across various levels of sparsity, with the findings summarized in Figure 5(a). We observe that our random pruning strategy consistently outperforms the baseline (IMP) at all levels of compression, and the advantage is particularly pronounced at higher levels of sparsity, such as those equal to or greater than 16x.

Figure 4: Mask Sampling (4(a)) vs. Mask Sampling + MCSS (4(b)); panels (a) MRPC with MCSS, (b) MRPC w/o MCSS. The green line in 4(a) and 4(b) represents the same accuracy value from IMP. The horizontal axis represents the amount of introduced randomness; the vertical axis indicates model accuracy.

4.5.4 Impact of Mask Candidates
To verify the relationship between the number of mask candidates in MCSS and the final performance, we design experiments that increase the number of candidate masks at each pruning stage from 2 to 10; the results are depicted in Figure 5(b). We observe a positive correlation between the quality of the searched mask and the number of candidate masks at each stage. As the number of mask candidates approaches 10, the performance gain gradually vanishes. Additionally, the variance of the performance gain is also gradually minimized, which further proves that MCSS can effectively navigate the optimization of pruning in a beneficial direction.
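For concreteness, the two schedules from the start of this ablation reduce to a per-stage choice of the mask count M, as in this small helper; sr, C, and C_pruned follow the notation of Equation 1.

    def num_sampled_masks(sr, C, C_pruned, schedule="decrease"):
        """Mask count M per pruning stage under the two schedules.

        decrease: M = sr * C_pruned, so randomness shrinks as pruning
        progresses; increase: M = sr * (C - C_pruned).
        """
        if schedule == "decrease":
            return max(1, int(sr * C_pruned))
        return max(1, int(sr * (C - C_pruned)))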
4.6 Discussion
4.6.1 Efficiency Analysis
We analyze the efficiency of our method in the training and inference phases.

Training Phase: In the training phase, we compare the computation required by our method with that of the traditional IMP method. The additional computation mainly comes from the randomized mask generation and selection. For the generation process, it is crucial to emphasize that the creation of each mask is independent of the others. This independence allows for parallel processing, meaning that the time consumption does not increase linearly with the number of mask candidates. On the other hand, we measured the GFLOPS required to generate a single mask and compared it with the GFLOPS needed for one forward pass of BERT_base; it is roughly 1 percent of the latter operation. However, due to implementation challenges, we could not concurrently sample k positions multiple times from the weight matrix, leading to an overall increase in processing time for single randomized mask generation. For the selection process, we require only one epoch to identify the optimal mask, whose overhead is minimal compared with the entire pruning process.

Figure 5: Impact of Sparsity (5(a)) and Impact of the Number of Mask Candidates in MCSS (5(b)). The horizontal axes represent the sparsity level and the number of mask candidates for Figures 5(a) and 5(b), respectively, while the vertical axes in both figures denote model accuracy.

Inference Phase: In real-world applications, although there might be overheads during training, the benefits reaped during inference make them worthwhile. Our method stands out with a 16x compression rate and can sustain performance even at higher sparsity levels, achieving up to a 100x compression rate. This ensures that our pruned neural networks, once deployed, bring significant improvements in performance and efficiency.

4.6.2 Extending to Billion-Parameter Models
In the current age of large language models, achieving effective pruning is a formidable challenge, particularly when striving to preserve high sparsity without sacrificing performance. While initiatives like SparseGPT have ventured into pruning for these colossal models, they have only managed a 2x compression rate (Frantar and Alistarh, 2023). The computational complexity of our method is primarily determined by the number of parameters involved; consequently, our random pruning technique is not yet adaptable to models with billions of parameters. Nevertheless, we are diligently working on refining methods that incorporate controllable randomness more efficiently.

5 Related Work
A number of researchers have explored pruning in BERT. Prasanna et al. (2020) prune the model with Michel et al. (2019)'s first-order importance metric and show that unstructured magnitude-based pruning always produces sparser and higher-accuracy models than structured pruning. Gordon et al. (2020) and Chen et al. (2020) both argue that pruning at the pre-training stage is better than at the fine-tuning stage because there is no need to prune for each downstream task; they also conclude that knowledge from sparse training can be transferred as well as from dense models. In contrast, Xu et al.
(2021) find that pruning at the pre-training stage has huge computational costs, while pruning at the fine-tuning stage can save computational effort and keep accuracy simultaneously. These pruning methods are magnitude-based and prune weights in a deterministic way. In parallel, a line of work beats magnitude-based methods at high-level sparsity by applying a non-deterministic approach: regularization-based pruning. Specifically, carefully designed L0 or L1 penalty terms are added to the loss function, and the model proactively shrinks some of the weights until they no longer contribute to the final loss. Regularization-based methods can generally achieve significantly better results than magnitude-based methods, especially at high-level sparsity (Sanh et al., 2020). However, the penalty terms can introduce additional local minima to the loss function and make the optimization difficult to navigate. On the other hand, there is a lack of research on random pruning applied to Transformer-based models (such as BERT) in previous studies. Our paper therefore complements the gap in this area.
" + }, + { + "url": "http://arxiv.org/abs/2306.14490v1", + "title": "TaiChi Action Capture and Performance Analysis with Multi-view RGB Cameras", + "abstract": "Recent advances in computer vision and deep learning have influenced the field of sports performance analysis, allowing researchers to track and reconstruct freely moving humans without any marker attachment. However, there are few works on vision-based motion capture and intelligent analysis for professional TaiChi movement. In this paper, we propose a framework for TaiChi performance capture and analysis with multi-view geometry and artificial intelligence technology. The main innovations are as follows: 1) a multi-camera system suitable for TaiChi motion capture is built, and the multi-view TaiChi data is collected and processed; 2) a combination of a traditional visual method and an implicit neural radiance field is proposed to achieve sparse 3D skeleton fusion and dense 3D surface reconstruction; 3) normalized modeling of movement sequences is carried out based on motion transfer, so as to realize TaiChi performance analysis for different groups. We have carried out evaluation experiments, and the experimental results demonstrate the efficiency of our method.", + "authors": "Jianwei Li, Siyu Mo, Yanfei Shen", + "published": "2023-06-26", + "updated": "2023-06-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "main_content": "Introduction Human motion capture (Mocap) is usually used to obtain 3D human movement information in proactive health care and intelligent sports. Sports performance analysis and evaluation can improve athletes' competitive ability and promote public scientific fitness. The widely used inertial and optical motion capture systems can track and record human movement well, but they need to bind sensors or paste markers on the human body, which may affect natural movement. Moreover, most current optical and inertial motion capture systems are expensive, and placing the markers correctly requires professional knowledge. Visual motion capture methods use cameras to non-invasively capture human motion images and then obtain the motion data through human pose estimation (HPE) and 3D reconstruction. Vision-based human action analysis is an important research topic in computer vision, and in recent years it has been widely applied in intelligent sports.
Accurate 3D human motion modeling is the prerequisite for reliable human motion analysis. Human action recognition (HAR) and action quality assessment (AQA) are two performance analysis tasks in intelligent sports. The former aims to identify the action category, while the latter aims to automatically quantify the performance of the action or to score it. Traditional methods for action analysis are mainly based on hand-crafted features, and compare action sequences by estimating distance errors or dynamic time warping. Deep-learning methods use deep networks to learn action features directly and have shown more powerful performance. Generally speaking, deep-learning methods consist of video-based methods and skeleton-based methods. Video-based algorithms generally extract features directly from images, such as C3D [Parmar and Morris, 2020], I3D [Carreira and Zisserman, 2017], TSN [Xiang et al., 2018] and Pseudo-3D [Qiu et al., 2017], and then extract temporal features by LSTM, pooling, and so on; the final score prediction is performed by a fully connected neural network. Skeleton-based methods first detect the human skeleton in the images or video, and then model the correlation information between human joints, so as to realize human motion modeling and motion quality evaluation. At present, vision-based performance analysis is mature in action recognition and has made remarkable progress, but the performance of action quality assessment in sports scoring and intelligent sports training is still below current application needs. Many studies on human movement evaluation with computer vision have been proposed; however, most of them only select a few relatively simple fitness or clinical rehabilitation movements to recognize and evaluate. As the current mainstream method, deep learning needs large-scale human motion datasets to train a good model, which limits its effect on sports action analysis. Although some professional human sports datasets have been presented in recent years, most of them are RGB images or videos collected from the Internet, such as AQA-7 [Parmar and Morris, 2019a] and Yoga-82 [Verma et al., 2020]. The performance of sports scoring or quality evaluation is still below current application requirements. According to the above analyses, existing studies focus more on the recognition of regular actions or the assessment of competitive sports, and lack 3D action datasets. As a complement to the above work, we focus on how to capture and intelligently assess the quality of TaiChi actions.

Figure 1: The system of TaiChi performance capture and analysis.

In summary, the main contributions of this paper are the following: 1. A professional TaiChi dataset consisting of 23,232 action samples captured through multi-view cameras; 2. An effective 3D human modelling framework with multi-camera calibration, 3D skeleton fusion and 3D surface reconstruction; 3. A normalized modeling method for skeleton sequences based on motion transfer to analyse TaiChi performance across different groups.

2 Related Work
Vision-based motion capture. Human Mocap systems based on vision technology can obtain 3D movement information non-invasively, and are gradually being applied in the field of sports performance analysis.
The types of human motion description include the human skeleton model (e.g., MPII [Andriluka et al., 2014]), the parametric human model (e.g., SMPL [Loper et al., 2015]) and the dense shape model (e.g., HumanNeRF [Weng et al., 2022]). Among them, the skeleton model describes the non-rigid motion of a 3D surface with high degrees of freedom as surface motion driven by a kinematic chain. As a structured representation of human pose, the skeleton model can conveniently and effectively represent quantitative information about human movements, and is widely used in action analysis. According to the number of viewpoints, visual Mocap systems can be divided into single-view and multi-view systems. A single-view system generally uses a single camera to capture human motion from a fixed perspective, while a multi-view system obtains human motion images from multiple perspectives based on a multi-camera rig. The main challenges of single-view methods are occlusion and depth ambiguity. Compared with the single-view method, the multi-view method can provide multi-view information, which alleviates the occlusion problem and better recovers 3D human posture. Imocap [Dong et al., 2020] captures human motion from multiple Internet videos, opening a new direction for 3D HPE. DeepMultiCap [Zheng et al., 2021] uses a pixel-aligned implicit function based on a parametric model to reconstruct invisible regions in response to severe occlusion in close-range interaction scenes, and captures geometric details of the human surface over time with an attention module. Multi-view video data can contain more temporal and spatial information, but human posture may vary dramatically across continuous frames, so how to fuse the data effectively remains an open problem. Currently, mainstream deep-learning methods often require a large amount of labeled data, which increases the difficulty of model training for sports actions.

Vision-based human motion datasets. NTU RGB+D [Liu et al., 2019] is so far the largest Kinect-based action dataset, collected from 106 distinct subjects, and contains more than 114 thousand video samples and 8 million frames. The dataset covers 120 different action classes including daily, mutual, and health-related activities. Human3.6M [Ionescu et al., 2014] is another large dataset with 3.6 million human poses and corresponding images; there are 11 subjects and 17 action scenes, and the data are captured by four digital cameras, one time-of-flight sensor and ten motion cameras. UCF-Sport [Soomro and Zamir, 2014] is the first sports action dataset and contains close to 200 action video sequences collected from various sports typically featured on broadcast television channels such as BBC and ESPN. Since then, a number of sports action datasets [Li et al., 2018a; Shao et al., 2020; Verma et al., 2020] for action recognition have emerged. FineGym [Shao et al., 2020] provides coarse-to-fine annotations, both temporally and semantically, for gymnastics videos: there are three levels of categorical labels, and the temporal dimension is also divided into two levels, i.e., actions and sub-actions. UMONS-TAICHI [Tits et al., 2018] includes 2,200 sequences of 13 classes (relative to different Taijiquan techniques) performed by 12 participants of different levels of expertise.
Fitness-AQA [Parmar et al., 2022] is a new exercise dataset comprising three exercises (BackSquat, BarbellRow and OverheadPress) that has been annotated by expert trainers for multiple crucial and typically occurring exercise errors. At present, most sports action datasets used for performance analysis consist of publicly available RGB images or videos of competitions, such as the public FSD-10 [Liu et al., 2020] and FineDiving [Xu et al., 2022], but few of them provide multi-view 3D skeleton poses.

Vision-based action analysis. With the development of intelligent sports and computer vision, many action analysis methods for sports have been proposed in recent years. Deep learning is currently the mainstream method for vision-based action analysis, where the most widely used models are RNNs, CNNs, GCNs and Transformer-based networks. According to the type of input data, there are mainly image-based [Duan et al., 2022; Parmar and Morris, 2019b] and skeleton-based [Yan et al., 2018; Pan et al., 2019] methods. ScoringNet [Li et al., 2018b] and SwingNet [McNally et al., 2019] are image-based and support fine-grained action classification and action scoring. These methods focus on the visual activity information of the whole scene, including the performer's body and the background, but tend to ignore the motion relationships within the human skeleton joints. Skeleton-based methods generally begin by extracting the human skeleton and then conduct spatio-temporal modeling of the relations between skeleton joints. For example, the joint relational graph method proposed by Pan et al. [Pan et al., 2019] models conventional motion and motion differences between different parts of the human body with a joint common module and a joint difference module, respectively. HDVR [Hu and Ahuja, 2021] proposes a hierarchical dance video recognition framework that estimates 3D human pose from the corresponding 2D human pose sequences. For competitive gymnastics, SportsCap [Chen et al., 2021] uses the ST-GCN [Yan et al., 2018] method to predict fine-grained semantic action attributes, and adopts a semantic attribute mapping block to assemble various correlated action attributes into a high-level action label for a detailed understanding of the whole movement. In recent years, vision-based HAR methods have matured and made remarkable progress, and AQA technologies have also developed gradually. However, the human body is often self-occluded, with large folding or bending, in sports, so the performance of existing methods in rehabilitation training and sports scoring is still lower than current application requirements. The recognition accuracy for uncommon or highly similar human motions is still limited, and how to effectively model and analyze human motion in challenging situations, such as complex movements and view changes, needs further study.

3 Methods
For TaiChi performance capture and analysis, we design a non-invasive system based on multi-view geometry and artificial intelligence technology.

3.1 Experimental Setup
Since TaiChi movements involve a lot of body rotation, we set up a ring-array multi-camera system to achieve better motion capture. The installation bracket of the system is a regular 16-sided polygon with a diameter of 450 cm and a height of 250 cm, with 16 columns in total. Two RGB cameras (2448×2048 pixels) from FLIR are installed on each column.
There are 32 cameras in total: 16 of them are 100 cm from the ground and the rest are 200 cm from the ground. Cameras on the top are tilted down about 20 degrees, and cameras on the bottom are tilted down about 10 degrees. Every four cameras are connected to a server, and the 8 servers are networked through a 10 GB router. All cameras are synchronously controlled through a special trigger device. Data processing and experiments are conducted with the PyTorch deep-learning framework on a standard desktop PC with 11 GB 1080Ti GPUs.

3.2 System Composition

As shown in Figure 1, the proposed system mainly contains five modules:

• Multi-camera calibration. Before motion capture, the system is calibrated to obtain the internal reference matrix of each camera and the pose relationships between the cameras.
• Motion capture. The 32 high-frame-rate RGB cameras are synchronously controlled through the 8 servers to capture and store high-definition TaiChi motion data.
• 3D skeleton fusion. The 2D skeletons are obtained by an HPE method from each RGB image and then fused into a 3D human skeleton by multi-view geometric matching.
• 3D surface reconstruction. The 3D human surface model is reconstructed from the multi-view images through camera pose estimation and neural radiance field rendering.
• Performance analysis. The skeleton sequences of different subjects are transferred to a standard model to eliminate individual appearance differences, and TaiChi action quality is then evaluated by comparing the trajectory and angle changes of the re-targeted skeletons.

4 TaiChi Performance Capture

4.1 Multi-view Data Organization

Figure 2 shows the organization of the multi-view TaiChi action data, which contain 23,232 action samples, including each sample's RGB image, depth image, 2D skeleton and 3D skeleton data. Each TaiChi action sample is captured by 32 RGB cameras from 32 different views and simultaneously by an RGB-D camera (Kinect Azure) from the front view. During TaiChi data acquisition, 11 subjects (3 female and 8 male) performed the 24-form TaiChi actions, the same as TaiChi24 [Li et al., 2022]. Each action sample is manually segmented and labeled with category and action quality. The 2D skeletons are computed with the Openpose [Cao et al., 2017] algorithm, while the 3D skeletons are obtained from the Kinect Azure SDK.

4.2 Multi-camera Calibration

To get the pose relationships of the 32 RGB cameras, we design a multi-camera calibration tool based on the 2D planar checkerboard calibration method [Zhang, 2000]. Figure 3 shows a visual simulation of our multi-camera calibration process.

[Figure 3: Visual simulation of our multi-camera calibration.]

The 2D checkerboard is located in the center of the multi-camera system, and its orientation and pitch angle are changed uniformly. More than 100 checkerboard images are selected in each calibration. The grids in the checkerboard are 10 × 15, and the actual side length of each grid is 5 cm. During data acquisition and processing, we carried out calibration 10 times. Based on the camera projection model, we construct a minimization objective function on the re-projection error to solve for the camera parameters:

\min_P \sum_i \lVert P X_i - x_i \rVert^2, \quad (1)

where x_i is a feature point in the RGB image, X_i is the corresponding 3D point on the checkerboard, and P = K[R|t] is the camera projection matrix; K is the i-th camera's internal reference (intrinsic) matrix, and [R_i|t_i] is the i-th camera's external reference (extrinsic) matrix.
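As a concrete illustration of this step, the sketch below calibrates a single camera from checkerboard views with OpenCV, whose `calibrateCamera` internally minimizes exactly the re-projection error of Eq. (1). The corner grid, square size and image paths are placeholders based on the setup described above (a 10×15-square board has 9×14 inner corners); the authors' actual calibration tool is not shown here, so this only sketches the standard Zhang-style workflow.

```python
# A minimal single-camera calibration sketch (OpenCV) for the Eq. (1)
# re-projection-error minimization; paths and grid size are placeholders.
import glob
import cv2
import numpy as np

GRID = (9, 14)      # inner corners of the 10 x 15 checkerboard
SQUARE = 5.0        # side length of each square (cm)

# 3D corner coordinates X_i in the checkerboard frame (Z = 0 plane)
obj = np.zeros((GRID[0] * GRID[1], 3), np.float32)
obj[:, :2] = np.mgrid[0:GRID[0], 0:GRID[1]].T.reshape(-1, 2) * SQUARE

obj_pts, img_pts = [], []
for path in glob.glob("calib/cam00/*.png"):   # >100 checkerboard views
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, GRID)
    if found:
        obj_pts.append(obj)
        img_pts.append(corners)

# calibrateCamera jointly estimates K and the per-view [R|t] by minimizing
# the total re-projection error sum_i ||P X_i - x_i||^2 of Eq. (1)
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print(f"RMS re-projection error: {rms:.3f} px")
```

The pairwise poses between the 32 cameras can then be chained from the per-view extrinsics before the bundle-adjustment refinement mentioned next.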
In order to further improve the accuracy of calibration, bundle adjustment (BA) [Triggs et al., 2000] is used for optimization.

4.3 3D Skeleton Fusion

2D skeletons estimated from a single-view RGB image often suffer from occlusion, which affects the accuracy of performance analysis. Therefore, we perform 3D skeleton fusion with the direct linear transformation (DLT) [Abdel-Aziz, 1971] algorithm. The main calculation is given by

s_i = K_i [R_i | t_i] S, \quad i \in \{1, \dots, m\}, \quad (2)

where s_i = \{s_1, s_2, \dots, s_N\} is the 2D skeleton with N joints in the i-th camera view, S = \{S_1, S_2, \dots, S_N\} is the corresponding 3D skeleton, and m is the number of fused camera views. Figure 4 shows an example of the 3D skeleton fusion results (rendered on 2D images) for a subject from 8 different views.

[Figure 4: Example of 3D skeleton fusion results from 8 different views.]

4.4 3D Surface Reconstruction

Considering the excellent modeling and rendering capabilities of neural radiance fields (NeRFs), we realize 3D human surface reconstruction by jointly using the traditional COLMAP [Schonberger and Frahm, 2016] and the deep-learning-based Instant NeRF [Müller et al., 2022]: 1) first, we detect and extract SIFT features from each input image; 2) then, the positions of the multiple cameras are estimated by feature matching; 3) finally, data conversion is performed for NeRF rendering. Given a 3D point and a viewing direction d ∈ R³, NeRF estimates RGB color values and density (c, σ), which are then accumulated via quadrature to calculate the expected color of each camera ray:

C(r) = \int_{t_n}^{t_f} \exp\left( -\int_{t_n}^{t} \sigma(s)\, ds \right) \sigma(t)\, c(t, d)\, dt, \quad (3)

where t_n and t_f define the near and far bounds, and the camera ray is given by r(t) = o + t d. In order to speed up the signed distance function (SDF) training, 3D training positions are uniformly sampled and mapped onto the triangle mesh.

[Figure 5: 3D human surface reconstruction with NeRFs.]

Figure 5 shows the 3D surface reconstruction results for a subject with Instant NeRF. The top row shows the overall relationship between the camera poses and the human models, while the bottom row shows the reconstruction details from the front, top and back views, respectively.

5 TaiChi Performance Analysis

5.1 Data Precision Analysis

Accurate recovery of 3D human movement information is the premise of reliable human movement analysis. In order to verify the accuracy of our multi-camera system against an IMU-based motion capture system, we conducted experiments using the two systems synchronized in time and space. A subject wore the IMU equipment (Perception Neuron Studio, PNS) and performed three upper-limb exercises (lateral hand lift, left punch and right punch) and three lower-limb exercises (left knee lift, right knee lift and lunging squat) while the frame rate of the multi-camera system was 30 fps and 60 fps, respectively. Figure 6 shows the wearing positions of the PNS sensors and the human skeleton nodes (Body-25) extracted by Openpose. It can be seen that there is a position deviation between the joints obtained by the two Mocap systems. We compared and analyzed the Mocap data using the coordinates of the shoulder, elbow, hip and knee joints. Figure 7 shows the MSE error between our multi-camera system and the IMU-based Mocap system.
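A minimal sketch of how such per-joint errors between the two synchronized systems can be computed is shown below, assuming plain NumPy arrays of joint positions (the array layout is a hypothetical choice). The joint angle uses the same arccos formulation as Eq. (5) later in the paper.

```python
# Per-joint distance and angle errors between two synchronized skeleton
# sequences, e.g., visual Mocap vs. IMU-based Mocap; NumPy-only sketch.
import numpy as np

def joint_angles(seq, triples):
    """seq: (T, N, 3) joint positions; triples: (parent, joint, child) tuples."""
    angles = []
    for a, j, c in triples:
        u = seq[:, a] - seq[:, j]                  # bone towards parent
        v = seq[:, c] - seq[:, j]                  # bone towards child
        cos = np.sum(u * v, axis=-1) / (
            np.linalg.norm(u, axis=-1) * np.linalg.norm(v, axis=-1))
        angles.append(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
    return np.stack(angles, axis=1)                # (T, len(triples))

def mocap_errors(vis, imu, triples):
    """Mean distance error (input units) and mean angle error (degrees)."""
    dist = np.linalg.norm(vis - imu, axis=-1).mean()
    ang = np.abs(joint_angles(vis, triples) - joint_angles(imu, triples)).mean()
    return dist, ang
```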
The average angle error and distance error are 6.18 deg and 6.81 cm, respectively, at 30 fps. The average angle error and distance error are 6.68 deg and 6.41 cm, respectively, at 60 fps. The main reason for the error lies in the deviation between the attachment positions of the PNS sensors and the skeleton nodes obtained by visual estimation (as shown in Figure 6).

[Figure 6: Comparison of the wearing positions of the IMU sensors (a) and the human skeleton joints extracted with the Openpose algorithm by our visual Mocap system (b).]

[Figure 7: Comparison of the multi-camera and IMU-based Mocap systems by angle error (deg) and distance error (cm) of key joints.]

5.2 Performance Analysis

For TaiChi performance analysis, we combine the 32 camera views from 16 orientations into 16 stereo pairs to obtain 3D skeletons. The 3D skeleton of each camera pair is also computed with the 3D reconstruction module of Openpose. The quantitative accuracies of TaiChi action recognition and assessment have been discussed in [Li et al., 2022]. Therefore, we mainly conduct performance analysis by comparing the movements of the coach and the students. To avoid the influence of individual differences, the action skeleton sequences from the coach and the students are uniformly transferred to a standard virtual human model for comparison. The flowchart of performance analysis with the motion transfer network is shown in Figure 8.

[Figure 8: The flowchart of performance analysis with the motion transfer network.]

The model decomposes the skeleton sequences and recombines the elements to generate a new skeleton sequence, which can be viewed at any desired view-angle. We implement the action transfer based on Transmomo [Yang et al., 2020] without using any paired data for supervision. The transfer network is trained in an unsupervised manner by exploiting the invariance properties of three orthogonal factors of variation: motion, structure, and view-angle. The motion is invariant despite structural and view-angle perturbations; the structure is consistent through time and invariant despite view-angle perturbations; and the view-angle is consistent through time and invariant despite structural perturbations. Consider an input sequence s^T ∈ R^{T×2N}, where T is the length of the skeleton sequence and N is the number of body joints. The motion encoder uses several layers of one-dimensional temporal convolution to extract the motion information. The structure encoder has a similar network structure, with the difference that the final structure code is obtained after a temporal max pooling. The view code is obtained in the same way as the structure code. The decoder takes the motion, body and view codes as input and reconstructs a 3D joint sequence S^T ∈ R^{T×3N} through convolution layers, in symmetry with the encoders.
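Before turning to the training losses, the following schematic PyTorch sketch illustrates this three-branch encoder-decoder design: a time-varying motion code from 1D temporal convolutions, and static structure and view codes obtained via temporal max pooling. All channel widths, depths and kernel sizes are illustrative assumptions, not the configuration of Transmomo or of this system.

```python
# Schematic three-branch motion-transfer model: motion / structure / view
# encoders over 2D skeleton sequences, decoder reconstructing 3D joints.
import torch
import torch.nn as nn

def temporal_encoder(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv1d(in_ch, 64, kernel_size=7, padding=3), nn.LeakyReLU(0.2),
        nn.Conv1d(64, out_ch, kernel_size=7, padding=3), nn.LeakyReLU(0.2))

class MotionTransfer(nn.Module):
    def __init__(self, n_joints=15):
        super().__init__()
        d = 2 * n_joints                       # flattened 2D input per frame
        self.enc_motion = temporal_encoder(d, 128)
        self.enc_struct = temporal_encoder(d, 64)
        self.enc_view = temporal_encoder(d, 8)
        self.dec = nn.Sequential(
            nn.Conv1d(128 + 64 + 8, 128, 7, padding=3), nn.LeakyReLU(0.2),
            nn.Conv1d(128, 3 * n_joints, 7, padding=3))

    def forward(self, x):                      # x: (B, 2N, T) 2D skeletons
        m = self.enc_motion(x)                 # time-varying motion code
        s = self.enc_struct(x).max(dim=2, keepdim=True).values  # static body code
        v = self.enc_view(x).max(dim=2, keepdim=True).values    # static view code
        T = m.shape[2]
        z = torch.cat([m, s.expand(-1, -1, T), v.expand(-1, -1, T)], dim=1)
        return self.dec(z)                     # (B, 3N, T) reconstructed 3D
```

At transfer time, the motion code of one sequence would be combined with the structure code of another, which is what allows re-targeting the students' motions onto a standard model.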
The total loss function is derived based on invariance and is a weighted combination of the following terms:

L_total = λ_rec L_rec + λ_crs L_crs + λ_trip L_trip + λ_inv L_inv + λ_adv L_adv, \quad (4)

where L_rec is the reconstruction loss, minimizing the difference between the real data and the 3D reconstructions projected back to 2D; L_crs is the cross-reconstruction loss between two sequences; L_trip is the triplet loss on the view codes; L_inv is the structural invariance loss, ensuring that the view code is invariant to structural change by mapping estimations from the same sequence into a small neighborhood while pushing away estimations from rotated sequences; and L_adv measures the domain discrepancy between the projected 2D sequences and real 2D sequences. We evaluate the action quality by comparing the trajectory and angle changes of key joints such as the shoulder, elbow, hip and knee. We select the first 15 key points defined in Body-25 for motion transfer. For the j-th joint in the skeleton model, the angle sequence θ_j^T for an action sample is calculated as

θ_j^T = \arccos\left( \frac{(S_{j-1} - S_j) \cdot (S_{j+1} - S_j)}{|S_{j-1} - S_j|\,|S_{j+1} - S_j|} \right), \quad j \in (0, 15). \quad (5)

In our experiments, we take the RGB image (1920×1080p) sequence of the coach collected by the Kinect camera as the target, and two skeleton sequences of the students captured by our multi-camera system as the input data. Figure 9 and Figure 10 show the changes of the trajectories and angles of the shoulder (S-l and S-r), elbow (E-l and E-r), hip (H-l and H-r) and knee (K-l and K-r) joints, respectively. The length of the action sample is 48 frames.

[Figure 9: The changes of the key joints' trajectories before and after motion transfer, for student 1 (a, b) and student 2 (c, d).]

[Figure 10: The changes of the key joints' angles before and after motion transfer, for student 1 (a, b) and student 2 (c, d).]

By analyzing the changes of trajectories and angles after motion transfer, we can identify the joints with major differences and the moments at which they emerge. For these two students, the differences are mainly concentrated in the shoulder and knee joints during the movements at the end of the action. Figure 11 shows the visual analysis results of the re-targeted skeletons from the two students compared with the coach.

[Figure 11: Examples of the re-targeted key-frame skeletons (top: the coach; middle: student 1; bottom: student 2).]

It can be seen that the TaiChi motion of student 1 is more standard. According to the scores given by a professional coach, student 1 (100) also scored higher than student 2 (86)." + }, + { + "url": "http://arxiv.org/abs/2211.16922v3", + "title": "Learning Motion-Robust Remote Photoplethysmography through Arbitrary Resolution Videos", + "abstract": "Remote photoplethysmography (rPPG) enables non-contact heart rate (HR)\nestimation from facial videos which gives significant convenience compared with\ntraditional contact-based measurements.
In the real-world long-term health\nmonitoring scenario, the distance of the participants and their head movements\nusually vary by time, resulting in the inaccurate rPPG measurement due to the\nvarying face resolution and complex motion artifacts. Different from the\nprevious rPPG models designed for a constant distance between camera and\nparticipants, in this paper, we propose two plug-and-play blocks (i.e.,\nphysiological signal feature extraction block (PFE) and temporal face alignment\nblock (TFA)) to alleviate the degradation of changing distance and head motion.\nOn one side, guided with representative-area information, PFE adaptively\nencodes the arbitrary resolution facial frames to the fixed-resolution facial\nstructure features. On the other side, leveraging the estimated optical flow,\nTFA is able to counteract the rPPG signal confusion caused by the head movement\nthus benefit the motion-robust rPPG signal recovery. Besides, we also train the\nmodel with a cross-resolution constraint using a two-stream dual-resolution\nframework, which further helps PFE learn resolution-robust facial rPPG\nfeatures. Extensive experiments on three benchmark datasets (UBFC-rPPG, COHFACE\nand PURE) demonstrate the superior performance of the proposed method. One\nhighlight is that with PFE and TFA, the off-the-shelf spatio-temporal rPPG\nmodels can predict more robust rPPG signals under both varying face resolution\nand severe head movement scenarios. The codes are available at\nhttps://github.com/LJW-GIT/Arbitrary_Resolution_rPPG.", + "authors": "Jianwei Li, Zitong Yu, Jingang Shi", + "published": "2022-11-30", + "updated": "2022-12-02", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction

Heart rate (HR) is an important physiological signal that is widely used in many circumstances, especially for healthcare or medical purposes. Electrocardiography (ECG) and photoplethysmography (PPG)/blood volume pulse (BVP) are the two most common methods of measuring heart activity. However, these sensors need to be attached to body parts, which limits their usefulness and scalability. The inconvenience of long-term monitoring and the discomfort for users limit traditional methods in application scenarios such as driver status assessment and burn patient health monitoring. To solve this problem, non-contact HR measurement, which aims to measure heart activity remotely, has become an increasingly popular research problem in physiological signal measurement in recent years.

[Figure 1: rPPG measurement from arbitrary resolution videos with head movements. (a) The ROIs might have different spatial sizes and shapes across the temporal dimension. (b) Compared with the baseline PhysNet (Yu, Li, and Zhao 2019), the proposed TFA-PFE can predict rPPG signals that agree more accurately with the ground-truth BVP signals.]

Most existing non-contact HR measurement approaches are based on the facial-video-based remote photoplethysmography (rPPG) technique (Yu, Li, and Zhao 2021). The rPPG method uses digital cameras to record variations of reflected ambient light on facial skin, which contain information on cardiovascular blood volume and pulsation. However, rPPG measurement is highly sensitive to the quality of the video recording and to head motions.
In the early stage, handcrafted-feature-based methods (Takano and Ohta 2007; Verkruysse, Svaasand, and Nelson 2008) required an exhaustive multi-stage process (preprocessing, filtering and post-processing) and had low robustness to head motions and illumination changes. Thus, they are usually tested and deployed under controlled lab environment scenarios. With the rapid development of deep learning, neural network models have also been widely applied in the rPPG field. Recent spatio-temporal representation map based (Niu et al. 2020; Lu, Han, and Zhou 2021) and end-to-end (Yu, Li, and Zhao 2019; Chen and McDuff 2018) models utilize Convolutional Neural Networks (CNNs) to learn the spatio-temporal rPPG cues from facial videos with fixed resolution, and have shown superior performance compared with traditional approaches. However, most existing rPPG approaches rarely consider the practical real-world situation of arbitrary-resolution face videos (e.g., the distance of the participants varies over time; see Figure 1(a) for a visualization). Previous methods (McDuff 2018) simply use spatial interpolation to scale face frames of arbitrary resolution to a fixed size to fit the model input, where the vital pixels of the region of interest (ROI) might be corrupted by the interpolation, thus harming the accuracy of rPPG measurement. Meta-SR (Hu et al. 2019) and LIIF (Chen, Liu, and Wang 2021) brought the scale-arbitrary super-resolution problem into view. To the best of our knowledge, no solution has yet been proposed to counter the problem of rPPG measurement from arbitrary-face-resolution videos. Besides arbitrary face resolution, another noteworthy issue in rPPG measurement is motion robustness. Due to the limited spatio-temporal receptive field of CNNs and their weak capacity for spatial contextual ROI localization, both rigid and non-rigid head motions usually have serious impacts on rPPG measurement. Existing end-to-end models (Yu, Li, and Zhao 2019; Yu et al. 2021) do not take the initiative (e.g., via a face alignment operation) to model the head movement and compensate for motion artifacts, which makes them vulnerable to severe head movement. A few works passively adopt ROI tracking (Niu et al. 2018b) or utilize a normalized frame-difference motion representation as input (Chen and McDuff 2018; Liu et al. 2020). However, these methods have limited stability and are hard to plug directly into off-the-shelf end-to-end spatio-temporal rPPG models. Motivated by the discussion above, we propose two plug-and-play blocks (i.e., the Physiological Signal Feature Extraction block (PFE) and the Temporal Face Alignment block (TFA)) to capture resolution- and motion-robust rPPG features. To learn the similarity of rPPG signals from arbitrary-resolution frames, we design a new cross-resolution constraint using a dual-resolution framework, which further helps PFE learn resolution-robust facial rPPG features. As shown in Figure 1(b), compared with the vanilla PhysNet (Yu, Li, and Zhao 2019), the proposed TFA and PFE blocks enable more accurate rPPG measurement from arbitrary-resolution videos with head movements. The contributions of this work are as follows:

• To the best of our knowledge, we provide the first plug-and-play solution for robust rPPG measurement from facial videos with arbitrary fixed or varying face resolution.
• We propose the PFE block to adaptively encode arbitrary-resolution facial frames into fixed-resolution facial structure features. Besides, we propose to train the model with a cross-resolution constraint using a two-stream dual-resolution framework, which further helps PFE learn resolution-robust facial rPPG features.
• We propose the TFA block to counteract the rPPG signal confusion caused by head movement by warping facial frames with the estimated optical flow, which benefits motion-robust rPPG signal recovery and alleviates the influence of head movement.
• We conduct extensive experiments on benchmark datasets to demonstrate the superior performance of the proposed method under both arbitrary face resolution and severe head movement scenarios.

Related Work

Remote Photoplethysmography Measurement. Plenty of handcrafted rPPG measurement methods have been proposed since early studies (Takano and Ohta 2007; Verkruysse, Svaasand, and Nelson 2008) showed the feasibility of recovering physiological signals through a digital camera. Some early works apply traditional signal processing, including matrix transformation (Tulyakov et al. 2016; Shi et al. 2020), Least Mean Squares (Li et al. 2014), and Blind Source Separation (BSS) (Poh, McDuff, and Picard 2010a,b). In recent years, with the rise of deep learning, DeepPhys (Chen and McDuff 2018) and PhysNet (Yu, Li, and Zhao 2019) first introduced end-to-end CNN frameworks to this field. Meanwhile, spatio-temporal signal map based methods (Niu et al. 2020; Lu, Han, and Zhou 2021) have also attracted attention due to their excellent performance. Towards efficient rPPG measurement, AutoHR (Yu et al. 2020) and EfficientPhys (Liu et al. 2022) search for and design lightweight end-to-end models. Recently, PhysFormer (Yu et al. 2022a) made progress via temporal difference transformers that explore the long-range spatio-temporal relationships of the rPPG representation. Besides supervised learning with labeled facial videos, unsupervised learning has also been validated for rPPG measurement (Gideon and Stent 2021). The vulnerability of rPPG models has also been discussed recently, e.g., with respect to phase differences (Mironenko et al. 2020; Moço et al. 2018), camera rolling shutter (Moço et al. 2018; Zhan et al. 2020) and video compression formats (McDuff, Blackford, and Estepp 2017).

Face Resolution and its Impact on rPPG. In real-world applications, the distance between the camera and the participants varies, resulting in arbitrary resolution of the facial region. Some recent works aim to extract rPPG from low-resolution videos of fixed size. They use super-resolution models, either end-to-end (McDuff 2018) or two-stage (Song et al. 2020; Yue et al. 2021), to recover high-quality face videos and the corresponding rPPG signals simultaneously. However, the target of super-resolution (Shi et al. 2018; Shi and Zhao 2019) is mainly visual quality, not maintaining the quality of rPPG signals. Furthermore, it remains a challenge to obtain reasonable rPPG signals in the arbitrary-face-resolution scenario, which is the more practical real-world setting.

Face Alignment for rPPG. The alignment of the face video enhances the performance of rPPG measurement. One straightforward piece of evidence is that,
with the assistance of face landmarks (Xia et al. 2022; Wan et al. 2020; Shi et al. 2022), the generated facial-ROI-based spatio-temporal signal maps (Lu, Han, and Zhou 2021; Lu and Han 2021; Niu et al. 2018a, 2020, 2018b) benefit motion-robust rPPG measurement. However, the efficacy of these methods depends highly on the accuracy of the detected face landmarks. As reported in (Niu et al. 2018b), large head rotations easily cause the loss of landmarks. Recently, temporal alignment has been widely applied in video frame interpolation and video super-resolution. Optical flow computed for image-level alignment (Yi et al. 2019) and feature-level alignment (Chan et al. 2021) are two representative streams of temporal alignment. In this work, we introduce optical-flow-guided feature alignment for motion-robust rPPG measurement.

Methodology

Overall Framework

[Figure 2: Overall framework of the proposed method. The frames in the sequence can be of arbitrary resolution. In the spatial stream, each arbitrary-resolution face frame in the sequence forwards the Physiological Signal Feature Extraction (PFE) block, which maps the arbitrary-size features to fixed-size facial structure features. In the temporal stream, the Temporal Face Alignment (TFA) block interpolates the frames to the same shape to generate the temporally aligned features. The facial structure features and the temporally aligned features are added to form the facial structure-motion features. Finally, the facial structure-motion features forward an rPPG signal backbone to predict the rPPG signals.]

As illustrated in Figure 2, given an arbitrary-resolution face sequence X = [x_1, x_2, \dots, x_T], x_i ∈ R^{3×h_i×w_i}, i ∈ {1, ..., T}, as input, the proposed method forwards the two-stream pathway consisting of the Physiological Signal Feature Extraction block (PFE) and the Temporal Face Alignment block (TFA) to form the facial structure-motion features. Then an rPPG backbone (e.g., PhysNet (Yu, Li, and Zhao 2019)) is used for rPPG signal prediction. Note that the height h_i and width w_i can differ from frame to frame. Before the PFE stream, we adopt a ConvBlock to extract features X_{ar} = [x^1_{ar}, x^2_{ar}, \dots, x^T_{ar}], x_{ar} ∈ R^{C×(h_i/2)×(w_i/2)}, from the arbitrary-resolution face sequence X. Specifically, the ConvBlock is a convolutional block with kernel size (1 × 5 × 5) cascaded with batch normalization (BN), ReLU, and MaxPool, where the pooling layer halves the spatial dimensions. Then X_{ar} forwards the PFE block to generate the facial structure features X_{st} ∈ R^{T×C×H×W}, where T, C, H, W denote the constant clip length, channels, height and width, respectively. In terms of the TFA stream, the TFA block first interpolates the arbitrary-resolution sequence X to X̂ ∈ R^{T×3×H×W}, which has the same height and width as X_{st}. Then, TFA uses the bidirectional optical flow (forward and backward) between successive frames to obtain the temporal face alignment features X_{mo} ∈ R^{T×C×H×W}. The output X_{st} of PFE and the output X_{mo} of TFA are summed to form the facial structure-motion features X_{st-mo} ∈ R^{T×C×H×W}. Finally, the rPPG signal backbone predicts the 1D rPPG signal Y ∈ R^T from X_{st-mo}. More details for reproducibility can be found in our code.

Physiological Signal Feature Extraction (PFE)

The PFE block is applied in the spatial dimension to each face frame. As shown in Figure 3, the proposed PFE contains two parts.
[Figure 3: The structure of the PFE block.]

The upper and lower branches are devised for facial information and position information, respectively. In the upper branch, the features x_{ar} from an arbitrary-resolution frame are first interpolated to constant-resolution features x_{cr} ∈ R^{C×H×W}. To exploit the facial features, receptive field expansion is conducted as in Eq. (1) to obtain the expanded features x̂_{cr} ∈ R^{(n²C)×H×W}. Meanwhile, in the lower branch, representative area encoding (RAE) is employed as in Eq. (2) to record the mapping relationship of pixel positions between x_{ar} and x_{cr}. The relationship is described by a coordinate tensor x_{size} ∈ R^{2×H×W}, whose two channels represent the scaling ratios along the height and width, accordingly. Then, the expanded features x̂_{cr} together with the coordinate tensor x_{size} are fed into the facial feature encoding of Eq. (3) to produce the facial structure features x_{st} ∈ R^{C×H×W}.

Receptive field expansion. To enrich and mine the contextual information contained in the facial structure features x_{cr}, we first unfold x_{cr} and then expand its receptive field by concatenating the n × n neighboring features to obtain x̂_{cr}. Formally, the receptive field expansion is defined as

x̂_{cr}(i, j) = Concat({x_{cr}(i + n, j + n)}_{n ∈ Neighbor}), \quad (1)

where i and j index the spatial position of the features, and n = 3 is the default setting.

Representative area encoding (RAE). As the arbitrary-size features x_{ar} have a different spatial size from the structure features x_{st}, spatial positions in x_{st} correspond to different areas of x_{ar}. It is therefore important to explicitly describe the representative area of each spatial position. We formulate the representative area information x_{size} ∈ R^{2×H×W} as

x_{size}(i, j) = [σ_H, σ_W], \quad σ_H = h/H, \quad σ_W = w/W, \quad (2)

where σ_H and σ_W are the scaling ratios along the height and width dimensions when x_{ar} is transformed to x_{st}.

Facial feature encoding. We concatenate x̂_{cr} and x_{size} along the channel dimension. A shallow facial feature encoding function, parameterized simply as an MLP, is designed to mine the semantic facial structure features. The facial feature encoding takes the form

x_{st} = Reshape(MLP(Flatten(Concat(x̂_{cr}, x_{size})))). \quad (3)

After extracting the facial structure features x_{st} ∈ R^{C×H×W} from each frame, we merge them along the temporal dimension to form X_{st} ∈ R^{T×C×H×W}.
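A minimal PyTorch sketch of this PFE computation is given below: interpolation to a fixed grid, 3×3 neighborhood expansion via unfolding (Eq. (1)), the (σ_H, σ_W) scaling-ratio channels of Eq. (2), and a per-position MLP realized as 1×1 convolutions (Eq. (3)). The hidden width is an illustrative assumption, not the authors' configuration.

```python
# Sketch of the PFE block: Eqs. (1)-(3) over arbitrary-resolution features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PFE(nn.Module):
    def __init__(self, ch=16, out_hw=(64, 64), n=3):
        super().__init__()
        self.out_hw, self.n = out_hw, n
        # per-position MLP over (n^2 * C + 2) channels == 1x1 convolutions
        self.mlp = nn.Sequential(
            nn.Conv2d(n * n * ch + 2, 64, 1), nn.ReLU(),
            nn.Conv2d(64, ch, 1))

    def forward(self, x_ar):                       # x_ar: (B, C, h, w), any h, w
        B, C, h, w = x_ar.shape
        H, W = self.out_hw
        x_cr = F.interpolate(x_ar, size=(H, W), mode="bilinear",
                             align_corners=False)
        # Eq. (1): concatenate each position's n x n neighborhood
        x_exp = F.unfold(x_cr, kernel_size=self.n, padding=self.n // 2)
        x_exp = x_exp.view(B, C * self.n * self.n, H, W)
        # Eq. (2): representative-area (scaling ratio) encoding
        size = x_ar.new_tensor([h / H, w / W]).view(1, 2, 1, 1).expand(B, 2, H, W)
        # Eq. (3): facial feature encoding on the concatenated channels
        return self.mlp(torch.cat([x_exp, size], dim=1))
```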
Temporal Face Alignment (TFA)

The facial structure features X_{st} from the PFE block have rich representation capacity under the arbitrary-resolution condition. However, in practice, head movement significantly influences end-to-end rPPG measurement. For example, a huge-angle rotation will move part of the facial features out of the scope of the facial structure features X_{st}. Previous works use landmark detection methods such as OpenFace (Baltrušaitis, Robinson, and Morency 2016) to extract face landmarks for facial ROI alignment. However, the robustness of rPPG measurement is then highly dependent on the accuracy of the face landmarks. Here, three problems are noted for face alignment: 1. Landmark status. The positions of face landmarks can change dramatically because of head motion, which induces inaccurate detection of the ROIs in the facial clips. 2. Interpolation. The shape of the ROI might differ across frames, and interpolation is usually used to keep it consistent (Hu et al. 2021). However, interpolation may corrupt the color changes of pixels and eliminate the rPPG cues. 3. Lost landmarks. When the head movement involves a huge-angle rotation, part of the face may disappear from the frame. In this case, the predicted landmarks mark some regions randomly. Head rotation is a continuous process, and the state of each frame is correlated with the forward and backward states. Based on these observations, we propose the temporal face alignment (TFA) block, which leverages optical flow to describe the facial motion and warp the facial structure features.

[Figure 4: The structure of the TFA block. P_f and P_b denote the forward and backward optical-flow face alignment blocks, respectively.]

As shown in Figure 4, TFA adopts a typical bidirectional recurrent network. The video sequence X̂ forwards the optical-flow face alignment of Eq. (4) to get the head motion features H^{b,f} = [h^{b,f}_1, h^{b,f}_2, \dots, h^{b,f}_T], h^{b,f}_i ∈ R^{C×H×W}. Then, h^b and h^f are fed into the bidirectional aggregation of Eq. (5) to produce the temporal face alignment features x_{mo}.

Optical flow face alignment. The video sequence X̂ is first fed into 'Optical' to calculate the optical flow s_i with SPyNet (Ranjan and Black 2017). Then, the optical flow s_i is utilized to 'Warp' the head motion features h_{i−1} of the previous frame to get aligned features h̄_{i−1} for frontalizing faces. Note that the initial state h_0 is initialized with all-zero features. The aligned features, together with the current frame, are then passed to 15 basic residual blocks to obtain h_i. Optical flow face alignment takes the form

s^{b,f}_i = Optical(x_i, x_{i±1}), \quad h̄_{i±1} = Warp(h_{i±1}, s^{b,f}_i), \quad h^{b,f}_i = ResBlock(Concat(x_i, h̄_{i±1})). \quad (4)

The head motion features h_i can be computed in two temporal directions (i.e., forward and backward); thus, we use h^f_i and h^b_i to denote the features obtained by traversing x_i forward and backward, respectively.

Bidirectional aggregation. To aggregate the backward and forward head motion features, we concatenate h^b_i and h^f_i along the channel dimension and introduce a convolutional layer to maintain the number of channels. Formally, the bidirectional aggregation is defined as

x_{mo} = F(Concat(h^b_i, h^f_i)), \quad (5)

where F(·) is a 1 × 1 convolutional layer and x_{mo} ∈ R^{C×H×W} denotes the generated temporal face alignment features. Finally, the facial structure features x_{st} are added to x_{mo} to obtain the facial structure-motion features x_{st-mo} ∈ R^{C×H×W}.
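The 'Warp' operator in Eq. (4) can be sketched with grid_sample-based flow warping, as is typical in flow-guided recurrent alignment; a minimal PyTorch version is shown below. The flow estimator (SPyNet) is treated as given, and the residual-block stack is passed in as an assumed module.

```python
# Flow warping of the previous hidden state, plus one recurrent TFA step.
import torch
import torch.nn.functional as F

def flow_warp(feat, flow):
    """feat: (B, C, H, W); flow: (B, 2, H, W) in pixel offsets (x, y)."""
    B, _, H, W = feat.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(feat.device)   # (2, H, W)
    coords = grid.unsqueeze(0) + flow                             # follow the flow
    # normalize to [-1, 1]; grid_sample expects a (B, H, W, 2) grid
    coords_x = 2.0 * coords[:, 0] / max(W - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(H - 1, 1) - 1.0
    norm = torch.stack((coords_x, coords_y), dim=-1)
    return F.grid_sample(feat, norm, align_corners=True)

def tfa_step(x_i, h_prev, s_i, resblocks):
    """One direction of Eq. (4): warp h_{i-1} along s_i, fuse with frame x_i."""
    h_bar = flow_warp(h_prev, s_i)
    return resblocks(torch.cat([x_i, h_bar], dim=1))
```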
Cross-Resolution Constraint and Loss Functions

Although we have designed the PFE block to tackle the arbitrary-resolution problem, it is still hard to learn resolution-invariant rPPG features with only the traditional negative Pearson loss L_{time} (Yu, Li, and Zhao 2019) and the frequency cross-entropy loss L_{fre} (Niu et al. 2020). We therefore design a novel cross-resolution constraint L_{crc} that forces the model to learn consistent rPPG predictions between two resolution views.

[Figure 5: The framework of the cross-resolution constraint. It is calculated on two input sequences of arbitrary-resolution face images from the same clip.]

Specifically, as shown in Figure 5, we sample a video clip at two different resolutions, X_1 ∈ R^{T×3×h_1×w_1} and X_2 ∈ R^{T×3×h_2×w_2}. The two sampled clips first forward the unshared PFE and shared TFA blocks, and then go through a shared rPPG backbone to predict the corresponding rPPG signals Y_1 ∈ R^{T×1} and Y_2 ∈ R^{T×1}. The cross-resolution constraint L_{crc} is formulated as the L1 distance between the two predicted signals. The overall loss function L_{overall} is

L_{crc} = ∥Y_1 − Y_2∥_1, \quad L_{overall} = L_{time} + L_{fre} + α · L_{crc}, \quad (6)

where the hyperparameter α equals 0.1. This loss keeps the model from merely attending to the similarity of low-level features across resolutions. In other words, L_{crc} focuses on the consistency of the predicted rPPG signals instead of feature-level consistency, which determines the measured performance and provides direct supervision signals for model learning.
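A minimal PyTorch sketch of this objective is given below, combining the negative Pearson loss for L_time with the L1 cross-resolution term of Eq. (6). The frequency cross-entropy term L_fre is omitted for brevity, and the mean-reduced L1 is an implementation assumption.

```python
# Sketch of the Eq. (6) objective: negative Pearson + cross-resolution L1.
import torch

def neg_pearson(pred, gt):
    """pred, gt: (B, T) rPPG signals; 1 - Pearson correlation, averaged."""
    p = pred - pred.mean(dim=1, keepdim=True)
    g = gt - gt.mean(dim=1, keepdim=True)
    corr = (p * g).sum(1) / (p.norm(dim=1) * g.norm(dim=1) + 1e-8)
    return (1.0 - corr).mean()

def overall_loss(y1, y2, gt, alpha=0.1):
    l_time = neg_pearson(y1, gt)
    l_crc = (y1 - y2).abs().mean()      # mean-reduced L1 between the two views
    return l_time + alpha * l_crc       # + L_fre in the full objective
```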
Experiment

We first conduct experiments on rPPG-based HR measurement on three benchmark datasets with their original protocols and the normal setting. Then, the UBFC-rPPG (Bobbia et al. 2019) dataset is used for performance evaluation on arbitrary-resolution facial videos and for the ablation studies.

Dataset

UBFC-rPPG. The UBFC-rPPG dataset (Bobbia et al. 2019) includes 42 videos, each about 2 minutes long. The ground-truth bio-signals were recorded by a pulse oximeter with a 60 Hz sampling rate.
PURE. The PURE dataset (Stricker, Müller, and Gross 2014) contains 60 videos from 10 subjects performing six different head motion tasks: steady, talking, slow translation, fast translation, small rotation, and medium rotation.
COHFACE. The COHFACE dataset (Heusch, Anjos, and Marcel 2017) consists of 160 one-minute videos from 40 healthy individuals, captured under studio and natural light. The videos are heavily compressed using MPEG-4 Visual, which was noted by (McDuff, Blackford, and Estepp 2017) to potentially corrupt the rPPG signal.

Implementation Details

Preprocessing and training procedure. For each video clip, we use MTCNN (Zhang et al. 2016) to crop the enlarged face area and resize each frame to 128 × 128 pixels. We then downsample the face images by factors ranging from 1.0 to 4.0 to obtain the arbitrary-scale frames. Facial video clips of arbitrary size are mapped to the fixed size H = W = 64 after the PFE and TFA blocks, with C = 16. Random horizontal flipping is used for clip-level spatial data augmentation. The proposed method is trained with batch size 2 on an RTX 3090 GPU with PyTorch. The Adam optimizer is used, the learning rate is set to 1e-4, and the weight decay is 5e-5.
Metrics and evaluation. Following (Comas, Ruiz, and Sukno 2022), we calculate the root mean squared error (RMSE) and the mean absolute error (MAE) between the predicted average HR and the ground-truth HR. We first forward the models on 160-frame clips without overlap to predict clip-level HR. To fairly compare our method with the state-of-the-art methods (Špetlík, Franc, and Matas 2018; Comas, Ruiz, and Sukno 2022; Lokendra and Puneet 2022), the whole-video performance comparisons are calculated by averaging the clip-level predictions.

Table 1: HR estimation results (bpm) on the UBFC, PURE, and COHFACE datasets. The proposed TFA and PFE blocks remarkably improve the performance of the baseline PhysNet. A dash indicates a result not reported.

                  |  UBFC          |  PURE          |  COHFACE
Method            | MAE↓   RMSE↓   | MAE↓   RMSE↓   | MAE↓   RMSE↓
CHROM             | 3.44   4.61    | 2.07   2.5     | -      -
POS               | 2.44   6.61    | 3.14   10.57   | -      -
HR-CNN            | -      -       | 1.84   2.37    | 8.10   10.8
DeepPhys          | 2.90   3.63    | 1.84   2.31    | -      -
Zhan et al.       | 2.44   3.17    | 1.82   2.29    | -      -
Gideon et al.     | 3.60   4.60    | 2.30   2.90    | 2.30   7.60
AND-rPPG          | 2.67   4.07    | -      -       | -      -
TDM               | 2.32   3.08    | 1.83   2.33    | -      -
PhysNet           | 2.95   3.67    | 2.16   2.7     | 5.38   10.76
TFA-PFE (Ours)    | 0.76   1.62    | 1.44   2.50    | 1.31   3.92

Comparison on the Normal Face Resolution Setting

For a fair comparison, we train the baseline model PhysNet (Yu, Li, and Zhao 2019) and our method with the same recipe to alleviate the influence of arbitrary resolution and data resolution augmentation. As shown in Table 4, the performance of our re-implemented PhysNet is slightly better than the result reported in (Gideon and Stent 2021). The UBFC-rPPG (UBFC) dataset is used for the ablation study of the proposed TFA and PFE blocks under arbitrary-resolution and head-motion scenarios. We compare our method with eight state-of-the-art methods (CHROM (De Haan and Jeanne 2013), POS (Wang et al. 2016), HR-CNN (Špetlík, Franc, and Matas 2018), DeepPhys (Chen and McDuff 2018), Zhan et al. (Zhan et al. 2020), Gideon et al. (Gideon and Stent 2021), AND-rPPG (Lokendra and Puneet 2022), and TDM (Comas, Ruiz, and Sukno 2022)) in Table 1.

Results on UBFC-rPPG. It can be seen from the second column of Table 1 that the vanilla 3DCNN-based PhysNet performs worse than two other deep-learning-based methods (DeepPhys and TDM). When assembled with the proposed TFA and PFE blocks, PhysNet+TFA+PFE achieves the best performance on UBFC-rPPG, outperforming DeepPhys and TDM by 1.86 bpm and 1.28 bpm MAE, respectively. In other words, the proposed TFA and PFE blocks improve the baseline PhysNet performance, reducing MAE by 1.81 bpm and RMSE by 1.39 bpm on UBFC-rPPG, which indicates the effectiveness of the robust rPPG feature representation of the TFA-PFE blocks.
Results on PURE. As shown in the third column of Table 1, compared with the baseline PhysNet, the proposed TFA and PFE blocks improve the MAE from 2.16 bpm to 1.44 bpm on PURE. This indicates that TFA-PFE is able to represent more motion-robust rPPG features, as PURE contains plenty of hard samples with severe head movement. As for the RMSE metric, the proposed method performs slightly worse than TDM (+0.17 bpm), which is mainly caused by the difference in rPPG backbones (TDM uses two extra differential temporal convolutions). Note that the proposed TFA and PFE blocks could also be plugged into TDM to further improve its performance.
Results on COHFACE. Compared with UBFC-rPPG and PURE, the face videos in COHFACE are highly compressed, resulting in obvious compression artifacts. As can be seen from the last column of Table 1, existing supervised CNN-based methods (HR-CNN, Gideon et al. (Gideon and Stent 2021), and PhysNet) perform poorly (>5 bpm RMSE) due to the low face video quality. Thanks to the feature refinement of the TFA and PFE blocks, the proposed method achieves state-of-the-art performance (RMSE = 3.92 bpm), outperforming previous methods by a large margin.
Results on UCLA-rPPG. UCLA-rPPG involves more diverse scenarios and subject skin tones. To show that our method targets general rPPG issues, we also train and test the TFA and PFE blocks on it. As shown in Table 2, our method still improves performance.

Table 2: The performance of TFA and PFE on UCLA-rPPG.

            |  Vanilla       |  PFE           |  PFE+TFA
Method      | MAE↓   RMSE↓   | MAE↓   RMSE↓   | MAE↓   RMSE↓
PhysNet     | 11.82  17.23   | 9.15   14.82   | 8.73   13.23
PhysFormer  | 11.75  16.39   | 8.92   11.28   | 5.96   12.17

Comparison on Arbitrary Face Resolution Settings

Here we downsample the face images from the UBFC dataset under two settings: fixed face resolution and varying face resolution. The former describes the long-distance scenario, while the latter mimics the varying face-camera-distance scenario.

Results on fixed face resolution. [Figure 6: HR estimation results (bpm) on UBFC-rPPG under different fixed face resolution settings.] It can be seen from Figure 6 that the baseline PhysNet is easily influenced by the face resolution. When the fixed face resolution is smaller than 50×50, the performance of PhysNet drops sharply (e.g., MAE > 5 bpm). In contrast, when assembled with the proposed TFA and PFE blocks, it predicts accurate rPPG-based HR (MAE < 2 bpm) under most face resolution settings.

Results on varying face resolution. rPPG measurement from varying-face-resolution video is challenging due to the complex temporal contextual interference. The results for varying face resolution on UBFC are shown in Table 3. Compared with the vanilla PhysNet, the proposed TFA and PFE blocks benefit facial feature alignment and refinement among consecutive frames, improving the MAE by 8.87 bpm, 6.17 bpm, 10.00 bpm and 5.17 bpm in the four scenarios (high/low resolution gradually decreasing, and high/low resolution gradually decreasing and then increasing), respectively.

Ablation Study

We also provide ablation studies of the PFE and TFA blocks under the arbitrary face resolution and severe head movement scenarios on the UBFC dataset.

Table 3: MAE results (bpm) of the PFE and TFA blocks on UBFC under the varying face resolution setting. '128 to 64' means that the face resolution gradually decreases from 128×128 to 64×64 within a video clip.

Model     | 128 to 64 | 64 to 32 | 128 to 64 to 128 | 64 to 32 to 64
Baseline  | 10.73     | 8.14     | 11.85            | 7.15
PFE       | 3.49      | 3.84     | 3.29             | 5.59
TFA       | 5.44      | 5.04     | 5.14             | 4.33
TFA-PFE   | 1.86      | 1.97     | 1.85             | 1.88

Table 4: MAE results (bpm) of the PFE and TFA blocks on UBFC under different fixed face resolution settings.

Model           | 128×128 | 96×96 | 85×85 | 75×75 | 64×64
Baseline        | 2.38    | 2.64  | 2.21  | 6.06  | 6.82
PFE w/o RAE     | 3.79    | 3.77  | 3.80  | 3.73  | 3.76
PFE w/o Lcrc    | 2.87    | 2.85  | 2.89  | 2.86  | 2.84
PFE             | 2.40    | 2.39  | 2.41  | 2.34  | 2.33
TFA             | 8.73    | 8.30  | 8.72  | 8.76  | 8.29
TFA(single)-PFE | 2.28    | 2.29  | 2.35  | 2.35  | 2.44
TFA-PFE         | 1.60    | 1.61  | 1.62  | 1.60  | 1.63

Efficacy of the cross-resolution constraint. In the default setting, the models with PFE are trained on two views with different face resolutions using the cross-resolution constraint L_crc. In this ablation, we examine how L_crc impacts PFE. As shown in Table 4, 'PFE' outperforms 'PFE w/o Lcrc' by a convincing margin (0.4 to 0.5 bpm MAE) under almost all face resolution settings, indicating that this simple resolution consistency helps PFE learn resolution-robust rPPG cues.

Efficacy of the PFE block on arbitrary face resolution. We first investigate the impact of the representative area encoding (RAE) of PFE on fixed-face-resolution UBFC. It can be seen from the rows 'PFE w/o RAE' and 'PFE' in Table 4 that PFE without RAE performs even worse than the baseline PhysNet itself in the high-resolution cases.
When PFE is equipped with RAE, it achieves robust HR estimation under all the fixed face resolution settings. Besides, under the more challenging varying face resolution scenarios, a consistent conclusion can be drawn from Table 3: PFE significantly improves the baseline performance (reducing MAE by 7.24 bpm in the scenario where a high resolution gradually decreases).

Efficacy of the TFA block on arbitrary face resolution. As shown in the row 'TFA' of Table 4, the baseline with only the TFA block performs even worse than the baseline itself. This is because, when estimating the optical flow, all clips are interpolated to a fixed resolution, which weakens the TFA block's ability to describe the rPPG-aware color areas (Xue et al. 2019). From the row 'TFA-PFE' we find that the best performance under all fixed face resolution settings is achieved when the baseline is assembled with both the PFE and TFA blocks. A similar conclusion can be drawn under the varying face resolution setting in Table 3. Moreover, we also consider the online testing case, in which the temporal alignment state from backward frames is not available. For this case, we design a TFA block with a single forward direction for facial feature alignment. The row 'TFA(single)-PFE' in Table 4 shows that the MAE degrades slightly (around 0.6 bpm) compared with the bidirectional TFA, but it still improves over the baseline with only PFE.

Efficacy of the PFE and TFA blocks on severe head movement. To confirm the efficacy of the PFE and TFA blocks under severe head movements with large-angle rotation, we conduct studies on carefully selected videos with large-angle rotation from UBFC, PURE and COHFACE. Specifically, the participants in COHFACE quickly rotate their heads at an average angle of 80°, while the participants in UBFC and PURE rotate their heads very slowly at an average angle of 35°.

[Figure 7: HR estimation results (bpm) on the samples with severe head movement and large face rotation.]

The results are shown in Figure 7. We find that 1) compared with the baseline PhysNet, PFE obviously improves the performance on these videos with severe head movement, reducing MAE by 1.76 bpm, 2.05 bpm, and 5.63 bpm on UBFC, PURE and COHFACE, respectively; and 2) the proposed TFA-PFE further decreases MAE by 0.57 bpm, 0.58 bpm, and 2.07 bpm on these datasets, thanks to the excellent motion-robust capacity of TFA.

Edge Deployment. Considering that rPPG will be employed mostly on edge devices with limited computational capacity, we provide a detailed analysis of our model in Table 5.

Table 5: The number of parameters and FLOPs of each model, and the inference time on edge devices.

Model               | Parameters | FLOPs    | Xavier AGX | AGX Orin
PhysNet             | 768.64 KB  | 35.00 G  | 0.18 s     | 0.25 s
PhysNet+PFE         | 805.77 KB  | 46.51 G  | 2.24 s     | 3.63 s
PhysNet+PFE+TFA     | 1344.67 KB | 92.67 G  | 3.37 s     | 4.82 s
PhysFormer          | 7.37 MB    | 50.48 G  | 1.83 s     | 2.18 s
PhysFormer+PFE      | 7.42 MB    | 81.52 G  | 2.47 s     | 3.81 s
PhysFormer+PFE+TFA  | 8.24 MB    | 129.59 G | 3.52 s     | 4.67 s"
+ } + ], + "Zhengyu Li": [ + { + "url": "http://arxiv.org/abs/2306.13319v7", + "title": "A SAT Solver and Computer Algebra Attack on the Minimum Kochen-Specker Problem", + "abstract": "One of the fundamental results in quantum foundations is the Kochen-Specker\n(KS) theorem, which states that any theory whose predictions agree with quantum\nmechanics must be contextual, i.e., a quantum observation cannot be understood\nas revealing a pre-existing value. The theorem hinges on the existence of a\nmathematical object called a KS vector system. While many KS vector systems are\nknown, the problem of finding the minimum KS vector system in three dimensions\n(3D) has remained stubbornly open for over 55 years.\n To address the minimum KS problem, we present a new verifiable\nproof-producing method based on a combination of a Boolean satisfiability (SAT)\nsolver and a computer algebra system (CAS) that uses an isomorph-free orderly\ngeneration technique that is very effective in pruning away large parts of the\nsearch space. Our method shows that a KS system in 3D must contain at least 24\nvectors. We show that our sequential and parallel Cube-and-Conquer (CnC)\nSAT+CAS methods are significantly faster than SAT-only, CAS-only, and a prior\nCAS-based method of Uijlen and Westerbaan. Further, while our parallel pipeline\nis somewhat slower than the parallel CnC version of the recently introduced\nSAT Modulo Symmetries (SMS) method, this is in part due to the overhead of\nproof generation. Finally, we provide the first computer-verifiable proof\ncertificate of a lower bound to the KS problem with a size of 40.3 TiB in\norder 23.", + "authors": "Zhengyu Li, Curtis Bright, Vijay Ganesh", + "published": "2023-06-23", + "updated": "2024-04-09", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph", + "cs.CC", + "math.CO" + ], + "main_content": "Introduction

Quantum Mechanics (QM) is often described as one of the most successful physical theories of all time, and yet many questions regarding the very foundations of QM remain unresolved. To address these foundational issues, many interpretations of QM (i.e., mappings from mathematical formalisms of QM to physical phenomena) have been proposed. Hidden-variable theories are attempts at understanding counterintuitive QM phenomena through a deterministic lens by positing the existence of (possibly) unobservable physical entities or hidden variables [1] that standard QM theory does not account for (and hence is deemed incomplete). Over the years, many constraints have been imposed on hidden-variable theories, e.g., Bell's inequalities, which rule out the possibility of local hidden-variable theories that are also in agreement with the predictions of QM [2]. In a similar vein, Simon Kochen and Ernst Specker [3] proved their famous Kochen-Specker (KS) theorem in 1967 (as did, independently, John Bell in 1966 [4]), which essentially asserts that non-contextual hidden-variable theories cannot reproduce the empirical predictions of QM. The KS theorem rules out non-contextual hidden-variable theories via the existence of a finite set of three-dimensional vectors, referred to as a KS vector system [3]. A KS vector system (or simply a KS system) is a combinatorial object that witnesses a contradiction between non-contextuality (i.e., the assumption that observables can be assigned values prior to measurement and independent of measurement context) and the SPIN axiom of QM.
The first KS vector system, discovered in 1967, contains 117 vectors [3]. Another theorem that relies on the existence of KS systems in an essential way is the "Free Will" theorem of John Conway and Simon Kochen [5]. Since the publication of Kochen and Specker's theorem in 1967, physicists and mathematicians have wondered about the cardinality of the smallest-sized KS vector system (see Table 2 and Section 3). Finding the minimum KS system, referred to as the minimum KS problem, is not only of scientific and historical interest but also has direct applications in quantum information processing [6]. For example, finding a minimum KS system could enable applications in the security of quantum cryptographic protocols based on complementarity [7], zero-error classical communication [8], and dimension witnessing [9]. Further, the large size of all known KS systems has hindered physicists from using them for empirical tests of the KS theorem, similar to the empirical tests of Bell's theorem [10].

1.1 Two definitions of the Kochen-Specker System

There are two definitions of the KS system widely used in the literature. The "original" KS set definition used in this paper (Section 2) contains only the vectors necessary to prove the KS theorem mathematically. This "original" definition of a KS set is the one originally used by Kochen and Specker themselves. KS systems developed under this definition are commonly referred to as "original KS systems" [11]. However, from an experimental perspective, another definition that requires additional vectors in the KS system is used, since constructing the set in practice would involve vectors not explicitly needed in the mathematical proof. Specifically, this definition requires that every pair of vectors in a 3-dimensional KS set belong to a set of 3 mutually orthogonal vectors. KS systems developed under this alternative definition are commonly referred to as "extended KS systems" [11]. Both definitions are well known and used extensively in the literature. For example, the 'original' definition used in this paper is also used in [12], [13], [14], [15], [5], [16], [17], [18], [19], while the other definition is used in [11], [20], [21], [22], [23], [24], [25]. As a result of the difference between the two definitions, the lower bounds on the original KS system and the extended KS system are also different, as shown in Table 1.

Table 1: Size of 3-dimensional KS systems and how they differ based on the definition used, as discussed in Subsection 1.1. In addition, Pavičić and Megill [27] discovered many other unique 3-dimensional extended KS systems with 51, 53, 54, 55, 57, 69, etc. vectors through automated generation.

Discoverers          | Original System | Extended System
Kochen, Specker [3]  | 117             | 192
Schütte [15]         | 33              | 49
Peres [26]           | 33              | 57
Conway, Kochen [14]  | 31              | 51

John Conway has stressed the problem of finding the minimum number of three-dimensional vectors necessary to prove the "Free Will" theorem in public lectures on the topic (see [17]). Thus, knowing the smallest 'original' KS set is of interest, since such a set would correspond to a proof of the Free Will theorem using the fewest number of three-dimensional directions. This, in a certain sense, would lead to the 'simplest' proof of the theorem.
We believe the question of the minimal size of an 'original' KS set is theoretically interesting (independent of what the minimal size of an 'experimental' KS set is). In this paper, we investigate the lower bound of the original KS system. However, the paradigm proposed in this paper is easily adaptable and scalable, and the approach can be applied to both definitions given an appropriate SAT encoding of the problem.

1.2 The SAT+CAS Paradigm for Hard Combinatorial Problems

In recent years we have witnessed the dramatic impact of satisfiability (SAT) solvers, computer programs that take as input Boolean logic formulas and decide whether they have solutions, in areas as diverse as AI, software engineering, program verification, program synthesis, and computer security [28, 29]. Unfortunately, despite these fantastic achievements, SAT solvers struggle with certain problems, such as those containing many symmetries [30] or those requiring the use of mathematical theories more advanced than propositional logic [31]. Much work has been done to remedy these drawbacks, including the development of sophisticated symmetry-breaking techniques [32] and of solvers that support richer logics, such as "SAT modulo theories" (SMT) solvers [33]. However, the mathematical support of SMT solvers is quite limited when compared with the vast mathematical functionality available in a modern computer algebra system (CAS). In response to this need for a solver that combines the efficient search capabilities of SAT solvers with the mathematical knowledge available in CASs, a new kind of solving methodology was developed in 2015 by Zulkoski, Ganesh, and Czarnecki [34] and independently by Ábrahám [35]. This SAT+CAS solving methodology has been successfully applied to many diverse problems, including circuit verification [36, 37], automatic debugging [38], finding circuits for matrix multiplication [39], computing directed Ramsey numbers [40], and verifying mathematical conjectures [41]. For other work in the intersection of symbolic computation and satisfiability checking, see Matthew England's summary [42] of the SC-Square project.

Table 2: A chronology of the bounds on the size of the minimum KS vector system in three dimensions. This table should not be regarded as a comprehensive catalog of all three-dimensional KS systems (Section 3); rather, it is a chronological overview highlighting the advancements in reducing the size of the minimal 3-D KS system in line with its initial definition. The present work (presented at CanaDAM 2023) was performed independently of Kirchweger, Peitl, Szeider (presented at IJCAI 2023).

Discoverers                                               | Year | Bound
Kochen, Specker [3]                                       | 1967 | ≤117
Jost [43]                                                 | 1976 | ≤109
Conway, Kochen [14]                                       | 1990 | ≤31
Arends, Ouaknine, Wampler [17]                            | 2009 | ≥18
Uijlen, Westerbaan [18]                                   | 2016 | ≥22
Li, Bright, Ganesh [44]                                   | 2022 | ≥23
Li, Bright, Ganesh [45] / Kirchweger, Peitl, Szeider [19] | 2023 | ≥24

In short, the SAT+CAS methodology has found wide application in diverse fields that somehow require solving hard combinatorial problems. In this paper, we use the SAT+CAS solving methodology (see Figure 1) to dramatically improve the performance of the search for KS systems compared to all previous approaches developed to prove lower bounds for the minimum KS problem (see Section 1.4).
Although isomorph-free exhaustive generation has been used extensively in combinatorial enumeration, it has only recently been combined with SAT solving [47, 48]. The traditional approach to preventing a SAT solver from repeatedly exploring isomorphic parts of a search space is the use of symmetry-breaking techniques [30]. One such approach is to add "static" constraints to the input formula at the beginning of the search, aimed at reducing the size of the search space [49, 50]. Unfortunately, this approach can be quite expensive in the sense that the number of added constraints can be large (e.g., exponential in the number of variables of the formula encoding the problem at hand). Another approach is to "dynamically" break symmetries during the solver's search [30, 51], as in the SAT modulo symmetries (SMS) paradigm [52, 53]. Our approach is similar in that it also dynamically adds constraints to the problem during the solving process. However, an important difference is that the SAT+CAS paradigm is more general, since it goes beyond breaking symmetries: for example, in the resolution of the smallest counterexample of the Williamson conjecture, we used the discrete Fourier transform (DFT) as part of the CAS computations [54].

Fig. 1: A flowchart of our SAT+CAS based tool MathCheck for solving the KS problem in the sequential setting. The CNF instance encoding the KS problem (see Section 4) is simplified using CaDiCaL+CAS. The simplified instance is passed to the MapleSAT+CAS tool (see Section 5) either sequentially or in parallel using Cube-and-Conquer (CnC). Finally, an embeddability checker applies the SMT solver Z3 to determine whether the candidates are embeddable (see Section 6).

1.3 Automated Verification of Results

Verification is of utmost importance in the context of computer-assisted proofs, given the mathematical nature of such computations, especially for nonexistence proofs. Fortunately, the SAT+CAS paradigm naturally lends itself to automated verification, since all modern SAT solvers produce verifiable proofs. By contrast, none of the previous computer-assisted proofs of lower bounds for the minimum KS problem are verifiable. Since our problem requires the solver to perform an exhaustive search, the validity of our nonexistence result depends crucially on the encodings and the computational tools that we use: in particular, on the correctness of the SAT solver's search and of the computer algebra system's isomorph-free exhaustive generation routine. Fortunately, our SAT+CAS method generates verifiable certificates that allow an independent third party to certify both that the SAT solver's search is exhaustive and that the facts provided by the isomorph-free generation are correct.
Thus, one does not need to trust either the SAT solver or the CAS to trust that our results are correct; instead, one only needs to trust the correctness of the proof verifier. This is quite significant, as SAT solvers and CASs are complicated pieces of software that typically cannot be guaranteed to be bug-free. By contrast, a proof verifier is a much simpler piece of software that can be formally checked. In Section 9, we provide details on the verification techniques that we used to certify our results.

1.4 Our Contributions

• Proof-producing SAT+CAS with Orderly Generation (OG) Method: In this paper, we present the design and implementation of a verifiable, proof-producing SAT+CAS system with orderly generation (OG) aimed at combinatorial problems (as part of the SAT+CAS tool MathCheck; code at https://github.com/BrianLi009/MathCheck), thus showing that the minimum KS system must contain at least 24 vectors. We also extend our work to complex vectors in three dimensions (3D), thus establishing a lower bound of 24 for both the real and complex KS problem.

• Speedup over Competing Methods: We show that our sequential and parallel Cube-and-Conquer (CnC) SAT+CAS methods are significantly faster than SAT-only, CAS-only, and the prior CAS-based method of [18]. While our pipeline is somewhat slower than the recently introduced SMS method, this is in part because our methods generate verifiable proofs, which adds complexity and slows down the search.

• Formal Verification of Results: Finally, our approach provides a formal verification of the lower bound of 24 for the minimum KS problem in 3D by verifying all certificates computed by the SAT+CAS solvers in all orders up to and including order 23 (see Section 9). By contrast, [19] describe a method to verify their result, but they report having verified only 5% of the proofs in order 23.

2 Background

In this section, we introduce several fundamental concepts from quantum foundations, such as the SPIN axiom, 010-colorability, the KS theorem, and the KS vector system. For a deeper dive, we refer the reader to the QM section in the Stanford Encyclopedia of Philosophy [1]. We assume that the reader is familiar with Boolean logic and SAT solvers. While we provide a very brief overview of cube-and-conquer SAT solvers, we refer the reader to the Handbook of Satisfiability [28] for a comprehensive overview.

2.1 The KS Theorem

Informally, the KS theorem states that there is a contradiction between the SPIN axiom of standard QM and the assumption of non-contextuality. The Stanford Encyclopedia of Philosophy provides a comprehensive background to the KS theorem and stresses its importance in the foundations of QM [1]. The proof of the KS theorem crucially relies on the existence of a KS vector system (see Figure 2). More precisely, exhibiting a KS vector system proves the KS theorem, which essentially states that the unit sphere is not 010-colorable (defined below).

Fig. 2: The 31 vectors of the smallest known KS system in three dimensions (discovered by John Conway and Simon Kochen circa 1990). For simplicity, the vectors have been scaled to lie on the cube with vertices (±2, ±2, ±2) instead of the unit sphere.

Spin of an Elementary Particle: Spin is an intrinsic form of angular momentum of elementary particles whose existence can be inferred from the Stern-Gerlach experiment [55].
In our context, a spin-1 particle is shot through a magnetic field in a given direction and either continues undisturbed, deflects up, or deflects down, corresponding to the 3 possible angular momentum states 0, 1, and −1. Thus, the square of this measurement is 0 or 1.

SPIN axiom: The SPIN axiom of QM states that the squared spin components of a spin-1 particle are 1, 0, 1 in three pairwise orthogonal directions of measurement. Thus, the observable corresponding to the question "is the squared spin 0?" measured in three mutually orthogonal directions always produces yes in exactly one direction and no in the other two. We use the dual of the above form in the present work, i.e., the '010' convention rather than '101', following Uijlen and Westerbaan. The SPIN axiom follows from the postulates of QM and is experimentally verifiable [56].

KS Vector System: A KS vector system can be represented in multiple ways; we describe it as a finite set of points on a sphere. As a consequence of the SPIN axiom, squared-spin measurements along opposite directions must yield the same outcome, so two collinear vectors are considered equivalent. To define a KS vector system, we first formally define a vector system and the notion of 010-colorability. For the purposes of this paper, we limit ourselves to the 3D version of the KS problem, as the size of the minimum KS system in higher dimensions is already known [57].

Definition 1 (Vector System). A vector system is a finite set of non-collinear points on the unit sphere in R^3. A {0, 1}-coloring of a vector system is an assignment of 0 and 1 to each vector in the system. The colorings of interest to us are described in the following definition.

Definition 2 (010-Colorability of Vector Systems). A vector system is 010-colorable if there exists an assignment of 0 and 1 to each point such that:
1. No two orthogonal points are assigned 1.
2. Three mutually orthogonal points are not all assigned 0.

Definition 3 (KS Vector System). A KS vector system is one that is not 010-colorable.

Definition 4 (Orthogonality Graph). For a vector system K, define its orthogonality graph GK = (V, E), where V = K and E = { (v1, v2) : v1, v2 ∈ K and v1 · v2 = 0 }.

Essentially, the vertices of GK are the vectors in K, and there is an edge between two vertices exactly when their corresponding vectors are orthogonal. Similarly, the notion of 010-colorability can be translated from a vector system to an orthogonality graph.

Definition 5 (010-Colorability of Graphs). A graph G is 010-colorable if there is a {0, 1}-coloring of the vertices such that the following conditions are satisfied simultaneously:
1. No two adjacent vertices are colored 1.
2. For each triangle, the vertices are not all colored 0.

It is not always the case that an arbitrary graph has a corresponding vector system, but if one does exist then we say that such a graph is embeddable.

Definition 6 (Embeddable Graph). A graph G = (V, E) is embeddable if it is a subgraph of an orthogonality graph for some vector system.

Being embeddable implies the existence of a vector system K whose vectors have a one-to-one correspondence with the vertices of G, such that adjacent vertices are assigned to orthogonal vectors. An example of an unembeddable graph is the cycle graph C4 on 4 vertices, as the orthogonality constraints force a pair of opposite vertices to be mapped to collinear vectors, which are not allowed in this context.
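Definition 5 can also be checked directly by exhaustive search. The sketch below is illustrative only: it enumerates all 2^n colorings, so it is feasible only for small graphs, and it is not the encoding used by our solver. It simply makes the two coloring conditions concrete.

```python
from itertools import combinations, product

def is_010_colorable(n, edges):
    """Check Definition 5 by brute force over all 2^n {0,1}-colorings.

    Vertices are 0..n-1 and `edges` is any iterable of vertex pairs.
    A KS graph is a graph for which this returns False (and which is
    also embeddable).
    """
    edge_set = {frozenset(e) for e in edges}
    triangles = [t for t in combinations(range(n), 3)
                 if all(frozenset(p) in edge_set for p in combinations(t, 2))]
    for coloring in product((0, 1), repeat=n):
        # Condition 1: no two adjacent vertices are both colored 1.
        if any(coloring[i] == coloring[j] == 1
               for i, j in (tuple(e) for e in edge_set)):
            continue
        # Condition 2: no triangle has all three vertices colored 0.
        if any(all(coloring[v] == 0 for v in t) for t in triangles):
            continue
        return True  # found a valid 010-coloring
    return False

# A single triangle is 010-colorable: color one vertex 1, the rest 0.
assert is_010_colorable(3, [(0, 1), (0, 2), (1, 2)])
```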
Definition 7 (KS Graph). An embeddable and non-010-colorable graph is called a KS graph.

Observation 1. There exists a KS vector system if and only if there exists a KS graph.

Definition 8 (KS Candidates). A KS candidate is a satisfying assignment generated by the SAT+CAS solver. If a KS candidate is also embeddable, then it is a KS graph. As described in Section 6, our solver dynamically blocks all minimal unembeddable subgraphs up to order 12; hence our KS candidates are graphs that do not contain any unembeddable subgraphs of order up to 12.

2.2 The Minimum KS Problem

The minimum KS problem is to find a KS vector system of minimum cardinality, that is, a system with the fewest number of vectors in three-dimensional space (or, equivalently, a KS graph with the fewest number of vertices). Every KS system has an associated KS graph, so if no KS graph of cardinality n exists, then the minimum KS system has at least n + 1 vectors.

2.3 Cube-and-conquer

The cube-and-conquer SAT solving paradigm was developed in [58] to solve hard combinatorial problems. The method applies two (possibly different) types of SAT solvers in two stages. First, a "cubing solver" splits a SAT instance into a large number of distinct subproblems specified by cubes: formulas of the form x1 ∧ · · · ∧ xn where the xi are literals. Second, a "conquering solver" solves each subproblem under the assumption that its associated cube is true (more precisely, it solves the conjunction of the original instance and the cube). The cube-and-conquer method has empirically been shown to be effective at quickly solving large satisfiability problems when the cubing solver generates many cubes encoding subproblems of similar difficulty. It has since been applied to huge combinatorial problems such as the Boolean Pythagorean triples problem [59], the computation of Schur number five [60], and a SAT-based resolution of Lam's problem [46].
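A minimal sketch of the conquer stage, again assuming the PySAT package, is given below. In practice a lookahead cubing solver chooses the split variables dynamically and the cubes are solved in parallel by independent workers; the fixed `split_vars` list here is a hypothetical simplification for illustration.

```python
from itertools import product
from pysat.solvers import Solver

def cube_and_conquer(cnf, split_vars):
    """Split on `split_vars` into 2^k cubes, then conquer each cube
    independently by solving under the cube's literals as assumptions."""
    sat_cubes = []
    for signs in product((1, -1), repeat=len(split_vars)):
        cube = [s * v for s, v in zip(signs, split_vars)]
        with Solver(name="minisat22", bootstrap_with=cnf) as solver:
            if solver.solve(assumptions=cube):
                sat_cubes.append(cube)
    # The original instance is satisfiable iff at least one cube is.
    return sat_cubes
```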
3 Previous Work

Over the last 55+ years, many mathematicians and physicists, including Roger Penrose, Asher Peres, and John Conway, have attempted to find a minimum 3-dimensional KS system (see Table 2). The first KS system was constructed in 1967 and contained 117 vectors [3]. A KS system with 109 vectors was found by Res Jost [43] in 1976. Peres found a KS system of size 33 in 1991, and Schütte found a KS system of size 33 in 1996. The current smallest known KS system in three dimensions contains 31 vectors and was discovered by John Conway and Simon Kochen circa 1990 (see Figure 2). All these discoveries were made analytically, without the assistance of computational methods. Recently, Pavičić and Megill [27] applied an automated generation approach to robustly generate KS systems in odd dimensions, leading to the discovery of many more three-dimensional KS systems.

In 2011, Arends, Ouaknine, and Wampler proved several interesting properties of KS graphs and leveraged them to computationally establish that a KS system must contain at least 18 vectors [17]. Seven years later, Uijlen and Westerbaan showed that a KS system must have at least 22 vectors [18]. This computational effort used around 300 CPU cores for three months and relied on the nauty software package [61] to exhaustively search for KS graphs. Pavičić, Merlet, McKay, and Megill [57] have studied a variation of the KS problem, one in which each vector is part of a mutually orthogonal triple (or a mutually orthogonal d-tuple in d dimensions). Under this restriction, they show that a KS system must have at least 30 vectors in d = 3 dimensions, and that in d ≥ 4 dimensions the minimum KS system has 18 vectors. However, in three dimensions the gap between the lower and upper bounds of a KS system remains significant, and the minimum size remains unknown.

Another way of measuring the size of a d-dimensional KS system is the number of mutually orthogonal "contexts" (cliques of size d in the orthogonality graph). Lisoněk, Badziąg, Portillo, and Cabello [62] found a six-dimensional KS system with seven contexts and showed this is the simplest possible KS system allowing a symmetry parity proof of the KS theorem. This KS system was later used experimentally by Cañas et al. [6] to perform measurements verified to arise from a quantum system rather than a classical system.

Preliminary versions of the present work were announced at the 2022 SC-Square workshop, at the 2023 Southeastern International Conference on Combinatorics, Graph Theory and Computing, and at CanaDAM 2023. At the former two venues, we presented searches for KS systems with up to 22 vectors, and at CanaDAM 2023 we presented work extending this to a search for KS systems with up to 23 vectors. In each case the searches were exhaustive and no KS systems were found. Thus, a KS system in three dimensions must contain at least 24 vectors. Kirchweger, Peitl, and Szeider [19] completed an independent search for KS systems establishing a lower bound of 24 vectors with an approach similar to SAT+CAS, but using an SMS solver and an alternate definition of canonicity. They do not use OG, as their definition of canonicity does not satisfy property (2) from Section 5, but otherwise the SMS approach is similar in that it combines a SAT solver with a canonical checking routine. Their approach can also be used to generate proof certificates, though they verified only 5% of the certificates in the order 23 search; by contrast, ours formally verifies all results.

4 SAT Encoding of the Minimum KS Problem

As stated earlier, every KS vector system K can be converted into a KS graph GK: each vector in K is assigned to a vertex in GK, and if two vectors are orthogonal, their corresponding vertices are connected. We say a KS graph is minimal if the only subgraph of it that is also a KS graph is itself. Arends, Ouaknine, and Wampler [17] proved that a three-dimensional minimal KS graph must satisfy the following properties:
1. The graph does not contain the 4-cycle graph C4 as a subgraph.
2. Each vertex of the graph has minimum degree 3.
3. Every vertex is part of a 3-cycle (triangle graph) C3.

We encode these three properties, together with the non-010-colorability of the KS graph, in conjunctive normal form (CNF), as described below. Any solution a SAT solver produces for this encoding corresponds to a graph satisfying all four of the above constraints. A simple undirected graph of order n has $\binom{n}{2}$ potential edges, and we represent each edge as a Boolean variable. The edge variable eij is true exactly when the vertices i and j are connected, where 1 ≤ i < j ≤ n. For convenience, we let both eij and eji denote the same variable, since the graphs we consider are undirected. We also use the $\binom{n}{3}$ triangle variables tijk denoting that distinct vertices i, j, and k are mutually connected. In Boolean logic this is expressed as tijk ↔ (eij ∧ eik ∧ ejk), which in conjunctive normal form is expressed via the four clauses ¬tijk ∨ eij, ¬tijk ∨ eik, ¬tijk ∨ ejk, and ¬eij ∨ ¬eik ∨ ¬ejk ∨ tijk. Again, the indices i, j, and k of the variable tijk may be reordered arbitrarily for notational convenience.
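As a concrete illustration, the sketch below numbers the variables in DIMACS style and emits the four definitional clauses for each triangle variable. The particular variable ordering is our assumption for illustration and not necessarily the one used in our actual encoding.

```python
from itertools import combinations

def make_vars(n):
    """Number the C(n,2) edge variables and C(n,3) triangle variables
    in DIMACS style (1-based), edges first, for vertices 1..n."""
    e, t, nxt = {}, {}, 1
    for pair in combinations(range(1, n + 1), 2):
        e[pair] = nxt
        nxt += 1
    for triple in combinations(range(1, n + 1), 3):
        t[triple] = nxt
        nxt += 1
    return e, t

def triangle_definition_clauses(n, e, t):
    """Emit t_ijk <-> (e_ij & e_ik & e_jk) as the four clauses in the text."""
    cnf = []
    for i, j, k in combinations(range(1, n + 1), 3):
        tv, a, b, c = t[i, j, k], e[i, j], e[i, k], e[j, k]
        cnf += [[-tv, a], [-tv, b], [-tv, c], [-a, -b, -c, tv]]
    return cnf
```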
4.1 Encoding the Squarefree Constraint

To encode the property that a KS graph must be squarefree, we construct clauses that prevent the existence of any squares (4-cycles) in the graph. Observe that three distinct squares can be formed on four vertices. Therefore, for every choice of four vertices i, j, k, l, we use the clauses ¬eij ∨ ¬ejk ∨ ¬ekl ∨ ¬eli, ¬eij ∨ ¬ejl ∨ ¬elk ∨ ¬eki, and ¬eil ∨ ¬elj ∨ ¬ejk ∨ ¬eki to encode the fact that a solution produced by the solver must be squarefree. By enumerating all possible choices of four vertices and constructing the above clauses, we force the graph to be squarefree. The total number of clauses used is $3\binom{n}{4}$.

4.2 Encoding the Minimum Degree Constraint

For each vertex i, to ensure that i is connected to at least three other vertices, we take each subset S of {1, ..., i−1, i+1, ..., n} with cardinality n − 3 and construct the clause $\bigvee_{j \in S} e_{ij}$. By enumerating over all such subsets, we enforce a minimum degree of 3 on vertex i: if i had degree at most 2, the clause for a subset S avoiding all of i's neighbors would be falsified, whereas if i has degree at least 3, every such S (which omits only two other vertices) contains a neighbor of i. Constructing similar formulae for all vertices 1 ≤ i ≤ n enforces that every vertex in the graph has degree at least 3. The total number of clauses used is therefore $n\binom{n-1}{n-3} = n\binom{n-1}{2}$.

4.3 Encoding the Triangle Constraint

We encode the property that every vertex is part of a triangle as follows: for each vertex i, we require 2 other distinct vertices to form a triangle with i, and there are $\binom{n-1}{2}$ possible triangles containing i. At least one of those triangles must be present in the KS graph; this is encoded by the clause $\bigvee_{j,k \in S,\, j<k} t_{ijk}$, where S = {1, ..., i−1, i+1, ..., n}. Using this clause for each 1 ≤ i ≤ n ensures that every vertex is part of a triangle, and hence there are n triangle clauses.

4.4 Encoding the Noncolorability Constraint

Recall that the key property of a KS graph is that it is non-010-colorable. As stated earlier, a graph is non-010-colorable if and only if, for every {0, 1}-coloring of the graph, a pair of color-1 vertices is connected or a set of three color-0 vertices is mutually connected. For each {0, 1}-coloring, a KS graph has a set V0 of color-0 vertices and a set V1 of color-1 vertices. Given a specific such coloring, the clause

$\bigvee_{i,j \in V_1,\, i<j} e_{ij} \;\vee \bigvee_{i,j,k \in V_0,\, i<j<k} t_{ijk}$

states that the coloring fails to be a valid 010-coloring of the graph.
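Continuing the illustrative encoder above (and reusing its hypothetical `e` and `t` maps), the following sketch generates the squarefree clauses of Section 4.1 and the blocking clause of Section 4.4 for a single coloring.

```python
from itertools import combinations

def squarefree_clauses(n, e):
    """Section 4.1: forbid the three 4-cycles on every 4-subset {i,j,k,l}."""
    cnf = []
    for i, j, k, l in combinations(range(1, n + 1), 4):
        for cycle in ((i, j, k, l), (i, j, l, k), (i, l, j, k)):
            # Negate the four edges of this 4-cycle.
            cnf.append([-e[tuple(sorted((cycle[m], cycle[(m + 1) % 4])))]
                        for m in range(4)])
    return cnf

def noncolorability_clause(coloring, e, t):
    """Section 4.4: blocking clause for one {0,1}-coloring, given as a dict
    mapping each vertex to its color. The clause demands that some pair in
    V1 is adjacent or some triple in V0 is mutually adjacent."""
    v1 = sorted(v for v, c in coloring.items() if c == 1)
    v0 = sorted(v for v, c in coloring.items() if c == 0)
    return ([e[p] for p in combinations(v1, 2)] +
            [t[tr] for tr in combinations(v0, 3)])
```

The clause returned by `noncolorability_clause` is falsified exactly when the given coloring is a valid 010-coloring, so asserting it for every coloring forces any graph found by the solver to be non-010-colorable.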