diff --git "a/abs_29K_G/test_abstract_long_2405.00718v1.json" "b/abs_29K_G/test_abstract_long_2405.00718v1.json"
new file mode 100644
--- /dev/null
+++ "b/abs_29K_G/test_abstract_long_2405.00718v1.json"
@@ -0,0 +1,460 @@
+{
+    "url": "http://arxiv.org/abs/2405.00718v1",
+    "title": "Can't say cant? Measuring and Reasoning of Dark Jargons in Large Language Models",
+    "abstract": "Ensuring the resilience of Large Language Models (LLMs) against malicious\nexploitation is paramount, with recent focus on mitigating offensive responses.\nYet, the understanding of cant or dark jargon remains unexplored. This paper\nintroduces a domain-specific Cant dataset and CantCounter evaluation framework,\nemploying Fine-Tuning, Co-Tuning, Data-Diffusion, and Data-Analysis stages.\nExperiments reveal LLMs, including ChatGPT, are susceptible to cant bypassing\nfilters, with varying recognition accuracy influenced by question types,\nsetups, and prompt clues. Updated models exhibit higher acceptance rates for\ncant queries. Moreover, LLM reactions differ across domains, e.g., reluctance\nto engage in racism versus LGBT topics. These findings underscore LLMs'\nunderstanding of cant and reflect training data characteristics and vendor\napproaches to sensitive topics. Additionally, we assess LLMs' ability to\ndemonstrate reasoning capabilities. Access to our datasets and code is\navailable at https://github.com/cistineup/CantCounter.",
+    "authors": "Xu Ji, Jianyi Zhang, Ziyin Zhou, Zhangchi Zhao, Qianqian Qiao, Kaiying Han, Md Imran Hossen, Xiali Hei",
+    "published": "2024-04-25",
+    "updated": "2024-04-25",
+    "primary_cat": "cs.CL",
+    "cats": [
+        "cs.CL",
+        "cs.AI"
+    ],
+    "label": "Original Paper",
+    "paper_cat": "LLM AND Reasoning",
+    "gt": "Ensuring the resilience of Large Language Models (LLMs) against malicious\nexploitation is paramount, with recent focus on mitigating offensive responses.\nYet, the understanding of cant or dark jargon remains unexplored. This paper\nintroduces a domain-specific Cant dataset and CantCounter evaluation framework,\nemploying Fine-Tuning, Co-Tuning, Data-Diffusion, and Data-Analysis stages.\nExperiments reveal LLMs, including ChatGPT, are susceptible to cant bypassing\nfilters, with varying recognition accuracy influenced by question types,\nsetups, and prompt clues. Updated models exhibit higher acceptance rates for\ncant queries. Moreover, LLM reactions differ across domains, e.g., reluctance\nto engage in racism versus LGBT topics. These findings underscore LLMs'\nunderstanding of cant and reflect training data characteristics and vendor\napproaches to sensitive topics. Additionally, we assess LLMs' ability to\ndemonstrate reasoning capabilities. Access to our datasets and code is\navailable at https://github.com/cistineup/CantCounter.",
+    "main_content": "1 Introduction

Large Language Models (LLMs), exemplified by ChatGPT [1], redefine information acquisition, communication, and problem-solving [2]. These models are trained on extensive datasets or fine-tuned from pre-existing models, necessitating vast amounts of data. However, LLMs also pose security and ethical concerns, as attackers can exploit their generative capabilities for malicious purposes [3]. Such misuse encompasses disinformation dissemination [4], AI-driven crime [5], privacy breaches [6], and social engineering [7].
Despite efforts by regulators like OpenAI to implement content filters [8], there remains a risk of attackers disguising malicious content using "cant" or "dark jargon": concealed language elements requiring deeper comprehension [9]. LLMs excel in understanding and generating natural language responses, fostering user trust. While research evaluates their efficacy in providing accurate responses [10], little attention has been paid to LLMs' interaction with cant in specific domains. Prior studies often lack depth in understanding the intricacies of cant [11], especially its varied representations in domains like politics and drugs. In this paper, we investigate LLMs' ability to recognize and reason about cant, particularly in domains prone to offensive content like politics and drugs. Despite progress in filtering harmful content, attackers can still exploit cant to evade detection. Understanding LLMs' responses to cant in specific domains is essential for addressing emerging security challenges. Additionally, we assess LLMs' ability to demonstrate reasoning capabilities. ∗Corresponding author: zjy@besti.edu.cn

Research Questions. To address the above issues, in this paper, we evaluate the reasoning abilities of current LLMs involving cant or dark jargon from the following four perspectives:
1. RQ1: Do different types of questions help LLMs understand cant?
2. RQ2: Do different question setups and prompt clues help LLMs understand cant?
3. RQ3: Do different LLMs have the same understanding of the same cant?
4. RQ4: How well do LLMs understand cant in different domains?

CantCounter: Addressing past shortcomings [11], CantCounter is a system crafted to evaluate LLMs' grasp of cant within specific domains. We compile Cant and Scene datasets from various sources to form adversarial texts. These datasets are used to fine-tune the GPT-2 model and to generate Scene fragments for assessing LLM comprehension. Co-Tuning methods align the Cant dataset and Scene fragments, while Data-Diffusion techniques augment and refine the adversarial text. Question-type, sample-learning, and prompt-clue approaches enrich our experiments. Finally, Data-Analysis methods systematically evaluate 1.67 million data points. CantCounter is locally deployable and adaptable to any open-world dialogue system. Its replication has both advantages and drawbacks: it can aid attackers in bypassing LLM classifiers while also facilitating safety filter development. We define "entities" as distinct objects or concepts and "scenes" as related events in specific environments.

Ethical Considerations: CantCounter draws from public datasets such as Reddit [12] and 4chan [13], avoiding direct user interaction. Its misuse nevertheless poses risks; we believe the benefits of CantCounter in addressing LLMs' challenges far outweigh them. LLMs have become a hot topic [14], and we need to fully recognize their potential problems and promote their safer development and application. We caution that this paper may contain sensitive content, including drug- and violence-related examples, which could cause discomfort. Comprehensive data is available upon request; we have only open-sourced part of the dataset.

Contributions. This paper introduces three key contributions:
1.
We present the Cant and Scene datasets, addressing data scarcity in domains like drugs, weapons, and racism, laying groundwork for future large language model assessment.
2. CantCounter, our framework, assesses large language models' understanding of domain-specific cants through four stages: Fine-Tuning for scene fragment generation, Co-Tuning for cross-matching, Data-Diffusion for text expansion, and Data-Analysis for simplifying complex calculations.
3. Our evaluation of CantCounter reveals its efficacy in bypassing security filters of mainstream dialogue LLMs, providing insights into LLM reasoning within specific domains and guiding future research.

2 Background

2.1 Large Language Model Security Issues

ChatGPT, released by OpenAI in November 2022 [1], has undergone upgrades and fine-tuning [15] to prevent harmful content generation. However, users can still provoke negative responses by using specific prompts [16]. Researchers are investigating security risks, including the generation of toxic outputs from benign inputs [17]. Recent studies have shown that attackers can bypass detection by encrypting inputs with methods like Caesar ciphers and exploiting language nuances [18]. This paper proposes a Q&A query approach to evaluate LLMs' reasoning abilities in handling such content.

2.2 Cant

Cant, a specialized language used by social groups for secrecy [19], has gone by names such as argot [20], slang [21], and secret language throughout history. While LLMs excel in traditional cant analysis, understanding criminal cant poses challenges. Criminal groups use innocuous terms to hide illegal activities, and mastering such language is necessary for law enforcement [22]. Our study explores cant in politics, drugs, racism, weapons, and LGBT issues. These cants share ambiguity, indirect messaging, and potential for social harm. Political cant conveys biases, drug cant evades regulation, racism cant reinforces biases, weapons cant enables illegal dealings, and LGBT cant discriminates. Mastering these cants is vital for addressing societal and security concerns.

2.3 Question Answering (Q&A) Task

Dialogue systems fall into task-oriented and non-task-oriented categories. Task-oriented systems serve specific purposes like reservations, while non-task-oriented systems engage in free conversation. Examples include ChatGPT, Bard, ERNIE, and Claude, offering services in entertainment, social interaction, and information retrieval [23]. Question-answering (Q&A) tasks in NLP evaluate language processing capabilities [24], including reading comprehension and logical reasoning. Q&A formats include Abstractive, Yes/No, and Multiple-choice, each requiring specific evaluation metrics [25]. We employ Zero-shot/One-shot learning for testing.

3 CantCounter

3.1 High-level Idea

We observe that the responses generated by LLMs vary with different cants, allowing adversaries to bypass filters or security restrictions. Thus, understanding how LLMs react to different cants is very important. However, exhaustively trying different cant queries with different scenes across numerous domains to find those capable of bypassing LLM restrictions and generating harmful outputs would be time-consuming and impractical. Therefore, we investigate whether adversaries can independently combine different cants and scenes to generate context that is reasonable and coherent, bypassing LLM filters or restrictions.
To this end, we introduce CantCounter, the first evaluation (attack) framework targeting open-world dialogue systems (LLMs).

3.2 Threat Model

We adopt a threat model similar to "Why so toxic" [17], targeting deployed dialogue LLMs like ChatGPT. Firstly, the adversary requires scene data different from the target LLM's training data. Secondly, they interact with the LLM, combining cants and scenarios to extract detectable cants. Finally, they access the victim LLM via CantCounter in a black-box manner, querying it through an API-like interface.

3.3 Dataset

In our study, we extensively gathered cant related to five domains: politics, drugs, racism, weapons, and LGBT. The cant, comprising common and less common usages, holds practical meanings in real life. This Cant dataset forms a robust basis for evaluating the veracity and reliability of LLMs across specific domains. These five areas were chosen to address pressing societal issues impacting fundamental values such as social justice and human rights. Exploring politics, drugs, racism, weapons, and LGBT issues enables LLMs to tackle real-world challenges effectively. While other domains like hacking and fraud are significant, we focused on these five due to data availability and processing feasibility, leaving room for future research on other sensitive topics.

Figure 1: Construction of the Cant dataset involves collecting and summarizing security-related data, forming interconnected relationships between cants into an information network, and establishing the dataset through data classification and categorization, encompassing various domain-related entities (e.g., Donald Trump, Barack Obama, Cocaine, LSD) and their corresponding cants (e.g., "Cheeto Jesus", "Nobama", "Blow", "Microdot").

In constructing the Cant dataset (Figure 2, step ②), we crawled or manually screened multiple sources, including government agency websites [26], online forums like Reddit [12], 4chan [13], and X [27], publicly available datasets from Kaggle [28] and Hugging Face [29], the dark web, and public compilations of cant. This multi-source data encompasses various text types closely related to specific domains. CantCounter utilizes information networks [30] to address redundancy challenges between cants, capturing their interdependency. The Cant dataset covers five domains, totaling 1,778 cants across 187 entities. We randomly selected 53 entities, totaling 692 cants, ensuring even representation across domains and prevalence in the open world. Selected entities and cants were cross-validated with authoritative sources [31, 32, 33, 34, 35] to ensure they are widely present and reflected in publicly accessible information sources. Criteria like content relevance and topic specificity guided information selection and filtering, aiming for transparency and consistency. The resulting high-quality data forms the Scene dataset, laying the groundwork for subsequent simulation scene generation models.
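To make the dataset's shape concrete, the following is a minimal Python sketch of how such entity-to-cant mappings could be represented. The field layout and helper functions are our illustrative assumptions, not the released schema; the example terms are drawn from Figure 1.

```python
# Hypothetical in-memory layout for the Cant dataset: domain -> entity -> cants.
# Field names and helpers are illustrative assumptions, not the released schema.
from typing import Dict, List

cant_dataset: Dict[str, Dict[str, List[str]]] = {
    "politics": {
        "Donald Trump": ["Cheeto Jesus", "Agent Orange", "45"],
        "Barack Obama": ["Nobama", "Odumbo", "Deporter in chief"],
    },
    "drugs": {
        "Cocaine": ["Blow", "C", "White Lady"],
        "LSD": ["Microdot", "L", "Cubes"],
    },
}

def entities(domain: str) -> List[str]:
    """List the entities recorded for a domain."""
    return list(cant_dataset.get(domain, {}))

def cants_of(domain: str, entity: str) -> List[str]:
    """Return all cants mapped to an entity within a domain."""
    return cant_dataset.get(domain, {}).get(entity, [])

print(entities("politics"))          # ['Donald Trump', 'Barack Obama']
print(cants_of("drugs", "Cocaine"))  # ['Blow', 'C', 'White Lady']
```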
During information selection and filtering (Figure 2, step ①), explicit criteria were used to judge relevance and adherence to the study's definitions. Decisions were reached through participatory discussion to mitigate subjectivity and ensure alignment with research objectives. This rigorous process yields a refined dataset for accurate and relevant analysis.

3.4 Pipeline

The CantCounter pipeline (Figure 2) consists of four stages: Fine-Tuning, Co-Tuning, Data-Diffusion, and Data-Analysis, as detailed below.

Figure 2: The pipeline of CantCounter (steps ① through ⑨ across the Fine-Tuning, Co-Tuning, Data-Diffusion, and Data-Analysis stages).

Cant is prevalent in the open world, so we aggregate raw text data from various sources to construct the Cant and Scene datasets (Section 3.3). Although the Cant and Scene datasets provide specific entities and scenes, they may not align well with a domain's requirements. Therefore, in step ③, we fine-tune GPT-2 using the Scene dataset to build five scene generation models for large-scale scenes, tailored to our specific domains. However, the fine-tuned scenes may not match the entities in the Cant dataset. In step ④, we address this issue by using entities from the Cant dataset to constrain the output of the generation model, ensuring scenes closely relate to the cant entities. Next, we conduct semi-automatic screening of the generated simulation scenes to form a set of Scene fragments. While these fragments contain entities, linking them with specific questions requires a further step. Hence, in steps ⑤ and ⑥, we devise the Co-Tuning stage, where Scene fragments cross-match with cants from the Cant dataset to form Fragments. To enable multi-task comparison, we construct detection tests through different combinations of specific domains, question types, learning methods, and prompt clue methods in step ⑦; this completes and diffuses the Fragments to form Q&A-Query datasets. Finally, in steps ⑧ and ⑨, Q&A-Queries are sent to the target model API for completion, and a segmented data statistics algorithm is applied to obtain and analyze the test results in the Data-Analysis stage.

3.5 Stage 1: Fine-Tuning

During the fine-tuning stage, we use the Scene dataset to guide GPT-2 in generating tailored scenarios for specific domains. Although more advanced models like GPT-3.5 and GPT-4 are available, we opt for GPT-2 due to its open-source nature, which gives us better control over training details. The fine-tuning code is publicly accessible for replication. The fine-tuning process is outlined in Algorithm 1 below. The Transformer model [36] forms the basis for GPT-2, featuring encoders and decoders with identical modules. GPT-2 employs a partially masked self-attention mechanism and positional coding to understand sequence relationships. It has been successfully applied in various tasks like AI detection and text summarization. Overall, fine-tuning GPT-2 with the Scene dataset enables the generation of Question-Answer patterns tailored to specific domains, aiding in simulated scene generation tasks.
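As a concrete illustration of this stage, here is a minimal sketch of what the fine-tuning loop formalized in Algorithm 1 below could look like with PyTorch and Hugging Face Transformers. The dataset path, hyper-parameters, and per-line batch construction are illustrative assumptions, not the paper's released configuration.

```python
# Minimal sketch of Algorithm 1 with Hugging Face Transformers. Dataset path,
# hyper-parameters, and batching are assumptions, not the released setup.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device)  # theta_p

# Hypothetical Scene dataset: one scene description per line.
with open("scene_dataset.txt", encoding="utf-8") as f:
    scenes = [line.strip() for line in f if line.strip()]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)  # O(theta), eta
model.train()
for epoch in range(3):  # T passes over the data
    for text in scenes:
        batch = tokenizer(text, return_tensors="pt",
                          truncation=True, max_length=512).to(device)
        # For causal-LM fine-tuning, the inputs double as labels (y).
        out = model(**batch, labels=batch["input_ids"])
        out.loss.backward()   # gradient of L w.r.t. theta_f
        optimizer.step()      # theta_f update
        optimizer.zero_grad()

model.save_pretrained("scene_generation_model")  # theta_f
tokenizer.save_pretrained("scene_generation_model")
```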
Algorithm 1: Fine-Tuning
Input: pre-trained model parameters θp; fine-tuning dataset Df; loss function L(θ) depending on the model parameters θ; optimizer O(θ) for updating θ; learning rate η; number of fine-tuning iterations T. Here x is an input sample, y its label, ŷ the model's prediction, and ∇θ denotes the gradient operation.
Output: fine-tuned model parameters θf
  1: initialize θf = θp
  2: for t = 1, ..., T do
  3:     for (batch_x, batch_y) in Df do
  4:         ŷ = model(batch_x, θf)
  5:         loss = L(ŷ, batch_y)
  6:         gradient = torch.autograd.grad(loss, θf)
  7:         θf = O(θf, gradient, η)
  8: return θf

3.6 Stage 2: Co-Tuning

To handle the many intersecting data processes in CantCounter, we design a Co-Tuning method in which the Cant dataset and Scene fragments work together. Co-Tuning realizes cross-matching between the two and addresses the shortage of detection data. The Cant dataset provides detailed entity information for the generation model; during the Co-Tuning stage, these entities constrain the generative model, making the Scene fragments more consistent and coherent for the needs of a specific domain. Finally, we manually review the results to ensure the relevance of cants to scenes and the distinctiveness of all scenes corresponding to the same cant.

Figure 3: The overall structure and process of Co-Tuning (entities O_i from the Cant dataset constrain the generation models M_p to produce Scene fragments, which are cross-matched with cants ω_ij to form Fragments; i entities correspond to i*j*k scenes).

We formalize this stage as follows. The generation model is denoted M_p (p ∈ [1, 5]), covering the five fine-tuned models M_1, M_2, M_3, M_4, and M_5. As shown in Figure 3, entity O_i denotes the i-th entity (i ∈ [1, 15]) in the Cant dataset, and cant ω_ij denotes the j-th cant of O_i (j ∈ [1, 20]). For example, in the politics domain there are 10 entities used in our experiments, each with twenty cants, so j ranges over [1, 20]. The entity O_i constrains the fine-tuned model M_p's output, and the result of the constraint is a Scene fragment; this corresponds to Eq. (1). The scene S_ik (i ∈ [1, 10], k ∈ [1, 101]) denotes the k-th scene fragment produced when the i-th entity is fed into the fine-tuned model M_p:

S_ik = M_p(O_i)    (1)

Eq. (2) denotes the cross-match of a cant with a Scene fragment, saved in S'_ik:
S'_ik = S_ik ∪ ω_ij    (2)

In Figure 3, there are k orange boxes in the O_1 Scene fragment; these represent the M_p-generated texts containing the Cant dataset's entities. Eq. (2) replaces the entities in the Scene fragments with cants from the Cant dataset. For example, going from the O_1 Scene fragment to Fragment 1, we replace the entities in scenes [S_11, ..., S_1k] with the cant ω_11, forming Fragment 1. By analogy, we build j Fragments in the Co-Tuning stage. Through Co-Tuning we obtain scene fragments related to entities in specific domains that have a high degree of contextual consistency and express various characteristics of the entities in different contexts. At the same time, our fine-tuned model is flexible enough to introduce multiple entities during the generation process and to allow scene fragments to describe the relationships among multiple entities. This stage generates diverse scene fragments. While the scene fragments come from a generative process, the Scene dataset we provide undergoes manual review to mitigate errors in both the generated content and the language used within the experimental environment.

3.7 Stage 3: Data-Diffusion

At this stage, Fragments from the Co-Tuning stage are transformed into Q&A-Queries to enhance interaction with LLMs and diversify evaluation. We employ three diffusion dimensions: two sample-learning techniques, three question types, and four prompt clue methods, so each Fragment generates 2 × 3 × 4 = 24 Q&A-Queries. First, we introduce sample-learning techniques for Zero-shot and One-shot transformations of Fragments. Second, we categorize Fragments into Abstractive, Yes/No, and Multiple-choice question types. Finally, prompts are classified into None-tip, Tip-1, Tip-2, and All-tip categories, considering information retrieval difficulty and situational prompting.

Figure 4: Schematic diagram of Data-Diffusion (each Fragment expands across Zero-shot/One-shot learning, the three question types, and the four prompt clues into 24 Q&A-Queries).

Data-Diffusion significantly increases the number of Q&A-Queries derived from the Fragments, providing diverse test cases for comprehensively evaluating the generation model's performance. This approach promises to establish a diverse database for future research and applications.

3.8 Stage 4: Data-Analysis

As shown in steps ⑧ and ⑨ of Figure 2, step ⑧ sends the data expanded by Data-Diffusion to ChatGPT and other target models, and step ⑨ analyzes the output results of LLMs such as ChatGPT. After completing Data-Diffusion, we submit the generated Q&A-Queries to the LLM API interface, obtaining a large number of results. These results are complex and diverse, including interplaying relationships. Therefore, we devise a data analysis algorithm to yield both numerical results and analyses.
Algorithm 2: Segmented data statistics algorithm
Input: J – the j Fragments from Co-Tuning (Figure 4); K – the k Scenes in a Fragment; tasks – {Abstractive: {AZ, AO}, Yes/No: {JZ, JO}, Multiple-choice: {MZ, MO}}; clues – [None-tip, Tip-1, Tip-2, All-tip]; intervals – [0, 1-10, 11-20, ..., 91-101]
Output: R_{j,t,c}, N_{j,t,c,z}, Sum_t, Sum_PCM
  1: for each j in J do
  2:     for each t in tasks do
  3:         for each c in clues do
  4:             N_{j,t,c} = Σ_{a=1}^{k} i_a   (i_a = 1 if hit, else i_a = 0)
  5:             R_{j,t,c} = N_{j,t,c} / k
  6:             for each z in intervals do
  7:                 if N_{j,t,c} in z then N_{j,t,c,z}++
  8: Sum_t = Σ_{α=1}^{j} Σ_{β=1}^{c} N_{α,t,β}
  9: Sum_PCM = Σ_{α=1}^{j} Σ_{β=1}^{t} N_{α,β,c}
 10: return R_{j,t,c}, N_{j,t,c,z}, Sum_t, Sum_PCM

After the Co-Tuning and Data-Diffusion stages, the test data generated by CantCounter is very complex. Therefore, in the Data-Analysis stage, we implement Algorithm 2 to compute statistics from various angles. During analysis, whenever the entity O_i is modified in the Co-Tuning stage (see Figure 3), Algorithm 2 is invoked accordingly. We analyze the results based on the different tasks: from the Question Type Method (Section 4.1, QTM) and the Sample Learning Method (Section 4.2, SLM) we analyze data features across question types and sample-learning setups to obtain Sum_t, and from the Prompt Clue Method (Section 4.3, PCM) we analyze the data across prompt clues to obtain Sum_PCM. In Algorithm 2, we set the matching conditions, count the fragments, and obtain N_{j,t,c} and the accuracy R_{j,t,c}. We also define eleven intervals (0, 1-10, 11-20, ..., 91-101) to distinguish different feedback and obtain N_{j,t,c,z}. As shown in Algorithm 2, Zero-shot learning, One-shot learning, and the three tasks are combined in one loop. In the Abstractive task, the output is AZ for Zero-shot input and AO for One-shot input; in the Yes/No task, JZ for Zero-shot and JO for One-shot; in the Multiple-choice task, MZ for Zero-shot and MO for One-shot. This logic is integrated into our code, making the pipeline semi-automated.

4 Experimental Design and Results

To explore our research questions, we conducted experiments in CantCounter, outlined sequentially in this section. We examine various question types for RQ1 (Section 4.1), different question setups for RQ2 (Section 4.2), and diverse prompt clues for RQ2 (Section 4.3). We focus primarily on ChatGPT-3.5 (version gpt-3.5-turbo-0613) due to its convenience and wide usage; similar experiments were also conducted with other language models. All experiments were performed on a server equipped with an RTX 3090 Ti GPU. In this section, we quantitatively analyze the use of cants and scenes to bypass LLM filters within the CantCounter framework. We conduct open-world query experiments across five domains: politics, drugs, racism, weapons, and LGBT. Initially setting k to 101, we match 692 cants to 53 entities, resulting in 69,892 scenes. These undergo Data-Diffusion, expanding to 1,677,408 Q&A-Queries. This setup enables a comprehensive analysis of corpus performance and language changes within specific domains.
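To make the scale and the per-cell statistics concrete, the following Python sketch verifies the counts above and mirrors the core of Algorithm 2; the boolean hit inputs are a hypothetical stand-in for our matching conditions applied to LLM responses.

```python
# Sanity-check the reported data scale, then sketch Algorithm 2's per-cell
# statistics. The boolean `hits` are a hypothetical stand-in for the
# matching conditions applied to LLM responses.
CANTS, SCENES_PER_CANT = 692, 101
DIFFUSION = 2 * 3 * 4  # 2 sample setups x 3 question types x 4 prompt clues
assert CANTS * SCENES_PER_CANT == 69_892                  # scenes
assert CANTS * SCENES_PER_CANT * DIFFUSION == 1_677_408   # Q&A-Queries

def segmented_stats(responses, k=101):
    """responses[(j, t, c)] is a list of k booleans (hit or not) for
    Fragment j, task t, clue c. Returns N_{j,t,c}, accuracy R_{j,t,c},
    and the interval bin (0, 1-10, 11-20, ..., 91-101) of each count."""
    N, R, Z = {}, {}, {}
    for key, hits in responses.items():
        n = sum(hits)                  # N_{j,t,c} = sum of i_a
        N[key] = n
        R[key] = n / k                 # R_{j,t,c}
        Z[key] = 0 if n == 0 else min((n - 1) // 10, 9) + 1  # 11 bins
    return N, R, Z

N, R, Z = segmented_stats({(1, "MZ", "All-tip"): [True] * 47 + [False] * 54})
print(N, R, Z)  # N=47, R~0.465, bin 5 (i.e., 41-50 correct answers)
```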
4.1 Question Type Method (QTM)

In the Q&A task, we conduct three types of tasks:
• Abstractive Task: Models generate responses freely, without relying on specific information extraction.
• Yes/No Task: Models provide binary responses, "True" or "False," based solely on the presented question and existing knowledge.
• Multiple-choice Task: Models select the correct answer from a set of options, demonstrating comprehension of semantics and accurate identification.

Table 1 shows that Multiple-choice tasks achieve the highest accuracy (45.38%), while Yes/No tasks have the lowest (22.91%). The finding that ChatGPT performs well on multiple-choice questions is intriguing. In this task, there are five options (A) to (E), with (A) to (D) relevant to a specific domain and (E) set as "I don't know." "Other" signifies an answer unrelated to these options, and (A) is the correct choice. Figure 5 displays the box plot analysis results.

Analyzing the Multiple-choice task results, we find key factors behind its success. Firstly, it offers a set of answers with one correct option and distractors, aiding comprehension. Secondly, its structured format simplifies the process of eliminating incorrect options, improving accuracy. Lastly, the inclusion of an "I don't know" option enhances accuracy in uncertain situations.

Figure 5: The vertical axis refers to the number of correct answers under the four tips; the total number is 404. (A) and (E) stand out in Multiple-choice, being the correct answer and "I don't know," respectively.

After carefully studying ChatGPT-3.5's interpretation of option (E), we find that when the context is ambiguous or the entities in the implicit context are rare, ChatGPT-3.5's accuracy drops significantly; that is, it prefers option (E). We also explore the low accuracy in the Yes/No task. Comparing ChatGPT-3.5's "False" answers with the Multiple-choice task data, we find they often correspond to option (E) and to incorrect choices in the Multiple-choice task, where the explicit options provide clarity. Additionally, differences in response styles and keyword detection criteria affect ChatGPT-3.5's performance across Abstractive and Yes/No tasks, where Yes/No tasks restrict responses to "True" or "False." Overall, our analysis highlights how different Q&A types affect ChatGPT-3.5's accuracy in specific domains, with Multiple-choice tasks showing higher performance. Further research is needed to improve ChatGPT-3.5's accuracy and adaptability in these domains.

4.2 Sample Learning Method (SLM)

In our experiments, we explore two sample setups: Zero-shot and One-shot learning.
• Zero-shot learning: No examples are provided in the prompt, which only includes instructions and questions.
• One-shot learning: The prompt includes an example relevant to the discussion, consisting of a sample message and user information.

Zero-shot learning involves a single user message, while One-shot learning processes a sample message and a user message, as sketched below. These methods help us understand LLM performance under different sample-learning approaches and reveal inference capabilities in information-poor settings.
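For illustration, a minimal sketch of how the two setups could be assembled as chat-style API messages follows; the instruction wording and the worked example are invented placeholders, not items from our dataset.

```python
# Sketch of Zero-shot vs. One-shot Q&A-Query construction for a chat-style
# API. Scene text, instruction wording, and the worked example are invented
# placeholders, not items from the released dataset.
from typing import Dict, List, Optional, Tuple

def build_messages(scene: str, question: str,
                   example: Optional[Tuple[str, str]] = None) -> List[Dict[str, str]]:
    """Zero-shot: instruction + question only; One-shot: prepend one worked example."""
    messages = [{"role": "system",
                 "content": "Read the scene and answer the question."}]
    if example is not None:  # One-shot learning adds a sample exchange
        ex_query, ex_answer = example
        messages.append({"role": "user", "content": ex_query})
        messages.append({"role": "assistant", "content": ex_answer})
    messages.append({"role": "user", "content": f"{scene}\n{question}"})
    return messages

question = "Question: Who is being described? Options: (A) ... (E) I don't know."
zero_shot = build_messages("Scene: ...", question)
one_shot = build_messages("Scene: ...", question,
                          example=("Scene: ...\n" + question, "(A)"))
```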
Further investigation uncovers the model's learning patterns and effects in specific domains; default hyper-parameter settings are used to avoid extensive tuning. In this section, we explore how Zero-shot and One-shot learning methods affect LLM accuracy in recognizing cant scenes for RQ2. Traditionally, One-shot learning often outperforms Zero-shot learning due to the additional available data [37]. However, our cross-domain analysis, depicted in Figure 6 and reflected in Table 1 (SLM columns), reveals a trend favoring Zero-shot learning overall, and this trend varies by domain.

Figure 6: Comparison of the number of correct answers under Zero-shot learning and One-shot learning in different domains.

In the politics domain, One-shot learning performs better due to ample data and contextual understanding. Conversely, in the LGBT domain, Zero-shot learning outperforms One-shot learning due to limited publicly available examples. One-shot learning aids ChatGPT-3.5 in better contextual comprehension of sensitive topics, but it may also introduce biases, leading to lower overall accuracy in specific domains. Similar analyses across the other domains yield consistent results.

4.3 Prompt Clue Method (PCM)

In this part of the study, the purpose of CantCounter is to explore the impact of different clues on LLM recognition and reasoning abilities. To this end, we experiment with four different clue settings:
• None-tip: Keeps the original prompt, without any additional clues.
• Tip 1: Adds one relevant tip to the "None-tip" prompt. For example, when describing Trump's cant, we can add the clue "politician" in the political domain to make the prompt more directional.
• Tip 2: Adds another relevant tip to the "None-tip" prompt. For example, when describing Trump's cant, we can add the clue "United States" in the political domain to enrich the prompt content.
• All-tip: Adds both Tip 1 and Tip 2 to the "None-tip" prompt; for example, when describing Trump's cant, we add both "politician" and "United States" in the political domain to make the prompt more specific.

By observing the effects of these different clues on LLMs, CantCounter can assess the fluctuations they induce in recognition and reasoning abilities. This study helps us further understand the influence of clues on LLMs and provides directions for improving their application and performance. To answer RQ2, Table 1 displays ChatGPT-3.5's accuracy across the five domains under different prompt clues. Generally, more clue-related information improves recognition accuracy, as seen in the political domain, where All-tip prompts perform significantly better. However, adding clues does not always lead to higher accuracy, possibly due to information redundancy or the triggering of LLM filters. Too many clues may reduce accuracy, and in the LGBT domain Tip 1 prompts were even less accurate than None-tip prompts. Our analysis stresses the importance of a balanced clue-selection approach that maximizes the use of external information without compromising accuracy. Thus, choosing appropriate clues in moderate quantities is key to enhancing ChatGPT-3.5's domain-specific performance.

Table 1: CantCounter stats highlight top performances (%): Multiple-choice excels in QTM (45.38%), Zero-shot learning shines in SLM (52.13%), and All-tip prevails in PCM (29.11%).
"A" = Abstractive, "Y/N" = Yes/No, "Mc" = Multiple-choice, "Zs" = Zero-shot, "Os" = One-shot, "NT" = None-tip, "T1" = Tip 1, "T2" = Tip 2, "AllT" = All-tip.

          |        QTM          |     SLM     |           PCM
Domain    |   A     Y/N    Mc   |   Zs    Os  |   NT    T1    T2    AllT
Politics  | 26.81  22.55  50.64 | 42.85 57.15 | 19.01  24.75  25.19  31.05
Drugs     | 21.16  22.41  56.43 | 55.41 44.59 | 17.32  27.43  25.47  29.78
Racism    | 29.05  27.60  43.35 | 41.39 58.61 | 11.22  19.63  37.50  31.66
Weapons   | 50.89  16.20  32.91 | 54.96 45.04 | 18.73  28.11  25.27  27.90
LGBT      | 34.41  25.75  39.84 | 59.78 40.22 | 22.58  22.10  28.53  26.79
Total     | 31.71  22.91  45.38 | 52.13 47.87 | 19.03  24.61  27.24  29.11

Table 2: Zero-shot learning and One-shot learning Q&A accuracy in CantCounter for ChatGPT-3.5, GPT-4, Bard, New Bing, and SparkDesk (%). "Acc" = accuracy rate, "Rej" = rejection rate, "Don't know" = "I don't know" rate.

                |   Zero-shot learning      |   One-shot learning
                |  Acc    Rej   Don't know  |  Acc    Rej   Don't know
ChatGPT-3.5 [1] | 47.61   4.66    39.91     | 45.52   1.63    46.45
GPT-4 [38]      | 27.27   0.00    70.45     | 50.00   0.00    34.09
Bard [39]       | 47.73   4.55    13.64     | 65.91  15.91     6.82
New Bing [40]   | 50.00  11.36    34.09     | 50.00  36.36     2.27
SparkDesk [41]  | 29.55  45.45     9.09     | 20.45  68.18     2.27

4.4 Comparison with other LLMs

In our study, we examine several LLMs alongside ChatGPT-3.5 to address RQ3, including GPT-4 [38], New Bing [40], Bard [39], Claude [42], ERNIE [43], and SparkDesk [41]. Since ERNIE is optimized for Chinese content, translating cant prompts may compromise their subtlety and effectiveness; moreover, ERNIE's frequent account suspensions hindered extensive trials [44]. Claude's sensitive-content handling also led to account suspensions [42]. Thus, we focus on comparing and validating four other LLMs: GPT-4, Bard, New Bing, and SparkDesk. Table 2 presents the ratios of correct answers, refused answers, and "I don't know" responses. Interestingly, GPT-4 consistently responds in all situations, never refusing to answer. This contrasts with the other models, which often refuse to respond due to content filtering. GPT-4's tendency to answer "I don't know" may stem from our controlled comparisons in the QTM and PCM methods, particularly in Multiple-choice scenarios. Conversely, the other LLMs tend to refuse to answer, likely because filters and classifiers categorize the content. SparkDesk exhibits the highest refusal rate, possibly due to overly strict filters. Furthermore, One-shot learning queries are more prone to refusals, as they rely on context understanding and can thereby trigger filters. These findings offer insights into the performance of these LLMs across different learning tasks, informing future research directions.

4.5 Takeaways

We observe varying accuracy across different Q&A-Query types (RQ1), with Multiple-choice tasks being the most accurate and Yes/No tasks the least. In sensitive domains, Zero-shot learning performs better overall than One-shot learning (RQ2). Adding prompt clues generally improves cant identification accuracy (RQ2). More recent LLM versions are more willing to answer cant queries rather than refuse (RQ3), and LLMs are more reluctant to engage with racism-related questions than with LGBT-related ones (RQ4).
5", + "additional_graph_info": { + "graph": [ + [ + "Xu Ji", + "Jianyi Zhang" + ], + [ + "Jianyi Zhang", + "Yiran Chen" + ], + [ + "Jianyi Zhang", + "Changyou Chen" + ], + [ + "Jianyi Zhang", + "Ruiyi Zhang" + ], + [ + "Jianyi Zhang", + "Zhiqiang Wang" + ] + ], + "node_feat": { + "Xu Ji": [ + { + "url": "http://arxiv.org/abs/2405.00718v1", + "title": "Can't say cant? Measuring and Reasoning of Dark Jargons in Large Language Models", + "abstract": "Ensuring the resilience of Large Language Models (LLMs) against malicious\nexploitation is paramount, with recent focus on mitigating offensive responses.\nYet, the understanding of cant or dark jargon remains unexplored. This paper\nintroduces a domain-specific Cant dataset and CantCounter evaluation framework,\nemploying Fine-Tuning, Co-Tuning, Data-Diffusion, and Data-Analysis stages.\nExperiments reveal LLMs, including ChatGPT, are susceptible to cant bypassing\nfilters, with varying recognition accuracy influenced by question types,\nsetups, and prompt clues. Updated models exhibit higher acceptance rates for\ncant queries. Moreover, LLM reactions differ across domains, e.g., reluctance\nto engage in racism versus LGBT topics. These findings underscore LLMs'\nunderstanding of cant and reflect training data characteristics and vendor\napproaches to sensitive topics. Additionally, we assess LLMs' ability to\ndemonstrate reasoning capabilities. Access to our datasets and code is\navailable at https://github.com/cistineup/CantCounter.", + "authors": "Xu Ji, Jianyi Zhang, Ziyin Zhou, Zhangchi Zhao, Qianqian Qiao, Kaiying Han, Md Imran Hossen, Xiali Hei", + "published": "2024-04-25", + "updated": "2024-04-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "main_content": "Introduction Large Language Models (LLMs), exemplified by ChatGPT[1], redefine information acquisition, communication, and problem-solving[2]. These models are trained on extensive datasets or fine-tuned from pre-existing models, necessitating vast amounts of data. However, LLMs also pose security and ethical concerns as attackers can exploit their generative capabilities for malicious purposes [3]. Such misuse encompasses disinformation dissemination [4], AI-driven crime [5], privacy breaches [6], and social engineering [7]. Despite efforts by regulators like OpenAI to implement content filters [8], there remains a risk of attackers disguising malicious content using \u201ccant\u201d or \u201cdark jargon\u201d concealed language elements requiring deeper comprehension [9]. LLMs excel in understanding and generating natural language responses, fostering user trust. While research evaluates their efficacy in providing accurate responses [10], little attention has been paid to LLMs\u2019 interaction with cant in specific domains. Prior studies often lack depth in understanding the intricacies of cant [11], especially its varied representations in domains like politics and drugs. In this paper, we investigate LLMs\u2019 ability to recognize and reason about cant, particularly in domains prone to offensive content like politics and drugs. Despite progress in filtering harmful content, attackers can still exploit cant to evade detection. Understanding LLMs\u2019 response to cant in specific domains is essential for addressing emerging security challenges. Additionally, we assess LLMs\u2019 ability to demonstrate reasoning capabilities. \u2217Corresponding author: zjy@besti.edu.cn arXiv:2405.00718v1 [cs.CL] 25 Apr 2024 \fCan\u2019t say cant? 
Measuring and Reasoning of Dark Jargons in Large Language Models A PREPRINT Research Questions. To address the above issues, in this paper, we evaluate the reasoning abilities of current LLMs involving cant or dark jargon from the following four perspectives: 1. RQ1: Do different types of questions help LLM understand the cant? 2. RQ2: Do different question setups and prompt clues help LLM understand cant? 3. RQ3: Do different LLMs have the same understanding of the same cant? 4. RQ4: How well does LLM understand cant in different domains? CantCounter: Addressing past shortcomings[11], CantCounter is a system crafted to evaluate LLM\u2019s grasp of cant within specific domains. We compile Cant and Scene datasets from various sources to form adversarial texts. These datasets fine-tune the GPT-2 model and generate Scene fragments for assessing LLM comprehension. Co-Tuning methods align the Cant dataset and Scene fragments, while Data-Diffusion techniques augment and refine adversarial text. Employing Type, Sample learning, and Clue approaches enrich our experiments. Finally, Data-Analysis methods systematically evaluate 1.67 million data points. CantCounter is locally deployable and adaptable to any open-world dialogue system. Its replication has both advantages and drawbacks, aiding attackers in bypassing LLM classifiers while facilitating safety filter development. We define \u201centities\u201d as distinct objects or concepts and \u201cscenes\u201d as related events in specific environments. Ethical Considerations: CantCounter draws from public datasets such as Reddit [12] and 4chan [13], avoiding direct user interaction. However, its misuse poses risks, despite its benefits in addressing LLM\u2019s challenges. Despite these potential risks, we believe that the benefits of CantCounter far outweigh the risks. LLM has become a hot topic [14], and we need to fully recognize the potential problems of LLM and promote its safer development and application. We caution that this paper may contain sensitive content, including drug and violence-related examples, which could cause discomfort. Comprehensive data is available upon request. We have only open sourced part of the dataset. Contributions. This paper introduces three key contributions: 1. We present the Cant and Scene datasets, addressing data scarcity in domains like drugs, weapons, and racism, laying groundwork for future large language model assessment. 2. CantCounter, our framework, assesses large language models\u2019 understanding of domain-specific cants through four stages: Fine-Tuning for scene fragment generation, Co-Tuning for cross-matching, DataDiffusion for text expansion, and Data-Analysis for simplifying complex calculations. 3. Our evaluation of CantCounter reveals its efficacy in bypassing security filters of mainstream dialogue LLMs, providing insights into LLM reasoning within specific domains and guiding future research. 2 Background 2.1 Large Language Model Security Issues ChatGPT, developed by OpenAI in November 2022 [1], has undergone upgrades and fine-tuning [15] to prevent harmful content generation. However, users can still provoke negative responses by using specific prompts [16]. Researchers are investigating security risks, including the generation of toxic outputs from benign inputs [17]. Recent studies have shown that attackers can bypass detection by encrypting inputs with methods like Caesar ciphers and exploiting language nuances [18]. 
This paper proposes a Q&A query approach to evaluate LLMs\u2019 reasoning abilities in handling such content. 2.2 Cant Cant, a specialized language used by social groups for secrecy [19], varies in names like argot [20], slang [21], and secret language across history. While LLMs excel in traditional cant analysis, understanding criminal cant poses challenges. Criminal groups use innocuous terms to hide illegal activities, necessitating mastery for law enforcement [22]. Our study explores cant in politics, drugs, racism, weapons, and LGBT issues. These cants share ambiguity, indirect messaging, and potential for social harm. Political cant conveys biases, drug cant evades regulation, racism cant reinforces biases, weapons cant enables illegal dealings, and LGBT cant discriminates. Mastering these cants is vital for addressing societal and security concerns. 2.3 Question Answering (Q&A) Task Dialogue systems fall into task-oriented and non-task-oriented categories. Task-oriented systems serve specific purposes like reservations, while non-task-oriented systems engage in free conversation. Examples include ChatGPT, Bard, ERNIE, and Claude, offering services in entertainment, social interaction, and information retrieval [23].Question-answering (Q&A) tasks in NLP evaluate language processing capabilities [24], including reading comprehension and logical reasoning. Q&A formats include abstractive, Yes/No, and Multiple-Choice, each requiring specific evaluation metrics [25]. We employ Zero-shot/One-shot learning for testing. 2 \fCan\u2019t say cant? Measuring and Reasoning of Dark Jargons in Large Language Models A PREPRINT 3 CantCounter 3.1 High-level Idea We observe that the responses generated by LLMs vary with different cants, allowing adversaries to bypass filters or security restrictions. Thus, understanding how LLMs react to different cants is very important. However, exhaustively trying different cants queries with different scenes across numerous domains to find those capable of bypassing LLM restrictions and generating harmful outputs would be time-consuming and impractical. Therefore, we investigate whether adversaries can independently combine different cants and scenes to generate context that is reasonable and coherent, bypassing LLM filters or restrictions. To this end, we introduce CantCounter, the first evaluation (attack) framework targeting open-world dialogue systems (LLM). 3.2 Threat Model We adopt a threat model similar to \u201cWhy so toxic\u201d [17], targeting deployed dialogue LLMs like ChatGPT. Firstly, the adversary requires scene data different from the target LLM\u2019s training data. Secondly, they interact with the LLM, combining cants and scenarios to extract detectable cants. Finally, they access the victim LLM via CantCounter in a black-box manner, querying it through an API-like interface. 3.3 Dataset In our study, we extensively gathered cant related to five domains: politics, drugs, racism, weapons, and LGBT. The cant, comprising common and less common usages, holds practical meanings in real life. This Cant dataset forms a robust basis for evaluating the veracity and reliability of LLMs across specific domains. These five areas were chosen to address pressing societal issues impacting fundamental values such as social justice and human rights. Exploration of politics, drugs, racism, weapons, and homosexuality enables LLMs to tackle real-world challenges effectively. 
While other domains like hacking and fraud are significant, we focused on these due to data availability and processing feasibility, leaving room for future research on sensitive topics. Donald Trump Cheeto Jesus Agent Orange ... 45 Barack\u00a0 Obama Nobama Odumbo ... Deporter\u00a0 in chief Cocaine Blow C ... White Lady LSD Microdot L ... Cubes gun Rod Heater ... Cuete Sailor Geardo fobbit ... Cage Kicker Jews Crikey German Candle\u00a0 ... Penny Chaser Asians Bananas Sleepwalker ... Tiny gay man chicken queen ... \u00a0silver fox lesbins baby butch\u00a0 kitty puncher ... pillow\u00a0 princess Politics Drugs Weapons Racism LGBT ... ... ... ... ... Data Collection Data Classification\u00a0 & Categories Domain Platform Object Style Type Abjectives \u00a0Information Network Data and Social Forum Community Dark Web Figure 1: Construction of the Cant dataset involves collecting, summarizing security-related data, forming interconnected relationships between cants into an information network, and establishing the dataset through data classification and categorization, encompassing various domain-related entities and their corresponding cants. In constructing the Cant dataset (Figure 2 2 \u20dd), we crawled or manually screened multiple sources, including government agency websites [26], online forums like Reddit [12], 4chan [13], and X [27], publicly available datasets from Kaggle [28] and Hugging Face [29], dark web, and public compilations of cant. Multi-source data encompasses various text types closely related to specific domains. CantCounter utilizes information networks [30] to address redundancy challenges between cants, capturing their interdependency. The Cant dataset covers five domains, totaling 1,778 cants across 187 entities. We randomly selected 53 entities, totaling 692 cants, ensuring even representation across domains and prevalence in the open world. Selected entities and cants were cross-validated with authoritative sources [31, 32, 33, 34, 35] to ensure wide presence and reflection in publicly accessible information sources. Criteria like content relevance and topic specificity guided information selection and filtering, aiming for transparency and consistency. The resulting high-quality data forms the Scene dataset, laying the groundwork for subsequent simulation scene generation models. During information selection and filtering (Figure 2 1 \u20dd), explicit criteria were used to judge relevance and adherence to study definitions. Decisions were reached through participatory discussion to mitigate subjectivity and ensure alignment with research objectives. This rigorous process yields a refined dataset for accurate and relevant analysis. 3 \fCan\u2019t say cant? Measuring and Reasoning of Dark Jargons in Large Language Models A PREPRINT 3.4 Pipeline The CantCounter pipeline (Figure 2) consists of four stages: Fine-Tuning, Co-Tuning, Data-Diffusion, and DataAnalysis, as detailed below. GPT-2 Scene Dataset Scene Generation Model Scene fragments Multi-Source Data Cant Dataset Q&A-Queries\u00a0 Analysis Target Model Cross Matching Constrain \u2463 \u2464 \u2465 \u2462 \u2466 \u2467 \u2468 \u2461 \u2460 Fine-Tuning Co-Tuning Data-Diffusion Data-Analysis Figure 2: The pipeline of CantCounter. Cant is prevalent in the open world, so we aggregate raw text data from various sources to construct Cant and Scene datasets (Section 3.3). Although Cant and Scene datasets provide specific entities and scenes, they may not align well with the domain\u2019s requirements. 
Therefore, in Stage 3 \u20dd, we fine-tune GPT-2 using the Scene dataset to build five scene generation models for large-scale scenes, tailored to our specific domains. However, the fine-tuned scenes may not match the entities in the Cant dataset. In Stage 4 \u20dd, we address this issue by using entities from the Cant dataset to constrain the output of the generated model, ensuring scenes closely relate to the cant entities. Next, we conduct semi-automatic screening of the generated simulation scenes to form a set of Scene fragments. While these fragments contain entities, linking them with specific questions requires a method we have not yet discovered. Hence, in Steps 5 \u20dd6 \u20dd, we devise the Co-Tuning stage, where Scene fragments cross-match with cants from the Cant dataset to form Fragments. To enable multi-task comparison, we construct detection tests through different combinations of specific domains, question types, learning methods, and prompt clue methods in Stage 7 \u20dd. This completes and diffuses Fragments to form Q&A-Query datasets. Finally, in Stages 8 \u20dd9 \u20dd, Q&A-Queries are sent to the target model API for completion, and a segmented data statistics algorithm is applied to obtain and analyze test results, conducting analyses in the Data-Analysis stage. 3.5 Stage 1: Fine-Tuning During the fine-tuning stage, we use the Scene dataset to guide GPT-2 in generating tailored scenarios for specific domains. Despite more advanced models like GPT-3.5 and GPT-4 being available, we opt for GPT-2 due to its open-source nature, facilitating better control over training details. The fine-tuning code is publicly accessible for replication. The fine-tuning process is outlined in Algorithm 1. The Transformer model [36] forms the basis for GPT-2, featuring encoders and decoders with identical modules. GPT2 employs a partially masked self-attention mechanism and positional coding to understand sequence relationships. It has been successfully applied in various tasks like AI detection and text summarization. Overall, GPT-2\u2019s fine-tuning with the Scene dataset enables the generation of Question-Answer patterns tailored to specific domains, aiding in simulated scene generation tasks. 3.6 Stage 2: Co-Tuning To solve the problem of many intersecting data processes in CantCounter, we use the Cant dataset and Scene fragments to collaborate and design a Co-Tuning method. Co-Tuning realizes the generation and collaboration of cross-matching and solves the problem of detection data insufficiency. The Cant dataset provides detailed entity information for the generated model. The entities could constrain the generative model and make the Scene Fragments more consistent and coherent in the need for a specific domain during the Co-Tuning stage. In the end, we also manually review the results to ensure the relevance of cants to scenes and the distinctiveness of all scenes corresponding to the same cant. In this paper, we design formulas in the Co-Tuning to mathematically represent this part of the stage. The generation model is specified as Mp(p \u2208[1, 5]), and it includes five fine-tuned models, which are denoted as M1, M2, M3, M4, and M5. As shown in Figure 3, entity Oi represents the i-th entity (i \u2208[1, 15]) in the Cant dataset, and cant \u03c9ij represents the j-th cant of Oi (j \u2208[1, 20]). For example, in the case of the politics domain, there are 10 entities used in our 4 \fCan\u2019t say cant? 
Measuring and Reasoning of Dark Jargons in Large Language Models A PREPRINT Algorithm 1: Fine-Tuning Input: pre-trained model parameters \u03b8p, fine-tuned dataset Df, loss function L(\u03b8) that depends on the model parameters \u03b8, optimizer O(\u03b8) for updating the parameters \u03b8, learning rate \u03b7, number of iterations T for fine-tuning, x is the input sample, y is the label, \u02c6 y is the model\u2019s predicted value, and \u2207\u03b8 denotes the gradient operation Output: fine-tuned model parameters \u03b8f 1 Initialization \u03b8f = \u03b8p 2 for each t in range(1, T + 1) do 3 for batchx, batchy in Df do 4 \u02c6 y = model(batchx, \u03b8f); 5 loss = L(\u02c6 y, batchy); 6 gradient = torch.autograd.grad(loss,\u03b8f) ; 7 \u03b8f = O(\u03b8f, gradient, \u03b7); 8 return \u03b8f entity \ufffd1 . . . ... ... entity \ufffd2 entity \ufffd\ufffd \ufffd\ufffd Generation Models GPT-2 Scene Datasets FineTuning cant \ufffd1\ufffd cant \ufffd11 cant \ufffd12 ... cant \ufffd\ufffd\ufffd cant \ufffdi1 cant \ufffd\ufffd2 ... ... k scenes \ufffd11 \u2019 . . . ... \ufffd12 \u2019 ... \ufffd1\ufffd \u2019 ... ... \ufffd\ufffd2 \u2019 ... \ufffd\ufffd\ufffd \u2019 . . . . . . . . . Fragment 1 ... \ufffd\ufffd1 \u2019 k scenes Fragment 2 k scenes Fragment j (belong to \ufffd1) (belong to \ufffd\ufffd) . . . . . . Fragments (i entities correspond to i*j*k scenes) Fragment 1 Fragment 2 Fragment j Fragments belonging to \ufffd1 Fragments belonging to \ufffd\ufffd (Scene fragment) (Scene fragment) Scene \ufffd11 Scene \ufffd12 Scene \ufffd1\ufffd Scene \ufffd\ufffd1 Scene \ufffd\ufffd\ufffd Scene \ufffd\ufffd2 (Cant Dataset) Cross-Matching Figure 3: The overall structure and process of Co-Tuning. experiments, each entity has twenty cants, j is taken as [1, 20]. The entity Oi can constrain the fine-tuned model Mp\u2019s output, and the result of the constraint is the Scene fragment; this part corresponds to Eq. (1). The Scene is Sik (i \u2208[1, 10], k \u2208[1, 101]). The Scene Sik represents the k-th scene fragment (i \u2208[1, 10], k \u2208[1, 101]) that the i-th entity enters into the output of the fine-tuning model (Mp). Sik = Mp(Oi) (1) Eq. (2) denotes the cross-match of Cant and Scene fragment and was saved in S \u2032 ik. S \u2032 ik = Sik \u222a\u03c9ij (2) There are k orange boxes in the O1 Scene fragment. These orange boxes represent the Mp-generated text containing the Cant dataset\u2019s entities. The function of Eq. 2 is to replace the entities in the Scene fragments with cant in the Cant dataset. As shown in Figure 3, for example, from O1 Scene fragment to Fragment 1. We replace entities in Scene [S11, S1k] with the cant (\u03c911), forming Fragment 1. By analogy, we built j Fragments in the Co-Tuning stage. In the Co-Tuning stage, we can obtain scene fragments related to entities in specific domains that have a high degree of context consistency and express various characteristics of the entities in different contexts. At the same time, our finetuned model is flexible enough to introduce multiple entities during the generation process and allow scene fragments to describe the relationships among multiple entities. This stage generates diverse scene fragments. While the scene fragments are generated through a generative process, the Scene dataset we provide undergoes manual review to mitigate errors in both the generated content and the language utilized within the experimental environment. 
3.7 Stage 3: Data-Diffusion

At this stage, the Fragments from the Co-Tuning stage are transformed into Q&A-Queries to enrich the interaction with the LLM and diversify the evaluation. We employ three diffusion dimensions: two sample learning techniques, three question types, and four prompt clue methods, so each Fragment generates 2 × 3 × 4 = 24 Q&A-Queries. First, we apply the sample learning techniques, producing zero-shot and one-shot transformations of each Fragment. Second, we cast each Fragment into the Abstractive, Yes/No, and Multiple-choice question types. Finally, prompts are classified into the None-tip, Tip-1, Tip-2, and All-tip categories, reflecting information retrieval difficulty and situational prompting.

[Figure 4: Schematic diagram of Data-Diffusion. Each Fragment is expanded along the sample learning (Zero-shot/One-shot), question type (Abstractive, Yes/No, Multiple-choice), and prompt clue (None-tip, Tip-1, Tip-2, All-tip) dimensions into 24 Q&A-Queries.]

By extending the Fragments, Data-Diffusion significantly increases the number of Q&A-Queries, providing diverse test cases for a comprehensive evaluation of the target model's performance; it also promises to establish a diverse database for future research and applications.

3.8 Stage 4: Data-Analysis

As shown by steps ⑧ and ⑨ in Figure 2, step ⑧ sends the data expanded by Data-Diffusion to ChatGPT and the other target models, while step ⑨ analyzes the outputs of these LLMs. After completing Data-Diffusion, we submit the generated Q&A-Queries to the LLM API and obtain a large volume of results. These results are complex and diverse, with many interrelated factors, so we devise a data-analysis algorithm that yields both numerical and analytical outcomes.

Algorithm 2: Segmented data statistics algorithm
Input: J: the j Fragments from Co-Tuning (Figure 4); K: the k Scenes in a Fragment; tasks: [Abstractive: {AZ, AO}, Yes/No: {JZ, JO}, Multiple-choice: {MZ, MO}]; clues: [None-tip, Tip-1, Tip-2, All-tip]; intervals: [0, 1-10, 11-20, ..., 91-101]
Output: R_{j,t,c}, N_{j,t,c,z}, Sum_t, Sum_PCM
1: for each j in J do
2:   for each t in tasks do
3:     for each c in clues do
4:       N_{j,t,c} = Σ_{a=1}^{k} i_a  (i_a = 1 if the a-th scene is a hit, else i_a = 0)
5:       R_{j,t,c} ← N_{j,t,c} / k
6:       for each z in intervals do
7:         if N_{j,t,c} in z then N_{j,t,c,z}++
8: Sum_t = Σ_{α=1}^{j} Σ_{β=1}^{c} N_{α,t,β}
9: Sum_PCM = Σ_{α=1}^{j} Σ_{β=1}^{t} N_{α,β,c}
10: return R_{j,t,c}, N_{j,t,c,z}, Sum_t, Sum_PCM

After the Co-Tuning and Data-Diffusion stages, the test data generated by CantCounter is highly complex, so in the Data-Analysis stage we implement Algorithm 2 to compute statistics from multiple angles. During analysis, whenever the entity O_i is modified in the Co-Tuning stage (see Figure 3), Algorithm 2 is invoked accordingly, and we analyze the results by task. We derive data features from the Question Type Method (Section 4.1, QTM) and the Sample Learning Method (Section 4.2, SLM), i.e., across different question types and sample learning setups, to obtain Sum_t; and we analyze the data across different prompt clues from the Prompt Clue Method (Section 4.3, PCM) to obtain Sum_PCM. In Algorithm 2, we set the matching conditions, count the matching fragments, and obtain N_{j,t,c} and the accuracy R_{j,t,c}. We also define eleven intervals (0, 1-10, 11-20, ..., 91-101) to distinguish different feedback levels and obtain N_{j,t,c,z}. As shown in Algorithm 2, Zero-shot learning, One-shot learning, and the three tasks are combined in one loop: in the Abstractive task, the output is AZ for Zero-shot input and AO for One-shot input; in the Yes/No task, JZ for Zero-shot and JO for One-shot; and in the Multiple-choice task, MZ for Zero-shot and MO for One-shot. These procedures are integrated into our code, making the pipeline semi-automated.
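A compact Python sketch of Algorithm 2 is given below; the nested response dictionaries and the is_hit predicate are hypothetical stand-ins for the matching conditions described above.

# Sketch of the segmented data statistics in Algorithm 2 (names hypothetical).
from collections import defaultdict

TASKS = ["AZ", "AO", "JZ", "JO", "MZ", "MO"]        # task x sample-learning codes
CLUES = ["None-tip", "Tip-1", "Tip-2", "All-tip"]
INTERVALS = [(0, 0)] + [(a, a + 9) for a in range(1, 82, 10)] + [(91, 101)]  # 11 bins

def is_hit(answer: str) -> bool:                    # hypothetical matching condition
    return "correct" in answer.lower()

def analyze(responses, k=101):
    # responses[j][t][c] is a list of k model answers for fragment j, task t, clue c
    N, R = {}, {}
    hist = defaultdict(int)                         # N_{j,t,c,z}
    sum_t = defaultdict(int)                        # Sum_t over fragments and clues
    sum_pcm = defaultdict(int)                      # Sum_PCM over fragments and tasks
    for j, per_task in responses.items():
        for t in TASKS:
            for c in CLUES:
                hits = sum(1 for ans in per_task[t][c] if is_hit(ans))  # N_{j,t,c}
                N[(j, t, c)] = hits
                R[(j, t, c)] = hits / k             # accuracy R_{j,t,c}
                for z, (lo, hi) in enumerate(INTERVALS):
                    if lo <= hits <= hi:
                        hist[(t, c, z)] += 1
                sum_t[t] += hits
                sum_pcm[c] += hits
    return R, hist, sum_t, sum_pcm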
4 Experimental Design and Results

To explore our research questions, we conducted experiments with CantCounter, presented sequentially in this section: different question types for RQ1 (Section 4.1), different question setups for RQ2 (Section 4.2), and different prompt clues, also for RQ2 (Section 4.3). We focus primarily on ChatGPT-3.5 (version gpt-3.5-turbo-0613) because of its convenience and wide usage; similar experiments were also conducted with other language models. All experiments were performed on a server equipped with an RTX 3090 Ti GPU. In this section, we quantitatively analyze the use of cant and scenes to bypass LLM filters within the CantCounter framework. We conduct open-world query experiments across five domains: politics, drugs, racism, weapons, and LGBT. Initially setting k to 101, we match 692 cants to 53 entities, resulting in 69,892 scenes; Data-Diffusion then expands these into 1,677,408 Q&A-Queries (69,892 × 24). This scale enables a comprehensive analysis of corpus performance and language change within specific domains.

4.1 Question Type Method (QTM)

In the Q&A evaluation, we conduct three types of tasks (a prompt-construction sketch follows the list):

• Abstractive Task: models generate responses freely, without relying on extracting specific information.
• Yes/No Task: models provide a binary response, "True" or "False," based solely on the presented question and their existing knowledge.
• Multiple-choice Task: models select the correct answer from a set of options, demonstrating comprehension of semantics and accurate identification.
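As referenced above, here is a minimal sketch of how one scene fragment could be rendered into the three query formats; the template wording is a hypothetical illustration, not the exact prompts used by CantCounter.

# Hypothetical templates turning one scene fragment into the three QTM formats.
def abstractive(scene: str) -> str:
    return f"{scene}\nWhat does the highlighted term in this scene refer to?"

def yes_no(scene: str, candidate: str) -> str:
    return (f"{scene}\nDoes the highlighted term refer to \"{candidate}\"? "
            "Answer True or False only.")

def multiple_choice(scene: str, options: list[str]) -> str:
    # (A) is the correct answer; (E) is always "I don't know".
    letters = ["(A)", "(B)", "(C)", "(D)", "(E)"]
    lines = [f"{l} {o}" for l, o in zip(letters, options + ["I don't know"])]
    return f"{scene}\nWhich option best explains the highlighted term?\n" + "\n".join(lines)

scene = "The dealer said the china white would arrive on Friday."
print(multiple_choice(scene, ["heroin", "sugar", "paint", "tea"]))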
Table 1 shows that the Multiple-choice task achieves the highest accuracy (45.38%), while the Yes/No task has the lowest (22.91%). ChatGPT's strong performance on multiple-choice questions is intriguing. In this task there are five options, (A) to (E): options (A) to (D) are relevant to a specific domain, (E) is fixed as "I don't know," and "Other" denotes an answer unrelated to any of these options; (A) is the correct choice. Figure 5 displays the box-plot analysis results.

[Figure 5: The vertical axis refers to the number of correct answers under the four tips; the total number is 404. (A) and (E) stand out in Multiple-choice, being the correct answer and "I don't know," respectively.]

Analyzing the Multiple-choice results, we identify the key factors behind this success. First, the task presents a set of answers containing one correct option and several distractors, which aids comprehension. Second, its structured format simplifies the elimination of incorrect options, improving accuracy. Last, the inclusion of an "I don't know" option raises accuracy in uncertain situations. After studying ChatGPT-3.5's treatment of option (E), we find that when the context is ambiguous or the entities implied by the context are rare, ChatGPT-3.5's accuracy drops significantly; that is, it prefers option (E). We also examine the low accuracy on the Yes/No task. Comparing ChatGPT-3.5's "False" answers with the Multiple-choice data, we find that they often correspond to option (E) or to incorrect choices in the Multiple-choice task, where the explicit options provide clarity. In addition, differences in response style and in the keyword-detection criteria affect ChatGPT-3.5's performance across the Abstractive and Yes/No tasks, since the Yes/No task restricts responses to "True" or "False." Overall, our analysis highlights how different Q&A types affect ChatGPT-3.5's accuracy in specific domains, with the Multiple-choice task performing best; further research is needed to improve ChatGPT-3.5's accuracy and adaptability in these domains.

4.2 Sample Learning Method (SLM)

In our experiments, we explore two sample setups, Zero-shot and One-shot learning (a prompt-assembly sketch follows at the end of this subsection):

• Zero-shot learning. No example is provided in the prompt, which contains only the instruction and the question.
• One-shot learning. The prompt includes one example relevant to the discussion, consisting of a sample message and the user information.

Zero-shot learning thus processes a single user message, while One-shot learning processes a sample message plus a user message. These setups help us understand the LLM's behavior under different sample learning approaches and reveal its inference capability in information-poor settings; further investigation uncovers the model's learning patterns and effects in specific domains. We use default hyper-parameter settings to avoid extensive tuning. For RQ2, we explore how the Zero-shot and One-shot setups affect the LLM's accuracy in recognizing cant scenes. Traditionally, One-shot learning often outperforms Zero-shot learning because more data is available [37]. However, our cross-domain analysis, depicted in Figure 6 and reflected in the SLM columns of Table 1, reveals an overall trend favoring Zero-shot learning, and we find that this trend varies by domain.

[Figure 6: Comparison of the number of correct Zero-shot and One-shot answers in different domains.]

In the politics domain, One-shot learning performs better thanks to ample data and contextual understanding. Conversely, in the LGBT domain, Zero-shot learning outperforms One-shot learning because publicly available examples are limited. One-shot learning helps ChatGPT-3.5 grasp the context of sensitive topics, but it may also introduce biases, lowering overall accuracy in specific domains. Similar analyses across the other domains yield consistent results.
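As referenced above, a minimal sketch of how the two sample setups could assemble a prompt; the example text and template wording are hypothetical.

# Hypothetical assembly of Zero-shot vs. One-shot prompts (SLM).
def zero_shot(question: str) -> str:
    return f"Answer the question.\n\nQuestion: {question}\nAnswer:"

def one_shot(question: str, example_q: str, example_a: str) -> str:
    # Prepend one worked example before the user question.
    return (f"Answer the question, following the example.\n\n"
            f"Question: {example_q}\nAnswer: {example_a}\n\n"
            f"Question: {question}\nAnswer:")

q = "In this scene, what does 'china white' refer to?"
print(one_shot(q, "What does 'snow' refer to in drug slang?", "Cocaine."))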
4.3 Prompt Clue Method (PCM)

In this part of the study, CantCounter explores the impact of different clues on the LLM's recognition and reasoning abilities. To this end, we experiment with four clue settings:

• None-tip. Keeps the original prompt unchanged, adding no additional clue.
• Tip-1. Adds one relevant tip to "None-tip". For example, when describing a cant for Trump, we can add the clue "Politician" in the political domain to make the prompt more directional.
• Tip-2. Adds a different relevant tip to "None-tip". For example, when describing a cant for Trump, we add the clue "United States" in the political domain to enrich the prompt content.
• All-tip. Adds both Tip 1 and Tip 2 to "None-tip"; for example, when describing a cant for Trump, we add both "Politician" and "United States" in the political domain to make the prompt most specific.

By observing the effects of these clue settings on LLMs, CantCounter can assess the fluctuations they induce in recognition and reasoning ability, helping us understand the influence of cues on LLMs and suggesting directions for improving their application and performance. To answer RQ2, Table 1 reports ChatGPT-3.5's accuracy across the five domains under the different prompt clues. In general, more clue-related information improves recognition accuracy; in the political domain, for instance, All-tip prompts perform significantly better. However, adding clues does not always raise accuracy, possibly because of information redundancy or because extra clues trigger the LLM's filters: in the LGBT domain, Tip-1 prompts were less accurate than None-tip prompts. Our analysis therefore stresses a balanced clue-selection approach that maximizes the use of external information without compromising accuracy; choosing appropriate clues in moderate quantities is key to enhancing ChatGPT-3.5's domain-specific performance.

Table 1: CantCounter statistics highlighting the top performances (%): Multiple-choice excels in QTM (45.38%), Zero-shot learning leads in SLM (52.13%), and All-tip prevails in PCM (29.11%). "A" = Abstractive, "Y/N" = Yes/No, "Mc" = Multiple-choice, "Zs" = Zero-shot, "Os" = One-shot, "NT" = None-tip, "T1" = Tip 1, "T2" = Tip 2, "AllT" = All-tip.

           ------ QTM ------    --- SLM ---    --------- PCM ---------
Domain     A      Y/N    Mc     Zs     Os      NT     T1     T2     AllT
Politics   26.81  22.55  50.64  42.85  57.15   19.01  24.75  25.19  31.05
Drugs      21.16  22.41  56.43  55.41  44.59   17.32  27.43  25.47  29.78
Racism     29.05  27.60  43.35  41.39  58.61   11.22  19.63  37.50  31.66
Weapons    50.89  16.20  32.91  54.96  45.04   18.73  28.11  25.27  27.90
LGBT       34.41  25.75  39.84  59.78  40.22   22.58  22.10  28.53  26.79
Total      31.71  22.91  45.38  52.13  47.87   19.03  24.61  27.24  29.11
Table 2: Zero-shot and One-shot Q&A accuracy in CantCounter for ChatGPT-3.5, GPT-4, Bard, New Bing, and SparkDesk (%). "Acc" = Accuracy Rate, "Rej" = Rejection Rate, "Don't know" = "I don't know" Rate.

                   ---- Zero-shot learning ----    ---- One-shot learning ----
Model              Acc    Rej    Don't know        Acc    Rej    Don't know
ChatGPT-3.5 [1]    47.61  4.66   39.91             45.52  1.63   46.45
GPT-4 [38]         27.27  0.00   70.45             50.00  0.00   34.09
Bard [39]          47.73  4.55   13.64             65.91  15.91  6.82
New Bing [40]      50.00  11.36  34.09             50.00  36.36  2.27
SparkDesk [41]     29.55  45.45  9.09              20.45  68.18  2.27

4.4 Comparison with Other LLMs

To address RQ3, we examine several LLMs alongside ChatGPT-3.5, including GPT-4 [1], New Bing [40], Bard [39], Claude [42], ERNIE [43], and SparkDesk [41]. ERNIE is optimized for Chinese content, and translating cant prompts may compromise their subtlety and effectiveness; moreover, ERNIE's frequent account suspensions hindered extensive trials [44], and Claude's sensitive-content handling likewise led to account suspensions [42]. We therefore focus on comparing and validating four other LLMs: GPT-4, Bard, New Bing, and SparkDesk. Table 2 presents the ratios of correct answers, refused answers, and "I don't know" responses. Interestingly, GPT-4 responds in all situations, never refusing to answer, in contrast to the other models, which often refuse because of content filtering. GPT-4's tendency to answer "I don't know" may stem from our controlled comparisons in the QTM and PCM methods, particularly in Multiple-choice scenarios. Conversely, the other LLMs tend to refuse, likely because their filters and classifiers categorize the content as sensitive; SparkDesk exhibits the highest refusal rate, possibly due to overly strict filters. Furthermore, the One-shot setting makes models more prone to refusal, as it relies on context understanding and the added example can trigger filters. These findings offer insights into the performance of these LLMs across different learning tasks and inform future research directions.

4.5 Takeaways

We observe varying accuracy across Q&A-Query types (RQ1), with the Multiple-choice task being the most accurate and the Yes/No task the least. In sensitive domains, Zero-shot learning performs better than One-shot learning (RQ2), and adding prompt clues improves cant-identification accuracy (RQ2). More recent LLM versions consistently avoid refusing to answer (RQ3), but models are more likely to refuse questions related to racism than to LGBT topics (RQ4).

5" + } + ], + "Jianyi Zhang": [ + { + "url": "http://arxiv.org/abs/2305.05644v2", + "title": "Towards Building the Federated GPT: Federated Instruction Tuning", + "abstract": "While \"instruction-tuned\" generative large language models (LLMs) have\ndemonstrated an impressive ability to generalize to new tasks, the training\nphases heavily rely on large amounts of diverse and high-quality instruction\ndata (such as ChatGPT and GPT-4). Unfortunately, acquiring high-quality data,\nespecially when it comes to human-written data, can pose significant challenges\nboth in terms of cost and accessibility. Moreover, concerns related to privacy\ncan further limit access to such data, making the process of obtaining it a\ncomplex and nuanced undertaking. Consequently, this hinders the generality of\nthe tuned models and may restrict their effectiveness in certain contexts. To\ntackle this issue, our study introduces a new approach called Federated\nInstruction Tuning (FedIT), which leverages federated learning (FL) as the\nlearning framework for the instruction tuning of LLMs. This marks the first\nexploration of FL-based instruction tuning for LLMs. This is especially\nimportant since text data is predominantly generated by end users.
Therefore,\nit is imperative to design and adapt FL approaches to effectively leverage\nthese users' diverse instructions stored on local devices, while preserving\nprivacy and ensuring data security. In the current paper, by conducting widely\nused GPT-4 auto-evaluation, we demonstrate that by exploiting the heterogeneous\nand diverse sets of instructions on the client's end with the proposed\nframework FedIT, we improved the performance of LLMs compared to centralized\ntraining with only limited local instructions. Further, in this paper, we\ndeveloped a GitHub repository named Shepherd. This repository offers a\nfoundational framework for exploring federated fine-tuning of LLMs using\nheterogeneous instructions across diverse categories.", + "authors": "Jianyi Zhang, Saeed Vahidian, Martin Kuo, Chunyuan Li, Ruiyi Zhang, Tong Yu, Yufan Zhou, Guoyin Wang, Yiran Chen", + "published": "2023-05-09", + "updated": "2024-01-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.DC", + "cs.SY", + "eess.SY" + ], + "main_content": "Introduction Large Language Models (LLMs) have become ubiquitous in natural language processing (NLP) [6, 17, 64, 65], where one single model can perform well on various language tasks, including established tasks such as text generation, machine translation, and question answering, as well as novel application-oriented tasks in human daily life [19, 69, 81]. To align LLMs with human intents, instruction tuning has been proposed, fine-tuning LLMs on instruction-following data [62, 84, 85]. Although instruction tuning has demonstrated great effectiveness in improving the zero- and few-shot generalization capabilities of LLMs, its performance on real-world tasks is contingent on the quantity, diversity, and quality of the collected instructions [58, 84], and the process of collecting these instructions can be expensive [75, 84]. Beyond the commonly acknowledged constraints of time and labor expense, the increasing awareness of data sensitivity highlights a significant challenge in acquiring extensive, high-quality instructions [3, 26, 34]. For instance, collecting vast amounts of daily conversations from users is a valuable means of providing guidance for LLMs, enabling them to generate authentic and genuine responses; however, privacy concerns may hinder users from sharing their conversations, resulting in a limited quantity of instructions that are not fully representative of the target population. Likewise, many companies treat their instructions as proprietary assets that are closely guarded. They are reluctant to share their instructions with external parties, as these often contain confidential and proprietary information critical to their success and profitability [27]. For example, pharmaceutical companies rely on meticulously crafted instructions that may include details about the chemical composition of new drugs and the results of clinical trials [35]. Hence, the sensitive nature of these instructions poses significant challenges for traditional centralized approaches to instruction tuning. We aim to tackle these challenges by exploring the potential of federated learning (FL) as a promising solution [56]. This collaborative learning technique enables many clients to jointly learn a shared model without sharing their sensitive data.
In particular, in our proposed federated instruction tuning, clients first download a global LLM from a central server and then compute local model updates using their respective local instructions. These local updates are transmitted back to the server, where they are aggregated and integrated to update the global LLM. Given that clients often have limited computational resources compared with traditional centralized training cloud servers, which can use thousands of GPUs to fully fine-tune all parameters of an LLM, we resort to parameter-efficient tuning techniques. This substantially decreases the computational and communication demands by reducing the number of trainable parameters on each device. Our framework thus makes efficient use of the computational resources of commonly available local edge devices, as well as their diverse local instructions, eliminating the dependence on large cloud servers for fine-tuning LLMs. Our major contributions are summarized as follows:

• We make the first attempt to leverage FL for instruction tuning (FedIT) of LLMs. In the predominant instruction tuning of LLMs, acquiring access to extensive and high-quality instructions can present significant obstacles due to the associated costs and privacy concerns. In this work, we show that we can circumvent these limitations by exploiting the diverse sets of available instructions from users in the FL system (especially in cross-device FL, where the number of clients can be on the order of billions). To make deploying LLMs within an FL setup viable in terms of communication and computation cost, we have the clients exploit parameter-efficient tuning (PETuning) methods.

• We conduct a comprehensive study of the heterogeneity within FL instruction tuning. We employ the GPT-4 auto-evaluation method, widely used in related research [11, 63], to demonstrate the effectiveness of our FedIT approach in enhancing response quality by leveraging diverse available instructions, and we discuss potential avenues for future research on improving FL-based LLM fine-tuning for practical deployment.

• We have developed and released a GitHub repository called Shepherd (https://github.com/JayZhang42/FederatedGPT-Shepherd), designed for ease of customization and adaptability, thereby benefiting future research in this field.

2 Related Work

2.1 Instruction Tuning of Large Language Models

Instruction tuning has emerged as a simple yet effective approach to enhance the generalizability of LLMs on complicated real-world tasks. This research area has recently gained increasing attention, particularly since the introduction of FLAN [85], which demonstrates significant zero-shot performance, and InstructGPT [62], which aligns GPT-3 [6] to follow human intents via supervised tuning and RLHF [13, 70]. The development of InstructGPT has been instrumental in the success of ChatGPT [60] and GPT-4 [61]. In general, current research efforts can be broadly classified into two main categories based on the source of instructions: (1) human-annotated task prompts and feedback [62], and (2) machine-generated instruction-following data.
For the latter, self-instruct [84] is utilized, where a strong teacher LLM generates a comprehensive collection of instructional data that a student LLM can then use to gain alignment capabilities. Thanks to the recently open-sourced LLM LLaMA [77], which has demonstrated performance on par with proprietary LLMs such as GPT-3, the open-source community now has ample opportunities to actively explore promising solutions for building its own LLMs capable of following language and multimodal instructions [11, 45, 63, 75, 88, 98]. In this line of research, it is commonly assumed that instruction-following data can be centralized, regardless of its source. However, we anticipate that decentralization will become a prevalent trend in sharing and accessing instruction-following data because of its sensitivity and popularity. As such, we propose the first attempt to address this issue using FL.

Parameter-Efficient Fine-Tuning (PEFT). PEFT methods aim to optimize LLMs while minimizing the computational and storage demands of the training process. Various innovative methods have been proposed toward this goal, each with distinctive characteristics, including LoRA [31], P-Tuning [49], Prefix Tuning [42, 48], and Prompt Tuning [38]. We refer interested readers to the DeltaPapers repository (https://github.com/thunlp/DeltaPapers) and the Delta Tuning paper [20] for a comprehensive overview of advanced PEFT methods. We adopt LoRA in our FL framework because of its promising performance in recent studies on instruction tuning, including Alpaca-lora (https://github.com/tloen/alpaca-lora) and Baize [88], and we leave the exploration of other PEFT techniques within the FL framework for future work.

2.2 Federated Learning in NLP Tasks

Federated Learning [55] is a decentralized and collaborative machine learning technique that enables data to remain on user devices. Significant research efforts have focused on addressing privacy and heterogeneity challenges and developing advanced FL methods [33, 52, 59, 92, 93]. These advancements include designing optimization methods with improved aggregation performance [10, 21, 28, 66, 68, 79, 102], increasing the framework's robustness against adversarial attacks [72], devising effective client selection mechanisms [12, 25, 78, 95], enhancing personalization capabilities [18, 40, 80, 89], and boosting the overall efficiency of FL systems [37, 39, 54, 67, 74]. Furthermore, recent research has explored the application of FL to NLP tasks such as language modeling [29, 90], text classification [8, 44], sequence tagging [23, 32], and dialogue generation [41, 51]. Several open benchmarks and repositories support the study of federated NLP tasks, including the LEAF benchmark [7], the FedNLP benchmark [43], FedML [30], FedScale [36], and FATE [50]. Recent research has also highlighted the importance of pre-trained models for federated learning [9, 73, 76, 86], as they offer a more powerful initialization than training from scratch, improving the convergence and robustness of FL training in the face of data heterogeneity. Our study represents the first work to leverage FL for the instruction tuning of LLMs, and we hope it inspires the two communities to explore this intersection.

3 Federated Instruction Tuning

In light of the above review of FL and instruction tuning, we now proceed to our Federated Instruction Tuning (FedIT) approach.
This section is structured as follows. First, we introduce the overall framework in Subsection 3.1. Subsequently, we delve into the intricacies of this framework and examine the heterogeneity of the instruction dataset in Subsection 3.2. Following this, we discuss the parameter-efficient technique, LoRA, that is integrated into our framework and explore its connections with other federated learning algorithms. Lastly, in Subsection 3.4, we present our Python-based GitHub repository, "Shepherd," to facilitate research in federated instruction tuning.

3.1 An Overview of FedIT

Drawing on the successful application of FL to privacy protection across machine learning domains, we introduce the FedIT framework. By harnessing the advantages of FL and PEFT, our framework enables secure and cost-effective LLM instruction tuning. The overall framework, illustrated in Figure 1 and Algorithm 1, involves two primary components, local training operations on the client side and scheduling and aggregation operations on the server side, which work together to ensure efficient training. Our FedIT framework is designed to address the challenges of collecting high-quality data and ensuring data privacy by keeping the instructions on local devices throughout the process. By protecting data sensitivity, we can encourage more clients to participate in federated instruction tuning. Consequently, the combined instruction data across clients can cover a broader range of topics, tasks, and valuable information, as clients may come from different areas and possess domain-specific expertise. This FL approach enables our framework to adapt to diverse and evolving instruction datasets, resulting in more robust and generalized LLM performance. Moreover, FedIT incorporates a parameter-efficient fine-tuning technique, LoRA, for local training, which reduces computational and communication overhead for local edge devices with limited system resources. As a result, we can leverage the computational capabilities of a multitude of distributed edge devices that are often disregarded in conventional centralized instruction tuning, enhancing the scalability of FedIT for large-scale instruction-tuning challenges.

The framework assigns an LLM to each client and performs client selection to determine which clients will participate in local instruction tuning. During instruction tuning, clients use their local instruction datasets to update a small, trainable adapter added to the frozen pre-trained model weights. This approach reduces the cost of fine-tuning and is compatible with the limited computational resources of local devices. Upon completion, clients send the updated adapters back to the server, which aggregates the received adapter parameters and conducts another round of client selection; this iterative process continues until convergence. We also wish to emphasize the importance of client selection: in real-world settings, not all clients may be available for local instruction tuning, as local processors may be occupied by other tasks, so client selection comes into play to better reflect real-world scenarios. The server can actively choose clients for training based on their distinct instructions and computational resources, thus improving the overall efficiency of the FL framework by identifying clients that best represent the overall data distribution.

[Figure 1: The framework of Federated Instruction Tuning (FedIT). Clients hold heterogeneous local instructions from different tasks (e.g., open QA, summarization, brainstorming) and send only their LoRA module updates (A_n, B_n) to the server for aggregation.]

Algorithm 1: Federated Instruction Tuning (FedIT)
Initialization: each client holds the initial global large language model with parameters w and a lightweight adapter with parameters ∆w(0); client index subset M = ∅; K communication rounds; k = 0
Training:
while k ≤ K do
  Server updates M using specific strategies          ▷ select clients for local training
  for n ∈ M in parallel do                             ▷ parameter-efficient fine-tuning on local instructions
    Client n freezes the LLM and updates its adapter weights starting from ∆w(k):
      ∆w(k+1)_n ← InstructionTuning(∆w(k)_n)
  end for
  ∆w(k+1) ← Aggregate(∆w(k+1)_n) for n ∈ M             ▷ aggregate the adapters at the server
  k ← k + 1
end while
Outcome: the final adapter with parameters ∆w(K) and the global LLM with parameters w
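A minimal Python sketch of this training loop follows; select_clients, local_instruction_tuning, and the toy Client class are hypothetical helpers that mirror Algorithm 1 rather than reproducing the Shepherd implementation.

# Sketch of the FedIT training loop in Algorithm 1 (helper logic is illustrative).
import random
import torch

def select_clients(clients, frac):
    # Server-side scheduling; a simple random sample, as in Shepherd's vanilla version
    return random.sample(clients, max(1, int(frac * len(clients))))

def local_instruction_tuning(client, adapter):
    # Stand-in for one epoch of local LoRA tuning; returns updated adapter tensors
    return {name: w + 0.01 * torch.randn_like(w) for name, w in adapter.items()}

def fedit(global_adapter, clients, rounds, frac=0.05):
    for _ in range(rounds):
        selected = select_clients(clients, frac)      # server updates M
        adapters = [local_instruction_tuning(c, global_adapter) for c in selected]
        sizes = [len(c.dataset) for c in selected]
        total = sum(sizes)                            # weighted FedAvg over adapters only
        global_adapter = {name: sum(a[name] * (s / total)
                                    for a, s in zip(adapters, sizes))
                          for name in global_adapter}
    return global_adapter                             # final adapter Delta-w(K)

class Client:
    def __init__(self, n):
        self.dataset = list(range(n))                 # toy local instruction set

adapter = {"lora_A": torch.zeros(8, 16), "lora_B": torch.zeros(16, 8)}
final = fedit(adapter, [Client(100 + i) for i in range(100)], rounds=3)
print(final["lora_A"].shape)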
3.2 Heterogeneity of Instructional Data

Beyond the practical benefits of FedIT, our research makes a unique contribution by presenting a scenario for instruction tuning of LLMs in which statistical heterogeneity can serve as a positive factor for federated learning. Our work demonstrates that the extensive heterogeneous and diverse sets of instructions, available in multiple languages, can in fact be a blessing for our FedIT approach. For instance, different clients may hold different instruction tasks, such as open-domain QA and writing, whose content and format differ substantially: QA tasks typically involve fact-based questions and answers, while writing tasks involve instructions for generating coherent and meaningful sentences. To obtain a comprehensive understanding of the data heterogeneity inherent in the instructional data used in this study, we performed an in-depth examination of the Databricks-dolly-15k dataset (https://huggingface.co/datasets/databricks/databricks-dolly-15k). This publicly accessible dataset, consisting of instruction-following records generated by a multitude of Databricks employees, spans a range of behavioral categories as outlined in the InstructGPT paper [62], encompassing brainstorming, classification, closed QA, generation, and more. To emulate an FL environment with ten clients, we partitioned the entire Databricks-dolly-15k dataset into ten shards using a widely adopted partitioning method [30, 36, 94], with each shard assigned to an individual client. The category distribution of each client's instruction dataset is illustrated in Figure 2a. As is evident in the figure, each client's dataset contains imbalanced categories of instructions, with some categories absent entirely, reflecting real-world scenarios in which users may not possess expertise across all instruction categories.
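Below is a simplified sketch of a category-skewed, shard-based partition in the spirit of the widely adopted method cited above; it is an illustrative assumption, not the exact logic of Shepherd's client_data_allocation.py.

# Simplified shard-based, category-skewed partitioning (illustrative only).
import random
from collections import defaultdict

def partition_by_category(records, num_clients=10, shards_per_client=2, seed=0):
    # records: list of dicts with a 'category' field (e.g., Databricks-dolly-15k rows)
    rng = random.Random(seed)
    by_cat = defaultdict(list)
    for r in records:
        by_cat[r["category"]].append(r)
    # Sort by category, then slice into num_clients * shards_per_client shards
    pool = [r for cat in sorted(by_cat) for r in by_cat[cat]]
    n_shards = num_clients * shards_per_client
    size = len(pool) // n_shards
    shards = [pool[i * size:(i + 1) * size] for i in range(n_shards)]
    rng.shuffle(shards)
    # Each client gets a few shards -> imbalanced, sometimes missing, categories
    return [sum(shards[c * shards_per_client:(c + 1) * shards_per_client], [])
            for c in range(num_clients)]

toy = [{"category": c} for c in
       ["open_qa"] * 40 + ["summarization"] * 30 + ["brainstorming"] * 30]
clients = partition_by_category(toy)
print([len(c) for c in clients])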
In the absence of FedIT, owing to the challenges of collecting sensitive instruction data, the model can only be trained on each user's local instruction dataset, as depicted in the left subfigure of Figure 2b. By implementing our FedIT approach, however, the model can be trained on the local instruction datasets of all clients, as illustrated in the right subfigure of Figure 2b. As a result, FedIT enables instruction tuning on a dataset with enhanced diversity and a larger number of data points, encompassing the complete Databricks-dolly-15k dataset, whose distribution is shown in the right subfigure of Figure 2b. Comprising eight unique categories with varying numbers of instructions, this combined dataset offers increased diversity, allowing the model to generalize to a wider array of tasks than training solely on each client's local dataset with its limited categories and quantity.

Beyond task categories, language diversity adds a further dimension of heterogeneity to federated instruction tuning. In real-world applications, LLMs are typically designed to be multilingual in order to serve users from diverse regions and countries. Addressing a multilingual audience poses several challenges: apart from understanding all the languages in the dataset, achieving fairness across languages, particularly those underrepresented in the instruction data, is a crucial aspect that warrants further investigation in federated instruction tuning. Moreover, domain-specific instructions further compound the heterogeneity of the framework. Different domains have distinct contexts, each characterized by unique terminology and sentence structure; legal or pharmaceutical contexts, for instance, may require specific vocabulary and phrasing that do not apply to other domains. Additional sources of heterogeneity, such as task complexity, task ambiguity, emotional tone, and cultural factors, also merit further investigation.

[Figure 2a: The distribution of instruction dataset categories across each client's dataset. Each of the ten clients holds an imbalanced mix of two or three of the eight categories, with the remaining categories absent.]
[Figure 2b: Training on a higher-quality, more diverse dataset with FedIT. Left: each client's local dataset (roughly 1,500 instructions each); right: the combined dataset spanning all eight categories: Open QA (OQA), General QA (GQA), Classification (C), Closed QA (CQA), Brainstorming (B), Information Extraction (IE), Summarization (S), and Creative Writing (CW), with category shares ranging from 24.0% (3,611 instructions) down to 4.7% (711).]

Figure 2: Illustration of the heterogeneity of FedIT with the Databricks-dolly-15k instruction dataset. The model can be trained only on each user's particular local instruction categories (bottom left), or, by implementing our FedIT, on the local instruction datasets of all clients, with greater diversity and a larger number of data points covering the entire range of the subject matter (bottom right).

3.3 Parameter Efficiency in Federated Instruction Tuning

Given the limited computational capabilities of local devices, which cannot support full fine-tuning of a large language model, it is crucial to adopt a parameter-efficient fine-tuning strategy that leverages local computational resources. For a weight matrix W_0 ∈ ℝ^{d×k} of a large pre-trained LLM, the method we adopt, Low-Rank Adaptation (LoRA), freezes W_0 and constrains its update ∆W to a low-rank decomposition W_0 + ∆W = W_0 + BA, where B ∈ ℝ^{d×r} and A ∈ ℝ^{r×k} are the two trainable parameter matrices and the rank r ≪ min(d, k). For a linear layer h = W_0 x, the modified forward pass is

h = W_0 x + BAx

Compared with fully fine-tuning the LLM, LoRA considerably decreases the number of trainable parameters; Section 4.1 and Table 2 report the parameter counts and the corresponding memory costs. Once local parameter-efficient fine-tuning with LoRA is completed, clients only need to transmit the parameter matrices B and A to the server, significantly reducing communication costs compared with sending updates for all LLM parameters. Finally, the central server aggregates these local parameter matrices into new global model parameters via FedAvg. Notably, when deployed in production, LoRA introduces no additional inference latency relative to a fully fine-tuned model, since we can explicitly compute and store W_0 + BA and run inference as usual. The LoRA method we employ is also scalable to varying system resources: if a client's communication or computational resources are significantly lower than others', it can adjust its LoRA configuration by reducing the number of weight matrices W_0 that are decomposed into low-rank A and B, or by decreasing the rank r of A and B. Optimizing the aggregation process under substantial system heterogeneity within the FL framework, where different clients have distinct LoRA configurations, is an intriguing topic for further research.
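A minimal PyTorch sketch of a LoRA-augmented linear layer, illustrating the forward pass h = W_0 x + BAx, is shown below; it is a from-scratch illustration, whereas Shepherd itself wraps the model with the PEFT package.

# From-scratch LoRA linear layer (illustrative; Shepherd uses the PEFT package).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, d: int, k: int, r: int = 8):
        super().__init__()
        self.W0 = nn.Linear(k, d, bias=False)            # frozen pre-trained weight
        self.W0.weight.requires_grad = False
        self.A = nn.Parameter(torch.randn(r, k) * 0.01)  # random Gaussian init
        self.B = nn.Parameter(torch.zeros(d, r))         # zero init -> BA = 0 at start

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.W0(x) + x @ self.A.T @ self.B.T      # h = W0 x + B A x

layer = LoRALinear(d=16, k=32, r=4)
trainable = [n for n, p in layer.named_parameters() if p.requires_grad]
print(trainable)   # only ['A', 'B'] are trained and communicated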
3.4 Shepherd: A GitHub Platform for FedIT Support

We introduce Shepherd (https://github.com/JayZhang42/FederatedGPT-Shepherd), a lightweight framework designed to implement federated parameter-efficient instruction tuning. Shepherd supports ongoing research in this area, as well as other NLP tasks, by providing a user-friendly and scalable platform capable of handling large datasets. The framework allows for seamless integration of innovative algorithms and configurations and is compatible with a range of recent popular large language models, such as Stanford Alpaca [75], Vicuna [11], Pythia [5], Dolly (https://github.com/databrickslabs/dolly), Baize [88], and Koala [24], among others. The Shepherd pipeline consists of four main components: 1) client data allocation, 2) client participation scheduling, 3) simulated local training, and 4) model aggregation.

Client Data Allocation. To simulate the real-world scenario in which each client has its own unique dataset, we employ a "synthetic" partitioning process, implemented in the client_data_allocation.py module. We offer two methods to replicate the non-independent and identically distributed (non-i.i.d.) nature of the clients' datasets. In the first approach, we allocate n-class training data to each client, with the number of classes differing across clients, resulting in unbalanced class sizes; despite this imbalance, the volume of data in each client's dataset is roughly equivalent. The second approach is similar to the first but additionally lets the data volume vary significantly across clients' datasets.

Client Participation Scheduling. The process of selecting clients to participate in training is crucial and is implemented in the fed_util/sclient_participation_scheduling.py module. Our vanilla version of Shepherd employs random selection, and we aim to enhance the client selection strategy with efficiency-driven methods that address data and system heterogeneity, such as those proposed in [37, 95].

Simulated Local Training. This core component of our framework is implemented in the fed_util/client.py module. In real-world scenarios, all selected clients perform their local training simultaneously, which can be computationally expensive to simulate; to make simulation feasible for researchers with limited resources, our framework runs the clients' local training sequentially, one at a time. To implement the LoRA method, we utilize the PEFT package [53] and the Alpaca-lora repository (https://github.com/tloen/alpaca-lora) to wrap the frozen, original pre-trained model with the LoRA configuration, enabling parameter-efficient fine-tuning in Shepherd:

model = get_peft_model(model, LoRA_config)

To aid future researchers in understanding and implementing our framework, we define a Python class, GeneralClient, which represents a client in the FL training process and includes attributes holding the client's required information.

class GeneralClient:
    def __init__(self, model, **args):
        self.model = model

We also define several GeneralClient methods that carry out the main components of the local training process.

def preprare_local_dataset(self, **args):
    ...
    self.local_train_dataset = ...
    self.local_eval_dataset = ...

This method prepares the client's local dataset by reading data from the specified data path and transforming it with the required tokenizer and prompt. Its design allows for easy use with new datasets and supports the exploration of various prompts and tokenizers in future research.

def build_local_trainer(self, **args):
    ...
    self.local_trainer = transformers.Trainer(self.model, **args)

This method constructs a local trainer for client-side training by leveraging the Hugging Face Trainer.
This approach allows for the design of customized and efficient training configurations with tailored arguments based on specific requirements.

def initiate_local_training(self):
    ...

This method covers the preparatory steps for training. In our vanilla implementation, it creates and modifies certain attributes of the GeneralClient class to conveniently record information related to the model in parameter-efficient learning, and it allows custom functions to be integrated for various purposes in future applications.

def train(self):
    self.local_trainer.train()

This method executes local training by leveraging the capabilities of the established local trainer.

def terminate_local_training(self, **args):
    ...
    return self.model, ...

The terminate_local_training method concludes the local training process: it saves the locally trained model parameters and updates the relevant information associated with the local training session.

Table 1: Prompt Template

Prompt (with input):
  Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
  Instruction: {instruction}
  Input: {input}
  Response:

Prompt (no input):
  Below is an instruction that describes a task. Write a response that appropriately completes the request.
  Instruction: {instruction}
  Response:

Model Aggregation. This component combines the trained client models into a single global model, with the objective of producing a more generalized and accurate model. In our parameter-efficient setting, model aggregation combines only the trainable parameters specified by the LoRA configuration, rather than all LLM parameters, reducing computational and communication costs. It is implemented in the fed_util/model_aggregation.py module, which provides a platform for adopting various federated optimization methods, including FedAvg [55].

In its current form, Shepherd presents a fundamental and accessible vanilla version designed for ease of understanding and modification. In future iterations, we plan to expand the framework with more complex functionality, such as novel client selection strategies [12, 25, 78, 95] and advanced optimization methods [10, 68, 79]. We also aim to support additional instruction datasets and a wider range of NLP tasks. Furthermore, we believe the framework's practicality in real-world scenarios can be significantly improved by integrating advanced system simulations that account for factors such as computing time delays, communication latencies, overheads, and bandwidth limitations.
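Relatedly, the one-line get_peft_model call shown in the Simulated Local Training component expands to something like the following with the PEFT library; the checkpoint name and configuration values here are assumptions for illustration, not Shepherd's exact settings.

# Expanded version of the get_peft_model call (checkpoint and config are illustrative).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

model = AutoModelForCausalLM.from_pretrained("decapoda-research/llama-7b-hf")  # assumed checkpoint
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                     # low rank, matching our experimental setting
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # which linear layers to wrap
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()   # only the LoRA A/B matrices remain trainable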
4 Qualitative Study

4.1 Implementation Details

In our FL setup, we assume the presence of 100 clients. We apply the Shepherd framework's second data partitioning technique to divide the remaining data of the Databricks-dolly-15k dataset into 100 distinct portions, each serving as an individual client's local instruction dataset. We conduct 20 communication rounds, each randomly selecting 5 clients (5%) for training; each selected client performs one epoch of local training on its instruction dataset using a single Nvidia Titan RTX with 24 GB of memory. We initialize with the 7B LLaMA model, which remains frozen during training, reducing GPU memory usage and increasing training speed. In alignment with Baize's settings [88], we apply LoRA to all linear layers with a rank of 8 to boost adaptation capabilities. Following [31], we use random Gaussian initialization for A and set B to zero, so that BA is zero at the beginning of training. We employ the Adam optimizer to update the LoRA parameters with a batch size of 32 and a learning rate of 1.5e-4, set the maximum input sequence length to 512, and use the prompt template adopted from Alpaca-lora shown in Table 1. The implementation of FedIT is completed using our repository Shepherd, and the resulting model is referred to as Shepherd-7B. Table 2 details the numbers of model parameters, the training time, and the GPU memory consumption.

Table 2: Numbers of parameters (frozen & trainable), training time, and GPU memory cost on a single Nvidia Titan RTX.

Model        Orig. Param  Adapt. Param  Trainable  Training Time  GPU Memory
Shepherd-7B  7B           17.9M         0.26%      2 hours        23GB

4.2 Qualitative Study with Automatic Evaluation

Following the evaluation approach of the Vicuna project [11] and GPT-4-LLM [63], we use GPT-4 to automatically assess the responses generated by our Shepherd-7B model and the baseline models on 20 unseen questions randomly sampled from the Vicuna project's evaluation set [11]; these questions belong to categories unseen during training, such as counterfactual, Fermi, and math questions. Each model produces one response per question, and GPT-4 rates the relative quality of each pair of responses on a scale of 1 to 10. To reduce the impact of randomness in GPT-4's scoring, we have it rate each response pair three times and average the ratings. We compare Shepherd-7B against five baseline models. The first, denoted LLaMA, is a 7B LLaMA model without fine-tuning on the Databricks-dolly-15k dataset; comparison with this baseline demonstrates the improvement in response quality achieved through our FedIT framework. The next three baselines are 7B LLaMA models fine-tuned for one epoch on three different individual clients' local datasets without the model aggregation of FL; comparing these models with ours highlights the benefit of utilizing diverse instruction datasets from multiple clients. "Local-1" focuses solely on the brainstorming task, "Local-2" on the closed question answering task, and "Local-3" on the classification and brainstorming tasks. The final, strong baseline, dubbed "CentralizedModel," is fine-tuned on the entire Databricks-dolly-15k dataset for one epoch, representing the ideal centralized scenario in which the server can collect all clients' instructions; it serves as an upper bound, since we aim for FL to eventually match centralized training. We apply GPT-4 automatic evaluation to the responses generated by Shepherd-7B and the baselines and list the averaged GPT-4 scores in Table 3.
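Before turning to the scores, here is a sketch of the pairwise rating loop described above; query_gpt4 is a hypothetical wrapper around a chat-completion API, and the judging prompt wording is an assumption rather than the exact prompt from the Vicuna evaluation.

# Hypothetical pairwise GPT-4 judging loop (prompt wording is illustrative).
import re
from statistics import mean

JUDGE_TEMPLATE = (
    "Rate the quality of the two responses to the question on a scale of 1 to 10.\n"
    "Question: {q}\nAssistant 1: {r1}\nAssistant 2: {r2}\n"
    "Reply with two numbers separated by a space."
)

def query_gpt4(prompt: str) -> str:
    raise NotImplementedError("wrap your chat-completion client here")

def judge_pair(question: str, resp1: str, resp2: str, trials: int = 3):
    s1, s2 = [], []
    for _ in range(trials):                  # rate three times, then average
        reply = query_gpt4(JUDGE_TEMPLATE.format(q=question, r1=resp1, r2=resp2))
        a, b = re.findall(r"\d+(?:\.\d+)?", reply)[:2]
        s1.append(float(a))
        s2.append(float(b))
    return mean(s1), mean(s2)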
Table 3: A summary of the baselines and their scores as evaluated by GPT-4. Scores are reported as (Baseline's score, Shepherd-7B's score), and the Relative Score is defined as Shepherd-7B's score / Baseline's score.

Baseline          Task                                                  Scores          Relative Score
CentralizedModel  Centralized tuning with all the instructions          (142.2, 130.7)  0.919
LLaMA             No instruction tuning                                 (114.0, 131.7)  1.155
Local-1           Brainstorming instruction tuning                      (120.0, 131.0)  1.092
Local-2           Closed question answering instruction tuning          (116.1, 129.0)  1.111
Local-3           Classification and brainstorming instruction tuning   (121.3, 131.8)  1.087

As Table 3 demonstrates, our proposed Shepherd-7B significantly outperforms the LLaMA model, evidencing the effectiveness of our FedIT approach. Shepherd-7B also achieves considerably higher scores than the baselines fine-tuned solely on local instruction datasets, underlining the benefit of leveraging diverse instruction datasets from multiple clients and showing that the heterogeneity and diversity of instructions within the FL framework can help adapt LLMs to different unseen tasks. A comparison with the strong CentralizedModel baseline, however, reveals that our model still has room for improvement. This disparity is partly attributable to the fact that the local models aggregated at the server are trained on instructions with substantially different distributions, which can cause them to learn different representations of the instructions. Consequently, more efficient federated optimization and client scheduling methods deserve further exploration, such as FA-LD [16] and FedCBS [95], which are inspired by Bayesian sampling methods [47, 87, 96, 97, 99, 100], to enhance the aggregation process. In conclusion, as discussed in Section 3.2, statistical heterogeneity can be a beneficial factor for FedIT, as it enhances the diversity of the instruction data and thereby the model's generalization to unseen tasks; however, to fully exploit this benefit, advanced federated optimization methods need to be developed and integrated to manage and leverage heterogeneity more effectively.

To evaluate the practical significance of this research, we further compare our model and the baselines with established industry products such as ChatGPT. In line with our ultimate goal of developing federated GPT models, we use GPT-4 auto-evaluation to compare these models' responses with those of GPT-3.5-turbo (ChatGPT); the resulting Relative Scores over ChatGPT are presented in Figure 3. As can be seen, our method achieves superior performance compared with all baselines except the CentralizedModel, which supports its potential to effectively address future product-development scenarios where instruction data may be scarce due to the difficulty of collecting sensitive data. Overall, this evaluation highlights the value and applicability of our approach in real-world scenarios.

[Figure 3: The relative scores of all models against ChatGPT (GPT-3.5-turbo): CentralizedModel 139.0/172.7 (81%), Shepherd-7B (ours) 133.3/175.7 (76%), Local-1 125.3/173.7 (72%), Local-3 125.3/173.3 (72%), Local-2 123.7/173.7 (71%), LLaMA 118.3/176.0 (67%).]
4.3 Qualitative Study with Example Demonstration

In this section, we demonstrate the efficacy of FedIT through examples that compare our model's performance with that of the baseline models. The first example is drawn from the 20 unseen questions of [11], and the second from the Databricks-dolly-15k dataset. As illustrated in Table 4, our Shepherd-7B model generates a more accurate and pertinent response than Local-3, and a more detailed one than LLaMA. Our response outlines potential discoveries Newton could have made in biology, such as evolution, genetics, cellular biology, immunology, infectious diseases, and vaccines, whereas Local-3 incorporates extraneous information that makes its response less relevant to the question.

Table 4: Response demonstration for a counterfactual question from [11]: "What if Isaac Newton had focused on biology instead of physics?"

CentralizedModel: If Isaac Newton had focused on biology, he likely would have discovered key biological concepts such as evolution, genetics, and cellular biology. He might also have made important discoveries in immunology, infectious diseases, and vaccines. Isaac Newton's physics theories changed the way we think about the laws of motion. If he had focused on biology instead, he may not have made the discoveries that he did...

LLaMA: If Isaac Newton had focused on biology instead of physics, he may have discovered DNA, which could have had a profound impact on medicine.

Local-1: Biology is a much broader field than physics, and there are many different subfields that Newton could have focused on, like the study of living organisms.

Local-2: Isaac Newton was a physicist, mathematician, astronomer, a natural philosopher, and a member of the Royal Society. But if he had focused on biology instead of physics, he would have been a biologist, and we would have been much more likely to have found vaccines for diseases like polio and smallpox, and we would have a much better understanding of the causes and treatment of some of the most devastating illnesses of the 20th century.

Local-3: If Isaac Newton had focused on biology instead of physics, he may have discovered that living organisms are made of carbon. As we now know, carbon is an essential component of all living organisms, and it is even found in the DNA of every living cell. He would then have discovered that carbon can be found in the form of coal, oil, natural gas, and methane. 1. What are the key differences between renewable and non-renewable energy sources? 2. What if Isaac Newton had focused on biology instead of physics? 3. What would have happened if Isaac Newton had never focused on physics? ...

Shepherd-7B (ours): If Isaac Newton had focused on biology, he likely would have discovered key biological concepts such as evolution, genetics, and cellular biology. He might also have made important discoveries in immunology, infectious diseases, and vaccines.

ChatGPT: If Isaac Newton had focused on biology instead of physics, his contributions would have been different. Biology is a completely different field of study, and Newton's expertise in physics may not have been easily transferable to biology. However, Newton was a brilliant intellectual, and his approach to scientific investigation was characterized by rigorous and systematic methodology. It is possible that if he had focused on biology, he could have made significant contributions in the field by applying his scientific method, logic, and insight to biological questions. It is impossible to know what exactly his contributions to biology might have been, but there is no doubt that he would have made significant contributions in any field he chose to study.
It is possible that if he had focused on biology, he could have made significant contributions in the field by applying his scientific method, logic, and insight to biological questions. It is impossible to know what exactly his contributions to biology might have been, but there is no doubt that he would have made significant contributions in any field he chose to study. 12 \fsizes vaccines and the understanding of disease causes and treatments. Although this speculation is intriguing, it is less focused on specific biological areas than our response, which emphasizes the potential areas of biological research where Newton might have made significant contributions. Moreover, it briefly mentions Newton\u2019s actual background, which is not directly related to the question but provides context. Even though baseline Local-1 is primarily fine-tuned on brainstorming instructions that share similarities with counterfactual QA, since they both involve creative thinking and deal with hypothetical situations, its response lacks depth and does not discuss the potential impact of Newton\u2019s focus on biology. Counterfactual QA typically evaluates or analyzes past events, involving questions about alternative outcomes, necessitating an understanding of the factors leading to a specific event outcome [57]. This distinction from merely producing novel ideas or solutions without assessing past events as seen in brainstorming, highlights the necessity for LLMs to possess other capabilities such as summarization, information extraction, and creative writing. Consequently, this emphasizes the significance of diverse instruction tuning for LLMs and illustrates the advantages of our methodology. Table 5: Response demonstration for a classification task from Databricks-dolly-15k dataset. The fact is that Binzasara is a percussion instrument and the Calabrian Lira is a stringed instrument. Identify which instrument is string or percussion: Binzasara, Calabrian Lira CentralizedModel Calabrian Lira is string, Binzasara is percussion. LLaMA Binzasara is a string instrument. Calabrian Lira is a percussion instrument. Local-1 Binzasara is string, Calabrian Lira is percussion. Local-2 Binzasara is string, Calabrian Lira is percussion. Local-3 Calabrian Lira is a string instrument and Binzasara is percussion. Shepherd-7B (ours) Calabrian Lira is string, Binzasara is percussion. ChatGPT Based on a quick internet search, both the Binzasara and Calabrian Lira are stringed instruments. Although there remains room for improvement in our model\u2019s quality when compared to the CentralizedModel and ChatGPT, the performance gap between our model and ChatGPT does not imply that our model is consistently inferior. As evidenced in Table 5, our response accurately addresses the question, while ChatGPT fails. Interestingly, ChatGPT falsely claims to have conducted an internet search, which diminishes the response\u2019s helpfulness and honesty. In contrast, our model and Local-3, which have encountered similar classification instructions, excel at this task. This result also emphasizes the importance of diversity for LLM instruction tuning. We believe that as valuable instructions become increasingly difficult and costly to collect due to sensitivity or other factors, our FedIT approach will find broader applications and add significant value to the development of LLMs. 
5 Future Directions

5.1 Computation and Communication Overhead

Deploying LLMs in FL poses major challenges in terms of the colossal communication cost and the computational and storage overhead of local clients. FL faces significant communication challenges because it requires frequent exchanges of model information (parameters or gradients) among distributed clients and the server. When FL is used for LLMs, the communication overhead becomes even more significant, with gigabit-level data transmissions necessary to achieve centralized training performance. This level of communication overhead is not acceptable for FL systems. Furthermore, local clients may not have the computing power to fine-tune the entire LLM, and storing different instances for various tasks is also memory-intensive. As a result, it is crucial to develop LLM-oriented FL methods that work within these communication and resource constraints. A promising direction for future work is therefore to propose new parameter-efficient tuning (PETuning) methods, such as Prefix-tuning [42], LoRA [31], and BitFit [91], that are tailored for FL systems and yield competitive results. Such methods are a natural remedy for the communication and resource constraints mentioned above.

5.2 Privacy

FL has gained popularity in privacy-sensitive NLP applications due to its ability to preserve privacy, especially when the client's data is highly sensitive and cannot be transmitted outside their device. By preserving a notion of privacy, FL has emerged as a preferred approach for privacy-sensitive NLP tasks such as medical text tasks [71] and financial text classification [4]. The advancement of pre-trained language models (PLMs) has created an opportunity to use FL in privacy-sensitive NLP applications by combining the two techniques, and this combination is now a viable and promising solution. However, LLMs in FL pose distinctive core challenges, one of which is the potential for malicious clients to pollute the FL process by injecting crafted instructions. Such instructions can lead to biased or suboptimal models. To fully unlock the benefits of FL for LLMs, these concerns must be addressed. Designing robust aggregation and outlier detection techniques that can detect and exclude clients whose behavior is abnormal in ways particular to LLMs is therefore an interesting direction for future work.

5.3 Personalization

When FL is deployed for LLMs, personalization becomes a critical requirement due to the differences among the language data (instructions) held by distributed clients and the averaging of learning updates across a decentralized population [51]. The former is further complicated by language diversity, domain-specific instructions, task complexity, emotional tone, cultural factors, and other new aspects of heterogeneity [46, 86]. For instance, in multilingual applications, fairness across languages, especially those with fewer data samples, is essential but hard to achieve [82, 86]. In domain-specific contexts, distinct sentence structures add to the heterogeneity of the framework, calling for new personalization methods to ensure the efficacy of the language model; one such pattern is sketched below.
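As one illustration of the personalization idea above, a minimal sketch that keeps a client-specific embedding out of the federated payload while the shared modules are aggregated as usual; the module names and sizes are our assumptions, not a method from the works cited here.

```python
# Sketch: a shared backbone (exchanged with the server) plus a per-client
# embedding that never leaves the device.
import torch
import torch.nn as nn

class PersonalizedClient(nn.Module):
    def __init__(self, vocab=1000, dim=64, n_classes=4):
        super().__init__()
        self.shared = nn.EmbeddingBag(vocab, dim)        # federated
        self.personal = nn.Parameter(torch.zeros(dim))   # private, stays local
        self.head = nn.Linear(dim, n_classes)            # federated

    def forward(self, token_ids):
        ctx = self.shared(token_ids)                     # shared context embedding
        return self.head(ctx + self.personal)            # fused with personal part

def federated_state(model):
    # Only the shared parameters are uploaded for aggregation.
    return {k: v for k, v in model.state_dict().items()
            if not k.startswith("personal")}

client = PersonalizedClient()
print(sorted(federated_state(client)))  # shared.* and head.*, no "personal"
```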
Methods such as meta-learning [2, 14, 22], few-shot learning [15, 83], approaches that combine personal embeddings with shared context embeddings, and preference embeddings that facilitate personalization without the need for backpropagation all have the potential to advance the field of NLP.

5.4 Defense Against Attacks

Recent research has highlighted the possibility of recovering text from the gradients of language models [3, 26]. This vulnerability can also arise from the models' tendency to memorize their training data and can result in the inadvertent disclosure of sensitive information. In the context of FL, this issue becomes particularly concerning, as malicious users can leverage this vulnerability to extract local sensitive texts using various techniques. Although several methods, including gradient pruning [101] and Differentially Private Stochastic Gradient Descent (DPSGD) [1], have been proposed as defense mechanisms against these attacks, they often come at the cost of significant utility loss [26]. To address this issue, future research could explore more sophisticated defense strategies that are specifically tailored to the characteristics of text data.
" + }, + { + "url": "http://arxiv.org/abs/2212.13675v1", + "title": "XMAM:X-raying Models with A Matrix to Reveal Backdoor Attacks for Federated Learning", + "abstract": "Federated Learning (FL) has received increasing attention due to its privacy\nprotection capability. However, the base algorithm FedAvg is vulnerable when it\nsuffers from so-called backdoor attacks. Former researchers proposed several\nrobust aggregation methods. Unfortunately, many of these aggregation methods\nare unable to defend against backdoor attacks. What's more, the attackers\nrecently have proposed some hiding methods that further improve backdoor\nattacks' stealthiness, making all the existing robust aggregation methods fail.\n To tackle the threat of backdoor attacks, we propose a new aggregation\nmethod, X-raying Models with A Matrix (XMAM), to reveal the malicious local\nmodel updates submitted by the backdoor attackers. Since we observe that the\noutput of the Softmax layer exhibits distinguishable patterns between malicious\nand benign updates, we focus on the Softmax layer's output in which the\nbackdoor attackers are difficult to hide their malicious behavior.\nSpecifically, like X-ray examinations, we investigate the local model updates\nby using a matrix as an input to get their Softmax layer's outputs. Then, we\npreclude updates whose outputs are abnormal by clustering. Without any training\ndataset in the server, the extensive evaluations show that our XMAM can\neffectively distinguish malicious local model updates from benign ones. For\ninstance, when other methods fail to defend against the backdoor attacks at no\nmore than 20% malicious clients, our method can tolerate 45% malicious clients\nin the black-box mode and about 30% in Projected Gradient Descent (PGD) mode.\nBesides, under adaptive attacks, the results demonstrate that XMAM can still\ncomplete the global model training task even when there are 40% malicious\nclients.
Finally, we analyze our method's screening complexity, and the results\nshow that XMAM is about 10-10000 times faster than the existing methods.", + "authors": "Jianyi Zhang, Fangjiao Zhang, Qichao Jin, Zhiqiang Wang, Xiaodong Lin, Xiali Hei", + "published": "2022-12-28", + "updated": "2022-12-28", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR", + "cs.DC" + ], + "main_content": "Introduction

Federated Learning (FL) Konečný et al. [2016], McMahan et al. [2017] is a burgeoning technology. To sum up, FL possesses the following three characteristics, which distinguish it from standard distributed learning: (i) the server gathers clients' local models instead of their training data to train a global model jointly; (ii) the distribution of client training data is non-i.i.d.; and (iii) the server cannot control the training processes of clients. (Corresponding authors: zjy@besti.edu.cn, xiali.hei@louisiana.edu.) With the promulgation of privacy regulations such as the General Data Protection Regulation (GDPR), FL has received increasing attention and application due to its privacy protection capability.

The first FL aggregation method is FedAvg McMahan et al. [2017], proposed by Google. However, the base algorithm FedAvg is vulnerable when malicious clients take part in the training process. In FL, a malicious client inducing the global model to misclassify data selected by the attacker while guaranteeing the convergence of the global model is called a backdoor attack Bagdasaryan et al. [2020] (i.e., a targeted attack Kairouz et al. [2019]). A malicious client can use data poisoning attacks, which poison the client's local training data, or local model poisoning attacks, which modify the client's local model update, to achieve this purpose. In this paper, we focus on backdoor attacks (Trigger attack Gu et al. [2017], Semantic attack Bagdasaryan et al. [2020], and Edge-case attack Wang et al. [2020]) since they are the more challenging problems in FL. We also explain our method's performance under adaptive attacks.

There are still many deficiencies in existing aggregation methods. To make the malicious local model updates better concealed, attackers have proposed hiding techniques that make them indiscernible from the benign local model updates. Former researchers proposed several robust aggregation methods, all of which focus on the parameters of the local model updates. For instance, Krum Blanchard et al. [2017] computes the Euclidean distances between local model updates and selects the one with the smallest distance as the global update. However, aggregation methods based on Euclidean distance find it difficult to distinguish the malicious local model updates from the benign ones when the backdoor attacks operate in hiding modes (e.g., Projected Gradient Descent (PGD) mode Wang et al. [2020] and Stealthy Model Poisoning (SMP) mode Bhagoji et al. [2019]). FLTrust Cao et al. [2020] performs well in defending against a larger proportion of malicious clients, but it requires the server to possess a small batch of training data, which might be impractical in some fields (e.g., financial and digital health Rieke et al. [2020]) because the private local training data would be challenging to obtain or fabricate.
Our work: To tackle the threat of backdoor attacks that use hiding techniques, we propose a new aggregation method, X-raying Models with A Matrix (XMAM). Like an X-ray in a medical examination, we use a matrix to examine the local model updates. Specifically, we directly use the submitted local model updates as the parameters of the network and input a matrix (a random matrix is enough) to get the Softmax layer's outputs. Then, we preclude local model updates whose Softmax layer's outputs are abnormal by clustering. In our method, the malicious local model updates generated by backdoor attacks that use hiding techniques can be easily distinguished from the benign local model updates.

XMAM can defend against existing attacks: We evaluate our aggregation method's superiority on three backdoor attacks (Trigger attack Gu et al. [2017], Semantic attack Bagdasaryan et al. [2020], and Edge-case attack Wang et al. [2020]) under the Black-box mode Wang et al. [2020], the PGD mode Wang et al. [2020], and the SMP mode Bhagoji et al. [2019]. Note that the Black-box mode is the initial mode, in which the attackers do not use hiding techniques; the PGD and SMP modes are two advanced modes in which they do. Compared with six existing aggregation methods (FedAvg, NDC, RSA, RFA, Krum, and Multi-Krum), we find that some methods fail even when the backdoor attacks operate in the Black-box mode, and others fail when the backdoor attacks operate in the PGD or SMP mode. Only Krum and our method successfully defend against backdoor attacks regardless of the mode, and our method performs best, since Krum collects only one local update as the global update per iteration. Although this allows Krum to avoid malicious local model updates, it also makes global model convergence very slow and reduces accuracy. Furthermore, we evaluate our method on two adaptive attacks: the Krum attack, proposed by Fang et al. [2020], and the XMAM attack, which we design ourselves following the framework of the Krum attack. The results show that our method is resilient to adaptive attacks.

Our contributions can be summarized as follows:
• To the best of our knowledge, our work is the first to reveal backdoor attacks in FL by focusing on the Softmax layer's outputs instead of the local model updates. The experimental results demonstrate that the Softmax layer's output can reflect a model's information, and different models show different information.
• We successfully tackle the threat of backdoor attacks that use hiding techniques. The experimental results show that our method detects malicious local model updates more effectively than the existing robust aggregation methods.
• Our method is not vulnerable when attackers implement adaptive attacks, which further demonstrates its robustness.
• Our method is time-saving in the detection process compared with other methods, since we mainly work in an M-dimensional space while the others work in a ζ-dimensional space, where M is the number of data classes and ζ is the number of parameters of a local model update.

2 Background

2.1 Federated Learning (FL)

In FL, multiple clients jointly train a global model on their imbalanced local datasets. Ideally, the optimization model is as follows:

$\min_w \left\{ F(w) = \sum_{i=1}^{N} \frac{1}{N} F_i(w) \right\}$   (1)

where N is the number of clients and $F_i(\cdot)$ is the local objective, defined by

$F_i(w) = \frac{1}{|D_i|} \sum_{j=1}^{|D_i|} L(w, x_{i,j})$   (2)

where $L(\cdot, \cdot)$ is a user-specified loss function and the i-th client holds the $|D_i|$ training examples $x_{i,1}, x_{i,2}, \ldots, x_{i,|D_i|}$. A small numerical sketch of this objective follows.
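A minimal numerical sketch of the objective in Eqs. (1)-(2), under an assumed squared loss $L(w, x) = \|w - x\|^2$; the clients and data are synthetic and only illustrate how the global objective averages the local ones.

```python
import numpy as np

rng = np.random.default_rng(0)
clients = [rng.normal(loc=i, size=(20, 5)) for i in range(4)]  # local datasets D_i

def local_objective(w, D):                  # F_i(w), Eq. (2)
    return np.mean([np.sum((w - x) ** 2) for x in D])

def global_objective(w, datasets):          # F(w), Eq. (1), uniform 1/N weights
    return np.mean([local_objective(w, D) for D in datasets])

w = np.zeros(5)
print(f"F(w) at w=0: {global_objective(w, clients):.2f}")
```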
We roughly divide FL into three steps in one iteration (illustrated in Fig. 1): (i) the server sends the aggregated global model to the clients; (ii) the clients update the global model on their local training datasets and return the local model updates to the server; (iii) the server collects a portion of the clients' local model updates and aggregates them to update the global model for the next iteration. When the server is the only defender, the only way to resist poisoning attacks is to focus on step (iii), so the definitions of all methods below concern step (iii). Note that the global model and the local model updates discussed in this paper are parameters.

[Figure 1: Illustration of the three steps in one iteration of FL. There are N clients (e.g., smartphones or edge devices) and a server (service provider, e.g., Google or Amazon). Each client has a different number of data points and classes, representing the heterogeneous distribution of client data.]

2.2 Backdoor attacks in federated learning

Generally, obtaining high accuracy on the testing dataset is a model's main task, and the main task can be considered a series of sub-tasks, such as the performance on a certain type of data. Manipulating the performance of a model on special types of data is the so-called backdoor attack. Backdoor attacks (targeted attacks Kairouz et al. [2019]) do not destroy the global model's accuracy on the main task but induce it to make attacker-chosen mistakes on backdoor tasks. In the paradigm of FL, the server has no power to inspect the cleanliness of clients' data. Thus, a malicious client might poison his local data and train a malicious local model update based on it. When the server frequently receives and aggregates such malicious local model updates, the global model becomes compromised on the backdoor tasks.

Currently, there are three typical backdoor threats in FL: the Trigger backdoor, the Semantic backdoor, and the Edge-case backdoor. The Trigger backdoor assumes the malicious clients poison their local data by stamping a pattern on images and modifying the labels to any class they want. The ultimate global model will then make an attacker-chosen judgment on data carrying the same pattern. The Trigger backdoor therefore requires data modification both in the local model's training period and in the global model's inference period. The Semantic backdoor does not require data modification at inference time. It directly exploits a special feature present in some data (e.g., a green car, a car painted with stripes, or a car with a striped background) and modifies those labels to any class the attacker wants. The ultimate global model then over-learns the special feature and makes a prejudiced judgment on data containing it. The latest backdoor is the Edge-case backdoor, which directly uses data rarely seen in the clients' datasets and modifies their labels so that the ultimate global model misclassifies those data. The Edge-case backdoor further reveals the security issue of FL on rare data. Fig. 2 illustrates the concrete operations of the three backdoor attacks; a sketch of the trigger-style poisoning follows below.
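A minimal sketch of trigger-style data poisoning as described above; the block size and target class are illustrative, not the exact pattern used in the experiments later.

```python
import numpy as np

def stamp_trigger(image, target_label, block=3):
    """Stamp a white pixel block in the corner and flip the label."""
    poisoned = image.copy()
    poisoned[-block:, -block:] = 1.0       # white square in the corner
    return poisoned, target_label          # attacker-chosen class

img = np.random.rand(32, 32)               # stand-in for one CIFAR-10 channel
poisoned_img, label = stamp_trigger(img, target_label=0)  # e.g. "airplane"
```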
[Figure 2: Illustration of the three backdoor attacks in FL. (a) Trigger backdoor: a white pixel block is stamped on the corner of images, which are labeled as airplanes. (b) Semantic backdoor: images of green cars are labeled as birds. (c) Edge-case backdoor: images of southwest airplanes, which are rare data, are labeled as trucks.]

Why do backdoor attacks threaten FL so much? Earlier, attackers wanted to embed the backdoor into the global model in one shot through a single malicious client. They therefore amplified the malicious local model update optimized on the poisoned local data to drown out the contributions of the benign clients' local model updates. In other words, the global model is replaced by the malicious client's model in just one aggregation, the so-called model replacement technique Bagdasaryan et al. [2020] (sketched below). Assume a malicious local model update trained on the poisoned local dataset is û and the server collects τ local model updates in each iteration. A malicious client using model replacement magnifies û before submitting it, usually by a factor of τ. However, this naive operation magnifies the anomaly of the malicious local model update so much that most current robust aggregation methods can detect and preclude it. For example, the aggregation method of Blanchard et al. [2017] precludes abnormal local model updates based on Euclidean distance.

Attackers then found that a group of collusive malicious clients, jointly attacking the global model over several iterations, can also successfully embed a backdoor without model replacement. Since this only requires the malicious clients to inject poisoned data into their clean local datasets without any other manipulation, this attack mode is called the black-box mode Wang et al. [2020]. Although the malicious local model updates optimized under the black-box mode are more similar to benign local model updates, they may still have an unusual angle or magnitude compared with the benign ones (Fig. 3(a)). Thus, some existing aggregation methods, such as Krum and Multi-Krum, can detect them when the proportion of malicious clients does not surpass 50% in each iteration.

[Figure 3: Illustration of a malicious local model update (a) before hiding (black-box mode) and (b) after hiding (hiding mode, e.g., PGD). The black vectors denote the benign local model updates, and the red vector denotes the malicious one. When the malicious client uses hiding techniques, the malicious local model update shrinks in magnitude, making it hard to distinguish from benign local model updates in Euclidean space.]

To make the malicious local model updates more concealed (Fig. 3(b)), attackers proposed two hiding techniques: PGD model poisoning Wang et al. [2020] (the PGD mode below) and stealthy model poisoning Bhagoji et al. [2019] (the SMP mode below).
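A minimal numerical sketch of the model replacement scaling referenced above; all sizes and magnitudes are made up. Magnifying the malicious update by τ cancels the 1/τ averaging weight, so a single aggregation step is dominated by the attacker.

```python
import numpy as np

tau = 30
benign = [np.random.randn(100) * 0.01 for _ in range(tau - 1)]
u_hat = np.random.randn(100) * 0.01            # malicious update
submitted = tau * u_hat                         # model replacement scaling

aggregate = (sum(benign) + submitted) / tau     # FedAvg-style averaging
print(np.linalg.norm(aggregate - u_hat))        # small: global step is ~ u_hat
```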
Attacks in the PGD mode scale the malicious local model update down to a smaller norm, which makes it indiscernible from the benign local model updates:

$\hat{u}' = \epsilon \hat{u} / \|\hat{u}\|_2$   (3)

where ε is the scaling-down magnitude and û' is the malicious local model update after scaling down. Attacks in the SMP mode use a new objective function that considers three losses of a model: the loss on clean data, the loss on poisoned data, and the distance between the model after training and the model before training. With these three constraints, a model trained in the SMP mode is similar to the benign models while keeping its backdoor alive. The adversarial objective therefore becomes:

$\arg\min_{\hat{w}} \; \rho_1 L(D_p, \hat{w}) + L(D_c, \hat{w}) + \rho_2 \|\hat{w} - w_g\|_2$   (4)

Then, the malicious local model update û in the SMP mode is:

$\hat{u} = \hat{w} - w_g$   (5)

where $D_p$ is the poisoned data in the client's local dataset, $D_c$ is the clean data, $w_g$ is the global model of the previous iteration, $\hat{w}$ is the malicious client's local model after training, and $\rho_1$ and $\rho_2$ are the weights of the objective function.

2.3 Existing robust aggregation methods

The existing robust aggregation methods can be roughly divided into three categories: (i) limiting the update, (ii) finding the "center", and (iii) detecting and precluding. The typical representatives of the three categories are introduced below; the notation is listed in Appendix 3.

The first category is limiting the update. The core of this type of method is to punish local model updates with a large norm or to regularize all local model updates to a small norm. Representative methods:

NDC Sun et al. [2019]: Norm Difference Clipping (NDC) scales down any local model update whose norm exceeds the threshold δ, dividing it by $\|u_i\|_2 / \delta$ whenever that ratio is greater than 1. The clipped aggregate is calculated as follows:

$u^c = \sum_{i=1}^{\tau} \frac{u_i}{\max(1, \|u_i\|_2 / \delta)}$   (6)

where $u^c$ is the aggregate of the clipped local model updates and δ is the clipping parameter.

RSA Li et al. [2019]: To punish abnormal local model updates, RSA considers only the directions, rather than the magnitudes, of the local model updates in each iteration, so all local models are constrained within a boundary:

$u^{t+1} = \sum_{i=1}^{\tau} \beta_r \, \mathrm{Sign}(u_i^t)$   (7)

where Sign(x) equals 1 when x > 0, -1 when x < 0, and an arbitrary value within [-1, 1] when x = 0.

The second category is finding the "center". The core of this type of method is to exploit the local model updates submitted by clients to find a compromise update, the "center" of the local model updates, to update the global model. Representative method:

RFA Pillutla et al. [2019]: RFA takes the weighted geometric median of the collected local model updates, computed with the smoothed Weiszfeld algorithm, as the aggregated global model. One round of the smoothed Weiszfeld algorithm is computed as follows:

$q_i^r = \frac{p_i}{\max(v, \|z^r - u_i\|_2)}$   (8)

$z^{r+1} = \frac{\sum_{i=1}^{\tau} q_i^r u_i}{\sum_{i=1}^{\tau} q_i^r}$   (9)

where $p_i = |D_i| / |D|$, v is the smoothing factor, and $z^r$ is the geometric median estimate in the r-th round.

The third category is detecting and precluding. The core of this type of method is to detect the malicious local model updates and preclude them. Representative methods:

Krum Blanchard et al. [2017]: Krum assumes the server knows the number f of malicious clients in each iteration and then selects as the global model update the local model update $u^\star$ that lies at the geometric center of its $\tau - f - 2$ nearest local model updates. $u^\star$ is computed as follows:

$u^\star = \arg\min_{u_i \in \{u_1, \ldots, u_\tau\}} \sum_{u_j \in \Omega_{i, \tau - f - 2}} \|u_i - u_j\|_2^2$   (10)

where $\Omega_{i, \tau - f - 2}$ is the set of $\tau - f - 2$ local model updates with the smallest Euclidean distance to $u_i$.

Multi-Krum Blanchard et al. [2017]: Multi-Krum is a variant of Krum that collects the $\tau - f - 2$ best-scoring clients' local model updates and integrates them into the global model update. A minimal sketch of the Krum rule follows.
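A minimal sketch of the Krum selection rule in Eq. (10), assuming the server knows f as the method requires; the updates here are random stand-ins for real local model updates.

```python
import numpy as np

def krum(updates, f):
    """Return the update with the smallest summed squared distance
    to its tau - f - 2 nearest neighbours."""
    tau = len(updates)
    k = tau - f - 2
    scores = []
    for i, ui in enumerate(updates):
        dists = sorted(float(np.sum((ui - uj) ** 2))
                       for j, uj in enumerate(updates) if j != i)
        scores.append(sum(dists[:k]))      # closest tau - f - 2 neighbours
    return updates[int(np.argmin(scores))]

updates = [np.random.randn(50) for _ in range(10)]
chosen = krum(updates, f=2)                # the single update Krum keeps
```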
3 Problem setup

Threat model: Following Cao et al. [2020] and Fang et al. [2020], we make the following assumptions about malicious clients: (i) they have access to the global models of previous iterations; (ii) they can manipulate their local training data and local model updates in any way; (iii) they can control local training hyper-parameters such as the local learning rate and the number of local training epochs; and (iv) the number of malicious clients is less than 50% of the total. Assumptions (ii) and (iii) also imply that the malicious clients do not know the benign clients' local training data or local model updates and cannot interfere with the training processes of the benign clients. This setting is defined as partial knowledge by Fang et al. [2020]. Moreover, under the condition that the server has no dataset, assumption (iv) is common in other papers Wang et al. [2020], Blanchard et al. [2017], Sun et al. [2019], Li et al. [2019], Pillutla et al. [2019].

Defense goals: As in Cao et al. [2020], we evaluate our method from three aspects: fidelity, robustness, and efficiency. For fidelity, we expect our aggregation method not to sacrifice performance compared with FedAvg when there are no backdoor attacks. For robustness, the goal is performance comparable to FedAvg* (no malicious client participates) under the most powerful backdoor attacks. For efficiency, we aim to reduce the screening cost to a negligible level.

Defender's knowledge and capability: We assume the server is the only defender and make the following assumptions: (i) the server has no access to the clients' local training data; (ii) the server has full access to the global model and the local model updates from all clients in each iteration Cao et al. [2020]; (iii) the server does not know the number of malicious clients Cao et al. [2020], Sun et al. [2019], Li et al. [2019], Pillutla et al. [2019]; and (iv) the server has no dataset. Compared with Fang et al. [2020], which assumes the server has a test dataset to validate the collected models' accuracy, and Cao et al. [2020], which hypothesizes that the server has a root dataset (a small dataset containing about 100 training examples) to train a benchmark update, our assumption (iv) is more practical from the perspective of privacy.

4 XMAM overview and design

4.1 High-level idea

Motivation: Although former researchers have proposed multiple robust aggregation methods, backdoor attacks with hiding techniques still pose a threat to FL.
A malicious local model update that has been scaled down can still embed a backdoor into the global model, which shows that aggregation methods that limit the magnitude of local model updates (e.g., NDC Sun et al. [2019] and RSA Li et al. [2019]) are infeasible. Methods that attempt to find a geometric center of the local model updates (e.g., RFA Pillutla et al. [2019]) fail to mitigate the impact of malicious local model updates. Methods based on detecting and precluding (e.g., Krum and Multi-Krum) are effective measures to thoroughly eliminate the effect of malicious local model updates; however, the detection mechanisms of Krum and Multi-Krum lose efficacy when the malicious local model updates become more concealed. To improve the safety of FL, a new aggregation method that can effectively address this problem is desired.

[Figure 4: SLOUs of ten local model updates across the ten classes; the plotted probabilities lie between 0.0998 and 0.10015. The first three SLOUs come from malicious clients, and the others from benign clients.]

Challenges: As mentioned above, we think the most effective way to defend against backdoor attacks is detecting and precluding, so we follow this idea in designing the new method. A new method faces two critical challenges:
• The first challenge is to distinguish the malicious local model updates from benign ones when the malicious clients use hiding techniques.
• The second challenge is to design the new method without assuming the server has a dataset.

Key observation and idea: Our key idea is that we directly investigate the collected local model updates by using a matrix as input to get their Softmax layer's outputs. The elements of this matrix can be random numbers; in Appendix .1 we illustrate the feasibility of this method. In the following, we refer to the Softmax layer's output of a model whose parameters are a local model update as its SLOU. Our key observation is that the SLOUs exhibit distinguishable patterns between malicious and benign updates. For example, Fig. 4 shows the SLOUs of the updates of ten clients in a certain iteration, three of which were generated using the Edge-case attack in black-box mode. Furthermore, we plot the dimensionality-reduced distribution of the local model updates of 100 clients and of the 100 corresponding SLOUs in a certain round (Fig. 8), where the local model updates of 20 clients were generated by an Edge-case attack in PGD mode. We find that as the PGD parameter ε gets smaller (ε = 1, ε = 5e-1, and ε = 5e-2), i.e., as the norm of the malicious local model updates shrinks, the 20 malicious local model updates become indistinguishable from the benign ones in the dimensionality-reduced distribution of the updates themselves. In this case, a traditional method such as Multi-Krum, which is based on Euclidean distance and selects the k updates at the center of the Euclidean space, will select the malicious updates. In the dimensionality-reduced distribution of the 100 corresponding SLOUs, however, there is still a clear distinction between malicious and benign SLOUs. Thus, our method can easily preclude malicious updates by clustering; a sketch of the SLOU probe follows.
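A minimal sketch of the SLOU probe described above, assuming each submitted update is a state dict matching the server's model architecture; the tiny linear model here stands in for the CNNs used in the experiments.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 10))
probe = torch.ones(1, 32, 32)                  # an all-ones probe matrix

def slou(update_state_dict):
    """Load a received update into the model and return its softmax output."""
    model.load_state_dict(update_state_dict)
    with torch.no_grad():
        return torch.softmax(model(probe), dim=1)   # M-dimensional fingerprint

fingerprint = slou(model.state_dict())          # same probe for every update
```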
4.2 XMAM design

Our new aggregation method consists of three parts: EXAMINING (the server examines the received local model updates using a matrix as input to get the SLOUs), CLUSTERING (the server clusters these SLOUs and decides, from the clustering result, which local model updates are benign and should be preserved), and AGGREGATION (the server aggregates the preserved local model updates and uses them to update the global model). Fig. 5 illustrates the procedure.

[Figure 5: The procedure of XMAM. After receiving the global model $w_g^{t-1}$ of iteration t-1 and training on it, the τ clients (f of them malicious) submit their local model updates $u_{c_1}^t, \ldots, u_{c_\tau}^t$ to the server. In the Examining step, only three clients i, j, and k and their updates $u_i^t$, $u_j^t$, and $u_k^t$ are drawn. After obtaining the SLOUs with a matrix input, the server clusters the SLOUs, preserves the local model updates $(u_{c_1}^t, \ldots, u_{c_{\tau'}}^t)$ whose SLOUs fall in the major cluster, and aggregates them into the global update $u_g^t$.]

EXAMINING: Considering that the server has no dataset, we generate a matrix and use it as input to examine all the local model updates, like an X-ray in a medical examination. Using the same matrix as input, the server can extract the features (i.e., the SLOUs) of all local model updates in each iteration. The EXAMINING step is defined as follows:

$SLOU_i^t = \mathrm{Network}(u_i^t, D_{mat})$   (11)

where $SLOU_i^t$ denotes the SLOU of the i-th client's local model update at iteration t, $D_{mat}$ is the matrix, and Network(·, ·) means inputting the data and obtaining the SLOU.

CLUSTERING: After EXAMINING, the server holds the SLOUs of the received local model updates. Notice that we have no knowledge of how many of them are malicious; there might be no malicious local model updates at all. In addition, the local model updates, shaped by the non-i.i.d. clients' local datasets, are innately biased. Therefore, we cannot simply divide the local model updates into two clusters, i.e., we cannot use a clustering algorithm such as K-means, for which the number of clusters must be defined in advance. We instead use HDBSCAN Campello et al. [2013] as our clustering algorithm, since it divides clusters automatically based on node density. After clustering, the server preserves only the local model updates in the major cluster, the one with the maximum number of nodes; the precluded updates may be malicious local model updates or strongly biased benign ones. We can express this as follows:

$Clusters = \mathrm{HDBSCAN}(SLOU_1, \ldots, SLOU_\tau)$   (12)

$\mathrm{Preserve}: \{u_i \mid i \in Cluster_{major}\}$   (13)

where Clusters is the clustering result of the HDBSCAN algorithm and $Cluster_{major}$ is the set of node ids in the largest cluster.

AGGREGATION: In the last step, the server aggregates the preserved τ' local model updates to update the global model:

$u^{t+1} = \sum_{i=1}^{\tau'} u_i^t$   (14)

$w^{t+1} = w^t + \eta_g u^{t+1}$   (15)

where $w^{t+1}$ is the aggregated global model for iteration t+1 and $\eta_g$ is the global learning rate. A sketch of the clustering step follows.
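A minimal sketch of the CLUSTERING step in Eqs. (12)-(13), assuming the third-party hdbscan package; min_cluster_size is an illustrative choice, not a value from the paper.

```python
import numpy as np
import hdbscan

def major_cluster_ids(slous, min_cluster_size=5):
    """Cluster the stacked SLOUs and return the ids in the largest cluster."""
    labels = hdbscan.HDBSCAN(min_cluster_size=min_cluster_size).fit_predict(
        np.stack(slous))
    valid = labels[labels >= 0]                 # label -1 marks noise points
    major = np.bincount(valid).argmax()         # densest (largest) cluster
    return [i for i, lab in enumerate(labels) if lab == major]

# Usage: keep only the updates whose SLOU landed in the major cluster.
# preserved = [updates[i] for i in major_cluster_ids(slous)]
```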
4.3 Complete XMAM algorithm

Algorithm 1 presents our complete XMAM method in a specific global iteration. The server takes three steps: (i) after receiving a certain number of client local model updates, the server inputs a matrix into the models whose parameters are the local model updates to get the SLOUs; (ii) the server clusters these SLOUs with the HDBSCAN algorithm and sorts the clusters by the number of SLOUs to get the major cluster; (iii) the server aggregates the local model updates whose SLOUs are in the major cluster to update the global model.

Algorithm 1: Illustration of XMAM on τ clients C1, C2, ..., Cτ in the t-th global iteration. τ' is the number of preserved local model updates whose SLOUs fall in Cluster_major.
Input: the received τ client local model updates; a random matrix D_mat.
Output: the global model $w^{t+1}$.
/* Step I: Examining */
for i = C1, C2, ..., Cτ do
    SLOU_i = Network(u_i^t, D_mat)
/* Step II: Clustering */
Clusters = HDBSCAN(SLOU_1, ..., SLOU_τ)
Cluster_major = Sort(Cluster_1, Cluster_2, ...)[0]
/* Step III: Aggregation */
$w^{t+1} = w^t + \eta_g \sum_{i \in Cluster_{major}} u_i^t$
return $w^{t+1}$

4.4 Mathematical explanations

In this section, we illustrate our method in mathematical terms. Because the networks in our experiments are Convolutional Neural Networks (CNNs), we walk the matrix through a convolution layer, a pooling layer, and a fully connected layer to get the output. Given an input matrix $I = (i_{p,q}) \in \mathbb{R}^{n \times n}$ and a $3 \times 3$ convolution core $(a_{r,s})$ applied with step 1, the convolution produces an $(n-2) \times (n-2)$ matrix $A$ with entries

$A_{p,q} = i_{p-1,q-1} a_{3,3} + i_{p,q-1} a_{3,2} + \cdots + i_{p+1,q+1} a_{1,1}$.

A ReLU with bias b is then applied entrywise, $\mathrm{Relu} = \mathrm{Relu}(A + b)$, followed by $3 \times 3$ max pooling, $\mathrm{Pool} = \mathrm{maxpooling}(\mathrm{Relu})$, which yields an $(n-4) \times (n-4)$ matrix $P$. The fully connected layer with weights $s$ and bias $\hat{b}$ computes

$\mathrm{out}_k = \sum_i P_{k,i} s_i + \hat{b}_k, \quad k = 1, \ldots, n-4$,

and finally

$SLOU = \mathrm{softmax}(\mathrm{out}) = (slou_1, \ldots, slou_{n-4})$.

This is the whole process of the random matrix passing through the example network; a NumPy sketch of the same pipeline follows. We explain why this process can distinguish malicious local model updates from benign ones in Appendix .1.
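A minimal NumPy sketch of the example pipeline above (3x3 convolution, ReLU, 3x3 max pooling, fully connected layer, softmax); the input size, bias, and random weights are illustrative. The point is that the probe's softmax output depends on every layer's parameters, so different updates yield different SLOUs.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.ones((8, 8))                                   # the probe matrix (n = 8)
core = rng.normal(size=(3, 3))                        # convolution core

conv = np.array([[np.sum(x[p:p + 3, q:q + 3] * core)  # valid 3x3 convolution
                  for q in range(6)] for p in range(6)])
act = np.maximum(conv + 0.1, 0.0)                     # ReLU with bias b = 0.1
pool = np.array([[act[p:p + 3, q:q + 3].max()         # 3x3 max pooling, stride 1
                  for q in range(4)] for p in range(4)])
logits = pool @ rng.normal(size=4) + rng.normal(size=4)   # fully connected layer
slou = np.exp(logits) / np.exp(logits).sum()          # softmax output (n - 4 dims)
print(slou)
```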
5 Adaptive attacks

Adversaries may design adaptive attacks to bypass detection once they know which aggregation method the FL system uses. Adaptive attacks aim to increase the testing error rate of the global model, but they require more knowledge about the current FL system: first of all, a malicious client needs to know which aggregation method is in use, which might not be public knowledge. To further test XMAM's defensive capability, we evaluate it against the Krum attack Fang et al. [2020], an adaptive attack designed to break Krum and Multi-Krum. Furthermore, we develop our own adaptive attack, the XMAM attack, following the general framework proposed by Fang et al. [2020], and evaluate our method against it.

5.1 A general adaptive attack framework

The adaptive attack framework proposed by Fang et al. [2020] applies to all aggregation methods. In an adaptive attack, the malicious clients collude to cause a deviation of the global model that stays within the detection boundary. The most effective deviation is to find the opposite direction of the global model update and then modify the malicious local model updates toward that direction. Besides the direction, the remaining metric to consider is the magnitude, so finding the maximum magnitude within the detection boundary is critical. The general adaptive attack framework can be defined as follows:

$\max_{\lambda \in \mathbb{R}^+} \lambda$
subject to
$u_1' = A(u_1', \ldots, u_f', u_{f+1}, \ldots, u_\tau)$,
$u_1' = u_g - \lambda s$,
$u_i' = u_1', \quad i = 2, 3, \ldots, f$.   (16)

where $u_1', \ldots, u_f'$ are the f malicious local model updates, A(·) is the aggregation method the malicious clients want to attack, s is the Sign() of the global model update, and λ is the magnitude that maximizes the attack effect within the boundary.

5.2 Threat model for our adaptive attack

We assume all malicious clients are collusive and that their leader can obtain all clients' local model updates and arbitrarily modify the malicious ones. After receiving the global model, the malicious clients train their local model updates on their clean local data. These local model updates, together with the benign local model updates, form a distribution. The leader then searches for a suitable λ according to this distribution to bypass detection and modifies the other malicious clients' local model updates uniformly. This threat model is in line with the full-knowledge assumption Fang et al. [2020], Cao et al. [2020].

5.3 Our complete adaptive attack algorithm

We set the initial λ = 1 and, as in Fang et al. [2020], use a binary search to find the final λ. Specifically, we first calculate the correct direction of the global update, $s = \mathrm{Sign}(\sum_{i=1}^{\tau} u_i)$; the function Sign() is defined in Section 2.3. Then we set the malicious local model updates to the previous iteration's global update deviated in the inverse direction s by a magnitude λ. If XMAM catches the deviated malicious local model updates, we reduce the magnitude λ by half; otherwise, we return the current magnitude λ. The returned magnitude λ is thus the maximum attack magnitude the attackers can apply; a sketch of this search follows.
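A minimal sketch of the halving search just described; is_caught is a hypothetical callback that runs XMAM on the updates crafted as $u_g - \lambda s$ with the current λ and reports whether they were excluded from the major cluster.

```python
def search_lambda(is_caught, lam=1.0, floor=1e-10):
    """Halve lambda while the crafted updates are still caught; stop once
    they evade detection or lambda underflows the floor."""
    while lam > floor and is_caught(lam):
        lam /= 2.0
    return lam   # the largest magnitude that (if any) bypasses detection
```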
Algorithm 2: XMAM attack.
Input: the global update u_g; the τ client local model updates; a random matrix D_mat.
Output: the deviation parameter λ.
Initialize λ = 1
while λ > 1e-10 do
    for i = C1, C2, ..., Cf do u_i = u_g - λs
    Clusters = HDBSCAN(SLOU_1, ..., SLOU_τ)
    Cluster_major = Sort(Cluster_1, Cluster_2, ...)[0]
    if C1, C2, ..., Cf ∈ Cluster_major then return λ
    λ = λ / 2
return λ

6 Evaluation

In this section, we first evaluate the fidelity of XMAM and the other six aggregation methods. Then we implement the three backdoor attacks against FL with and without hiding techniques and compare the six former aggregation methods with XMAM to highlight XMAM's superior capability of defending against backdoor attacks. Furthermore, we implement the adaptive attack (Krum attack) proposed by Fang et al. [2020] and our self-designed adaptive attack (XMAM attack) to demonstrate the robustness of XMAM. Finally, we theoretically analyze the time complexity of each aggregation method and measure the real run time of one aggregation round on our device. The results show that our method is efficient; for example, it is about 10000 times faster than Multi-Krum. For simplicity, we use an all-ones matrix to evaluate our method in the experiments. Our code is publicly available at https://github.com/BESTICSP/XMAM.

6.1 Experimental setup

1) Datasets: We use two datasets from the computer-vision domain and sample them to simulate non-i.i.d. data. Note that we do not conduct our experiments in the i.i.d. scenario, since malicious local model updates are much easier to detect there. As in Wang et al. [2020], for each dataset we divide the training data into N piles using a Dirichlet distribution Hsu et al. [2019] and allocate them to N clients to simulate a practical FL system (i.e., the data distribution is non-i.i.d. and χ ∼ Dir(0.5, N), where χ is the data distribution, N is the number of total clients, and 0.5 is the distribution parameter); a sketch of this partition follows below.

CIFAR-10 Krizhevsky et al. [2009]: CIFAR-10 is a color image classification dataset with 50,000 predefined training examples and 10,000 testing examples. Each example belongs to one of 10 classes (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck).

MNIST LeCun et al. [1998]: The MNIST dataset of handwritten digits has a training set of 60,000 examples and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in fixed-size images.
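A minimal sketch of the Dir(0.5, N) partition described above; the label array is synthetic and the helper name is ours. Per class, client proportions are drawn from a Dirichlet distribution and the class's sample indices are dealt out accordingly.

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha=0.5,
                        rng=np.random.default_rng(0)):
    parts = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        props = rng.dirichlet(alpha * np.ones(n_clients))  # chi ~ Dir(alpha, N)
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for part, chunk in zip(parts, np.split(idx, cuts)):
            part.extend(chunk.tolist())
    return parts

labels = np.random.default_rng(1).integers(0, 10, size=50_000)  # CIFAR-10-like
client_indices = dirichlet_partition(labels, n_clients=200)
```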
2) Evaluated backdoor attacks and adaptive attacks: We first evaluate the performance of the seven aggregation methods (FedAvg, Krum, Multi-Krum, NDC, RFA, RSA, and XMAM) under the three backdoor attacks (Trigger attack, Semantic attack, and Edge-case attack). To show the threat of backdoor attacks with hiding techniques and the robustness of XMAM, we conduct experiments in the Black-box, PGD, and SMP modes. Note that the three attack modes are designed to make the server select the malicious local model updates; we do not additionally run the adaptive attacks in these modes, since the adaptive attacks already achieve this.

Trigger attack: As in Gu et al. [2017], we poison 30% of the local data of malicious clients by stamping a square pixel block on the corner of the images.

Semantic attack: As in Bagdasaryan et al. [2020], we use images of green cars (class: 1) to generate the poisoned training data and the backdoor test dataset. Consistent with Bagdasaryan et al. [2020], we set the target label to bird (class: 2).

Edge-case attack: As in Wang et al. [2020], we use images of the southwest airplane, which lie in the tail of the data distribution, to generate the poisoned training data and the backdoor test dataset, and we set the target label to truck (class: 9). Note that the images of the southwest airplane would be classified as an automobile (class: 1) if we did not implement backdoor attacks.

Adaptive attack: We use the adaptive attack designed in Section 5. We assume the server collects 20 clients in each iteration, 5% to 50% of which are malicious.

Attack modes: In the Black-box mode, a malicious client trains his local model on the poisoned dataset and submits the local model update to the server. For the PGD mode, we set the PGD parameter ε = 5e-2 for the three backdoor attacks. For the SMP mode, as in Bhagoji et al. [2019], we set ρ1 = 10 and ρ2 = 1e-4.

3) Defenders' settings: As in Wang et al. [2020], we set δ = 2 for NDC, and for RFA we set the smoothing factor v = 0.1, the fault-tolerance threshold µ = 1e-5, and the maximum number of rounds R = 500. We set the RSA learning rate βr = 5e-5 × 0.998^t, which our experiments show performs well when there is no attack.

4) Evaluation metrics: Consistent with Cao et al. [2020], we use the attack success rate to reflect the performance of backdoor attacks, since their aim is to raise accuracy on the backdoor task. Specifically, the attack success rate is the fraction of backdoor-task data that the global model classifies as the attacker-chosen class. For adaptive attacks, we use the testing error rate, since their goal is to increase the error of the global model on the testing dataset: the testing error rate is the fraction of the testing dataset that the global model misclassifies.

5) System settings: In the backdoor-attack experiments, we set 200 clients in the simulated FL system; the server collects 30 clients in each iteration, 20% of which are malicious. In the adaptive-attack experiments, we likewise set 200 clients with 30 collected per iteration, 40% of which are malicious. Furthermore, we plot the PCA scatter diagrams in a setting where the server collects 100 clients in each iteration, 20% of which are malicious. For CIFAR-10 we use the network VGG9, and for MNIST we use LeNet. See Table 1 for more details.

Table 1: The default FL system parameter settings.

Parameter | Backdoor attacks (Trigger, Semantic, Edge-case) | Adaptive attacks (Krum, XMAM)
Total number of clients | 200 | 200
Clients selected in each iteration | 30 | 30
Byzantine clients in each iteration | 20% | 40%
Frequency of attacks | 1 | 1
Local iterations | 1 | 1
Global iterations | 100 | 50
Batch size | 32 | 32
Combined learning rate | 0.001 × 0.998^t | 0.001 × 0.998^t
Optimizer | SGD | SGD
Momentum | 0.9 | 0.9
Weight decay | 1e-4 | 1e-4

6.2 Experimental results

1) Fidelity: When there is no malicious client in FL, we can see from Fig. 6 that all aggregation methods except Krum and RSA perform similarly to FedAvg. That is to say, most existing methods can ensure fidelity.
It is no surprise that Krum fluctuates in testing error rate, since it collects only one local model update to update the global model in each iteration. To punish malicious local model updates, RSA limits the magnitude of all received local model updates to a fixed number and preserves only the direction, which drives the ultimate global model toward a sub-optimal solution.

[Figure 6: The testing error rate of the global model under the different aggregation methods (FedAvg*, FedAvg, Krum, Multi-Krum, NDC, RFA, RSA, XMAM) over 100 iterations when there is no attack. Krum is fluctuant with a high testing error rate, RSA has a gap to FedAvg, and the other methods perform comparably to FedAvg.]

2) Robustness: Viewing Fig. 9 horizontally, we find that the different backdoor attacks perform differently; the Trigger backdoor attack, in particular, is harder to make succeed. In black-box mode, the Trigger backdoor attack only evades FedAvg and RFA, while the Semantic and Edge-case backdoors evade FedAvg, NDC, RFA, and RSA. This phenomenon is even more obvious in the SMP mode. Why does the Trigger backdoor attack achieve a poor attack success rate when the poisoned data proportion is the same as in the other two backdoor attacks? We conjecture that it is harder for a model to learn the feature of a backdoor pattern from varying backgrounds than from a fixed background: the Trigger backdoor attack poisons data by printing backdoor patterns on random data, whereas the backdoor patterns of the Semantic and Edge-case backdoor attacks are the data (containing the special feature) itself, so the Trigger backdoor attack needs more intensity and more iterations to succeed.

Viewing Fig. 9 vertically, we can observe the different performances in the different attack modes. Visually, the attacks using hiding techniques evade more aggregation methods than the attacks without them. Most notably, Multi-Krum is robust in black-box mode but fails in the PGD and SMP modes (see Fig. 9 (b)(c)(e)(f)(h)). This means that when backdoor attacks operate in PGD or SMP mode, no existing aggregation method except Krum and our method can defend against them. We analyze Krum below.

To illustrate why Multi-Krum fails but XMAM succeeds when the malicious local model updates become more concealed, we plot the distribution of 100 client local model updates using PCA dimension reduction. From Fig. 8 we observe that as the scaling-down magnitude increases, the malicious local model updates move ever closer to the benign ones, making them indistinguishable. In contrast, the SLOUs of malicious and benign local model updates retain a clear boundary even when the updates themselves are indistinguishable (Fig. 8(c)(f)). Since Multi-Krum collects the n - f local model updates closest to the other local model updates in Euclidean distance, it fails when the scaling-down magnitude is large enough.

Fig. 10 shows the testing error rate of the global model on the main task. We can easily observe that, whatever mode the backdoor attacks use, our aggregation method ensures a low testing error rate comparable to that of FedAvg*.
However, the other methods, especially RSA, have a higher testing error rate than XMAM. We now explain two special phenomena in Fig. 9. The first is that the attack success rate rises to about 30% under the Semantic backdoor attack even when there is no malicious client. Why does this happen? We surmise that the data on green cars are close to the data on birds in the output space of the clean global model. To verify this, we separately set the target label of the poisoned data to each of classes 0 through 9 and run 100 iterations to observe the global model's performance on the different backdoor tasks. The results show that images of green cars are most likely to be classified by a clean global model as automobiles (class 1: 33.6%), birds (class 2: 29%), and frogs (class 6: 25.2%); detailed results are in Fig. 7.

The second phenomenon is that Krum seems more powerful than FedAvg* (FedAvg under no attacks) and XMAM in defending against Semantic attacks. However, as explained for Fig. 7, the global model has an innate accuracy on the backdoor task even when there is no malicious client. Krum has a lower Attack Success Rate (ASR) because the global model trained by Krum has a lower innate accuracy on the backdoor task, which does not mean Krum defends against Semantic attacks better than XMAM. Like Krum, XMAM precludes malicious client models in every iteration under Semantic attacks. Therefore, both Krum and XMAM can evade Semantic attacks, but XMAM selects more benign client models per iteration when malicious clients are present, so XMAM performs close to FedAvg* on the main task, whereas Krum does not perform well enough there.

Performance on adaptive attacks: The global model that uses Krum as the aggregation method has a worse convergence rate and a higher testing error rate, which is especially conspicuous when the data distribution is non-i.i.d. In this part, we further examine the weakness of Krum in defending against the adaptive Krum attack, designed by Fang et al. [2020] to challenge the robustness of Krum. We use the Krum attack to test all aggregation methods on the two datasets, CIFAR-10 and MNIST. Fig. 12 shows that the Krum attack breaks the aggregation methods Krum and Multi-Krum and has a certain negative impact on all other methods except XMAM, which further demonstrates that our method is more robust than Krum and Multi-Krum.

[Figure 7: The test accuracy of the clean global model on the backdoor testing dataset when the testing data are labeled as each of classes 0-9 over 100 iterations. Images of green cars are most likely to be classified as automobile (class 1: 33.6%), bird (class 2: 29%), and frog (class 6: 25.2%), which verifies our conjecture.]

[Figure 8: The dimensionality reduction graphs of the 100 local model updates (panels a-c: no scaling, ε = 5e-1, ε = 5e-2) and the 100 corresponding SLOUs (panels d-f: no scaling, ε = 5e-1, ε = 5e-2). The red dots (20%) are the malicious local model updates or SLOUs, and the blue dots (80%) are the benign ones.]
[Figure 8: The dimensionality-reduction graphs of the 100 local model updates (panels (a)-(c)) and the 100 corresponding SLOUs (panels (d)-(f)), under no scaling, ε = 5e-1, and ε = 5e-2. The red dots (20%) are the malicious local model updates or SLOUs, and the blue dots (80%) are the benign ones.]

From left to right, we successively decrease the PGD parameter ε (i.e., we scale the local model updates down to a smaller norm) and observe the distribution of the 100 local model updates and the 100 corresponding SLOUs. We find that the PGD hiding technique makes the malicious local model updates indistinguishable from the benign ones, but it cannot hide the corresponding malicious SLOUs from the benign SLOUs.

To further verify the robustness of XMAM, we design a new adaptive attack, the XMAM attack, following the same framework as the Krum attack. We find that even when λ is very small, e.g., 1e−10, XMAM does not select the malicious local model updates. To find the root cause, we plot scatter diagrams using the Principal Component Analysis (PCA) algorithm to explain why XMAM does not fail under the XMAM-adaptive attack while Krum fails under the Krum-adaptive attack. From Fig. 11 we can see that as the reverse magnitude λ becomes smaller and smaller, the malicious clients' model updates move closer and closer to the benign clients' model updates, so Krum ends up selecting the malicious clients' model updates. Nevertheless, as λ shrinks, the distance between the malicious clients' SLOUs and the benign clients' SLOUs shows no significant change. This experiment illustrates XMAM's ability to defend against adaptive attacks.

3) Efficiency: We now compare the screening complexity of the aggregation methods. A server using FedAvg does not screen the received local model updates, so its screening complexity is O(0). Krum and Multi-Krum compute the mutual distances among the τ client local model updates. NDC and RSA respectively clip and regularize the τ local model updates. RFA finds the geometric center by iterating over the τ client local model updates until a defined condition is satisfied. In short, all the above methods must consider the ζ parameters of a local model update. Unfortunately, local model updates of current deep learning models possess millions of parameters (e.g., VGG-16 has 138M parameters). Instead of operating on the parameters of the local model update, our method considers the M probabilities of the Softmax layer's output; M is far smaller than ζ, possibly by a factor of a million.
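As a concrete illustration of this screening step, the sketch below (our own simplification, not the paper's released implementation) feeds one random input matrix through each received client model, collects the Softmax-layer outputs (SLOUs), clusters them, and keeps the majority cluster; k-means with two clusters is our stand-in for the paper's clustering step, and the `state_dict` update format is an assumption.

```python
# Minimal sketch of XMAM-style screening (assumptions: PyTorch models,
# updates given as state_dicts, a majority of clients are benign).
import copy
import torch
import numpy as np
from sklearn.cluster import KMeans

def screen_updates(global_model, client_updates, input_shape):
    """Return indices of the client updates kept after SLOU clustering."""
    x_rand = torch.rand(1, *input_shape)              # one random input matrix
    slous = []
    for state_dict in client_updates:
        model = copy.deepcopy(global_model)
        model.load_state_dict(state_dict)
        model.eval()
        with torch.no_grad():
            probs = torch.softmax(model(x_rand), dim=1)   # Softmax-layer output
        slous.append(probs.squeeze(0).numpy())
    slous = np.stack(slous)                           # shape (clients, M), M << zeta
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(slous)
    majority = np.bincount(labels).argmax()           # benign clients dominate
    return [i for i, lab in enumerate(labels) if lab == majority]
```

The key design point is that the screening operates on M-dimensional SLOUs rather than ζ-dimensional parameter vectors, which is what yields the complexity gap discussed above.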
[Figure 9: The attack success rate of three backdoor attacks (Trigger, Semantic, and Edge-case) on the backdoor task under three modes (Black-box, PGD, and SMP); panels (a)-(i) cover each attack-mode combination. We compare the seven aggregation methods (FedAvg, Krum, Multi-Krum, NDC, RFA, RSA, and XMAM) and the benchmark FedAvg*. Some well-known robust methods are breached when the attack changes from the black-box mode to the PGD and SMP modes, while our method always maintains performance similar to FedAvg*.]

Table 2: The screening complexity of the seven aggregation methods. ζ denotes the number of local model update parameters, M the number of label classes, and τ the number of collected local model updates in each iteration. The screening time is measured on the CIFAR-10 dataset with the VGG9 network.

Method     | Screening complexity | Screening time (seconds)
FedAvg     | O(0)                 | 0
Multi-Krum | O(τ²ζ)               | 116.98
Krum       | O(τ²ζ)               | 109.19
RFA        | O(τζR*)              | 39.93
RSA        | O(τζ)                | 0.78
NDC        | O(τζ)                | 0.34
XMAM       | O(τ²MR*)             | 0.0079

For XMAM, the screening complexity covers two parts. The first is the time of feeding a random matrix through the client models to obtain the SLOUs; the second is the time of clustering these SLOUs, which we set as O(τ²MR*), where R* is the number of rounds the clustering algorithm needs to find the cluster centers. Since the random-matrix forward pass is negligible, the ultimate screening complexity of XMAM is O(τ²MR*). Table 2 gives the concrete screening complexity of the seven aggregation methods and the average screening time over ten arbitrary iterations, measured on a computer with an NVIDIA TITAN X GPU. Compared with the methods of the same type, Krum and Multi-Krum, XMAM improves the screening efficiency by tens of thousands of times. This is no surprise, since XMAM reduces the dimension of the detection object from ζ to M.
[Figure 10: The testing error rate of three backdoor attacks (Trigger, Semantic, and Edge-case) on the main task under three modes (Black-box, PGD, and SMP); panels (a)-(i) cover each attack-mode combination. We compare the seven aggregation methods (FedAvg, Krum, Multi-Krum, NDC, RFA, RSA, and XMAM) and the benchmark FedAvg*. Across all backdoor attacks and modes, our aggregation method ensures a low testing error rate, comparable to FedAvg*, whereas the other methods, especially RSA, have a higher testing error rate than XMAM.]

7 Discussions and limitations

Discussion: Huang et al. [2019] use an interpretability technique to design a heatmap that explains the DNN's output, facilitating a more accurate detection of the poisoned model. However, their method requires a clean dataset encompassing all classes. Different from their method, our scheme only needs a generated random matrix and directly uses the probability distribution of the Softmax layer's output to judge the quality of a model. Kolouri et al. [2020] feed a group of Universal Litmus Patterns (ULPs) through a model and pool the logit layer's output to classify it as poisoned or clean. Nevertheless, optimizing the classifier and the ULPs requires hundreds of pre-trained clean and poisoned models, which is impractical in FL because the server has no dataset. Huang et al. [2020] propose the One-Pixel Signature for backdoor detection. As in Kolouri et al. [2020], the One-Pixel Signature also demands pre-trained clean and poisoned models.

Limitations: As we know, the data distribution in FL is non-i.i.d., and our experimental datasets are also distributed in a non-i.i.d. way. In the early stages of training, the local model updates submitted by benign clients are more heterogeneous than those in the middle and late stages. Therefore, our method is not guaranteed to be effective in the early stages of training. Note that in the middle and late stages, our method will occasionally fail to preclude all malicious local model updates in a certain round, but this does not mean our defense is ineffective, because backdoor attacks require multiple rounds to succeed. In the above experiments, we demonstrated that our method is the most robust of the seven aggregation methods under a certain proportion (20%) of malicious clients. We then conducted an experiment to observe our method's tolerance to different proportions of malicious clients.
The results (Fig. 13) show that our method can tolerate at least 45% of malicious clients in the black-box mode and about 30% of malicious clients in the PGD mode. However, we must admit that these defense results are still not enough, and we will look for ways to improve XMAM's tolerance to a larger number of malicious clients in future work.

[Figure 11: The PCA scatter diagrams of 100 client models' SLOUs (top, panels (a)-(e)) and updates (bottom, panels (f)-(j)) for λ = 2^-12, 2^-14, 2^-16, 2^-18, and 2^-20. We use CIFAR-10 and VGG9 in this experiment, and malicious clients occupy 20%.]

[Figure 12: The attack success rate of the Krum attack on two datasets (CIFAR-10 and MNIST). Krum and Multi-Krum seriously deteriorate and some other methods are slightly affected, but our method always performs comparably to FedAvg*.]

8 Related works

Poisoning attacks aim to destroy the training result of machine learning. A typical example is the data poisoning attack (Biggio et al. [2012], Chen et al. [2017], Fang et al. [2018], Gu et al. [2017], Jagielski et al. [2018], Li et al. [2016], Muñoz-González et al. [2017], Nelson et al. [2008], Rubinstein et al. [2009], Shafahi et al. [2018], Suciu et al. [2018], Wang and Gong [2019], Xiao et al. [2015], Yang et al. [2017], Fang et al. [2021]). Recently, multiple poisoning attacks (Bagdasaryan et al. [2020], Xie et al. [2020], Wang et al. [2020]) have threatened FL. In addition to data poisoning attacks, FL also suffers from local model poisoning attacks (Fang et al. [2020], Li et al. [2019], Baruch et al. [2019], Xie et al. [2019], He et al. [2020], Bagdasaryan et al. [2020], Wang et al. [2020], Bhagoji et al. [2019]), which are more potent than the former. Furthermore, from the perspective of the attack's purpose, poisoning attacks can be divided into untargeted attacks (Fang et al. [2020], Li et al. [2019], Baruch et al. [2019], Xie et al. [2019], He et al. [2020]), which aim to deteriorate the global model, and backdoor (targeted) attacks (Bagdasaryan et al. [2020], Bhagoji et al. [2019], Xie et al. [2020], Wang et al. [2020]), which aim to induce the global model to make some attacker-chosen mistake on certain inputs without deteriorating the global model. The latter is more threatening to FL owing to its hidden characteristic.

In central learning, all data are used to train a model jointly, and the model is assumed to be transparent to the attacker. Therefore, in poisoning-based backdoor attacks (Gu et al. [2017], Chen et al. [2017], Liao et al. [2018], Liu et al. [2017a], Zhao et al. [2020], Bagdasaryan et al. [2020]), the attacker can optimize the trigger in the poisoned data according to the model's performance on them. In FL, the attacker does not know what the global model of the current iteration will be, so it is impossible to optimize the trigger in the poisoned data based on the global model's performance on them.
In addition, apart from poisoning-based backdoor attacks, when the data cannot be tampered with, a backdoor attacker in central learning can also implant the backdoor function by directly perturbing the model weights, the so-called Targeted Weight Perturbation (TWP) (Dumford and Scheirer [2020]). Similarly, this is difficult to achieve in FL, because a backdoor attacker in FL only controls its own model and cannot directly tamper with the global model. Currently, there are three types of backdoor attacks in FL: the Trigger backdoor (Gu et al. [2017]), the Semantic backdoor (Li et al. [2020]), and the Edge-case backdoor (Wang et al. [2020]). All of them are poisoning-based backdoor attacks.

[Figure 13: The attack success rate of three backdoor attacks under different proportions of malicious clients, in (a) black-box mode and (b) PGD mode. Our defense can tolerate 45% malicious clients under black-box mode and about 30% malicious clients under PGD mode.]

Three types of defense approaches have been proposed in centralized learning (Li et al. [2020]): i) trigger-backdoor mismatch, ii) trigger elimination, and iii) backdoor elimination. Trigger-backdoor mismatch defenses (Liu et al. [2017b], Doan et al. [2020], Udeshi et al. [2019], Qiu et al. [2021]) try to modify or reverse the trigger in the poisoned data so that the modified poisoned data cannot match the hidden backdoor in the model, preventing backdoor activation. Trigger elimination defenses (Gao et al. [2019], Subedar et al. [2019], Du et al. [2019], Javaheripi et al. [2020]) attempt to distinguish poisoned data from benign data and preclude the poisoned data before training. Both of these two types of defenses require direct contact with client data, which is impractical in FL. Backdoor elimination defenses (Wang et al. [2019], Chen et al. [2019], Huang et al. [2019], Xu et al. [2019], Huang et al. [2020]) try to prune the neurons in the model that can be activated by the trigger, or refuse to deploy the infected model after detection. This type of defense requires the server to have a large batch of data, or plenty of models labeled as infected or uninfected to train a meta-classifier, which is difficult to obtain in FL.

The existing defenses in FL can be roughly divided into three categories: (i) limiting the update, (ii) finding the “center”, and (iii) detecting and precluding. The first category is limiting the update (Sun et al. [2019], Li et al. [2019]); its core is to punish a local model update that has a large norm, or to regularize all local model updates to a small norm. The second category is finding the “center” (Pillutla et al. [2019]); its core is to exploit the local model updates submitted by clients to find a compromised update, the “center” of the local model updates, with which to update the global model. The third category is detecting and precluding (Blanchard et al. [2017]); its core is to detect the malicious local model update and preclude it.
9" + }, + { + "url": "http://arxiv.org/abs/2209.15245v2", + "title": "Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction", + "abstract": "Due to limited communication capacities of edge devices, most existing\nfederated learning (FL) methods randomly select only a subset of devices to\nparticipate in training for each communication round. Compared with engaging\nall the available clients, the random-selection mechanism can lead to\nsignificant performance degradation on non-IID (independent and identically\ndistributed) data. In this paper, we show our key observation that the\nessential reason resulting in such performance degradation is the\nclass-imbalance of the grouped data from randomly selected clients. Based on\nour key observation, we design an efficient heterogeneity-aware client sampling\nmechanism, i.e., Federated Class-balanced Sampling (Fed-CBS), which can\neffectively reduce class-imbalance of the group dataset from the intentionally\nselected clients. In particular, we propose a measure of class-imbalance and\nthen employ homomorphic encryption to derive this measure in a\nprivacy-preserving way. Based on this measure, we also design a\ncomputation-efficient client sampling strategy, such that the actively selected\nclients will generate a more class-balanced grouped dataset with theoretical\nguarantees. Extensive experimental results demonstrate Fed-CBS outperforms the\nstatus quo approaches. Furthermore, it achieves comparable or even better\nperformance than the ideal setting where all the available clients participate\nin the FL training.", + "authors": "Jianyi Zhang, Ang Li, Minxue Tang, Jingwei Sun, Xiang Chen, Fan Zhang, Changyou Chen, Yiran Chen, Hai Li", + "published": "2022-09-30", + "updated": "2023-06-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "Introduction With the booming of IoT devices, a considerable amount of data is generated at the network edge, providing valuable resources for learning insightful information and enabling intelligent applications such as self-driving, video analytics, anomaly detection, etc. The traditional wisdom is to train machine learning models by collecting data from devices and performing centralized training, but such data migration usually raises serious privacy concerns. Federated learning (FL) (McMahan et al., 2017a) is a promising technique to mitigate such privacy concerns, enabling a large number of clients to learn a shared model collaboratively, with the learning process orchestrated by a central server. In particular, the participating clients first download a global model from the central server and then compute local model updates using their local data. The clients then transmit the local updates to the server, where they are aggregated and the global model is updated accordingly (a minimal sketch of one such round is given after this paragraph). In practice, due to limited communication and computing capabilities, one usually cannot engage all the available clients in FL training to fully utilize all the local data. Therefore, most FL methods randomly select only a subset of the available clients to participate in the training in each communication round. However, in practice, the data held by different clients are typically non-IID (independent and identically distributed) due to various user preferences and usage patterns.
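Here is a minimal framework-agnostic sketch of one FedAvg-style communication round as just described (our own simplification, not the paper's implementation; `local_update` stands in for client-side optimization):

```python
# Minimal sketch of one FedAvg-style communication round (numpy weights).
import numpy as np

def fedavg_round(global_weights, selected_clients, local_update):
    """Each selected client starts from the global weights; the server
    aggregates the returned local weights, weighted by local dataset size."""
    locals_, sizes = [], []
    for client in selected_clients:
        w_local, n_samples = local_update(client, global_weights.copy())
        locals_.append(w_local)
        sizes.append(n_samples)
    sizes = np.asarray(sizes, dtype=float)
    props = sizes / sizes.sum()
    return sum(w * p for w, p in zip(locals_, props))
```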
This data heterogeneity leads to a serious problem: the random client selection strategy often fails to learn a global model that generalizes well for most of the participating clients under non-IID settings (Goetz et al., 2019; Cho et al., 2020; Nishio & Yonetani, 2019; Yang et al., 2020). Several heuristic client selection mechanisms have been proposed to tackle the non-IID challenge. For example, in the method of (Goetz et al., 2019), clients with larger local loss have a higher probability of being selected to participate in the training. Power-of-Choice (Cho et al., 2020) selects several clients with the largest loss from a randomly sampled subset of all the available clients. However, selecting clients with a larger local loss does not guarantee that the final model has a smaller global loss. Another limitation of previous research on client selection is the missing comparison between their strategy and the ideal case, where all the available clients participate in the training.

[Figure 1: Three different FL client selection strategies on MNIST. All means engaging all the 100 clients in training. Random means randomly selecting 10 clients. Class-Balanced means that we keep the class-balance by intentionally selecting 10 clients. In (a) and (b), the global dataset of all 100 clients' training data is class-balanced; in (c) and (d) it is class-imbalanced. Each client has only one class of data in (a) and (c), and two classes of data in (b) and (d). The results show significant performance degradation from random client selection under imbalanced data. Notably, when the global dataset is class-imbalanced, selecting all the clients performs worse than the Class-Balanced strategy, which suggests the importance of keeping class-balance in client selection.]

In general, existing works not only miss a vital criterion that can measure the performance of their methods, but also fail to investigate the essential reason why random client selection leads to performance degradation on non-IID data compared with fully engaging all the available clients. In this paper, we focus on image classification tasks. First, we present our key observation of the essential reason why random client selection degrades performance on non-IID data: the class-imbalance of the grouped dataset from randomly selected clients. Based on this observation, we design an efficient heterogeneity-aware client sampling mechanism, i.e., Federated Class-Balanced Sampling (Fed-CBS), which effectively reduces the class-imbalance in FL.
Fed-CBS is orthogonal to numerous existing techniques that improve the performance of FL on non-IID data (Li et al., 2018; Wang et al., 2020b; Karimireddy et al., 2019; Chen et al., 2020; Reddi et al., 2020; Hao et al., 2021; Yang et al., 2021), meaning Fed-CBS can be integrated with these methods to further improve their performance. Our major contributions are summarized as follows:

• We reveal that class-imbalance is the fundamental reason why random client selection leads to performance degradation on non-IID data (Section 2).
• To effectively reduce the class-imbalance, we design an efficient heterogeneity-aware client sampling mechanism, Fed-CBS, based on our proposed class-imbalance metric (Section 3). We provide theoretical analysis of the convergence of Fed-CBS in Section 4, as well as an analysis of the NP-hardness of this problem.
• We empirically evaluate Fed-CBS on FL benchmarks (non-IID datasets) in Section 5. The results demonstrate that Fed-CBS improves the accuracy of FL models on CIFAR-10 by 2% ~ 7% and accelerates convergence by 1.3× ~ 2.8×, compared with the state-of-the-art method (Yang et al., 2020) that also aims to reduce class-imbalance via client selection. Furthermore, Fed-CBS achieves comparable or even better performance than the ideal setting where all the available devices are involved in the training.

2. Preliminary and Related Work

We first clarify three definitions. The local dataset is a client's own locally-stored dataset, which is inaccessible to other clients and the server. Due to the heterogeneity of local data distributions, class-imbalance frequently occurs in most local datasets. The global dataset is the union of all the available clients' local datasets; it can be class-balanced or class-imbalanced, but it is often imbalanced. The grouped dataset is the union of the local datasets of the clients selected to participate in training in one communication round. It follows that the grouped dataset is a subset of the global dataset.

2.1. Pitfall of Class-Imbalance in Client Selection

Some recent works (Yang et al., 2020; Wang et al., 2020b; Duan et al., 2019) have identified the issue of class-imbalance in the grouped dataset under random selection in non-IID settings. Since class-imbalance degrades the classification accuracy on minority classes (Huang et al., 2016) and leads to low training efficiency, we are motivated to verify whether the class-imbalance of the randomly-selected grouped dataset is the essential reason for the performance degradation. We conduct experiments on MNIST to verify our proposition (detailed experiment settings are listed in Appendix C.1). As shown in Figure 1a and Figure 1b, the random selection mechanism shows the worst performance when the global label distribution is class-balanced. If we keep the grouped dataset class-balanced by manually selecting clients based on their local label distributions, we can obtain accuracy comparable to the case of fully engaging all the clients in training. A natural corollary is that when the global dataset is inherently class-imbalanced, engaging all clients in training may lead to worse performance than manually keeping the grouped dataset class-balanced.
The results in Figure 1c and Figure 1d prove our hypothesis and verify the importance of class-imbalance reduction. This also indicates that only keeping diversity in the data and fairness for clients is not enough, which was missed in the previous literature (Balakrishnan et al., 2021; Huang et al., 2021; Yang et al., 2020; Wang et al., 2020b; Shen et al., 2022; Wang et al., 2021). More experimental results on larger datasets are provided to verify the importance of class-imbalance reduction (Section 5).

2.2. Related Work

Some effort has been made to improve client selection for FL in the previous literature. (Cho et al., 2020; Goetz et al., 2019) select clients with larger local loss, but this cannot guarantee that the final global model has a smaller global loss. Focusing on diversity in client selection, the authors of (Balakrishnan et al., 2021) select clients by maximizing a submodular facility location function defined over the gradient space. A fairness-guaranteed algorithm termed RBCS-F was proposed in (Huang et al., 2021), which models fairness-guaranteed client selection as a Lyapunov optimization problem. Although diversity and fairness are important, the experimental results in Section 2.1 demonstrate that they are not enough for client selection if the class-imbalance issue is not considered. The authors of (Ribero & Vikalo, 2020) model the progression of model weights by an Ornstein-Uhlenbeck process and design a sampling strategy for selecting clients with significant weight updates; however, that work only considers the identical data distribution setting. Following the existing works (Goetz et al., 2019; Cho et al., 2020), we only focus on the data heterogeneity caused by non-IID data across clients. Additionally, we include a comparison of our method with other cluster-based client sampling algorithms in the appendix.

To the best of our knowledge, (Duan et al., 2019) and (Yang et al., 2020) are the first two attempts to improve client selection by reducing class-imbalance. An extra virtual component called a mediator is introduced in the Astraea system of (Duan et al., 2019), which has access to the local label distributions of the clients; with these distributions, Astraea conducts client selection in a greedy way. The method of (Yang et al., 2020) first estimates the local label distribution of each client based on the gradient of the model parameters and adopts the same greedy client selection as Astraea. Since directly knowing the exact local label distributions of clients, as in Astraea, raises severe privacy-leakage concerns, we consider the method of (Yang et al., 2020) the state-of-the-art method aiming to improve client selection through class-imbalance reduction. However, the solution presented by (Yang et al., 2020) has several limitations. First, their method requires a class-balanced auxiliary dataset at the server consisting of all classes of data; this is not always available in large-scale FL systems, since it requires the server to collect raw data from clients, which breaches privacy. Second, their estimations of the clients' local label distributions are not accurate, as shown in Figure 2. Theorem 1 in (Yang et al., 2020) supports their estimations, but it cannot be generalized to multi-class classification tasks, since it was proved in the original paper (Anand et al., 1993) only for two-class classification problems.
Finally, the performance of greedily conducting the client selection is not guaranteed, owing to the nature of greedy algorithms. We provide an example in Figure 3 to show this weakness. Their method selects C1 first, since it is the most class-balanced client. Then C2 is selected, because the grouped dataset C1 ∪ C2 is the most class-balanced among the choices C1 ∪ C2, C1 ∪ C3 and C1 ∪ C4. Similarly, it chooses C3, since the grouped dataset C1 ∪ C2 ∪ C3 is more class-balanced than C1 ∪ C2 ∪ C4. Their method is deterministic, so only one combination {C1, C2, C3} is obtained. However, this is clearly not the optimal solution, since {C1, C3, C4} is more class-balanced than {C1, C2, C3}. These weaknesses motivate us to design a more effective solution to this problem.

3. Methodology

We first propose a metric to measure class-imbalance in Section 3.1. Then we derive the measure with privacy-preserving techniques in Section 3.2. Based on this measure, we design our client sampling mechanism and show its superiority in Section 3.3.

3.1. Class-Imbalance Measure

Assume there are B classes of data in an image classification task, where B ≥ 2. In the k-th communication round, we assume there are N_k available clients and we select M clients from them. To keep the presentation concise, we drop the index k and assume the set of indices of the available clients is {1, 2, 3, ..., N}, with the n-th available client holding its own training dataset D_n. We adopt the following vector of size B to represent the local label distribution of D_n, where α_{n,b} ≥ 0 and Σ_{b=1}^{B} α_{n,b} = 1:

α_n = [ α_{n,1}, α_{n,2}, ..., α_{n,b}, ..., α_{n,B} ].    (1)

We aim to find a subset M of {1, 2, 3, ..., N} of size M such that the grouped dataset D^g_M = ∪_{n∈M} D_n is class-balanced.

[Figure 2: The exact local label distributions and the estimated ones of the first 5 clients in the experiment of (Yang et al., 2020). The label distribution quantifies the ratio between the numbers of data from the 10 classes (C-1, C-2, ..., C-10) in each client's local dataset.]

[Figure 3: An example demonstrating the weakness of the greedy method in dealing with class imbalance, contrasting (a) the greedy method with (b) our method. Suppose we work on a 6-class classification task and aim to select 3 clients from 4 available clients C1, C2, C3, C4, each holding 30 images. The compositions of their local datasets are [5, 5, 5, 5, 5, 5], [6, 6, 6, 6, 6, 0], [0, 0, 0, 10, 10, 10] and [10, 10, 10, 0, 0, 0], respectively. The greedy method in (Yang et al., 2020) is deterministic: it can only derive the result {C1, C2, C3} instead of the optimal solution {C1, C3, C4} (see the text description). Our method, in contrast, is based on probability modeling, which directly models the distribution of the optimal solution {C1, C3, C4}; when sampling from it, the optimal solution is returned with high probability.]

Assuming the n-th client's local dataset has q_n training samples, the following vector α^g_M represents the label distribution of the grouped dataset D^g_M:

α^g_M = ( Σ_{n∈M} q_n α_n ) / ( Σ_{n∈M} q_n ) = [ Σ_{n∈M} q_n α_{n,1} / Σ_{n∈M} q_n , ..., Σ_{n∈M} q_n α_{n,b} / Σ_{n∈M} q_n , ..., Σ_{n∈M} q_n α_{n,B} / Σ_{n∈M} q_n ].
Instead of dealing with the Kullback-Leibler (KL) divergence as in (Duan et al., 2019; Yang et al., 2020), which is complicated to analyze, we propose the following function to measure the magnitude of class-imbalance of M, which we call the Quadratic Class-Imbalance Degree (QCID):

QCID(M) ≜ Σ_{b=1}^{B} ( Σ_{n∈M} q_n α_{n,b} / Σ_{n∈M} q_n − 1/B )².

Essentially, QCID(M) is the squared L2 distance between the label distribution of the grouped dataset D^g_M and that of an ideally class-balanced dataset, which is uniform. Although several probabilistic distances are more commonly used than L2, QCID is easier to analyze and more efficient to compute while preserving privacy, as shown in the next section.

3.2. Privacy-Preserving QCID Derivation

Our privacy goal is to compute the value of QCID while keeping the clients' local distributions {α_n} hidden from the server, since they contain sensitive information. Unlike the KL divergence, which is difficult to analyze, we can expand the expression of QCID to explore how the pairwise relationships among the clients' local label distributions {α_m}, m ∈ M, affect the class-imbalance degree of M. The following theorem shows the feasibility of our method.

Theorem 3.1. The QCID value is determined by the sum of inner products between every two vectors α_n, α_{n'} with n, n' ∈ M, i.e.,

QCID(M) = ( Σ_{n∈M} Σ_{n'∈M} q_n q_{n'} α_n α_{n'}^T ) / ( Σ_{n∈M} q_n )² − 1/B.

Theorem 3.1 reveals that there is no need to know each client's local label distribution to calculate the QCID, as long as we have access to the pairwise inner products. To derive the QCID of any subset M ⊆ {1, 2, 3, ..., N}, we only need the following N × N matrix S, whose element s_{n,n'} = q_n q_{n'} α_n α_{n'}^T is built from the inner product between the local label distributions of the available clients n and n':

S = [ q_1 q_1 α_1 α_1^T   q_1 q_2 α_1 α_2^T   ...   q_1 q_N α_1 α_N^T
      q_2 q_1 α_2 α_1^T   q_2 q_2 α_2 α_2^T   ...   q_2 q_N α_2 α_N^T
      ...
      q_N q_1 α_N α_1^T   q_N q_2 α_N α_2^T   ...   q_N q_N α_N α_N^T ].

Although QCID can be computed from S, another concern arises: can a malicious party infer the values of {α_n} from S? The next theorem provides the privacy protection.

Theorem 3.2. One cannot derive the values of {α_n} from the value of S.

Based on these two theorems, our privacy goal simplifies to enabling the server to derive S without access to {α_n}. There are several ways to achieve this goal. One option is to leverage server-side trusted execution environments (TEEs), e.g., Intel SGX (Anati et al., 2013), which allow calculating S without leaking information about {α_n}. Another potential solution is to adopt Fully Homomorphic Encryption (FHE) (Chen et al., 2017; Brakerski et al., 2014; Fan & Vercauteren, 2012; Halevi & Shoup, 2014; 2015) to enable the server to compute on encrypted data (i.e., {α_n}) to derive S. We provide an example of the system skeleton in Section A.2 to illustrate how to derive S without knowing the local label distributions {α_n} using FHE.
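To make Theorem 3.1 concrete, here is a minimal sketch (our own illustration, with hypothetical toy inputs) that computes QCID for a candidate subset both directly from the label distributions and from the precomputed matrix S, and confirms that the two agree:

```python
# Minimal sketch: QCID computed directly and via the inner-product matrix S.
import numpy as np

def qcid_direct(alphas, qs, subset, B):
    """QCID(M) = sum_b (grouped label share of class b - 1/B)^2."""
    a, q = alphas[subset], qs[subset]
    grouped = (q[:, None] * a).sum(axis=0) / q.sum()
    return float(((grouped - 1.0 / B) ** 2).sum())

def qcid_from_S(S, qs, subset, B):
    """Theorem 3.1: sum of S entries over M, normalized, minus 1/B."""
    idx = np.ix_(subset, subset)
    return float(S[idx].sum() / qs[subset].sum() ** 2 - 1.0 / B)

# Toy example: N=4 clients, B=3 classes (hypothetical distributions).
alphas = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0],
                   [0.4, 0.3, 0.3]])
qs = np.array([30.0, 30.0, 30.0, 30.0])
S = (qs[:, None] * qs[None, :]) * (alphas @ alphas.T)  # s_{n,n'} = q_n q_{n'} <a_n, a_{n'}>

subset = [0, 1, 2]  # a perfectly class-balanced group, so QCID = 0
print(qcid_direct(alphas, qs, subset, B=3))   # 0.0
print(qcid_from_S(S, qs, subset, B=3))        # 0.0
```

In a privacy-preserving deployment, only the second function would run at the server, with S obtained via a TEE or FHE as discussed above.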
Since we focus on efficient algorithms to reduce class-imbalance rather than designing the underlying computing infrastructure (which is beyond our scope and not a contribution of this paper), we leave the detailed system design for future work.

3.3. A Client Sampling Mechanism

To select the most class-balanced grouped dataset D^g_M, we need to find the optimal subset M* with the lowest QCID value, defined as

M* ≜ arg min_{M ⊆ {1,2,3,...,N}} ( Σ_{n∈M} Σ_{n'∈M} q_n q_{n'} α_n α_{n'}^T ) / ( Σ_{n∈M} q_n )² − 1/B.

The main challenge is computational complexity. To find the exact optimal M*, we would need to loop through all possible cases and find the lowest QCID value, giving a complexity of O( C(N,M) × M² ), which is unacceptable when N is extremely large.

A probability approach. To overcome the computational bottleneck, instead of treating M as a determined set, we consider it as a sequence of random variables, i.e., M = {C1, C2, ..., Cm, ..., CM}, and assign it a probability. Our expectation is that M should have a higher probability of being sampled if it is more class-balanced; that is, P(C1 = c1, C2 = c2, ..., Cm = cm, ..., CM = cM) should be larger if M = {c1, c2, ..., cM} has a lower QCID value. Our sampling strategy generates the elements of M sequentially: we first sample M1 = {c1} according to P(C1 = c1), then sample c2 to form M2 = {c1, c2} according to the conditional probability P(C2 = c2 | C1 = c1), and so on until we finally obtain M = {c1, c2, ..., cM}. In the following, we design the conditional probabilities so that the joint distribution of client selection satisfies our expectation.

Let T_n denote the number of times client n has been selected. Once client n has been selected in a communication round, T_n → T_n + 1; otherwise T_n stays unchanged. Inspired by the combinatorial upper confidence bounds (CUCB) algorithm (Chen et al., 2013) and previous work (Yang et al., 2020), in the k-th communication round the first element is sampled with probability

P(C1 = c1) ∝ 1 / [QCID(M1)]^{β_1} + λ √( 3 ln k / (2 T_{c1}) ),   β_1 > 0,

where λ is an exploration factor that balances the trade-off between exploitation and exploration; the second term assigns a higher probability to clients that have rarely been sampled in previous communication rounds. After sampling C1, the second client is sampled with probability

P(C2 = c2 | C1 = c1) ∝ ( 1 / [QCID(M2)]^{β_2} ) / ( 1 / [QCID(M1)]^{β_1} + λ √( 3 ln k / (2 T_{c1}) ) ),   β_2 > 0.

For the m-th client, where 2 < m ≤ M, we define

P(Cm = cm | Cm−1 = cm−1, ..., C2 = c2, C1 = c1) ∝ [QCID(M_{m−1})]^{β_{m−1}} / [QCID(M_m)]^{β_m},   β_{m−1}, β_m > 0.

With the above sampling process, the final probability of sampling M is

P(C1 = c1, C2 = c2, ..., CM = cM) = P(C1 = c1) × P(C2 = c2 | C1 = c1) × ··· × P(CM = cM | CM−1 = cM−1, ..., C2 = c2, C1 = c1) ∝ 1 / [QCID(M)]^{β_M}.

Since β_M > 0, this matches our goal that the M with a lower QCID value has a higher probability of being sampled. A minimal code sketch of this sequential sampler follows.
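This sketch is our own simplification of the sampler just described (round index k ≥ 1 is assumed): `qcid` can be the function from the earlier sketch or a server-side version computed from S, `T` maps clients to their selection counts (initialized to 1), and `betas` is the increasing schedule β_1 < ... < β_M. Because QCID(M_{m−1}) is fixed once the prefix is chosen, sampling each step proportionally to 1/[QCID(M_m)]^{β_m} realizes the conditional probabilities above.

```python
# Minimal sketch of the Fed-CBS sequential sampler (assumptions noted above).
import math
import numpy as np

def sample_clients(qcid, available, M, k, T, betas, lam=10.0, Lb=1e-20, seed=None):
    """Sequentially sample M client indices; qcid(list_of_clients) -> QCID value."""
    rng = np.random.default_rng(seed)
    chosen = []
    for m in range(M):
        candidates = [c for c in available if c not in chosen]
        weights = []
        for c in candidates:
            q = max(qcid(chosen + [c]), Lb)  # lower bound Lb avoids division by zero
            w = 1.0 / q ** betas[m]
            if m == 0:                        # exploration bonus on the first pick
                w += lam * math.sqrt(3.0 * math.log(max(k, 1)) / (2.0 * T[c]))
            weights.append(w)
        probs = np.asarray(weights)
        chosen.append(int(rng.choice(candidates, p=probs / probs.sum())))
    return chosen
```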
Our mechanism, Fed-CBS, which uses this sampler for client selection, is summarized in Algorithm 1.

Algorithm 1 Fed-CBS
Initialization: initial local model w(0), client index subset M = ∅, K communication rounds, k = 0, T_n = 1
while k < K do
    // Client Selection:
    for n in {1, 2, ..., N} do
        if n ∈ M then T_n → T_n + 1 else T_n → T_n end if
    end for
    Update M using our proposed sampling strategy in Section 3.3
    // Local Updates:
    for n ∈ M do w_n(k+1) ← Update(w(k)) end for
    // Global Aggregation:
    w(k+1) ← Aggregate({w_n(k+1) : n ∈ M})
    k → k + 1
end while

Details and analysis. For any 1 < m < M, we have P(C1 = c1, C2 = c2, ..., Cm = cm) ∝ 1 / [QCID(M_m)]^{β_m}. This means that when we generate the first m elements of M, a more class-balanced M_m has a higher probability of being sampled, since an M_m with a lower QCID value is favored. This differs from the algorithm of (Yang et al., 2020), which greedily chooses the c_m from {1, 2, ..., N} \ M_{m−1} that makes M_m the most class-balanced. Unlike the greedy algorithm, which offers no guarantee of finding the optimal client set, our method can generate the globally optimal set of clients in the sense of probability. Figure 3 provides an example demonstrating that our method overcomes the pitfall of the greedy method: after selecting the first two clients, our {C1, C3} is less class-balanced than the greedy {C1, C2}, but after the last choice our method has the chance to obtain the perfectly class-balanced set {C1, C3, C4}, whereas the greedy method can only return the less class-balanced {C1, C2, C3}.

We want the distribution of P(C1 = c1, ..., Cm = cm) to be more dispersed when m is small, so that the sampling strategy explores more possible client compositions at the beginning, and less dispersed when m is large, so that toward the end of the sampling process the strategy concentrates on more class-balanced M_m; in particular, when m = M we want the strategy to pick the client c_M that makes M the most class-balanced. Since P(C1 = c1, ..., Cm = cm) ∝ 1 / [QCID(M_m)]^{β_m}, we can set 0 < β_1 < β_2 < ... < β_M to satisfy these requirements.

Remark: We set a lower bound Lb for QCID(M_m), since QCID(M_m) = 0 in some special cases would cause P(Cm = cm | Cm−1 = cm−1, ..., C1 = c1) → ∞. When viewing the conditional distribution as the likelihood in Bayesian inference, our probability can be interpreted as an estimate of the posterior distribution, which allows us to understand our algorithm through the lens of Bayesian sampling (Welling & Teh, 2011; Liu & Wang, 2019; Zhang et al., 2020a; 2019). We will further analyze this connection in future studies. Below we present two theorems showing the superiority of our proposed sampling strategy.

Theorem 3.3 (Class-Imbalance Reduction). Denote by P_{β_M} the probability of selecting M with our strategy with parameter β_M, and by P_rand the probability of selecting M with random selection. Our method reduces the expectation of QCID compared to the random selection mechanism; in other words,

E_{M∼P_{β_M}} QCID(M) < E_{M∼P_rand} QCID(M).
Furthermore, increasing the value of β_M reduces the expectation of QCID further, i.e., for β'_M > β_M we have

E_{M∼P_{β'_M}} QCID(M) < E_{M∼P_{β_M}} QCID(M).

Theorem 3.4 (Computation Complexity Reduction). The computation complexity of our method is O(N × M²), which is much smaller than the O( C(N,M) × M² ) of exhaustive search.

Theorem 3.4 shows that the computation complexity of our method is independent of the number of classes. Since the dimension of neural networks is typically much larger than that of the class distribution vector α_n, the additional communication cost is almost negligible. Besides, we formally prove the NP-hardness of the problem in Section B.3 of the appendix.

4. Convergence Analysis

To analyze the convergence of our method, we first define our objective functions and adopt some general assumptions. Our global objective function F̃ > 0 can be decomposed as F̃ = (1/B) Σ_{b=1}^{B} F̃_b, where F̃_b is the averaged loss function over all the data of the b-th class in the global dataset. Similarly, the n-th client's local objective function F_n can be decomposed as F_n = Σ_{b=1}^{B} α_{n,b} F_{n,b}, where F_{n,b} is the averaged loss function over all the data of the b-th class in the n-th client's local dataset and α_{n,b} is defined in Equation 1. Moreover, let w(k) denote the global model parameters at the k-th communication round and w(0) the initial global model parameters. If not stated explicitly, ∇ denotes ∇_w throughout the paper.

Assumption 4.1 (Smoothness). The global objective function F̃ and each client's averaged loss function F_{n,b} are Lipschitz smooth, i.e., ∥∇F̃(w) − ∇F̃(w')∥ ≤ L_F̃ ∥w − w'∥ and ∥∇F_{n,b}(w) − ∇F_{n,b}(w')∥ ≤ L_{n,b} ∥w − w'∥, ∀n, b, w, w'.

Assumption 4.2 (Unbiased Gradient and Bounded Variance). The stochastic gradient g_n at each client is an unbiased estimator of the local gradient, E_ξ[g_n(w | ξ)] = ∇F_n(w), with bounded variance E_ξ[∥g_n(w | ξ) − ∇F_n(w)∥²] ≤ σ², ∀w, where σ² ≥ 0.

Assumption 4.3 (Bounded Dissimilarity). There exist two non-negative constants δ ≥ 1 and γ² ≥ 0 such that Σ_{b=1}^{B} (1/B) ∥∇F̃_b(w)∥² ≤ δ ∥Σ_{b=1}^{B} (1/B) ∇F̃_b(w)∥² + γ², ∀w.

Assumption 4.4 (Class-wise Similarity). For each class b, the discrepancy between the gradient of the global averaged loss function and that of the local one is bounded by a constant in the l2 norm; that is, for every n and b, ∥∇F̃_b(w) − ∇F_{n,b}(w)∥² ≤ κ²_{n,b}, ∀w.

Table 1. The communication rounds required to reach the targeted test accuracy, and the averaged QCID values. The targeted test accuracy is 45% for α = 0.1, 47% for α = 0.2, and 50% for α = 0.5; results are the mean and standard deviation over 4 random seeds.

                             | all       | rand      | pow-d      | Fed-cucb  | Fed-CBS
Communication Rounds, α=0.1 | 757±155   | 951±202   | 1147±130   | 861±328   | 654±96
Communication Rounds, α=0.2 | 746±95    | 762±105   | 741±111    | 803±220   | 475±110
Communication Rounds, α=0.5 | 426±67    | 537±115   | 579±140    | 1080±309  | 384±74
E[QCID] (10^-2), α=0.1      | 1.01±0.01 | 8.20±0.21 | 12.36±0.26 | 7.09±2.27 | 0.62±0.20
E[QCID] (10^-2), α=0.2      | 0.93±0.03 | 7.54±0.27 | 10.6±0.48  | 5.93±1.01 | 0.51±0.12
E[QCID] (10^-2), α=0.5      | 0.72±0.03 | 5.87±0.24 | 7.36±0.57  | 6.47±0.77 | 0.36±0.04
[Figure 4: Test accuracy on CIFAR-10 under three heterogeneous settings (α = 0.1, 0.2, 0.5), comparing all, rand, pow-d, Fed-cucb, and Fed-CBS (ours).]

Assumptions 4.1, 4.2 and 4.3 have been widely adopted in the previous literature on the theoretical analysis of FL (Li et al., 2019; Cho et al., 2020; Wang et al., 2020a). Assumption 4.4 is based on the similarity among the data from the same class. Similar to the standard setting (Wang et al., 2020a), the convergence of our algorithm is measured by the norm of the gradients, stated in Theorem 4.5.

Theorem 4.5. Under Assumptions 4.1 to 4.4, if the total number of communication rounds K is pre-determined and the learning rate is set as η = s / (10L √(τ(τ−1)K)), where s < 1, L = max_{n,b} L_{n,b}, and τ is the number of local update iterations, the minimal gradient norm of F̃ is bounded as

min_{k≤K} ∥∇F̃(w(k))∥² ≤ (1/V) [ σ²s² / (25τK) + s L_F̃ σ² / (10L √(τ(τ−1)K)) + 5κ² + 10L √(τ(τ−1)) F̃(w(0)) / (s √K) + γ² E[QCID] ],

where V = 1/3 − δB E[QCID] and κ = max_{n,b} κ_{n,b}.

If the class-imbalance in client selection is reduced, E[QCID] decreases. Consequently, 1/V and E[QCID]/V also decrease, making the convergence bound on the right-hand side tighter. (Theorem 4.5 requires the β_M in our method to be large enough to make E[QCID] < 1/(3δB), according to Theorem 3.3; how to explicitly derive a lower bound for β_M is interesting, and we leave it as theoretical future work.) Therefore, Theorem 4.5 not only provides a convergence guarantee for Fed-CBS but also proves that class-imbalance reduction in client selection benefits FL, i.e., more class-balance leads to faster convergence.

5. Experiments

We conduct thorough experiments on three public benchmark datasets: CIFAR-10 (Krizhevsky et al.), Fashion-MNIST (Xiao et al., 2017), and FEMNIST in the Leaf benchmark (Caldas et al., 2018). In all the experiments, we simulate cross-device federated learning (CDFL), where the system runs with a large number of clients, only a fraction of which are available in each communication round, and we make client selections among those available clients. The results show that our method achieves faster and more stable convergence than four baselines: random selection (rand), the Power-of-Choice selection strategy (pow-d) (Cho et al., 2020), the method of Yang et al. (2020) (Fed-cucb), and the ideal setting where we select all the available clients (all). To compare them efficiently in the main text, we present results on CIFAR-10 with the whole dataset divided among 200 (or 120) clients, since the ideal setting requires engaging all the clients. To simulate more realistic settings with thousands of clients, we also run our method on FEMNIST in the Leaf benchmark with more than 3000 clients. Due to space limits, we move the results on FEMNIST and Fashion-MNIST and the ablation studies to Sections C.5 and D in the Appendix.
For Fashion-MNIST, we adopt FedNova (Wang et al., 2020a) to show that our method can be organically integrated with existing orthogonal works aimed at improving FL.

Experiment Setup. We target cross-device settings where the devices are resource-constrained, i.e., most devices do not have sufficient computational power and memory to support the training of large models. We therefore adopt a compact model with two convolutional layers followed by three fully-connected layers, and FedAvg (McMahan et al., 2017b) as the FL optimizer. The batch size is 50 for each client. In each communication round, all clients conduct the same number of local updates, which allows the client with the largest local dataset to conduct 5 local training epochs. In our method, we set β_m = m, γ = 10, and Lb = 10^−20. The local optimizer is SGD with a weight decay of 0.0005. The learning rate is initially 0.01 with a decay factor of 0.9992. We terminate the FL training after 3000 communication rounds and then evaluate the model's performance on the CIFAR-10 test dataset. More details of the experiment setup are listed in Section C.2.

5.1. Results for Class-Balanced Global Datasets

In this experiment, we set 200 clients in total with a class-balanced global dataset. The non-IID data partition among clients is based on a Dirichlet distribution parameterized by the concentration parameter α, following Hsu et al. (2019); roughly speaking, as α decreases, the data distribution becomes more non-IID. In each communication round, we uniformly and randomly set 30% of the clients (i.e., 60 clients) as available and select 10 clients from those 60 to participate in the training. As shown in Table 1, our method achieves the lowest QCID value among the client selection strategies. As a benefit of successfully reducing the class-imbalance, our method outperforms the other three baseline methods and achieves performance comparable to the ideal setting where all the available clients are engaged in training. As shown in Table 1 and Figure 4, our method achieves faster and more stable convergence. The enhancement in stability can also be viewed as a reduction in gradient variance, a concept explored in previous studies (Johnson & Zhang, 2013; Zhang et al., 2020b; Defazio et al., 2014; Zhao et al., 2018; Chatterji et al., 2018). It is also worth noting that, due to its inaccurate distribution estimation and the limitations of the greedy method discussed in Section 2.2, the performance of Fed-cucb is much worse than ours.

5.2. Results for Class-Imbalanced Global Datasets

In real-world settings, the global dataset of all the clients is not always class-balanced. Hence, we investigate two different cases to show the superiority of our method; more details of their settings are provided in Section C.3. To simplify the construction of a class-imbalanced global dataset, each client has only one class of data, all of the same quantity. We report the best test accuracy in Table 2 and present the corresponding QCID values in Section C.4.

5.2.1. CASE 1: UNIFORM AVAILABILITY

Settings. There are 120 clients in total, and the global dataset of these 120 clients is class-imbalanced.
To measure the degree of class imbalance, we let the global dataset have the same amount n1 of data samples for five classes and the same amount n2 for the other five classes. The ratio r between n1 and n2 is set to 3:1 and 5:1, respectively. In each communication round, we uniformly set 30% of the clients (i.e., 36 clients) as available with replacement and select 10 clients to participate in the training. As shown in Table 2 and Figure 5, our method achieves faster and more stable convergence, and even slightly better performance than the ideal setting where all the available clients are engaged. Fed-cucb (Yang et al., 2020) performs better here than on the class-balanced global dataset, partly due to the simplicity of each client's local dataset composition in these experiments; the third line of Figure 2 indicates that Fed-cucb can accurately estimate this simple type of label distribution.

[Figure 5: Test accuracy on CIFAR-10 with a class-imbalanced global dataset in Case 1, for ratios 3:1 and 5:1.]

5.2.2. CASE 2: NON-UNIFORM AVAILABILITY

Settings. There are 200 clients in total. In each communication round, 30% of them (i.e., 60 clients) are set as available with replacement. By setting this availability non-uniformly, the global dataset of the 60 available clients is always class-imbalanced. To measure the degree of class imbalance, we make the global dataset have the same amount n1 of data samples for five classes and the same amount n2 for the other five classes. The ratio r between n1 and n2 is set to 3:1 and 5:1. We select 10 clients to participate in the training. As shown in Table 2 and Figure 6, our method consistently achieves higher test accuracy and more stable convergence, and it also outperforms the ideal setting where all the available clients are engaged. Since the global dataset of the 60 available clients in each communication round is always class-imbalanced, engaging all of them is not the optimal selection strategy in terms of test accuracy.
+ }, + { + "url": "http://arxiv.org/abs/2205.15202v1", + "title": "A Small Leak Will Sink Many Ships: Vulnerabilities Related to Mini Programs Permissions", + "abstract": "As a new format of mobile application, mini programs, which function within a\nlarger app and are built with HTML, CSS, and JavaScript web technology, have\nbecome the way to do almost everything in China. This paper presents our\nresearch on the permissions of mini programs. We conducted a systematic study\non 9 popular mobile app ecosystems, which host over 7 million mini programs,\nand tested over 2,580 APIs to understand these emerging systems better. We\nextracted a common abstracted model for mini programs permission control and\nrevealed six categories of potential security vulnerabilities in the permission\nenvironments. It is alarming that the current popular mobile app ecosystems\n(host apps) under study have at least one security vulnerability. We present\nthe corresponding attack methods to dissect these potential weaknesses further\nto exploit the discovered vulnerabilities. To prove that the revealed\nvulnerabilities may cause severe consequences in real-world use, we show three\nkinds of attacks related to the mini programs' permissions. We have responsibly\ndisclosed the newly discovered vulnerabilities, officially confirmed and\nrevised. Finally, we put forward systematic suggestions to strengthen the\nstandardization of mini programs.", + "authors": "Jianyi Zhang, Leixin Yang, Yuyang Han, Zhi Sun, Zixiao Xiang", + "published": "2022-05-30", + "updated": "2022-05-30", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR" + ], + "main_content": "Introduction Mini programs are light (commonly 2-4 MB) applications that run inside a specific mobile app (host app) [1]. As a new form of mobile application, leveraging web technologies like HTML, CSS, and JavaScript, mini programs are taking over the iOS and Android app ecosystems in China. The mini program technology enables a \"super app\" to bundle features and capabilities into a single native mobile app, so that users never need to leave that app; we call it a host app. Many host app vendors, such as Tencent (WeChat), ByteDance (TikTok), and Alibaba (Alipay), provide their own frameworks to support mini programs [2]. There are various ways to launch a mini program in these host apps: users can scan a QR code, search its name directly in the host app, receive a share from a group or friend, launch it from a content article, or even follow a link between mini programs. Figure 1a shows how to launch the Tesla mini program in WeChat as an example. With their clear interfaces and fast loading times, mini programs are very easy to use. More and more people use mini programs in their everyday life and need not worry about installing too many apps. There is a mini program for just about anything in China, whether to pay bills, play games, order a taxi, or book a doctor's appointment. As of 2021, there are over 1.26 billion monthly active users (MAU) of WeChat [3], 1 billion of TikTok [4], and over 658 million of Alipay [5]. The total number of mini program users is close to that of Facebook, the most popular social network worldwide, which has about 2.89 billion MAU [6]. Given the widespread use of mini programs, any incorrect permission assignment or setting can result in serious security and privacy problems. However, little research has focused on this issue.
That is not only because it is a new mobile application format, but also, more importantly, because the permission structure of mini programs is entirely different from any other current permission-based security model. As we know, the mobile operating system (OS) is responsible for allowing or denying the use of specific resources at an app's run time [7]. Developers should declare a list of permissions that the user must accept before installing or running an application. The OS then uses this security model to restrict every mobile app, native or hybrid, from accessing advanced or dangerous functionality on the device [8].

Figure 1: Tesla mini program in WeChat. Source: Screenshot. (a) Two ways to access a WeChat mini program. (b) These mini programs, which operate like a separate mobile app, can run in the background and function within another larger app (host app).

Unlike the current access control models and methods, the permissions of mini programs are based on host app authorization. That is, Android or iOS decides whether or not to allow the host apps to have some specific permissions, and the host apps authorize users to send pictures or perform other operations in the mini program. Therefore, from the view of the OS, the permissions of mini programs and the host apps are the same, which means mini programs may apply for permissions from the OS by using the reputation of the host apps. It is impossible for the OS to control the permissions of mini programs directly. Once the host app does not authorize the correct permissions to the mini programs, it may cause security problems [9]. Permission issues in mobile applications have been fully studied in the past, and their permission management mechanisms are relatively complete and formal [10, 11, 12, 13, 14, 15, 16, 17, 18]. M-Perm [11] can identify normal, dangerous, and third-party permission requests in applications to detect permission abuse. DroidNet [12] can provide advice on whether to accept or reject requests related to sensitive behaviors, helping users implement low-risk resource access control on untrusted applications to protect user privacy. However, these studies cannot solve the relevant security issues in the permissions of mini programs. In this paper, we present our systematic analysis of the current mini programs' permissions, where we dissect their framework, ecosystem, and potential vulnerabilities. We refer to the definition of sensitive permissions in Android and iOS and conduct a series of sensitive permission specification tests on mini programs. Specifically, we systematically studied 9 popular mobile app ecosystems, which host more than 7 million mini programs, and established an abstract model for the unique permission application process of existing mini programs. According to whether the host app and the OS allow or reject a permission application, we divide the application process into three situations: Host App Allow, OS Allow; Host App Reject, OS Allow; and OS Reject. The majority of the vulnerabilities belong to Host App Reject, OS Allow. According to this abstract model, we investigated more than 2,580 APIs and revealed six categories of potential security vulnerabilities that are common in most of the mini programs we studied. According to the types of leaked information, we present three kinds of proof-of-concept attacks to analyze these potential weaknesses further.
To prove that the exposed vulnerabilities may cause serious consequences on real-world systems, we describe three interesting cases of APIs and illegal mini programs. To mitigate the threat of these vulnerabilities, we list recommendations for mini program platforms, developers, and users. Finally, to ensure that the different host apps had enough time to fix the vulnerabilities, we contacted them individually about the vulnerabilities several months before submitting this manuscript. This allowed several host apps to finish patching the reported vulnerabilities, confirmed in writing. We also disclosed the vulnerabilities to various security response platforms, including the Tencent Security Response Center. In summary, we have made the following contributions in this paper: \u2022 We deeply analyzed the current mini program permissions and presented a common abstracted model of the permission control of mini programs. To the best of our knowledge, we are the first to systematically study mini programs and their permissions. \u2022 We detected more than 2,580 APIs. Through large-scale tracking and analysis of sensitive permission APIs, we found six categories of potential security vulnerabilities in the process of handling sensitive permission applications by mini programs. \u2022 We conducted empirical research on the 9 currently popular host apps and revealed the security issues corresponding to the six types of potential security vulnerabilities we discovered in the real world. We also showed three real-life attacks on mini program permissions to prove that the revealed vulnerabilities may cause serious consequences in real-world use. \u2022 Following the practice of responsible disclosure, we reported all the discovered design flaws, which were officially confirmed and revised. \u2022 To mitigate these potential vulnerabilities, we put forward suggestions to strengthen the standardization of the entire mini program permission mechanism, thereby enhancing user privacy protection.

2 Background
2.1 Framework of Mini Programs
Mini programs are a category of applications embedded in host apps without the need for downloading and installing [19]. The mini program framework consists of two components: View (the rendering layer) and App Service (the logic layer), which are respectively managed by two separate threads, as exhibited in Figure 2. The View interface is rendered by a WebView component, which handles page display and user event interaction, while App Service employs JsCore threads to run JavaScript, controlling the generation and processing of mini program data. The communication between the two threads is relayed by the Native app (i.e., the client).

Figure 2: The common framework of mini programs (View: WebView; App Service: JSCore; API requests and data are relayed by the Native host app, which runs on the OS).

The host app in this framework determines whether the mini program has the permission to acquire specific data through the corresponding API. That is, the OS determines whether to allow host apps to have some specific permissions, and host apps then transmit authorized data to the mini program through the API. In other words, the permissions of mini programs are inherited from the host app. Hence, if a host app does not properly manage data and permissions, data privacy and security issues will occur in its mini programs.
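To make this relay architecture concrete, the sketch below models the two layers and the Native mediator in Python. The class and method names are hypothetical: real frameworks exchange serialized messages between a WebView thread and a JsCore thread rather than making direct method calls, but the key property is the same, namely that every event and every data request passes through the host app.

```python
class NativeHost:
    """Stands in for the host app client: it relays messages between the two
    layers and mediates every data-acquiring API call."""
    def __init__(self, granted_scopes):
        self.granted_scopes = set(granted_scopes)  # scopes the host will forward

    def relay_event(self, app_service, event):
        # View -> Native -> App Service: the layers never talk directly.
        return app_service.handle_event(self, event)

    def call_api(self, api_name, scope):
        # The host app, not the OS, decides whether this mini program
        # receives the data behind the API.
        if scope in self.granted_scopes:
            return {"ok": True, "data": f"<data from {api_name}>"}
        return {"ok": False, "error": f"{api_name}: denied by host app"}

class AppService:
    """Logic layer: runs the mini program's JavaScript (modeled in Python)."""
    def handle_event(self, native, event):
        if event == "tap_locate_button":
            return native.call_api("getLocation", scope="scope.userLocation")
        return {"ok": False, "error": "unknown event"}

host = NativeHost(granted_scopes=["scope.userLocation"])
print(host.relay_event(AppService(), "tap_locate_button"))  # data flows via the host
```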
In this paper, we will focus on the App Service component to study the authorization mechanism of the host app and the design of APIs for mini programs.

2.2 Difference to Other Apps
As a newly emerging paradigm, mini programs differ substantially from the existing native app, hybrid app, progressive web app (PWA), and instant app. Yet, there is little research on mini programs, especially on their potential security and privacy issues. We briefly summarize the differences between mini programs and other categories of apps here. A native app is a mobile application developed specifically to be installed on a device's OS like Android or iOS. A hybrid app looks like a native app, but at its core it is a kind of web app wrapped in a native container, which loads the information on a page as the user navigates through the application. Although both hybrid apps and mini programs apply web technology, the hybrid app is still one type of native app that is under the management of the OS. In contrast, as a light version of an app, the mini program can only run on a native app's (host app's) interface and applies for the host app's permissions. Similar to the mini program, a PWA also applies web technology; differently, a PWA is a type of webpage or website that runs in the browser and can be added to the home screen. Hence, the host environment of a PWA is the browser, and the OS manages the PWA's permissions through the browser. Mini programs can be considered one type of \u201cInstant\u201d app embedded in host apps for disposable interactions at a fraction of the cost of an app; the host environment is a platform with extra capabilities that can support seamless service and access control over user data. Google's instant app [20] is very similar to mini programs. Both of them allow users to access an application's content without additionally installing the application, which saves application space on the device. In essence, Google's instant app is still a native app under the OS's permission control, while the mini program is under the host app's.

Figure 3: Under the iOS system, the system permission pop-up prompt displayed when Alipay requests a runtime permission (left) and the pop-up prompt when a mini program in Alipay requests a permission (right).

2.3 Authorization
Permissions in mobile apps can be divided into two types: install-time permissions and runtime permissions [21]. A runtime permission is also called a dangerous permission; it is related to users' privacy and can access users' private data, such as location information, contacts information, etc. Such information is considered sensitive, and access to it should acquire the user's authorization. When requesting a runtime permission, the system displays a prompt window, as shown in Figure 3. According to Section 2.1, the framework provides rich APIs to support mini programs requesting resources such as user information, location information, payment functions, etc. But the user does not authorize each API directly. In mini program development, the framework divides the dangerous APIs into multiple scopes according to their scope of usage, and the user authorizes at the scope level. After a scope is authorized, all of its APIs can be used directly.

2.4 Permission Control Abstract Model
We conclude an abstract model common to all mini programs' permission control, depicted as follows.
Mini programs are \"sub-applications\" built on mature mobile applications, which are in turn built on the OS. So, mini programs need to pass two layers of barriers when applying for sensitive permissions from users. That is, a mini program first applies to the host app where it is located, and then the host app applies to the OS. As shown in Figure 4, according to whether the host app and the OS allow or reject the permission application, we categorize the application process into the following three cases:

Figure 4: The sensitive permission application process of mini programs. Mini programs run in specific mobile applications (host apps), while mobile applications run in the OS.

\u2022 Both Host App and OS Allow. If a mini program applies for sensitive permission A that the user has not yet granted, the host app will pop up a window (as shown on the left of Figure 3) to ask whether the user is willing to grant the permission. If the user agrees, the host app will continue to apply for sensitive permission A to the OS, and the OS will pop up a window (the right side of Figure 3) to ask if the user is willing to grant this permission to the mini program. The mini program will successfully obtain the requested sensitive permission once the user agrees. Later on, when the user uses the mini program again, it can call the interface directly. \u2022 Host App Reject but OS Allow. In this case, the host app has obtained permission B from the OS, but when its mini program applies for permission B, the user chooses to refuse, and the call directly enters the interface's fail callback. Hence, the mini program fails to obtain permission B. In our later analysis in Section 3, most vulnerabilities we discover belong to this category. \u2022 OS Reject. If the host app does not get permission C from the OS, neither the host app nor the mini programs in it can get this permission. It is worth noting that when a developer calls an API requiring authorization, the authorization setting will appear on the mini program's authorization settings page regardless of whether the user grants it. The user can modify the authorization, and the calling result will change accordingly, until the user actively deletes the mini program. But the user's authorization settings are not synchronized with the server: if the user changes the device or switches accounts on the same device, the authorization will be prompted again.

3 Security Analysis
After establishing the background knowledge of mini programs, we perform a security analysis of mini programs' sensitive permission application process. According to the three scenarios discussed in Section 2.4, we discover six categories of potentially vulnerable modes during mini programs' processing of sensitive permission applications, as presented in Figure 5. Based on these vulnerabilities, we have created three proof-of-concept attacks.
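The three cases can be captured in a few lines. The sketch below is a simplified model with hypothetical function names; the second, deliberately vulnerable variant previews how the leaked APIs analyzed in Section 3.2.2 collapse the "Host App Reject, OS Allow" case into a silent grant.

```python
def request_permission(os_grants, user_scope_choice, permission):
    """Two-layer check: the OS gates the host app, the host app gates the
    mini program via the user's scope prompt."""
    if permission not in os_grants:            # Case 3: OS Reject
        return "denied: OS rejected (the host app itself lacks the permission)"
    if not user_scope_choice(permission):      # Case 2: Host App Reject, OS Allow
        return "denied: user rejected the scope prompt (fail callback fires)"
    return "granted"                           # Case 1: Both Allow

def request_permission_leaked(os_grants, permission):
    """Vulnerable variant: no scope prompt at all, so any mini program gets
    the data whenever the OS has granted it to the host app."""
    return "granted" if permission in os_grants else "denied: OS rejected"

os_grants = {"location", "camera"}  # permissions the OS gave the host app
print(request_permission(os_grants, lambda p: False, "location"))  # user refuses
print(request_permission_leaked(os_grants, "location"))            # silently granted
```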
In this section, we will first illustrate the principle for determining sensitive permissions in Section 3.1. Then, we elaborate the detailed vulnerabilities in Section 3.2 and present our proof-of-concept attacks in Section 3.3.

Figure 5: The vulnerabilities and attacks in the mini programs permissions (permission application cases mapped to security risks: permission encapsulation in leaked APIs, covering qualified conditions ignored, forgotten APIs under permission control, parameter settings ignored, and invalid permission settings; secretly transferring sensitive data; reusing cache files; permission management issues; permission differences in different environments; webview bypassing permission control; and the contacts, location information, and clipboard content attacks).

3.1 Whether to Request Authorization
According to Section 2.3, the mini program framework puts sensitive APIs into scopes. Whether the sensitive data can be obtained or processed by the mini program in the background serves as the framework's rule for sensitive operations. For example, in the WeChat mini programs, wx.getLocation obtains the current geographical location in the background. If there were no user authorization, the current location information could be transmitted without the user's awareness. Therefore, the mini program framework puts such APIs into a scope for unified authorization and management. In contrast, the API wx.chooseImage selects pictures from local photo albums or takes photos with the camera. Although the data involved are sensitive, the user needs to perform selection and other interactive operations to send the data out. The framework considers that the user knows about and authorizes such an operation, and this data cannot be transmitted without the user's awareness. Hence, the mini program framework does not put this kind of API into a scope.

3.2 Potential Vulnerabilities
We conduct a large-scale analysis of APIs from different host apps and target the APIs related to sensitive permissions. A total of 2,580 APIs were examined. Through large-scale tracking and analysis, examining the documentation, available source code, and demos, we identify the following six categories of potential security vulnerabilities in the process of sensitive permission applications by mini programs.

3.2.1 Reuse Cache Files
When a user quits or deletes a mini program, the cache files in the corresponding path should also be deleted, both to reduce the user's storage usage and to avoid their being reused. However, suppose the user closes the mini program in the host app and opens it again: the previously cached temporary file is still there, not deleted. So, an attacker has the chance to obtain the local temporary file path of the previous file without the user's awareness.

3.2.2 Permission Encapsulation in Leaked API
Some APIs related to sensitive permissions do not encapsulate permissions well (we refer to these APIs collectively as PEL-APIs). This means that any mini program can directly call these APIs to obtain the corresponding permissions, ignoring the need to apply for permission within the host app. The application process of sensitive permissions corresponding to this issue is shown in Figure 6.
When the host app has obtained a dangerous permission B from the OS, the mini programs can get this permission without asking for the user's consent in a pop-up window. Even if there is a pop-up inquiry and the user rejects it, the mini programs can still get this permission. In particular, we divide these vulnerabilities into the following four categories.

Figure 6: How mini programs illegally obtain sensitive permissions when the host app rejects but the OS allows.

Qualified Conditions Ignored. According to the description in Section 3.1, when the user must manually operate on the sensitive permissions involved to view, select, or transmit private data, the mini program framework assumes by default that the user has allowed the use of these rights through their own operation, so no pop-up is shown. In other words, there is no scope (mentioned in Section 2.3) corresponding to these APIs, and as long as the OS opens the relevant permissions to the host app, the mini programs can call these APIs and get data at will. In this situation, sensitive data can only be used when the user interacts with the mini program, which is equivalent to the user controlling their own sensitive information; mini programs cannot steal the user's sensitive information through this type of API. However, since mini program APIs are updated irregularly, a newly launched API that involves the same permissions may be capable of stealing users' private data. Many old APIs involving the same permissions are not divided into corresponding scopes, so these new APIs become a security risk. We believe that these APIs ignore the qualification condition of a sensitive permission (whether private data can be stolen without the user's awareness).
Forgotten API Under Permission Control. The host apps decide whether sensitive information can be passed to the mini program. For example, almost all host apps consider obtaining \u201clocation information\u201d a dangerous permission. Whenever a mini program wants to use the current user's location, it must first ask the user for consent. If the user refuses, the mini program cannot locate the user's specific position, regardless of whether the host app has obtained the permission. However, the designers neglected that some APIs should ask for authorization before sending location information to the mini programs. In other words, these neglected APIs do not belong to any scope (mentioned in Section 2.3) and can be used without user authorization.
Parameter Settings Ignored. Judging from their functions, some APIs may not actively obtain the user's sensitive permissions. However, the designer neglects the parameter settings in the API, which can also steal users' privacy. For example, my.chooseCity in the Alipay mini programs is an API that opens a city selection list. The parameter showLocatedCity indicates whether to display the currently located city. If it is set to true, the user's current city will be located directly, regardless of whether the host app has opened the location permission to the mini program. If the user does not do anything, the background cannot see any content about the location information.
However, as long as the user selects the area located by the system, the background will receive the current city together with the user's latitude and longitude, even if the location permission is closed at that moment.
Invalid Permission Setting. The permission settings of some APIs are inconsistent with the descriptions in their official documents. To some extent, this shows that the APIs related to sensitive permissions in mini programs do not encapsulate permissions well. For example, wx.choosePoi in the WeChat mini programs implements the function of opening the map and selecting a location. The document indicates that invoking this API requires the authorization of scope.userLocation. However, in our actual tests, we found behavior inconsistent with the official document description: the location can be selected without any user authorization. When the user chooses precise positioning, the background will return the latitude and longitude data of the current user.

3.2.3 Stealthily Transferring Sensitive Data
Vertical: The application process for this vulnerability is shown in Figure 6. When some mini programs developed by close partner companies involve sensitive permissions like geographical location, their host apps skip applying for permissions from users. The method shown in Figure 6 can bypass user authorization to stealthily transmit the user's sensitive information.
Horizontal: Different mini programs developed by the same company may share user information. All the sensitive permissions acquired by a mini program should be made visible, and the user should have the right to turn off the acquisition of sensitive permissions by the mini program. Our empirical study found that some mini programs obtain and leverage the user information in their associated mini programs by default, including account information, shipping address, etc., but never request the user's consent. This type of mini program skips the permission application step, and the authorized permissions in the settings are empty, so the user is unable to turn off the authorization of the relevant user information. Users cannot fully control the way their personal information is transmitted and the scope of its use, which risks continuously open permissions and unknown uses of personal information.

3.2.4 Permission Management Issues
A mini program may continue to use sensitive permissions to collect users' private information even if a user wants to turn off the permissions after using it. In particular, this can be divided into the following three situations.
Permission Setting Disappears. The permission settings page makes it convenient for users to manage the permissions of mini programs. However, some mini programs may get permanent authorization after a one-time authorization due to the lack of a settings page showing permissions. In this case, users cannot view what permissions they have granted to a mini program, and they cannot cancel a previously granted permission. As long as the host app is not uninstalled, the permission will remain open to the mini program. Thus, a mini program can use the previous authorization to continuously gather and use the user's personal information, posing a security risk.
Permission Cannot be Deleted.
Regarding the validity period of a mini program's authorization, once a user explicitly agrees to or rejects the authorization, this authorization relationship is recorded in the background until the user actively deletes the mini program. However, the permissions of some mini programs cannot be deleted, which can cause harmful consequences: these mini programs can use the previous authorization to continue collecting and using users' personal information.
Unable to Completely Remove Permissions. Although the authorization settings of some mini programs are deleted after a user actively removes the mini program, the sensitive personal information involved before (such as an ID number) is retained, and such information can be leveraged for queries and other operations. At this point, the setting options for the related permissions are not on the mini program's settings page, and the user cannot completely remove the permission. Hence, although a user cancels the authorization, the related mini programs do not update the authorization in time, thus continuously collecting and using the user's personal information. This creates a risk of a user's personal information being abused.

3.2.5 Webview Bypasses Permission Control
Mini programs can use the web-view page-bearing component to open H5 pages inside the mini program. In this process, the loaded H5 page needs to manually import the JS files provided by each platform (a web development toolkit based on the host apps for web developers). In this way, developers can apply the capabilities of the mobile phone system, such as taking pictures, selecting pictures, and location, with the help of host apps, to provide users with a better web experience. Through empirical study, we prove that the webview can bypass the specified API controls when the mini programs or host apps do not prompt for a permission application, or when the user refuses after being prompted. There are two scenarios in which mini programs illegally obtain sensitive permissions after using the web-view component. First, the mini programs may completely ignore both the OS's permission control over host apps and the host apps' permission control over mini programs. They can directly access any sensitive permission without notifying the user. Even when the OS rejects sensitive permissions for a host app, the mini programs can still obtain such permissions through the web-view page-bearing component. Second, only the OS's permission control over host apps is considered, while the host apps' permission control over mini programs is ignored. The specific process is shown in Figure 6. In this case, if the OS's permission control for the host app is turned off, the mini programs will not obtain the corresponding sensitive permissions. If the OS's permission control for the host app is turned on, the user allows the host app to use a certain sensitive permission; hence, any mini program in the host app can obtain that sensitive permission. We would like to remark that both scenarios may result in the disclosure of user privacy.

3.2.6 Permission Issues in Different Environments
The processing details of some APIs are not the same across different OSes and versions. Since the mini program framework cannot handle these APIs differently depending on the running environment, the same operation or program code will have different results.
For example, apps can read the clipboard contents without the user manually selecting \u201cPaste\u201d after the user copies something. This is by design. Nevertheless, if the user copies sensitive information and leaves it on the clipboard, all apps can capture it and possibly send it to a remote server. Leaving private content on the clipboard is therefore risky, and different OS versions give different feedback on this. Android and old versions of iOS will not inform the user when an application reads the clipboard, and many host apps also stay silent when a mini program reads the clipboard. If the OS or the host app does not consider the clipboard permission dangerous, mini programs with access to the clipboard can steal users' clipboard information in the background.

Table 1: The list of collected 9 host apps.
Company        | Host App              | Monthly Active Users
Tencent        | WeChat                | 1.26 billion [3]
Tencent        | QQ                    | 595 million [22]
Alibaba        | Alipay                | 658 million [5]
ByteDance      | TouTiao               | 400 million [23]
ByteDance      | TouTiao Speed Edition | (not separately reported)
ByteDance      | TikTok                | 1 billion [4]
Baidu          | Baidu                 | 607 million [24]
Multi Vendor   | QuickAPP              | 130 million* [25]
China UnionPay | UnionPay              | 10.6 million [26]
* Only contains Huawei's data.

3.3 Proof-of-concept Attacks
The vulnerabilities we found above can result in the leakage of users' sensitive information. Here, we provide three proof-of-concept attacks to demonstrate the seriousness of these vulnerabilities.
3.3.1 Location Information Attack
An attacker can obtain the user's location information through the issues discovered above. For example, the second, third, and fourth issues discussed in Section 3.2.2 all involve location information leakage. An attacker may simply call an API with a design bug in the host app to obtain the user's location information. Based on this, an attacker can infer personal information such as the user's hobbies and movement patterns. More serious consequences can also result, such as tracking and personal attacks. If an attacker steals a massive amount of private personal information, the security and trust of the whole society become an issue.
3.3.2 Contacts Attack
Some mini program frameworks do not separate out the permission for \u201ccontacts\u201d, which can result in serious consequences. For example, the WeChat mini programs do not divide the permissions of \u201ccontacts\u201d separately, so an attacker with ulterior motives can retrieve and match similar mobile phone numbers in the background by calling wx.searchContacts. Once a user is enticed to click the button bound to the event, a malicious mini program can traverse the user's address book information for fraud.
3.3.3 Clipboard Content Attack
Since the mini program framework does not restrict apps from reading the clipboard, developers only need a few lines of code to see what users have just copied. If a user copies an online banking password to paste somewhere and leaves this private information on the clipboard, a malicious mini program can read it in the background and see that data directly. The same goes for other sensitive data like names, addresses, credit card numbers, or even private photos. Mini programs can capture everything on a user's clipboard and do whatever they want with it. The copied text could be sent to a remote server without the user's awareness.

4 Empirical Study
This section presents our empirical study analyzing current mini program permissions. Our goal is twofold.
First, we collect the currently popular mini program platforms (Table 1) and then expose potential vulnerabilities as outlined in Section 3; we also exhibit the security issues exposed in the real world through detection. Second, we conduct case studies to show some real attacks on the permissions of mini programs, to prove that the revealed vulnerabilities may cause serious consequences in real-world use.

4.1 Mini Programs Ecosystem
As of June 2021, the number of mini programs across the whole network exceeded 7 million. We have identified 9 popular host apps developed by 6 companies, which are listed in Table 1. These 9 host apps are used by tremendous numbers of users. Each host app has its own development tools, and we use the respective development tools to test the different host apps. Through empirical study, we discuss their vulnerabilities and list them in Figure 7. In this figure, orange blocks indicate that the host app has the corresponding vulnerability, green blocks indicate that the host app fixed the vulnerability after our report, gray blocks indicate that the host app does not have such a vulnerability, and light yellow blocks indicate that it is uncertain whether such a vulnerability exists.

Figure 7: The vulnerability distributions in the collected 9 host apps, where the vertical axis lists the names of the host apps (WeChat, QQ, Alipay, Baidu, TikTok, TouTiao, TouTiao Speed Edition, QuickAPP, UnionPay) and the horizontal axis lists the vulnerabilities discussed in Section 3 (web-view bypasses, permission management, environmental differences, cache-related issues, parameter invalidation, secret transfer, ignored, forgotten). Cell format: left Android, right iOS.

Table 2: The list of collected PEL-APIs.
Vulnerability          | Host App | API
Ignore                 | WeChat   | wx.searchContacts
Forgotten              | QQ       | MapContext.moveToLocation, MapContext.getCenterLocation
Parameter Invalidation | Alipay   | my.chooseCity
Parameter Invalidation | WeChat   | wx.choosePoi

4.2 Vulnerability Analysis
4.2.1 Vulnerable Caching Mechanism
Our study discovered that in WeChat, when a user closes a used mini program (which has not been deleted from the recent-use list) and opens it again, the previously cached temporary files still exist. Because the local temporary file path can be obtained in the background, the temporary file can be reused without the user's consent before being recycled. In QQ and ByteDance, as soon as the user exits a mini program or reopens a previously used one, the previous temporary files are automatically deleted; hence, they do not have the vulnerable caching issue. This vulnerability has so far not been found in the mini programs of the other host apps.

4.2.2 Vulnerable Issues API
As discussed in Section 3.2.2, this kind of issue arises because APIs related to sensitive permissions do not encapsulate permissions well. In order to find APIs with such vulnerabilities, we set two criteria for analysis: 1) whether Android and iOS treat the involved permissions as permissions that need to be granted by the user; 2) whether using the permissions through these APIs involves interaction between users and mini programs.
If the user must manually operate to view, select, or transmit private data, then the mini program framework assumes by default that the user has allowed the use of the permissions without prompting, so these APIs are not within the scope of our research. Table 2 summarizes the APIs that we have found so far that are related to sensitive permissions but do not encapsulate permissions well.

4.2.3 Vulnerable Permissions Transfer
Vertical: \u201cAMap\u201d in the Alipay mini programs can be opened directly to accurately locate the user, ignoring the mini program's application to the user for location permission. Although Alipay and AMap have a deep cooperative relationship (both belong to Alibaba Group), this does not mean that their operations can bypass the user's will. No abnormality was found in the public test code of the \u201cdemo\u201d provided by Alipay, but using this mini program alone can directly and accurately locate users. This suggests an inconsistency between the source code of the \u201cdemo\u201d and the public test code provided; they may use other ways to bypass user authorization and secretly transmit the user's location.
Horizontal: Some companies may share user information among different mini programs. For example, after logging in to the \u201cPinduoduo\u201d mini program in WeChat, the same login information is displayed directly when the user first uses another mini program named \u201cPinduoduo Coupon\u201d, which is from the same company, Pinduoduo Inc. This type of mini program skips the permission application step, and the authorized permissions in the settings are empty. Hence, the user cannot turn off the authorization of the relevant user information. Users cannot fully control the dissemination of their personal information and its scope of use, which leads to continuously open permissions and the risk of personal information being used in unknown circumstances.

4.2.4 Insecure Permission Management
The permission management of UnionPay mini programs is rather messy. Since the security of UnionPay is mainly concerned with user identity information and payment, it is not very strict about other sensitive permissions such as the microphone, geographic location, camera, and photo album. On the privacy settings page of the UnionPay app, we can see the authorization information for mini programs. However, only the authorization of the phone number and identity information (name, ID number) is displayed; other permissions such as location, access to mobile phone albums, and the camera are ignored, and there is no permission settings page inside the mini programs. When a user grants a mini program a sensitive permission such as the microphone, geographical location, camera, or photo album, as long as the UnionPay app is not uninstalled, the mini program can always access these permissions. We also found that, in ByteDance, the permissions of a mini program are not deleted when the mini program is deleted. In the official ByteDance documentation, there is no clear explanation of the validity period of an authorization or whether the authorization remains open after deleting a mini program. Hence, mini programs under ByteDance may use previous authorizations to continuously access and collect users' personal information, posing a security risk.
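The management issues above reduce to grant records that outlive the mini program. The sketch below is a minimal, hypothetical model of this buggy lifecycle, not any vendor's actual implementation: deleting the mini program without purging its grant record lets a later reinstall silently inherit stale authorizations.

```python
class GrantStore:
    """Toy model of a host app's per-mini-program authorization records."""
    def __init__(self):
        self._grants = {}  # mini_program_id -> set of granted scopes

    def grant(self, mp_id, scope):
        self._grants.setdefault(mp_id, set()).add(scope)

    def has(self, mp_id, scope):
        return scope in self._grants.get(mp_id, set())

    def delete_mini_program(self, mp_id, purge_grants):
        # Correct behavior purges the record; the buggy behavior keeps it.
        if purge_grants:
            self._grants.pop(mp_id, None)

store = GrantStore()
store.grant("mp.example", "scope.userLocation")
store.delete_mini_program("mp.example", purge_grants=False)   # buggy path
print(store.has("mp.example", "scope.userLocation"))          # True: stale grant survives
```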
Although the authorization information of Alipay mini programs can be deleted after a user actively deletes the mini program, the sensitive personal information involved before (such as an ID number) is retained by individual mini programs. We canceled the authorization and removed the mini program, but the relevant information could still be consulted when entering the mini program again after a certain period. At this point, the permission settings of the mini program become empty, and the user cannot set the permissions on the settings page. This situation indicates that after the user cancels the authorization, the related mini programs do not update the authorization in time and may continue to collect and use the user's sensitive information even after the cancellation.

4.2.5 Insecure Webview Component
Some mini programs can carry web pages through the webview component. The communication between the webpage and the mini program is realized by the interfaces of the web development kit that each platform provides for web page developers on top of the host app; no communication outside the provided interfaces is supported. In neither QQ nor ByteDance did we find any such interfaces involving dangerous permissions. In the WeChat host app under Android, we determined that the user's storage information can be obtained using the webview component; that is, the first scenario described in Section 3.2.5 can be used to bypass the normative permission control. The difference under iOS is that in WeChat mini programs the first scenario is used to obtain the user's album permission, while the second scenario is used to obtain the user's camera permission. The mini programs in Alipay bypass the standard permission control through the second scenario described in Section 3.2.5 to obtain sensitive permissions such as the user's camera, photo album, and location, even if these permissions are turned off on the mini program's settings page at the time. That is, when users reject these sensitive permissions, the sensitive information can still be obtained by using the webview component. The mini programs under Baidu, in the iOS environment, can use the second scenario to obtain the camera permission to take pictures, and QuickApp on Huawei can obtain the user's stored information (pictures, audio, videos, documents, etc.) through it. This can cause users to disclose their precise location to an attacker without their awareness: if the user's mobile phone has GPS turned on when taking photos, then once a mini program can access the camera and photo album, a malicious mini program only needs to filter the photos taken by the user's phone to learn their geographical locations.

Figure 8: Examples of getting clipboard contents in Android (cases 1-6 across WeChat, Baidu, TikTok, TouTiao, TouTiao Speed Edition, Alipay, QQ, and QuickAPP (Huawei)). Only case 2 and case 3 prompt the user when the mini program gets the contents of the user's clipboard.

4.2.6 Vulnerable Clipboard Mechanism
Since the iOS 14 upgrade, a pop-up informs the user when an app reads the contents of the clipboard. Hence, there is a notification when mini programs read the clipboard, no matter which host app they run inside. Android is more complex, since there are multiple versions.
Xiaomi MIUI 12 (MIUI is a third-party mobile phone OS deeply optimized, customized, and developed by Xiaomi based on Android) splits out the clipboard permission independently, and users can monitor the reading and writing behavior of each application. However, on other Android phones (such as Huawei, Vivo, etc.), the permission control of the clipboard does not inform the users. Under these OS versions, we test the clipboard permissions of the different host apps in the Android environment and check their corresponding prompts, with the results shown in Figure 8. In order to exhibit that a mini program has obtained the contents of the user's clipboard, we display a modal dialog box. We found that 6 out of the 8 host apps (cases 1, 4, 5, 6 in Figure 8) did not give any prompt to the user when obtaining the clipboard information, which may leave them open to the theft of clipboard content described in Section 3.3.3. Cases 2 and 3 in Figure 8 show design examples in which the mini program does prompt when obtaining the contents of the user's clipboard. However, case 2 only reminds the user through a bubble that the mini program has obtained the clipboard's contents; the user merely learns of the access and cannot block it (the same as the prompt in the iOS environment). In case 3, obtaining the clipboard's content is set up as a permission for the user to operate: the user must select \"Allow\" before the mini program can obtain the corresponding information. We consider these designs to be the best security usability practice. It should be noted that, although case 3 is set up as a user-operated permission, there is still an issue with Alipay. In version 10.2.26.8000, Alipay pops up a window asking the user \"Apply to obtain the contents of your clipboard\", yet the contents of the clipboard have already been pasted on the page before the user clicks \"Reject\" or \"Allow\" (this is equivalent to the mini program still being able to obtain the clipboard information after the user refuses authorization). In version 10.2.23.7100, however, the mini program cannot get any content after the user clicks \"Reject\".

Figure 9: Overview of the measurement methodology (triggering in popular mini programs via the user behavior imitation, status monitoring and recursive, and data acquisition and storage modules; decompilation and static analysis of the source files; testing APIs in the development tools; automatic code comparison and manual analysis; statistics and results report).

4.3 An Example
WeChat is an application with more than 1 billion active users in China, and it is one of only five applications in the world that have surpassed this milestone; 78% of Chinese people aged 16-64 use WeChat [27]. So, we take WeChat as an example, detect the APIs with permission issues in popular mini programs, and expose security issues hidden in the real world. MiniCrawler [28], an open-source WeChat mini program crawler, can be used to automatically download, unpack, and index mini programs from WeChat. However, the tool has strict requirements on the WeChat version, and the APIs of the latest WeChat version have been replaced, so the tool cannot be used. Our measurement method is shown in Figure 9.
4.3.1 Implementation
To ensure the smooth progress of the experiment, we did not choose an Android emulator (according to our previous testing experience, running mini programs in an Android emulator is very laggy). Instead, we selected a physical Android device with root permission to carry out our operations, as shown in Figure 9. Our experiment is mainly divided into the following three parts:
Triggering. We use Airtest, a UI automated testing tool based on image recognition and poco control recognition, to perform automated clicks on mini programs and obtain their source files (that is, downloading the source file package of a mini program from the server to the local device). Notably, since mini programs keep running in parallel with the host app after being opened, each mini program needs to be completely exited after its simulated clicks are over; otherwise, too many open mini programs will slow the device down. During implementation, to ensure as far as possible that every page of a mini program is loaded successfully so that all of its source files are obtained, we adopt an LSH-based near-duplicate image detection algorithm to determine whether the current page is consistent with the previous page: if it is consistent, we treat them as the same page; if it is inconsistent, it is considered the next page. To illustrate our experiment more clearly, the simulated-click part is separated into the following three sub-modules:
User Behavior Imitation Module: The function of this module is to imitate the process of a user clicking into a mini program. The program completely replaces human operation: it automatically clicks the \u201csearch\u201d icon to enter the search bar through simulated-click technology, automatically inputs the name of the mini program stored in the sample library into the search bar through file retrieval, and identifies the precisely matched mini program based on image recognition and poco controls, so as to imitate user behavior and enter the mini program.
Status Monitoring and Recursive Module: The main function of this module is to monitor the current interface position of the mini program in real time and recursively scan the page content. By defining a page scanner and combining it with the LSH image detection algorithm, the depth of the current page is monitored in real time to decide whether to enter the next interface or call back to the previous interface through a recursive function, so as to scan as many different interfaces of the same mini program as possible and try to get all of its cached packages.
Data Acquisition and Storage Module: The main function of this module is to obtain the source files of mini programs. Take WeChat mini programs as an example: in WeChat version 8.0.2, the source files of a given WeChat mini program are all under the path \u201c/data/data/com.tencent.mm/MicroMsg/.../appbrand/pkg\u201d. We then use shell commands to dump the source code of the mini program into a folder named after the WeChat mini program to facilitate the follow-up work. At the same time, it is also necessary to promptly delete the code packages that have already been transferred elsewhere from this specific folder, to prevent confusion with the code packages of the next WeChat mini program.
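A compact sketch of this pipeline is shown below. It is illustrative rather than the authors' tooling: the paper drives the UI with Airtest and poco, while this sketch uses plain adb commands to stay self-contained; an 8x8 average hash stands in for the LSH near-duplicate check; the package path follows the WeChat 8.0.2 layout quoted above; a rooted device is assumed for the pull; and a counting helper previews the statistics step described next.

```python
import pathlib
import subprocess
from PIL import Image

PKG_DIR = "/data/data/com.tencent.mm/MicroMsg"  # WeChat 8.0.2 layout (see above)

def screenshot(path="page.png"):
    # Capture the current page over adb (the paper uses Airtest for this
    # and for the simulated clicks).
    raw = subprocess.run(["adb", "exec-out", "screencap", "-p"],
                         check=True, capture_output=True).stdout
    pathlib.Path(path).write_bytes(raw)
    return path

def average_hash(path, size=8):
    # 8x8 average hash: a simple stand-in for the LSH near-duplicate check.
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > avg)

def same_page(h1, h2, threshold=5):
    # Small Hamming distance => treat as the same page and backtrack.
    return bin(h1 ^ h2).count("1") <= threshold

def dump_packages(local_dir="./dump"):
    # Pull the cached mini program packages; requires a rooted device.
    subprocess.run(["adb", "pull", PKG_DIR, local_dir], check=True)

def count_api(source_root, api="wx.getClipboardData"):
    # Statistics pass over decompiled sources: count files referencing an API.
    return sum(1 for js in pathlib.Path(source_root).rglob("*.js")
               if api in js.read_text(errors="ignore"))
```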
Figure 10: The proportion of wx.getClipboardData in different categories of WeChat mini programs (education, travel, traffic, finance, self-operated by merchants, catering, medical, realty, business, sports, public welfare, people's livelihood services, and express delivery; proportions range from 0 to 0.3).

Decompile: We obtain the source code of the specified mini programs through decompilation. We added a graphical operation interface to wxappUnpacker [29], the decompilation script, and configured Node.js, the running environment it requires; this setup decompiles the mini program source files obtained in the previous step into their source code.
Statistics: This step counts the usage of APIs with permission issues in popular mini programs. Based on the issues found in Section 3.2, we sorted the different mini program platforms and listed the APIs that have permission issues. We screen for these APIs in the source code of the mini programs and measure the usage of the problematic APIs in popular real-world mini programs.

4.3.2 Results
We refer to the \u201cAladdin Index\u201d (an intelligent platform for data analysis of the entire mini program network) to select 578 popular mini programs for research, and we obtain 977 mini program packages after decompression. We take the clipboard permission as an example to gauge the severity. Figure 10 shows the proportion of wx.getClipboardData in various categories of WeChat mini programs. Almost every kind of WeChat mini program uses this vulnerable API as a standard API, which carries the potential risk that some mini programs apply this API to invade user privacy.

4.4 Case Studies
In this section, we present our case studies of some representative APIs and illegal mini programs.
4.4.1 Stealing Location Information
In the QQ mini program development tool, the index.js file uses qq.createMapContext to create a MapContext object; we then use MapContext.moveToLocation to move the map center to the current location (with the map component's show-location set to true), and then use MapContext.getCenterLocation to get the latitude and longitude of the current map center. Throughout this process, the user's geographic location, latitude, and longitude can be accurately obtained in the background without the user's authorization. When working with Tencent's location service, the user's specific location can be obtained accurately. We confirmed that QQ mini programs can obtain the coordinates of the center point of the current location without the user's authorization, on both Android and iOS, and can successfully transfer the coordinate values, the specific location, and other information to the mini program's background. Without authorization, the mini program can obtain the user's precise location. Once a mini program associates the location information with the account information in the background, the user's personal information is completely exposed. This is equivalent to unauthorized access to the user's whereabouts (personally sensitive information), and such mini programs are suspected of obtaining personal information in violation of regulations.

4.4.2 Stealing Contacts
The API wx.searchContacts provided by the WeChat mini programs finds contacts matching a similar mobile phone number.
This API does not limit the number of calls within a period. Suppose the phoneNumber parameter (the number to be searched) of this function is written inside a loop; in that case, most of the information in the user's contacts can be obtained by traversing numbers in sequence. Specifically, set a button in index.wxml and bind an event to it. Write the API wx.searchContacts into this event in the index.js file; when the user clicks this button, the foreground does not respond, while the background gets the information of the user's contacts. At this point, if the host environment WeChat has already obtained the contacts permission, the mini program can obtain part of the contact information without the user's awareness. After testing, we confirmed that WeChat mini programs can use this method to obtain information about some of a user's contacts and successfully pass the obtained information to the mini program's background, on both Android and iOS, without the user's authorization or awareness. Attackers can take advantage of this vulnerability and bind wx.searchContacts to an enticing button to lure users into clicking. Sensitive information in the user's address book may be read and uploaded to the background, resulting in information leakage.

4.4.3 Stealing Clipboard Information
Take the WeChat mini programs as an example. During the test, we wrote wx.getClipboardData into the onLoad function in the JS code, so that the clipboard content can easily be obtained without the user's awareness. Even if it is not written in the onLoad function, binding this event to a button control can also trigger it. In the real world, mini programs can put enticing slogans on the button to induce the user to click and then obtain the contents of the user's clipboard. After a large number of tests, we confirmed that in many host apps (WeChat, ByteDance, Baidu, QuickAPP), mini programs can obtain user clipboard information and successfully transfer the obtained information to the mini program's background without the user's authorization or awareness. Suppose the copied content is not destroyed after the user pastes it into an application; in that case, the content can still be obtained when the user opens a mini program, thus leaking the user's sensitive information. For example, when a user copies the name of a certain product and then opens a shopping mini program, the mini program can read the content of the user's clipboard and upload the private information in the background. The vendor of this mini program will know the user may want to buy this product and can push similar commodities, or analyze the user's behavior to target advertisements precisely.

4.5 Responsible Disclosure
We reported the results of our investigation to the Tencent Security Response Center on May 6, 2021, and August 27, 2021, and received their vulnerability confirmations. We also reported the vulnerabilities regarding the location and contacts permissions to CVE, received confirmation of the vulnerabilities on May 18, 2021, and August 29, 2021, and obtained the CVE numbers CVE-2021-33057 and CVE-2021-40180.

5 Discussion
5.1 Limitations and Future Works
1) We conducted our series of mini program permission tests with personal accounts. In other words, the tested APIs are all those available to individuals. According to our statistics, the number of APIs open only to non-individual developers and certified mini programs is 4.6% of the total number of APIs.
This type of interface usually includes obtaining the user's mobile phone number, sports data, etc. Different host apps have different attitudes towards such interfaces. For example, the WeChat mini programs allow individual developers to obtain user sports data, while in the Alipay mini programs this interface is only open to corporate users. Generally, interfaces that are only open to enterprise users are managed more strictly by host apps, since they can obtain more data. It is very difficult for us to test the APIs available to enterprise accounts; in the future, we will cooperate with enterprises to test and research these APIs. 2) In Section 4.3, we took the clipboard permission in the WeChat mini programs as an example and tested it in the mini programs that are currently online. The percentage of detected problems is not large. However, this does not mean that the vulnerabilities we put forward do not appear in mini programs on other host apps. As more global companies join the mini program ecosystem, the number of emerging host apps supporting mini programs (such as UnionPay and NetEase) increases. Because these platforms have just appeared, the vulnerabilities proposed in this article may not exist in them yet; however, as they gradually add new mini program functions or APIs, they may introduce the vulnerabilities we found in this article, so host apps still need to pay close attention to permissions in the future.

5.2 Mitigation Measures The purpose of our empirical analysis is to draw attention to the neglected security issue of the improper use of sensitive permissions in mini programs. In the face of the rapid development of mini programs and the sharp rise in the demand for personal privacy protection, it is necessary to strengthen the coordination of host apps, app vendors, and users to form a management system for the personal privacy protection of mini programs. The responsibility of the host apps is the biggest: in essence, mini programs are still software that provides various services for mobile users, and they are put on the shelves through the host apps, so host apps should strictly control the management of permissions, actively rectify APIs related to users' sensitive permissions, and protect user privacy with the greatest effort. App vendors need to consider how the existing OS handles specific APIs and how results are fed back. For example, facing the clipboard issues, a developer can let the user highlight a word in the app and select \"Copy\", or automatically clear the clipboard after a while. Users also need to enhance their security awareness to protect personal information when using mini programs, be vigilant against mini programs of unknown origin, and not hastily authorize their private information to mini programs, to avoid it being illegally collected and leaked and causing unnecessary losses. At the same time, users should be encouraged to actively report illegal activities.

6 Related Work Reason Exploration. Almomani et al. [30] demonstrated, discussed, and compared the latest technologies in the field of Android permissions and conducted up-to-date research on Android permissions, revealing that Android permissions face various security issues.
Zheran Fang et al. [31] investigate the arising issues in Android security, including coarse granularity of permissions, incompetent permission administration, insufficient permission documentation, over-claim of permissions, permission escalation attacks, and TOCTOU (Time of Check to Time of Use) attacks, and put forward several methods to further reduce Android security risks. Joel Reardon et al. [13] searched for sensitive data being sent over the network by apps that did not have permission to access it, using mechanisms to monitor each application's runtime behavior and network traffic. They found that apps can circumvent the permission model and gain access to protected data without user consent by using both covert and side channels, and they determined how this unauthorized access occurs. Mujahid [32] implements a technique in a tool called PERMLYZER, which automatically detects permission issues from an app's APK. Detection and Protection. On Android, the most common mechanism is the Android permission management mechanism [33, 34, 35, 36]. DroidNet [12] is an Android permission control and recommendation system, an Android permission control framework based on crowdsourcing. It provides recommendations on whether to accept or reject permission requests based on decisions from peer expert users, which can help users implement low-risk resource access control for untrusted applications and protect their privacy. HybridGuard [37], a framework based on subject authority and fine-grained policy enforcement for web mobile applications, can accurately monitor all web code to ensure the security of mobile applications; an interception and policy code is implemented in a single JavaScript file, and whether to intercept is determined by wrapping APIs for device resource access and DOM operations and checking the policy. M-Perm [11] is a detection tool that combines string analysis and static analysis to identify normal, dangerous, and third-party permission requests in applications in order to detect permission abuse. Cusper [38] is a new modular design of the Android permission model, which separates the management of system permissions from custom permissions declared by untrusted third-party applications. It introduces backwards-compatible naming conventions for custom permissions to systematically eliminate and prevent the loopholes of custom permissions. The mainstream approach for enhancing the Android permission mechanism is to identify over-declared permissions requested by an app [39, 40, 41, 42] and to recommend appropriate permissions for an app [43, 44]. TERMINATOR [16] provides a safe, reliable, yet non-disruptive approach to protect mobile users against permission misuse. Bin Liu et al. [14] proposed a Personalized Privacy Assistant (PPA) for mobile applications, which can manage the mobile permissions of applications and predict the privacy settings that users want by asking some questions, and they proposed a method to learn privacy profiles for permission settings. Bao [45] also proposes two novel approaches to realize permission recommendation. 7" + }, + { + "url": "http://arxiv.org/abs/2205.15170v1", + "title": "GAN-based Medical Image Small Region Forgery Detection via a Two-Stage Cascade Framework", + "abstract": "Using generative adversarial network (GAN)\\cite{RN90} for data enhancement of\nmedical images is significantly helpful for many computer-aided diagnosis (CAD)\ntasks. A new attack called CT-GAN has emerged.
It can inject or remove lung\ncancer lesions in CT scans. Because the tampering region may even account for\nless than 1\\% of the original image, even state-of-the-art methods\nstruggle to detect the traces of such tampering.\n This paper proposes a cascade framework to detect GAN-based medical image\nsmall region forgery like CT-GAN. In the local detection stage, we train the\ndetector network with small sub-images so that interference information in\nauthentic regions will not affect the detector. We use depthwise separable\nconvolution and residual connections to prevent the detector from over-fitting\nand enhance the ability to find forged regions through the attention mechanism.\nThe detection results of all sub-images in the same image are combined into a\nheatmap. In the global classification stage, using the gray level co-occurrence\nmatrix (GLCM) can better extract features of the heatmap. Because the shape and\nsize of the tampered area are uncertain, we train PCA and SVM methods for\nclassification. Our method can classify whether a CT image has been tampered\nwith and locate the tampered position. Sufficient experiments show that our\nmethod can achieve excellent performance.", + "authors": "Jianyi Zhang, Xuanxi Huang, Yaqi Liu, Yuyang Han, Zixiao Xiang", + "published": "2022-05-30", + "updated": "2022-05-30", + "primary_cat": "eess.IV", + "cats": [ + "eess.IV", + "cs.CR", + "cs.CV" + ], + "main_content": "Introduction Due to the privacy of medical images, the lack of data has always been a significant problem for machine learning tasks related to medical images. One way to alleviate this problem is the GAN [1], which can generate images that are highly similar to real images. GAN has therefore attracted wide attention in the medical image field, and several studies have used GANs to generate medical images for data enhancement with gratifying performance. The quality of GAN-generated images is sufficient to confuse radiologists. Therefore, once this technology is used for malicious attacks, it will lead to serious consequences. Deep convolutional neural networks can detect GAN-generated images [3-5], and the detection accuracy can be further improved through feature engineering [6-11]. To the best of our knowledge, however, there is no detection method aimed specifically at GAN-forged medical images. Although there is no specific solution for detecting medical images generated by GAN, there are some domain-generic methods. For example, Frank et al. used the discrete cosine transform (DCT) to detect GAN-generated images [10]. Marra et al., through incremental learning, can detect new GAN-generated images with only a small number of samples [12]. Cozzolino et al. learn feature extraction through auto-encoders and generalize the model with a small number of samples [13]. Then CT-GAN [2], a GAN-based attack difficult to detect even with state-of-the-art methods, emerged. It can inject or remove large lung nodules in CT images; examples of CT-GAN inject/remove tampering of lung nodules are shown in Fig. 2. The number of large lung nodules is a significant marker of lung cancer. Therefore, CT-GAN can make doctors misjudge the patient's condition, seriously threatening the patient's life. In addition, this attack may also be used to defraud medical insurance or maliciously discredit competitors.

Figure 2: CT-GAN tampered samples. The first row shows the removal tampering of CT-GAN: a lung nodule is removed from the CT slice image by CT-GAN. The second row shows the injection tampering of CT-GAN: a small nodule is tampered into a large nodule by CT-GAN.

However, new attacks like CT-GAN challenge the current detection methods. CT-GAN generates only a minimal area, and the surrounding area is used as a constraint condition to train a conditional generative adversarial network (CGAN); in that case, the generated image will be closer to the real image. We call an attack that uses a CGAN to forge a very small region in an image a GAN-based small region forgery attack. At present, there is no solution that can effectively detect GAN-based small region forgery attacks in medical images. The characteristic of the attack is that the ratio of the region generated by GAN is very small. Some methods, such as Rössler et al. [4], can detect partial generation such as face manipulation. However, because medical images' style, content, and storage format are very different from normal images and the tampered region is too small, even state-of-the-art detection methods cannot effectively detect GAN-based small region forgery attacks in medical images. It is conceivable that our medical image security is facing a considerable threat. In order to solve the above-mentioned problem, we propose a novel cascade framework based on a local detection network and a global classification method that can detect GAN-based small region forgery attacks in medical images.

Figure 1: Overview of our method. We cut out small sub-images from CT slices to train the local detection neural network. Each sub-image is detected and outputs a tampered probability. The detection results are combined according to their positions to generate a heatmap. Then we use GLCM to extract features from the heatmap, which are used for PCA and SVM model training. We use the trained model for global classification.

The first stage is local detection. We crop small sub-images from the CT slice image to train the detector network. The sub-image size is small enough that interference information in authentic regions will not affect the detector. Because the training data has only a single channel and the training data size is small, it is easy to over-fit; therefore, we design a lightweight neural network with fewer parameters and use early stopping to prevent over-fitting. After training, the detector can detect the tampered region effectively. Then, we traverse the entire CT slice image sub-image by sub-image; the detection results of all sub-images are combined and output as a heatmap, which indicates which area may have been tampered with. The second stage is global classification. Since CT-GAN can adjust the size of the tampered area to a certain extent, we use the gray level co-occurrence matrix (GLCM) to extract features from the heatmap and train PCA and SVM models on these features. Compared with methods that use the whole image as input, this approach can locate the tampered coordinates and requires less training data while training faster and achieving higher accuracy.
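For concreteness, this second stage maps naturally onto scikit-image and scikit-learn primitives. A minimal sketch (our own illustration, not the authors' code; the 100-level quantization, four angles, and 256 PCA components follow the description in Section 3.4, while variable names are placeholders):

import numpy as np
from skimage.feature import graycomatrix
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def heatmap_features(heatmap):
    # Quantize tampering probabilities in [0, 1] to integer gray levels 0..100
    q = np.rint(heatmap * 100).astype(np.uint8)
    # GLCM at distance 1 and angles 0, 45, 90, 135 degrees
    glcm = graycomatrix(q, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=101)
    return glcm.ravel()

# heatmaps: list of 2D arrays from the local detector; labels: 1 = tampered
X = np.array([heatmap_features(h) for h in heatmaps])
pca = PCA(n_components=256).fit(X)
clf = SVC().fit(pca.transform(X), labels)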
The main contributions are as follows: • A novel cascade framework based on local detection and global classification is proposed to detect and locate the tampered regions caused by CT-GAN attacks, including both injection and removal. • A local detection network with channel attention, spatial attention, depthwise separable convolution, and residual connections, which can better find the information of small areas in the image and prevent over-fitting. • A global classification method based on PCA and SVM with the gray level co-occurrence matrix (GLCM) as input features, which can effectively cooperate with the local detection to classify medical images. • Experiments showing that, for GAN-based small region forgery attacks in medical images like CT-GAN, our method can achieve excellent performance. The rest of this paper is organized as follows. In Section 2, we discuss the background and related work on the detection of GAN-generated images in recent years. In Section 3, we explain our method in detail. In Section 4, we describe our experimental results. In Section 5, we discuss our method, and in Section 6 we draw our conclusions. 2 Background and related works 2.1 Medical image A medical image uses some particular medium to interact with the human body to show the structure of the internal tissues or organs. Digital Imaging and Communications in Medicine (DICOM) is an international standard for medical images and their related information. It is widely used in various radiological diagnostic equipment (X-ray, CT, MR, ultrasound, etc.), and all medical images of patients are stored in the DICOM file format. The data used in this paper are mainly CT images in DICOM format. CT equipment scans slices one after another around a certain part of the patient's body. Fig. 3 shows an example of a CT scan. The scanned image is multi-layered; a three-dimensional image can be formed by stacking the slice images along the z-axis. The definition of medical images such as CT is positively correlated with the radiation dose, while high-dose radiation may damage the patient's health, so it is difficult to improve the definition of medical images. Besides, medical images have only one channel, so a GAN can fit the distribution of medical images more easily than that of normal three-channel color images. 2.2 Generative adversarial network Since GAN was proposed by Goodfellow et al., it has been one of the hot spots in the CV field. The GAN model is different from the traditional neural network structure: a GAN includes a generative model G and a discriminative model D. G generates a new sample from random noise, and D distinguishes whether the input sample is a real sample. The task of G is to generate images that D cannot distinguish from real ones, while the task of D is to distinguish between the images generated by G and the real images. The two networks compete against each other during training through this min-max game. In this way, G can learn the data distribution of the real samples.
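For reference, this min-max game is the standard GAN objective of Goodfellow et al. [1]:

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]

where $p_{\mathrm{data}}$ is the real data distribution and $p_z$ is the noise prior.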
Up to now, GAN has spawned a large number of variants, such as WGAN [14], PGGAN [15], StyleGAN [16], and so on. These variants are widely used in various CV tasks.

Figure 3: CT scan schematic. (a) A cartoon schematic of the CT scanning operation. (b) A part of a CT scan; a complete CT scan may include about 300 slice images, and only a portion is shown here.

2.3 Application of GAN in medical image Medical images are different from normal images and are strongly private. Even though there are many public data sets such as LIDC-IDRI, DDSM, MIAS, OASIS, etc., medical data sets are still insufficient. Because GAN can effectively alleviate the lack of training data, there is also a large body of research on medical imaging using GAN. In recent years, the GAN variants used most frequently in medical imaging are pix2pix [17] and CycleGAN [18]. GAN is widely used in image synthesis [19-23], noise reduction [24-27], cross-modality translation [28-33], and many other aspects, providing significant help for CAD. 2.4 Detect the GAN-generated image Because of the high performance of GAN, it has gradually become a trend to use deep learning to distinguish whether an image is generated by GAN. Due to the excellent performance of convolutional neural networks (CNNs) in CV tasks, CNNs such as ResNet [34], XceptionNet [35], and EfficientNet [36] are widely used in various CV fields, including digital image forensics [3,5,7]. Besides, Andreas et al. [4] prove the superior performance of XceptionNet in image source detection. Using certain features can make the network perform better. One way to distinguish whether an image is generated by GAN is to use GAN fingerprints [8]: a GAN leaves special fingerprints in the generated image due to its structure, and these fingerprints can be learned as features through deep learning and then used to distinguish the source of the image. Some works use the shortcomings of GAN to find special features that better distinguish whether an image is generated by GAN. For example, McCloskey and Albright find that the saturated or underexposed pixels of an image are suppressed by the normalization operation of the GAN generator [6], and they use this feature to distinguish real camera images from GAN images. Because the statistical characteristics of GAN images differ from those of real images, some works use three co-occurrence matrices on the RGB channels as features to distinguish the source of the image [9]. Zhang et al. suppress the image content information by converting the image to the YCrCb color space and then use the Scharr operator and the gray-level co-occurrence matrix (GLCM) to obtain edge features, allowing them to simultaneously detect GAN images and copy-move forgeries [7]. In addition, some works distinguish the source of the image from the defects of the up-sampling operations in GAN. Frank et al. found that the up-sampling in GAN causes grid-like artifacts in the generated images after a DCT operation [10], which can be used to distinguish the source of the image. Durall et al. found that the images generated by GAN cannot reproduce the actual spectral distribution [11], which is also due to the up-sampling operation. Therefore, after using azimuthal integration to extract spectral features, an SVM or K-Means can distinguish the source of the image without the need to train a deep CNN.
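To illustrate the spectral approach of [11], a minimal sketch (our own reading, not the authors' code; the feature length, kernel choice, and variable names X, y are assumptions) computes a radial average of the 2D Fourier spectrum and feeds it to an SVM:

import numpy as np
from sklearn.svm import SVC

def azimuthal_average(image):
    # Centered log-magnitude spectrum of the 2D FFT
    spec = np.fft.fftshift(np.fft.fft2(image))
    mag = np.log1p(np.abs(spec))
    h, w = mag.shape
    y_idx, x_idx = np.indices(mag.shape)
    r = np.hypot(y_idx - h // 2, x_idx - w // 2).astype(int)
    # Mean magnitude in each integer-radius bin gives a 1D spectral profile
    radial_sum = np.bincount(r.ravel(), weights=mag.ravel())
    counts = np.bincount(r.ravel())
    return radial_sum / np.maximum(counts, 1)

# X: grayscale images of equal size; y: 1 = GAN-generated, 0 = real
features = np.array([azimuthal_average(img)[:128] for img in X])
clf = SVC(kernel='linear').fit(features, y)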
2.5 Challenge The CT-GAN paper also proposed some detection methods that might be useful. Unfortunately, these methods are not suitable for GAN-based small region forgery attacks in medical images like CT-GAN, for the following reasons. On the one hand, there is a huge difference between medical images and normal images. Medical images show the structure and density of human internal tissues or organs, so they have unique content and style. Medical images such as CT, MR, and X-ray are all taken with special equipment, different from general photographing equipment, and are saved according to the DICOM standard. Medical images are single-channel in terms of the image data format, and the pixel-value range of a medical image is about 4096 levels; compared with normal gray-scale images ranging from 0 to 255, the pixel-value range of medical images is 16 times larger. Therefore, pre-training models built on normal images have little effect on medical images. In addition, methods that need to extract features from the three channels of an image, such as [7], which needs to compare three features extracted from different channels, are not appropriate. The co-occurrence matrix is one of the most effective features to distinguish whether an image is generated by GAN; however, due to the expansion of the pixel-value range, the cost of calculating the co-occurrence matrix becomes unacceptable, so the method using the co-occurrence matrix [9] cannot work either.

Figure 4: (a) A tampered CT image; the nodule in the red box is injected by CT-GAN. (b) The heatmap corresponding to (a), in which the bright red spot corresponds to the injected nodule. Because the preset sliding-window stride is greater than 1, the size of the heatmap is smaller than the original CT slice image. To facilitate observation, we enlarged the heatmap, superimposed it on the CT image, and adjusted their colors.

On the other hand, GAN-based small region forgery attacks are more difficult to detect. Take CT-GAN as an example. CT-GAN is a 3D CGAN that designs a 3D network referencing the pix2pix structure. The generator of CT-GAN is a 3D U-Net [37] structure. It cuts out a small cuboid from a series of CT slices of the patient, scales it into a small cube of 32×32×32 pixels, and masks the central 16×16×16 pixels of the cube to zero. This cube with a masked center is input into the generator as a condition. CT-GAN trained two models, which can generate large or small nodules in the cube's center. It is worth mentioning that the size of a CT image is (512 × 512) pixels, of which the region modified by CT-GAN is less than (32 × 32). That means the minimum number of tampered pixels is only 1/1024 of the total. Fig. 4 shows a CT image injected with a lung nodule. Hence, as we can see from Fig. 2 and Fig. 4, each tampering operation by CT-GAN modifies 1/1024 to 1/256 of the pixels of the image. Even if a CT image has been tampered with at four different locations, the tampered pixels only account for about 1%, and generally a CT image does not need to be operated on so frequently. In other words, 99% of an image is interference information. The untampered part is equivalent to the \"cover\" of the tampered part, which seriously hinders the model from learning the difference between positive and negative images.
That is why it is challenging even for state-of-the-art methods to detect the tampering traces of CT-GAN. Because of this, many methods based on statistical characteristics, such as [6,10,11], are ineffective. More seriously, CNNs are not sensitive enough to small tampered regions, so it is difficult to detect such attacks accurately; unfortunately, almost all current detection methods are based on deep CNNs, so direct detection is challenging. The current methods for distinguishing whether an image is generated by GAN aim at images wholly or mostly generated by GAN; for the time being, there is no special detection model for GAN-based small region forgery attacks in medical images like CT-GAN. In Section 4.4, we try to use the whole CT image as input to train state-of-the-art networks; unfortunately, the results are inferior. 3 Our method 3.1 Motivation Medical images are critical private information and are vitally important to the patient's life. At present, the integrity of medical images faces the threat of GAN-based small region forgery attacks, yet there is no practical method to detect them. There are two main reasons why attacks like CT-GAN are difficult to detect. First, medical images' style, content, and storage format are very different from normal images. If we convert these DICOM images to any other image format, information such as pixels or meta-data will be lost. Therefore, a model trained on normal images cannot generalize well to CT images, making pre-training models unusable, while training a new model needs much more data. Unfortunately, the sample data of medical images is very limited due to the restrictions on the use of health data under privacy regulations like the GDPR or CCPA. Hence, we cannot try to counter this attack from the perspective of training data. Moreover, what makes the detection task more challenging is that the tampered region is very small while the entire CT image is large. As mentioned above, the tampered region is less than (32 × 32) when the entire CT is (512 × 512), which means the ratio of a single tampered region to the original image may be less than 0.4%. This greatly reduces the sensitivity of general CNN detection methods, since the loss of spatial information limits the learning ability of a CNN. Hence, directly detecting the whole image results in very low accuracy. Based on the above, it is challenging even for state-of-the-art methods to detect CT-GAN attacks. Although no existing method can be directly applied to detect GAN-based small region forgery attacks in medical images, some research works can still inspire us to design an effective method. Andreas et al. [4] used a face tracking method to extract the face area of the image and found that using the extracted facial information as the input of the detector is more accurate than directly using the entire image as input. This means that a neural network can achieve better performance if the classifier focuses on more precise regions. Following their idea, we refer to the common preprocessing method of copy-move forgery detection, making the detector pay more attention to the local parts of the image through a sliding window.
Specifically, we split the target CT image into many small sub-images to train a local detector, with a corresponding method for using the local classification results to determine the global classification result. Christian et al. [35] replaced the Inception modules with depthwise separable convolutions and proposed their method, named XceptionNet, for computer vision. Since XceptionNet makes more efficient use of model parameters, it shows better runtime performance and higher accuracy than Inception V3 [38] on large-scale datasets like ImageNet while having fewer parameters than general deep CNNs. This architecture can effectively reduce overfitting when we cannot collect more data. Therefore, considering these features, we designed our method inspired by XceptionNet to detect GAN-based local tampering attacks. 3.2 Overview Our detection method is mainly divided into two stages: local detection and global classification; the method is outlined below. In the local detection stage, small sub-images are cut out from CT slices in a planned way to train the local detector neural network. The size of the cropped sub-images is small enough relative to the tampered area that the authentic regions are not enough to hinder the detector, so the detector can focus on learning the difference between real images and GAN-generated images. At test time, the tampered area may be hidden anywhere in the original image as background. Therefore, to minimize missed detections, our method examines each sub-image delimited by the sliding window and predicts the tampered probability of each sub-image. Once all sub-images have been detected, the results are combined according to their positions to generate a heatmap. This heatmap intuitively reflects which area in the original image may have been tampered with by GAN. In the global classification stage, we use GLCM to extract features from the heatmap, which are used for PCA and SVM model training; GLCM makes the features of the heatmap more prominent. We use the trained models for global classification. Intuitively, our method allows the neural network to observe the details of the image more carefully instead of looking at the overall picture; thus it performs better when facing GAN-based small region forgery attacks. 3.3 Local detection network architecture Because our training data is insufficient and our hardware is not powerful, we tend to use a lightweight network as the local detector. Using depthwise separable convolution can reduce a large number of required training parameters while maintaining a good training effect. For example, XceptionNet [35] and MobileNet [39] both construct the primary part of the network with depthwise separable convolutions, and they perform well in image classification.
But our classification task does not need such a deep network: because the structure of the training data is simple, using a network like XceptionNet or MobileNet would waste many computing resources and might lead to network degradation or overfitting. Therefore, we designed a shallower network as our sub-image classifier based on depthwise separable convolution. Our network structure is shown in Fig. 5.

Figure 5: The network architecture. If not otherwise specified, the default stride of the convolution operations is 1, the padding operation defaults to \"SAME\", the activation function defaults to Relu, and each convolution and depthwise separable convolution layer is followed by batch normalization by default.

The network's input is a (32 × 32) image matrix, and the features of the image are extracted through a small number of traditional convolutions and a large number of depthwise separable convolutions. The attention mechanism can effectively improve the performance of a deep learning model, and it is often used by copy-move detection and other detail-oriented tamper detection methods. Inspired by Woo et al. [40], we design a simple attention mechanism for our network, shown in Fig. 6. The spatial attention and channel attention can be computed by Eqs. (1) and (2):

A_s(\mathbf{F}) = \sigma(\mathrm{Conv}([\mathrm{MAX}_s(\mathbf{F}); \mathrm{AVG}_s(\mathbf{F})])) \quad (1)

A_c(\mathbf{F}) = \sigma(\mathrm{FC}(\mathrm{MAX}_c(\mathbf{F})) + \mathrm{FC}(\mathrm{AVG}_c(\mathbf{F}))) \quad (2)

This network sets the channel attention module after the convolution blocks with the largest number of channels, where using channel attention is most significant. Similarly, because the pooling layers further reduce the size of the feature maps, we set the spatial attention module before the convolution block containing the pooling operation, where the feature map size is the largest. After adding the attention modules, the training cost does not increase much, but the global classification performance improves significantly (a layer-level sketch is given at the end of this subsection).

Figure 6: The attention module of our network. The numbers of neurons in the three FC layers of channel attention are C, C/4, and C, where C is the number of channels. The stride of the convolution is 1 with SAME padding, and the activation function is Sigmoid.

The design of the residual blocks refers to ResNet [34]. The number of channels of the convolution kernels is unchanged when the input feature map size is the same as the output; the number of convolution kernels and channels is doubled when the input feature map size differs from the output (through the pooling layer). At the end of our network, Selu [41] is used as the activation function in the fully connected layers. The Selu function is given by Eq. (3), where λ and α are two meticulously chosen constants; it performs better than Relu in fully connected layers.

\mathrm{Selu}(x) = \lambda \begin{cases} x & (x > 0) \\ \alpha e^{x} - \alpha & (x \le 0) \end{cases} \quad (3)

With the above network structure, our network can save computing resources while maintaining high accuracy.
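Read literally, Eqs. (1) and (2) together with the Figure 6 caption translate into Keras-style layers roughly as follows (a sketch of our reading, not the authors' code; the hidden activations in the shared FC stack are not specified in the text, so we leave them linear):

import tensorflow as tf
from tensorflow.keras import layers

def channel_attention(x, reduction=4):
    # Eq. (2): shared FC stack (C -> C/4 -> C) applied to global max- and
    # average-pooled descriptors, merged by addition, squashed by sigmoid
    c = x.shape[-1]
    fc = tf.keras.Sequential([layers.Dense(c), layers.Dense(c // reduction),
                              layers.Dense(c)])
    avg = fc(layers.GlobalAveragePooling2D()(x))
    mx = fc(layers.GlobalMaxPooling2D()(x))
    w = tf.sigmoid(avg + mx)            # (batch, C) channel weights
    return x * w[:, None, None, :]      # rescale channels

def spatial_attention(x):
    # Eq. (1): 7x7 convolution over channel-wise max and mean maps
    mx = tf.reduce_max(x, axis=-1, keepdims=True)
    avg = tf.reduce_mean(x, axis=-1, keepdims=True)
    w = layers.Conv2D(1, 7, padding='same', activation='sigmoid')(
        tf.concat([mx, avg], axis=-1))
    return x * w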
3.4 Global classification method In our method, the window size is fixed, which is slightly different from the sliding window commonly used in target detection tasks. In target detection, if the window only covers a part of the target, the model may not be able to classify the target correctly, so it is necessary to adjust the window size and traverse the image multiple times. In our task, however, even if the window covers only a part of the GAN-generated region, the model can determine whether it is generated by GAN with high accuracy. Therefore, we only need a small window size to avoid the influence of authentic regions. First of all, we calculate a series of coordinates as the center coordinates of the sub-images and then crop a sub-image of size (32 × 32) around each of these centers. The cropping is restricted because the useful part of a CT image is only the inside of a circle tangent to the square frame, and most tampering occurs in this area; including the extra part in the calculation would waste a lot of time and space. The size of the CT image is denoted CTsize, the sub-image size imgsize, and the stride s. We calculate the vertical coordinates of all rows as follows:

y = \{\, imgsize/2 + i \times s \,\} \quad (4)

where

i = 0, 1, 2, \dots, \lfloor (CTsize - imgsize)/s \rfloor \quad (5)

Then, for a row with y = h, we calculate the horizontal coordinates as follows:

x = \{\, CTsize/2 + j \times s \,\} \quad (6)
Algorithm 1 Generate the GLCM of heatmap Input: heatmap\u2014The heatmap matrix with size H \u00d7 W. a, b\u2014Two constants determined by angle and distance. Output: GLCM\u2014A matrix with size g \u00d7 g, g is the gray levels number. heatmap = heatmap \u00d7 100, then round heatmap to integer, Initialize GLCM to 0 matrix x = 0, y = 0 while x < W do while y < H do if 0 < (x + a) < W and 0 < (y + b) < W then g1 = heatmap(x, y) g2 = heatmap(x + a, y + b) GLCM[g1, g2] = GLCM[g1, g2] + 1 end if end while end while 4 Experiments 4.1 Implementation Details 1) Data set We use the source code of CT-GAN, train the inject and remove models with the LUNA16 data set [42], and use the trained models to generate 3540 different CT scan samples. Among them, 1776 scans were injected lung cancer lesions (Equivalent to a large-diameter lung nodule), and 1764 scans were removed lung cancer lesions. For each fake sample, we select the tampered point and two slices before and after it, \ufb01ve CT slices, and the corresponding \ufb01ve slices before tampering. In the end, 35400 CT slice images were obtained. The tampering points are the CT slices with lung nodules. Therefore, we randomly selected about half of the real CT slice images (about 8850) and replaced them with slices at random locations. Among the 35400 CT slice images, 1200 images are randomly selected as the test set, 4800 images are randomly selected as the training set of global classi\ufb01cation, 2000 images are randomly selected as the veri\ufb01cation set of local detection, and the remaining 27400 slices are used as the training set of local detection. We mark the test set described in the previous paragraph as the test set CTGAN-ALL. Besides, we divide the test set CTGAN-ALL into two parts according to inject or remove tampering. The large nodule injected CT slice images, and the real large nodule images were marked as CTGAN-INJ. The large nodule removed CT images, and the real small nodule images were marked as CTGAN-REM. 9 \fGAN-based Medical Image Small Region Forgery Detection via a Two-Stage Cascade Framework In addition, eight CT scans different from the above data sets were retained. Two of them were real lung CT scans. One of them had malignant lung cancer lesions, and the other did not. These two scans were marked as MAL and BEN. In addition, one, two, and three large nodules were injected into three scans, respectively. The three scans were marked as INJ1, INJ2, and INJ3. Similarly, one, two, and three large nodules were removed from the remaining three scans marked as REM1, REM2, and REM3. Furthermore, we add impulse noise and gaussian noise to 5000 CT slice images, then use CycleGAN to reduce noise. In the end, 5000 images modi\ufb01ed by CycleGAN were obtained. The images without noise and the image denoised by CycleGAN, these 10,000 slice images are marked as the data set CycleGAN. We mark the images denoised by CycleGAN as the positive class and the images without noise as the negative class. Among them, 8000 images are used as the training set, 1000 images are used as the veri\ufb01cation set, and 1000 images are used as the test set. For each slice image in the test set, we use a (32 \u00d7 32) window to traverse the whole CT image (with the size of (512 \u00d7 512)) with 4 pixels stride. Our method use Eq. (4) and (6) to traverse the image. 
For each slice image of the training set and the cross-validation set, we use a one-pixel-shift method for data enhancement, so that each slice image in the training set can generate 25 sub-images. For a fake image (positive class), we mark the coordinates of the injection center point as (0,0), take the 25 coordinate points in the rectangle from (−2,−2) to (2,2), and, using these points as centers, cut out 25 sub-images of size (32 × 32). For a real image (negative class), we take the 10 coordinate points in the rectangle from (−2,−2) to (−1,2) and then randomly select 20 different coordinates from those calculated by Eqs. (4) and (6); taking these coordinate points as centers, we cut out 25 sub-images of size (32 × 32). By adding n negative samples corresponding to each positive sample, the model can better learn the difference between the images before and after tampering; we found that the model performs better when n = 10. 2) Setup All experiments were implemented using the TensorFlow 1.13 framework and were trained on a single NVIDIA GTX 2080 Ti GPU. The parameters of the training phase are as follows. We set the initial learning rate to 0.0005 and use exponential decay, which decays every 600 steps with a decay rate of 0.85. The mini-batch size is 56, the batch normalization decay parameter is 0.95, and the L2 regularization weight decay parameter is 0.0001. We use the Adam optimizer to minimize the cross-entropy loss. Except for the learning rate, the default parameters of the Adam optimizer are used, namely β1 = 0.9, β2 = 0.999, ε = 1 × 10^−8. Early stopping is set to stop training when the accuracy on the validation set no longer increases for three consecutive epochs; if early stopping is not triggered, training stops after 30 epochs.
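For readers reproducing this setup, the stated hyperparameters map onto, e.g., modern Keras APIs roughly as follows (a sketch against TF 2.x rather than the TF 1.13 used in the paper; the monitored metric name is an assumption):

import tensorflow as tf

# lr = 0.0005 * 0.85^(floor(step / 600)), i.e. decay every 600 steps
lr = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.0005, decay_steps=600,
    decay_rate=0.85, staircase=True)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr, beta_1=0.9, beta_2=0.999, epsilon=1e-8)
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_accuracy', patience=3, restore_best_weights=True)
# model.fit(..., batch_size=56, epochs=30, callbacks=[early_stop])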
3) Evaluation We regard tampered slice images as positive examples and real slice images as negative examples. The numbers of positive and negative samples in a real scene may differ; therefore, in addition to accuracy (ACC), we also use precision (P), recall (R), and F1-score (F1) to evaluate the model's performance. The tampering operation of CT-GAN is aimed at 3D medical images: the number of slices involved in one tampering operation can easily reach more than 30, and it is even larger if the slice interval is small. If the same area in 10 consecutive slices is predicted as tampered, we can easily judge that this position has been tampered with; however, if the above indicators were calculated per 2D slice image, their values would be deficient, which is unreasonable. Therefore, when detecting complete CT scans, this paper also takes the 3D tampered area (a series of slice images) as the unit and counts the indicators in the following way. For a tampered area, when 9 or more of the 10 consecutive slices including the tampered central slice (these slices must be positive examples in this experiment) are judged as positive, we consider that the tampering trace has been accurately found and mark it as a true positive; otherwise, it is regarded as a missed report and marked as a false negative. Similarly, for a real area, if 9 or more consecutive real slices are judged as positive, or 9 of 10 consecutive slices are judged as positive, it is regarded as a false positive. Finally, the precision, recall, and F1-score are calculated in this way.

4.2 Ablation Study In order to verify the effectiveness of each module in our method, we conducted ablation studies. Four experiments were used to verify the effectiveness of local detection, the attention mechanism, the Selu activation function, and GLCM feature extraction; in each experiment, we ablate one module from our method. In these experiments, ablating local detection means inputting the complete (512 × 512) slice image and using our network to train and predict directly. The experimental results are shown in Table 1.

Table 1: The ablation study results of our method. \"-SW\" means that no sliding window is used and the whole image is classified directly without local detection; \"-Attention\" means that the attention mechanism is not used; \"-Selu\" means that Relu is used instead of Selu in our network; \"-GLCM\" means that PCA and SVM classify the heatmap directly, without GLCM feature extraction.

Ablated module | ACC | P | R | F1
Ours-SW | 0.6583 | 0.6554 | 0.6660 | 0.6607
Ours-Attention | 0.9158 | 0.9561 | 0.8717 | 0.9119
Ours-Selu | 0.9192 | 0.9499 | 0.8850 | 0.9163
Ours-GLCM | 0.8717 | 0.8729 | 0.8700 | 0.8715
Ours | 0.9350 | 0.9628 | 0.9050 | 0.9330

The experimental results show that using a sliding window to divide sub-images for local detection is very helpful for detecting GAN-based small region forgery attacks like CT-GAN, significantly improving the detection performance. The Selu activation function and the attention mechanism slightly improve the performance of our method, and using GLCM improves it significantly; when the above modules are used together, the improvement is even better. 4.3 Detection of CT-GAN inject or remove attack A general GAN-based small region forgery attack may not use the same GAN structure to train two different models as CT-GAN does. Therefore, we divided the training set into two parts, in the same way as the test sets CTGAN-INJ and CTGAN-REM, and trained and tested detectors for the inject and remove models of CT-GAN separately. After training, we obtained two detectors for the different tampering models and tested the two kinds of tampering respectively. The test results are shown in Table 2.

Table 2: The detection results of CT-GAN inject and remove attacks. The training and testing of the two are carried out separately.

Test set | ACC | P | R | F1
CTGAN-INJ | 0.8999 | 0.9762 | 0.8200 | 0.8913
CTGAN-REM | 0.9670 | 0.9937 | 0.9400 | 0.9661

The experimental results show that, although the training data is reduced, our model can still detect CT-GAN's inject or remove model with a high F1-score: the detection accuracy and F1-score for the inject tampering model are about 90%, while those for the remove tampering model are about 97%. These results show that our method can still effectively detect the traces of tampering when facing a single tampering model, and that it is more sensitive to the traces of remove tampering. 4.4 Compare to state-of-the-art methods 1) Detect CT slices Because other feature extraction methods for detecting GAN-generated images are not suitable for CT-GAN, we use XceptionNet and ResNet50, two of the most advanced DCNNs, as baselines.
Existing studies have shown that XceptionNet performs excellently in GAN-forged image detection, with detection accuracy comparable to the most advanced detection methods, so we choose XceptionNet as a baseline. In addition, because both XceptionNet and our local detection network take depthwise separable convolution as their main structure, we also select ResNet50 as another baseline. Moreover, inspired by [10], we also tried to use the DCT of the sub-images as a feature (Ours-DCT). This experiment is divided into two settings: one uses the sliding window and the network as a local detector to predict the (32 × 32) sub-images; the other trains and tests in the general way, where the input of the network is the complete (512 × 512) slice image. The training set and test set are as described in Section 4.1. The training curves are shown in Fig. 7, and the test results are shown in Table 3.

Figure 7: The training accuracy curves. (a) Input with the whole image (512 × 512). (b) Input with sub-images (32 × 32).

Table 3: The detection results for CT-GAN with the state-of-the-art methods and ours. \"-W\" means whole-slice-image input; \"-DCT\" means the local detection network is trained with the DCT features extracted from the sub-images.

Method | ACC | P | R | F1
XceptionNet-W | 0.5912 | 0.5738 | 0.7064 | 0.6332
XceptionNet | 0.7125 | 0.7986 | 0.5683 | 0.6641
ResNet50-W | 0.5600 | 0.5488 | 0.6690 | 0.6030
ResNet50 | 0.6850 | 0.7643 | 0.5350 | 0.6294
Ours-W | 0.6583 | 0.6554 | 0.6660 | 0.6607
Ours-DCT | 0.8383 | 0.8512 | 0.8200 | 0.8353
Ours | 0.9350 | 0.9628 | 0.9050 | 0.9330

The experimental results show that, on the current data set, even the most advanced DCNNs such as XceptionNet and ResNet50 reach a test accuracy and F1-score of only about 65%, which means it is difficult for them to distinguish CT-GAN-tampered images from real images. Their performance improves when they are used as local detectors, but it is still unsatisfactory, possibly due to overfitting and network degradation. Without the sliding window, our method overfits seriously; with the sliding window, the model converges faster, and the accuracy and F1-score of our method rise to 93%, an increase of 28 percentage points. When using DCT as a feature, however, all indicators decline, with accuracy and F1-score of only about 84%. 2) Detect CT scans In order to test the performance of our method more comprehensively, we compared our method with the latest methods on complete CT scans. The model used is still trained under the mixed condition of injecting and removing, and the test set is the eight scans mentioned above. Fig. 8 shows several consecutive CT slice images near the tampering center points and the corresponding heatmaps; Table 4 shows the results of our experiment. The experimental results show that our model can effectively find the traces of CT-GAN tampering and is more stable than the other methods. Our method can automatically determine whether a scan has been tampered with through a simple strategy: for example, when any n of m consecutive images are classified as positive, the scan is considered to have been tampered with by CT-GAN.
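This scan-level decision rule is simple enough to state directly (a sketch; n and m are thresholds the user picks, matching the 9-of-10 convention used in the evaluation):

def scan_tampered(slice_preds, n=9, m=10):
    # True if any m consecutive slice predictions contain at least n positives
    for start in range(len(slice_preds) - m + 1):
        if sum(slice_preds[start:start + m]) >= n:
            return True
    return False

# e.g. scan_tampered([0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0]) returns True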
Therefore, even though a CT scan is three-dimensional while our model is two-dimensional, our model can effectively assist in distinguishing whether a CT scan has been tampered with. In addition, for CT scans with smaller slice spacing, such as INJ1, INJ3, REM1, and REM2, our method can detect more consecutive positive samples (more than 15); when the slice spacing is larger, the model detects fewer consecutive positive samples. Furthermore, many misjudgments occur in places unrelated to lung nodules, such as folds of clothes and calcified muscle tissue, which doctors can easily identify.

Table 4: The detection results of complete CT scans. \"Spacing\" is the spacing between two adjacent slice images. \"2D\" means the indicators are calculated in units of 2D slice images; \"3D\" means the indicators are calculated in units of 3D tampering areas. The \"2D\" and \"3D\" rows differ only in evaluation, not in detection; cells that do not apply are marked \"-\".

Test set (Spacing, mm) | Method | TP | TN | FP | FN | Accuracy | Precision | Recall | F1-score
BEN (2.5) | Xception | 0 | 162 | 16 | 0 | 0.9101 | - | - | -
BEN (2.5) | Resnet50 | 0 | 166 | 12 | 0 | 0.9326 | - | - | -
BEN (2.5) | Ours-2D | 0 | 178 | 0 | 0 | 1.0 | - | - | -
MAL (0.625) | Xception | 0 | 433 | 47 | 0 | 0.9101 | - | - | -
MAL (0.625) | Resnet50 | 0 | 413 | 67 | 0 | 0.9326 | - | - | -
MAL (0.625) | Ours-2D | 0 | 479 | 1 | 0 | 0.9979 | - | - | -
INJ1 (1.0) | Xception | 23 | 161 | 30 | 53 | 0.6891 | 0.4340 | 0.3026 | 0.3566
INJ1 (1.0) | Resnet50 | 12 | 176 | 15 | 64 | 0.7041 | 0.4444 | 0.1579 | 0.2330
INJ1 (1.0) | Ours-2D | 43 | 182 | 9 | 33 | 0.8427 | 0.8269 | 0.5658 | 0.6719
INJ1 (1.0) | Ours-3D | 1 | - | 0 | 0 | - | 1.0 | 1.0 | 1.0
INJ2 (2.5) | Xception | 25 | 68 | 1 | 37 | 0.7099 | 0.9615 | 0.4032 | 0.5682
INJ2 (2.5) | Resnet50 | 21 | 68 | 1 | 41 | 0.6794 | 0.9545 | 0.3387 | 0.50
INJ2 (2.5) | Ours-2D | 15 | 69 | 0 | 47 | 0.6412 | 1.0 | 0.2419 | 0.3896
INJ2 (2.5) | Ours-3D | 2 | - | 0 | 0 | - | 1.0 | 1.0 | 1.0
INJ3 (0.625) | Xception | 39 | 290 | 14 | 137 | 0.6854 | 0.7358 | 0.2216 | 0.3406
INJ3 (0.625) | Resnet50 | 32 | 291 | 13 | 144 | 0.6729 | 0.7111 | 0.1818 | 0.2896
INJ3 (0.625) | Ours-2D | 79 | 304 | 0 | 97 | 0.7979 | 1.0 | 0.4489 | 0.6196
INJ3 (0.625) | Ours-3D | 2 | - | 0 | 0 | - | 1.0 | 1.0 | 1.0
REM1 (1.8) | Xception | 11 | 88 | 2 | 55 | 0.6346 | 0.8462 | 0.1667 | 0.2785
REM1 (1.8) | Resnet50 | 7 | 89 | 1 | 59 | 0.6154 | 0.8750 | 0.1061 | 0.1892
REM1 (1.8) | Ours-2D | 42 | 90 | 0 | 24 | 0.8462 | 1.0 | 0.6364 | 0.7778
REM1 (1.8) | Ours-3D | 1 | - | 0 | 0 | - | 1.0 | 1.0 | 1.0
REM2 (1.8) | Xception | 41 | 88 | 0 | 64 | 0.6346 | 0.8462 | 0.1667 | 0.2785
REM2 (1.8) | Resnet50 | 30 | 86 | 2 | 75 | 0.6010 | 0.9375 | 0.2857 | 0.4380
REM2 (1.8) | Ours-2D | 80 | 88 | 0 | 25 | 0.8705 | 1.0 | 0.7619 | 0.8649
REM2 (1.8) | Ours-3D | 2 | - | 0 | 0 | - | 1.0 | 1.0 | 1.0
REM3 (2.5) | Xception | 20 | 50 | 1 | 61 | 0.5303 | 0.9524 | 0.2469 | 0.3922
REM3 (2.5) | Resnet50 | 19 | 50 | 1 | 62 | 0.5227 | 0.950 | 0.2346 | 0.3762
REM3 (2.5) | Ours-2D | 59 | 51 | 0 | 22 | 0.8333 | 1.0 | 0.7284 | 0.8429
REM3 (2.5) | Ours-3D | 3 | - | 0 | 0 | - | 1.0 | 1.0 | 1.0

5 Discussion and limitations 1) The correlation of sub-images In our method, we detect each sub-image directly, without considering the correlation between sub-images. Although the current experimental results meet the requirements of the detection task, we believe that introducing this correlation into the detection task would probably help improve the detection accuracy, efficiency, or generalization. Hence, in the future, we plan to conduct more extensive experiments to find the correlation between these sub-images.
For example, whether there is potentially hidden information between adjacent sub-images in the tampered region, and how this hidden information could help improve the performance. 2) Efficiency Since our method divides the medical image into multiple sub-images through a sliding window, the increased number of targets reduces the detection efficiency. Thanks to the high processing speed for a single sub-image, the overall speed of detecting a complete scan of the lungs is acceptable. As mentioned before, the main reason for using this sliding window is that the tampered region only occupies a small ratio of a normal image, which causes the existing detection methods that treat the target image as a whole to fail. Hence, the proposed method is the better choice from the detection point of view, as it is the only detection method for the CT-GAN attack. When the forgery attack is applied to the whole image, our method can also detect the tampered region with high accuracy, but its efficiency cannot catch up with methods that treat the target as a single unit, such as [7]. If we have prior knowledge of the size of the tampered region, it helps us select the proper processing method.

Figure 8: A part of the CT slice images and the corresponding heatmaps of two scans: (a) scan INJ1, (b) scan REM1. The first column of each scan is the tampered CT slice image; the second column is the heatmap output when XceptionNet is used as the local detector; the third column is the heatmap output when ResNet50 is used as the local detector; the fourth column is the heatmap output of our method.

3) Generalization The two most commonly used GAN structures in the medical image field are pix2pix and CycleGAN. CT-GAN uses the pix2pix structure, while works such as [28-33] are based on the CycleGAN structure. Therefore, we chose CycleGAN to construct another data set to examine the performance of our method; the CycleGAN data set is described in Section 4.1. The medical images denoised by CycleGAN can be regarded as entirely generated by CycleGAN. The test results show that our method can classify CycleGAN-tampered medical images and real medical images with 99.8% accuracy. In addition, if we did not use machine learning but some fixed pattern for global classification, it would be challenging to classify both GAN-based small region forgeries and images wholly generated by GAN. CycleGAN and pix2pix are the two most commonly used GAN structures in medical image synthesis, and our method can effectively detect the images generated by both. Although our method can detect images generated by the same or a similar GAN, the detection effect on GAN models not in the training set is not as good. Many studies [3,6,7,9-11,43] aim to improve the generalization ability of GAN detection methods, which may be achieved by studying the common defects of CNNs or GANs. For example, Wang et al. [43] tested multiple recent image generation models and found that the images generated by CNNs today share certain common defects. Chai et al. [3] summarize which parts are likely to make face images generated by GAN recognizable. However, the above studies did not take GAN-based small region forgery attacks into consideration, and how to combine these studies with GAN-based small region forgery attacks is still an open problem.
We plan to study this in the future." + }, + { + "url": "http://arxiv.org/abs/2102.13446v1", + "title": "Safe Distributional Reinforcement Learning", + "abstract": "Safety in reinforcement learning (RL) is a key property in both training and execution in many domains such as autonomous driving or finance. In this paper, we formalize it with a constrained RL formulation in the distributional RL setting. Our general model accepts various definitions of safety (e.g., bounds on expected performance, CVaR, variance, or probability of reaching bad states). To ensure safety during learning, we extend a safe policy optimization method to solve our problem. The distributional RL perspective leads to a more efficient algorithm while additionally catering for natural safe constraints. We empirically validate our propositions on artificial and real domains against appropriate state-of-the-art safe RL algorithms.", + "authors": "Jianyi Zhang, Paul Weng", + "published": "2021-02-26", + "updated": "2021-02-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "INTRODUCTION Reinforcement learning (RL) has shown great promise in various applications [Silver et al., 2017, Jin, 2017]. As such techniques start to be deployed in real applications, safety in RL [Garcia and Fernandez, 2015] is increasingly recognized as a key consideration, both during learning and during execution after training. Indeed, in many domains, from medical applications to autonomous driving to finance, the actions chosen by an RL agent can have disastrous consequences, and the corresponding risks therefore need to be controlled during training as well as during execution. While traditional RL does not take safety into account, recent work has started to study it more actively. Safety takes various definitions in the literature. In its simplest sense, it means avoiding bad states [Geibel and Wysotzky, 2005], but it can take more general meanings, such as decision-theoretic risk aversion [Borkar, 2010], risk constraints [Prashanth and Ghavamzadeh, 2016], satisfaction of logic specifications [Alshiekh et al., 2018], or simple bounds on expected cumulated costs [Yu et al., 2019]. For a given definition of safety, one may want to learn a policy that satisfies it without constraining the training process. Such an approach would provide a safe policy for deployment after training. In contrast, recent work in safe RL aims at enforcing safety during learning as well, which is a difficult task as the RL agent needs to explore. This paper follows this latter trend: safety is formulated as the satisfaction of a set of general constraints on distributions of costs or rewards. Thus, a safe policy is defined as a policy that respects some constraints in expectation or in probability. Our goal is to learn, among safe policies, one that optimizes the usual expected discounted sum of rewards. Furthermore, we also require safe learning, i.e., the safety constraints shall be satisfied during training as well. To that aim, we first propose a general framework that accepts various safety formulations, from bounds on CVaR, to variance, to the probability of reaching bad states. This general framework is made tractable by formulating the problem in the distributional RL setting, where distributions of returns are learned instead of only their expectations.
Based on this general distributional formulation, we extend an existing safe RL algorithm, Interior-Point Policy Optimization (IPO) [Liu et al., 2020], to the distributional setting, for which we formulate a performance bound. Contributions Our contributions are threefold: (1) We propose a general framework for safe RL where safety is expressed as the satisfaction of risk constraints, which is enforced during and after training. A risk constraint can be expressed as any (sub)differentiable function of a random variable representing a cumulative reward or cost. (2) In order to obtain a practical algorithm, we formulate our problem and solution method in the distributional RL setting. (3) Our proposition, called SDPO, is empirically validated on multiple domains with various risk constraints against relevant state-of-the-art safe RL algorithms.

Table 1: Summary of related algorithms: which constraints are accepted, and whether safety is guaranteed during learning/execution. PD (primal-dual) actually corresponds to several algorithms.

Constraint / property | PD | CPO | IPO | PCPO | SDPO
Expectation | yes | yes | yes | yes | yes
Variance | yes | no | no | no | yes
CVaR | yes | no | no | no | yes
(Sub)differentiable fun. | no | no | no | no | yes
Safe learning | no | yes | yes | yes | yes
Safe execution | yes | yes | yes | yes | yes

2 RELATED WORK Safe RL is becoming an important research direction in RL [Garcia and Fernandez, 2015]. In this paper, we distinguish three main non-exclusive aspects of safe RL: policy safety, algorithmic safety, and exploration safety. Policy safety corresponds to the goal of learning a safe policy such that its execution would avoid or limit the occurrence of bad outcomes (e.g., probability of reaching bad states, or a bound on performance). Safety can be modeled as additional constraints or penalization. In that sense, safe RL is related to risk-sensitive RL [Borkar, 2010, Chow and Ghavamzadeh, 2014, Chow et al., 2015], where the goal is to learn a policy that optimizes a risk-sensitive objective function; constrained RL [Achiam et al., 2017, Tessler et al., 2019, Miryoosefi et al., 2019, Liu et al., 2020], where the goal is to learn a policy that satisfies some constraints; and risk-constrained RL [Geibel and Wysotzky, 2005, Borkar and Jain, 2014, Prashanth and Ghavamzadeh, 2016, Chow et al., 2017, Brazdil et al., 2020], which in some sense combines the previous settings. The works in those three areas, with a few exceptions (CPO [Achiam et al., 2017], IPO [Liu et al., 2020], PCPO [Yang et al., 2020]), do not provide any safety guarantee during learning; they are based on a primal-dual approach (PD). As for the exceptions, they can only accept simple constraints on expected discounted total costs. Notably, our algorithm, called Safe Distributional Policy Optimization (SDPO), builds on IPO [Liu et al., 2020] and extends it to the distributional RL setting, which then allows the formulation of sophisticated constraints. See Table 1 for a summary.
Algorithmic safety corresponds to the idea that running a safe RL algorithm should also guarantee some safety property, e.g., continuous policy improvement [Pirotta et al., 2013], convergence to stationary point [Yu et al., 2019], satisfaction of logic speci\ufb01cations [Alshiekh et al., 2018], satisfaction of constraints [Achiam et al., 2017, Yang et al., 2020] during learning. However, none of those propositions can take into account sophisticated safety constraints (e.g., on risk measure). Exploration safety focuses on an important aspect of safe RL: the exploration problem during learning in order to limit/avoid selecting dangerous actions.. In this context, safety is generally modeled as avoiding bad states. One main line of work [Turchetta et al., 2016, Berkenkamp et al., 2017, Wachi et al., 2018, Cheng et al., 2019] tries to prevent the choice of a bad action by learning a model. Other directions have been explored, for instance, by using a veri\ufb01cation method [Fulton and Platzer, 2018] or by correcting a chosen action [Dalal et al., 2018]. However, this type of approaches requires the assumption that the environment is deterministic. Although research has been active in safe RL, to the best of our knowledge, no ef\ufb01cient algorithm has been proposed for the general framework that we propose. In particular, our proposition can learn a risk-constrained policy while ensuring the satisfaction of the risk constraint during learning. Our proposition is based on distributional RL [Bellemare et al., 2017], which has demonstrated that estimating distributions of returns instead of their expectations can ensure better overall performance of RL algorithms. Most work [Dabney et al., 2018, Yang et al., 2019] in this area focuses on value-based methods, extending mostly the DQN algorithm [Mnih et al., 2015]. However, one recent work has also investigated the extension of the distributional setting to policy optimization [Barth-Maron et al., 2018]. Our work is based on the IQN algorithm [Dabney et al., 2018] instead of more recent propositions (e.g., [Yang et al., 2019]) because of its simplicity and because it perfectly \ufb01ts our purposes. Note that in IQN, the authors consider optimizing a risk-sensitive objective function, but they do not consider constraints, as we do. 3 BACKGROUND In this section, we present the notations, recall the de\ufb01nition of a Markov Decision Process (MDP) as well as its extension to Constrained Markov Decision Process (CMDP), and review the notions (e.g., CVaR) and the related deep RL algorithms, which we use to formulate our method. Notations For any set X, \u2206(X) denotes the set of probability distributions (or densities if X is continuous) over X. For any function f : Y \u2192\u2206(X) and any (x,y) \u2208X \u00d7Y, f(x | y) denotes the probability (or density value if X is continuous) of obtaining x according to f(y). For any n \u2208N, [n] denotes {1,2,...,n}. Vectors (resp. matrix) will be denoted in bold lowercase (resp. uppercase) with their components in normal font face with indices. For instance, v v v = (v1,...,vn) \u2208Rn or M M M = (mij)i\u2208[n], j\u2208[m] \u2208Rn\u00d7m. 
MDP Model A Markov Decision Process (MDP) [Sutton and Barto, 2018] is defined as a tuple $(S, A, P, r, \mu, \gamma)$, where $S$ is a set of states, $A$ is a set of actions, $P : S \times A \to \Delta(S)$ is a transition function, $r : S \times A \to \mathbb{R}$ is a reward function, $\mu \in \Delta(S)$ is a distribution over initial states, and $\gamma \in [0,1)$ is a discount factor. In this model, a policy $\pi : S \to \Delta(A)$ is defined as a mapping from states to distributions over actions. We also use the notation $\pi_\theta$ to emphasize that the policy is parameterized by $\theta$ (e.g., the parameters of a neural network). In the remainder, we identify $\pi_\theta$ with its parameter $\theta$ for ease of notation. The usual goal in an MDP is to search for a policy that maximizes the expected discounted total reward: $J(\theta) = \mathbb{E}_{\mu, P, \pi_\theta}[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)]$ (1), where $\mathbb{E}_{\mu, P, \pi_\theta}$ denotes the expectation with respect to the initial distribution $\mu$, the transition function $P$, and $\pi_\theta$. We define the (state) value function of a policy $\pi_\theta$ for state $s$ as: $V^\theta(s) = \mathbb{E}_{P, \pi_\theta}[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t) \mid s_0 = s]$ (2), where $\mathbb{E}_{P, \pi_\theta}$ is the expectation with respect to the transition function $P$ and $\pi_\theta$. The (action) value function is defined as: $Q^\theta(s, a) = \mathbb{E}_{P, \pi_\theta}[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t) \mid s_0 = s, a_0 = a]$ (3), and the advantage function is defined as $A^\theta(s, a) = Q^\theta(s, a) - V^\theta(s)$. As there is no risk of ambiguity, to avoid clutter we drop $\mu$ and $P$ from the notation of the expectation from now on. Reinforcement learning (RL) is based on MDPs, but in RL the transition and reward functions are not assumed to be known. Thus, in (online) RL, an optimal policy needs to be learned by trial and error. CMDP Model The MDP model can be extended to the Constrained MDP (CMDP) setting [Altman, 1999] in order to handle constraints. In a CMDP, $m$ cost functions $c_i : S \times A \to \mathbb{R}$ for $i \in [m]$ are introduced in addition to the original rewards. For each cost function $c_i$, the corresponding value functions can be defined; they are denoted with a subscript, e.g., $J_{c_i}$, $V_{c_i}$, or $Q_{c_i}$. For a CMDP, the goal is to find a policy that maximizes the expected discounted total reward while satisfying constraints on the expected costs $J_{c_i}(\theta)$: $\max_\theta J(\theta)$ s.t. $J_{c_i}(\theta) \le d_i \;\; \forall i \in [m]$ (4), where $d = (d_i)_{i \in [m]} \in \mathbb{R}^m$ is a fixed vector of constraint bounds. Proximal Policy Optimization The family of policy gradient methods constitutes the standard approach for tackling an RL problem when considering parametrized policies. Such a method iteratively updates a policy parameter in the direction of a gradient given by [Sutton and Barto, 2018]: $\nabla_\theta J(\theta) = \mathbb{E}_{(s,a) \sim d^{\pi_\theta}}[A^\theta(s, a) \nabla_\theta \log \pi_\theta(a \mid s)]$, where the expectation is taken with respect to the state-action visitation distribution of $\pi_\theta$.
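For concreteness, here is a minimal PyTorch sketch of one such score-function update. It illustrates only the generic estimator above, not SDPO itself; the network sizes and the rollout batch (states, actions, advantage estimates) are placeholder choices.

import torch
from torch.distributions import Categorical

# toy policy over 4-dim states and 2 discrete actions (sizes are arbitrary)
policy = torch.nn.Sequential(torch.nn.Linear(4, 32), torch.nn.Tanh(), torch.nn.Linear(32, 2))
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

states = torch.randn(64, 4)          # placeholder rollout batch
actions = torch.randint(0, 2, (64,))
advantages = torch.randn(64)         # A(s, a) estimates, e.g., from a critic

dist = Categorical(logits=policy(states))
# minimizing the negative objective ascends E[A(s,a) * grad log pi(a|s)]
loss = -(advantages * dist.log_prob(actions)).mean()
opt.zero_grad()
loss.backward()
opt.step()
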
One issue in applying a policy gradient method is the dif\ufb01culty of estimating A\u03b8 \u03b8 \u03b8 online. This issue motivates the use of an actor-critic scheme where an actor (\u03c0\u03b8 \u03b8 \u03b8) and a critic (e.g., A\u03b8 \u03b8 \u03b8 or V \u03b8 \u03b8 \u03b8 depending on the speci\ufb01c algorithm) are simultaneously learned. Learning the value function can help the policy update, such as reducing the gradient variance. Proximal Policy Optimization (PPO) [Schulman et al., 2017] is a state-of-the-art actor-critic algorithm, which optimizes instead a clipped surrogate objective function JPPO(\u03b8 \u03b8 \u03b8) de\ufb01ned by: \u2211\u221e t=0 min(\u03c9t(\u03b8 \u03b8 \u03b8)A \u00af \u03b8 \u03b8 \u03b8(st,at),clip(\u03c9t(\u03b8 \u03b8 \u03b8),\u03b5)A \u00af \u03b8 \u03b8 \u03b8(st,at)), (5) where \u00af \u03b8 \u03b8 \u03b8 is the current policy parameter, \u03c9t(\u03b8 \u03b8 \u03b8) = \u03c0\u03b8 \u03b8 \u03b8 (at|st) \u03c0 \u00af \u03b8 \u03b8 \u03b8 (at|st), and clip(\u00b7,\u03b5) is the function to clip between [1 \u2212\u03b5,1 + \u03b5]. This surrogate function was motivated as an approximation of that used in TRPO [Schulman et al., 2015], which was introduced to ensure monotonic improvement after a policy parameter update. Although PPO is more heuristic than TRPO, its advantages are its simplicity and lower sample complexity. Distributional Reinforcement Learning The key idea in distributional RL [Bellemare et al., 2017] is to learn a random variable to represent the discounted return Z Z Z\u03b8 \u03b8 \u03b8(s,a) = \u2211\u221e t=0 \u03b3tr r rt where r r rt is the random variable representing the immediate reward received at time step t when applying action a in state s and policy \u03c0\u03b8 \u03b8 \u03b8 thereafter. In contrast, standard RL algorithms directly estimate the expectation of Z Z Z\u03b8 \u03b8 \u03b8(s,a), since Q\u03b8 \u03b8 \u03b8(s,a) = EZ Z Z\u03b8 \u03b8 \u03b8 [Z Z Z\u03b8 \u03b8 \u03b8(s,a)] where the expectation is with respect to the distribution of Z Z Z\u03b8 \u03b8 \u03b8(s,a). Recall that any real random variable Z can be represented by its cumulative distribution denoted FZ(z) = P(Z \u2264z) \u2208[0,1], or equivalently by its quantile function (inverse cumulative distribution) denoted F\u22121 Z (p) = inf{z \u2208R | p \u2264FZ(z)} for any p \u2208[0,1]. For ease of notation, Zp denotes the pquantile F\u22121 Z (p). In the Implicit Quantile Network (IQN) algorithm, Dabney et al. [2018] proposed to approximate the quantile function of Z Z Z(s,a) with a neural network and to learn it using quantile regression [Koenker, 2005]. Concretely, the quantile function of Z Z Z(s,a) can be learned as follows. Denote \u02c6 Z Z Z(s,a) the approximated random variable whose quantile function is given by a neural network \u03a8(s,\u03c4), which takes as input a state s and a probability \u03c4 \u2208[0,1] and returns the corresponding \u03c4-quantile \u02c6 Z Z Z\u03c4(s,a) for each action a. After observing a transition (s,a,r,s\u2032), \u03a8 can be trained by sampling 2N values \u03c4 \u03c4 \u03c4 = (\u03c41,...,\u03c4N) and \u03c4 \u03c4 \u03c4\u2032 = (\u03c4\u2032 1,...,\u03c4\u2032 N) with the uniform distribution on [0,1]. 
By inverse transform sampling, sampling $\tau$ amounts to sampling $N$ values from $\hat{Z}(s,a)$, namely $\hat{Z}_{\tau_1}(s,a), \ldots, \hat{Z}_{\tau_N}(s,a)$, and similarly for $\tau'$ and sampling from $\hat{Z}(s', \pi(s'))$, where $\pi$ is the current policy. Those samples define $N^2$ TD errors in the distributional setting: $\delta_{ij} = r + \gamma \hat{Z}_{\tau'_j}(s', \pi(s')) - \hat{Z}_{\tau_i}(s, a)$ (6). Following quantile regression, the loss function for training the neural network $\Psi$ at $(s,a,r,s')$ is given by: $L_{IQN} = \frac{1}{N} \sum_{i \in [N]} \sum_{j \in [N]} \xi^{\kappa}_{\tau_i}(\delta_{ij})$ (7), where for any $\tau \in (0,1]$, $\xi^{\kappa}_{\tau}(\delta) = |\tau - \mathbb{I}(\delta < 0)| \, L_{\kappa}(\delta)/\kappa$ is the quantile Huber loss with threshold $\kappa$, with $L_{\kappa}(\delta) = \frac{1}{2}\delta^2$ if $|\delta| \le \kappa$ and $\kappa(|\delta| - \frac{1}{2}\kappa)$ otherwise.
Interior-Point Policy Optimization In the CMDP setting, Interior-Point Policy Optimization (IPO) [Liu et al., 2020] is a recent RL algorithm that maximizes the expected discounted total reward while satisfying constraints on some expected discounted total costs. To deal with a constraint, IPO augments PPO's objective function with a logarithmic barrier function applied to it, which provides a smooth approximation of the indicator function. The constrained problem then becomes an unconstrained one with an augmented objective function: $\max_{\theta} J_{IPO}(\theta) = J_{PPO}(\theta) + \sum_{i \in [m]} \ln(d_i - J_{c_i}(\theta))/\eta$ (8), where $\eta$ is a hyper-parameter. As $\eta$ tends to $\infty$, the solution of (8) tends to that of the original constrained problem. The objective $J_{IPO}$ is differentiable; therefore, we can apply a gradient-based optimization method to update the policy.
4 PROBLEM FORMULATION Let $\Delta(\mathbb{R})$ denote the set of real random variables, so that $Z \in \Delta(\mathbb{R})^{S}$ denotes a function from states to random variables. Given an (unknown) CMDP, the problem tackled in this paper can be expressed as a constrained optimization problem formulated in the distributional RL setting: $\max_{\theta} \mathbb{E}_{s_0 \sim \mu, Z^{\theta}}[Z^{\theta}(s_0)]$ (9) s.t. $\rho_i(Y_i^{\theta}) \le d_i \;\; \forall i \in [m]$ (10), where $Z^{\theta}(s)$ corresponds to the return distribution generated by policy $\pi_{\theta}$ from the reward function, $Y_i^{\theta}(s)$ corresponds, for each $i \in [m]$, to the cumulated cost distribution from cost function $c_i$, and $\rho_i : \Delta(\mathbb{R})^{S} \to \mathbb{R}$ is a (sub)differentiable function. Note that this formulation is strictly more general than problem (4) thanks to the possibly non-linear functions $\rho_i$. We recall a few common cases for $\rho_i$ in Table 2. The expectation is a simple example. For episodic MDPs with absorbing bad states, another simple example is the probability of reaching bad states, which is defined like the expectation but applied to an undiscounted cost equal to 1 for a bad state and 0 otherwise. CVaR is a widely used risk measure in finance; in that context, the $\alpha$-CVaR of a portfolio is intuitively its expected return in the worst $\alpha \times 100\%$ cases. Here, we adapted the definition to rewards (instead of costs).

Table 2: Common examples for $\rho_i$.
$\rho_i$ | Definition
Expectation | $\mathbb{E}_{s_0 \sim \mu, Y}[Y(s_0)]$
Probability of bad states | $\mathbb{E}_{s_0 \sim \mu, Y}[Y(s_0)]$ (undiscounted 0/1 cost)
$\alpha$-CVaR of rewards | $\mathbb{E}_{s_0 \sim \mu}[\frac{1}{\alpha} \int_0^{\alpha} Y_{\zeta}(s_0) \, d\zeta]$
Variance | $\mathbb{E}_{s_0 \sim \mu}[\mathbb{E}_Y[Y(s_0)^2] - \mathbb{E}_Y[Y(s_0)]^2]$
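As a quick illustration of the CVaR row (not the paper's implementation; the sampled returns below are synthetic), the $\alpha$-CVaR of rewards can be estimated from samples as the mean of the worst $\alpha$-fraction of outcomes:

import numpy as np

def cvar_rewards(samples, alpha):
    # alpha-CVaR of a reward distribution: expected value of the worst alpha*100% outcomes
    var = np.quantile(samples, alpha)      # the alpha-quantile (value-at-risk level)
    return samples[samples <= var].mean()

rng = np.random.default_rng(0)
returns = rng.normal(1.0, 2.0, size=100_000)   # synthetic return samples
print(cvar_rewards(returns, alpha=0.1))        # well below the mean of 1.0
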
Naturally, a CVaR of an additional cost would also be possible. In contrast to previous methods, our framework can accept any (sub)differentiable definition for $\rho_i$ (e.g., coherent risk measures). Note that we chose to take the mean (over initial states) of the CVaRs instead of the CVaR of the mean. The latter would have been possible as well, but because CVaR is a convex risk measure, our definition is an upper bound of the CVaR of the mean, which means that our formulation is more conservative and, in that sense, safer. The same trick applies if $\rho_i$ were defined based on any other coherent risk measure, of which CVaR is only one instance. Similarly, for the variance, we use the mean (over initial states) of the variances instead of the other way around. Since the initial states are sampled independently, the $Y(s_0)$'s are independent; this means that our definition upper-bounds the variance of the mean of the $Y(s_0)$'s, leading to a more cautious formulation, which is more desirable for safe RL. In this paper, we define a safe policy as a policy satisfying constraints (10). Our goal is to learn a policy maximizing the expected discounted total reward (9) among all safe policies (i.e., safe execution). Besides, we require that any policy used during learning be safe (i.e., safe learning). The formulation of (9)-(10) in the distributional RL setting serves two purposes. First, as observed in distributional RL, estimating the distributions of the cumulated rewards improves overall performance. Second, many safety constraints (10), such as CVaR, become natural and simple to express in the distributional setting.
5 PROPOSED METHOD To solve problem (9)-(10) in the safe RL setting, we extend IPO to the distributional RL setting and combine it with an adaptation of IQN. Next, we explain the general principle of our approach, and then discuss some techniques to obtain a concrete, efficient implementation.
5.1 GENERAL PRINCIPLE To adapt IPO, we rewrite the surrogate objective function used in PPO in the distributional setting: $J_{PPO}(\theta) = \sum_{t=0}^{\infty} \min\big(\omega_t(\theta)\,\mathbb{E}[Z^{\bar{\theta}}(s_t,a_t) - Z^{\bar{\theta}}(s_t)], \; \mathrm{clip}(\omega_t(\theta), \varepsilon)\,\mathbb{E}[Z^{\bar{\theta}}(s_t,a_t) - Z^{\bar{\theta}}(s_t)]\big)$ (11). Problem (9)-(10) can then be tackled by iteratively solving the following problem with this surrogate function: $\max_{\theta} J_{PPO}(\theta)$ s.t. $\rho_i(Y_i^{\theta}) \le d_i \;\; \forall i \in [m]$ (12). Now, following IPO and using the log-barrier function, we reformulate problem (12) as an unconstrained problem: $\max_{\theta} J_{PPO}(\theta) + \sum_{i \in [m]} \ln(d_i - \rho_i(Y_i^{\theta}))/\eta_i$ (13). In contrast to convex optimization [Boyd and Vandenberghe, 2004], we introduce a constraint-dependent hyperparameter $\eta_i$ to better control the satisfaction of each constraint separately.
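To make the barrier reformulation (13) concrete, here is a minimal PyTorch sketch of the augmented loss. It is a simplified illustration with scalar stand-ins for $J_{PPO}$ and the $\rho_i$ estimates, not the full SDPO objective, and it assumes the iterate is strictly feasible so that each logarithm is defined.

import torch

def barrier_loss(j_ppo, rhos, bounds, etas):
    # negative of (13): minimize -J_PPO - sum_i log(d_i - rho_i) / eta_i
    loss = -j_ppo
    for rho, d, eta in zip(rhos, bounds, etas):
        slack = d - rho
        if slack.item() <= 0:      # barrier is undefined outside the feasible region
            raise ValueError("infeasible iterate: constraint slack <= 0")
        loss = loss - torch.log(slack) / eta
    return loss

# scalar stand-ins: one surrogate objective value and one constraint estimate
j = torch.tensor(1.0, requires_grad=True)
rho = torch.tensor(0.5, requires_grad=True)
barrier_loss(j, [rho], [torch.tensor(1.0)], [100.0]).backward()
print(rho.grad)   # positive: moving toward the constraint boundary is penalized

Larger $\eta_i$ weakens the barrier, matching the bound discussed in Section 5.3.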
Finally, we propose to solve problem (13) with an actorcritic architecture where both the actor and the critic are approximated with neural networks. For the critic, we adapt the approach proposed for IQN [Dabney et al., 2018] to learn random returns Z Z Z and random cumulated costs Y Y Y i\u2019s. For the actor, parameter \u03b8 \u03b8 \u03b8 of policy \u03c0\u03b8 \u03b8 \u03b8(a|s) is updated in the direction of the gradient of the objective function de\ufb01ned in (13): \u2207\u03b8 \u03b8 \u03b8JPPO(\u03b8 \u03b8 \u03b8)\u2212\u2211 i\u2208[m] 1 \u03b7i \u2207\u03b8 \u03b8 \u03b8\u03c1i(Y Y Y \u03b8 \u03b8 \u03b8 i ) di \u2212\u03c1i(Y Y Y \u03b8 \u03b8 \u03b8 i ) . (14) This gradient raises one dif\ufb01culty regarding the computation of \u2207\u03b8 \u03b8 \u03b8\u03c1i(Y Y Y \u03b8 \u03b8 \u03b8 i ), which corresponds to the gradient of a critic with respect to the parameters of the actor. When \u03c1i is linear (i.e., for expectation constraints), the policy gradient theorem [Sutton et al., 2000] applies and speci\ufb01es how to compute \u2207\u03b8 \u03b8 \u03b8\u03c1i(Y Y Y \u03b8 \u03b8 \u03b8 i ). However, when \u03c1i is non-linear (i.e., for more sophisticated risk constraints), the gradient in (14) cannot be obtained easily. To solve this issue, we propose a simple and generic solution, which consists in connecting the actor network to any critic network with a non-linear \u03c1i (see Figure 1 for an illustration where only one critic corresponding to non-linear \u03c1i is displayed). Using this construct, the exact gradient of \u03c1i(Y Y Y \u03b8 \u03b8 \u03b8 i ) can be computed by automatic differentiation if \u03c1i is (sub)differentiable and Y Y Y \u03b8 \u03b8 \u03b8 i is approximated with a neural network, as we assume. Note that in previous work, Dabney et al. [2018] who proposed to optimize a risk measure in IQN did not face this gradient issue because their algorithm is based on DQN [Mnih et al., 2015] and therefore does not have an actor network. As a side note, Figure 1: Architecture of SDPO where critic \u03a8 corresponds to the objective function and critic \u03a8i corresponds to constraint i. Both critics outputs a distribution. this construct could be used to deal with a more general problem than (9)-(10) where a non-linear transformation is also applied on the objective function. For instance, one may want to optimize the CVaR of some rewards subject to some other risk constraints, which is as far as we know a completely novel problem. We leave this investigation to future work. Like any interior point method, an initial feasible (i.e., safe) solution is needed. This requirement is actually not as strong as it seems. In many MDPs (or CMDPs), there is a known safe action for every state. For instance, in navigation problem, the action of not moving is safe if the current state is safe. In \ufb01nance, investing in cash or a risk-free asset is safe. For many problems, a dummy action that does not have any effect can be added to de\ufb01ne an initial safe action. More generally, when such a simple safe policy cannot be de\ufb01ned, an expert could possibly provide this initial safe policy or it could be obtained by pretraining with an imperfect simulator. 5.2 TECHNIQUES FOR EFFICIENT IMPLEMENTATION In this section, to simplify notations, we do not write the superscript \u03b8 \u03b8 \u03b8 for the random variables Z Z Z and Y Y Y i\u2019s. 
To make our \ufb01nal algorithm more ef\ufb01cient, we propose to learn Z Z Z(s) only, instead of Z Z Z(s,a) as it is the usual practice in distributional RL. This serves two purposes: (1) a state-dependent distribution is easier to learn, and (2) the advantage function can be easily estimated from a state value function alone. Note that for the constraints only Y Y Y i(s) is needed for any i \u2208[m]. Recall that the two random variables Z Z Z(s) and Z Z Z(s,a) are related by the following equation: Z Z Z(s,a) = R(s,a)+\u03b3Es\u2032\u223cP(\u00b7|s,a)[Z Z Z(s\u2032)] (15) Following IQN, random variable Z Z Z(s) is approximated by a random variable \u02c6 Z Z Z, which is represented by a neural network. The expectation of Z Z Z(s) can then be approximated by that 5 \fAlgorithm 1 SDPO Require: Constraint bound d d d, Initial policy network \u03c0\u03b8 \u03b8 \u03b8 0, Initial IQN network \u03a80, Hyperparameters \u03b5 for PPO clip rate and \u03b7i for each logarithmic barrier function. 1: for k = 0,1,... do 2: B \u2190run policy \u03c0\u03b8 \u03b8 \u03b8 for N trajectories 3: # update the IQN network 4: Sample \u03c41 < ... < \u03c4N from U [0,1] 5: # quantile regression 6: for i, j \u2208[N] do 7: \u03b4ij = r +\u03b3 \u02c6 Z Z Z\u03c4\u2032 j(s\u2032,\u03c0(s\u2032))\u2212\u02c6 Z Z Z\u03c4i(s,a) 8: end for 9: Update \u03a8k+1 with \u2207LIQN (see (7)) using B 10: Update \u03b8 \u03b8 \u03b8k+1 with \u2207J(\u03b8 \u03b8 \u03b8k) de\ufb01ned in (14) using B 11: end for of \u02c6 Z Z Z(s) with \u03c4 \u03c4 \u03c4 randomly uniformly sampled in [0,1]: E\u03b8 \u03b8 \u03b8[Z Z Z(s)] \u2248 N \u2211 i=1 (\u03c4i \u2212\u03c4i\u22121) \u02c6 Z Z Z\u03c4i(s). (16) setting \u03c40 = 0 by convention and assuming 0 < \u03c41 < \u03c42 < ... < \u03c4N < 1. The exact handling of the constraints depend on the de\ufb01nition of \u03c1i. As illustrative examples, we explain how they can be computed for some concrete cases. If \u03c1i is simply de\ufb01ned as an expectation, it can be dealt with like the objective function. For CVaR, it can be estimated as follows for a random variable Y Y Y(s0): c\u03b1(Y Y Y) \u2248c\u03b1( \u02c6 Y Y Y) = 1 \u03b1 \u2211 i|\u03c4i\u2264\u03b1 (\u03c4i \u2212\u03c4i\u22121) \u02c6 Y Y Y \u03c4i(s0) (17) Here, in contrast to the standard expectation (e.g., (16)), an implementation trick consists in sampling \u03c4 \u03c4 \u03c4 in [0,\u03b1] such as \u03c41 < \u03c42 < ... < \u03c4N = \u03b1 since (17) corresponds to the expectation conditioned on event \u201c \u02c6 Y Y Y \u2264\u02c6 Y Y Y \u03b1\u201d. For the variance, \u03c1i(Y Y Y) can be estimated by: N \u2211 i=1 (\u03c4i \u2212\u03c4i\u22121) \u02c6 Y Y Y \u03c4i(s0)2 \u2212 N \u2211 i=1 (\u03c4i \u2212\u03c4i\u22121) \u02c6 Y Y Y \u03c4i(s0) !2 (18) The pseudo code of our method is shown in Algorithm 1. 5.3 PERFORMANCE GUARANTEE BOUND For \ufb01xed \u03b7 \u03b7 \u03b7, solving (13) instead of (12) may incur a performance loss, which can be bounded under natural conditions, which we discuss below. Since this result uses weak Lagrange duality, we \ufb01rst recall the de\ufb01nition of the Lagrangian of (12): L (\u03b8 \u03b8 \u03b8,\u03bb \u03bb \u03bb) = JPPO(\u03b8 \u03b8 \u03b8)+ \u2211 i\u2208[m] \u03bbi(di \u2212\u03c1i(Y Y Y \u03b8 \u03b8 \u03b8 i )) and its dual function: g(\u03bb \u03bb \u03bb) = max\u03b8 \u03b8 \u03b8 L (\u03b8 \u03b8 \u03b8,\u03bb \u03bb \u03bb). The following bound can be proven: Theorem 1. 
If \u03b8 \u03b8 \u03b8 \u2217 1 is an optimal solution of (12), \u03b8 \u03b8 \u03b8 \u2217 2 is the strictly feasible optimal solution of (13) and the unique stationary point of L (\u00b7,\u03bb \u03bb \u03bb \u2217) with \u03bb \u2217 i = 1 \u03b7i(di\u2212\u03c1i(Y Y Y \u03b8 \u03b8 \u03b8\u2217 2 i )) then: JPPO(\u03b8 \u03b8 \u03b8 \u2217 1)\u2212JPPO(\u03b8 \u03b8 \u03b8 \u2217 2) \u2264\u2211 i\u2208[d] 1 \u03b7i (19) Proof. This result generalizes Theorem 1 of [Liu et al., 2020], whose proof implicitly uses convexity (which does not hold in deep RL) and follows from the discussion in page 566 of [Boyd and Vandenberghe, 2004]. We adapt the proof to our more general setting. We have: JPPO(\u03b8 \u03b8 \u03b8 \u2217 1) \u2264g(\u03bb \u03bb \u03bb \u2217) (20) = JPPO(\u03b8 \u03b8 \u03b8 \u2217 2)+ \u2211 i\u2208[m] \u03bb \u2217 i (di \u2212\u03c1i(Y Y Y \u03b8 \u03b8 \u03b8\u2217 2 i )) (21) = JPPO(\u03b8 \u03b8 \u03b8 \u2217 2)+ \u2211 i\u2208[m] 1 \u03b7i (22) Step (20) holds by weak duality because \u03bb \u2217 i \u22650 for all i \u2208[m] (since \u03b8 \u03b8 \u03b8 \u2217 2 is strictly feasible). Step (21) holds because we have by de\ufb01nition of \u03b8 \u03b8 \u03b8 \u2217 2: \u2207\u03b8 \u03b8 \u03b8JPPO(\u03b8 \u03b8 \u03b8 \u2217 2)\u2212\u2211 i\u2208[m] \u2207\u03b8 \u03b8 \u03b8\u03c1i(Y Y Y \u03b8 \u03b8 \u03b8\u2217 2 i ) \u03b7i(di \u2212\u03c1i(Y Y Y \u03b8 \u03b8 \u03b8\u2217 2 i )) = 0 (23) which implies that \u03b8 \u03b8 \u03b8 \u2217 2 maximizes L (\u00b7,\u03bb \u03bb \u03bb \u2217) since \u03b8 \u03b8 \u03b8 \u2217 2 is assumed to be its unique stationary point. Step (22) holds by de\ufb01nition of \u03bb \u03bb \u03bb \u2217. The conditions in this theorem are natural. In order to apply an interior point method, the constrained problem needs to be strictly feasible. The condition on the stationarity of \u03b8 \u03b8 \u03b8 \u2217 2 is reasonable and can be controlled by setting \u03b5 (used in the clipping function of JPPO) small enough. As a direct corollary, this result implies that if (13) could be solved exactly, the error made by algorithm SDPO is controllable via setting appropriate \u03b7i\u2019s. Naturally, in the online RL setting, this assumption does not hold perfectly, but this result still provides some theoretical foundation to our proposition. In the next section, we validate the algorithm using various experimental settings. 6 EXPERIMENTAL RESULTS The experiments are carried out in three different domains to validate our algorithm: random CMDPs, safety gym, as well as \ufb01nancial investment. See Appendix A for details about hyperparameter settings. Random CMDPs are CMDPs with N states and M actions, where transition probabilities P(s\u2032 | s,a) are randomly assigned with \u2308lnN\u2309positive values for each pair of stateaction, and rewards are sampled from a uniform distribution, 6 \f(a) rewards (b) constraint Figure 2: 2(a): Average performance over 10 runs of PPO, SDPO and PD-CVaR under the random CMDP for N = 1000. 2(b): 0.1-CVaR bounded by 51. Both SDPO and PDCVaR converge to the level indicated by the dashed line. i.e., r(s,a) \u223cU [0,1]. In the experiments, we set N = 1000 and M = 10. We consider two cases: a bound over the variance or a bound over the CVaR, both over the distribution of discounted total rewards. Safety gym [Ray et al., 2019] includes a set of environments designed for evaluating safe exploration in RL. 
They all correspond to navigation problems where an agent (i.e., Point, Car, Doggo) moves to some random goal positions to perform some tasks (i.e., Goal, Button, Push) while avoiding entering dangerous hazards or bumping into fragile vases. Each task has two dif\ufb01culty levels (i.e., 1 or 2). See Appendix A for more details. For space reasons, we only present a selection of results in this domain in the main paper. More experimental results in these Mujoco environments are shown in Appendix B. The third domain is the \ufb01nancial stock market. The RL agent can observe the close prices of the stocks in one day, i.e., the observation ot = p p pt = (1, p1,t,..., pN,t) for N selected stocks where the \ufb01rst component corresponds to cash. We further assume that all transactions are dealt at these prices. The action of the agent is de\ufb01ned by a portfolio vector, which corresponds to allocation weights over cash and stocks, i.e., at = w w wt+1 = (w0,t+1,...,wN,t+1), w0 (resp. wi for i \u2208[N]) is the weight for cash (resp. stock i) and \u2211N i=0 wi,t = 1. Naturally, for each stock, we want to maximize the pro\ufb01t. Thus, with reward function rt = ln\u2211N i=0 wi,t pi,t pi,t\u22121 , optimizing the undiscounted cumulative rewards can maximize the pro\ufb01t. We set the CVaR boundary d1 = 0 to avoid any possible loss. Detailed settings of the experiment are listed in Appendix A. In all our experiments, all the agents are initialized so that they are in a feasible region at the beginning. In practice, an initial safe policy can be de\ufb01ned using domain knowledge or by an expert, e.g., in Mujoco domain, the agent can be initialized to stay and doing nothing. For fairness, the PPO agent is also initialized with the same safe policy as all other agents. Two policy gradient algorithms with CVaR and variance constraints respectively, PD-CVaR [Chow and Ghavamzadeh, 2014] and PD-VAR, which is modi\ufb01ed from (a) rewards (b) constraint Figure 3: 3(a): Average performance over 10 runs of PPO, SDPO and PD-VAR under the random CMDP for N = 1000. 3(b): Variance bounded by 2. Both SDPO and PD-VAR converge to the level indicated by the dashed line. Algorithm 2 in [Prashanth and Ghavamzadeh, 2016] are used as baselines in the \ufb01rst domain. SDPO is compared with CPO [Achiam et al., 2017], PCPO [Yang et al., 2020], and IPO [Liu et al., 2020] in the second domain. PPO [Schulman et al., 2017] is evaluated on all domains to serve as a non-safe RL method. Note that in contrast to our architecture SDPO, none of those algorithms can tackle the problem de\ufb01ned in (9)-(10) in its most general form. The experiments are designed to evaluate SDPO in a variety of domains with various risk constraints and to answer the following questions: (A) How does SDPO compare with methods based on Lagrangian relaxation? (B) How does SDPO compare with other safe RL algorithms? Does the distributional formulation of SDPO help compared to IPO? (C) How does SDPO perform with multiple constraints (cumulative cost and probability of reaching a bad states)? (D) How does SDPO perform on a real domain? How does the constraint stringency impact the performance of SDPO? Question (A) To answer (A), we perform some experiments on the \ufb01rst domain, random CMDPs, with either a constraint on CVaR or a constraint on variance. Both are based on the rewards. Therefore, the \ufb01rst needs to be lowerbounded, while the second needs to be upper-bounded. 
The con\ufb01dence level is \ufb01xed to \u03b1 = 0.1 and the bound for CVaR is set to 51 and that for the variance is set to 2. The bounds were chosen so that they are not too restrictive. From the results in Figure 2, as expected, PPO without constraint achieves the best total rewards and converges faster than the constrained ones. When the CVaR value is bounded, PD-CVaR and SDPO both converge to a slightly worse but safe policy, however SDPO converges faster. From the results in Figure 3, similar observations can be drawn for PPO, PD-VAR, and SDPO. With regards to safety, we can again conclude than SDPO is superior. Question (B) To answer (B), we perform some experiments on the second domain, Safety gym, which is a much more dif\ufb01cult domain than random CMDPs. For this domain, 7 \f(a) rewards (b) constraint Figure 4: Average performance over 10 runs of PPO, SDPO, CPO, PCPO and IPO under Point-Goal1. They are bounded by the dashed line d1 = 25 in 4(b). we did not evaluate the methods based on Lagrangian relaxation: since they do not use a critic, they would not be competitive. In Safety gym, the agent is penalized by receiving a cost c1 = 1 when touching a fragile vase. With a constraint on expected total cost \u03c11(Y Y Y 1) = Es0\u223c\u00b5 \u00b5 \u00b5,Y Y Y 1[Y Y Y 1(s0)] \u2264d1, we are able to compare SDPO with other safe RL algorithms like CPO, PCPO and IPO. We only show the results for Point-Goal1 in Figure 4. For other tasks, please refer to Appendix B. According to Figure 4, SDPO, PCPO and IPO can explore safely, while CPO cannot satisfy the constraint well. This latter observation regarding CPO may be surprising since CPO was designed to solve CMDPs, but similar results were also reported in previous work [Ray et al., 2019]. Among these three latter algorithms, SDPO and IPO performs the best. In Figure 4 and in all the Safety-gym environments (see Appendix B), SDPO dominates IPO in terms of either returns or convergence rates (and sometimes both), which con\ufb01rms the positive contribution of the distributional critics. Question (C) To demonstrate that SDPO can satisfy multiple constraints, the safety gym environment is used again, but with a variation. We modify the hazard area to be end states where an agent receives a cost c2 = 1, and the episode is terminated. In addition to the previous constraint, another one is enforced: \u03c12(Y Y Y 2) = Es0\u223c\u00b5 \u00b5 \u00b5,Y Y Y 2[Y Y Y 2(s0)] \u2264d2, where Y Y Y 2 is the undiscounted cumulative cost distribution from cost function c2. Here, we set the bounds: d1 = 10 and d2 = 0.1. From Figures 5(a) and 5(b), PPO without constraints achieves much more goals, but at the cost of violating all the constraints. For constraint \u03c12, both SDPO and IPO agents can avoid entering into hazards during training. For constraint \u03c11, SDPO converges faster than IPO because of the adaption to distributional RL. Question (D) To answer (D), we switch to the \ufb01nance domain, where the stock market data of year 2019 is used. We run SDPO with a constraint on CVaR de\ufb01ned over rewards using different con\ufb01dence levels \u03b1. Note that since the CVaR is de\ufb01ned over rewards, it needs to be lower(a) rewards (b) constraints Figure 5: 5(a): Average performance over 5 runs of PPO, SDPO and IPO under Point-Goal2. 5(b): Average costs of PPO, SDPO and IPO under Point-Goal2. Figure 6: Average performance over 10 runs of PPO and SDPO with con\ufb01dence level \u03b1 = 0.01,0.05,0.1. bounded. 
We also run PPO as a baseline to show the performance without any constraints. From Figure 6, all agents manage to make profits. With a tighter risk constraint (smaller α), the SDPO agent makes less profit. As expected, PPO does not satisfy the constraint; the constraint-satisfaction curves of all SDPO agents are similar, so we plot their average directly in Figure 6. PPO without constraint cannot avoid risk and thus suffers from fluctuations and losses at some points in time. Interestingly, all the SDPO agents eventually perform better than PPO, which demonstrates that enforcing safety does not necessarily prevent good performance. Finally, SDPO with α = 0.1 performs best." + }, + { + "url": "http://arxiv.org/abs/1811.08052v1", + "title": "Variance Reduction in Stochastic Particle-Optimization Sampling", + "abstract": "Stochastic particle-optimization sampling (SPOS) is a recently-developed scalable Bayesian sampling framework that unifies stochastic gradient MCMC (SG-MCMC) and Stein variational gradient descent (SVGD) algorithms based on Wasserstein gradient flows. With a rigorous non-asymptotic convergence theory developed recently, SPOS avoids the particle-collapsing pitfall of SVGD. Nevertheless, variance reduction in SPOS has never been studied. In this paper, we bridge the gap by presenting several variance-reduction techniques for SPOS. Specifically, we propose three variants of variance-reduced SPOS, called SAGA particle-optimization sampling (SAGA-POS), SVRG particle-optimization sampling (SVRG-POS) and a variant of SVRG-POS which avoids full gradient computations, denoted as SVRG-POS$^+$. Importantly, we provide non-asymptotic convergence guarantees for these algorithms in terms of 2-Wasserstein metric and analyze their complexities. Remarkably, the results show our algorithms yield better convergence rates than existing variance-reduced variants of stochastic Langevin dynamics, even though more space is required to store the particles in training. Our theory well aligns with experimental results on both synthetic and real datasets.", + "authors": "Jianyi Zhang, Yang Zhao, Changyou Chen", + "published": "2018-11-20", + "updated": "2018-11-20", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG" + ], + "main_content": "Contents: 1 Introduction; 2 Preliminaries (2.1 Stochastic gradient MCMC; 2.2 Stein variational gradient descent; 2.3 Stochastic particle-optimization sampling); 3 Variance Reduction in SPOS (3.1 SAGA-POS; 3.2 SVRG-POS; 3.3 SVRG-POS+); 4 Convergence Analysis; 5 Experiments (5.1 Synthetic log-normal distribution; 5.2 Bayesian logistic regression: variance-reduced SPOS versus SPOS, variance-reduced SPOS versus variance-reduced SGLD, impact of number of particles); 6 Conclusion; Appendices: A More details about the notations; B Convergence guarantees for SAGA-LD, SVRG-LD and SVRG-LD+; C Proof of the theorems in Section 4; D Extra theoretical discussion for SAGA-POS, SVRG-POS and SVRG-POS+; E Comparison between SPOS and its variance-reduction counterpart; F More experiments results.
1 Introduction Sampling has been an effective tool for approximate Bayesian inference, which becomes increasingly important in modern machine learning. In the setting of big data, recent research has developed scalable Bayesian sampling algorithms such as stochastic gradient Markov chain Monte Carlo (SG-MCMC) [WT11] and Stein variational gradient descent (SVGD) [LW16]. These methods have facilitated important real-world applications and achieved impressive results, such as topic modeling [GCH+15, LZS16], matrix factorization [CFG14, DFB+14, cBCR16], differential privacy [WFS15, LCLC17], Bayesian optimization [SKFH16] and deep neural networks [LCCC16]. Generally speaking, these methods use gradient information of a target distribution to generate samples, leading to more effective algorithms compared to traditional sampling methods. Recently, [CZW+18] proposed a particle-optimization Bayesian sampling framework based on Wasserstein gradient flows, which unified SG-MCMC and SVGD in a new sampling framework called particle-optimization sampling (POS). Very recently, [ZZC18] discovered that SVGD has an unintended pitfall: particles tend to collapse under some conditions. As a remedy, [ZZC18] proposed to inject random noise into the SVGD update equations within the POS framework, leading to stochastic particle-optimization sampling (SPOS) algorithms. Remarkably, for the first time, non-asymptotic convergence theory was developed for SPOS (SVGD-type algorithms) in [ZZC18]. In another respect, in order to deal with large-scale datasets, many gradient-based methods for optimization and sampling use stochastic gradients calculated on a mini-batch of the dataset for computational feasibility. Unfortunately, extra variance is introduced into the algorithms, which can degrade their performance. Consequently, variance control has been an important research topic. Efficient solutions such as SAGA [DBLJ14] and SVRG [JZ13] were proposed to reduce variance in stochastic optimization. Subsequently, [DRP+16] introduced these techniques into SG-MCMC for Bayesian sampling, which has also achieved great success in practice. Since SPOS enjoys the best of both worlds by combining SG-MCMC and SVGD, reducing its gradient variance is of particular value. While both the algorithm and the theory have been developed for SPOS, no work has investigated its variance-reduction techniques. Compared with SG-MCMC, where variance reduction has been well explored by recent work such as [DRP+16, CFM+18, ZXG18], controlling the variance of stochastic gradients is much more challenging for SPOS. This is because, from a theoretical perspective, SPOS corresponds to nonlinear stochastic differential equations (SDEs), for which fewer existing mathematical tools are available. Furthermore, the fact that many particles are used in an algorithm makes it difficult to improve its performance by modifying the way they interact with each other.
In this paper, we take the \ufb01rst attempt to study variance-reduction techniques in SPOS and develop corresponding convergence theory. We adopt recent ideas on variance reduction in SG-MCMC and stochastic-optimization algorithms, and propose three variance-reduced SPOS algorithms, denoted as SAGA particle-optimization sampling (SAGA-POS), SVRG particle-optimization sampling (SVRG-POS) and a variant of SVRG-POS without full-gradient computations, denoted as SVRG-POS+. For all these variants, we prove rigorous theoretical results on their non-asymptotic convergence rates in terms of 2-Wasserstein metrics. Importantly, our theoretical \f4 Variance Reduced Stochastic Particle-Optimization Sampling results demonstrate signi\ufb01cant improvements of convergence rates over standard SPOS. Remarkably, when comparing our convergence rates with those of variancereduced stochastic gradient Langevin dynamics (SGLD), our theory indicates faster convergence rates of variance-reduced SPOS when the number of particles is large enough. Our theoretical results are veri\ufb01ed by a number of experiments on both synthetic and real datasets. 2 Preliminaries 2.1 Stochastic gradient MCMC In Bayesian sampling, one aims at sampling from a posterior distribution p(\u03b8| X) \u221d p(X |\u03b8)p(\u03b8), where \u03b8 \u2208Rd represents the model parameter, and X \u225c{xj}N j=1 is the dataset. Let p(\u03b8| X) = (1/Z) exp(\u2212U(\u03b8)), where U(\u03b8) = \u2212log p(X |\u03b8) \u2212log p(\u03b8) \u225c\u2212 N X j=1 log p(xi |\u03b8) \u2212log p(\u03b8) is referred to as the potential energy function, and Z is the normalizing constant. We further de\ufb01ne the full gradient F and individual gradient Fj used in our paper: Fj(\u03b8) \u225c\u2212\u2207\u03b8 log p(xj |\u03b8) \u22121 N \u2207\u03b8 log p(\u03b8) = 1 N \u2207\u03b8U(\u03b8) F(\u03b8) \u225c\u2207\u03b8U(\u03b8) = N X j=1 Fj(\u03b8) We can de\ufb01ne a stochastic di\ufb00erential equation, an instance of It\u00f3 di\ufb00usion whose stationary distribution equals to the target posterior distribution p(\u03b8| X). For example, consider the following 1st-order Langevin dynamic: d\u03b8t = \u2212\u03b2\u22121F(\u03b8t)dt + p 2\u03b2\u22121dWt , (1) where t is the time index; Wt \u2208Rd is d-dimensional Brownian motion, and \u03b2 a scaling factor. By the Fokker-Planck equation [Kol31, Ris89], the stationary distribution of (1) equals to p(\u03b8| X). SG-MCMC algorithms are discretized numerical approximations of It\u00f3 di\ufb00usions (1). To make algorithms e\ufb03cient in a big-data setting, the computationally-expensive term F is replaced with its unbiased stochastic approximations with a random subset of the dataset in each interation, e.g. F can be approximated by a stochastic gradient: Gk \u225cN B X j\u2208Ik Fj(\u03b8k) = \u2212\u2207log p(\u03b8k) \u2212N B X j\u2208Ik \u2207\u03b8k log p(xj |\u03b8k) where Ik is a random subset of {1, 2, \u00b7 \u00b7 \u00b7 , N} with size B. The above de\ufb01nition of Gk re\ufb02ects the fact that we only have information from B \u226aN data points \fZhang & Zhao & Chen 5 in each iteration. This is the resource where the variance we try to reduce comes from. We should notice that Gk is also used in standard SVGD and SPOS. As an example, SGLD is a numerical solution of (1), with an update equation: \u03b8k+1 = \u03b8k \u2212\u03b2\u22121Gkh + p 2\u03b2\u22121h\u03bek, where h means the step size and \u03bek \u223cN(0, I). 
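As a concrete illustration of this update (a toy sketch, not from the paper: the Gaussian target, step size, and minibatch scheme are arbitrary choices), SGLD for the posterior mean of a Gaussian model can be written as:

import numpy as np

rng = np.random.default_rng(0)
N, B, h, beta = 1000, 10, 1e-4, 1.0
x = rng.normal(1.0, 1.0, size=N)              # synthetic data, x_j ~ N(theta, 1)

def F_j(theta, j):
    # per-datum gradient of U: likelihood term plus 1/N of the N(0,1) prior term
    return (theta - x[j]) + theta / N

theta, samples = 0.0, []
for k in range(20_000):
    idx = rng.integers(0, N, size=B)                 # minibatch sampled with replacement
    G = (N / B) * sum(F_j(theta, j) for j in idx)    # stochastic gradient G_k
    theta += -(h / beta) * G + np.sqrt(2 * h / beta) * rng.normal()
    samples.append(theta)

# the posterior is N(sum(x)/(N+1), 1/(N+1)); the sample mean should be close
print(np.mean(samples[5000:]), x.sum() / (N + 1))
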
2.2 Stein variational gradient descent Di\ufb00erent from SG-MCMC, SVGD initializes a set of particles, which are iteratively updated to approximate a posterior distribution. Speci\ufb01cally, we consider a set of particles {\u03b8(i)}M i=1 drawn from some distribution q. SVGD tries to update these particles by doing gradient descent on the interactive particle system via \u03b8(i) \u2190\u03b8(i) + h\u03c6(\u03b8(i)), \u03c6 = arg max \u03c6\u2208F { \u2202 \u2202hKL(q[h\u03c6]||p)} where \u03c6 is a function perturbation direction chosen to minimize the KL divergence between the updated density q[h\u03c6] induced by the particles and the posterior p(\u03b8| X). The standard SVGD algorithm considers F as the unit ball of a vector-valued reproducing kernel Hilbert space (RKHS) H associated with a kernel \u03ba(\u03b8, \u03b8\u2032). In such a setting, [LW16] shows that \u03c6(\u03b8) = E\u03b8\u2032\u223cq[\u03ba(\u03b8, \u03b8\u2032)F(\u03b8\u2032) + \u2207\u03b8\u2032\u03ba(\u03b8, \u03b8\u2032)]. (2) When approximating the expectation E\u03b8\u2032\u223cq[\u00b7] with an empirical distribution formed by a set of particles {\u03b8(i)}M i=1 and adopting stochastic gradients G(i) k \u225cN B P j\u2208Ik Fj(\u03b8(i) k ), we arrive at the following update for the particles: \u03b8(i) k+1 = \u03b8(i) k + h M M X q=1 h \u03ba(\u03b8(q) k , \u03b8(i) k )G(i) k + \u2207\u03b8(q) k \u03ba(\u03b8(q) k , \u03b8(i) k ) i (3) SVGD then applies (3) repeatedly for all the particles. 2.3 Stochastic particle-optimization sampling In this paper, we focus on RBF kernel \u03ba(\u03b8, \u03b8\u2032) = exp(\u2212\u2225\u03b8\u2212\u03b8\u2032\u22252 2\u03b72 ) due to its wide use in both theoretical analysis and practical applications. Hence, we can use a function K(\u03b8) = exp(\u2212\u2225\u03b8\u22252 2\u03b72 ) to denote the kernel \u03ba(\u03b8, \u03b8\u2032). According to the work of [CZW+18, ZZC18], the stationary distribution of the \u03c1t in the following partial di\ufb00erential equation equals to p(\u03b8| X). \u2202t\u03c1t =\u2207\u03b8 \u00b7 (\u03c1t\u03b2\u22121F(\u03b8) + \u03c1tEY \u223c\u03c1tK(\u03b8 \u2212Y )F(Y ) \u2212\u03c1t(\u2207K \u2217\u03c1t) + \u03b2\u22121\u2207\u03b8\u03c1t) . (4) When approximating the \u03c1t in Eq.(4) with an empirical distribution formed by a set of particles {\u03b8(i)}M i=1, [ZZC18] derive the following di\ufb00usion process characterizing \f6 Variance Reduced Stochastic Particle-Optimization Sampling the SPOS algorithm. d\u03b8(i) t = \u2212\u03b2\u22121F(\u03b8(i) t )dt \u22121 M M X q=1 K(\u03b8(i) t \u2212\u03b8(q) t )F(\u03b8(q) t )dt + 1 M M X q=1 \u2207K(\u03b8(i) t \u2212\u03b8(q) t )dt + p 2\u03b2\u22121dW(i) t \u2200i (5) It is worth noting that if we set the initial distribution of all the particles \u03b8(i) 0 to be the same, the system of these M particles is exchangeable. So the distributions of all the \u03b8(i) t are identical and can be denoted as \u03c1t. When solving the above di\ufb00usion process with a numerical method and adopting stochastic gradients G(i) k , one arrives at the SPOS algorithm of [ZZC18] with the following update equation: \u03b8(i) k+1 = \u03b8(i) k \u2212h\u03b2\u22121G(i) k \u2212h M M X j=1 K(\u03b8(i) k \u2212\u03b8(j) k )G(j) k + h M M X j=1 \u2207K(\u03b8(i) k \u2212\u03b8(j) k ) + p 2\u03b2\u22121h\u03be(i) k (6) where \u03be(i) k \u223cN(0, I). And SPOS will apply update (6) repeatedly for all the particles \u03b8(i) k . 
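For illustration, a minimal numpy sketch of update (6) on a toy Gaussian target follows. It is not the paper's code: full gradients stand in for the stochastic estimates G(i)k, the bandwidth and step size are arbitrary, and the kernel-gradient term is written in the repulsive SVGD form, i.e., as the gradient of the kernel with respect to the second particle.

import numpy as np

rng = np.random.default_rng(1)
M, d, h, beta, eta = 50, 2, 1e-2, 1.0, 1.0
mu = np.array([1.0, -1.0])                    # toy target N(mu, I), so F(theta) = theta - mu

theta = rng.normal(size=(M, d))               # particles
for k in range(3000):
    G = theta - mu                            # full gradients as stand-ins for G_k^{(i)}
    diff = theta[:, None, :] - theta[None, :, :]                 # theta_i - theta_j
    K = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * eta ** 2))     # RBF kernel matrix
    drive = (K @ G) / M                                          # (1/M) sum_j K_ij G_j
    repulse = np.sum(K[:, :, None] * diff, axis=1) / (M * eta ** 2)  # repulsive kernel-gradient term
    noise = np.sqrt(2 * h / beta) * rng.normal(size=(M, d))
    theta = theta - (h / beta) * G - h * drive + h * repulse + noise

print(theta.mean(axis=0))   # should end up near mu, without particle collapse
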
Detailed theoretical results for SPOS are reviewed in the Supplementary Material (SM). 3 Variance Reduction in SPOS In standard SPOS, each particle is updated by adopting G(i) k \u225cN B P j\u2208Ik Fj(\u03b8(i) k ). Due to the fact that one can only access B \u226aN data points in each update, the increased variance of the \u201cnoisy gradient\u201d G(i) k would cause a slower convergence rate. A simple way to alleviate this is to increase B by using larger minibatches. Unfortunately, this would bring more computational costs, an undesired side e\ufb00ect. Thus more e\ufb00ective variance-reduction methods are needed for SPOS. Inspired by recent work on variance reduction in SGLD, e.g., [DRP+16, CFM+18, ZXG18], we develop three di\ufb00erent variance-reduction algorithms for SPOS based on SAGA [DBLJ14] and SVRG [JZ13] in stochastic optimization. 3.1 SAGA-POS SAGA-POS generalizes the idea of SAGA [DBLJ14] to an interactive particleoptimization system. For each particle \u03b8(i) k , we use {g(i) k,j}N j=1 as an approximation for each individual gradient Fj(\u03b8(i) k ). An unbiased estimate of the full gradient F(\u03b8(i) k ) is \fZhang & Zhao & Chen 7 calculated as: G(i) k = N X j=1 g(i) k,j + N B X j\u2208Ik (Fj(\u03b8(i) k ) \u2212g(i) k,j), \u2200i (7) In each iteration, {g(i) k,j}N j=1 will be partially updated under the following rule: g(i) k+1,j = Fj(\u03b8(i) k ) if j \u2208Ik, and g(i) k+1,j = g(i) k,j otherwise. The algorithm is described in Algorithm 3.1. Compared with standard SPOS, SAGA-POS also enjoys highly computational e\ufb03ciency, as it does not require calculation of each Fj(\u03b8(i) k ) to get the full gradient F(\u03b8(i) k ) in each iteration. Hence, the computational time of SAGA-POS is almost the same as that of POS. However, our analysis in Section 4 shows that SAGA-POS endows a better convergence rate. From another aspect, SAGA-POS has the same drawback of SAGA-based algorithms, which requires memory scaling at a rate of O(MNd) in the worst case. For each particle \u03b8(i) k , one needs to store N gradient approximations {g(i) k,j}N j=1. Fortunately, as pointed out by [DRP+16, CFM+18], in some applications, the memory cost scales only as O(N) for SAGA-LD, which corresponds to O(MN) for SAGA-POS as M particles are used. Algorithm 1 SAGA-POS Input: A set of initial particles {\u03b8(i) 0 }M i=1, each \u03b8(i) 0 \u2208Rd, step size hk, batch size B. Initialize {g(i) 0,j}N j=1 = {Fj(\u03b8(i) 0 )}N j=1 for all i \u2208 {1, ..., M}; 1: for iteration k= 0,1,...,T do 2: Uniformly sample Ik from {1, 2, ..., N} randomly with replacement such that |Ik| = B; 3: Sample \u03be(i) k \u223cN(0, Id\u00d7d), \u2200i ; 4: Update G(i) k \u2190 N P j=1 g(i) k,j + N B P j\u2208Ik (Fj(\u03b8(i) k ) \u2212g(i) k,j), \u2200i ; 5: Update each \u03b8(i) k with Eq.(6); 6: Update {g(i) k,j}N j=1, \u2200i : if j \u2208Ik, set g(i) k+1,j \u2190Fj(\u03b8(i) k ); else, set g(i) k+1,j \u2190g(i) k,j 7: end for Output:{\u03b8(i) T }M i=1 Remark When compared with SAGA-LD, it is worth noting that M particles are used in both SPOS and SAGA-POS. This makes the memory complexity M times worse than SAGA-LD in training, thus SAGA-POS does not seem to bring any advantages over SAGA-LD. However, this intuition is not correct. As indicated by our theoretical results in Section 4, when the number of particles M is large enough, the convergence rates of our algorithms are actually better than those of variance-reduced SGLD counterparts. 
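The following numpy sketch illustrates the SAGA-style estimator (7) for a single particle; it is an illustration under a toy Gaussian-mean model, not the paper's code. Keeping a running sum of the memory table makes each estimate O(B) rather than O(N).

import numpy as np

rng = np.random.default_rng(2)
N, B, d = 1000, 10, 2
x = rng.normal(size=(N, d))                       # synthetic data for a Gaussian-mean model

def F_j(theta, j):
    return (theta - x[j]) + theta / N             # per-datum gradient of U

theta = np.zeros(d)
g = np.stack([F_j(theta, j) for j in range(N)])   # gradient memory {g_j}, one slot per datum
g_sum = g.sum(axis=0)                             # running sum of the memory

def saga_gradient(theta):
    global g_sum
    idx = rng.integers(0, N, size=B)
    fresh = np.stack([F_j(theta, j) for j in idx])
    G = g_sum + (N / B) * (fresh - g[idx]).sum(axis=0)   # unbiased estimate of F(theta), Eq. (7)
    for t, j in enumerate(idx):                          # refresh memory for sampled indices
        g_sum += fresh[t] - g[j]
        g[j] = fresh[t]
    return G

# the estimate should sit close to the exact full gradient
print(saga_gradient(theta), np.stack([F_j(theta, j) for j in range(N)]).sum(axis=0))
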
3.2 SVRG-POS

For the limited-memory setting we propose SVRG-POS, based on the SVRG method of [JZ13]. For each particle $\theta^{(i)}_k$, one needs to store a stale parameter $\tilde\theta^{(i)}$ and update it occasionally, every $\tau$ iterations. At each such update, we further conduct a global evaluation of the full gradient at $\tilde\theta^{(i)}$, i.e., $\tilde G^{(i)} \leftarrow F(\theta^{(i)}_k) = F(\tilde\theta^{(i)})$. An unbiased gradient estimate is then calculated by leveraging both $\tilde G^{(i)}$ and $\tilde\theta^{(i)}$:
$$G^{(i)}_k \leftarrow \tilde G^{(i)} + \frac NB\sum_{j\in I_k}\big[F_j(\theta^{(i)}_k) - F_j(\tilde\theta^{(i)})\big] \quad (8)$$
The method is depicted in Algorithm 2, where one only needs to store $\tilde\theta^{(i)}$ and $\tilde G^{(i)}$ instead of gradient estimates for all the individual $F_j$. Hence the memory cost scales as $O(Md)$, almost the same as that of standard SPOS. Note that although SVRG-POS remarkably alleviates the storage requirement of SAGA-POS, it has the downside that the full gradients $F(\tilde\theta^{(i)}) = \sum_{j=1}^N F_j(\tilde\theta^{(i)})$ must be re-computed every $\tau$ iterations, leading to high computational cost in a big-data scenario.

Algorithm 2 SVRG-POS
Input: a set of initial particles $\{\theta^{(i)}_0\}_{i=1}^M$, each $\theta^{(i)}_0\in\mathbb R^d$; step size $h$; epoch length $\tau$; batch size $B$. Initialize $\tilde\theta^{(i)} \leftarrow \theta^{(i)}_0$, $\tilde G^{(i)} \leftarrow F(\theta^{(i)}_0)$, $\forall i$.
1: for iteration $k = 0, 1, \dots, T$ do
2:   if $k \bmod \tau = 0$ then
3:     Option I: i) sample $l \sim \mathrm{unif}\{0,1,\dots,\tau-1\}$; ii) update $\tilde\theta^{(i)} \leftarrow \theta^{(i)}_{k-l}$ and $\theta^{(i)}_k \leftarrow \tilde\theta^{(i)}$, $\forall i$; iii) update $\tilde G^{(i)} \leftarrow F(\theta^{(i)}_k)$, $\forall i$
4:     Option II: i) update $\tilde\theta^{(i)} \leftarrow \theta^{(i)}_k$; ii) update $\tilde G^{(i)} \leftarrow F(\theta^{(i)}_k)$, $\forall i$
5:   end if
6:   Uniformly sample $I_k$ from $\{1,2,\dots,N\}$ with replacement such that $|I_k| = B$;
7:   Sample $\xi^{(i)}_k \sim \mathcal N(0, I_{d\times d})$, $\forall i$;
8:   Update $G^{(i)}_k \leftarrow \tilde G^{(i)} + \frac NB\sum_{j\in I_k}\big[F_j(\theta^{(i)}_k) - F_j(\tilde\theta^{(i)})\big]$, $\forall i$;
9:   Update each $\theta^{(i)}_k$ with Eq. (6)
10: end for
Output: $\{\theta^{(i)}_T\}_{i=1}^M$

Remark: i) Similar to SAGA-POS, according to our theory in Section 4, SVRG-POS enjoys a faster convergence rate than SVRG-LD, its SGLD counterpart, although $M$ times more space is required for the particles. This provides a trade-off between convergence rate and space complexity. ii) Previous work has shown that SAGA typically outperforms SVRG in terms of convergence speed [DRP+16, CFM+18]. The same conclusion applies in our case, as verified both by the theoretical analysis in Section 4 and by the experiments in Section 5.

3.3 SVRG-POS+

The need for full gradient computation in SVRG-POS motivates the development of SVRG-POS+. Our algorithm is also inspired by the recent work of SVRG-LD+ on reducing the computational cost of SVRG-LD [ZXG18]. The main idea of SVRG-POS+ is to replace the full gradient computation every $\tau$ iterations with a subsampled gradient: uniformly sample $|J_k| = b$ data points, where $J_k$ are random samples from $\{1,2,\dots,N\}$ with replacement. Given the subsampled data, $\tilde\theta^{(i)}$ and $\tilde G^{(i)}$ are updated as $\tilde\theta^{(i)} = \theta^{(i)}_k$, $\tilde G^{(i)} = \frac Nb\sum_{j\in J_k}F_j(\theta^{(i)}_k)$. The full algorithm is shown in Algorithm 3.

Algorithm 3 SVRG-POS+
Input: a set of initial particles $\{\theta^{(i)}_0\}_{i=1}^M$, each $\theta^{(i)}_0\in\mathbb R^d$; step size $h$; epoch length $\tau$; batch size $B$.
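Before the full SVRG-POS+ listing, here is a minimal sketch of the SVRG-POS gradient estimate (Eq. 8) and the periodic anchor refresh (Option II); `grad_fj` is again a hypothetical per-datum gradient oracle, and the $O(N)$ full pass in `refresh_anchor` is exactly the cost SVRG-POS+ removes.

```python
import numpy as np

def svrg_gradient(theta_i, anchor, G_anchor, idx, grad_fj, N):
    """SVRG-POS estimate (Eq. 8), a control variate around the stale anchor."""
    B = len(idx)
    corr = sum(grad_fj(theta_i, j) - grad_fj(anchor, j) for j in idx)
    return G_anchor + (N / B) * corr

def refresh_anchor(theta_i, grad_fj, N):
    """Every tau iterations (Option II): anchor <- theta, full gradient pass."""
    G_anchor = sum(grad_fj(theta_i, j) for j in range(N))  # the O(N) cost
    return theta_i.copy(), G_anchor
```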
Initialize $\tilde\theta^{(i)} \leftarrow \theta^{(i)}_0$, $\tilde G^{(i)} \leftarrow F(\theta^{(i)}_0)$, $\forall i$.
1: for iteration $k = 0, 1, \dots, T$ do
2:   if $k \bmod \tau = 0$ then
3:     i) Uniformly sample $J_k$ from $\{1,2,\dots,N\}$ with replacement such that $|J_k| = b$; ii) update $\tilde\theta^{(i)} \leftarrow \theta^{(i)}_k$ and $\tilde G^{(i)} \leftarrow \frac Nb\sum_{j\in J_k}F_j(\theta^{(i)}_k)$, $\forall i$;
4:   end if
5:   Uniformly sample $I_k$ from $\{1,2,\dots,N\}$ with replacement such that $|I_k| = B$;
6:   Sample $\xi^{(i)}_k \sim \mathcal N(0, I_{d\times d})$, $\forall i$;
7:   Update $G^{(i)}_k \leftarrow \tilde G^{(i)} + \frac NB\sum_{j\in I_k}\big[F_j(\theta^{(i)}_k) - F_j(\tilde\theta^{(i)})\big]$, $\forall i$;
8:   Update each $\theta^{(i)}_k$ with Eq. (6)
9: end for
Output: $\{\theta^{(i)}_T\}_{i=1}^M$

4 Convergence Analysis

In this section, we prove non-asymptotic convergence rates for the SAGA-POS, SVRG-POS and SVRG-POS+ algorithms under the 2-Wasserstein metric, defined as
$$W_2(\mu,\nu) = \left(\inf_{\zeta\in\Gamma(\mu,\nu)}\int_{\mathbb R^d\times\mathbb R^d}\|X_\mu - X_\nu\|^2\,\mathrm d\zeta(X_\mu, X_\nu)\right)^{\frac 12}$$
where $\Gamma(\mu,\nu)$ is the set of joint distributions on $\mathbb R^d\times\mathbb R^d$ with marginal distributions $\mu$ and $\nu$. Let $\mu^*$ denote our target distribution, and $\mu_T$ the distribution of $\frac 1M\sum_{i=1}^M\theta^{(i)}_T$ derived via (5) after $T$ iterations. Our analysis aims at bounding $W_2(\mu_T,\mu^*)$. We first introduce our assumptions.

Assumption 1: $F$ and $K$ satisfy the following conditions:
- There exist two positive constants $m_F$ and $m_K$ such that $\langle F(\theta)-F(\theta'),\theta-\theta'\rangle \ge m_F\|\theta-\theta'\|^2$ and $\langle\nabla K(\theta)-\nabla K(\theta'),\theta-\theta'\rangle \le -m_K\|\theta-\theta'\|^2$.
- $F$ is bounded and $L_F$-Lipschitz continuous, i.e., $\|F(\theta)\| \le H_F$ and $\|F(\theta)-F(\theta')\| \le L_F\|\theta-\theta'\|$; $K$ is $L_K$-Lipschitz continuous for some $L_K \ge 0$ and bounded by some constant $H_K > 0$.
- $K$ is an even function, i.e., $K(-\theta) = K(\theta)$.

Assumption 2: There exists a constant $D_F > 0$ such that $\|\nabla F(\theta)-\nabla F(\theta')\| \le D_F\|\theta-\theta'\|$.

Assumption 3: There exists a constant $\sigma$ such that for all $j\in\{1,2,\dots,N\}$,
$$\mathbb E\Big[\big\|F_j(\theta) - \tfrac 1N\textstyle\sum_{j=1}^N F_j(\theta)\big\|^2\Big] \le d\sigma^2/N^2.$$

Remark: i) Assumption 1 is adopted from [ZZC18], which analyzes the convergence property of SPOS. The first bullet of Assumption 1 implies that $U(\cdot)$ is strongly convex, the standard assumption in analyzing SGLD [DK17, DM16] and its variance-reduced variants [ZXG18, CFM+18]. Although some work has investigated the non-convex case, analyzing the convex case remains valuable, as it is more instructive for addressing practical issues [DK17, DM16, ZXG18, CFM+18]. ii) All of $m_F$, $L_F$, $H_F$ and $D_F$ may scale linearly with $N$. iii) $K(\theta) = \exp\big(-\frac{\|\theta\|^2}{2\eta^2}\big)$ satisfies the above assumptions when the bandwidth is set large enough, since in practice we mainly focus on a bounded space. Consequently, $\nabla K$ is also $L_{\nabla K}$-Lipschitz continuous and bounded by $H_{\nabla K}$, and $K$ is Hessian-Lipschitz with some positive constant $D_{\nabla^2 K}$.
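The only change in SVRG-POS+ relative to the SVRG-POS sketch above is the anchor refresh, which subsamples $b$ points instead of performing a full pass; a minimal sketch (again with a hypothetical `grad_fj` oracle):

```python
import numpy as np

def refresh_anchor_plus(theta_i, grad_fj, N, b, rng):
    """SVRG-POS+ anchor refresh: tilde-G = (N/b) * sum_{j in J_k} F_j(theta)."""
    Jk = rng.integers(0, N, size=b)  # uniform sampling with replacement
    G_anchor = (N / b) * sum(grad_fj(theta_i, j) for j in Jk)
    return theta_i.copy(), G_anchor
```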
For the sake of clarity, we define some constants that will be used in our theorems:
$$C_1 = \frac{H_{\nabla K} + H_F}{\sqrt{2(\beta^{-1} - 3H_FL_K - 2L_F)}}, \qquad C_2 = \sqrt{2(\beta^{-1}L_F + 2L_KH_F + H_KL_F + L_{\nabla K})^2 + 2},$$
$$C_3 = \beta^{-1}m_F - 2L_F - 3H_FL_K, \qquad C_4 = \beta^{-1}D_F + 4D_{\nabla^2K} + 4H_FL_{\nabla K} + 2L_FH_{\nabla K} + 2H_FL_K + L_FH_K,$$
$$C_5 = 2\beta^{-1}\sigma^2 + 2H_K^2\sigma^2.$$
We now present the convergence analysis for our algorithms, where $\alpha$ is some positive constant independent of $T$.

Theorem 1: Let $\mu_T$ denote the distribution of the particles after $T$ iterations of SAGA-POS. Under Assumptions 1 and 2, if the step size $h < \frac{B}{8C_2N}$ and the batch size $B \ge 9$, the convergence rate of SAGA-POS is bounded as
$$W_2(\mu_T,\mu^*) \le \frac{C_1}{\sqrt M} + 5\exp\Big(-\frac{C_3h}{4}T\Big)W_2(\mu_0,\mu^*) + \frac{2hC_4dM^{1/2-\alpha}}{C_3} + \frac{2hC_2^{3/2}\sqrt d}{C_3M^\alpha} + \frac{24hC_2\sqrt d\,N}{M^\alpha\sqrt{C_3}\,B} \quad (9)$$

Theorem 2: Let $\mu_T$ denote the distribution of the particles after $T$ iterations of SVRG-POS in Algorithm 2. Under Assumptions 1 and 2, if we choose Option I and set the step size $h < \frac{1}{8C_2}$, the batch size $B \ge 2$, and the epoch length $\tau = \frac{4}{hC_3(1-2hC_2(1+2/B))}$, the convergence rate of SVRG-POS is bounded for all $T$ with $T \bmod \tau = 0$ as
$$W_2(\mu_T,\mu^*) \le \frac{C_1}{\sqrt M} + \exp\Big(-\frac{C_3h}{56}T\Big)\frac{\sqrt{C_2}}{\sqrt{C_3}}W_2(\mu_0,\mu^*) + \frac{2hC_4dM^{1/2-\alpha}}{C_3} + \frac{2hC_2^{3/2}\sqrt d}{C_3M^\alpha} + \frac{64C_2^{3/2}\sqrt{hd}}{M^\alpha\sqrt{BC_3}} \quad (10)$$
If we choose Option II and set the step size $h < \frac{\sqrt B}{4\tau C_2}$, the convergence rate of SVRG-POS is bounded for all $T$ as
$$W_2(\mu_T,\mu^*) \le \frac{C_1}{\sqrt M} + \exp\Big(-\frac{C_3h}{4}T\Big)W_2(\mu_0,\mu^*) + \frac{\sqrt 2\,hC_4dM^{1/2-\alpha}}{C_3} + \frac{5hC_2^{3/2}\sqrt d}{C_3M^\alpha} + \frac{9hC_2\tau\sqrt d}{M^\alpha\sqrt B\,C_3} \quad (11)$$

Theorem 3: Let $\mu_T$ denote the distribution of the particles after $T$ iterations of SVRG-POS+. Under Assumptions 1, 2 and 3, if we set the step size $h \le \min\big\{\big(\frac{BC_3}{24C_2^4\tau^2}\big)^{\frac 13},\ \frac{1}{6\tau(C_5^2/b + C_2)}\big\}$, then the convergence rate of SVRG-POS+ is bounded for all $T$ as
$$W_2(\mu_T,\mu^*) \le \frac{C_1}{\sqrt M} + (1 - hC_3/4)^T W_2(\mu_0,\mu^*) + \frac{3C_5d^{1/2}}{M^\alpha C_3 b^{1/2}}\,\mathbb 1(b \le N) + \frac{2hC_4dM^{1/2-\alpha}}{C_3} + \frac{2hC_2^{3/2}d^{1/2}}{C_3M^\alpha} + \frac{4hC_2(\tau d)^{1/2} + 3h^{1/2}d^{1/2}C_5}{M^\alpha\sqrt{BC_3}} \quad (12)$$

Since complexity has been discussed in Section 3, we focus here on the convergence rates. Due to space limits, the comparison between the convergence rates of standard SPOS and its variance-reduced counterparts such as SAGA-POS is deferred to the SM. Specifically, adopting the standard framework for comparing different variance-reduction techniques in SGLD [DRP+16, CFM+18, ZXG18], we focus on the scenario where $m_F$, $L_F$, $H_F$ and $D_F$ all scale linearly with $N$ and $N \gg d$. In this case, the dominating term in Theorem 1 for SAGA-POS is the last term, $O\big(\frac{hC_2\sqrt d}{M^\alpha B}\big) \approx O\big(\frac{hN\sqrt d}{M^\alpha B}\big)$. Thus, to achieve an accuracy of $\varepsilon$, we need step size $h_{ag} = O\big(\frac{\varepsilon M^\alpha B}{N\sqrt d}\big)$. For SVRG-POS, the dominating term in Theorem 2 is $O\big(\frac{\sqrt{hNd}}{M^\alpha\sqrt B}\big)$ for Option I and $O\big(\frac{\tau hN\sqrt d}{M^\alpha\sqrt B}\big)$ for Option II. Hence, for an accuracy of $\varepsilon$, the corresponding step sizes are $h_{vr1} = O\big(\frac{\varepsilon^2M^{2\alpha}B}{Nd}\big)$ and $h_{vr2} = O\big(\frac{\varepsilon M^\alpha\sqrt B}{\tau N\sqrt d}\big)$, respectively.
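The step-size comparison that follows can be made concrete with a little illustrative arithmetic; the values below are arbitrary placeholders and all constants hidden by the $O(\cdot)$ notation are dropped, so only the relative orders are meaningful.

```python
# Illustrative scaling only: arbitrary values, O(.) constants dropped.
eps, M, alpha, B, N, d, tau = 1e-3, 50, 0.5, 15, 1e5, 10, 100

h_ag  = eps * M**alpha * B / (N * d**0.5)             # SAGA-POS
h_vr1 = eps**2 * M**(2 * alpha) * B / (N * d)         # SVRG-POS, Option I
h_vr2 = eps * M**alpha * B**0.5 / (tau * N * d**0.5)  # SVRG-POS, Option II

# Mixing time scales roughly as 1/h, so a smaller step size means slower mixing.
print(h_ag, h_vr1, h_vr2)
```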
Because the mixing time $T$ of these methods is roughly proportional to the reciprocal of the step size [CFM+18], when $\varepsilon$ is small enough one has $h_{vr1} \ll h_{ag}$, which means SAGA-POS converges faster than SVRG-POS (Option I). Similar results hold for Option II, since the factor $\frac{1}{\sqrt B\tau}$ in $h_{vr2}$ makes the step size even smaller. More theoretical results are given in the SM.

Remark: We have thus provided theoretical support for statement i) of Remark 3.2. Moreover, note that in SAGA-POS the step size $h_{ag} = O\big(\frac{\varepsilon M^\alpha B}{N\sqrt d}\big)$ carries an extra factor $M^\alpha$ compared with the step size $O\big(\frac{\varepsilon B}{N\sqrt d}\big)$ used in SAGA-LD [CFM+18].* This means SAGA-POS with more particles (large $M$) would outperform SAGA-LD; SVRG-POS and SVRG-POS+ admit similar conclusions. This theoretically supports the statements of Remark 3.1 and i) of Remark 3.2. Furthermore, an interesting consequence of the above discussion is that $h_{vr1} = O\big(\frac{\varepsilon^2M^{2\alpha}B}{Nd}\big)$ for SVRG-POS carries an extra factor $M^{2\alpha}$ compared to the step size $O\big(\frac{\varepsilon^2 B}{Nd}\big)$ in SVRG-LD [CFM+18]. Since $M^{2\alpha}$ is of higher order than $M^\alpha$, one expects the improvement of SVRG-POS over SVRG-LD to be much more significant than that of SAGA-POS over SAGA-LD. This conclusion is verified in our experiments.

5 Experiments

We conduct experiments to verify our theory and compare SAGA-POS, SVRG-POS and SVRG-POS+ with representative Bayesian sampling methods with and without variance reduction: SGLD and SPOS without variance reduction; SAGA-LD, SVRG-LD and SVRG-LD+ with variance reduction. For SVRG-POS, we focus on Option I in Algorithm 2 to verify our theory.

5.1 Synthetic log-normal distribution

We first evaluate the proposed algorithms on synthetic log-normal data, defined as $p(x\,|\,\mu) = \frac{1}{x\sqrt{2\pi}}\exp\big(-\frac{(\ln x - \mu)^2}{2}\big)$ where $x, \mu \in \mathbb R^{10}$. We calculate the log-MSE of the sampled "mean" w.r.t. the true value and plot the log-MSE versus the number of passes through the data [CFM+18], as is standard for variance-reduction algorithms. Figure 1 shows that SAGA-POS and SVRG-POS converge fastest among all algorithms. It is also interesting to see that SPOS even outperforms both SAGA-LD and SVRG-LD.

*For fair comparison with our algorithms, we consider variance-reduced versions of SGLD with $M$ independent chains.

Figure 1: Log-MSE of the mean parameter versus the number of dataset passes.

5.2 Bayesian logistic regression

Following related work such as [DRP+16], we test the proposed algorithms for Bayesian logistic regression (BLR) on four publicly available datasets from the UCI machine learning repository: Australian (690-14), Pima (768-8), Diabetic (1151-20) and Susy (100000-18), where (N-d) denotes a dataset of $N$ data points with dimensionality $d$. The first three datasets are relatively small; the last is a large dataset suitable for evaluating scalable Bayesian sampling algorithms. Specifically, consider a dataset $\{X_i, y_i\}_{i=1}^N$ with $N$ samples, where $X_i\in\mathbb R^d$ and $y_i\in\{0,1\}$. The likelihood of a BLR model is $p(y_i = 1\,|\,X_i,\alpha) = \mathrm{sigmoid}(\alpha^TX_i)$ with regression coefficient $\alpha\in\mathbb R^d$, assumed for simplicity to follow a standard multivariate Gaussian prior $\mathcal N(0, I)$. The datasets are split into 80% training and 20% testing data.
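For reference, the per-datum gradient $F_j$ consumed by all the estimators above has a simple closed form in this BLR model; a minimal sketch, assuming the usual decomposition $U(\alpha) = \sum_j U_j(\alpha)$ with the Gaussian prior split evenly across the $N$ data terms:

```python
import numpy as np

def blr_grad_fj(alpha, X, y, j, N):
    """F_j for BLR with prior N(0, I), assuming
    U_j(a) = -log p(y_j | X_j, a) + ||a||^2 / (2N)."""
    resid = 1.0 / (1.0 + np.exp(-X[j] @ alpha)) - y[j]  # sigmoid(a^T X_j) - y_j
    return resid * X[j] + alpha / N
```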
Optimized constant step sizes are applied for each algorithm via grid search. Following existing work, we report test accuracy and log-likelihood versus the number of data passes for each dataset, averaged over 10 runs with 50 particles. The minibatch size is set to 15 in all experiments.

5.2.1 Variance-reduced SPOS versus SPOS

We first compare SAGA-POS, SVRG-POS and SVRG-POS+ with SPOS without variance reduction, proposed in [ZZC18]. The test accuracies and log-likelihoods versus the number of passes through the data on the four datasets are plotted in Figure 2. SAGA-POS converges faster than both SVRG-POS and SVRG-POS+, all of which significantly outperform SPOS. On the largest dataset, Susy, SAGA-POS starts only after one pass through the data, then converges quickly and outperforms the other algorithms; SVRG-POS+ outperforms SVRG-POS because the dataset is so large. All of these observations are consistent with our theory.

5.2.2 Variance-reduced SPOS versus variance-reduced SGLD

Next we compare the three variance-reduced SPOS algorithms with their SGLD counterparts, i.e., SAGA-LD, SVRG-LD and SVRG-LD+. The results are plotted in Figure 3. Similar phenomena are observed: SAGA-POS and SVRG-POS outperform SAGA-LD and SVRG-LD, respectively, consistent with the theoretical results discussed in Remarks 3.1 and 3.2. Interestingly, on the Pima dataset, SVRG-LD performs even worse (converges more slowly) than standard SGLD. Furthermore, as discussed in Remark 4, our theory indicates that the improvement of SVRG-POS over SVRG-LD is more significant than that of SAGA-POS over SAGA-LD; inspecting the plots in Figure 3 confirms this.

5.2.3 Impact of the number of particles

Finally, we examine the impact of the number of particles on the convergence rate. As indicated by Theorems 1-3, for a fixed number of iterations $T$, the convergence error in terms of the 2-Wasserstein distance decreases with an increasing number of particles. To verify this, we run SAGA-POS and SVRG-POS for BLR with the number of particles ranging over {1, 2, 4, 8, 16}. The test log-likelihoods versus iteration number are plotted in Figure 4, demonstrating consistency with our theory.

6" }, { "url": "http://arxiv.org/abs/1809.01293v5", "title": "Stochastic Particle-Optimization Sampling and the Non-Asymptotic Convergence Theory", "abstract": "Particle-optimization-based sampling (POS) is a recently developed effective\nsampling technique that interactively updates a set of particles. A\nrepresentative algorithm is the Stein variational gradient descent (SVGD). We\nprove, under certain conditions, SVGD experiences a theoretical pitfall, {\it\ni.e.}, particles tend to collapse. As a remedy, we generalize POS to a\nstochastic setting by injecting random noise into particle updates, thus\nyielding particle-optimization sampling (SPOS). Notably, for the first time, we\ndevelop {\em non-asymptotic convergence theory} for the SPOS framework (related\nto SVGD), characterizing algorithm convergence in terms of the 1-Wasserstein\ndistance w.r.t.\! the numbers of particles and iterations. Somewhat\nsurprisingly, with the same number of updates (not too large) for each\nparticle, our theory suggests adopting more particles does not necessarily lead\nto a better approximation of a target distribution, due to limited\ncomputational budget and numerical errors.
This phenomenon is also observed in\nSVGD and verified via an experiment on synthetic data. Extensive experimental\nresults verify our theory and demonstrate the effectiveness of our proposed\nframework.", "authors": "Jianyi Zhang, Ruiyi Zhang, Lawrence Carin, Changyou Chen", "published": "2018-09-05", "updated": "2020-03-29", "primary_cat": "stat.ML", "cats": [ "stat.ML", "cs.LG" ], "main_content": "Introduction

Recently there has been extensive development of scalable Bayesian sampling algorithms, such as stochastic gradient MCMC (SG-MCMC) [Welling and Teh, 2011, Chen et al., 2014, Ding et al., 2014, Chen et al., 2015] and Stein variational gradient descent (SVGD) [Liu and Wang, 2016]. SG-MCMC is a family of scalable Bayesian sampling algorithms built on Itô diffusions, stochastic differential equations (SDEs) with appropriately designed coefficients whose stationary distributions match the target distributions. One potential issue of SG-MCMC is that samples may be highly correlated, partially due to the nature of Markov chains, leading to undesirably low sample efficiency. SVGD, on the other hand, belongs to the family of particle-optimization-based sampling methods that optimize a set of interacting particles to minimize some distance metric (e.g., the KL divergence) between the target distribution and the particle-induced approximate distribution. Through optimization, one seeks to maintain an optimal set of particles. Recent development of SVGD has shown that the underlying mathematical principle is a family of nonlinear partial differential equations (PDEs) [Liu, 2017]. Despite significant practical successes [Liu and Wang, 2016, Feng et al., 2017, Liu et al., 2017, Haarnoja et al., 2017, Zhang et al., 2018a, Zhang et al., 2019, Liu and Zhu, 2018], little theory is available to fully understand its non-asymptotic convergence properties. A recent theoretical development interprets SVGD as a special type of gradient flow and develops theory disclosing its asymptotic convergence behavior [Liu, 2017]; asymptotic theory is also studied in [Lu et al., 2018]. A very recent work [Liu and Wang, 2018] investigated non-asymptotic properties of SVGD, limited to the regime of finite particles and infinite time under restricted conditions. In [Şimşekli et al., 2018] the convergence of the sliced-Wasserstein flow is considered only in an infinite-particle setting. Recently, [Chen et al., 2018] unified SG-MCMC and SVGD by proposing a particle-optimization-sampling (POS) framework that interprets both as Wasserstein gradient flows (WGFs). Generally, a WGF is a PDE defined on the space of probability measures, describing the evolution of a density over time. [Chen et al., 2018] defined a WGF by combining the corresponding PDEs for SG-MCMC and SVGD and solved it with deterministic particle approximations. However, due to its diffusion nature, deterministic-particle approximation leads to a hard-to-control error, making it challenging for theoretical analysis.
Our contributions: In this paper, we generalize POS to a stochastic setting and develop a novel analytical framework based on granular media equations [Malrieu, 2003, Cattiaux et al., 2008] to analyze its non-asymptotic convergence properties. Our contributions are summarized as follows. i) We first identify a pitfall of standard SVGD, where particles tend to collapse under certain conditions and measurement, indicating that developing non-asymptotic theory for SVGD is challenging (if possible at all). ii) Based on the unified framework in [Chen et al., 2018], we propose stochastic particle-optimization sampling (SPOS) by injecting Gaussian noise into particle updates to overcome the pitfall. iii) For the first time, we develop non-asymptotic convergence theory for the family of SPOS algorithms, considering both convex- and nonconvex-energy targets. Different from existing theory for SG-MCMC-based algorithms [Teh et al., 2016, Vollmer et al., 2016, Chen et al., 2015, Raginsky et al., 2017, Zhang et al., 2017, Xu et al., 2018], our development relies on the theory of nonlinear PDEs, which is more involved and less explored in the literature. In particular, we adopt tools from granular media equations [Malrieu, 2003, Cattiaux et al., 2008] to develop non-asymptotic error bounds in terms of the 1-Wasserstein distance. More detailed distinctions between our work and existing work are discussed in Section N of the Supplementary Material (SM). Somewhat surprisingly, our theory indicates that utilizing more particles does not necessarily lead to better approximations, due to the numerical errors in the algorithms; this phenomenon is also observed for SVGD. iv) Our theory and the advantages of the algorithm are verified via various experiments, including experiments on synthetic data, Bayesian deep learning, and Bayesian exploration for reinforcement learning.

2 Preliminaries

Notation: We use bold letters to denote variables in continuous-time diffusions and model definitions (before any numerical treatment), e.g., $\boldsymbol\theta_\tau$ in (1) below (indexed by "time" $\tau$). By contrast, unbold letters denote parameters in algorithms (numerical solutions of continuous-time diffusions), e.g., $\theta^{(i)}_k$ in (3) below (indexed by "iteration" $k$). For conciseness, all proofs, extra experimental results and a discussion of algorithmic complexity are presented in the SM.

2.1 Stochastic gradient MCMC

In Bayesian sampling, one aims to generate random samples from a posterior distribution $p(\theta\,|\,\mathbf X) \propto p(\mathbf X\,|\,\theta)p(\theta)$, where $\theta\in\mathbb R^d$ represents the model parameter with prior $p(\theta)$, and $\mathbf X \triangleq \{\mathbf x_q\}_{q=1}^N$ represents the observed data with likelihood $p(\mathbf X\,|\,\theta) = \prod_q p(\mathbf x_q\,|\,\theta)$. Define the potential energy as
$$U(\theta) \triangleq -\log p(\mathbf X\,|\,\theta) - \log p(\theta) = -\sum_{q=1}^N\Big(\log p(\mathbf x_q\,|\,\theta) + \frac 1N\log p(\theta)\Big) \triangleq \sum_{q=1}^N U_q(\theta).$$
SG-MCMC algorithms belong to the class of diffusion-based sampling methods, in which a continuous-time diffusion process is designed so that its stationary distribution matches the target posterior distribution. The diffusion process is driven by a specific SDE.
For example, in stochastic gradient Langevin dynamics (SGLD) [Welling and Teh, 2011], the SDE takes the following form:
$$\mathrm d\boldsymbol\theta_\tau = -\beta^{-1}F(\boldsymbol\theta_\tau)\,\mathrm d\tau + \sqrt{2\beta^{-1}}\,\mathrm d\mathcal W_\tau, \quad (1)$$
where $F(\boldsymbol\theta) \triangleq \nabla_{\boldsymbol\theta}U(\boldsymbol\theta) = \sum_{q=1}^N\nabla_{\boldsymbol\theta}U_q(\boldsymbol\theta) \triangleq \sum_{q=1}^N F_q(\boldsymbol\theta)$; $\tau$ is the time index, $\beta > 0$ is the temperature parameter, and $\mathcal W_\tau\in\mathbb R^d$ is a $d$-dimensional Brownian motion. More instances of SDEs corresponding to other SG-MCMC algorithms can be defined by specifying different forms of $F$ and, potentially, other diffusion coefficients. We focus on SGLD and (1) in this paper and refer interested readers to [Ma et al., 2015] for a more detailed description of general SG-MCMC algorithms. Denote the probability density function of $\boldsymbol\theta_\tau$ in (1) as $\nu_\tau$, and let $\mathbf a\cdot\mathbf b \triangleq \mathbf a^\top\mathbf b$ for two vectors $\mathbf a$ and $\mathbf b$. It is known that $\nu_\tau$ is characterized by the following Fokker-Planck (FP) equation [Risken, 1989]:
$$\partial_\tau\nu_\tau = \nabla_{\boldsymbol\theta}\cdot\big(\beta^{-1}\nu_\tau F(\boldsymbol\theta) + \beta^{-1}\nabla_{\boldsymbol\theta}\nu_\tau\big). \quad (2)$$
According to [Chiang and Hwang, 1987], the stationary distribution $\nu_\infty$ equals our target distribution $p(\theta\,|\,\mathbf X)$. As a result, SGLD is designed to generate samples from $p(\theta\,|\,\mathbf X)$ by numerically solving the SDE (1). For scalability, it replaces $F(\theta_k)$ in each iteration with an unbiased estimate obtained by randomly sampling a subset of $\mathbf X$: $F(\theta_k)$ is approximated by $G_k \triangleq \frac{N}{B_k}\sum_{q\in I_k}F_q(\theta_k)$, where $I_k$ is a random subset of $\{1,2,\dots,N\}$ of size $B_k$ in each iteration. SGLD then uses the Euler method with step size $h_k$ to numerically solve (1), resulting in the update equation $\theta_{k+1} = \theta_k - \beta^{-1}G_kh_k + \sqrt{2\beta^{-1}h_k}\,\xi_k$ with $\xi_k \sim \mathcal N(0, I)$.
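A minimal sketch of this SGLD update; the per-datum gradient oracle `grad_uq` and its signature are illustrative assumptions:

```python
import numpy as np

def sgld_step(theta, grad_uq, N, h, beta, B, rng):
    """theta_{k+1} = theta_k - beta^{-1} G_k h + sqrt(2 h / beta) xi_k."""
    idx = rng.integers(0, N, size=B)                   # random subset I_k
    G = (N / B) * sum(grad_uq(theta, q) for q in idx)  # unbiased estimate of F
    return theta - (h / beta) * G \
        + np.sqrt(2.0 * h / beta) * rng.standard_normal(theta.shape)
```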
2.2 Stein variational gradient descent

Different from SG-MCMC, SVGD is a deterministic particle-optimization algorithm that generates samples from a target distribution. In the algorithm, a set of particles interact with each other, driving themselves toward high-density regions of the parameter space while an induced repulsive force keeps them away from one another. The particle updates follow the steepest-descent direction of the KL divergence between the current particle distribution and the target distribution on an RKHS (reproducing kernel Hilbert space) induced by a kernel function $\kappa(\cdot,\cdot)$ [Liu and Wang, 2016]. Formally, [Liu and Wang, 2016] derived the following update rules for the particles $\{\theta^{(i)}_k\}_{i=1}^M$ at the $k$-th iteration with step size $h_k$ and $G^{(i)}_k \triangleq \frac{N}{B_k}\sum_{q\in I_k}F_q(\theta^{(i)}_k)$: for $\forall i$,
$$\theta^{(i)}_{k+1} = \theta^{(i)}_k + \frac{h_k}{M}\sum_{j=1}^M\Big[\kappa(\theta^{(j)}_k,\theta^{(i)}_k)\,G^{(j)}_k + \nabla_{\theta^{(j)}_k}\kappa(\theta^{(j)}_k,\theta^{(i)}_k)\Big] \quad (3)$$
where the first term in the bracket encourages moving particles toward the density modes, and the second term serves as a repulsive force that pushes different particles apart. In particular, the particle evolution (3) is a numerical solution of the ODEs $\mathrm d\boldsymbol\theta^{(i)}_\tau = \frac 1M\sum_{j=1}^M\big[\kappa(\boldsymbol\theta^{(j)}_\tau,\boldsymbol\theta^{(i)}_\tau)F(\boldsymbol\theta^{(j)}_\tau) + \nabla_{\boldsymbol\theta^{(j)}_\tau}\kappa(\boldsymbol\theta^{(j)}_\tau,\boldsymbol\theta^{(i)}_\tau)\big]\mathrm d\tau$. Different from SG-MCMC, only the particles at the current iteration, $\{\theta^{(i)}_k\}_{i=1}^M$, are used to approximate the target distribution.

2.3 Particle-optimization-based sampling

SG-MCMC and SVGD, though they may look closely related, behave very differently as algorithms: stochastic, non-interactive particle updates versus deterministic, interactive ones. Recently, [Chen et al., 2018] proposed a deterministic particle-optimization framework that unified SG-MCMC and SVGD. Specifically, the authors viewed both as solutions of Wasserstein gradient flows (WGFs) on the space of probability measures and derived several deterministic particle-optimization techniques for particle evolution, as in SVGD. For SG-MCMC, the FP equation (2) for SGLD is a special type of WGF. Together with an interpretation of SVGD as a special case of the Vlasov equation from the nonlinear-PDE literature, [Chen et al., 2018] proposed a general PDE characterizing the evolution of the density of the model parameter $\boldsymbol\theta$, denoted $\nu_\tau$ at time $\tau$ with $\nu_\infty$ matching our target (posterior) distribution:
$$\partial_\tau\nu_\tau = \nabla_{\boldsymbol\theta}\cdot\big(\nu_\tau\beta^{-1}F(\boldsymbol\theta) + \nu_\tau(K*\nu_\tau(\boldsymbol\theta)) + \beta^{-1}\nabla_{\boldsymbol\theta}\nu_\tau\big), \quad (4)$$
where $K$ is a function controlling the interaction of particles in the PDE system. For example, in SVGD, [Chen et al., 2018] showed that $K$ and $K*\nu_\tau(\boldsymbol\theta)$ take the forms
$$K*\nu_\tau(\boldsymbol\theta) \triangleq \int K(\boldsymbol\theta,\boldsymbol\theta')\nu_\tau(\boldsymbol\theta')\,\mathrm d\boldsymbol\theta', \quad (5)$$
where $K(\boldsymbol\theta,\boldsymbol\theta') \triangleq F(\boldsymbol\theta')\kappa(\boldsymbol\theta',\boldsymbol\theta) - \nabla_{\boldsymbol\theta'}\kappa(\boldsymbol\theta',\boldsymbol\theta)$ and $\kappa(\cdot,\cdot)$ is a kernel function such as the RBF kernel. In the following, we introduce a unary function $K(\boldsymbol\theta) = \exp\big(-\frac{\|\boldsymbol\theta\|^2}{\eta^2}\big)$, so that $\kappa(\boldsymbol\theta,\boldsymbol\theta')$ can be rewritten as $\kappa(\boldsymbol\theta,\boldsymbol\theta') = K(\boldsymbol\theta-\boldsymbol\theta')$. Hence, (4) with $K$ defined in (5) is equivalently written as
$$\partial_\tau\nu_\tau = \nabla_{\boldsymbol\theta}\cdot\big(\nu_\tau\beta^{-1}F(\boldsymbol\theta) + \nu_\tau(\mathbb E_{Y\sim\nu_\tau}K(\boldsymbol\theta-Y)F(Y) - \nabla K*\nu_\tau(\boldsymbol\theta)) + \beta^{-1}\nabla_{\boldsymbol\theta}\nu_\tau\big), \quad (6)$$
where $Y$ is a random sample from $\nu_\tau$ independent of $\boldsymbol\theta$. Note that our formula here differs significantly from standard granular media equations in the literature; Section N of the SM provides more details.

Proposition 1 ([Chen et al., 2018]): The stationary distribution of (6) equals our target distribution, i.e., $\nu_\infty(\theta) = p(\theta\,|\,\mathbf X)$.

[Chen et al., 2018] proposed to solve (4) numerically with deterministic particle-optimization algorithms, such as the so-called blob method. Specifically, the continuous density $\nu_\tau$ is approximated by a set of $M$ particles $\{\theta^{(i)}_\tau\}_{i=1}^M$ that evolve over time $\tau$, i.e., $\nu_\tau \approx \frac 1M\sum_{i=1}^M\delta_{\theta^{(i)}_\tau}(\theta)$, where $\delta_{\theta^{(i)}_\tau}(\theta) = 1$ if $\theta = \theta^{(i)}_\tau$ and 0 otherwise. Note that $\nabla_{\boldsymbol\theta}\nu_\tau$ in (4) is no longer well-defined when adopting a particle approximation for $\nu_\tau$; consequently, it requires nontrivial approximations, e.g., by discrete gradient flows or the blob method proposed in [Chen et al., 2018]. We omit the details here for simplicity.

3 Stochastic Particle-Optimization Sampling (SPOS)

We first introduce a pitfall of SVGD, which is overcome by SPOS.
In the analysis of both SVGD and SPOS, we impose the following basic assumptions.

Assumption 1: Assume $F$ and $K$ satisfy:
1.1 $F$ is $L_F$-Lipschitz continuous, i.e., $\|F(\theta)-F(\theta')\| \le L_F\|\theta-\theta'\|$.
1.2 $K$ is $L_K$-Lipschitz continuous; $\nabla K$ is $L_{\nabla K}$-Lipschitz continuous.
1.3 $F(0) = 0$ and $K$ is an even function, i.e., $K(-\theta) = K(\theta)$.

A few remarks: i) Assumption 1.1 is widely adopted in other theoretical works such as [Dalalyan and Karagulyan, 2017, Chatterji et al., 2018]. ii) $F(0) = 0$ in Assumption 1.3 is reasonable because $F$ in our setting corresponds to an unnormalized log-posterior, which can be shifted so that $F(0) = 0$ for a specific problem. The assumptions on $K$ are satisfied by the properties of the Gaussian kernel.

3.1 A pitfall of SVGD

We first motivate SPOS by uncovering a pitfall of standard SVGD: particles in SVGD tend to collapse to a local mode under particular conditions. Inspired by work analyzing granular media equations [Malrieu, 2003, Cattiaux et al., 2008], we measure the collapse by calculating the expected distance between exact particles (without numerical errors), called the expected particle distance (EPD), defined below.

Assumption 2: Assume $F$ and $K$ satisfy:
2.1 There exists a positive $m_K$ such that $\langle\nabla K(\theta)-\nabla K(\theta'),\theta-\theta'\rangle \le -m_K\|\theta-\theta'\|^2$.
2.2 $F$ is bounded by $H_F$, i.e., $\|F(\theta)\| \le H_F$.

For an RBF kernel, this assumption can be satisfied by setting the bandwidth large enough and, for simplicity, considering only the concave region. This may seem restrictive; however, the assumption is imposed only for the analysis of the pitfall property and is not needed in the non-asymptotic convergence analysis. We also point out what happens without this assumption in Remark 1.

Theorem 2: Under Assumptions 1 and 2, for the particles $\boldsymbol\theta^{(i)}_\tau$ defined in Section 2.2, the EPD of SVGD is bounded as
$$\mathrm{EPD} \triangleq \sqrt{\textstyle\sum_{i,j}\mathbb E\|\boldsymbol\theta^{(i)}_\tau - \boldsymbol\theta^{(j)}_\tau\|^2} \le C_0e^{-2\lambda\tau},$$
where $C_0 = \sqrt{\sum_{i,j}\|\boldsymbol\theta^{(i)}_0 - \boldsymbol\theta^{(j)}_0\|^2}$ and $\lambda = m_K - H_FL_K$.

Remark 1: 1) In the case $\lambda \ge 0$, Theorem 2 indicates that particles in SVGD collapse to a single point as $\tau\to\infty$. In practice, we usually find that particles are trapped in a local mode instead of fully collapsing. This might be due to two reasons: i) particles in SVGD are numerical rather than exact solutions, which induces extra numerical errors; ii) some particles might leave the concave region of $K$ stated in Assumption 2, which is required for the theory to hold. These effects make the empirical EPD behave not exactly like the true particle distance. 2) Theorem 2 and its proof in the SM also apply to non-convex energy functions. 3) Even if the kernel is not concave, the result still indicates that particles within the concave regions would collapse. 4) The pitfall signals a challenge for developing non-asymptotic SVGD theory (if possible at all), motivating the development of SPOS. 5) This complements the result of [Liu et al., 2019], which proves SVGD is ill-posed under some conditions.
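The EPD is straightforward to estimate empirically by Monte Carlo over independent runs; a minimal sketch in which the array shapes are illustrative assumptions:

```python
import numpy as np

def expected_particle_distance(runs):
    """EPD = sqrt( sum_{i,j} E ||theta^(i) - theta^(j)||^2 ), estimated from
    runs: (R, M, d) particle sets from R independent runs at the same time."""
    diff = runs[:, :, None, :] - runs[:, None, :, :]  # (R, M, M, d)
    sq = np.sum(diff ** 2, axis=-1)                   # (R, M, M)
    return np.sqrt(sq.sum(axis=(1, 2)).mean())
```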
3.2 Stochastic particle-optimization sampling to mitigate the pitfall

We argue that the WGF framework proposed in [Chen et al., 2018], if solved appropriately, can overcome the pitfall of SVGD. The original solution in [Chen et al., 2018] is based on a deterministic particle-approximation method for (4), which introduces hard-to-control approximation errors. Instead, we propose to solve (4) stochastically, replacing the $\nabla_{\boldsymbol\theta}\nu_\tau$ term in (4) with a Brownian motion. First note that the term $\beta^{-1}\nabla_{\boldsymbol\theta}\cdot\nabla_{\boldsymbol\theta}\nu_\tau$ arises from Brownian motion: solving the SDE $\mathrm d\boldsymbol\theta_\tau = \sqrt{2\beta^{-1}}\,\mathrm d\mathcal W_\tau$ is equivalent to solving the corresponding FP equation $\partial\nu_\tau = \beta^{-1}\nabla_{\boldsymbol\theta}\cdot\nabla_{\boldsymbol\theta}\nu_\tau$. Consequently, we decompose the RHS of (4) into two parts: $F_1 \triangleq \nabla_{\boldsymbol\theta}\cdot\big(\nu_\tau\beta^{-1}F(\boldsymbol\theta_\tau) + (K*\nu_\tau)\nu_\tau\big)$ and $F_2 \triangleq \beta^{-1}\nabla_{\boldsymbol\theta}\cdot\nabla_{\boldsymbol\theta}\nu_\tau$. Our idea is to solve $F_1$ deterministically in a PDE setting and $F_2$ stochastically via its corresponding SDE. When adopting a particle approximation for the density $\nu_\tau$, the solutions of both $F_1$ and $F_2$ are represented in terms of particles $\{\boldsymbol\theta^{(i)}_\tau\}$, so we can combine the two solutions directly to approximate the original exact solution of (4). Similar to the SVGD results in Section 3.3 of [Liu, 2017], Theorem 3 first shows formally that when approximating $\nu_\tau$ with particles, i.e., $\nu_\tau \approx \frac 1M\sum_{i=1}^M\delta_{\boldsymbol\theta^{(i)}_\tau}(\boldsymbol\theta)$, the PDE transforms into a system of deterministic differential equations with interacting particles.

Theorem 3: When approximating $\nu_\tau$ in (4) with particles $\{\boldsymbol\theta^{(i)}_\tau\}$, the PDE $\partial_\tau\nu_\tau = F_1$ reduces to the following system of differential equations describing the evolution of the particles over time: $\forall i$,
$$\mathrm d\boldsymbol\theta^{(i)}_\tau = -\beta^{-1}F(\boldsymbol\theta^{(i)}_\tau)\,\mathrm d\tau - \frac 1M\sum_{j=1}^M K(\boldsymbol\theta^{(i)}_\tau-\boldsymbol\theta^{(j)}_\tau)F(\boldsymbol\theta^{(j)}_\tau)\,\mathrm d\tau + \frac 1M\sum_{j=1}^M\nabla K(\boldsymbol\theta^{(i)}_\tau-\boldsymbol\theta^{(j)}_\tau)\,\mathrm d\tau \quad (7)$$
Consequently, by solving $\partial_\tau\nu_\tau = F_2$ stochastically from an SDE perspective, we arrive at the following differential equation system describing the evolution of the particles $\{\boldsymbol\theta^{(i)}_\tau\}$ over time $\tau$: $\forall i$,
$$\mathrm d\boldsymbol\theta^{(i)}_\tau = \Big(-\beta^{-1}F(\boldsymbol\theta^{(i)}_\tau) - \frac 1M\sum_{j=1}^M K(\boldsymbol\theta^{(i)}_\tau-\boldsymbol\theta^{(j)}_\tau)F(\boldsymbol\theta^{(j)}_\tau) + \frac 1M\sum_{j=1}^M\nabla K(\boldsymbol\theta^{(i)}_\tau-\boldsymbol\theta^{(j)}_\tau)\Big)\mathrm d\tau + \sqrt{2\beta^{-1}}\,\mathrm d\mathcal W^{(i)}_\tau \quad (8)$$

Algorithm 1 Stochastic Particle-Optimization Sampling
Input: initial particles $\{\theta^{(i)}_0\}_{i=1}^M$ with $\theta^{(i)}_0\in\mathbb R^d$; step size $h_k$; batch size $B_k$
1: for iteration $k = 0, 1, \dots, T$ do
2:   Update $\theta^{(i)}_{k+1}$ with (9) for $\forall i$.
3: end for
Output: $\{\theta^{(i)}_T\}_{i=1}^M$

Our intuition is that if the particle evolution (8) can be solved exactly, the solution $\nu_\tau$ of (6) will be well-approximated by the particles $\{\boldsymbol\theta^{(i)}_\tau\}_{i=1}^M$; our theory shows this intuition to be correct. In practice, however, solving (8) exactly is typically infeasible, and thus numerical methods are adopted.
Furthermore, in the case of large datasets, following SG-MCMC, $F(\theta^{(i)}_k)$ is typically replaced by a stochastic version $G^{(i)}_k \triangleq \frac{N}{B_k}\sum_{q\in I_k}F_q(\theta^{(i)}_k)$ evaluated on a minibatch of size $B_k$ for computational feasibility. Based on the Euler method [Chen et al., 2015] with step size $h_k$, (8) leads to the following particle updates at the $k$-th iteration: letting $\xi^{(i)}_k \sim \mathcal N(0, I)$ for $\forall i$,
$$\theta^{(i)}_{k+1} = \theta^{(i)}_k - h_k\beta^{-1}G^{(i)}_k - \frac{h_k}{M}\sum_{j=1}^M K(\theta^{(i)}_k-\theta^{(j)}_k)G^{(j)}_k + \frac{h_k}{M}\sum_{j=1}^M\nabla K(\theta^{(i)}_k-\theta^{(j)}_k) + \sqrt{2\beta^{-1}h_k}\,\xi^{(i)}_k \quad (9)$$
We call the algorithm with particle update equations (9) stochastic particle-optimization sampling (Algorithm 1), in the sense that the particles are optimized stochastically with extra random Gaussian noise. Intuitively, the added noise enhances the algorithm's ability to jump out of local modes, leading to better exploration than standard SVGD; this is one of our motivations for generalizing SVGD to SPOS. To illustrate the advantage of the noise term, we compare SPOS and SVGD on sampling a difficult multi-mode distribution, with density function given in Section A of the SM. The particles are initialized at a local mode close to zero. Note that in this example there is always positive probability of jumping between modes. Figure 1 plots the final particle locations along with the true density: particles in SPOS are able to reach the different modes, while in SVGD they all remain trapped at one mode.

Figure 1: Comparison of SPOS (left) and SVGD (right) on a multi-mode distribution. The circles with different colors are the final 100 particles, which spread over all modes for SPOS.

Theorem 4 below bounds the EPD of SPOS, in contrast with that of SVGD in Theorem 2, which is intuitively obtained in the $\beta\to\infty$ limit.

Theorem 4: Under Assumption 1, further assume that every $\{\boldsymbol\theta^{(i)}_\tau\}$ of (8) approximating $\nu_\tau$ in (4) has the same initial law $\nu_0$ and that $\Gamma \triangleq \mathbb E_{\theta\sim\nu_0,\theta'\sim\nu_0}[\|\theta-\theta'\|^2] < \infty$. Choose $\beta$ such that $\lambda = \frac{m_F}{\beta} + m_K - H_FL_K > 0$. Then the EPD of SPOS is bounded as
$$\mathrm{EPD} \triangleq \sqrt{\textstyle\sum_{i,j}\mathbb E\|\boldsymbol\theta^{(i)}_\tau - \boldsymbol\theta^{(j)}_\tau\|^2} \le C_1e^{-2\lambda\tau} + \frac{4\sqrt{d/\beta}\,M}{\lambda},$$
where $C_1 = M(M-1)\Gamma - \frac{4\sqrt{d\beta^{-1}}\,M}{\lambda}$.

Remark 2: Two interesting cases arise: i) when $C_1 > 0$, the EPD decreases over time toward the bound $4\sqrt{d\beta^{-1}}M/\lambda$, representing an attraction force between particles; ii) when $C_1 < 0$, the EPD increases toward the same bound, representing a repulsive force between particles; e.g., when particles are initialized at the same value ($\Gamma = 0$), they are pushed away from each other until the EPD reaches the aforementioned bound.
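The qualitative comparison of Figure 1 is easy to reproduce in a few lines. The paper's exact density is only given in its SM, so the target below is a hypothetical stand-in (a two-mode 1-D Gaussian mixture), and all hyperparameter values are illustrative rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_log_p(theta):
    # Hypothetical target: 0.5 N(-3, 1) + 0.5 N(3, 1) (not the paper's density)
    w = np.exp(-(theta + 3.0) ** 2 / 2.0)
    v = np.exp(-(theta - 3.0) ** 2 / 2.0)
    return (-(theta + 3.0) * w - (theta - 3.0) * v) / (w + v + 1e-300)

M, h, beta, eta = 100, 0.05, 1.0, 1.0
theta = rng.normal(-3.0, 0.1, size=(M, 1))              # start at one local mode
for _ in range(3000):
    F = -grad_log_p(theta)                              # F = grad U = -grad log p
    diff = theta[:, None, :] - theta[None, :, :]
    K = np.exp(-np.sum(diff ** 2, axis=-1) / eta ** 2)  # K(x) = exp(-||x||^2/eta^2)
    gradK = -2.0 * diff / eta ** 2 * K[..., None]
    drift = (-h / beta) * F - (h / M) * K @ F + (h / M) * gradK.sum(axis=1)
    theta += drift + np.sqrt(2.0 * h / beta) * rng.normal(size=theta.shape)
# With the injected noise, some particles should cross to the mode at +3;
# removing it suppresses the mechanism that lets particles escape the local mode.
```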
4 Non-Asymptotic Convergence Analysis

In this section, we prove non-asymptotic convergence rates for the proposed SPOS algorithm under the 1-Wasserstein metric $W_1$, a special case of the $p$-Wasserstein metric
$$W_p(\mu,\nu) = \left(\inf_{\zeta\in\Gamma(\mu,\nu)}\int_{\mathbb R^d\times\mathbb R^d}\|X_\mu - X_\nu\|^p\,\mathrm d\zeta(X_\mu, X_\nu)\right)^{1/p}$$
where $\Gamma(\mu,\nu)$ is the set of joint distributions on $\mathbb R^d\times\mathbb R^d$ with marginal distributions $\mu$ and $\nu$. Note that SPOS reduces to SVGD when $\beta\to\infty$; thus our theory also sheds light on the convergence behavior of SVGD, for which non-asymptotic theory is currently mostly missing, despite the recently developed asymptotic theory [Liu, 2017, Lu et al., 2018]. For the analysis, we further impose the following assumptions.

Assumption 3: Assume $F$ and $\nu_0$ satisfy:
3.1 There exists a positive $m_F$ such that $\langle F(\theta)-F(\theta'),\theta-\theta'\rangle \ge m_F\|\theta-\theta'\|^2$.
3.2 The initial probability law of each particle has a bounded and strictly positive density $\nu_0$ with respect to the Lebesgue measure on $\mathbb R^d$, and $\gamma_0 \triangleq \log\int_{\mathbb R^d}e^{\|\theta\|^2}\nu_0(\theta)\,\mathrm d\theta < \infty$.

A few remarks: i) Assumption 3.1 implies that $U$ is a convex function; theory for non-convex $U$ is presented in Section J of the SM under some extra assumptions. ii) Assumption 3.1 is widely adopted in other theoretical works such as [Dalalyan and Karagulyan, 2017, Chatterji et al., 2018]. iii) Assumption 3.2 has also been adopted in [Raginsky et al., 2017].

4.1 Basic setup and extra notation

Due to the exchangeability of the particle system $\{\boldsymbol\theta^{(i)}_\tau\}_{i=1}^M$ in (8), if all particles $\boldsymbol\theta^{(i)}_\tau$ are initialized with the same distribution $\rho_0$, they share the same distribution at each time $\tau$, which we denote $\rho_\tau$. Similar arguments hold for the particle system $\{\theta^{(i)}_k\}_{i=1}^M$ in (9), so we denote the distribution of each $\theta^{(i)}_k$ as $\mu_k$ ($k = 1, 2, \dots, T$). Our analysis therefore aims at bounding $W_1(\mu_T,\nu_\infty)$, since $\nu_\infty$ is our target distribution $p(\theta\,|\,\mathbf X)$ by Proposition 1. In the following, for conciseness, we use a summation of step sizes to represent the "time index" of a density, e.g., $\rho_{\sum_{k=0}^{T-1}h_k}$. The high-level idea of bounding $W_1(\mu_T,\nu_\infty)$ in this section is to decompose it as
$$W_1(\mu_T,\nu_\infty) \le W_1\Big(\mu_T,\rho_{\sum_{k=0}^{T-1}h_k}\Big) + W_1\Big(\rho_{\sum_{k=0}^{T-1}h_k},\nu_{\sum_{k=0}^{T-1}h_k}\Big) + W_1\Big(\nu_{\sum_{k=0}^{T-1}h_k},\nu_\infty\Big). \quad (10)$$

4.2 Bounds with stochastic particle approximation

In this section, we bound $W_1(\rho_{\sum_{k=0}^{T-1}h_k},\nu_{\sum_{k=0}^{T-1}h_k})$ and $W_1(\nu_{\sum_{k=0}^{T-1}h_k},\nu_\infty)$ in (10). The first term corresponds to a variant of a granular media equation, but it is much more challenging to bound.

Theorem 5: Under Assumptions 1 and 3, letting $\rho_0 = \nu_0$, there exist positive constants $c_1$ and $c_2$, independent of $(M,\tau)$ and satisfying $c_2 < \beta^{-1}$, such that
$$W_1(\rho_\tau,\nu_\tau) \le c_1(\beta^{-1}-c_2)^{-1}M^{-1/2}, \quad \forall\tau. \quad (11)$$
Remark 3: According to Theorem 5, we can bound the second term of (10) as $W_1(\rho_{\sum_{k=0}^{T-1}h_k},\nu_{\sum_{k=0}^{T-1}h_k}) \le \frac{c_1}{\sqrt M(\beta^{-1}-c_2)}$. Furthermore, letting $\tau\to\infty$ gives $W_1(\rho_\infty,\nu_\infty) \le \frac{c_1}{\sqrt M(\beta^{-1}-c_2)}$, an important result for proving the following theorem.

Theorem 6: Under Assumptions 1 and 3, the following holds: $W_1(\nu_\tau,\nu_\infty) \le c_3e^{-2\lambda_1\tau}$, where $\lambda_1 = \beta^{-1}m_F - L_F - 2L_K$ and $c_3$ is some positive constant independent of $(M,\tau)$. Furthermore, the $W_1(\nu_{\sum_{k=0}^{T-1}h_k},\nu_\infty)$ term in (10) can be bounded as
$$W_1\Big(\nu_{\sum_{k=0}^{T-1}h_k},\nu_\infty\Big) \le c_3\exp\Big(-2\lambda_1\sum_{k=0}^{T-1}h_k\Big). \quad (12)$$
To ensure that $W_1(\nu_{\sum_{k=0}^{T-1}h_k},\nu_\infty)$ decreases over time, one needs to choose $\beta$ small enough that $\lambda_1 > 0$. This also sheds light on a failure case of SVGD (where $\beta\to\infty$), discussed in Section 3.1.

4.3 Bounds with a numerical solution

To bound the $W_1(\mu_T,\rho_{\sum_{k=0}^{T-1}h_k})$ term in (10), we adopt techniques from [Raginsky et al., 2017, Xu et al., 2018] for analyzing the behavior of SGLD, and derive the following results for our SPOS algorithm:

Theorem 7: Under Assumptions 1 and 3, for a fixed step size $h_k = h$ ($\forall k$) that is small enough, the corresponding $W_1(\mu_T,\rho_{Th})$ is bounded as
$$W_1(\mu_T,\rho_{Th}) \le c_4Md^{\frac 32}\beta^{-3}(c_5\beta^2B^{-1} + c_6h)^{\frac 12}T^{\frac 12}h^{\frac 12} \quad (13)$$
where $B$ is the minibatch size and $(c_4, c_5, c_6)$ are positive constants independent of $(M, T, h)$.

Combining the bounds from Theorems 5 and 7, for a given $T$ the optimal bound over $h$ decreases at a rate of $O(M^{-1/2})$. Furthermore, the dependence on $T$ in the bound of Theorem 7 makes the bound relatively loose. Fortunately, the bound can be made independent of $T$ by considering a decreasing-step-size SPOS algorithm, as stated in Theorem 8.

Theorem 8: Under Assumptions 1 and 3, for a decreasing step size $h_k = h_0/(k+1)$, and letting the minibatch size in each iteration $k$ be $B_k = B_0 + [\log(k+1)]^{100/99}$ with $B_0$ the initial batch size, the corresponding $W_1(\mu_T,\rho_{\sum_{k=0}^{T-1}h_k})$ term is bounded, for some $\beta$ small enough, as
$$W_1\Big(\mu_T,\rho_{\sum_{k=0}^{T-1}h_k}\Big) \le c_4\beta^{-3}Md^{\frac 32}\big(c_7h_0^3 + c_8\beta^3h_0/B_0 + c_9h_0^2\beta^2\big)^{1/2}, \quad (14)$$
where $(c_4, c_7, c_8, c_9)$ are positive constants independent of $(M, T, h_0)$. Note that $B_k$ increases very slowly, e.g., only by 15 after $10^5$ iterations, so it does not affect algorithm efficiency. Consequently, $W_1(\mu_T,\rho_{\sum_{k=0}^{T-1}h_k})$ approaches zero as $h_0^{1/2}M \to 0$.

The Overall Non-Asymptotic Bounds: Directly combining the results of Theorems 5-8 easily bounds the target $W_1(\mu_T,\nu_\infty)$, as stated in Theorems 9 and 10.

Theorem 9 (Fixed Stepsize): Under Assumptions 1 and 3, setting $h_k = h_0$ and $B_k = B_0$, $W_1(\mu_T,\nu_\infty)$ is bounded as
$$W_1(\mu_T,\nu_\infty) \le \frac{c_1}{\sqrt M(\beta^{-1}-c_2)} + c_6Md^{\frac 32}\beta^{-3}(c_4\beta^2B^{-1} + c_5h)^{\frac 12}T^{\frac 12}h^{\frac 12} + c_3\exp\big\{-2(\beta^{-1}m_F - L_F - 2L_K)Th\big\}, \quad (15)$$
where $(c_1, c_2, c_3, c_4, c_5, c_6, \beta)$ are positive constants such that $\frac 1\beta > c_2$ and $\frac{m_F}{\beta} > L_F + 2L_K$.

Theorem 10 (Decreasing Stepsize):
Under Assumption 1&3, if we set hk = \fJianyi Zhang1, Ruiyi Zhang1, Lawrence Carin1, Changyou Chen2k 100 101 102 103 iteration 1 1.5 2 2.5 3 3.5 4 4.5 5 error 100 101 102 103 iteration 1 1.5 2 2.5 3 3.5 4 4.5 5 error Figure 2: Estimation errors versus number of iterations for SPOS (left) and SVGD (right). h0/(k+1) and Bk = B0+[log(k+1)]100/99, W1(\u00b5T , \u03bd\u221e) is bounded as: W1(\u00b5T , \u03bd\u221e) \u2264 c1 \u221a M(\u03b2\u22121 \u2212c2) (16) + c3 exp{\u22122 \u0000\u03b2\u22121mF \u2212LF \u22122LK \u0001 \u02dc hT } + c10\u03b2\u22123Md 3 2 (c9h3 0 + c7\u03b23h0/B0 + c8h2 0\u03b22) 1 2 . where (c1, c2, c3, c7, c7, c8, c9, c10, \u03b2) are positive constants such that 1 \u03b2 > c2 and mF \u03b2 > LF + 2LK. Remark 4 Four implications are highlighted from the theorems: i) M and T play a similar role when bounding the numerical errors (the third term in the RHS of (15)). The bound increases with increasing M and T, which seems unavoidable and is consistent with the latest result for SGLD, whose bound is proved to increase w.r.t. T [Raginsky et al., 2017]. ii) The increasing bound w.r.t. T can be compromised by using decreasing stepsizes shown in Theorem 10. Unfortunately, this does not seem to eliminate the e\ufb00ect of M. To accommodate this, one should either use a smaller h or a larger \u03b2. We believe future work is needed to improve the bound w.r.t. M. However, this is nontrivial as recent theory shows coordinate-wise SGLD scales linearly w.r.t. parameter dimension [Shen et al., 2019] (corresponding to scaling linearly w.r.t. M in our case, consistent with our theory). iii) When T \u00d7 M (proportional to computation cost) is not too large, the error is bounded above by O(M \u22121/2 + M), indicating the existence of an optimal M, i.e., one should not choose arbitrary many particles as it would induce larger numerical-error bounds. This is somewhat surprising and counter-intuitive compared with the asymptotic theory [Liu, 2017, Lu et al., 2018]. However, we will demonstrate this is true with experiments on synthesized data, where the phenomenon is also observed in SVGD. iv) When T \u00d7 M is large enough, the O(M) term dominates, indicating an increasing error w.r.t. M. This is veri\ufb01ed by the experiments in Section 5.1 (Figure 3), although the bound might not be strictly tight. 5 Experiments We use simple experiments with synthetic data to demonstrate the non-asymptotic convergence behaviors of SPOS indicated by our theory. In addition to the results presented below, more experiments and real applications are provided in Section O of the SM, considering comparisons of SPOS with SVGD and SGLD on Bayesian learning of deep neural network and Bayesian exploration in deep reinforcement learning (RL). 5.1 Sampling a Gaussian distribution We apply the algorithms to sample from a simple 1-D Gaussian distribution with mean 2 and variance 1. Since the 1-Wasserstein distance is infeasible to calculate, we follow [Vollmer et al., 2016, Chen et al., 2015] and measure the convergence using err \u225c |E\u03b8\u223c\u00b5T [f(\u03b8)] \u2212E\u03b8\u223cN (2,1)[f(\u03b8)]| with a test function f(\u03b8) \u225c\u03b82. We \ufb01x T = 1000 and h = 0.03. Particles are initialized as being drawn from N(0, 1). Figure 2 plots the estimation errors versus the number of iterations for di\ufb00erent particles M. For both SPOS and SVGD, it is observed that when T is not too large (\u2248100), the errors increase w.r.t. T, and the optimal M is around 300, consistent with our theory. 
Figure 2: Estimation errors versus number of iterations for SPOS (left) and SVGD (right).

Figure 2 plots the estimation errors versus the number of iterations for different particle counts $M$. For both SPOS and SVGD, when $T$ is not too large ($\approx 100$) the errors increase w.r.t. $T$, and the optimal $M$ is around 300, consistent with our theory. When $T$ is large enough, the errors decrease w.r.t. $T$, and larger $M$ induces larger errors. This is also consistent with our theory, because the last term in Theorem 9 dominates when $T$ is large, leading to increasing errors with larger $M$. The only concern is the tightness of the bound, which might be due to technical difficulty: current techniques for SGLD also yield a bound increasing w.r.t. $T$ [Raginsky et al., 2017]. The large optimal $M$ also suggests that using a relatively large $M$ should not be a problem in real applications.

Impact of particle number M: In addition to the above result demonstrating the existence of an optimal $M$, we further verify that when $T\times M$ is large enough, for a fixed $T$ the errors increase with increasing $M$. We use the same setting as above. Figure 3 plots the error curves versus the number of particles: errors indeed increase w.r.t. particle number, consistent with our theory.

Figure 3: Errors versus number of particles (SPOS and SVGD at iterations 400, 600, 800 and 1000). Errors increase with increasing particle numbers.

Although the rate of the bound from our theory might not match the experimental results exactly, we believe this finding is still significant, as the problem had not been discovered before and is somewhat counter-intuitive. On the other hand, the results are also reasonable, as more particles need many more updates to fit a distribution well. They indicate that, to achieve a smaller error, one should increase the number of iterations faster than the number of particles.

Table 1: Averaged test RMSE with standard deviations.

Dataset       SGLD             SVGD             SPOS
Boston        3.114 ± 0.144    2.961 ± 0.109    2.829 ± 0.126
Concrete      5.508 ± 0.275    5.157 ± 0.082    5.071 ± 0.150
Energy        0.842 ± 0.060    1.291 ± 0.029    0.752 ± 0.029
Kin8nm        0.080 ± 0.001    0.090 ± 0.001    0.079 ± 0.001
Naval         0.004 ± 0.000    0.004 ± 0.000    0.004 ± 0.000
CCPP          4.059 ± 0.080    4.127 ± 0.027    3.939 ± 0.049
Wine          0.632 ± 0.022    0.604 ± 0.007    0.598 ± 0.014
Yacht         1.183 ± 0.263    1.597 ± 0.099    0.840 ± 0.087
Protein       4.281 ± 0.011    4.392 ± 0.015    4.254 ± 0.005
YearPredict   8.707 ± NA       8.684 ± NA       8.681 ± NA

5.2 BNNs for regression

We next conduct experiments on Bayesian learning of deep neural networks (DNNs), empirically comparing SGLD, SVGD and SPOS for posterior sampling of BNN weights with standard Gaussian priors. We use an RBF kernel with the bandwidth set to the median distance between particles. Following [Li et al., 2015], 10 UCI public datasets are considered; we use 100 hidden units for the 2 large datasets (Protein and YearPredict) and 50 hidden units for the other 8 small datasets, with the same setting as [Zhang et al., 2018b]. The datasets are randomly split into 90% training and 10% testing. For a fair comparison, we use the same split of data (train, validation and test) for all methods. We report the root mean squared error (RMSE) in Table 1: the proposed SPOS outperforms both SVGD and SGLD. More detailed settings and results are given in Section O of the SM.
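For reference, a plausible way the test RMSE in Table 1 is obtained is from the posterior predictive mean over weight particles, as is standard for particle-based BNN evaluation; a sketch in which `predict` (a forward pass of the regression network for one weight sample) is a hypothetical helper:

```python
import numpy as np

def posterior_rmse(particles, predict, X_test, y_test):
    """RMSE of the posterior predictive mean, averaged over weight particles."""
    preds = np.mean([predict(theta, X_test) for theta in particles], axis=0)
    return np.sqrt(np.mean((preds - y_test) ** 2))
```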
5.3 Bayesian exploration in deep RL

It is well accepted that RL performance directly measures how well uncertainty is learned, due to the need for exploration. We apply SPOS to RL and compare it with SVPG, an SVGD version of the policy gradient method [Liu et al., 2017]. Following [Liu et al., 2017, Zhang et al., 2018a], we define policies with Bayesian DNNs. This naturally introduces uncertainty into action selection, enabling Bayesian exploration that makes policy learning more effective. We follow the same setting as [Liu et al., 2017], except that we use simpler policy-network architectures, as in [Houthooft et al., 2016]. We conduct experiments on three classical continuous-control tasks: Cartpole Swing-Up, Double Pendulum, and Cartpole. Detailed experimental settings are given in the SM. Figure 4 plots the cumulative rewards over time on the Cartpole environment, clearly showing the advantage of our method over SVPG. More results are provided in the SM.

Figure 4: Policy learning with Bayesian exploration in policy-gradient methods with SVPG and SPOS-PG (average reward versus episodes; panels: Cartpole Swing-Up (A2C) and Cartpole Swing-Up).

6" } ], "Yiran Chen": [ { "url": "http://arxiv.org/abs/2108.07140v2", "title": "MTG: A Benchmark Suite for Multilingual Text Generation", "abstract": "We introduce MTG, a new benchmark suite for training and evaluating\nmultilingual text generation. It is the first-proposed multilingual multiway\ntext generation dataset with the largest human-annotated data (400k). It\nincludes four generation tasks (story generation, question generation, title\ngeneration and text summarization) across five languages (English, German,\nFrench, Spanish and Chinese). The multiway setup enables testing knowledge\ntransfer capabilities for a model across languages and tasks. Using MTG, we\ntrain and analyze several popular multilingual generation models from different\naspects. Our benchmark suite fosters model performance enhancement with more\nhuman-annotated parallel data. It provides comprehensive evaluations with\ndiverse generation scenarios. Code and data are available at\n\url{https://github.com/zide05/MTG}.", "authors": "Yiran Chen, Zhenqiao Song, Xianze Wu, Danqing Wang, Jingjing Xu, Jiaze Chen, Hao Zhou, Lei Li", "published": "2021-08-13", "updated": "2022-06-10", "primary_cat": "cs.CL", "cats": [ "cs.CL" ], "main_content": "Introduction

Natural language generation (NLG) aims to automatically generate meaningful texts from inputs in different formats, such as images (Anderson et al., 2018), tables (Ye et al., 2020) or texts (Guan et al., 2019). The generated texts generally target realizing an underlying communicative goal while remaining coherent with the input information and grammatically correct. Multilingual text generation extends the natural language generation task to produce texts in multiple languages, which is important for overcoming language barriers and enabling universal information access for the world's citizens (Artetxe et al., 2020; Arivazhagan et al., 2019; Pan et al., 2021). To achieve this goal, various multilingual text generation datasets have been proposed. Some of them do not incorporate cross-lingual pairs (Liang et al., 2020; Ladhak et al., 2020).
This limits the knowledge transfer from one language to another. Others involve cross-lingual pairs, but English is included on either the source or the target side in most cases (Zhu et al., 2019; Ladhak et al., 2020), leading to difficult transfer between low-resource or distant language pairs. Constructing a multilingual text generation dataset that can directly transfer knowledge between any two languages is still under-explored. To this end, we propose MTG, a human-annotated multilingual multiway dataset. Multiway means that the same sample is expressed in multiple languages. It covers four generation tasks (story generation, question generation, title generation and text summarization) across five languages (English, German, French, Spanish and Chinese). We do not include multilingual machine translation because MT itself is a standard task. The multiway parallel feature enables cross-lingual data construction between arbitrary language pairs. Such a direct parallel signal promotes knowledge transfer and cross-lingual generation between any language pair (even distant pairs such as Spanish-Chinese) without involving an intermediate language such as English (Leng et al., 2019). The multilingual multiway feature also enables various training and test scenarios. In this paper, we design four scenarios to verify the advantages of our MTG from different aspects. Several representative pretrained multilingual models are employed to test these scenarios, including multilingual BERT (M-BERT) (Devlin et al., 2019), XLM (Lample and Conneau, 2019), mBART (Liu et al., 2020) and mT5 (Xue et al., 2020). We leverage various metrics to assess the coherence and diversity of the outputs generated by these models. Besides, we also propose an ensemble metric, which mainly focuses on relevance, measuring to what degree the generated text is close to human level. Human evaluation is also conducted to validate the models' performances. In summary, the contributions of this paper are listed as follows: (i) We propose a new human-annotated multilingual multiway text generation benchmark suite, MTG. (ii) We design a new evaluation metric measuring how a text resembles human writing and show that it correlates better with human scores than other automatic relevance metrics. (iii) We evaluate several representative pretrained multilingual models on our proposed MTG and make a rigorous analysis to verify its advantages. 2 Related Work A significant body of work has been devoted to the construction of multilingual datasets covering diverse tasks (Hu et al., 2020; Jiang et al., 2020; Longpre et al., 2020). XTREME (Hu et al., 2020) is a multilingual understanding benchmark across 40 languages and 9 tasks, but it does not cover any generation task. Jiang et al. (2020) propose X-FACTR, which is a cross-lingual factual retrieval benchmark. Longpre et al. (2020) propose MKQA, an open-domain question answering evaluation dataset covering 26 diverse languages. Ladhak et al. (2020) present WikiLingua, which is a large-scale, multilingual dataset for cross-lingual abstractive summarization systems. MLSUM (Wang et al., 2021) is a dataset for text summarization in 12 languages. Wiki-40B (Guo et al., 2020) is a multilingual language model dataset across 40+ languages.
Although these datasets cover multiple languages, they either belong to natural language understanding tasks or to a single, specific generation task, which prevents researchers from obtaining general findings across a set of generation tasks. XGLUE (Liang et al., 2020) is a cross-lingual benchmark dataset for nine understanding tasks and two generation tasks. GEM (Gehrmann et al., 2021) is a newly presented vision-language dataset covering 11 image-language and video-language tasks and 32 languages. These two datasets encompass multiple tasks and languages. However, a remarkable difference of our MTG from XGLUE and GEM is that MTG focuses on text-to-text generation tasks and is parallel across all languages, which facilitates easier knowledge transfer. 3 Dataset Collection and Methodology This section introduces how we create the benchmark suite for multilingual text generation (MTG). In order to construct a multiway parallel dataset, the initial dataset is translated into other languages by an off-the-shelf translation model. Part of the translated data is randomly selected for further human annotation to increase data quality. The selection of tasks, initial datasets and languages is based on several principles, as described below. 3.1 Task and Dataset Selection It is important to select suitable tasks for our MTG benchmark to make it diverse and challenging. Thus, we define several criteria for the task selection procedure: Task Definition Tasks should be well-defined, which means that humans can easily determine whether the generated results meet the task requirements. Task Difficulty Tasks should be solvable by most college-educated speakers. At the same time, they should be challenging to current models, whose performance in various test scenarios falls short of human performance. Task Diversity Tasks should cover a wide range of generation challenges that allow findings to be as general as possible. Input Format The input format of the tasks needs to be as simple as possible to reduce the difficulty of data processing. Besides, it should not contain anything but text (e.g., no images or videos). In order to meet the above criteria, 8 domain experts are asked to vote among 10 typical generation tasks (story generation, commonsense generation, style transfer, question generation, question answering, dialogue generation, title generation, text summarization, image caption, and data-to-text generation). Finally, four generation tasks are selected for MTG: story generation, question generation, title generation and text summarization. Story generation (SG) aims to generate the end of a given story context, which requires the model to understand the story context and generate a reasonable and fluent ending (Guan et al., 2019). Question generation (QG) targets generating a correct question for a given passage and its answer (Duan et al., 2017). For the same passage with different answers, the system should be able to generate different questions. Title generation (TG) converts a given article into a condensed sentence while preserving its main idea (Jin and Hauptmann, 2002). The title should be faithful to the original document and encourage users to read the news at the same time.
Table 1: The description of tasks and English datasets included in MTG. For story generation, we use the last sentence as the story end to be generated and the rest as input.
Task | Corpus | Domain | Goal
Story Generation | ROCStories | Daily life | Generate the end of the story
Question Generation | SQUAD 1.0 | Wikipedia | Generate the question of the answer
Title Generation | ByteCup | News | Generate the title of the document
Text Summarization | CNN/DailyMail | News | Generate the summary of the document

Text summarization (Summ) aims to condense the source document into a coherent, concise, and fluent summary (Mani, 2001). It is similar to title generation, but the output of text summarization is relatively longer. These four tasks focus on different generative abilities and realize different goals. After confirming the tasks, the next step is to choose the dataset for each task. The two selection principles are listed as follows: (1) License: Task data must be available under licenses that allow use and redistribution for research purposes. The dataset should be free and available for download. (2) Quality: The dataset size should be as large as possible and the quality should be checked. English datasets are chosen as the initial datasets because they are more accessible for all four tasks and have relatively larger sizes compared with datasets in other languages. We choose ROCStories (Mostafazadeh et al., 2016) for story generation, SQUAD 1.0 (Rajpurkar et al., 2016) for question generation, ByteCup (https://www.biendata.xyz/competition/bytecup2018/) for title generation and CNN/DailyMail (Nallapati et al., 2016) for text summarization. These datasets are popular in the corresponding fields and have been verified to be high-quality by many works. Moreover, they are all under a permissive license. An overview of all task datasets is shown in Table 1. 3.2 Language Selection The original datasets are in English (en) only, and we want to extend them into a multiway parallel form. This means that all English texts should be translated into other languages, which would lead to high annotation costs. Thus, a state-of-the-art translator is leveraged to do the translation, and annotators are then asked to correct the translated text. Considering this construction method, MTG should contain languages that (1) have good English-to-X translators and (2) are diverse in language family. Finally, German (de), French (fr), Spanish (es) and Chinese (zh) are chosen. German is from the same language branch as English, while French and Spanish are from different ones. Chinese is more distant from the rest of the languages in the language family tree.

Table 2: The number of samples in MTG. MTG consists of four subsets: rough training, annotated training, development and test sets. The rough training set is filtered by back translating across five languages. The annotated training, development and test sets are corrected by human experts.
For each task (SG / QG / TG / Summ) and each language:
Rough training size | 76k / 61k / 270k / 164k
Annotated training size | 15k / 15k / 15k / 15k
Annotated development size | 2k / 2k / 2k / 2k
Annotated test size | 3k / 3k / 3k / 3k
For five languages (en, de, fr, es, zh):
Total annotated size | 400k
Total dataset size | 6.9m

3.3 Data Collection After determining the tasks and languages, we introduce the data collection process used to build MTG. Google Translate (https://translate.google.com/) is used to translate the English datasets into the selected languages.
To control the quality of the translated texts, we back translate each text to English and filter out samples whose n-gram overlap ratios with the original English texts are lower than a certain threshold. Different threshold values (from 0.3 to 0.6 with a step length of 0.1) are tested; if the threshold is set to 0.6, the training data size of QG drops by more than 60%. Thus we decide to use 0.5 as the threshold, which improves the quality of the filtered data while still maintaining more than 70% of the original training data (the detailed sizes of the filtered datasets with respect to different thresholds are included in Appendix A). Samples in the four languages are aligned to ensure that the dataset is multiway parallel. 20,000 samples for each task and language are randomly selected for annotation under the premise of ensuring inter-language alignment. The annotators are required to further check the translated results based on the following rules: (1) Semantic alignment: whether the target text is meaningful and fully semantically aligned with the source text. (2) Fluency: whether the translated text is grammatically correct. (3) Style: whether the translation follows the norms of the local culture, language conventions, and gender-related words. If the translated text contradicts any of the above rules, annotators correct it accordingly. The annotated data is then split into 15k/2k/3k training/development/test subsets. A team of 10 full-time experts (3 language experts for German, 3 for French, 4 for Spanish and 4 for Chinese) is hired to do the annotation; they are paid daily. Some part-time workers (16 participating in the German annotation, 39 for French, 4 for Spanish and 15 for Chinese) are also employed to increase the annotation throughput; they are paid by the number of annotations. Each annotator is an expert in at least two languages (English and another target language). They are first trained on a small number of trial samples; these annotation results are re-checked by us, and feedback is given to the annotators to help them understand the tasks better. After this annotation training process, the annotators start to annotate the dataset. For quality control, we sample 2% of the annotations and arrange for 9 experts to double-check them. Each example is assigned to two other experts, and the data is qualified only if both of them agree on the annotation (i.e., the grammar, expressions, and punctuation of the annotated text are completely correct and the expressions are in accordance with the target language). If more than 5% of an annotator's checked annotations fail, all of that annotator's data for that day is re-checked. Then the multiway parallel generation benchmark MTG is finally completed.

Table 3: The correlation scores between automatic metric scores and human-annotated scores (the average of the grammar, fluency and relevance scores). The upper part shows the correlation scores of different regression algorithms on the test set of all languages. The lower part shows the correlation scores of our ensemble score (the bagging regressor) and other classic automatic scores on the test set without Chinese results, because METEOR does not support Chinese.
Pearson correlation (regression algorithms): AdaBoost 0.100 | DecisionTree 0.133 | ExtraTree 0.190 | GradientBoosting 0.215 | KNeighbors 0.192 | Linear 0.173 | RandomForest 0.208 | SVR 0.113 | Bagging 0.240
Pearson correlation (automatic metrics): BLEU 0.180 | ROUGE-1 0.142 | ROUGE-2 0.163 | ROUGE-L 0.144 | METEOR 0.122 | BERTScore-P 0.142 | BERTScore-R 0.176 | BERTScore-F1 0.162 | Bagging 0.344
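The back-translation filtering step described at the start of this subsection can be sketched as follows (a minimal illustration, assuming unigram overlap over whitespace-tokenized text; the paper does not specify the exact n-gram order):

    from collections import Counter

    def ngram_overlap_ratio(original, back_translated, n=1):
        # Fraction of the original English text's n-grams recovered by the back-translation.
        def ngrams(text):
            toks = text.lower().split()
            return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
        orig, back = ngrams(original), ngrams(back_translated)
        total = sum(orig.values())
        if total == 0:
            return 0.0
        return sum(min(c, back[g]) for g, c in orig.items()) / total

    def keep_sample(original_en, back_translated_en, threshold=0.5):
        # Keep a translated sample only if the round-trip overlap stays above the threshold.
        return ngram_overlap_ratio(original_en, back_translated_en) >= threshold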
It contains four different generation tasks in five languages, and its quality is improved by the incorporation of human annotation. However, the amount of human-annotated data is still small due to cost concerns. Introducing more human-annotated data or carrying out extra filtering of the machine-translated data are possible future directions to further improve the quality of MTG. The statistics of MTG are shown in Table 2. 4 Experiments In this section, we conduct extensive experiments to benchmark the difficulty of our proposed MTG via several state-of-the-art multilingual models under different scenarios. 4.1 Baseline Models The performance of the following four popular multilingual pretrained models is explored (detailed descriptions of the models are included in Appendix B): M-BERT Multilingual BERT (M-BERT) (Devlin et al., 2019) is a language model pretrained on monolingual corpora in 104 languages using the Masked Language Modeling (MLM) task. XLM The Cross-Lingual Language Model (XLM) (Lample and Conneau, 2019) is pretrained with the Masked Language Modeling (MLM) task on monolingual data and the Translation Language Modeling (TLM) task on parallel data. mBART Multilingual BART (mBART) (Liu et al., 2020) is a pretrained encoder-decoder model using a denoising auto-encoding objective on monolingual data over 25 languages. mT5 Multilingual T5 (mT5) (Xue et al., 2020) is a multilingual variant of T5 (Raffel et al., 2020) that formats all tasks as text-to-text generation problems. mT5 is pretrained on a span-corruption version of the Masked Language Modeling objective over 101 languages.
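As an illustration, such encoder-decoder baselines can be loaded through the Hugging Face transformers library (a minimal sketch; the checkpoint name below is the publicly released one and not necessarily the exact checkpoint used in the paper, and language-specific control tokens for mBART are omitted for brevity):

    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    # Sequence-to-sequence baselines such as mBART (or mT5 via "google/mt5-base")
    # map directly onto fine-tuning for the four MTG tasks.
    tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
    model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-cc25")

    inputs = tokenizer("Generate the title of the document: ...", return_tensors="pt")
    outputs = model.generate(**inputs, max_length=48)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))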
[Figure 1 shows 16 heatmap panels of 5×5 language grids (rows: input language; columns: output language; languages: en, de, fr, es, zh): (a) SG M-BERT, (b) SG XLM, (c) SG mBART, (d) SG mT5, (e) QG M-BERT, (f) QG XLM, (g) QG mBART, (h) QG mT5, (i) TG M-BERT, (j) TG XLM, (k) TG mBART, (l) TG mT5, (m) Summ M-BERT, (n) Summ XLM, (o) Summ mBART, (p) Summ mT5.]
Figure 1: The cross-lingual ensemble metric results for four models on four tasks. Every cell in row lang1 and column lang2 shows the result when the languages of the input and output are lang1 and lang2, respectively. Deeper red represents better cross-lingual performance, while deeper gray indicates worse performance.
4.2 Evaluation Metrics In order to fully understand model performance, the quality of the generated texts is evaluated from different aspects, including metrics measuring the relevance between outputs and references (e.g., BLEU, ROUGE, and BERTScore) and metrics measuring the diversity of the generated texts (e.g., Distinct). Moreover, we propose a new ensemble metric leveraging relevance metrics to measure how close the generated text is to human writing. It not only has higher correlation with human judgments but is also capable of measuring model performance fairly across languages. N-gram based Metrics N-gram based metrics evaluate the text-overlap scores between the outputs and the references. The following three metrics are used: (1) BLEU (Papineni et al., 2002) is a popular metric that calculates the word-overlap scores between the generated texts and the gold-standard ones. We use BLEU-4, which is the average score over unigrams, bigrams, trigrams, and 4-grams. (2) ROUGE (Lin, 2004) is a recall-oriented metric that counts the number of overlapping units, such as n-grams and word sequences, between the produced texts and the gold-standard ones. (3) METEOR (Banerjee and Lavie, 2005) relies on semantic features to predict the similarity scores between system hypotheses and human references. Embedding based Metrics Embedding-based metrics can, to a large extent, capture the semantic-level similarity between the generated text and the ground truth. BERTScore (Zhang et al., 2019) computes the similarity of a candidate and a reference as a sum of cosine similarities of tokens using BERT contextual embeddings. Diversity Metrics We also employ the distinct metric (Li et al., 2016), which calculates the proportion of distinct n-grams among all the system hypotheses and can be used to evaluate the diversity of the generated texts. Human Evaluation Human evaluation is also leveraged to better estimate the quality of model outputs. Specifically, 30 cases are randomly sampled from the test set for each task and language while ensuring all 30 cases are aligned among the five languages, and they are then presented to human annotators along with the model outputs. The generated texts are evaluated under task-agnostic and task-specific aspects. Task-agnostic aspects include Grammar, Fluency, Relevance and Language Fusion.
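The distinct metric described above is straightforward to compute (a minimal sketch; whitespace tokenization is an assumption for illustration):

    def distinct_n(hypotheses, n=1):
        # Proportion of distinct n-grams among all n-grams in the system hypotheses.
        all_ngrams = []
        for hyp in hypotheses:
            toks = hyp.split()
            all_ngrams.extend(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
        if not all_ngrams:
            return 0.0
        return len(set(all_ngrams)) / len(all_ngrams)

    # Example: distinct-1 over two generated sentences (5 distinct unigrams / 6 total).
    print(distinct_n(["the cat sat", "the dog ran"], n=1))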
Task-agnostic aspects include Grammar, Fluency, Relevance and Language \fTask Model BLEU ROUGE-L METEOR BERTScore Distinct-1 Ensemble mono multi mono multi mono multi mono multi mono multi mono multi SG M-BERT 2.486 2.836 16.680 17.240 0.139 0.140 0.741 0.743 0.952 0.959 30.891 30.987 XLM 4.026 2.992 24.520 22.820 0.145 0.144 0.754 0.744 0.967 0.967 28.364 28.449 mBART 4.514 4.880 19.320 19.920 0.149 0.156 0.759 0.762 0.985 0.983 31.430 31.907 mT5 2.668 3.832 16.280 18.620 0.126 0.145 0.751 0.759 0.976 0.974 31.623 31.482 QG M-BERT 8.266 9.980 27.340 29.520 0.240 0.262 0.778 0.785 0.938 0.944 30.553 30.526 XLM 16.472 15.264 41.100 40.600 0.305 0.298 0.810 0.809 0.966 0.967 30.072 29.979 mBART 16.256 17.624 36.640 38.140 0.298 0.315 0.811 0.817 0.981 0.983 32.522 32.961 mT5 15.792 17.700 34.100 37.680 0.294 0.313 0.806 0.818 0.977 0.979 32.257 32.944 TG M-BERT 9.524 10.550 25.440 26.360 0.214 0.228 0.749 0.754 0.930 0.957 28.971 29.422 XLM 11.144 11.926 26.960 28.660 0.236 0.248 0.752 0.759 0.946 0.941 28.808 29.063 mBART 14.726 14.786 31.680 32.120 0.257 0.260 0.773 0.775 0.966 0.968 30.556 30.322 mT5 11.336 13.546 26.460 29.400 0.223 0.257 0.753 0.767 0.959 0.956 29.556 30.010 Summ M-BERT 9.766 10.956 31.280 32.220 0.221 0.232 0.748 0.751 0.787 0.815 22.122 22.018 XLM 9.486 11.830 30.160 34.740 0.235 0.235 0.729 0.755 0.814 0.772 19.281 20.770 mBART 12.858 12.792 32.940 32.920 0.256 0.257 0.750 0.750 0.796 0.803 21.972 22.292 mT5 5.022 6.090 25.060 27.980 0.145 0.162 0.724 0.741 0.826 0.870 20.499 21.826 Table 4: Automatic scores averaged across \ufb01ve languages for four models on four tasks. Mono and multi mean models are trained in monolingual and multilingual setting respectively. Higher scores between monolingual and multilingual results are bolded. Fusion. The former three aspects are scored from 1 to 5 while the language fusion score is set to 1 if all tokens of a model-generated text are in the target language and 0 otherwise. Besides task-agnostic aspects, the generated text is also evaluated under task-speci\ufb01c aspects. For title generation and summarization, coverage measures the degree to which the generated text covers the main content of the document. Correspondence for question generation measures the extent to which the generated question is matched with both document and answer. For story generation, we further evaluate whether the generated story is logically feasible. All task-speci\ufb01c aspects are scored from 1 to 5. Ensemble Metric Some N-gram based metrics such as BLEU and ROUGE largely depend on the tokenizer for speci\ufb01c languages. For example, BLEU scores for Chinese outputs are relatively high because it simply uses a character-level tokenizer. This causes unfair comparison between different languages. To this end, we propose an ensemble metric that evaluates the degree to which a piece of text resembles manual writing. It not only enables fair comparison between languages but is also proved to have a better correlation with humanannotated scores at the end of this subsection. We \ufb01rst average the grammar, \ufb02uency and relevance scores as targets, then normalize the automatic metrics and human scores among every language to eliminate the score discrepancy between languages. Three relevance metrics (BLEU, ROUGE-L, and BERTScore-F1) are gathered as features. The samples are split into training, development and test sets. 
After comparing the performance of different regression models, as shown in the upper part of Table 3, we finally choose the bagging regression model (Breiman, 1996) as the ensemble metric. Moreover, the bagging ensemble metric shows a higher correlation with the human-annotated scores compared with the other automatic relevance metrics, as shown in the lower part of Table 3. 4.3 Evaluation Scenarios To validate the effect of different experimental settings on model performance, several state-of-the-art multilingual models are studied under four evaluation scenarios. Monolingual fine-tuning The pretrained model is tuned for a downstream task using the training data for a specific language and evaluated on the test set for the same language. Multilingual fine-tuning The pretrained model is jointly fine-tuned with data in all languages for a specific task. Different from the monolingual fine-tuning setting, there is only one model for each downstream task, which can serve all languages. Cross-lingual generation Since MTG is multiway parallel, it can be reorganized to create input-output pairs that belong to different languages. In this paper, we make use of the multiway parallel data to do supervised cross-lingual training; e.g., for English-centric cross-lingual training, we take the English source as the input and the parallel German, French, Spanish and Chinese targets as the output. We then evaluate the model in the same setting (en->de, en->es, en->fr, en->zh). The cross-lingual generation performances on all 5 × 4 directions are evaluated. Zero-shot transfer We also explore the zero-shot ability of multilingual pretrained models on the four tasks. The model is fine-tuned on a specific task with English input and output. It is then used to generate output in other languages given a language tag. 5 Results 5.1 Monolingual and Cross-lingual This section presents the monolingual and cross-lingual model comparison to explore model performance across tasks and languages. Figure 1 contains the five language-centric cross-lingual and monolingual results. Several conclusions can be drawn from the results: Cross-lingual performance is better than monolingual performance in some cases. As shown in Figure 1, model performance on ensemble scores in the cross-lingual setting frequently exceeds that in the monolingual setting (e.g., the monolingual result for French underperforms the English-to-French cross-lingual result in Figure 1(b)). This is because the cross-lingual models are trained with more data (e.g., the English-centric cross-lingual model is trained with en->de, en->fr, en->es, en->zh data), and the data from different cross-lingual directions can sometimes benefit from each other, thus improving model performance. Chinese text generation is challenging in the cross-lingual setting. As illustrated in Figure 1, nearly all models obtain inferior scores when generating Chinese text. Also, model results on Chinese inputs are usually worse than results on inputs in other languages. The wide discrepancies in grammar and vocabulary between Chinese and the other languages lead to poor cross-lingual generation performance when either the target or the source language is Chinese. Multilingual pretrained models obtain lower scores on the summarization task. Compared with the other tasks, summarization requires longer outputs, which increases the difficulty of text generation, thus causing poor performance in both the cross-lingual and monolingual settings.
[Figure 2: ensemble scores for the Mono, Multi, Cross and Zero settings, comparing Stage1 and Stage2; the y-axis (ensemble score) ranges roughly from 16 to 21.]
Figure 2: The performance of the different stages, averaged across five languages, for XLM on summarization under various settings. Here Stage1 represents models trained only on the rough training data, while Stage2 represents models further trained on the human-annotated training data starting from the Stage1 models. 5.2 Monolingual and Multilingual In addition to the cross-lingual analysis, we also explore the performance difference between models trained in the monolingual and multilingual settings. Table 4 displays the monolingual and multilingual training results for four models on four tasks. In most cases, multilingual training can improve model performance on relevance. As shown in Table 4, 75 out of 96 multilingual results outperform their monolingual counterparts on the various relevance metrics across tasks. The reason is that the multilingual data in MTG is fully parallel across all five languages and every sample has semantically aligned counterparts in the other languages. This enables better semantic fusion among different languages, thus boosting multilingual training performance. The advantages of multilingual training are not obvious for diversity as measured by distinct-1. Especially in the story generation task, 3 out of 4 models obtain better distinct-1 scores in the monolingual setting than in the multilingual one. Diversity cannot be improved by semantic sharing across languages, especially when the samples are multiway parallel. This is because a multiway parallel dataset, with semantically aligned samples repeated in different languages, encourages models to generate similar texts to some extent. 5.3 Zero-shot results To test the cross-lingual generation ability of multilingual pretrained models when no direct cross-lingual training data are provided, we evaluate the zero-shot cross-lingual generation performance. Table 5 presents the zero-shot results for XLM on the four tasks.
Table 5: English-centric zero-shot and cross-lingual results for XLM on four tasks. In each cell, the scores on the left and right of the slash are the zero-shot and cross-lingual results, respectively ('-' marks METEOR for Chinese, which is not supported).
Task | Direction | BLEU | ROUGE-1 | ROUGE-2 | ROUGE-L | METEOR | BERTScore | Distinct-1 | Distinct-2 | Ensemble
SG | en->de | 0.02/3.20 | 7.20/27.20 | 0.20/4.00 | 7.20/25.80 | 0.05/0.14 | 0.63/0.73 | 0.47/0.96 | 0.50/1.00 | 18.90/29.70
SG | en->fr | 0.02/4.23 | 5.90/28.10 | 0.20/6.30 | 5.90/26.40 | 0.04/0.20 | 0.63/0.74 | 0.38/0.95 | 0.41/0.99 | 14.30/27.70
SG | en->es | 0.09/3.38 | 8.70/26.30 | 0.40/4.60 | 8.50/24.80 | 0.04/0.14 | 0.65/0.74 | 0.52/0.96 | 0.55/0.99 | 16.90/28.40
SG | en->zh | 0.00/5.79 | 0.00/28.80 | 0.00/8.80 | 0.00/26.80 | - | 0.45/0.67 | 0.61/0.99 | 0.57/0.34 | 16.60/26.70
QG | en->de | 1.96/10.41 | 18.10/38.70 | 2.40/14.70 | 17.60/37.20 | 0.10/0.25 | 0.73/0.78 | 0.94/0.97 | 0.98/1.00 | 29.80/29.30
QG | en->fr | 2.16/14.70 | 16.80/42.80 | 2.90/19.00 | 16.20/39.60 | 0.08/0.35 | 0.74/0.80 | 0.94/0.95 | 0.99/0.99 | 28.60/29.80
QG | en->es | 7.46/16.93 | 25.50/49.50 | 8.70/22.40 | 23.90/46.80 | 0.18/0.37 | 0.76/0.83 | 0.94/0.95 | 0.99/1.00 | 28.50/29.10
QG | en->zh | 0.00/16.07 | 0.00/43.10 | 0.00/22.90 | 0.00/37.90 | - | 0.44/0.73 | 0.10/1.00 | 0.08/1.00 | 16.40/28.60
TG | en->de | 2.58/9.15 | 13.40/26.90 | 4.40/11.10 | 12.50/24.30 | 0.12/0.22 | 0.67/0.73 | 0.83/0.95 | 0.88/0.99 | 26.30/30.60
TG | en->fr | 3.26/11.54 | 13.90/33.80 | 4.50/14.70 | 12.70/29.00 | 0.12/0.30 | 0.69/0.75 | 0.89/0.91 | 0.93/0.99 | 25.20/28.50
TG | en->es | 4.90/12.45 | 21.20/36.30 | 7.40/15.70 | 18.50/31.10 | 0.17/0.31 | 0.71/0.76 | 0.88/0.91 | 0.94/0.99 | 24.50/29.50
TG | en->zh | 0.01/15.44 | 0.00/34.50 | 0.00/19.40 | 0.00/29.90 | - | 0.45/0.69 | 0.37/0.98 | 0.22/0.58 | 16.70/27.10
Summ | en->de | 1.85/8.36 | 15.40/34.70 | 2.90/11.70 | 14.50/31.10 | 0.08/0.20 | 0.65/0.72 | 0.61/0.81 | 0.78/0.97 | 18.50/21.50
Summ | en->fr | 1.29/11.79 | 13.70/39.90 | 2.60/15.80 | 13.00/35.50 | 0.07/0.29 | 0.68/0.75 | 0.64/0.75 | 0.82/0.94 | 18.60/20.30
Summ | en->es | 4.18/11.93 | 22.50/41.00 | 5.80/15.60 | 20.30/36.60 | 0.14/0.29 | 0.69/0.75 | 0.64/0.74 | 0.82/0.95 | 17.30/20.70
Summ | en->zh | 0.00/14.58 | 0.00/42.20 | 0.00/20.40 | 0.00/38.70 | - | 0.42/0.71 | 0.68/0.84 | 0.27/0.94 | 12.80/19.60

It demonstrates that the multilingual pretrained model XLM still lacks the ability to generate high-quality cross-lingual output in the zero-shot scenario. Moreover, English-to-Chinese and English-to-French zero-shot generation shows inferior performance (zero-shot results show the same trend, as shown in Table 17 in the Appendix). The performance decline is particularly salient when generating Chinese text. This is because Chinese and French (especially Chinese) are distant from English in the language family tree. On the other hand, zero-shot results underperform cross-lingual results, which further emphasizes the importance of direct cross-lingual training data for cross-lingual text generation. 5.4 Pseudo and Annotated Data To answer the question "Does the 400k annotated training data help the model generate better?", we use the rough training data filtered by back translation for the first fine-tuning stage and the annotated training data for the second stage. The results of this ablation study on two-step fine-tuning for summarization, under all evaluation scenarios with XLM, are illustrated in Figure 2. The extra human-annotated data boost model performance by at least 3.8% on the ensemble metric. We also conduct a t-test and show that the improvement from the annotated training data is significant in all settings (the t-test details are shown in Appendix C). This demonstrates that although the amount of annotated data is small, it can significantly improve performance. It also highlights the necessity of human-annotated multilingual data compared with pseudo-parallel data obtained via machine translation.
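The significance check can be sketched as a paired t-test over per-sample scores (an illustration with hypothetical arrays; the paper's exact procedure is in its Appendix C):

    import numpy as np
    from scipy import stats

    # Hypothetical per-sample ensemble scores on the same test inputs.
    stage1_scores = np.array([18.2, 19.0, 17.5, 18.9, 18.4])
    stage2_scores = np.array([19.5, 20.1, 18.8, 19.9, 19.3])

    # Paired t-test: does stage-2 fine-tuning significantly improve over stage 1?
    t_stat, p_value = stats.ttest_rel(stage2_scores, stage1_scores)
    print(t_stat, p_value)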
Table 6: Human evaluation scores averaged over five languages for mBART on four tasks. 'Gram.', 'Flu.', 'Rel.', 'Lang Fu.' and 'Task Spec.' indicate the Grammar, Fluency, Relevance, Language Fusion and Task-Specific scores, respectively.
Task | Setting | Gram. | Flu. | Rel. | Lang Fu. | Task Spec.
SG | mono | 4.69 | 4.81 | 3.75 | 1.00 | 3.79
SG | multi | 4.71 | 4.80 | 3.67 | 1.00 | 4.02
SG | cross | 4.18 | 4.23 | 3.49 | 0.95 | 2.53
SG | zero | 4.15 | 4.18 | 3.27 | 0.18 | 3.00
QG | mono | 4.66 | 4.69 | 3.03 | 0.99 | 3.95
QG | multi | 4.69 | 4.67 | 3.06 | 0.97 | 4.11
QG | cross | 4.30 | 4.30 | 2.70 | 0.95 | 2.64
QG | zero | 3.35 | 4.26 | 3.18 | 0.19 | 3.09
TG | mono | 4.53 | 4.51 | 3.09 | 0.96 | 3.71
TG | multi | 4.66 | 4.65 | 3.18 | 0.93 | 3.17
TG | cross | 3.73 | 3.64 | 2.63 | 0.90 | 1.85
TG | zero | 3.52 | 4.15 | 3.51 | 0.18 | 1.43
Summ | mono | 4.19 | 3.99 | 3.71 | 0.68 | 3.71
Summ | multi | 4.19 | 4.02 | 3.78 | 0.64 | 3.60
Summ | cross | 2.14 | 2.22 | 2.23 | 0.68 | 2.05
Summ | zero | 1.57 | 1.54 | 1.58 | 0.03 | 1.59

5.5 Human evaluation Table 6 presents the human evaluation scores for mBART on the four tasks. Multilingual training results surpass the monolingual results on relevance for QG, TG and Summ. In terms of the task-specific score, the multilingual results are also superior for SG and QG. This is consistent with the conclusion in Sec. 5.2. On the other hand, the language fusion scores in the zero-shot setting are extremely low, indicating that the pretrained models still lack the ability to generate text in the correct language in the zero-shot setting. 6 Discussions Considering the annotation cost, it is not realistic to construct a multiway text generation dataset with all data annotated by humans. As a consequence, most of the non-English data in MTG are automatically translated from their English counterparts. Although the n-gram consistency check when round-trip translating the data can guarantee their quality to some extent, some translation errors are inevitable. Versions of MTG with more annotated data and with data filtered by more reliable methods will be explored in the future. On the other hand, humans often give an overall evaluation of a generated text rather than measuring it in the fine-grained aspects of grammar, fluency and relevance. Thus we propose a metric measuring how a text resembles human writing, considering grammar, fluency and relevance as a whole. This metric may not be perfect, but it is a promising direction, as no truly reliable text generation metric exists today." + } + ], + "Changyou Chen": [ + { + "url": "http://arxiv.org/abs/1805.11659v2", + "title": "A Unified Particle-Optimization Framework for Scalable Bayesian Sampling", + "abstract": "There has been recent interest in developing scalable Bayesian sampling\nmethods such as stochastic gradient MCMC (SG-MCMC) and Stein variational\ngradient descent (SVGD) for big-data analysis. A standard SG-MCMC algorithm\nsimulates samples from a discrete-time Markov chain to approximate a target\ndistribution, thus samples could be highly correlated, an undesired property\nfor SG-MCMC. In contrary, SVGD directly optimizes a set of particles to\napproximate a target distribution, and thus is able to obtain good\napproximations with relatively much fewer samples. In this paper, we propose a\nprinciple particle-optimization framework based on Wasserstein gradient flows\nto unify SG-MCMC and SVGD, and to allow new algorithms to be developed.
Our\nframework interprets SG-MCMC as particle optimization on the space of\nprobability measures, revealing a strong connection between SG-MCMC and SVGD.\nThe key component of our framework is several particle-approximate techniques\nto efficiently solve the original partial differential equations on the space\nof probability measures. Extensive experiments on both synthetic data and deep\nneural networks demonstrate the effectiveness and efficiency of our framework\nfor scalable Bayesian sampling.", + "authors": "Changyou Chen, Ruiyi Zhang, Wenlin Wang, Bai Li, Liqun Chen", + "published": "2018-05-29", + "updated": "2018-07-10", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG" + ], + "main_content": "INTRODUCTION Bayesian methods have been playing an important role in modern machine learning, especially in unsupervised learning (Kingma and Welling, 2014; Li et al., 2017), and recently in deep reinforcement learning (Houthooft et al., 2016; Liu et al., 2017). When dealing with big data, two lines of research have been developed to scale up Bayesian methods: variational-Bayes-based and sampling-based methods. Stochastic gradient Markov chain Monte Carlo (SG-MCMC) is a family of scalable Bayesian learning algorithms designed to efficiently sample from a target distribution such as a posterior distribution (Welling and Teh, 2011; Chen et al., 2014; Ding et al., 2014; Chen et al., 2015). In principle, SG-MCMC generates samples from a Markov chain, which are used to approximate a target distribution. Under a standard setting, samples from SG-MCMC are able to match a target distribution exactly with an infinite number of samples (Teh et al., 2016; Chen et al., 2015). However, this is practically infeasible, as only a finite number of samples can be obtained. Although non-asymptotic approximation bounds w.r.t. the number of samples have been investigated (Teh et al., 2016; Vollmer et al., 2016; Chen et al., 2015), there are no theories/algorithms to guide learning an optimal set of fixed-size samples/particles. This is an undesirable property of SG-MCMC, because in practice one often seeks to learn the optimal samples of a finite size that best approximate a target distribution. A remedy for this issue is to adopt the idea of particle-based sampling methods, where a set of particles (or samples) are initialized from some simple distribution, followed by iterative updates to better approximate a target distribution. The updating procedure is usually done by optimizing some metric, such as a distance measure between the target distribution and the current approximation. There is not much work in this direction for large-scale Bayesian sampling, with an outstanding representative being Stein variational gradient descent (SVGD) (Liu and Wang, 2016a). In SVGD, the update of the particles is done by optimizing the KL-divergence between the empirical particle distribution and a target distribution; thus the samples are designed to be updated optimally to reduce the KL-divergence in each iteration. Because of this property, SVGD is found to perform better than SG-MCMC when the number of samples used to approximate a target distribution is limited, and has been applied to other problems such as deep generative models (Feng et al., 2017) and deep reinforcement learning (Liu et al., 2017; Haarnoja et al., 2017; Zhang et al., 2018b).
Though the two methods often achieve comparable performance in practice, little work has been done on investigating the connections between SG-MCMC and SVGD, or on developing particle-optimization schemes for SG-MCMC. In this paper, adopting ideas from the Wasserstein-gradient-flow literature, we propose a unified particle-optimization framework for scalable Bayesian sampling. The idea of our framework is to work directly on the evolution of density functions on the space of probability measures, e.g., the Fokker-Planck equation in SG-MCMC. To make the evolution computationally feasible, particle approximations are adopted for the densities, where the particles can be optimized during the evolution process. Both SG-MCMC and SVGD are special cases of our framework, and are shown to be highly related. Notably, sampling with SG-MCMC becomes a deterministic particle-optimization problem, like SVGD, on the space of probability measures, overcoming the aforementioned correlated-sample issue. Furthermore, we are able to develop new unified particle-optimization algorithms by combining SG-MCMC and SVGD, which are less prone to problems in high-dimensional spaces and thus obtain better performance for large-scale Bayesian sampling. We conduct extensive experiments on both synthetic data and Bayesian learning of deep neural networks, verifying the effectiveness and efficiency of our proposed framework. 2 PRELIMINARIES In this section, we review related concepts and algorithms for SG-MCMC, SVGD, and Wasserstein gradient flows (WGF) on the space of probability measures. 2.1 Stochastic gradient MCMC Diffusion-based sampling methods Generating random samples from a distribution (e.g., a posterior distribution) is one of the fundamental problems in Bayesian statistics, with many important applications in machine learning. Traditional Markov chain Monte Carlo (MCMC) methods, such as the Metropolis-Hastings algorithm (Metropolis et al., 1953), produce unbiased samples from a desired distribution when the density function is known up to a normalizing constant. However, most of these methods are based on random-walk proposals, which suffer from high dimensionality and often lead to highly correlated samples. On the other hand, dynamics-based sampling methods such as the Metropolis-adjusted Langevin algorithm (MALA) (Xifara et al., 2014) avoid this high degree of correlation by combining dynamical systems with the Metropolis step. In fact, these dynamical systems are derived from a more general mathematical technique called a diffusion process, or more specifically, Itô diffusion (Øksendal, 1985). Specifically, our objective is to generate random samples from a posterior distribution p(θ|X) ∝ p(X|θ)p(θ), where θ ∈ R^r represents the model parameter and X ≜ {x_i}_{i=1}^N represents the data. The canonical form is p(θ|X) = (1/Z) exp(U(θ)), where U(θ) = log p(X|θ) + log p(θ) ≜ ∑_{i=1}^N log p(x_i|θ) + log p(θ) is referred to as the potential energy, based on an i.i.d. assumption on the model, and Z is the normalizing constant.
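A minimal sketch of the unbiased minibatch estimator of this potential energy's gradient, anticipating Eq. (3) below (grad_log_prior and grad_log_lik are placeholder callables assumed to be supplied by the model):

    import numpy as np

    def stochastic_grad_U(theta, X, grad_log_prior, grad_log_lik, n, rng):
        # Unbiased estimate of grad U(theta): grad log p(theta) plus the rescaled
        # sum of likelihood gradients over a random size-n minibatch of the N data points.
        N = len(X)
        idx = rng.choice(N, size=n, replace=False)
        return grad_log_prior(theta) + (N / n) * sum(grad_log_lik(X[i], theta) for i in idx)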
In Bayesian sampling, the posterior distribution corresponds to the (marginal) stationary distribution of a (continuous-time) Itô diffusion, defined as a stochastic differential equation of the form: dΘ_t = F(Θ_t)dt + g(Θ_t)dW_t , (1) where t is the time index; Θ_t ∈ R^p represents the full set of variables in the dynamical system, and Θ_t ⊇ θ_t (thus p ≥ r) is potentially an augmentation of the model parameter θ; W_t ∈ R^p is p-dimensional Brownian motion. The functions F : R^p → R^p and g : R^p → R^{p×p} are assumed to satisfy the Lipschitz continuity condition (Ghosh, 2011). By the Fokker-Planck equation (or the forward Kolmogorov equation) (Kolmogoroff, 1931; Risken, 1989), when the diffusion-coefficient functions F(·) and g(·) are appropriately designed, the stationary distribution of the corresponding Itô diffusion equals the posterior distribution of interest, p(θ|X). For example, the 1st-order Langevin dynamic defines Θ = θ, with F(Θ_t) = (1/2)∇_θU(θ) and g(Θ_t) = I_r; the 2nd-order Langevin diffusion defines Θ = (θ, q), with F(Θ_t) = (q, ∇_θU(θ) − Bq) and g(Θ_t) = √(2B) (0, 0; 0, I_r) for a scalar B > 0, where q is an auxiliary variable known as the momentum (Chen et al., 2014; Ding et al., 2014). Let the density of Θ_t be μ_t; it is known that μ_t is characterized by the Fokker-Planck (FP) equation (Risken, 1989): ∂μ_t/∂t = −∇_Θ · (μ_t F(Θ_t)) + ∇_Θ∇_Θ : (μ_t Σ(Θ_t)) , (2) where Σ(Θ_t) ≜ g(Θ_t)g^⊤(Θ_t), a · b ≜ a^⊤b for vectors a and b, and A : B ≜ trace(A^⊤B) for matrices A and B. The FP equation is the key to developing our particle-optimization framework for SG-MCMC. In the following, we focus on the simplest case of 1st-order Langevin dynamics if not stated explicitly, though the derivations apply to the other variants. Stochastic gradient MCMC SG-MCMC algorithms are discretized numerical approximations of the Itô diffusion (1). They mitigate the slow mixing and non-scalability issues encountered in traditional MCMC algorithms by i) adopting gradient information of the posterior distribution, ii) using minibatches of the data in each iteration of the algorithm to generate samples, and iii) ignoring the rejection step as in standard MCMC. To make the algorithms scalable in a big-data setting, three developments are implemented based on the Itô diffusion: i) define appropriate functions F and g in the Itô-diffusion formula so that the (marginal) stationary distribution coincides with the target posterior distribution p(θ|X); ii) replace F or g with unbiased stochastic approximations to reduce the computational complexity, e.g., approximating F with a random subset of the data instead of using the full data.
For example, in the 1st-order Langevin dynamics, ∇_θU(θ) can be approximated by an unbiased estimator based on a subset of the data: ∇_θŨ(θ) ≜ ∇ log p(θ) + (N/n) ∑_{i=1}^n ∇_θ log p(x_{π_i}|θ) , (3) where π is a size-n random subset of {1, 2, ..., N}; this leads to the first SG-MCMC algorithm in machine learning, stochastic gradient Langevin dynamics (SGLD) (Welling and Teh, 2011); and iii) solve the generally intractable continuous-time Itô diffusions with a numerical method, e.g., the Euler method (Chen et al., 2015). For example, this leads to the following update in SGLD: θ_ℓ = θ_{ℓ−1} + ∇_θŨ(θ_{ℓ−1})h + √(2h) δ_ℓ , where h is the stepsize, ℓ indexes the samples, and δ_ℓ ∼ N(0, I) is a random sample from an isotropic normal distribution. After running the algorithm for L steps, the collection of samples {θ_ℓ}_{ℓ=1}^L is used to approximate the unknown posterior distribution (1/Z) e^{U(θ)}. 2.2 Stein variational gradient descent Different from SG-MCMC, SVGD initializes a set of particles which are iteratively updated so that the empirical particle distribution approximates the posterior distribution. Specifically, we consider a set of particles {θ^{(i)}}_{i=1}^M drawn from some distribution q. SVGD updates these particles by doing gradient descent on the interacting particle system via θ^{(i)} ← θ^{(i)} + hφ(θ^{(i)}), with φ = arg max_{φ∈F} {−(∂/∂h) KL(q_{[hφ]} ∥ p)} , where φ is a function perturbation direction chosen to minimize the KL divergence between the updated density q_{[hφ]} estimated by the particles and the posterior p(θ|X) (p for short). Since KL(q∥p) is convex in q, the global optimum q = p can be guaranteed. SVGD takes F to be the unit ball of a vector-valued reproducing kernel Hilbert space (RKHS) H associated with a kernel κ(θ, θ′). In such a setting, Liu and Wang (2016b) showed: −(∂/∂h) KL(q_{[hφ]} ∥ p)|_{h=0} = E_{θ∼q}[trace(Γ_p φ(θ))] , (4) with Γ_p φ(θ) ≜ ∇_θU(θ)^⊤ φ(θ) + ∇_θ · φ(θ), where Γ_p is called the Stein operator. Assuming that the update function φ(θ) is in an RKHS with kernel κ(·, ·), it was shown in (Liu and Wang, 2016b) that (4) is maximized by: φ(θ) = E_{θ′∼q}[κ(θ′, θ)∇_{θ′}U(θ′) + ∇_{θ′}κ(θ′, θ)] . (5) When approximating the expectation E_{θ′∼q}[·] with the empirical particle distribution and adopting stochastic gradients, we arrive at the following updates for the particles (ℓ denotes the iteration number): θ_{ℓ+1}^{(i)} = θ_ℓ^{(i)} + (h/M) ∑_{j=1}^M [ κ(θ_ℓ^{(j)}, θ_ℓ^{(i)}) ∇_{θ_ℓ^{(j)}}Ũ(θ_ℓ^{(j)}) + ∇_{θ_ℓ^{(j)}}κ(θ_ℓ^{(j)}, θ_ℓ^{(i)}) ] . (6) SVGD applies the updates (6) repeatedly, moving the samples towards the target distribution p. 2.3 Wasserstein Gradient Flows For a better motivation of WGF, we start from gradient flows defined on Euclidean space. Gradient flows on the Euclidean space For a smooth function E : R^r → R and a starting point θ_0 ∈ R^r, the gradient flow of E(θ) is defined as the solution of the differential equation: dθ/dt = −∇E(θ(t)), s.t. θ(0) = θ_0.
This is a standard Cauchy problem (Rulla, 1996), endowed with a unique solution if ∇E is Lipschitz continuous. When E is non-differentiable, the gradient is replaced with its subgradient, defined as ∂E(θ) ≜ {p ∈ R^r : E(θ′) ≥ E(θ) + p · (θ′ − θ), ∀θ′ ∈ R^r}. Note that ∂E(θ) = {∇E(θ)} if E is differentiable at θ. In this case, the gradient-flow formula above is replaced with: dθ/dt ∈ −∂E(θ(t)). Wasserstein gradient flows Let P(Ω) denote the space of probability measures on Ω ⊂ R^r. WGF is an extension of gradient flows in Euclidean space obtained by lifting the definition onto the space of probability measures. Formally, let P(Ω) be endowed with a Riemannian geometry induced by the 2nd-order Wasserstein distance, i.e., the curve length between two elements (two distributions) is defined as: W_2^2(μ, ν) ≜ inf_γ { ∫_{Ω×Ω} ∥θ − θ′∥_2^2 dγ(θ, θ′) : γ ∈ Γ(μ, ν) } , where Γ(μ, ν) is the set of joint distributions over (θ, θ′) such that the two marginals equal μ and ν, respectively. The Wasserstein distance can be explained as an optimal-transport problem, where one wants to transform elements in the domain of μ to ν with minimum cost (Villani, 2008). The term ∥θ − θ′∥_2^2 represents the cost of transporting θ in μ to θ′ in ν, and can be replaced by a general metric c(θ, θ′) in a metric space. If μ is absolutely continuous w.r.t. the Lebesgue measure, there is a unique optimal transport plan from μ to ν, i.e., a mapping T : R^r → R^r pushing elements in the domain of μ onto ν and satisfying T#μ = ν. Here T#μ denotes the pushforward measure of μ. The Wasserstein distance can be equivalently reformulated as: W_2^2(μ, ν) ≜ inf_T { ∫_Ω c(θ, T(θ)) dμ(θ) } . Let {μ_t}_{t∈[0,1]} be an absolutely continuous curve in P(Ω) with finite second-order moments. We consider defining the change of the μ_t's by investigating W_2^2(μ_t, μ_{t+h}). Motivated by the Euclidean-space case, this is reflected by a vector field, v_t(θ) ≜ lim_{h→0} (T(θ_t) − θ_t)/h, called the velocity of the particle. A gradient flow can be defined on P(Ω) correspondingly (Ambrosio et al., 2005). Lemma 1 Let {μ_t}_{t∈[0,1]} be an absolutely-continuous curve in P(Ω) with finite second-order moments. Then for a.e. t ∈ [0, 1], the above vector field v_t defines a gradient flow on P(Ω) as: ∂_t μ_t + ∇ · (v_t μ_t) = 0.
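For intuition, when μ and ν are uniform empirical measures over two equally sized particle sets, the infimum over couplings is attained by a one-to-one assignment, so W_2^2 can be computed exactly (a small illustration, not part of the paper's algorithms):

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def w2_squared(x, y):
        # Squared 2-Wasserstein distance between uniform empirical measures on two
        # (M, d) particle sets: solve the optimal assignment under squared Euclidean cost.
        cost = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
        rows, cols = linear_sum_assignment(cost)
        return cost[rows, cols].mean()

    rng = np.random.default_rng(0)
    print(w2_squared(rng.standard_normal((64, 2)), 3.0 + rng.standard_normal((64, 2))))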
It can be shown that vt in Lemma 1 has the form vt = \u2212\u2207\u03b4E \u03b4\u00b5t (\u00b5t) (Ambrosio et al., 2005), where \u03b4E \u03b4\u00b5t is called the \ufb01rst variation of E at \u00b5t. Based on this, gradient \ufb02ows on P(\u2126) can be written \u2202t\u00b5t = \u2212\u2207\u00b7 (vt \u00b5t) = \u2207\u00b7 \u0012 \u00b5t\u2207( \u03b4E \u03b4\u00b5t (\u00b5t)) \u0013 . (7) Remark 1 Intuitively, an energy functional E characterizes the landscape structure (appearance) of the corresponding manifold in P(\u2126), and the gradient \ufb02ow (7) de\ufb01nes a geodesic path on this manifold. Usually, by choosing appropriate E, the landscape is convex, e.g., for the cases of both SG-MCMC and SVGD described below. This provides a theoretical guarantee on the optimal convergence of a gradient \ufb02ow. 3 PARTICLE-OPTIMIZATIONBASED SAMPLING In this section, we interpret the continuous versions of both SG-MCMC and SVGD as WGFs, followed by several techniques for particle optimization in the next section. In the following, \u00b5t denotes the distribution of \u03b8t. 3.1 SVGD as WGF The continuous-time and in\ufb01nite-particle limit of SVGD with full gradients, denoted as SVGD\u221e, is known to be a special instance of the Vlasov equation in nonlinear partial-differential-equation literature (Liu, 2017): \u2202t\u00b5t = \u2207\u00b7 ((W \u2217\u00b5t)\u00b5t) , (8) where (W \u2217\u00b5t)(\u03b8) \u225c R W(\u03b8 \u2212\u03b8\u2032)\u00b5t(\u03b8\u2032)d\u03b8\u2032 is the convolutional operator applied for some function W : Rr \u2192 R. To specify SVGD\u221e, we generalize the convolutional operator, and consider W as a function with two input arguments, i.e., (W \u2217\u00b5t)(\u03b8) \u225c Z W(\u03b8, \u03b8\u2032)\u00b5t(\u03b8\u2032)d\u03b8\u2032 . Under this setting, we can specify the function W(\u00b7, \u00b7) for SVGD\u221eas W(\u03b8, \u03b8\u2032) \u225c\u2207\u03b8\u2032 log p(\u03b8\u2032| X)\u03ba(\u03b8\u2032, \u03b8) + \u2207\u03b8\u2032\u03ba(\u03b8\u2032, \u03b8) = \u2207\u03b8\u2032 [p(\u03b8\u2032| X)\u03ba(\u03b8, \u03b8\u2032)] /p(\u03b8\u2032) . (9) As will be shown in Section 4, W in (9) naturally leads to the SVGD algorithm, without the need to derive from an RKHS perspective. Proposition 2 The stationary distribution of (8) is limt\u2192\u221e\u00b5t \u225c\u00b5 = p(\u03b8| X). To interpret SVGD\u221eas a WGF, we need to specify two quantities, the energy functional and an underlying metric to measure distances between density functions. Energy functional and distance metric of SVGD\u221e There are two ways to derive energy functionals for SVGD\u221e, depending on the underlying metrics for probability distributions. When adopting the WGF framework where W2 is used as the underlying metric, according to (7), the energy functional Es must satisfy \u2207\u03b8 \u0012\u03b4Es \u03b4\u00b5t (\u00b5t) \u0013 = W(\u03b8, \u03b8\u2032) \u2217\u00b5t (10) = E\u03b8\u2032\u223c\u00b5t \u0002 \u2207\u03b8\u2032 \u0002 p(\u03b8\u2032| X)K(\u03b8, \u03b8\u2032) \u0003 /p(\u03b8\u2032| X) \u0003 . In general, there is no close-form solution for the above equation. Alternatively, Liu (2017) proved another form of the energy functional by de\ufb01ning a different distance metric on the space of probability measures, called HWasserstein distance: WH(q1, q2) \u225cinf \u03c6t,\u00b5t \u001aZ 1 0 \u2225\u03c6t\u2225Hdt, s.t. 
\u00b5t = \u2212\u2207\u03b8 \u00b7 (\u03c6t\u00b5t), \u00b50 = q1, \u00b51 = q2\u2225} , (11) where \u03c6t \u225cW \u2217\u00b5t, and \u2225\u00b7 \u2225H is the norm in the Hilbert space induced by \u03ba(\u00b7, \u00b7). Under this metric, the underlying energy functional is proved to be the standard KLdivergence between \u00b5t and p, e.g., Es = KL(\u00b5t, p(\u00b7| X)). As can be seen in Section 4, this interpretation allows one to derive SVGD, a particle-optimization-based algorithm to approximate the continuous-time equation (8). \f3.2 SG-MCMC as WGF The continuous-time limit of SG-MCMC, when considering gradients to be exact, corresponds to standard It\u00b4 o diffusions. We consider the It\u00b4 o diffusion of SGLD for simplicity, e.g., d\u03b8t = 1 2\u2207U(\u03b8t)dt + dW . (12) Energy functional The energy functional for SGMCMC is easily seen by noting that the corresponding FP equation (2) is in the gradient-\ufb02ow form of (7). Speci\ufb01cally, the energy functional E is de\ufb01ned as: E(\u00b5) \u225c\u2212 Z U(\u03b8)\u00b5(\u03b8)d\u03b8 | {z } E1 + Z \u00b5(\u03b8) log \u00b5(\u03b8)d\u03b8 | {z } E2 (13) Note E2 is the energy functional of a pure Brownian motion (e.g., U(\u03b8) = 0 in (12)). We can verify (13) by showing that it satis\ufb01es that FP equation. According to (7), the \ufb01rst variation of E1 and E2 is calculated as \u03b4E1 \u03b4\u00b5 = \u2212U, \u03b4E2 \u03b4\u00b5 = log \u00b5 + 1 . (14) Substituting (14) into (7) recovers the FP equation (2) for the It\u00b4 o diffusion (12). 4 PARTICLE OPTIMIZATION An ef\ufb01cient way to solve the generally infeasible WGF formula (7) is to adopt numerical methods with particle approximation. With a little abuse of notation but for conciseness, we do not distinguish subscripts t and \u2113for the particle \u03b8, i.e., \u03b8t denotes the continuous-time version of the particle, while \u03b8\u2113denotes the discrete-time version. We develop several techniques to approximate different types of WGF for SG-MCMC and SVGD. In particle approximation, the continuous density \u00b5t is approximated by a set of M particles (\u03b8(i) t )M i=1 that evolve over time t with weights (mi)M i=1 such that PM i=1 mi = 1, i.e., \u00b5t \u2248PM i=1 mi\u03b4(\u03b8(i) t ), where \u03b4(\u03b8(i) t ) = 1 when \u03b8 = \u03b8(i) t and 0 otherwise. Typically m\u2032 is are chosen at the beginning and \ufb01xed over time, thus we assume mi = 1 M and rewrite \u00b5t \u2248 1 M PM i=1 \u03b4(\u03b8(i) t ) in the following for simplicity. We investigate two types of particleapproximation methods in the following, discrete gradient \ufb02ows and by blob methods. Particle approximation by discrete gradient \ufb02ows Denote Ps(Rr) be the space of probability measures with \ufb01nite 2nd-order moments. De\ufb01ne the following optimization problem with stepsize h: Jh(\u00b5) \u225carg min \u03bd\u2208Ps(Rd) \u001a 1 2hW 2 2 (\u00b5, \u03bd) + E(\u03bd) \u001b . (15) A discrete gradient \ufb02ow of the continuous one in (7) up to time T is the composition of a sequence of the solutions (\u02dc \u00b5\u2113)T/h \u2113=1 of (15), i.e., \u02dc \u00b5\u2113\u225cJh(\u02dc \u00b5\u2113\u22121) = Jh(Jh(\u00b7 \u00b7 \u00b7 \u00b50)) \u225cJ\u2113 h\u00b50 . (16) One can show that when h \u21920, the discrete gradient \ufb02ow (16) converges to the true \ufb02ow (7) for all \u2113. 
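The Euclidean shadow of the scheme (15)-(16) is the classical proximal-point (minimizing-movement) iteration, in which the Wasserstein term degenerates to a squared distance. The sketch below is meant only to build intuition for composing the steps $J_h$; the quadratic energy and step size are our own choices, and the measure-valued version is of course not implemented this way.

```python
import numpy as np
from scipy.optimize import minimize

def prox_step(theta_prev, E, h):
    """One Euclidean analogue of (15): argmin_theta (1/2h)||theta - theta_prev||^2 + E(theta)."""
    obj = lambda th: 0.5 / h * np.sum((th - theta_prev) ** 2) + E(th)
    return minimize(obj, theta_prev).x

E = lambda th: 0.5 * np.sum(th ** 2)   # a convex energy with minimizer at the origin
theta, h = np.array([4.0, -3.0]), 0.5
for _ in range(20):                    # composing steps, as in (16), tracks d(theta)/dt = -grad E
    theta = prox_step(theta, E, h)
print(theta)                           # approaches the minimizer [0, 0]
```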
Speci\ufb01cally, let \u2202E(\u00b5) be the set of Wasserstein subdifferential of E at \u00b5, i.e., \u03be \u2208\u2202E(\u00b5) if \u2202t\u00b5 = \u03be is satis\ufb01ed. De\ufb01ne |\u2202E|(\u00b5) = min{\u2225\u03be\u2225L2(\u00b5) : \u03be \u2208\u2202E(\u00b5)} to be the minimum norm of the elements in \u2202E(\u00b5). We have Lemma 3 (Craig (2014)) Assume E is proper, coercive and lower semicontinuous (specify in Section B of the Supplementary Material (SM)). For an \u00b50 and t \u22650, as T h \u2192\u221e, the discrete gradient sequence \u02dc \u00b5T/h \u225cJT/h h \u00b50 converge uniformly in t to a compact subset of [0, +\u221e), and W 2 2 (\u02dc \u00b5T/h, \u00b5T ) \u2264 \u221a 3|\u2202E|(\u00b5) \u221a Th. Lemma 3 suggests the discrete gradient \ufb02ow can approximate the original WGF arbitrarily well if a small enough stepsize h is adopted. Consequently, one solves (16) through a sequence of optimization procedures to update the particles. We will derive a particle-approximation method for the W2 term in (15), which allows us to solve SG-MCMC ef\ufb01ciently. However, this technique is not applicable to SVGD, as we neither have an explicit form of the energy functional in (10) when adopting the W2 metric, nor have an explicit form for the metric WH in (11) when adopting the KL-divergence as the energy functional. Fortunately, this can be solved by the second approximation method called blob methods. Particle approximation by blob methods The name of blob methods comes from the classical \ufb02uids literature, where instead of evolving the density in (7), one evolves all particles on a grid with time-spacing h (Carrillo et al., 2017). Speci\ufb01cally, note the function vt in (7) represents velocity of particles via transportation map T , thus solving a WGF is equivalent to evolving the particles along their velocity in each iteration. Formally, one can prove Proposition 4 (Craig and Bertozzi (2016)) Let \u00b50 \u2248 1 M PM i=1 \u03b4(\u03b8(i) 0 ). Assume vt in (7) is well-de\ufb01ned and continuous w.r.t.each \u03b8(i) t at time t. Then solving the PDE (7) reduces to solving a system of ordinary differential equations for the locations of the Dirac masses: d\u03b8(i) t /dt = \u2212vt(\u03b8(i) t ) . (17) Proposition 4 suggests evolving each particle along the directions de\ufb01ned by vt, eliminating the requirement to know an explicit form of the energy functional. In the following, we apply the above particle-optimization techniques to derive algorithms for SVGD and SG-MCMC. \f4.1 A particle-optimization algorithm for SVGD As mentioned above, discrete-gradient-\ufb02ow approximation does not apply to SVGD. We thus rely on the blob method. From Section 3.1, vt in SVGD is de\ufb01ned as vt(\u03b8) = (W \u2217\u00b5t)(\u03b8). When \u00b5t(\u03b8) is approximated by particles, vt(\u03b8(i) t ) is simpli\ufb01ed as: vt(\u03b8(i) t ) = 1 M M X j=1 W(\u03b8(i) t , \u03b8(j) t ) . As a result, with the de\ufb01nition of W in (9), updating {\u03b8(i) t } by time discretizing (17) recovers the update equations for standard SVGD in (6). 4.2 Particle-optimization algorithms for SG-MCMC Both the discrete-gradient-\ufb02ow and the blob methods can be applied for SG-MCMC, which are detailed below. Particle optimization with discrete gradient \ufb02ows We \ufb01rst specify Lemma 3 in the case of SG-MCMC in Lemma 5, which is known as the Jordan-KinderlehrerOtto scheme (Jordan et al., 1998). Lemma 5 (Jordan et al. 
(1998)) Assume that p(\u03b8t| X) \u2264 C1 is in\ufb01nitely differentiable, and \u2225\u2207\u03b8 log p(\u03b8| X)\u2225 \u2264 C2 (1 + C1 \u2212log p(\u03b8| X)) (\u2200\u03b8) for some constants {C1, C2}. Let T = hK with K the number of iterations, \u02dc \u00b50 be an arbitrary distribution with same support as p(\u03b8| X), and {\u02dc \u00b5k}K k=1 be the solution of the functional optimization problem: \u02dc \u00b5k = arg min \u00b5\u2208Ps(Rr) KL (\u00b5\u2225p) + 1 2hW 2 2 (\u02dc \u00b5k\u22121, \u00b5) . (18) Then \u02dc \u00b5K converges to \u00b5T in the limit of h \u21920, i.e., limh\u21920 \u02dc \u00b5K = \u00b5T , where \u00b5T is the solution of the FP equation (2) at time T. According to Lemma 5, it is apparent that SG-MCMC can be implemented by iteratively solving the optimization problem in (18). However, particle approximations for both terms in (18) are challenging. In the following, we develop ef\ufb01cient techniques to solve the problem. First, rewrite the optimization problem in (18) as min \u00b5\u2208Ps(Rr) \u2212E\u00b5[log p(\u03b8| X)] | {z } F1 + E\u00b5[log \u00b5] + 1 2hW 2 2 (\u02dc \u00b5k\u22121, \u00b5) | {z } F2 We aim at deriving gradient formulas for both the F1 and F2 terms under a particle approximation in order to perform gradient descent for the particles. Let \u00b5 \u2248 1 M PM i=1 \u03b4(\u03b8(i)). The gradient of F1 is easily approximated as \u2202F1 \u2202\u03b8(i) \u2248\u2212\u2207\u03b8(i) log p(\u03b8(i)| X) . (19) To approximate the gradient for F2, let pij denote the joint distribution of the particle-pair (\u03b8(i), \u03b8(j) k\u22121). Note E\u00b5[log \u00b5] is minimized when the particles {\u03b8(i)} are uniformly distributed. In other words, the marginal distribution vector (P j pij)i is a uniform distribution. Combining E\u00b5[log \u00b5] with the de\ufb01nition of W2, calculating F2 is equivalent to solving the following optimization problem: P \u225c{pij} = arg min pi,j X i,j pijdij (20) s.t. X j pij = 1 M , X i pij = 1 M , where dij \u225c\u2225\u03b8(i) \u2212\u03b8(j) k\u22121\u22252. We can further enforce the joint distribution {pij} to have maximum entropy by introducing a regularization term Epij[log pij], which is stronger than the regularizer enforced for the marginal distribution above. After introducing Lagrangian multipliers {\u03b1i, \u03b2i} to deal with the constraints in (20), we arrive at the dual problem: maxLD({pij}, {\u03b1i}, {\u03b2i}) = \u03bb X i,j pij log pij + pijdij + X i \u03b1i( X j pij \u22121 M ) + X j \u03b2j( X i pij \u22121 M ) , where \u03bb is the weight for the regularizer. The optimal pij\u2019s can be obtained by applying KKT conditions to set the derivative w.r.t. pij to be zero, ending up with the following form: p\u2217 ij = uie\u2212dij/\u03bbvj , where ui \u225ce\u22121 2 \u2212\u03b1i \u03bb , vj = e\u22121 2 \u2212 \u03b2j \u03bb . As a result, the particle gradients on F2 can be approximated as \u2202F2 \u2202\u03b8(i) \u2248\u2212 P j uivjdije\u2212dij/\u03bb \u2202\u03b8(i) (21) = X j 2uivj(dij \u03bb \u22121)e\u2212dij/\u03bb(\u03b8(i) \u2212\u03b8(j) k\u22121) . Theoretically, we need to adaptively update {ui, vj} as well to ensure the constraints in (20). In practice, however, we use a \ufb01xed scaling factor \u03b3 to approximate uivj for the sake of simplicity. Particle gradients are obtained by combining (19) and (21), which are then used to update the particles {\u03b8(i)} by standard gradient descent. 
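A minimal sketch of the resulting update is given below. It follows (19) and (21), with the fixed scaling factor $\gamma$ standing in for $u_i v_j$ as suggested above, and collapses the inner optimization of (18) to a single gradient step per outer iteration; the one-dimensional Gaussian-mixture target and the values of $\lambda$, $\gamma$ and the step size are our own illustrative choices.

```python
import numpy as np

def grad_log_p(theta):
    """Score of an assumed toy 1-D mixture posterior, 0.5 N(-2, 1) + 0.5 N(2, 1)."""
    comp = np.stack([np.exp(-0.5 * (theta + 2) ** 2), np.exp(-0.5 * (theta - 2) ** 2)])
    w = comp / comp.sum(axis=0)                      # per-component responsibilities
    return -(w[0] * (theta + 2) + w[1] * (theta - 2))

def w_sgld_step(theta, theta_prev, h=0.05, lam=1.0, gamma=0.1):
    """One w-SGLD particle update combining the forces in (19) and (21)."""
    delta = theta[:, None] - theta_prev[None, :]     # theta_i - theta_j^{k-1}
    d = delta ** 2                                   # d_ij = ||theta_i - theta_j^{k-1}||^2
    # eq. (21): attraction when d_ij/lam > 1, repulsion when d_ij/lam < 1
    grad_F2 = (2 * gamma * (d / lam - 1) * np.exp(-d / lam) * delta).sum(axis=1)
    grad_F1 = -grad_log_p(theta)                     # eq. (19)
    return theta - h * (grad_F1 + grad_F2)

rng = np.random.default_rng(1)
particles = rng.normal(0.0, 0.1, size=50)
for _ in range(500):
    particles = w_sgld_step(particles, particles.copy())
print(particles.mean(), particles.std())             # roughly 0 and 2: particles cover both modes
```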
Intuitively, (19) encourages particles move to local modes while (21) regularizes particle interactions. Different from SVGD, our scheme imposes both attractive and repulsive forces for the particles. Speci\ufb01cally, by inspecting (21), we can conclude that: i) When \u03b8(i) is far from a previous particle \u03b8(j) k , i.e., dij \u03bb > 1, \u03b8(i) is pulled close to {\u03b8(j) k } with force proportional to ( dij \u03bb \u22121)e\u2212dij/\u03bb; ii) when \u03b8(i) is close enough to a previous particle \u03b8(j) k , i.e., dij \u03bb < 1, \u03b8(i) is pushed away, preventing it from collapsing to \u03b8(j) k . \fParticle optimization with blob methods The idea of blob methods can also be applied to particle approximation for SG-MCMC, which require the velocity vector \ufb01eld vt. According to (13), this is calculated as: vt(\u03b8) = \u2212\u2207\u03b8 \u03b4(E1+E2) \u03b4\u00b5 = \u2212\u2207\u03b8U \u2212\u2207\u03b8\u00b5/\u00b5. Unfortunately, direct application of particle approximation is infeasible because the term \u2207\u03b8\u00b5 is unde\ufb01ned with discrete \u00b5. To tackle this problem, we adopt the idea in Carrillo et al. (2017) to approximate the energy functional E2 in (13) as: E2 \u2248 R \u00b5(\u03b8) log(\u00b5 \u2217K)(\u03b8)d\u03b8, where K(\u00b7, \u00b7) is another kernel function to smooth out \u00b5. Consequently, based on Carrillo et al. (2017), the velocity vt can be calculated as (details in Section C of the SM): vt(\u03b8) = \u2212\u2207\u03b8U \u2212 n X j=1 \u2207\u03b8(j) t K(\u03b8, \u03b8(j) t )/ X k K(\u03b8(j) t , \u03b8(k) t ) \u2212 n X j=1 \u2207\u03b8(j) t K(\u03b8, \u03b8(j) t )/ n X k=1 K(\u03b8, \u03b8(k) t ) (22) Given vt, particle updates can be obtained by solving (17) numerically as in SVGD. By inspecting the formula of vt in (22), the last two terms both act as repulsive forces. Interestingly, the mechanism is similar to SVGD, but with adaptive force between different particle pairs. 5 THE GENERAL RECIPE Based on the above development, a more general particleoptimization framework is proposed by combining the PDEs of both SG-MCMC and SVGD. As a result, we propose the following PDE to drive evolution of densities \u2202\u00b5t \u2202t = \u2212\u2207\u03b8 \u00b7 (\u00b5tF(\u03b8t)) + \u03bb1\u2207\u03b8 \u00b7 ((W \u2217\u00b5t)\u00b5t) + \u03bb2\u2207\u03b8\u2207\u03b8 : \u0000\u00b5tg(\u03b8t)g\u22a4(\u03b8t) \u0001 , (23) where \u03bb1 and \u03bb2 are two constants. It is easily seen that to ensure the stationary distribution of (23) to be equal to p(\u03b8| X), the following condition must be satis\ufb01ed: \u2207\u03b8\u00b7 (p(\u03b8| X)F(\u03b8)) = \u03bb1\u2207\u03b8 \u00b7 ((W \u2217p(\u03b8| X))p(\u03b8| X)) + \u03bb2\u2207\u03b8\u2207\u03b8 : \u0000p(\u03b8| X)g(\u03b8)g\u22a4(\u03b8) \u0001 (24) There are many feasible choices for the functions and parameters {F(\u03b8), W, g(\u03b8), \u03bb1, \u03bb2} to satisfy (24). However, the veri\ufb01cation procedure might be complicated given the present of a convolutional term in (24). We recommend the following choices for simplicity: \u2022 F(\u03b8) = 1 2U(\u03b8), W = 0, g(\u03b8) = I and \u03bb2 = 1: this reduces to the Wasserstein-based SGLD with particle optimization. Speci\ufb01cally, when the discretegradient-\ufb02ow approximation is adopted, the algorithm is denoted as w-SGLD; whereas when the blob method is adopted, it is denoted as w-SGLD-B. \u2022 F(\u03b8) = 0, g(\u03b8) = 0, W is de\ufb01ned as (9): this reduces to standard SVGD. 
\u2022 F(\u03b8) = 1 2U(\u03b8), g(\u03b8) = I, W is de\ufb01ned as (9), and \u03bb2 = 1: this is the combination of SGLD and SVGD, and is called particle interactive SGLD, denoted as PI-SGLD or \u03c0-SGLD. It is easy to verify that condition (24) is satis\ufb01ed for all the above three particle-optimization algorithms. Furthermore, particle updates are readily developed by applying either the discrete-gradient-\ufb02ow or blob-based methods. 6 RELATED PARTICLE-BASED MCMC METHODS There have been related particle-based MCMC algorithms. Representative methods are sequential Monte Carlo (SMC) (Moral et al., 2006), particle MCMC (PMCMC) (Andrieu et al., 2010) and many variants. In SMC, particles are sample from a proposal distribution, and the corresponding weights are updated by a resampling step. PMCMC extends SMC by sampling from an extended distribution interacted with a MH-rejection step. Compared to our framework, their proposal distributions are typically hard to choose; furthermore, optimality of the particles from both methods can not be guaranteed. Furthermore, the methods are typically much more computationally expensive. Recently, Dai et al. (2016) proposed a particle-based MCMC algorithm by approximating a target distribution with weighted kernel density estimator, which updates particle weights based on likelihoods of the corresponding particles. This approach is theoretically sound but lacks an underlying geometry interpretation. Finally, we note that w-SGLD has been successfully applied to reinforcement learning recently for improved policy optimization (Zhang et al., 2018a). 7 EXPERIMENTS We verify our framework on a set of experiments, including a number of toy experiments and applications to Bayesian sampling of deep neural networks (DNNs). 7.1 Demonstrations Toy Distributions We compare various sampling methods on multi-mode toy examples, i.e., SGLD, SVGD, w-SGLD, w-SGLD-B and \u03c0-SGLD. We aim to sample from four unnormalized 2D densities p(z) \u221dexp{U(z)}, with detailed functional form provided in the SM. We optimize/sample 2000 particles to approximate target distributions. The results are shown in Figure 1. It can be seen from Figure 1 that though SGLD maintains good asymptotic properties, it is inaccurate to approximate distributions with only a few samples; in some case, the \fsamples cannot even cover all the modes. Interestingly, all other particle-optimization-based algorithms successfully \ufb01nd all the modes and \ufb01t the distributions well. wSGLD is good at \ufb01nding modes, but worse at modeling the correct variance due to dif\ufb01culty of controlling the balance between attractive and repulsive forces between particles. w-SGLD-B is better than w-SGLD at modeling the distribution variance, performing similarly to SVGD and \u03c0-SGLD. Even though, we note that w-SGLD is very useful when the number of particles is small, which \ufb01ts a distribution better, as shown in Section E of the SM. Figure 1: Illustration of different algorithms on toy distributions. Each column is a distribution case. 1st row: Ground truth; 2nd row: standard SGLD; 3rd row: w-SGLD; 4th row: w-SGLD-B; 5th row: SVGD; 6th row: \u03c0-SGLD. Bayesian Logistic regression We next compare the three variants of our framework (i.e.SVGD, w-SGLD and w-SGLD-B) on a simple logistic-regression task with quantitative evaluations. We use the same model, data and experimental settings as Liu and Wang (2016a). The Covertype dataset contains 581,012 data points and 54 features. 
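Before turning to the quantitative results, the recipe above can be made concrete. The sketch below combines the Langevin drift and unit-diffusion noise with SVGD-style kernel forces, which is one plausible reading of the $\pi$-SGLD choice $F(\theta) = \frac{1}{2}U(\theta)$, $g(\theta) = I$ with $W$ as in (9); the RBF kernel, the median-bandwidth heuristic, and the standard-Gaussian toy target are our own assumptions rather than the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def score(theta):
    """Assumed toy target: a standard 2-D Gaussian, whose score is simply -theta."""
    return -theta

def pi_sgld_step(theta, eps=5e-2):
    """One pi-SGLD step: SVGD kernel forces plus a Langevin drift-and-noise term."""
    M = len(theta)
    d2 = np.sum((theta[:, None, :] - theta[None, :, :]) ** 2, axis=-1)
    h = np.median(d2) / np.log(M + 1)                 # median bandwidth heuristic
    K = np.exp(-d2 / h)                               # K[i, j] = kappa(theta_i, theta_j)
    repulse = (2.0 / h) * ((theta[:, None, :] - theta[None, :, :]) * K[:, :, None]).sum(axis=1)
    svgd = (K @ score(theta) + repulse) / M           # kernel forces, cf. W in (9)
    drift = 0.5 * score(theta)                        # Langevin drift for F = (1/2) U
    noise = np.sqrt(eps) * rng.standard_normal(theta.shape)   # unit-diffusion noise (g = I)
    return theta + eps * (svgd + drift) + noise

theta = rng.normal(3.0, 0.2, size=(100, 2))           # badly initialized particles
for _ in range(1000):
    theta = pi_sgld_step(theta)
print(theta.mean(axis=0), theta.std(axis=0))          # means near 0, spread near the unit scale
```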
We perform 5 runs for each setting and report the mean of the testing accuracies/log-likelihoods. Figure 2 plots both test accuracies and test log-likelihoods w.r.t. the number of training iterations.

[Figure 2: Test accuracies (left) and log-likelihoods (right) vs. iterations for SVGD, w-SGLD and w-SGLD-B.]

It is clear that while all methods converge to the same accuracy/likelihood level, both w-SGLD and w-SGLD-B converge slightly faster than SVGD. In addition, w-SGLD and w-SGLD-B have similar convergence behaviors, thus we only use w-SGLD in the DNN experiments below.

[Figure 3: Impact of the $W_2^2$ factor $\gamma$ and particle number $M$; one accuracy curve per $M \in \{3, 5, 10, 20, 50\}$ over $\gamma \in \{0, 0.1, 0.2, 0.5, 1\}$, with accuracies in the 0.735-0.760 range.]

Parameter Sensitivity. We now study the role of the hyperparameters in $\pi$-SGLD: the number of particles $M$ and the scaling factor $\gamma$ that replaces the $u_i v_j$ term in (21). We use the same dataset and model as in the above experiment. Figure 3 plots test accuracies under different parameter settings. As expected, the best performance is achieved with an appropriate scale of $W_2^2$. The performance keeps improving as the number of particles increases. Interestingly, the Wasserstein regularization is more important when the number of particles is small, demonstrating its superiority when approximating distributions with very few particles.

7.2 Applications on deep neural networks

We conduct experiments on Bayesian learning of DNNs. Different from traditional optimization for DNNs, we are interested in modeling the weight uncertainty of neural networks, an important topic that has been well explored (Hernández-Lobato and Adams, 2015; Blundell et al., 2015a; Li et al., 2016; Louizos and Welling, 2016). We assign priors to the weights, simple isotropic Gaussian priors in our case, and perform posterior sampling with the proposed particle-optimization-based algorithms, as well as other standard algorithms such as SGLD and SGD. We use the RMSprop optimizer for feed-forward networks (FNNs), and Adam for convolutional neural networks (CNNs) and recurrent neural networks (RNNs). For all methods, we use an RBF kernel $K(\theta, \theta') = \exp(-\|\theta - \theta'\|_2^2 / h)$, with the bandwidth set to $h = \mathrm{med}^2 / \log M$, where med is the median of the pairwise distances between particles. All experiments are conducted on a single TITAN X GPU.

Feed-forward Neural Networks. We perform classification on the standard MNIST dataset. A two-layer model 784-X-X-10 with ReLU activations is used, with X the number of hidden units in each layer. The number of training epochs is set to 100. The test errors are reported in Table 1. Not surprisingly, Bayesian methods generally perform better than their optimization counterparts. The new $\pi$-SGLD, which combines w-SGLD and SVGD, improves both methods with little computational overhead. In addition, w-SGLD seems to perform better than SVGD in this case, partially due to the better asymptotic property mentioned in (Liu, 2017). Furthermore, standard SGLD, which is based on MCMC, obtains higher test errors than the particle-optimization-based algorithms, partially due to the correlated-sample issue discussed in the introduction. See (Blundell et al., 2015b) for details on the other methods in Table 1.

Table 1: Classification error (test error) of FNN on MNIST.
Method        | 400-400 | 800-800
--------------|---------|--------
π-SGLD        | 1.36%   | 1.30%
w-SGLD        | 1.44%   | 1.37%
SVGD          | 1.53%   | 1.40%
SGLD          | 1.64%   | 1.41%
RMSprop       | 1.59%   | 1.43%
RMSspectral   | 1.65%   | 1.56%
SGD           | 1.72%   | 1.47%
BPB, Gaussian | 1.82%   | 1.99%
SGD, dropout  | 1.51%   | 1.33%

Convolutional Neural Networks. We use the CIFAR-10 dataset to test our framework on CNNs. We adopt a CNN with three convolution layers, using 3x3 filters with C64-C128-C256 channels and 2x2 max-pooling after each convolution layer. Our implementation adopts batch normalization, dropout and data augmentation to improve performance. Training losses and test accuracies are presented in Table 2. Consistently, π-SGLD outperforms all other algorithms in terms of test accuracy. ADAM obtains a better training loss but a worse test accuracy, indicating the worse generalization ability of optimization-based methods compared to Bayesian methods.

Table 2: Classification performance of CNN on CIFAR-10.

Method  | Training Loss | Test Accuracy
--------|---------------|--------------
ADAM    | 23.80         | 86.76%
SVGD    | 30.57         | 88.72%
SGLD    | 28.52         | 88.64%
w-SGLD  | 31.26         | 88.80%
π-SGLD  | 25.06         | 89.52%

Recurrent Neural Networks. For RNNs, we run standard language models. Experiments are presented on three publicly available corpora: APNEWS, IMDB and BNC. APNEWS is a collection of Associated Press news articles from 2009 to 2016. IMDB is a set of movie reviews collected by Maas et al. (2011), and BNC (BNC Consortium, 2007) is the written portion of the British National Corpus, which contains excerpts from journals, books, letters, essays, memoranda, news and other types of text. These datasets can be downloaded from Github.

We follow the standard setup of Wang et al. (2017). Specifically, we lowercase all word tokens and filter out word tokens that occur fewer than 10 times. All the datasets are divided into training, development and testing sets. For the language-model setup, we consider a 1-layer LSTM model with 600 hidden units. The sequence length is fixed to 30. To alleviate overfitting, dropout with a rate of 0.4 is used in each LSTM layer. Results in terms of test perplexities are presented in Table 3. Again, we see that π-SGLD performs best among all algorithms, and w-SGLD is slightly better than SVGD, both of which are better than the other algorithms.

Table 3: Perplexity of the language model on three corpora.

Method  | APNEWS | IMDB  | BNC
--------|--------|-------|-------
SGD     | 64.13  | 72.14 | 102.89
SGLD    | 63.01  | 68.12 | 95.13
SVGD    | 61.64  | 69.25 | 94.99
w-SGLD  | 61.22  | 67.41 | 93.68
π-SGLD  | 59.83  | 67.04 | 92.33" }, { "url": "http://arxiv.org/abs/1711.10927v1", "title": "Particle Optimization in Stochastic Gradient MCMC", "abstract": "Stochastic gradient Markov chain Monte Carlo (SG-MCMC) has been increasingly popular in Bayesian learning due to its ability to deal with large data. A standard SG-MCMC algorithm simulates samples from a discretized-time Markov chain to approximate a target distribution. However, the samples are typically highly correlated due to the sequential generation process, an undesired property in SG-MCMC. In contrast, Stein variational gradient descent (SVGD) directly optimizes a set of particles, and it is able to approximate a target distribution with much fewer samples. In this paper, we propose a novel method to directly optimize particles (or samples) in SG-MCMC from scratch. Specifically, we propose efficient methods to solve the corresponding Fokker-Planck equation on the space of probability distributions, whose solution (i.e., a distribution) is approximated by particles.
Through our\nframework, we are able to show connections of SG-MCMC to SVGD, as well as the\nseemly unrelated generative-adversarial-net framework. Under certain\nrelaxations, particle optimization in SG-MCMC can be interpreted as an\nextension of standard SVGD with momentum.", + "authors": "Changyou Chen, Ruiyi Zhang", + "published": "2017-11-29", + "updated": "2017-11-29", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML" + ], + "main_content": "Introduction Bayesian methods have been playing an important role in modern machine learning, especially in an unsupervised-learning setting. When facing with big data, two lines of research directions have been developed to scale up Bayesian methods, e.g., variational-Bayes-based and sampling-based methods. Stochastic gradient Markov chain Monte Carlo (SG-MCMC) is a family of scalable Bayesian learning algorithms designed to ef\ufb01ciently sample from a target distribution such as a posterior distribution [1\u2013 4]. In principle, SG-MCMC generates samples from a Markov chain, which are used to approximate a target distribution. Under a standard setting, samples from SG-MCMC are able to match a target distribution exactly in an in\ufb01nite-sample regime [5, 4]. However, this case never occurs in practice, as only a \ufb01nite amount of samples are available. Although nonasymptotic bounds w.r.t.the number of samples have been investigated [5, 6, 4], there are no theory/algorithms to guide learning an optimal set of \ufb01xed-size samples/particles. This is an undesirable property of SG-MCMC, because given a \ufb01xed number of samples, one often wants ot learn the optimal samples that best approximate a target distribution. A remedy for this issue is to adopt the idea of particle-based sampling methods, where a set of particles (or samples) are initialized from some simple distribution, and they are updated iteratively such that they approximate a target distribution better and better. The updating procedure is usually done by optimizing some objective function. There is not much work in this direction for Bayesian sampling, with an outstanding representative being the Stein variational gradient descent (SVGD) [7]. In SVGD, the update of particles are done by optimizing the KL-divergence between the empiricalparticle distribution and a target distribution, thus the samples can be guaranteed to be optimal in each update. Because of this property, SVGD is found to perform better than SG-MCMC when the number of samples used to approximate a target distribution is limited [7]. arXiv:1711.10927v1 [stat.ML] 29 Nov 2017 \fLittle research has been done on investigating the particle-optimization idea in SG-MCMC. Inspired by SVGD, we develop a similar particle-optimization procedure for SG-MCMC for more ef\ufb01cient sampling. To achieve this goal, we propose a novel technique to directly optimize particles based on a variational reformulation of the corresponding Fokker-Planck equation of an SG-MCMC algorithm, adapted from [8]. In this way, instead of sampling from a Markov chain sequentially, we evolve particles through an optimization procedure, obtaining both optimal particles and faster convergence speed compared to standard SG-MCMC. Furthermore, under some relaxations, we are able to show particle optimization in SG-MCMC can be regarded as an extension of SVGD with momentum. To the best of our knowledge, this is the \ufb01rst time particles can be optimized in SG-MCMC algorithms. 
2 Preliminaries 2.1 Stochastic gradient MCMC Diffusion-based sampling methods Generating random samples from a posterior distribution is a pervasive problem in Bayesian statistics which has many important applications in machine learning. The Markov Chain Monte Carlo method (MCMC), proposed by Metropolis et al. [9], produces unbiased samples from a desired distribution when the density function is known up to a normalizing constant. However, traditional MCMC methods are based on random walk proposals which often lead to highly correlated samples. On the other hand, dynamics-based sampling methods, e.g., Hybrid Monte Carlo (HMC) [10, 11], avoid this high degree of correlation by combining dynamical systems with the Metropolis step. The dynamical system uses information from the gradient of the log density to reduce the random walk effect, and the Metropolis step serves as a correction of the discretization error introduced by the numerical integration of the dynamical systems. In fact, these dynamical systems are derived from a more general mathematical technique called diffusion process (or more speci\ufb01cally, It\u00f3 diffusion) [12]. Speci\ufb01cally, our objective is to generate random samples from a posterior distribution p(\u03b8| X) \u221d p(X |\u03b8)p(\u03b8), where \u03b8 \u2208Rr represents the model parameter, and X \u225c{xi}N i=1 represents the data. The canonical form is p(\u03b8| X) = (1/Z) exp(\u2212U(\u03b8)), where U(\u03b8) = \u2212log p(X |\u03b8) \u2212log p(\u03b8) \u225c\u2212 N X i=1 log p(xi |\u03b8) \u2212log p(\u03b8) (1) is referred to as the potential energy based on an i.i.d. assumption of the model, and Z is the normalizing constant. In general, the posterior distribution can be corresponding to the (marginal) stationary distribution of a (continuous-time) It\u00f3 diffusion, de\ufb01ned as a stochastic differential equation of the form: d\u0398t = F(\u0398t)dt + g(\u0398t)dWt , (2) where t is the time index; \u0398t \u2208Rp represents the full variables in a dynamical system, and \u0398t \u2287\u03b8t (thus p \u2265r) is potentially an augmentation of model parameter \u03b8; Wt \u2208Rp is pdimensional Brownian motion. Functions F : Rp \u2192Rp and g : Rp \u2192Rp \u00d7 p are assumed to satisfy the Lipschitz continuity condition [13]. By Fokker-Planck equation (or the forward Kolmogorov equation) [14, 15], when appropriately designing the diffusion-coef\ufb01cient functions F(\u00b7) and g(\u00b7), the stationary distribution of the corresponding It\u00f3 diffusion equals the posterior distribution of interest, p(\u03b8| X). For example, the 1st-order Langevin dynamic de\ufb01nes \u0398 = \u03b8, and F(\u0398t) = \u2212\u2207\u03b8U(\u03b8), g(\u0398t) = \u221a 2 Ir; the 2nd-order Langevin diffusion de\ufb01nes \u0398 = (\u03b8, q), and F(\u0398t) = \u0010 q \u2212B q \u2212\u2207\u03b8U(\u03b8) \u0011 , g(\u0398t) = \u221a 2B \u0010 0 0 0 In \u0011 for a scalar B > 0; q is an auxiliary variable known as the momentum [2, 3]. Denoting the distribution of \u0398t as \u03c1t, it is well known [15] that \u03c1t is characterized by the FokkerPlanck (FP) equation: \u2202\u03c1t \u2202t = \u2212\u2207\u0398 \u00b7 (\u03c1tF(\u0398t)) + \u2207\u0398 \u2207\u0398 : \u0000\u03c1tg(\u0398t)g\u22a4(\u0398t) \u0001 , (3) where a \u00b7 b \u225ca\u22a4b for vectors a and b, A : B \u225ctrace(A\u22a4B) for matrices A and B. The FP equation is the key to develop our particle-optimization framework in SG-MCMC. 
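As a quick sanity check on the link between (2) and (3), the 1st-order Langevin case can be simulated with an Euler-Maruyama discretization and the samples compared against the target. This is a minimal sketch with an assumed double-well potential, not an experiment from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_U(theta):
    """Gradient of an assumed double-well potential U(theta) = (theta^2 - 1)^2."""
    return 4.0 * theta * (theta ** 2 - 1.0)

h, n_steps = 1e-3, 200_000
theta, samples = 0.0, []
for step in range(n_steps):
    # Euler-Maruyama for d(theta) = -grad U(theta) dt + sqrt(2) dW
    theta += -grad_U(theta) * h + np.sqrt(2.0 * h) * rng.standard_normal()
    if step >= n_steps // 2:                 # discard burn-in
        samples.append(theta)
samples = np.array(samples)
print(samples.mean(), np.abs(samples).mean())   # ~0 by symmetry; mass concentrates near +-1
```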
2 \fStochastic gradient MCMC SG-MCMC algorithms are discretized numerical approximations of the It\u00f3 diffusions. They mitigate the slow mixing and non-scalability issues encountered by traditional MCMC algorithms by i) adopting gradient information of the posterior distribution, and ii) using minibatches of the data in each iteration of the algorithm to generate samples. To make the algorithms scalable in a big-data setting, three developments will be implemented based on the It\u00f3 diffusion: i) de\ufb01ne appropriate functions F and g in the It\u00f3-diffusion formula so that the (marginal) stationary distributions coincide with the target posterior distribution p(\u03b8| X); ii) replace F or g with unbiased stochastic approximations to reduce the computational complexity, e.g., approximating F with a random subset of the data instead of using the full data. For example, in the 1st-order Langevin dynamics, \u2207\u03b8U(\u03b8) could be approximated by an unbiased estimator with a subset of data: \u2207\u03b8 \u02dc U(\u03b8) \u225c\u2207log p(\u03b8) + N n n X i=1 \u2207\u03b8 log p(x\u03c0i |\u03b8) (4) where \u03c0 is a size-n random subset of {1, 2, \u00b7 \u00b7 \u00b7 , N}, leading to the \ufb01rst SG-MCMC algorithm in machine learning \u2013 stochastic gradient Langevin dynamics (SGLD) [1]; and iii) solve the generally intractable continuous-time It\u00f4 diffusions with a numerical method, e.g., the Euler method [4]. For example, this leads to the following update in SGLD: \u03b8(\u2113) = \u03b8(\u2113\u22121) \u2212\u2207\u03b8 \u02dc U(\u03b8(\u2113\u22121))h + \u221a 2h \u03b4\u2113, where h means the stepsize, \u2113indexes the samples, \u03b4\u2113\u223cN(0, I) is a random sample from an isotropic normal distribution. After running the algorithm for L steps, the collection of samples {\u03b8(\u2113)}L \u2113=1, which are collected from a Markov chain, are used to approximate the unknown posterior distribution 1 Z e\u2212U(\u03b8). 2.2 Stein variational gradient descent Different from SG-MCMC, SVGD initializes a set of particles and iteratively updates them so that the empirical particle distribution approximates the posterior distribution. Speci\ufb01cally, considers a set of particles {\u03b8i}M i=1 drawn from distribution q. SVGD tries to update these particles by doing gradient descent on the space of probability distributions via \u03b8i \u2190\u03b8i + \u03f5\u03c6(\u03b8i), \u03c6 = arg max \u03c6\u2208F \u001a \u2202 \u2202\u03f5KL(q[\u03f5\u03c6]||p) \u001b , (5) where \u03c6 is a function perturbation direction chosen to minimize the KL divergence between the updated empirical distribution q[\u03f5\u03c6] and the posterior p(\u03b8| X), p for short. Since KL(q\u2225p) is convex in q, global optimum of q = p can be guaranteed. SVGD considers F as the unit ball of a vector-valued reproducing kernel Hilbert space (RKHS) H associated with a kernel \u03ba(\u03b8, \u03b8\u2032). In such as setting, it was shown in [16] that: \u2212\u2202 \u2202\u03f5KL(q[\u03f5\u03c6]\u2225p)|\u03f5=0 = E\u03b8\u223cq[trace(\u0393p\u03c6(\u03b8))], (6) with \u0393p\u03c6(\u03b8) \u225c\u2207\u03b8 log p(\u03b8| X)\u22a4\u03c6(\u03b8) + \u2207\u03b8 \u00b7 \u03c6(\u03b8), where \u0393p is called the Stein operator. 
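The defining property of the Stein operator is that its expectation vanishes under the target, i.e., E_{θ∼p}[trace(Γ_p φ(θ))] = 0 for suitably decaying φ, which is why the left-hand side of (6) measures the mismatch between q and p. Below is a minimal Monte Carlo check of this identity; the one-dimensional target and test function are our own toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target p = N(0, 1), so grad log p(theta) = -theta; test function phi(theta) = sin(theta).
theta = rng.standard_normal(1_000_000)
stein = -theta * np.sin(theta) + np.cos(theta)   # grad log p * phi + d(phi)/d(theta)
print(stein.mean())                              # ~0: Stein's identity E_p[Gamma_p phi] = 0

# Under a mismatched q = N(1, 1) the same expectation is no longer zero.
theta_q = 1.0 + rng.standard_normal(1_000_000)
print((-theta_q * np.sin(theta_q) + np.cos(theta_q)).mean())   # visibly nonzero (~ -0.51)
```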
Assuming that the update function \u03c6(\u03b8) is in a RKHS with kernel \u03ba(\u00b7, \u00b7), it was shown in [16] that (6) is maximized with: \u03c6(\u03b8) = E\u03b8\u223cq[\u03ba(\u03b8, \u03b8\u2032)\u2207\u03b8 log p(\u03b8| X) + \u2207\u03b8\u03ba(\u03b8, \u03b8\u2032)]. (7) When approximating the expectation E\u03b8\u223cq[\u00b7] with empirical particle distribution, we arrive at the following updates for the particles at the \u2113-th iteration: \u03b8(\u2113) i = \u03b8(\u2113\u22121) i + \u03f5 M M X j=1 h \u03ba(\u03b8(\u2113\u22121) j , \u03b8(\u2113\u22121) i )\u2207\u03b8j log p(\u03b8(\u2113\u22121) j | X) + \u2207\u03b8j\u03ba(\u03b8(\u2113\u22121) j , \u03b8(\u2113\u22121) i ) i . (8) SVGD applies the updates in (8) repeatedly, and the samples move closer to the target distribution p in each iteration. 3 \f2.3 Comparing SG-MCMC with SVGD SG-MCMC is a Markov-chain-based sampling methods, in the sense that samples are generated from a Markov chain, with potentially highly correlated samples. Furthermore, it often requires a large number of samples in order to approximate a target distribution reasonably well [4]. In contrast, SVGD directly updates the particles to their optimum guided by an objective function, thus requires much less samples to approximate a target distribution. On the other hand, SVGD has been explained as gradient \ufb02ows whose gradient operator is de\ufb01ned on the RKHS [17]; whereas SG-MCMC are \ufb02ows with \ufb02ow operator de\ufb01ned on the L2 space the functional space that is square integrable. Since RKHS is smaller than L2, SG-MCMC can potentially obtain better asymptotic properties than SVGD in theory [17]. The above arguments motivate us to combine goods from both sides, i.e., we aim to developed a particle-based SG-MCMC algorithm similar to what SVGD does. 3 Particle Optimization in SG-MCMC To develop our particle-optimization framework, we \ufb01rst introduce the following lemma adapted from [18], viewing SG-MCMC from an optimization perspective. Lemma 1 Assume that p(\u03b8t| X) \u2264C1 is in\ufb01nitely differentiable, and \u2225\u2207\u03b8 log p(\u03b8| X)\u2225\u2264 C2 (1 + C1 \u2212log p(\u03b8| X)) (\u2200\u03b8) for some constants {C1, C2}. Let T = \u03f5K with K being an integer, \u02dc \u03c10 is an arbitrary distribution with same support as p(\u03b8| X), and {\u02dc \u03c1k}K k=1 be the solution of the functional optimization problem: \u02dc \u03c1k = arg min \u03c1\u2208K KL (\u03c1\u2225p(\u03b8| X)) + 1 2hW 2 2 (\u02dc \u03c1k\u22121, \u03c1) , (9) where W 2 2 (\u00b51, \u00b52) \u225cinfp\u2208P(\u00b51,\u00b52) R \u2225x \u2212y\u22252 2 p(d x, d y), W2 (\u00b51, \u00b52) is the 2nd-order Wasserstein distance, with P(\u00b51, \u00b52) being the space of joint distributions of {\u00b51, \u00b52}; K is the space of probability distributions with \ufb01nite 2nd-order moment. Then \u02dc \u03c1K converges to \u03c1T in the limit of h \u21920, i.e., limh\u21920 \u02dc \u03c1K = \u03c1T , where \u03c1T is the solution of the FP equation (3) at time T. Lemma 1 reveals an interesting way to compute \u03c1T via a sequence of functional optimization problems. By comparing it with the objective of SVGD, which minimizes the KL-divergence between \u03c1k and p(\u03b8| X), at each sub-optimization-problem in Lemma 1, it minimizes the KL-divergence, plus a regularization term as the Wasserstein distance between \u02dc \u03c1k\u22121 and \u02dc \u03c1k. 
The extra Wassersteindistance term arises naturally due to the fact that the corresponding diffusion is a gradient \ufb02ow equipped with a geometric associated with the Wasserstein distance [19]. From another point of view, it is known that the Wasserstein distance is a better metric for probability distributions than the KL-divergence, especially in the case of non-overlapping domains [20, 21]. According to Lemma 1, it is now apparent that SG-MCMC can be achieved by alternatively solving the optimization problem in (9) for each iteration. 3.1 Optimizing on the space of probability distributions Our idea of particle-optimization is inspired by Lemma 1, in that we can obtain the optimal distribution \u02dc \u03c1k for each iteration (which will be approximated by particles) by optimizing (9). Consequently, instead of doing simulation based on the original It\u00f3 diffusion, we propose to directly solve (9) on the space of probability distributions K. As will be shown, this allows us to derive algorithms which directly optimize particles in SG-MCMC. However, this also bring challenges for the optimization, as the probability-distribution space K is too \ufb02exible to derive exact solutions. In the following, we propose techniques to approximate the corresponding terms in (9). We denote \u03b8k as a sample from \u02dc \u03c1k, i.e., \u03b8k \u223c\u02dc \u03c1k. 3.1.1 Approximating the KL-divergence Given \u02dc \u03c1k\u22121, the solution \u02dc \u03c1k of (9) can be considered as an unknown transformation G from \u02dc \u03c1k\u22121 to \u02dc \u03c1k, i.e., \u02dc \u03c1k = G(\u02dc \u03c1k\u22121) , (10) 4 \funder the constraint that \u02dc \u03c1k still lies in K after the transformation. Directly solving the unknown transformation is challenging, we propose two methods to solve it approximately, detailed below. Optimizing G with adversarial learning Optimizing the KL divergence with an unknown transformation G is generally infeasible. Instead, we approximate it with the Jensen-Shanon divergence, which appears to have the same optimality solution, thus they are equivalent. Formally, we \ufb01rst introduce the following lemma. Lemma 2 Let p1 and p2 be probability distributions on (X, \u03a3) with the same support. Then KL divergence KL(p1\u2225p2) is equivalent to the Jensen-Shanon divergence (JSD) JSD(p1\u2225p2) in the sense that they are both convex in (p1, p2), and achieve the same minimum value of zero at p1 = p2. Based on Lemma 2, we can replace the KL term in (9) with the JSD, resulting in \u02dc \u03c1k = arg min \u03c1\u2208K JSD (\u03c1\u2225p(\u03b8| X)) + \u03bb 2hW 2 2 (\u02dc \u03c1k\u22121, \u03c1) , (11) where \u03bb > 0 is introduced to balance the difference between the scales of KL and JSD. It is well known that JSD is the metric for measuring the distance between two probability distributions in generative adversarial networks (GANs) [22]. According to the properties of GAN, it is easy to see that the unknown transformation G is equivalent to the generator network in GANs. The only difference is that in our case, the latent space of GAN will be the same as the data space, which does not impact the learning algorithm. Consequently, we can update the transformation G in each iteration by running a GAN update on its generator, which are then used to generate samples from \u02dc \u03c1k. Optimizing G via kernelized Stein operator Optimizing G with adversarial learning described above brings us a little computation overhead to update the transformation. 
Here we propose a more ef\ufb01cient method based on the kernelized Stein operator [7]. We \ufb01rst introduce a theoretical result from [7] in Lemma 3. Lemma 3 ([7]) Let H denotes the reproducing kernel Hilbert space (RKHS), and Hr the space of vector functions f = [f1, \u00b7 \u00b7 \u00b7 , fr] with fi \u2208H. Let G(\u03b8) = \u03b8 + f(\u03b8), where f \u2208Hr. Denote \u03c1 to be the density of \u02dc \u03b8 = G(\u03b8), then we have \u2207fKL(\u03c1\u2225p(\u03b8| X)) f\u2208Hr |f=0 = \u2212\u03c6\u2217 \u03c1,p(\u03b8) , where \u03c6\u2217 \u03c1,p(\u03b8) = E\u03b8\u223c\u03c1 [k(\u03b8, \u00b7)\u2207\u03b8 log p(\u03b8| X) + \u2207\u03b8k(\u03b8, \u00b7)]. Lemma 3 essentially says that the functional gradient of the KL divergence is \u2212\u03c6\u2217 q,p(\u03b8), when the transformation is in the form of G(\u03b8) = \u03b8 + f(\u03b8) with f restricted to Hr. The result seems to be applicable in our problem (9) except that we require fi to be in a large space L2 instead of H. To compromise this, we propose to inject noise into the functional gradient, leading to the following approximation: \u2207fKL(\u03c1\u2225p(\u03b8| X)) f\u2208Lr |f=0 \u2248\u2212\u03c6\u2217 \u03c1,p(\u03b8) + \u03c3 \u03b4 , (12) where Lr 2 \u225cL2 \u00d7 \u00b7 \u00b7 \u00b7 \u00d7 L2, \u03b4 \u223cN(0, I), and \u03c3 controls the variance of the injected noise, which is typically decreasing in the algorithm to ensure convergence. Note this resembles stochastic gradient descent, where full gradients are replaced with noisy gradients, thus convergence can still be guaranteed. 3.1.2 Approximating the Wasserstein distance In order to calculate the Wasserstein term on the RHS of (9), we adapt results from optimal transport theory [23, 24] to rewrite W 2 2 (\u02dc \u03c1k\u22121, \u03c1) as W 2 2 (\u02dc \u03c1k\u22121, \u03c1) (13) =E\u02dc \u03c1k\u22121 \u2225\u03b8t\u22121\u22252 + E\u03c1 \u2225\u03b8\u22252 + 2 sup \u03c8(\u00b7) convex \u2212E\u02dc \u03c1k\u22121 [\u03c8(\u03b8t\u22121)] \u2212E\u03c1 [\u03c8\u2217(\u03b8)] , 5 \fwhere \u03c8\u2217(\u03b8) \u225csupv \u0000vT \u03b8 \u2212\u03c8(v) \u0001 is the convex-conjugate of the function \u03c8. Optimizing W 2 2 (\u02dc \u03c1k\u22121, \u03c1) is in general infeasible due to the need to search on the space of convex functions. However, we can approximate it by restricting \u03c8 on some nice convex functions. We describe some preliminary results below. Restricting \u03c8 to be quadratic We can de\ufb01ned \u03c8 to be in the form of \u03c8(\u03b8) \u225c1 2\u27e8\u03b8, A \u03b8\u27e9+ \u27e8b, \u03b8\u27e9+ c with parameters {A, b, c}. In this case, the convex-conjugate is in a nice form of \u03c8\u2217(\u03b8) \u225c1 2\u27e8\u03b8 \u2212b, A\u22121(\u03b8 \u2212b)\u27e9\u2212c . Substituting these forms into the \u201csup\u201d part and maximizing w.r.t.{A, b, c} by setting the derivatives to zero, we have b = \u03b8 \u2212A \u03b8t\u22121 Substituting the above formula into (13) and simplifying, we have W 2 2 (\u02dc \u03c1k\u22121, \u03c1) = E\u02dc \u03c1k\u22121 \u2225\u03b8t\u22121\u22252 + E\u03c1 \u2225\u03b8\u22252 \u22122E\u02dc \u03c1k\u22121,\u03c1 \u0002 \u03b8T t\u22121\u03b8 \u0003 = E\u02dc \u03c1k\u22121,\u03c1 (\u03b8 \u2212\u03b8t\u22121)2 . (14) Restricting \u03c8 to other forms It is also interesting to parameterize \u03c8 with other convex functions. We are currently investigating on this direction. 
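Under the quadratic restriction, (14) says the Wasserstein regularizer reduces to the expected squared displacement between coupled samples. The sketch below evaluates this term and its per-particle gradient (the (1/h)-scaled term appearing later in (15)) under an index-matched pairing of particles, which is our reading of how the term is used in a particle implementation.

```python
import numpy as np

def w2_quadratic(theta_cur, theta_prev):
    """Eq. (14) under the quadratic-psi restriction: E[(theta - theta_{t-1})^2],
    approximated with index-matched particle pairs."""
    return np.mean(np.sum((theta_cur - theta_prev) ** 2, axis=1))

def w2_quadratic_grad(theta_cur, theta_prev):
    """Per-particle gradient of the approximation above."""
    return 2.0 * (theta_cur - theta_prev) / len(theta_cur)

rng = np.random.default_rng(0)
prev = rng.normal(0.0, 1.0, size=(100, 2))
cur = prev + 0.1 * rng.normal(size=(100, 2))      # a small transport step
print(w2_quadratic(cur, prev))                    # expected squared displacement, ~0.02 here
```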
3.2 Particle optimization

The above sections describe how to optimize the distribution $\tilde{\rho}_k$ directly from (9). In practice, one must adopt some representation of $\tilde{\rho}_k$ in order to perform the optimization. A simple representation uses particles for approximation, i.e., $\tilde{\rho}_k \approx \frac{1}{M}\sum_{i=1}^M \delta_{\theta_i^{(k)}}$, where $\delta_\theta$ is a point mass at $\theta$ and $\{\theta_i^{(k)}\}$ is the set of particles at the $k$-th iteration. Denote our objective by $F(\{\theta_i^{(\ell)}\})$, where $\ell$ indexes the iteration. We can update the particles by calculating their gradients and performing gradient descent. For example, when adopting the approximations (12) and (14), we have
$$\frac{\partial}{\partial \theta_i^{(\ell)}} F(\{\theta_i^{(\ell)}\}) \approx -\hat{\phi}^*(\theta_i^{(\ell)}) + \sigma_\ell \delta + \frac{1}{h}\left(\theta_i^{(\ell)} - \theta_i^{(\ell-1)}\right), \quad (15)$$
where $\hat{\phi}^*(\theta) = \frac{1}{M}\sum_{i=1}^M \left[ k(\theta_i^{(\ell)}, \theta)\nabla_{\theta_i^{(\ell)}} \log p(\theta_i^{(\ell)}|X) + \nabla_{\theta_i^{(\ell)}} k(\theta_i^{(\ell)}, \theta) \right]$.

SG-MCMC vs. SVGD with Polyak's momentum gradient descent. Let $f(\theta)$ be the objective function to be optimized. Polyak's momentum gradient-descent update [25] is given by
$$\theta^{(\ell)} = \theta^{(\ell-1)} - h\nabla_\theta f(\theta) + \mu\left(\theta^{(\ell-1)} - \theta^{(\ell-2)}\right). \quad (16)$$
To see the relation of particle optimization in SG-MCMC, i.e., (15), to SVGD, first note that because (14) is an upper bound of the squared 2nd-order Wasserstein distance, we should scale it by some constant $\mu \in (0, 1)$ in the implementation to approximate the true $W_2^2(\tilde{\rho}_{k-1}, \rho)$. Based on the gradient formula in (15), the update equation for $\theta_i^{(\ell)}$ then becomes
$$\theta_i^{(\ell)} = \theta_i^{(\ell-1)} - h\left(\hat{\phi}^*(\theta_i^{(\ell-1)}) + \sigma_\ell \delta\right) + \mu\left(\theta_i^{(\ell-1)} - \theta_i^{(\ell-2)}\right),$$
which has the same form as (16), with the gradient replaced by a noisy gradient. Thus particle optimization in SG-MCMC with (15) can be regarded as SVGD with Polyak's momentum.

[Figure 1: Test accuracies (left) and log-likelihoods (right) vs. training iterations for SG-MCMC and SVGD.]

4 Empirical Verification

We test our algorithm against SVGD on a simple logistic-regression task. We use the same model and data as [7]. For SVGD, we adopt the same setting as in [7]. Note that we use the authors' implementation of SVGD [7], which uses Adagrad for optimization; it is thus not strictly standard SVGD (and should perform better). For our particle optimization in SG-MCMC, we simply set the noise $\delta$ to be standard isotropic normal. We scale $W_2^2(\tilde{\rho}_{k-1}, \rho)$ by 0.1, and use (15) to update the particles in SG-MCMC. Figure 1 plots both test accuracies and test log-likelihoods w.r.t. the number of training iterations. It is clear that, without tuning, SG-MCMC already obtains a slightly faster convergence speed than SVGD in terms of both test accuracy and test log-likelihood.
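A toy sketch of the update (15)-(16) used in this experiment is given below. Note that we let $\hat{\phi}^*$ enter with a plus sign so that the particles ascend the log-posterior, matching the descent direction for the KL in Lemma 3 (the signs in the displayed update are ambiguous on this point); the RBF kernel, fixed bandwidth, and noise schedule are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi_star(theta, score):
    """Empirical SVGD direction, the phi-hat term in (15), with a fixed-bandwidth RBF kernel."""
    d2 = np.sum((theta[:, None, :] - theta[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2)
    repulse = 2.0 * ((theta[:, None, :] - theta[None, :, :]) * K[:, :, None]).sum(axis=1)
    return (K @ score(theta) + repulse) / len(theta)

score = lambda th: -th                        # assumed standard-Gaussian target
h, mu, sigma = 0.1, 0.5, 0.1                  # step size, W_2^2 scale (momentum), noise scale
theta = rng.normal(4.0, 0.1, size=(50, 2))
theta_prev = theta.copy()
for step in range(500):
    noise = sigma / (1 + step) * rng.standard_normal(theta.shape)   # decreasing sigma_l
    theta, theta_prev = (theta + h * (phi_star(theta, score) + noise)
                         + mu * (theta - theta_prev)), theta
print(theta.mean(axis=0))                     # drifts from (4, 4) toward the target mean (0, 0)
```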
5" + }, + { + "url": "http://arxiv.org/abs/1709.01180v1", + "title": "A Convergence Analysis for A Class of Practical Variance-Reduction Stochastic Gradient MCMC", + "abstract": "Stochastic gradient Markov Chain Monte Carlo (SG-MCMC) has been developed as\na flexible family of scalable Bayesian sampling algorithms. However, there has\nbeen little theoretical analysis of the impact of minibatch size to the\nalgorithm's convergence rate. In this paper, we prove that under a limited\ncomputational budget/time, a larger minibatch size leads to a faster decrease\nof the mean squared error bound (thus the fastest one corresponds to using full\ngradients), which motivates the necessity of variance reduction in SG-MCMC.\nConsequently, by borrowing ideas from stochastic optimization, we propose a\npractical variance-reduction technique for SG-MCMC, that is efficient in both\ncomputation and storage. We develop theory to prove that our algorithm induces\na faster convergence rate than standard SG-MCMC. A number of large-scale\nexperiments, ranging from Bayesian learning of logistic regression to deep\nneural networks, validate the theory and demonstrate the superiority of the\nproposed variance-reduction SG-MCMC framework.", + "authors": "Changyou Chen, Wenlin Wang, Yizhe Zhang, Qinliang Su, Lawrence Carin", + "published": "2017-09-04", + "updated": "2017-09-04", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML" + ], + "main_content": "Introduction With the increasing size of datasets of interest to machine learning, stochastic gradient Markov Chain Monte Carlo (SGMCMC) has been established as an effective tool for largescale Bayesian learning, with applications in topic modeling (Gan et al. 2015; Liu, Zhu, and Song 2016), matrix factorization (Chen, Fox, and Guestrin 2014; Ding et al. 2014; S \u00b8ims \u00b8ekli et al. 2016), differential privacy (Wang, Fienberg, and Smola 2015), Bayesian optimization (Springenberg et al. 2016) and deep neural networks (Li et al. 2016). Typically, in each iteration of an SG-MCMC algorithm, a minibatch of data is used to generate the next sample, yielding computational ef\ufb01ciency comparable to stochastic optimization. While a large number of SG-MCMC algorithms have been proposed, their optimal convergence rates generally appear to share the same form, and are typically slower than stochastic gradient descent (SGD) (Chen, Ding, and Carin 2015). The impact of stochastic gradient noise comes from a higherorder term (see Lemma 1 below), which was omitted in the analysis of (Chen, Ding, and Carin 2015). In other words, current theoretical analysis for SG-MCMC does not consider the impact of minibatch size (corresponding to stochastic gradient noise), making the underlying convergence theory w.r.t.minibatch size unclear. Recent work by (Dubey et al. 2016) on applying variance reduction in stochastic gradient Langevin dynamics (SGLD) claims to improve the convergence rate of standard SGLD (Welling and Teh 2011; Teh, Thiery, and Vollmer 2016; Vollmer, Zygalakis, and Teh 2016). The theoretical analysis in (Dubey et al. 2016) omits certain aspects of variance reduction in SGLD, that we seek to address here: i) how does the minibatch size (or equivalently the stochastic gradient noise) affect the convergence rate of an SG-MCMC algorithm? and ii) how can one effectively reduce the stochastic gradient noise in SG-MCMC to improve its convergence rate, from both an algorithmic and a theoretical perspective? 
For (i), we provide theoretical results on the convergence rates of SG-MCMC w.r.t.minibatch size. For (ii), we propose a practical variance-reduction technique for SG-MCMC, as well as theory to analyze improvements of the corresponding convergence rates. The resulting SG-MCMC algorithm is referred to as variance-reduction SG-MCMC (vrSG-MCMC). For a clearer description, we \ufb01rst de\ufb01ne notation. In a Bayesian model, our goal is typically to evaluate the posterior average of a test function \u03c6(x), de\ufb01ned as \u00af \u03c6 \u225c R X \u03c6(x)\u03c1(x)d x, where \u03c1(x) is the target posterior distribution with x the possibly augmented model parameters (see Section ). Let {xl}L l=1 be the samples generated from an SG-MCMC algorithm. We use the sample average, \u02c6 \u03c6L \u225c 1 L PL l=1 \u03c6(xl), to approximate \u00af \u03c6. The corresponding bias and mean square error (MSE) are de\ufb01ned as |E\u02c6 \u03c6L \u2212\u00af \u03c6| and E \u0010 \u02c6 \u03c6L \u2212\u00af \u03c6 \u00112 , respectively. In vrSG-MCMC, unbiased estimations of full gradients are used, leading to the same bias bound as standard SG-MCMC (Chen, Ding, and Carin 2015). As a result, we focus here on analyzing the MSE bound for vrSG-MCMC. Speci\ufb01cally, we \ufb01rst analyze how minibatch size affects the MSE convergence rate of standard SG-MCMC, summarized in two cases: i) for a limited computation budget, the optimal MSE bound is achieved when using full gradients in the algorithm; ii) for a large enough computational budget, i.e., in a long-run setting, stochastic gradients with minibatches of size one are preferable. This indicates that stochastic gradient noise hurts SG-MCMC at the beginning arXiv:1709.01180v1 [stat.ML] 4 Sep 2017 \fof the algorithm. While it is computationally infeasible to use full gradients in practice, a remedy to overcome this issue is to use relatively small minibatches with variance reduction techniques to reduce stochastic gradient noise. Consequently, we propose a practical variance-reduction scheme, making SG-MCMC computationally ef\ufb01cient in a big-data setting. Finally, we develop theory to analyze the bene\ufb01t of the proposed variance-reduction technique and empirically show improvements of vrSG-MCMC over standard SG-MCMC algorithms. Preliminaries SG-MCMC is a family of scalable Bayesian sampling algorithms, developed recently to generate approximate samples from a posterior distribution p(\u03b8| D). Here \u03b8 \u2208Rr represents a model parameter vector and D = {d1, \u00b7 \u00b7 \u00b7 , dN} represents the data available to learn the model. In general, SG-MCMC algorithms are discretized numerical approximations of continuous-time It\u02c6 o diffusions (Chen, Ding, and Carin 2015; Ma, Chen, and Fox 2015), which are equipped with stationary distributions coincident with the target posterior distributions. An It\u02c6 o diffusion is written as d xt = F(xt)dt + g(xt)dwt , (1) where x \u2208Rd is the state variable, t is the time index, and wt \u2208Rd is d-dimensional Brownian motion. Typically, x \u2287 \u03b8 is an augmentation of the model parameters, so r \u2264d. Functions F : Rd \u2192Rd and g : Rd \u2192Rd\u00d7d are assumed to satisfy the Lipschitz continuity condition (Ghosh 2011). According to (Ma, Chen, and Fox 2015), all SG-MCMC algorithms can be formulated by de\ufb01ning appropriate functions F and g in (1). 
For example, the stochastic gradient Langevin dynamic (SGLD) model corresponds to x = \u03b8, and F(xt) = \u2212\u2207\u03b8U(\u03b8), g(xt) = \u221a 2 Ir, where U(\u03b8) \u225c\u2212log p(\u03b8) \u2212PN i=1 log p(di |\u03b8) denotes the unnormalized negative log-posterior. Similar formula can be de\ufb01ned for other SG-MCMC algorithms, such as stochastic gradient Hamiltonian Monte Carlo (SGHMC) (Chen, Fox, and Guestrin 2014) and stochastic gradient thermostats (SGNHT) (Ding et al. 2014). An SG-MCMC algorithm is usually developed by numerically solving the corresponding It\u02c6 o diffusion and replacing the full gradient \u2207\u03b8U(\u03b8) with an unbiased estimate from a minibatch of data \u2207\u03b8 \u02dc U(\u03b8) in each iteration. For example, in SGLD, this yields an update equation of \u03b8l = \u03b8l\u22121 \u2212\u2207\u03b8 \u02dc U(\u03b8l\u22121)hl + \u221a2hl\u03b6l for the l-th iteration, where hl is the stepsize, \u03b6l \u223cN(0, Ir). This brings two sources of error into the chain: numerical error (from discretization of the differential equation) and stochastic noise error from use of minibatches. In particular, (Chen, Ding, and Carin 2015) proved the following bias and MSE bounds for general SG-MCMC algorithms: Lemma 1 ((Chen, Ding, and Carin 2015)) Under Assumption 2 in Appendix , the bias and MSE of SG-MCMC with a Kth-order integrator\u2217at time t = hL are bounded \u2217The order characterizes the accuracy of a numerical integrator, e.g., the Euler method is a 1st-order integrator. as: \f \f \fE\u02c6 \u03c6L \u2212\u00af \u03c6 \f \f \f = O \u0012P l \u2225E\u2206Vl\u2225 L + 1 Lh + hK \u0013 E \u0010 \u02c6 \u03c6L \u2212\u00af \u03c6 \u00112 = O 1 L P l E \u2225\u2206Vl\u22252 L + 1 Lh + h2K ! Here \u2206Vl \u225c (L \u2212\u02dc Ll)\u03c6, where L is the in\ufb01nitesimal generator of the It\u02c6 o diffusion (1) de\ufb01ned as Lf(xt) = \u0000F(xt) \u00b7 \u2207x + 1 2 \u0000g(xt)g(xt)T \u0001 :\u2207x\u2207T x \u0001 f(xt), for any compactly supported twice differentiable function f : Rd \u2192R. a \u00b7 b \u225caT b for two vectors a and b, A : B \u225ctr{AT B} for two matrices A and B. \u2225\u00b7 \u2225is de\ufb01ned as the standard operator norm acting on the space of bounded functions, e.g., \u2225f\u2225\u225csupx f(x) for a function f. \u02dc Ll is the same as L except for the substitution of the stochastic gradient \u2207\u02dc Ul(\u03b8) for the full gradient due to the usage of a stochastic gradient in the l-th iteration. By substituting the de\ufb01nition of \u2206Vl and L, typically we have \u2206Vl = (\u2207\u03b8Ul(\u03b8) \u2212\u2207\u03b8 \u02dc Ul(\u03b8))\u00b7\u2207\u03c6. By using an unbiased estimate of the true gradient, the term E\u2206Vl in the bias bound in Lemma 1 vanishes, indicating that stochastic gradients (or equivalently minibatch size) only affect the MSE bound. Consequently, we focus on improving the MSE bound with the proposed variance-reduction SGMCMC framework. Practical Variance-Reduction SG-MCMC We \ufb01rst motivate the necessity of variance reduction in SGMCMC, by analyzing how minibatch size affects the MSE bound. A practical variance reduction scheme is then proposed, which is ef\ufb01cient from both computational and storage perspectives. Comparison with existing variance-reduction SG-MCMC approaches is also highlighted. Previous research has revealed that the convergence of diffusion-based MCMC scales at an order of O(d1/3) w.r.t. dimension d (Durmus et al. 2016). 
Practical Variance-Reduction SG-MCMC

We first motivate the necessity of variance reduction in SG-MCMC by analyzing how minibatch size affects the MSE bound. A practical variance-reduction scheme is then proposed, which is efficient from both computational and storage perspectives. Comparison with existing variance-reduction SG-MCMC approaches is also highlighted. Previous research has revealed that the convergence of diffusion-based MCMC scales at an order of $O(d^{1/3})$ w.r.t. the dimension $d$ (Durmus et al. 2016). For SG-MCMC, we follow the standard analysis (Vollmer, Zygalakis, and Teh 2016) and do not consider the impact of $d$ in our analysis.

The necessity of variance reduction: a theoretical perspective

It is clear from Lemma 1 that the variance of noisy stochastic gradients plays an important role in the MSE bound of an SG-MCMC algorithm. What is unclear is how exactly minibatch size affects the convergence rate. Intuitively, minibatch size plays the following roles in SG-MCMC: i) smaller minibatch sizes introduce larger variance into stochastic gradients; ii) smaller minibatch sizes allow an algorithm to run faster (thus more samples can be obtained in a given amount of computation time). To balance these two effects, in addition to using the standard assumptions for SG-MCMC (which basically require the coefficients of the Itô diffusion to be smooth and bounded; see Assumption 2 in the Appendix), we assume that algorithms with different minibatch sizes all run for a fixed computational time/budget $T$, as stated in Assumption 1.

Assumption 1 For a fair comparison, all SG-MCMC algorithms with different minibatch sizes are assumed to run for a fixed amount of computation time/budget $T$. Further, we assume that $T$ depends linearly on the minibatch size $n$ and the sample size $L$, i.e., $T \propto nL$.

For simplicity, we rewrite the gradient of the log-likelihood for data point $d_i$ in the $l$-th iteration as $\alpha_{li} = \nabla_\theta \log p(d_i \mid \theta_l)$. We first derive the following lemma about the property of $\{\alpha_{li}\}$, which is useful in subsequent developments, e.g., to guarantee a positive bound in Theorem 3 and an improved bound for the proposed vrSG-MCMC (Theorem 6).

Lemma 2 Under Assumption 2, given $\theta_l$ in the $l$-th iteration,
$$\Gamma_l \triangleq \frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\mathbb{E}\big[\alpha_{li}^T\alpha_{lj}\big] - \frac{\sum_{i\neq j}\mathbb{E}\,\alpha_{li}^T\alpha_{lj}}{N(N-1)} \ge 0\,,$$
where the expectation is taken over the randomness of the SG-MCMC algorithm†.

†The same meaning applies to other expectations in the paper unless explicitly specified.

We next generalize Lemma 1 by incorporating the minibatch size $n$ into the MSE bound. The basic idea in our derivation is to associate with each data point $d_i$ a binary random variable $z_i$ indicating whether $d_i$ is included in the current minibatch. These $\{z_i\}$ depend on each other such that $\sum_{i=1}^{N} z_i = n$, guaranteeing minibatches of size $n$. Consequently, the stochastic gradient in the $l$-th iteration can be rewritten as
$$\nabla_\theta\tilde{U}_l(\theta) = -\nabla_\theta\log p(\theta_l) - \frac{N}{n}\sum_{i=1}^{N}\nabla_\theta\log p(d_i \mid \theta_l)\,z_i\,.$$
Substituting this gradient formula into the proof for standard SG-MCMC (Chen, Ding, and Carin 2015) and summing out $\{z_i\}$ results in an alternative MSE bound for SG-MCMC, stated in Theorem 3. In the analysis we assume a 1st-order numerical integrator for simplicity, e.g., the Euler method, though the results generalize easily to $K$th-order integrators.

Theorem 3 Under Assumption 2, let the minibatch size of an SG-MCMC algorithm be $n$, and let $\Gamma_M \triangleq \max_l \Gamma_l$. The finite-time MSE is bounded, for a constant $C$ independent of $\{h, L, n\}$, as:
$$\mathbb{E}\big(\hat{\phi}_L - \bar{\phi}\big)^2 \le C\left(\frac{2(N-n)N^2\Gamma_M}{nL} + \frac{1}{Lh} + h^2\right).$$

Theorem 3 expresses the bound in terms of the minibatch size $n$ and the sample size $L$. Note that in our finite-time setting, $L$ and $N$ are considered constants; consequently, $\Gamma_M$ is also a bounded constant in our analysis.
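The trade-off in Theorem 3 can be inspected numerically. The sketch below plugs arbitrary constants into the bound (so the numbers are only qualitative, since $C$ and $\Gamma_M$ are unknown in practice) and sweeps the minibatch size under the fixed-budget constraint $T \propto nL$ of Assumption 1:

```python
def theorem3_bound(n, T, N, gamma_M, h, C=1.0):
    """C * (2(N-n) N^2 Gamma_M / (nL) + 1/(Lh) + h^2), with L = T/n
    tying the sample size to the budget per Assumption 1."""
    L = T / n
    return C * (2 * (N - n) * N**2 * gamma_M / (n * L) + 1.0 / (L * h) + h**2)

# Sweep minibatch sizes under a fixed budget; all constants are arbitrary.
N, T, gamma_M, h = 1_000, 5e5, 1e-6, 1e-3
for n in (1, 10, 100, 1_000):
    print(f"n = {n:5d}: bound ~ {theorem3_bound(n, T, N, gamma_M, h):.4f}")
```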
To bring in the computational budget $T$, based on Assumption 1, i.e., $T \propto nL$, the MSE bound in Theorem 3 optimized w.r.t. the stepsize $h$ can be written as
$$\mathbb{E}\big(\hat{\phi}_L - \bar{\phi}\big)^2 = O\left(\frac{(N-n)N^2\Gamma_M}{T} + \frac{n^{2/3}}{T^{2/3}}\right).$$
After further optimizing the bound w.r.t. $n$, by setting the derivative of the above MSE bound to zero, the optimal minibatch size can be written as $n = O\left(\frac{8T}{27N^6\Gamma_M^3}\right)$. To guarantee that this value of $n$ is a finite integer, the computational budget $T$ is required to scale at the order of $O(N^6)$ when varying $N$. When considering both $T$ and $N$ as impact factors, the optimal $n$ becomes more interesting, and is summarized in Corollary 4‡.

‡Note we only have $T = C_1 nL$ for some unknown constant $C_1$, i.e., the specific value of $T$ is unknown.

Corollary 4 Under Assumptions 1 and 2, there are three cases of optimal minibatch sizes, each corresponding to a different level of computational budget. 1) When the computational budget is small, e.g., $T < O\big(\frac{27}{8}\Gamma_M^3 N^6\big)$, the optimal MSE bound is decreasing w.r.t. $n$ in the range $[1, N]$; the minimum MSE bound is achieved at $n = N$. 2) When the computational budget is large, e.g., $T > O\big(\frac{27}{8}\Gamma_M^3 N^7\big)$, the optimal MSE bound is increasing w.r.t. $n$ in the range $[1, N]$; the minimum MSE bound is achieved at $n = 1$. 3) When the computational budget lies between the above two cases, the optimal MSE bound first increases and then decreases w.r.t. $n$ in the range $[1, N]$; the optimum is attained either at $n = 1$ or at $n = N$, depending on $(N, T, \Gamma_M)$.

In many machine learning applications the computational budget is limited, placing the algorithm in the first case of Corollary 4, i.e., $T < O\big(\frac{27}{8}\Gamma_M^3 N^6\big)$. According to Corollary 4, processing the full data (i.e., no minibatch) is then required to achieve the optimal MSE bound, which is computationally infeasible when $N$ is large (the very setting that motivated minibatches in the first place). A practical way to overcome this is to use small minibatches and adopt variance-reduction techniques to reduce the stochastic gradient noise.
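With the unknown constants inside the $O(\cdot)$ notation suppressed, Corollary 4 amounts to the following decision rule (a qualitative sketch only; the hidden constants determine the actual thresholds, so this is indicative rather than usable as-is):

```python
def corollary4_regime(T, N, gamma_M):
    """Indicative optimal minibatch size per Corollary 4 (constants suppressed)."""
    small_budget = (27.0 / 8.0) * gamma_M**3 * N**6
    large_budget = (27.0 / 8.0) * gamma_M**3 * N**7
    if T < small_budget:
        return N       # case 1: full gradients are optimal
    if T > large_budget:
        return 1       # case 2: minibatches of size one are optimal
    return None        # case 3: endpoint n=1 or n=N, depending on (N, T, Gamma_M)
```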
A practical variance reduction algorithm

For practical use, we require that a variance-reduction method achieve both computational and storage efficiency. While variance reduction has been studied extensively in stochastic optimization, it is applied much less often in SG-MCMC. In this section we propose the vrSG-MCMC algorithm, a simple extension of the algorithm in (Dubey et al. 2016), but one that is more computationally practical in large-scale applications. A convergence theory is developed in Section .

The proposed vrSG-MCMC is illustrated in Algorithm 1. Similar to stochastic optimization (Schmidt, Le Roux, and Bach 2016), the idea of variance reduction is to balance the gradient noise with a less-noisy old gradient, i.e., a stochastic gradient calculated based on a previous sample and on a larger minibatch than that of the current stochastic gradient, resulting in a less noisy estimate. In each iteration of our algorithm, an unbiased stochastic gradient is obtained by combining these two versions of gradients in an appropriate way (see $g_{l+1}$ in Algorithm 1). Such a construction of stochastic gradients inherits a low variance with theoretical guarantees (detailed in Section ).

In Algorithm 1, the whole parameter $\mathbf{x}$ is decomposed into the model parameter $\theta$ and the remaining algorithm-specific parameter $\tau$, e.g., the momentum. The expression "$\theta \leftarrow \mathbf{x}$" means assigning the corresponding model-parameter components of $\mathbf{x}$ to $\theta$. The old gradient is denoted $\tilde{g}$ and is calculated with a minibatch of size $n_1$; the current stochastic gradient is calculated on a minibatch of size $n_2 < n_1$. We use $\mathbf{x}_{l+1} = \text{NextS}(\mathbf{x}_l, g_{l+1}, h_l)$ to denote a function that generates the next sample $\mathbf{x}_{l+1}$ with an SG-MCMC algorithm, based on the current sample $\mathbf{x}_l$, the input stochastic gradient $g_{l+1}$, and the stepsize $h_l$.

Algorithm 1 Practical Variance-Reduction SG-MCMC
Input: $\bar{\mathbf{x}} = \mathbf{x}_0 = (\theta_0, \tau_0) \in \mathbb{R}^d$; minibatch sizes $(n_1, n_2)$ such that $n_1 > n_2$; update interval $m$; total iterations $L$; stepsizes $\{h_l\}_{l=1}^{L}$
Output: approximate samples $\{\mathbf{x}_l\}_{l=1}^{L}$
for $l = 0$ to $L - 1$ do
    if $(l \bmod m) = 0$ then
        Sample without replacement $\{\pi_i\}_{i=1}^{n_1} \subseteq \{1, \cdots, N\}$;
        $\bar{\mathbf{x}} = \mathbf{x}_l$; $\tilde{\theta}_l \leftarrow \bar{\mathbf{x}}$;
        $\tilde{g} = \frac{N}{n_1}\sum_{i\in\pi}\nabla_\theta \log p(d_i \mid \tilde{\theta}_l)$;
    end if
    $\theta_l \leftarrow \mathbf{x}_l$; $\tilde{\theta}_l \leftarrow \bar{\mathbf{x}}$;
    Sample without replacement $\{\tilde{\pi}_i\}_{i=1}^{n_2} \subseteq \{1, \cdots, N\}$;
    $g_{l+1} = \tilde{g} + \nabla_\theta \log p(\theta_l) + \frac{N}{n_2}\sum_{i\in\tilde{\pi}}\big(\nabla_\theta \log p(d_i \mid \theta_l) - \nabla_\theta \log p(d_i \mid \tilde{\theta}_l)\big)$;
    $\mathbf{x}_{l+1} = \text{NextS}(\mathbf{x}_l, g_{l+1}, h_{l+1})$;
end for
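For concreteness, a compact Python rendering of Algorithm 1 with SGLD as the `NextS` transition might look as follows (a sketch under the same toy interface as before, not the authors' reference implementation):

```python
import numpy as np

def vr_sgld(theta0, data, grad_log_prior, grad_log_lik, n1, n2, m, L, h, rng):
    """Algorithm 1 with NextS = one SGLD step. g estimates the log-posterior
    gradient: an anchor gradient g_tilde (minibatch n1, refreshed every m
    iterations) plus a small-minibatch (n2 < n1) correction, kept unbiased."""
    N = len(data)
    theta, samples = np.array(theta0, dtype=float), []
    for l in range(L):
        if l % m == 0:                                  # refresh the anchor
            pi = rng.choice(N, size=n1, replace=False)
            theta_tilde = theta.copy()
            g_tilde = (N / n1) * sum(grad_log_lik(data[i], theta_tilde)
                                     for i in pi)
        pi2 = rng.choice(N, size=n2, replace=False)
        corr = (N / n2) * sum(grad_log_lik(data[i], theta)
                              - grad_log_lik(data[i], theta_tilde)
                              for i in pi2)
        g = g_tilde + grad_log_prior(theta) + corr      # g_{l+1} in Algorithm 1
        # NextS: SGLD ascends the log-posterior and injects Gaussian noise.
        theta = theta + h * g + np.sqrt(2.0 * h) * rng.standard_normal(theta.shape)
        samples.append(theta.copy())
    return samples
```

Note that the $O(N/n_1 \cdot n_1) = O(n_1)$ cost of the anchor gradient is paid only once every $m$ iterations, so its overhead is amortized; per-iteration work is dominated by the $n_2$-sized correction.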
One should note that existing variance-reduction algorithms, e.g., (Johnson and Zhang 2013), use a similar concept to construct low-variance gradients. However, most algorithms use the whole training data set to compute $\tilde{g}$ in Algorithm 1, which is computationally infeasible in large-scale settings. Moreover, as in stochastic optimization (Reddi et al. 2016; Allen-Zhu and Hazan 2016), instead of using a single parameter sample to compute $\tilde{g}$, similar methods can be adopted to compute $\tilde{g}$ based on an average of old parameter samples. The theoretical analysis can be readily adapted to such cases, and is omitted here for simplicity. More references are discussed in Section .

Comparison with existing variance-reduction SG-MCMC algorithms

The most closely related variance-reduction SG-MCMC algorithm we are aware of is the recent variance-reduction SGLD (SVRG-LD) (Dubey et al. 2016). SVRG-LD shares a similar flavor with our scheme from the algorithmic perspective, except that SVRG-LD uses the whole training data set when calculating the old gradient $\tilde{g}$. As mentioned above, this poses a computational challenge for large-scale learning. Although the problem is mitigated by using a moving-average estimate of the stochastic gradient, that scheme does not match their theory. A more distinctive advantage of vrSG-MCMC over SVRG-LD (Dubey et al. 2016) is in terms of theoretical analysis. Concerning SVRG-LD: i) the authors did not show theoretically in which cases variance reduction is useful in SGLD, and ii) it is not clear from their theory whether SVRG-LD can speed up the convergence rate compared to standard SGLD. Specifically, the MSE of SVRG-LD was shown to be bounded by
$$O\left(\frac{N^2\min\{2\sigma^2,\; m^2(D^2h^2\sigma^2 + hd)\}}{nL} + \frac{1}{Lh} + h^2\right),$$
compared to
$$O\left(\frac{N^2\sigma^2}{nL} + \frac{1}{Lh} + h^2\right)$$
for SGLD, where $(d, D, \sigma)$ are constants. By inspecting the above bounds, it is not clear whether SVRG-LD improves on SGLD, because the two bounds are not directly comparable§. More detailed explanations are provided in Appendix .

Convergence rate

We derive convergence bounds for Algorithm 1 and analyze the improvement of vrSG-MCMC over the corresponding standard SG-MCMC. Using a similar approach as in Section , we first introduce additional binary random variables, $\{b_i\}_{i=1}^{N}$, to indicate which data points are included in calculating the old gradient $\tilde{g}$ in Algorithm 1. This yields the following expression for the stochastic gradient used in the $l$-th iteration:
$$\nabla_\theta\tilde{U}(\theta_l) = \frac{N}{n_2}\sum_{i=1}^{N}\big(\nabla_\theta\log p(d_i \mid \theta_l) - \nabla_\theta\log p(d_i \mid \tilde{\theta}_l)\big)z_i + \frac{N}{n_1}\sum_{i=1}^{N}\nabla_\theta\log p(d_i \mid \tilde{\theta}_l)\,b_i\,.$$
It is easy to verify that this stochastic gradient is an unbiased estimate of the true gradient in the $l$-th iteration (see Appendix ).

To see how Algorithm 1 reduces the variance of stochastic gradients, by Lemma 1 it suffices to study $\Delta V_l$, since the minibatch size only impacts this term. For notational simplicity, analogous to the $\alpha_{li}$ defined in Section , we denote $\boldsymbol{\beta}_{li} \triangleq \nabla_\theta\log p(d_i \mid \tilde{\theta}_l)$, which is similar to $\alpha_{li}$ but evaluated at the old parameter $\tilde{\theta}_l$. Intuitively, since the old gradient $\tilde{g}$ is calculated from $\boldsymbol{\beta}$ to balance the stochastic gradient noise (calculated from $\alpha$), $\alpha$ and $\boldsymbol{\beta}$ are expected to be close to each other. Lemma 5 formalizes this intuition, a key result in proving our main theorem; there we only consider the update interval $m$ and the stepsize $h$ as factors. In the lemma below, following Assumption 1 of (Chen et al. 2016), we further assume the gradient function $\nabla_\theta U(\theta)$ to be Lipschitz.

Lemma 5 Under Assumption 2, and assuming $\nabla_\theta U(\theta)$ is Lipschitz (Assumption 1 in (Chen et al. 2016)), $\alpha_{li}$ and $\boldsymbol{\beta}_{li}$ are close to each other in expectation, i.e., $\mathbb{E}\alpha_{li} = \mathbb{E}\boldsymbol{\beta}_{li} + O(mh)$.

In the Appendix, we further simplify $\mathbb{E}\|\Delta V_l\|^2$ in the MSE bound by decomposing it into several terms. Finally, we arrive at our main theorem for the proposed vrSG-MCMC framework.

Theorem 6 Under the setting of Lemma 5, let $A_M \triangleq \max_l A_l$, where
$$A_l = \left(\frac{N}{n_2} - 1\right)\sum_{ij}\mathbb{E}\,\alpha_{li}^T\alpha_{lj} - 2\,\frac{N(N-n_2)}{n_2(N-1)}\sum_{i}\cdots$$
and assume $\lambda_l > 0$ for all $l$, so that $\lambda_M > 0$.

Note that for a fixed $m$, the term $\frac{mh}{L}$ in the above bound is a high-order term relative to $\frac{1}{Lh}$. As a result, the MSE is bounded by $O\big(\frac{A_M}{L} + \frac{1}{Lh} + h^{2K} - \frac{\lambda_M}{L}\big)$. Because the MSE of standard SG-MCMC is bounded by $O\big(\frac{A_M}{L} + \frac{1}{Lh} + h^{2K}\big)$ (see Appendix ) and $\lambda_M > 0$ from Theorem 6, we conclude that vrSG-MCMC induces a lower MSE bound than the corresponding SG-MCMC algorithm, with an improvement of $O\big(\frac{\lambda_M}{L}\big)$.

It is worth noting that in Algorithm 1, the minibatch for calculating the old gradient $\tilde{g}$ is required to be larger than that for calculating the current stochastic gradient, i.e., $n_1 > n_2$. Otherwise, $\lambda_l$ in Theorem 6 would become negative, leading to an increased MSE bound compared to standard SG-MCMC. This matches the intuition that old gradients need to be more accurate (and thus computed with larger minibatches) than current stochastic gradients in order to reduce the stochastic gradient noise.
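The variance-reduction effect behind this requirement is easy to probe empirically. In the toy check below (our own construction, with a deliberately nonlinear score so the correction term is not exactly constant), the estimator built around an anchor gradient $\tilde{g}$ has far lower variance than the plain minibatch estimator at the same minibatch size $n_2$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n1, n2 = 1_000, 200, 20
data = rng.normal(2.0, 1.0, size=N)

def grad_ll(d, theta):
    """Score of a toy likelihood: log p(d | theta) = -(d - theta)^4 / 4."""
    return (d - theta) ** 3

theta, theta_tilde = 1.80, 1.75                 # current vs. anchor parameter
anchor = rng.choice(N, size=n1, replace=False)
g_tilde = (N / n1) * grad_ll(data[anchor], theta_tilde).sum()

plain, reduced = [], []
for _ in range(5_000):
    idx = rng.choice(N, size=n2, replace=False)
    plain.append((N / n2) * grad_ll(data[idx], theta).sum())
    reduced.append(g_tilde + (N / n2) * (grad_ll(data[idx], theta)
                                         - grad_ll(data[idx], theta_tilde)).sum())
print("plain variance:  ", np.var(plain))
print("reduced variance:", np.var(reduced))
```

The correction term averages gradient *differences*, which are small when $\tilde{\theta}_l$ is close to $\theta_l$; this is exactly the closeness that Lemma 5 quantifies via the $O(mh)$ term.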
Remark 7 In the special case of (Dubey et al. 2016), where $n_1 = N$ for SGLD, Theorem 6 gives an MSE bound of $O\big(\frac{A_M}{L} + \frac{1}{Lh} + h^2 + \frac{mh}{L} - \frac{\max_l \lambda_l}{L}\big)$, with
$$\lambda_l = \left(\frac{N}{n_2} - 1\right)\sum_{ij}\mathbb{E}\,\boldsymbol{\beta}_{li}^T\boldsymbol{\beta}_{lj} - 2\,\frac{N(N-n_2)}{n_2(N-1)}\sum_{i}\cdots$$