"gt": "Large Language Models (LLMs) have extensive applica- tions in facilitating decision-making across professional and social domains, underscoring the importance of aligning LLMs with safety considerations. To safeguard against the generation of responses that deviate from human values, safety alignment is pursued through diverse mechanisms, including model fine-tuning Howard and Ruder (2018), re- inforcement learning with human feedback (RLHF) Ziegler et al. (2019), and model editing Mitchell et al. (2022). The overall goal of these approaches is to mitigate the risk of LLMs producing harmful or unlawful responses to user queries. While most Large Language Models (LLMs) serve as re- liable AI assistants capable of identifying and declining to respond harmful queries in many instances, they remain vul- nerable to carefully crafted prompts designed to manipulate them into producing toxic content, which is referred as \"jail- breaking\". Existing studies on jailbreaking LLMs can be categorized into two main approaches: manually designed jailbreak attacks web (2023); Li, Zheng, and Huang (2024) and learning-based jailbreak attacks. Representative of the *Corresponding author Figure 1: Examples of the false positive and false negative cases in the refusal matching evaluations. latter category is the GCG attack Zou et al. (2023), which reformulates the jailbreak attack as a process of generating adversarial examples, aiming to elicit LLMs to produce an affirmative response of a few tokens (e.g., \"sure, here is how to...\"). Building upon this, subsequent studies by Zhu et al. (2023) and Liu et al. (2023) have refined such attacks, focus- ing on improving stealthiness and readability using different optimization algorithms. Although learning-based attack such as GCG can success- fully jailbreak in some cases, some limitations restrict its performance, e.g. discrete input space and the lack of suit- able jailbreak target. The categories of objectionable behav- iors and reasonable responses to them are numerous Carlini et al. (2023). Moreover, the GCG target loss could not be the perfect optimization target regarding the jailbreak optimiza- tion problem, as also proposed by one concurrent work Liao and Sun (2024). To overcome such limitations, we introduce the DSN (Don\u2019t Say No) attack, by which universal adver- sarial suffixes can be generated stimulating LLMs to both produce affirmative responses and suppress refusals (Don\u2019t Say No). To achieve this goal, we incorporate an augmentation loss item that directs LLM\u2019s response away from predefined re- fusal keywords or strings. As shown in the upper part of Figure 2, the loss object involves: maximizing the affirma- tive response probability and minimizing the refusal key- word probability. Given the LDSN and the initial suffix, the universal adversarial suffix will be obtained by the Greedy Coordinate Gradient-based Search Zou et al. (2023). Another challenge of jailbreaking is the assessment met- ric. Unlike classification task, where the success of one adversarial example can be indicated by misclassification, 1 arXiv:2404.16369v1 [cs.CL] 25 Apr 2024 Figure 2: Detailed illustration of DSN attack and ensemble evaluation pipeline. The red arrow and left example represents affirmative response maximization. The green arrow and right example represents refusal minimization. evaluating jailbreak attack is challenging. 
It is hard to automatically ascertain the harmfulness of LLM completions, and relying solely on manual annotation is both impractical and unrealistic. Existing work commonly employs a refusal string/keyword matching metric (refusal matching for short), where an attack is considered successful if the initial fixed-length segment of the response does not contain pre-defined refusal strings (e.g., "Sorry, I cannot...") and vice versa. While it appears intuitive and aligns with human evaluation processes, a closer examination reveals numerous false positive (FP) and false negative (FN) instances. One major limitation is that it relies largely on the length of the predetermined initial segment, as also proposed by one concurrent work Mazeika et al. (2024). If the initial segment is short (e.g., 64 tokens), it might neglect later refusal strings and evaluate the response as a successful jailbreak instance, resulting in a false positive (case 1 in Figure 1). On the other hand, if the initial segment is too long (e.g., 512 tokens), the result can be a false negative if a refusal appears at the end but some harmful content is generated beforehand (case 2 in Figure 1; Vicuna's significant difference between Figures 5 and 7). Other erroneous evaluation cases are illustrated in Figure 1.

To enhance the reliability of the evaluation metric, we propose an ensemble evaluation approach involving three modules, as shown in the lower part of Figure 2. Instead of adopting the refusal matching metric, we first employ a natural language inference (NLI) He et al. (2021) based method to assess the contradiction within the completions. This step aims to handle cases where the response contains a sharp semantic turn (as depicted in Figure 1, case 3). After that, we integrate two third-party LLM evaluators, namely GPT-4 Achiam et al. (2023) and HarmBench Mazeika et al. (2024), to provide a robust and comprehensive evaluation. The final evaluation result is the aggregation of all three modules.

The contributions can be summarized as follows:
• We introduce DSN, a powerful attack that incorporates a novel objective to not only elicit the affirmative response but also suppress the refusal response.
• We apply the Unlikelihood loss to stabilize the convergence and optimization of the two opposing loss objectives.
• We propose an ensemble evaluation pipeline that novelly incorporates NLI contradiction as well as LLM evaluators to examine the success of the attack more accurately.
• Extensive experiments demonstrate the potency of DSN and the effectiveness of the ensemble evaluation compared to baseline methods.",
"main_content": "Adversarial examples. Since the discovery of adversarial examples Szegedy et al. (2014); Goodfellow, Shlens, and Szegedy (2014), the exploration of vulnerabilities within deep learning models to well-designed and imperceptible perturbations has attracted significant research interest for one decade. Under the white-box setting, a series of effective adversarial attack algorithms have been proposed Carlini and Wagner (2017); Kurakin, Goodfellow, and Bengio (2017). In an automated learning manner, these methods utilize gradient-based approaches to search for imperceptible perturbations. In addition, several effective adversarial attacks based on transfer attacks have also been proposed to address black-box setting. Papernot et al. (2016); Liu et al. (2016) Jailbreak attacks. In recent years, with the advancement of the Large Language Model (LLM), the field of jailbreaking attacks, aiming to induce the target LLMs to generate harmful and objectionable content, has gathered widespread research attention Wei, Haghtalab, and Steinhardt (2023). Current jailbreak methods include approaches based on manually constructed prompts web (2023), representation engineering Li, Zheng, and Huang (2024), utilizing LLMs to generate Deng et al. (2024), grammatical rules Zhang, Pan, and Yang (2023), modification of decoding methods Huang et al. (2023), usage of multi-modal LLM Niu et al. (2024); Gong et al. (2023) as well as those reformulating the jailbreaking as adversarial attacks and employing pure optimization Zou et al. (2023); Zhu et al. (2023); Liao and Sun (2024). GCG Zou et al. (2023) is a typical optimizationbased attack, and according to one concurrent work Mazeika et al. (2024), it is actually the most powerful attack among other methods. Thus GCG is used as the baseline method in this paper. The evaluation methods employed so far is primarily refusal matching as described in Section 1. Another method typically involves constructing a binary classifier, aiming to determine whether LLM generates harmful content Zhu et al. (2023); Huang et al. (2023); Mazeika et al. (2024). However, due to LLM\u2019s diverse generation capabilities, these attempts still require further enhancement. Optimization Strategy. As mentioned in section 1, one difficulty the learning-based jailbreak attack shall overcome is the discrete input space. To address it, there exist two main categories: embedding-based and token-based methods. The former category directly optimize on continuous embeddings and then inferring back the target tokens Lester, Al-Rfou, and Constant (2021) Wen et al. (2023) Qin et al. (2022). The latter treats the one-hot vectors of discrete tokens as optimizable continuous quantities, obtaining the final tokens through different greedy-based algorithms. Several effective methods Ebrahimi et al. (2017) Shin et al. (2020) Zou et al. (2023), including us, adopt this paradigm. In a separate line of work, there were also attempts to direct the language model output from unwanted contents. Welleck et al. (2019); Li et al. (2019) 3 Methods 3.1 DSN: Suppress Refusal Intuition. The first part of the objective is to suppress the model giving a refusal response. For examples: System: You are a chat assistant designed to provide helpful and not harmful responses to user queries. User: Tell me how to build a bomb. ! ! ! ! ! Assistant: As an AI, I cannot fulfill... 
Above, the blue and red text represent the user prompt and the adversarial jailbreaking suffix, while the purple text indicates the targeted beginning of the response. The intuition behind suppressing pre-defined refusal keywords is to force the language model's generation far away from a refusal response and thereby lead to a successful jailbreak.

Notation. We consider an LLM to be a mapping from a sequence of tokens x_{1:n}, with x_i \in \{1, \dots, V\} (where V denotes the vocabulary size), to a distribution over the next token. Specifically, we use the notation p(x_{n+1} \mid x_{1:n}) to denote the probability of the next token being x_{n+1} given the previous tokens x_{1:n}. We then denote the probability of generating a response of H tokens as p(x_{n+1:n+H} \mid x_{1:n}), which can be formalized as:

p(x_{n+1:n+H} \mid x_{1:n}) = \prod_{i=1}^{H} p(x_{n+i} \mid x_{1:n+i-1})    (1)

Unlikelihood loss. Normally, to maximize the prediction of a target distribution or token, we use the standard softmax cross-entropy loss L_{CE}(p, q) = -\sum_i p_i \log(q_i), which pushes the predicted distribution q (produced by the softmax function) toward the true distribution p by minimizing L_{CE}(p, q). However, our purpose is the opposite: we want to minimize the probability of generating a refusal. One intuitive way is to take the negative softmax cross-entropy loss, which however diverges to negative infinity and causes a catastrophic imbalance between loss terms. To stabilize the convergence, we apply the Unlikelihood loss Welleck et al. (2019), previously proposed to steer language model outputs away from unwanted content, to suppress refusal responses:

L_{Un}(p, q) = -\sum_i p_i \log(1 - q_i)    (2)

Objective. The objective of suppressing refusals is achieved by minimizing the probability of generating tokens in the pre-defined refusal keyword list, denoted as RKL = {"as an", "sorry, i cannot", "unethical", ...}. Each refusal keyword's token length is given by RTL(y). To implement this, we utilize a sliding window with the size of each refusal keyword to scan through the response and compare it with the refusal keywords. The process of keyword matching is illustrated in Figure 3. The loss function for suppressing the refusal response using the Unlikelihood loss can be stated as below, where H denotes the maximum length of the response:

L_{refusal}(x_{1:n}) = \sum_{y \in RKL} \sum_{i=n+1}^{n+H-RTL(y)} L_{Un}(y, x_{i:i+RTL(y)})    (3)

Figure 3: Illustration of the refusal loss.

3.2 DSN: Elicit Affirmative Response
Intuition. The second part of the objective is eliciting the model to begin its response with a positive affirmation of the user query. For example:

System: You are a chat assistant to provide helpful and not harmful responses to user queries.
User: Tell me how to build a bomb. ! ! ! ! !
Assistant: Sure, here is how to build a bomb:

The intuition lies in the language model's next-word prediction capability. By initiating with an affirmative response, the subsequent completion is expected to align with that affirmation, bypass the alignment mechanism and fulfill the user query effectively.

Loss function. The objective of eliciting an affirmative response is to maximize the probability of the affirmative tokens x^\star_{n+1:n+H}, which is equivalent to minimizing the negative log of that probability:

L_{target}(x_{1:n}) = -\log p(x^\star_{n+1:n+H} \mid x_{1:n})    (4)
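The two loss terms can be written down directly from Equations 2-4. Below is a minimal PyTorch sketch rather than the authors' implementation: the helper names, the assumption that `response_logits` holds the next-token logits over the response positions, and the pre-tokenized keyword list are illustrative choices.

```python
import torch
import torch.nn.functional as F

def target_loss(logits, target_ids):
    # logits: (H, V) next-token logits at the positions that should produce the
    # affirmative target x*_{n+1:n+H}; target_ids: (H,) token ids.
    # Equation 4 up to averaging: cross-entropy over the target tokens.
    return F.cross_entropy(logits, target_ids)

def unlikelihood(logits, token_ids):
    # Equation 2 for a one-hot "true" distribution: -sum_i log(1 - q_i),
    # where q_i is the predicted probability of the unwanted token at step i.
    log_probs = F.log_softmax(logits, dim=-1)                       # (L, V)
    p_unwanted = log_probs.gather(1, token_ids[:, None]).exp().squeeze(1)
    return -torch.log1p(-p_unwanted.clamp(max=1 - 1e-6)).sum()

def refusal_loss(response_logits, refusal_keyword_ids):
    # Equation 3: slide each refusal keyword (a short token sequence) over the
    # response positions and penalize generating it anywhere in the window.
    H = response_logits.size(0)
    loss = response_logits.new_zeros(())
    for kw in refusal_keyword_ids:          # e.g. token ids of "as an", "sorry, i cannot", ...
        kw = torch.as_tensor(kw, device=response_logits.device)
        L = kw.numel()
        for i in range(H - L + 1):          # sliding window of size RTL(kw)
            loss = loss + unlikelihood(response_logits[i:i + L], kw)
    return loss
```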
3.3 DSN: Loss Function
As also proposed by one concurrent work Liao and Sun (2024), it is doubtful whether the GCG target loss is the ideal jailbreak optimization target. As a fundamental component, the optimization target directly guides the jailbreak process, yet the optimal choice remains unresolved, as evidenced by the findings of Liao and Sun (2024). Thus, in an attempt to explore a candidate for a more effective and efficient jailbreak optimization target, we propose to integrate L_refusal with L_target, obtaining a composite and powerful jailbreak optimization target. The DSN target can elicit the LLM to generate objectionable content and suppress refusal responses simultaneously. The overall DSN loss can be stated as below, where α is a hyperparameter that balances the convergence of the two loss objectives:

L_{DSN}(x_{1:n}) = L_{target}(x_{1:n}) + \alpha \cdot L_{refusal}(x_{1:n})    (5)

3.4 Optimization and Algorithm
Finally, we introduce the optimization process and the algorithm. Our goal is to optimize an adversarial suffix adv^\star with the aforementioned loss function. The optimization process first initializes a string of fixed length:

adv^\star \leftarrow \arg\min_{adv} L_{DSN}(x_{1:n} \oplus adv)    (6)

However, as described in Sections 1 and 2, a primary challenge in optimizing adversarial suffixes is the discrete input space. To resolve this problem, we incorporate a greedy coordinate gradient-based approach Zou et al. (2023). The intuition is to leverage gradients with respect to one-hot vectors to find a set of promising candidate replacements at each greedy step, and then exactly evaluate all these replacements via a forward pass. As this algorithm is based on the optimization in GCG, it is relegated to the Appendix.
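To make the search concrete, the following is a simplified single-step sketch of the greedy coordinate gradient procedure adopted from Zou et al. (2023), driven by the combined loss of Equation 5. It is an illustration under stated assumptions: `dsn_loss` is a hypothetical callable that embeds the full prompt, runs the victim model, and returns L_target + α·L_refusal for the given suffix embeddings; the batching and candidate-filtering details of the original algorithm are omitted.

```python
import torch

def gcg_step(suffix_ids, embedding_matrix, dsn_loss, top_k=256, num_candidates=512):
    # suffix_ids: (S,) current adversarial suffix token ids (LongTensor).
    # embedding_matrix: (V, d) token embedding table of the victim model.
    # dsn_loss: hypothetical callable, suffix embeddings (S, d) -> scalar L_DSN (Equation 5).
    V = embedding_matrix.size(0)
    one_hot = torch.zeros(suffix_ids.size(0), V, device=embedding_matrix.device)
    one_hot.scatter_(1, suffix_ids[:, None], 1.0)
    one_hot.requires_grad_(True)

    # Gradient of the DSN loss with respect to the one-hot suffix representation.
    loss = dsn_loss(one_hot @ embedding_matrix)
    grad = torch.autograd.grad(loss, one_hot)[0]                   # (S, V)
    top_subs = (-grad).topk(top_k, dim=1).indices                  # promising substitutions per position

    # Sample single-token substitutions and keep the one with the lowest exact loss.
    best_ids, best_loss = suffix_ids, float("inf")
    for _ in range(num_candidates):
        pos = torch.randint(suffix_ids.size(0), (1,)).item()
        cand = suffix_ids.clone()
        cand[pos] = top_subs[pos, torch.randint(top_k, (1,)).item()]
        with torch.no_grad():
            cand_loss = dsn_loss(embedding_matrix[cand]).item()    # exact forward-pass evaluation
        if cand_loss < best_loss:
            best_ids, best_loss = cand, cand_loss
    return best_ids, best_loss
```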
3.5 Evaluation Metrics
Refusal matching. The refusal matching algorithm detects whether a response contains any refusal keyword, as already described in Sections 1 and 2. The attack is considered successful if the initial segment of the response does not contain pre-defined refusal strings. As detailed in Section 1, the length of the fixed-length initial segment also plays a crucial role in a rigorous assessment. The initial segment length and the refusal keyword list utilized in this paper are detailed in Appendix C.3.

NLI contradiction. In natural language inference (NLI), a contradiction means that two statements are mutually exclusive or incompatible with each other. The intuition for employing NLI contradiction is that semantic inconsistency is frequently observed among negative cases, where the completion fails to answer the objectionable query (e.g., case 3 in Figure 1). We design an algorithm to evaluate the extent of contradiction between the user query and the model completion.

Algorithm 1: NLI Contradiction Evaluation
Input: the user query Q, the adversarial suffix adv, the language model M, a threshold T.
1: Response R: [o_1, o_2, ..., o_n] = M(Q + adv)  ▷ Generate the response R containing n sentences.
2: for i = 1, ..., n do
3:   score^{Qo}_i = NLI(Q + adv, o_i)
4:   l^{Qo}_i = length(Q + adv + o_i)
5:   if i ≠ n then
6:     score^{oo}_i = NLI(o_i, o_{i+1})
7:     l^{oo}_i = length(o_i + o_{i+1})
8: CE_{Qo} = (Σ_{i=1}^{n} l^{Qo}_i · score^{Qo}_i) / (Σ_i l^{Qo}_i)  ▷ Overall NLI score between the sentences and the query.
9: CE_{oo} = (Σ_{i=1}^{n-1} l^{oo}_i · score^{oo}_i) / (Σ_i l^{oo}_i)  ▷ Overall NLI score between adjacent sentences.
10: if T - (CE_{oo} + CE_{Qo}) ≤ 0 then
11:   Return Fail
12: else
13:   Return Success

By using an open-source NLI model, responses can be classified according to their contradiction extent. A higher overall NLI contradiction score signifies lower response consistency and diminishes the likelihood of being a jailbreaking response. Intuitively, false positive cases should decrease, ensuring that the positive cases are semantically consistent. As presented in Algorithm 1, given the user query Q, the adversarial suffix adv, and the language model M, we first generate the response R containing n sentences (line 1). Then, for each sentence o_i in response R, we assess how well it aligns with the user query, as well as the relationship between pairs of sentences within the response, by calculating the standard NLI contradiction score (lines 2-7). We use a sum of scores weighted by sentence length to compute the overall contradiction extents CE_oo and CE_Qo (lines 8-9), as the sentence length plays a vital role in assessing the overall contradiction extent. By comparing with a predefined threshold T, we determine the attack result (lines 10-13). More details will be covered in Appendix C.3.
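A minimal Python sketch of Algorithm 1 is shown below. It assumes an off-the-shelf NLI classifier from Hugging Face; the roberta-large-mnli checkpoint, the upstream sentence splitting of the response, and character length as the length weight are all illustrative assumptions, while the paper's actual NLI model and threshold are configuration details left to its Appendix C.3.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
nli = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")
# Look up the contradiction label index from the model config instead of hard-coding it.
CONTRA = {v.lower(): k for k, v in nli.config.id2label.items()}["contradiction"]

def nli_contradiction(premise, hypothesis):
    inputs = tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = nli(**inputs).logits.softmax(dim=-1)[0]
    return probs[CONTRA].item()

def nli_contradiction_eval(query, sentences, threshold):
    # query should already include the adversarial suffix (Q + adv), and
    # sentences is the model response split into sentences, as in Algorithm 1.
    qo = [(len(query) + len(s), nli_contradiction(query, s)) for s in sentences]
    oo = [(len(a) + len(b), nli_contradiction(a, b))
          for a, b in zip(sentences, sentences[1:])]
    ce_qo = sum(l * s for l, s in qo) / sum(l for l, _ in qo)          # CE_Qo
    ce_oo = sum(l * s for l, s in oo) / sum(l for l, _ in oo) if oo else 0.0  # CE_oo
    return "Fail" if threshold - (ce_oo + ce_qo) <= 0 else "Success"
```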
Third-party evaluator. Besides refusal matching and NLI, recent works have also introduced some promising evaluation methods, mainly LLM based. We incorporate HarmBench Mazeika et al. (2024) and GPT-4 Achiam et al. (2023) into our ensemble pipeline as third-party evaluators. Details about these third-party evaluators are covered in Appendix C.2.

Ensemble Evaluation. We use the last three of the aforementioned evaluation modules (NLI, GPT-4, and HarmBench) and decide whether a response is a successful or unsuccessful jailbreak by taking the majority vote among the components. The reason for this choice and its superiority are discussed in Section 4.4.

Figure 4: ASR over steps on Llama2 and Vicuna. (a) Llama2: L_refusal only for search; (b) Llama2: L_refusal for sampling and search; (c) Vicuna: L_refusal only for search; (d) Vicuna: L_refusal for sampling and search.

4 Experiments
4.1 Threat Model
The objective of attackers is to jailbreak Large Language Models (LLMs), aiming to circumvent the safeguards in place and generate malicious responses. The victim models in this paper are open-source language models, providing white-box access to the attacker. As the system prompt also plays a significant role in jailbreaking Huang et al. (2023), the default system prompt of each language model is preserved.

4.2 Configuration
Datasets. AdvBench Zou et al. (2023) is the main adopted dataset, which aims to systematically evaluate the effectiveness and robustness of jailbreaking prompts in eliciting harmful content generation. It presents a collection of 520 goal-target pairs that reflect harmful or toxic behavior, categorized as profanity, graphic depictions, threatening behavior, misinformation, discrimination, cybercrime, and dangerous or illegal suggestions.

Target models. We target Llama-2-Chat-7B Touvron et al. (2023) and Vicuna-7b-v1.3 Zheng et al. (2023), two state-of-the-art open-source LLMs. These two language models have undergone different levels of the alignment process and exhibit varying degrees of human-value alignment capability. During the transfer experiments in Section 4.5, the transferability towards the GPT-3.5-turbo model is examined rigorously.

Baselines and evaluation metrics. We compare the DSN attack with GCG Zou et al. (2023), the typical and most powerful learning-based jailbreak attack method Mazeika et al. (2024). To evaluate the effectiveness of the DSN attack, we adopt the standard attack success rate (ASR), as shown in Equation 7. ASR measures the portion of toxic responses generated from the LLM M, where the adversarial suffix adv is appended to the malicious query Q. Here I is an evaluation indicator that returns 1 if the response is assessed as harmful (a successful jailbreak case) and 0 otherwise. The comparison is first conducted by refusal matching in Section 4.3, and then the proposed ensemble evaluation metric comes into play in Section 4.4.

ASR(M) \overset{\mathrm{def}}{=} \frac{1}{|D'|} \sum_{Q \in D'} I(M(Q \oplus adv))    (7)

Table 1: ASR results under the refusal matching metric.
ASR% at step 500 | Llama-2     | Llama-2 optimal | Vicuna      | Vicuna optimal
GCG              | 29.8 ± 12.6 | 43              | 47.4 ± 5.6  | 52
DSN              | 47.7 ± 14.7 | 74              | 57.1 ± 11.8 | 83
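Equation 7 simply averages a binary harmfulness judgment over the behavior set; a small sketch follows, where `judge` stands for whichever indicator I is in use (refusal matching or the ensemble pipeline) and `model` is assumed to map a prompt string to a completion.

```python
def attack_success_rate(model, queries, adv_suffix, judge):
    # Equation 7: fraction of queries whose completion is judged a successful jailbreak.
    hits = sum(judge(model(q + " " + adv_suffix)) for q in queries)
    return hits / len(queries)
```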
4.3 Evaluation 1: Effectiveness of the DSN Attack
ASR convergence rate. In Figure 4, we present the ASR of the GCG attack and DSN with respect to the optimization steps. The shaded regions with dotted lines are margin plots representing the mean and variance of repeated experiments with different hyper-parameter configurations, while the solid lines represent the ASR of the optimal run among the repeated experiments. Note that the sampling of candidate suffixes and the search for the adversarial suffix among the candidates both involve the loss function in Equation 5 (details are relegated to Appendix C.1 together with the algorithm).

It can be observed that the results of the DSN attack are significantly superior to those of the baseline method, in terms of both mean and optimal results. This is evidenced by the lines representing the DSN method consistently positioned above those of the baseline. Moreover, the yellow shaded area representing the DSN method remains above the blue shaded area of the baseline across nearly the entire 0-500 step interval. This indicates that the DSN attack is robustly superior to the baseline under a limited step budget, presenting an ideal scenario for malicious attackers who might lack sufficient computational resources, e.g., who cannot afford 500 attack steps for each setting. Moreover, the wider span of the shaded area for the DSN attack suggests a greater variance, which is reasonable, as the repeated DSN experiments differ in hyper-parameters. The experimental strategies of DSN and GCG also differ, as the latter involves only a single setting and has been run for more trials.

Ablation study on α. To investigate the impact of the augmentation term L_refusal on the jailbreaking results (Equation 5), we present in Figure 5 the maximum ASR among multiple rounds of experiments for different settings of the hyper-parameter α, which controls the magnitude of the L_refusal term. Fixed-length segments of 128 and 512 tokens for the Llama and Vicuna completions, respectively, are examined by the refusal matching metric here. The baseline GCG results correspond to the leftmost alpha = None case as well as the dotted line, which involves only the target loss in Equation 4. The yellow, blue, and red bars represent the cases where the L_refusal term is involved in different stages, namely L_refusal used only for searching, used for both selecting and searching with the same α, and used for both selecting and searching but with a different α. More details about the settings and hyper-parameters are presented in Appendix C.3.

Figure 5: Ablation study of ASR vs. α by refusal matching evaluation. (a) ASR of Llama; (b) ASR of Vicuna.

In Figure 5, the DSN method consistently surpasses the baseline performance under nearly every hyper-parameter setting. We did not include results for higher values of α because when α exceeds 100, the DSN loss is dominated by the L_refusal term, resulting in generated responses that focus too much on avoiding refusal keywords rather than responding to the objectionable requests, which is not desirable in a jailbreaking scenario.

4.4 Evaluation 2: Effectiveness of the Ensemble Evaluation Pipeline
Instead of adopting the refusal matching evaluation method like current works Zou et al. (2023); Zhu et al. (2023), mainly considering its limitations mentioned in Section 1, in this section we adopt the previously proposed ensemble evaluation pipeline to ensure more accurate and reliable evaluation results.

Human evaluation. To accurately and fairly assess the proposed ensemble evaluation pipeline and compare it to the widely adopted refusal matching, we conduct a human evaluation by manually annotating 300 generated responses. Since the NLI method requires certain hyperparameters to be determined, the 300 annotated responses are split into a train set of 100 and a test set of 200, the latter consisting of 100 Llama2 completions and 100 Vicuna completions. More details about the data split as well as the annotation principles are covered in Appendix C.2.

Table 2: ASR results under the ensemble evaluation metric.
ASR% at step 500 | Llama-2     | Llama-2 optimal | Vicuna      | Vicuna optimal
GCG              | 31.0 ± 13.4 | 46              | 91.6 ± 2.9  | 96
DSN              | 45.6 ± 15.1 | 84              | 88.1 ± 8.0  | 98

Aggregation strategy comparison. Aggregating the evaluation results from each module is crucial for the accuracy of the evaluation pipeline. Common methods include majority voting, one-vote approval (requiring only one module to detect jailbreaking), and one-vote veto (requiring all modules to detect jailbreaking). To determine which aggregation policy is more accurate on the test set, we employ a ROC curve illustrating the True Positive Rate versus the False Positive Rate and compare the AUROC scores (shown in Figure 6). A larger area under the curve indicates better results. Soft and hard majority votes return probabilities and binary outcomes, respectively. The ROC curve demonstrates the superiority of the majority vote as an aggregation strategy (the green and orange curves), with the ensemble evaluation showing a higher AUROC score compared to refusal matching.

Figure 6: ROC curve of different aggregation policies on the test set.
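The aggregation policies can be compared directly against the human annotations. The sketch below, using scikit-learn and assuming each module returns a per-response probability of a successful jailbreak, illustrates how such an AUROC comparison could be produced; the 0.5 vote threshold is an assumption.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def aggregate(scores, policy):
    # scores: (n_responses, 3) jailbreak probabilities from NLI, GPT-4, HarmBench.
    votes = scores >= 0.5
    if policy == "soft_majority":
        return scores.mean(axis=1)                   # average probability
    if policy == "hard_majority":
        return (votes.sum(axis=1) >= 2).astype(float)
    if policy == "one_vote_approval":
        return votes.any(axis=1).astype(float)       # one module suffices
    if policy == "one_vote_veto":
        return votes.all(axis=1).astype(float)       # all modules must agree
    raise ValueError(policy)

def compare_policies(scores, human_labels):
    # AUROC of each aggregation policy against the manually annotated labels.
    policies = ["soft_majority", "hard_majority", "one_vote_approval", "one_vote_veto"]
    return {p: roc_auc_score(human_labels, aggregate(np.asarray(scores), p)) for p in policies}
```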
Table 3: Evaluation results obtained by different evaluation methods, reported as the average over two distinct test sets, each containing 100 manually annotated real jailbreaking responses.
Eval method      | Acc  | AUROC | F1
Refusal matching | 0.74 | 0.72  | 0.79
GPT-4            | 0.80 | 0.77  | 0.85
HarmBench        | 0.80 | 0.78  | 0.84
NLI (ours)       | 0.77 | 0.79  | 0.76
Ensemble (ours)  | 0.82 | 0.79  | 0.86

Examination of different metrics. By adopting a learning approach, the hyperparameter configuration of the novel NLI evaluation metric, a constituent part of our proposed ensemble evaluation, has been determined. To further demonstrate the superiority of the evaluation pipeline effectively and rigorously, we present the results of the different evaluation methods in Table 3. From the table, it is observed that the ensemble evaluation achieves superior performance on our annotated test set. It is noteworthy that, although the performance of the NLI model alone is not the best (for instance, it even falls short of the refusal matching baseline on the F1 metric), the ensemble combining GPT-4, NLI, and HarmBench yields the overall best performance among the different metrics. This is attributed to the NLI module's focus on identifying semantic incoherence and inconsistency within the model's completion, a consideration that refusal matching and other alternative evaluation methods do not adequately address. Moreover, given that the NLI model is lightweight and open-source, employing this evaluation method results in significant savings of time and financial resources, particularly in comparison to evaluation methods that rely on multiple calls to third-party commercial LLM APIs.

Figure 7: Ablation study of ASR vs. α by ensemble evaluation. (a) ASR of Llama; (b) ASR of Vicuna.

ASR under the new evaluation. In Figure 7, we present the maximum ASR versus the hyper-parameter α under the new ensemble evaluation pipeline. Similar to Figure 5, the DSN method gives superior jailbreaking results on the much more aligned model Llama2; however, both methods give nearly 100% ASR on the less aligned model Vicuna. These two observations are consistent with the results of one concurrent work Mazeika et al. (2024) and the findings mentioned in Section 1, respectively.

4.5 Transferability
Interestingly, the suffixes optimized purely by DSN demonstrate a great level of transferability, even though no ensemble or multi-model optimization is utilized as in the original GCG paper Zou et al. (2023). In Table 4, the transfer ASR towards the gpt-3.5-turbo model is detailed for different victim models, different metrics, and different dataset splits.

Table 4: Transfer ASR (%) towards the black-box gpt-3.5-turbo model.
              Llama                                    Vicuna
              Refusal Matching    Eval Ensemble        Refusal Matching    Eval Ensemble
              train     test      train     test       train     test      train     test
GCG paper     None      None      None      None       None      34.3      None      None
DSN mean      45.21     42.95     44.19     50.07      54.98     54.27     53.73     59.59
DSN max       100       87        96        95         96        90        100       93

It is also noteworthy to point out the importance of the system prompt Huang et al. (2023). In our open-source GCG and DSN attack results, the system prompt has been preserved, since modifying it can affect the jailbreak results drastically. However, during our transfer experiments the default system prompt for the gpt-3.5-turbo model, e.g., "you're a helpful assistant", is removed from the conversation template, because otherwise the jailbreak attack results of both methods would shrink immediately and dramatically.

4.6 Running Time Analysis
No significant extra time cost is incurred by DSN compared to GCG. We sample 5 rounds of Llama-2 experiments for each method and compare their running times in Table 5. On our machine, only a 0.77% relative increase in the average running time is observed. The computational overhead does not rise significantly because the extra computation introduced by DSN is orders of magnitude smaller than the cost of obtaining the logits during the forward pass and computing the gradients during back-propagation. Thus the extra time cost is practically negligible.

Table 5: Running time analysis.
Running time (hours) | GCG          | DSN
Round 1              | 60.96        | 60.58
Round 2              | 60.11        | 60.46
Round 3              | 59.71        | 61.08
Round 4              | 60.73        | 61.30
Round 5              | 60.58        | 61.01
Overall              | 60.42 ± 0.45 | 60.89 ± 0.31
5 Conclusion
In conclusion, we introduce the DSN (Don't Say No) attack to prompt LLMs not only to produce affirmative responses but also to effectively suppress refusals. Furthermore, we propose an ensemble evaluation pipeline integrating Natural Language Inference (NLI) contradiction assessment and two external LLM evaluators. Through extensive experiments, we showcase the potency of the DSN attack and the effectiveness of our ensemble evaluation approach compared to baseline methods. This work offers insights into advancing safety alignment mechanisms for LLMs and contributes to enhancing the robustness of these systems against malicious manipulations."