|
|
"gt": "Supervised fine-tuning (SFT) refines large lan- guage models (LLMs) using task-specific instruc- tion data to enhance their capability to follow in- structions (Touvron et al., 2023; Peng et al., 2023) and to align their outputs with human preferences and safety considerations (Ouyang et al., 2022; Rafailov et al., 2023; Dong et al., 2023b; Yuan et al., 2023). This process is often termed \u201calign- ment\u201d, signifying the tailoring of model outputs *Work was done during a visit to Westlake University. \u0000 Co-corresponding authors. to conform to specific downstream requirements. Nevertheless, current research casts doubt on the necessity and potential adverse impacts of SFT. But the alignment achieved through SFT is often considered to be \u201csuperficial\u201d, with the process po- tentially repurposing pre-existing knowledge from pre-training to merely reshape outputs to meet spe- cific criteria (Zhou et al., 2023; Lin et al., 2023). It has been observed that even a small-scale SFT training dataset can produce significant alignment effects (Liu et al., 2023; Xia et al., 2024). On the other hand, recent empirical studies (Luo et al., 2023; Dong et al., 2023a) have raised concerns that SFT might hurt the knowledge acquired during its pre-training phase, leading to serious consequences like catastrophic forgetting. Not only is there no definitive consensus on the necessity of SFT, but the majority of these stud- ies also focus on monolingual tasks. LLMs still encounter challenges in handling complex cross- lingual generation tasks (Schioppa et al., 2023; Wang et al., 2023). Current research on cross- lingual alignment primarily seeks to extrapolate or align English capabilities to other languages us- ing the SFT paradigm (Zhang et al., 2023; Chai et al., 2024; Xu et al., 2024), yet there remains a gap in exploring the specific impacts of SFT-based cross-lingual alignment. Furthermore, given the potential risk of SFT leading to the forgetting of pre-training knowledge, the question of how to achieve cross-lingual alignment without training remains underexplored. To bridge these gaps, our study conducts an in- depth examination of the impact of SFT on cross- lingual generation. We investigate the influence of SFT on the decoding patterns of foundation models in cross-lingual contexts, hypothesizing that the success of SFT largely hinges on the selection of initial prior tokens that are critical for eliciting task- specific generation in the target language. Further- more, the observed decoding similarities between 1 arXiv:2404.16766v1 [cs.CL] 25 Apr 2024 Instruction: Translate the following sentence from English to Ukrainian: \u201cWe now have 4-month-old mice that are non-diabetic that used to be diabetic,\u201d he added. \"They're not cured, but they're no longer diabetic.\"\\n\"We now have 4-month- \u2026 \u041c\u0438 \u0442\u0435\u043f\u0435\u0440\u0456\u0448\u043d\u0456\u0445 4 \u043c\u0456\u0441\u044f\u0446\u0456\u0432 \u043c\u0430\u044e\u0442\u044c \u043c\u0438\u0448\u0435\u0439, \u044f\u043a\u0456 \u0440\u0430\u043d\u0456\u0448\u0435 \u0431\u0443\u043b\u0438 \u0434\u0456\u0430\u0431\u0435\u0442\u0438\u043a\u0430\u043c\u0438 \u2026 Foundation LLM SFT-tuned LLM SFT-based Alignment \"They're not cured, but they're no longer diabetic.\"\\n\"We now have 4-month- \u2026 Foundation LLM + Prior Tokens + SFT Pipeline Pretty: Prefix TexT as a Yarn ? ? 
Furthermore, the observed decoding similarities between foundation and SFT models support the extension of the superficial alignment hypothesis to cross-lingual scenarios. Responding to these insights, we introduce a training-free alignment method named "PRETTY" for cross-lingual and non-English tasks. The Prefix TexTs act as a Yarn (PRETTY), linking the foundation LLM and the SFT LLM and eliciting near-SFT performance levels from the foundation LLM. Specifically, we augment the original input with a few tokens that serve as decoding priors, and then prompt the foundation LLM to resume decoding based on this modified input. In most cases, only one or two task-related prior tokens are needed, and the method for constructing these prior tokens is flexible across various kinds of language resources, fostering the democratization of multilingual LLMs.

We conducted experiments on machine translation (Goyal et al., 2022), cross-lingual summarization (Bhattacharjee et al., 2023), and non-English part-of-speech (POS) tagging (Liang et al., 2020) tasks across eight languages. These tasks exemplify cross-lingual generation and multilingual language understanding, and they provide ample non-English test data to evaluate effectiveness across varying levels of resource availability. The experimental results demonstrate that PRETTY can effectively align the foundation model to match the SFT model's performance without training, merely by adding two prior tokens at the start of decoding.", |
|
|
"main_content": "2.1 Preliminaries Pre-training The pre-training (PT) of LLMs is primarily conducted through language modeling tasks on large-scale unlabeled data (Touvron et al., 2023; Achiam et al., 2023). During this phase, given a sequence XPT of length N and a context window k, the optimization objective is maximizing the joint probability PLM as: PLM(XPT) = N \ufffd i=1 mod \ufffd i=1 P(xi|xi\u2212k:i\u22121) (1) \ufffd which encourages the model to generate text that naturally follows from the preceding context. However, this \u201ctext completion\u201d behavior can become a bottleneck when models are prompted to switch languages or follow specific instructions of crosslingual generation. It is frequently observed that when prompted with English input and instructed to produce text in a different language, as illustrated in the upper example of Figure 1, the foundation model often continues to decode in English. SFT SFT leverages labeled data pair (Xins., Y ) to empower models with the ability to follow instructions. This stage aims to maximize the probability of the expected answer Y conditioned on the 2 input text Xins., where Xins. consists of the task instruction and task input. PSFT(Y |Xins.) = T Y j=1 P(yj|y1:j\u22121, Xins.) (2) SFT is crucial for aligning foundation models to perform task-specific instructions, effectively transforming a general-purpose LLM into an instructionfollowing assistant. However, data quality, training costs, and the imbalance of multilingual data hinder the democratization of assistant LLM. As mentioned before, SFT may be harmful to pre-training knowledge. Thus, it is meaningful and important to understand the underlying mechanism of SFTbased alignment and propose a more efficient alignment method. 2.2 Beneath the SFT-based Alignment Prior Knowledge Hypothesis It is worth noting that pre-training corpora also contain sequences that naturally express task-specific information, which imparts certain capabilities to the foundation LLMs. For example, the presence of semantically equivalent expressions in the pre-training text may enable LLM acquire machine translation ability during pre-training stage (Radford et al., 2019). Despite its extensive prior knowledge, the foundation LLM still struggles with complex crosslingual generation tasks. Beyond existing studies, we provide more concrete insights into this issue by prompting foundation LLMs with various instructions (Bawden and Yvon, 2023). Notably, only 31.8% of these prompts successfully elicit translation capability from the foundation LLMs1. This deficiency may stem from two main factors: First, the proportion of text with the aforementioned characteristics in the pre-training corpus XPT is still relatively small, and most of it is far from resembling human instruction text Xins.. Consequently, the model is more likely to predict tokens suitable for completing formal texts than those required for task-specific instructions. As a result, the foundation LLM often fails to produce tokens y \u2208Y1:T in the intended target language. Secondly, the predominance of English in the pretraining data skews the token generation probabilities of foundation LLM. Given a cross-lingual context, the model favors predicting tokens in English, while the token probabilities for other languages remain comparatively low. For example, English data 1For detailed information, please refer to Appendix B.3. 
Figure 2: The agreement between the SFT model and the foundation model in terms of next-token selection. Once the prior token is provided, the token chosen by the SFT model can also be found within the top-K candidates of the foundation model.

The above hypothesis becomes plausible when we revisit Equation (1) and Equation (2). The next-token prediction probability P_LM(X_PT) of the foundation model is conditioned on the distribution of the pre-training text X_PT. SFT narrows the probability space for token selection, adjusting the parameters to better align with the instruction distribution, i.e., the probability P_SFT(y | X_ins.) is conditioned on the distribution of the instruction text X_ins..

Experimental Settings To validate the aforementioned hypothesis, we selected the representative cross-lingual task of machine translation as our analytical testbed. The main research method involves quantifying the differences and similarities in the decision space and token-selection behavior between the foundation LLM and the SFT-aligned LLM. For model selection, we chose the foundation Llama2 7B model and conducted supervised fine-tuning on it using the Alpaca dataset (Taori et al., 2023; https://github.com/tatsu-lab/stanford_alpaca). The optimization was carried out using a cosine learning rate scheduler, with the maximum learning rate set to 2e-5 and a warmup ratio of 0.03. Training was performed on two Nvidia H800 GPUs using the LoRA parameter-efficient fine-tuning technique (Hu et al., 2022), with a cumulative batch size of 64. Other hyper-parameters follow those of the original Alpaca settings.

Figure 3: The probability distribution of tokens selected by various models. Incorporating a prior token causes the decision probabilities of both models to converge across all data instances.

Figure 4: The divergence in probability distributions across the entire vocabulary during decoding. The prior token significantly reduces the discrepancy between the foundation model and the SFT model.

A Prior Token Elicits the Silent Majority Inspired by the categorization of token shifts by Lin et al. (2023), we propose to quantify the agreement of token selection between the foundation LLM \theta_{PT} and the SFT LLM \theta_{SFT}. Given the same prefix input \hat{X}, we aim to measure whether the next token selected by the SFT LLM, y_{SFT}, is among the top-K tokens, y_{PT}, with the highest probabilities in the decision space of the foundation LLM, which can be formally expressed as follows:

y_{SFT} = \arg\max_{y \in V} P(y \mid \hat{X}; \theta_{SFT})
y_{PT} = \{\, y \mid \arg\mathrm{top}\text{-}K_{y \in V} \, P(y \mid \hat{X}; \theta_{PT}) \,\}
\mathrm{Agreement@}K = \frac{1}{L} \sum_{l=1}^{L} \mathbb{1}[\, y_{SFT} \in y_{PT} \,]    (3)

where V is the vocabulary shared by the two models, and L is the length of the dataset. We compare the agreement of token selection made by the models under the same prefix text \hat{X} in two different experimental setups. The first setup uses the instruction text as the prefix, i.e., \hat{X} = X_ins.; the second takes the first token decoded by the SFT model as a prior token, appending it to the original instruction prefix, i.e., \hat{X} = [X_ins., y^{(1)}_{SFT}].
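The Agreement@K statistic of Equation (3), together with the divergence measures used below, can be computed with a short script. The following is a minimal sketch, assuming both models are Hugging Face causal LMs sharing one tokenizer; the function and variable names are ours, not from a released codebase.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def next_token_dist(model, tok, prefix, device="cuda"):
    """Next-token probability distribution over the vocabulary for a text prefix."""
    ids = tok(prefix, return_tensors="pt").input_ids.to(device)
    logits = model(ids).logits[0, -1]
    return F.softmax(logits.float(), dim=-1)

@torch.no_grad()
def agreement_at_k(found_model, sft_model, tok, prefixes, k=20):
    """Fraction of prefixes where the SFT model's argmax token lies in the
    foundation model's top-k candidates (Eq. 3)."""
    hits = 0
    for x in prefixes:
        p_pt = next_token_dist(found_model, tok, x)
        p_sft = next_token_dist(sft_model, tok, x)
        y_sft = int(p_sft.argmax())
        hits += int(y_sft in set(p_pt.topk(k).indices.tolist()))
    return hits / len(prefixes)

def kl_js(p, q, eps=1e-12):
    """KL(p||q) and JS(p,q) over the shared vocabulary, as used for Figure 4."""
    p, q = p.clamp_min(eps), q.clamp_min(eps)
    kl = torch.sum(p * (p / q).log())
    m = 0.5 * (p + q)
    js = 0.5 * torch.sum(p * (p / m).log()) + 0.5 * torch.sum(q * (q / m).log())
    return kl.item(), js.item()
```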
For the SFT model, the second setup is equivalent to continuing its own decoding behavior, whereas for the foundation model, it becomes decoding with the addition of a prior token.

Figure 2 illustrates the agreement between the foundation model's predictions and those of the SFT model regarding the selection of the next token, given an identical text prefix. Across the entire translation dataset, we observe that after incorporating merely one prior token, the foundation model exhibits a high degree of agreement with the SFT model in terms of token selection. This demonstrates that the alignment effect of SFT in cross-lingual generation tasks is also somewhat superficial. Even in instances where the token with the highest probability differs between the two models, 90.8% of the tokens chosen by the SFT model are present within the "silent majority" in the decision space of the foundation model, specifically, among the top 20 most probable token choices.

Lens of Distribution Instead of focusing on the coverage of token-selection outcomes, we also observe the decision dynamics and similarities from the perspective of the overall probability distribution, with the data settings consistent with the previous setup. First, as shown in Figure 3, after adding a prior token, the probabilities of the next tokens chosen by the two models have closely aligned distributions. The reason the foundation model exhibits a high probability given the instruction text as a prefix lies in its preference for continuing the instruction text rather than completing the cross-lingual semantic transformation. Additionally, we quantify the distribution disparities between the two models through the probability distribution over the vocabulary. The disparity metrics used include Kullback-Leibler (KL) divergence, Jensen-Shannon (JS) divergence, and cross-entropy (Kullback, 1997). As depicted in Figure 4, the disparity of the decision space of the foundation model significantly decreases after adding the prior token, aligning more closely with the SFT model. These findings indicate that such prior tokens serve a dual function: they not only steer the foundation model towards generating tokens pertinent to cross-lingual generation, but also modulate the decision space to align more closely with the task-specific distribution.

3 Pretty: Prefix TexT as a Yarn

3.1 Motivation

The observations discussed earlier confirm that SFT effectively narrows the decision space of the foundation model during text generation conditioned on instruction text. The disparity in token selection between the foundation LLM and the SFT LLM, however, might not need to be reduced through a training-based transfer methodology. By appending a prior token to the instruction text, the choices of the next token made by the two models tend to become largely consistent, and in the vast majority of cases, the tokens chosen by the SFT model are also found within the high-probability candidates of the foundation model. These phenomena show that the alignment elicited by SFT is somewhat superficial in cross-lingual generation tasks and motivate us to propose a training-free alignment method that leverages such prior tokens.

3.2 Formulation

Upon revisiting Equation (1) and Equation (2), the goal of a training-free approach is to enable the conditional decoding probabilities of the foundation model to approximate those of the SFT model. Therefore, ideally, the selected prior tokens X_pri. = {x_pri.} should satisfy the following criterion:

P(y_{PT} \mid [X_{ins.}, X_{pri.}]; \theta_{PT}) \approx P(y_{SFT} \mid X_{ins.}; \theta_{SFT})    (4)

where y_{PT} and y_{SFT} represent the outputs of the foundation and the SFT models, respectively.
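Operationally, the method reduces to concatenating the prior text to the instruction and letting the foundation model resume decoding. A minimal sketch under the Hugging Face Transformers API follows; the function name and prompt handling are our illustration, not the paper's released code.

```python
def pretty_generate(model, tok, instruction, prior_text, max_new_tokens=256):
    """Prior-token prompting: decode from [X_ins., X_pri.] with the foundation model."""
    prompt = instruction + " " + prior_text
    ids = tok(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**ids, max_new_tokens=max_new_tokens, do_sample=False)
    # Keep only the newly generated continuation; the prior tokens are part of the answer.
    completion = tok.decode(out[0][ids["input_ids"].shape[1]:], skip_special_tokens=True)
    return prior_text + completion

# Example usage (hypothetical prior obtained from an SFT model, a small task model,
# or a bilingual dictionary, as described in Section 3.3):
# pretty_generate(found_model, tok, "Translate the sentence into Ukrainian: ...", "Ми")
```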
It is important to note that a single prior token may not serve as an optimal solution due to its non-derivable characteristic. Hence, we extend our approach to appending multiple prior tokens, grouping them to form a prefix text.

3.3 Construction of Prior Tokens

To ensure that the proposed method is applicable to a wide array of languages, we propose three construction strategies based on the availability of language resources, aiming to guarantee the universality of our approach.

SFT Prior represents an ideal scenario where the first few tokens generated by an SFT model are used as priors. This method is theoretically rational when the SFT model is derived from the same foundation model, because it directly approximates Equation (4) by sampling x_pri. ~ {y_SFT}. In practical applications, this might be suitable for high-resource languages, given the imbalanced capabilities of SFT models in other languages. Additionally, SFT could potentially degrade the knowledge and abilities that the foundation model has already acquired; in such cases, using prior tokens from the SFT model can still contribute to generating better results. This situation will be discussed further in the subsequent section.

Refined Prior is more readily accessible for most languages and tasks. We can utilize the output tokens generated by a smaller model trained for a specific downstream task and use them as prior tokens to achieve weak-to-strong generalization (Burns et al., 2023).

Pseudo Prior For extremely low-resource language pairs, where there is no labeled data for downstream tasks, both SFT and Refined priors are difficult to obtain. For cross-lingual tasks, we can instead create pseudo labels in the target language as prior tokens. For instance, in machine translation tasks, we might use bilingual dictionaries to acquire pseudo prior tokens. However, the quality and accuracy of pseudo labels remain uncertain, and the extent of their impact on the generative performance of the foundation LLM is not yet clear. We explore this problem further in the context of the experimental results discussed later in the paper.
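For the Refined Prior, the prior text is simply the first few words of an output produced by a small task-specific model. A minimal sketch for machine translation, assuming the distilled NLLB-200 checkpoint named later in Section 4; the word-level truncation is our simplification of "first few tokens".

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

nllb_name = "facebook/nllb-200-distilled-600M"
nllb_tok = AutoTokenizer.from_pretrained(nllb_name, src_lang="eng_Latn")
nllb = AutoModelForSeq2SeqLM.from_pretrained(nllb_name)

def refined_prior(source_text, tgt_lang="ukr_Cyrl", k=2):
    """Translate with the small model and keep only the first k words as the prior."""
    ids = nllb_tok(source_text, return_tensors="pt")
    out = nllb.generate(
        **ids,
        forced_bos_token_id=nllb_tok.convert_tokens_to_ids(tgt_lang),
        max_new_tokens=64,
    )
    translation = nllb_tok.batch_decode(out, skip_special_tokens=True)[0]
    return " ".join(translation.split()[:k])
```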
4 Experiments

We examine the effectiveness of our proposed training-free alignment method on three distinct tasks: machine translation, cross-lingual summarization, and non-English POS tagging. Machine translation serves as a prototypical cross-lingual generation task, entailing the transformation of a sequence from a source language to a target language (Bahdanau et al., 2015; Vaswani et al., 2017; Zhan et al., 2023). Cross-lingual summarization requires the model to generate a summary of an article in a different language (Bhattacharjee et al., 2023; Chen et al., 2023). Although POS tagging (Manning, 2011; Nivre et al., 2017; Chiche and Yitagesu, 2022) primarily assesses the model's ability to understand monolingual text, we include it as a multilingual experiment to show the universality of our method.

4.1 Experimental Settings

Data We use Flores-101 (Goyal et al., 2022) and CrossSum (Bhattacharjee et al., 2023) as benchmarks for the machine translation and cross-lingual summarization tasks, respectively. For the POS tagging task, we choose the POS test split from the XGLUE benchmark (Liang et al., 2020), which is derived from the Universal Dependencies Treebank v2.5.

To investigate performance across languages with varying resource levels, we carefully selected eight languages based on the pre-training data proportions disclosed in the Llama2 technical report (Touvron et al., 2023). These languages are French, German, Chinese, Russian, Ukrainian, Portuguese, Hindi, and Arabic. Among these, the first four languages each account for more than 0.1% of the Llama2 pre-training data, Ukrainian and Portuguese fall below 0.1%, and Hindi and Arabic are below 0.05%. For the Llama2 model, we categorize these three groups as high-resource, low-resource, and extremely low-resource languages, respectively.

Models and Baselines The settings of the Llama2 foundation model and the SFT model are consistent with those described in Section 2.1. To further demonstrate the generality of our proposed method, we incorporate the Mistral-7B LLM family (Jiang et al., 2023) into our experiments, covering both the out-of-the-box SFT and foundation models. In the machine translation task, the Llama2 foundation model does not tend to generate translations when given explicit translation instructions. While this is a normal phenomenon according to our previous discussion, to ensure a fair comparison we also searched for a better prompt for the foundation model; this prompting approach is referred to as "Llama2-7BPROMPTING" in subsequent sections. For POS tagging, we experimented with various instructions and selected one that consistently prompts both the foundation model and the SFT model to reliably generate classification results in text. Although we report zero-shot performance for the aforementioned tasks, we found that even out-of-the-box SFT models cannot produce stable output for the cross-lingual summarization task. Hence, we prepend a constant demonstration before the input to also assess the effectiveness of our proposed method under the in-context learning paradigm (Dong et al., 2023c).

Sources of Prior Tokens The sources for crafting prior tokens include:

• SFT Prior: We take the first k tokens of the output produced by the SFT model as the prior tokens. When multiple SFT models are available, we select the model that demonstrates better performance.

• Refined Prior: We use downstream task models with smaller parameter sizes as the source of refined priors. For the different tasks, we utilize the distilled 600M variant of the NLLB-200 translation model (Costa-jussà et al., 2022; https://huggingface.co/facebook/nllb-200-distilled-600M), the mT5 cross-lingual summarization model (https://hf.co/csebuetnlp/mT5_m2m_crossSum), and the Unicoder-NLU model (Huang et al., 2019; https://github.com/microsoft/Unicoder/), respectively.

• Pseudo Prior: The pseudo prior is applied to the two cross-lingual tasks, since it can exploit cross-lingual language resources. We create pseudo prior tokens for the machine translation task by referencing dictionary entries (see Appendix B.4 for dictionary information). For cross-lingual summarization, we first extract keywords from each passage using KeyBERT (Grootendorst, 2020) and then perform word-by-word translation. However, not all initial sentence tokens are covered by the dictionary. To handle such instances, a back-off strategy is implemented, where the target-language equivalent of the first available dictionary token is used as the prior token (a minimal sketch of this back-off follows the list).
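A minimal sketch of the dictionary-based pseudo prior with the back-off strategy, assuming a simple word-to-word bilingual dictionary loaded as a Python dict; the dictionary format and helper name are our assumptions.

```python
def pseudo_prior(source_sentence, bilingual_dict, k=1):
    """Build pseudo prior tokens by dictionary lookup with back-off.

    Walks the source sentence from the start and returns the target-language
    equivalents of the first k words found in the dictionary, skipping words
    the dictionary does not cover.
    """
    priors = []
    for word in source_sentence.split():
        target = bilingual_dict.get(word.lower().strip(".,!?\"'"))
        if target:
            priors.append(target)
        if len(priors) >= k:
            break
    return " ".join(priors)

# e.g. pseudo_prior("We now have 4-month-old mice ...", {"we": "ми"}) -> "ми"
```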
English-Centric Models | En-Zh (spBL./CoM.) | En-Uk | Zh-En | Uk-En | Avg. | %SFT. (All)
Llama2-7B family:
Llama2-7B-Alpaca | 13.6/80.9 | 24.0/83.3 | 23.5/85.1 | 34.4/85.5 | 23.9/83.7 | -
Llama2-7B-Chat | 7.8/67.2 | 18.1/71.0 | 18.5/81.3 | 30.4/83.3 | 18.7/75.7 | -
Llama2-7BPROMPTING | 5.9/64.1 | 11.0/60.9 | 24.3/84.8 | 34.2/85.0 | 18.9/73.7 | 80.4
Llama2-7B | 7.7/72.0 | 0.2/32.4 | 12.0/74.4 | 9.3/59.2 | 7.3/59.5 | 52.5
+PRETTY (SFT Prior) | 13.3/80.0 | 23.0/83.1 | 23.7/84.9 | 33.6/85.3 | 23.4/83.3 | 98.8
+PRETTY (Pseudo Prior) | 12.0/75.7 | 18.1/74.1 | 16.9/80.3 | 27.2/78.3 | 18.6/77.1 | 85.4
+PRETTY (Refined Prior) | 14.2/80.5 | 24.1/83.8 | 24.0/84.9 | 34.6/85.6 | 24.2/83.7 | 100.9
Mistral-7B family:
Mistral-7B-Instruct | 6.6/64.6 | 20.3/78.2 | 20.5/83.2 | 32.9/84.8 | 20.1/77.7 | -
Mistral-7B | 1.2/42.6 | 0.3/30.8 | 19.9/77.1 | 21.5/69.4 | 10.7/55.0 | 46.2
+PRETTY (SFT Prior) | 13.8/78.1 | 23.1/79.2 | 20.0/82.3 | 32.1/83.3 | 22.3/80.7 | 117.2
+PRETTY (Pseudo Prior) | 13.3/75.8 | 20.1/75.7 | 16.5/79.7 | 24.9/77.3 | 18.7/77.1 | 107.2
+PRETTY (Refined Prior) | 15.9/81.3 | 24.9/82.9 | 21.5/83.0 | 32.3/83.9 | 23.7/82.7 | 124.6

Non-English-Centric Models | De-Fr (spBL./CoM.) | Fr-De | Zh-Pt | Pt-Zh | Avg. | %SFT. (All)
Llama2-7B family:
Llama2-7B-Alpaca | 29.8/81.5 | 24.1/80.9 | 16.6/81.4 | 11.3/78.6 | 20.5/80.6 | -
Llama2-7B-Chat | 6.2/68.0 | 7.3/64.5 | 3.0/67.8 | 6.2/66.6 | 5.7/66.7 | -
Llama2-7BPROMPTING | 22.2/77.4 | 15.4/73.3 | 14.4/78.9 | 4.4/64.1 | 14.1/73.4 | 78.5
Llama2-7B | 1.0/51.1 | 3.2/54.0 | 0.9/61.4 | 7.3/70.0 | 3.1/59.1 | 47.6
+PRETTY (SFT Prior) | 28.2/80.6 | 23.0/80.4 | 16.3/81.1 | 10.5/77.4 | 19.5/79.9 | 97.2
+PRETTY (Pseudo Prior) | 18.3/68.9 | 17.3/72.2 | 11.6/70.4 | 5.0/65.6 | 13.1/69.3 | 73.9
+PRETTY (Refined Prior) | 29.1/81.4 | 22.9/80.4 | 17.1/81.1 | 12.2/79.4 | 20.3/80.6 | 100.4
Mistral-7B family:
Mistral-7B-Instruct | 22.1/76.1 | 20.4/75.9 | 10.5/74.8 | 3.3/60.2 | 14.1/71.8 | -
Mistral-7B | 1.2/46.1 | 1.6/40.6 | 1.0/52.8 | 0.4/43.6 | 1.1/45.8 | 36.5
+PRETTY (SFT Prior) | 20.1/73.3 | 20.7/75.1 | 11.0/74.7 | 6.8/67.3 | 14.7/72.6 | 113.8
+PRETTY (Pseudo Prior) | 18.1/66.4 | 17.3/70.4 | 5.9/65.6 | 3.7/59.4 | 11.3/65.5 | 87.7
+PRETTY (Refined Prior) | 28.3/78.8 | 22.3/78.5 | 14.2/78.6 | 13.6/80.6 | 19.6/79.1 | 153.8

Table 1: Translation performance of different models on Flores-101 subsets (each cell shows spBLEU/COMET). Bold values in the original table indicate the best performance among foundation models; the overall best results are underlined. "%SFT." denotes the relative performance compared to the best SFT model of each family.

For the two cross-lingual tasks, the first k = 2 tokens are chosen as the prior tokens. This helps avoid inadequate guidance from a single non-informative token such as a punctuation mark or a number. In the case of the pseudo prior, due to the back-off strategy, only one token is used for a fair comparison. For the POS tagging task, the strategy is more straightforward, with only the first k = 1 label considered as the prior token.

4.2 Evaluation

To ensure the integrity of the output data from all models, we standardized the output by cleaning it in accordance with the specific output style of each model. Subsequently, we conducted a manual inspection to guarantee that only the required labels were retained.

Task-specific Metrics We use two metrics to evaluate translation quality: spBLEU (Goyal et al., 2022; https://github.com/mjpost/sacrebleu/) and COMET (Rei et al., 2020; https://github.com/Unbabel/COMET). We employ the ROUGE (Lin, 2004) and LaSE (Bhattacharjee et al., 2023) metrics for the evaluation of summarization quality. For the POS tagging task, we report both the precision score and the F1 score.

Relative Performance We further compute the ratio of the performance scores of the foundation model, under the application of different strategies, to the scores of the SFT model. This ratio serves as a metric for assessing the extent to which the foundation model approximates the SFT model's performance when different strategies are applied.
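The relative-performance score ("%SFT.") can be computed in a few lines. The exact aggregation is not spelled out in the text, so the averaging of per-direction, per-metric ratios below is our assumption.

```python
def relative_to_sft(foundation_scores, sft_scores):
    """%SFT.: mean ratio (in %) of the foundation model's scores to the reference
    SFT model's scores, taken over the paired per-direction/per-metric values."""
    ratios = [f / s for f, s in zip(foundation_scores, sft_scores) if s != 0]
    return 100.0 * sum(ratios) / len(ratios)

# e.g. relative_to_sft([7.7, 72.0], [13.6, 80.9])  # Llama2-7B vs. Alpaca on En-Zh
```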
Models | En-Zh (R2/RL/LS) | En-Hi | Uk-Pt | Ar-Ru | Avg. | %SFT. (All)
Llama2-7B w/ Constant 1-Shot Demonstration:
Llama2-7B-Alpaca | 7.0/12.4/11.9 | 1.7/10.7/17.3 | 1.5/6.1/5.8 | 0.1/0.5/1.3 | 2.6/7.4/9.1 | -
Llama2-7B-Chat | 6.3/11.6/8.7 | 1.5/11.7/27.1 | 2.5/8.3/7.1 | 0.0/0.3/0.2 | 2.6/8.0/10.7 | -
Llama2-7B | 9.3/16.6/29.2 | 1.6/10.2/15.3 | 0.8/4.0/1.9 | 0.6/4.1/15.5 | 3.1/7.6/12.1 | 262.4
+PRETTY (SFT Prior) | 7.4/13.9/25.9 | 1.5/9.7/12.9 | 1.9/6.7/9.8 | 0.1/0.4/0.8 | 2.7/6.7/9.8 | 106.3
+PRETTY (Pseudo Prior) | 8.0/14.5/29.1 | 1.4/9.9/14.5 | 2.5/9.1/13.6 | 1.2/5.9/23.5 | 3.3/8.5/15.4 | 387.5
+PRETTY (Refined Prior) | 11.2/19.0/32.6 | 1.6/10.8/15.9 | 3.4/10.5/11.3 | 1.5/7.9/30.1 | 4.4/10.5/17.5 | 490.6
Mistral-7B w/ Constant 1-Shot Demonstration:
Mistral-7B-Instruct | 5.9/12.2/17.2 | 1.0/10.3/23.4 | 1.5/6.2/17.7 | 0.4/2.6/12.8 | 2.2/7.8/17.8 | -
Mistral-7B | 12.3/20.9/44.5 | 1.6/10.6/17.6 | 4.8/12.9/27.7 | 1.8/6.5/23.3 | 5.1/11.2/21.6 | 206.1
+PRETTY (SFT Prior) | 9.7/17.6/40.7 | 1.4/10.0/17.0 | 2.3/7.9/17.5 | 0.2/1.1/3.2 | 3.4/8.0/15.0 | 114.5
+PRETTY (Pseudo Prior) | 9.9/17.5/41.0 | 1.4/9.9/17.4 | 3.1/11.6/35.1 | 1.7/7.9/32.9 | 4.0/10.2/23.5 | 195.8
+PRETTY (Refined Prior) | 15.0/24.1/49.6 | 1.8/11.3/19.7 | 5.5/16.5/46.9 | 2.6/10.9/42.0 | 6.2/13.8/29.7 | 275.6

Table 2: Summarization performance of different models on CrossSum subsets (each cell shows R2/RL/LS). "R2/RL" and "LS" refer to the ROUGE and LaSE scores, respectively. Bold values in the original table indicate the best performance among foundation models; the overall best results are underlined. "%SFT." denotes the relative performance compared to the best SFT model.

Models | Fr (Prec./F1) | Zh | Pt | Ru | Ar | Avg. Prec. | %SFT. (All)
Llama2-7B-Alpaca | 48.2/42.8 | 38.6/36.3 | 40.7/35.9 | 42.3/36.7 | 34.4/30.8 | 38.7 | -
Llama2-7B | 45.0/37.9 | 39.8/36.2 | 39.8/33.2 | 42.5/33.8 | 36.5/32.1 | 37.7 | 97.4
+PRETTY (SFT Prior) | 54.8/50.0 | 38.0/33.5 | 49.1/45.3 | 49.7/44.1 | 35.1/31.1 | 43.1 | 111
+PRETTY (Refined Prior) | 59.3/54.8 | 43.0/38.8 | 54.5/50.6 | 55.3/49.2 | 44.0/39.6 | 48.9 | 126

Table 3: POS tagging performance of different Llama2 models on XGLUE subsets (each cell shows Prec./F1). Bold values in the original table indicate the best performance among foundation models; the overall best results are underlined. "%SFT." denotes the relative performance compared to the Alpaca model.

4.3 Main Results

Machine Translation As shown in Table 1, for the machine translation task, using up to two prior tokens as decoding guidance allows the base model to achieve performance comparable to that of a model after SFT. Moreover, in some language pairs, the translation performance even outperforms the SFT model when guided by Refined Prior tokens from a smaller model. For the Llama2 model family, the prior tokens provided by the SFT model, although slightly less effective, still allow the foundation model to achieve 98% of the SFT model's performance. On the other hand, the use of pseudo labels derived from a dictionary exhibits the least effectiveness, yet this strategy still surpasses the results achieved through costly prompt engineering.

Cross-lingual Summarization The results presented in Table 2 indicate that the foundation model exhibits superior performance compared to the SFT model in this in-context learning scenario. For prior-guided decoding, the performance of the foundation model is degraded when using prefix tokens from the SFT model, and the small performance gap in this setting suggests that the alignment achieved by the SFT model is relatively "superficial".
Notably, the performance of the Llama2 foundation model significantly improved when the other priors were provided, even when using translated keywords as pseudo labels.

Non-English POS Tagging The results of the POS tagging task are presented in Table 3. They align with the insights gleaned from the machine translation task, specifically regarding the strategy of prior-token construction. Notably, for the POS tagging task, the performance of the SFT model on most languages falls short of the foundation model, suggesting that SFT detrimentally affects the knowledge learned at the pre-training stage. Encouragingly, the foundation model, empowered by an auxiliary prior token, surpasses the performance of the SFT model as well as its own prompting results, highlighting the potential of our proposed method in mitigating the catastrophic forgetting problem associated with SFT.

5 Analysis and Discussion

5.1 Quality of Prior Tokens

To investigate the quality of prior tokens from different sources and how they impact final performance, we further analyze why the prior tokens given by the SFT model are less effective than those from external auxiliary models in the POS tagging task. Unlike the machine translation task, the positional result for the POS task is definite, so we are able to verify whether it corresponds to a ground-truth label. The results in Table 4 confirm two points. First, even if the prior tokens provided by the SFT model are of low quality, the foundation model does not suffer from severe error propagation. Second, the final performance of the proposed method is still associated with the quality of the prior tokens. This suggests that prior tokens closely aligned with the ground truth can steer the foundation model towards a more accurate decision trajectory, thereby yielding superior performance.

Prior source | Fr | Zh | Pt | Ru | Ar
SFT Prior | 18.3 | 18.3 | 3.74 | 16.3 | 12.1
Refined Prior | 88.9 | 88.9 | 88.54 | 87.7 | 79.6

Table 4: Accuracy of prior tokens used in the POS tagging task. SFT prior tokens are of inferior quality.

5.2 Choice of Prior Tokens

Based on the findings from the previous section, if incorrect labels used as prior tokens can still elicit the ability of the foundation model, could random prior tokens in the target language also trigger cross-lingual generative capabilities? To investigate this, we attempted to use random tokens of different parts of speech as the prior tokens in the English-Chinese machine translation task. For instance, "Modal Prior" refers to the use of a randomly picked modal verb in Chinese as the initial token. The results shown in Table 5 indicate that the model could not be aligned to a better decision trajectory by these random prior tokens, whether they were function words or tokens with actual meaning. This supports the validity of our proposed methods for constructing prior tokens and also supplements the previous findings. From this, we can summarize a rule about prior tokens: they can be of low quality, but they should not be completely unrelated to the target sequence.

Model | spBLEU | COMET | BLEU
Llama2-7B | 7.7 | 72.01 | 16.1
+ Modal Prior | 8.0 | 68.29 | 16.0
+ Adverb Prior | 6.4 | 63.72 | 13.1
+ Random Prior | 6.2 | 57.11 | 11.5

Table 5: Comparison of translation performance using three types of random prior tokens.

5.3 Number of Prior Tokens

Figure 5 depicts the relationship between the number of prior tokens provided and the resulting changes in translation performance. It becomes apparent that performance generally improves with the addition of more tokens.
Additionally, we note that introducing two prior tokens appears to be a performance inflection point, which may be due to instances where the initial token is a punctuation mark or a number.

Figure 5: Impact of incrementally adding refined prior tokens on performance across Flores-101 subsets.

6 Conclusions

In this paper, we investigate and analyze the decision-making discrepancies between the foundation model and the SFT model within cross-lingual generation contexts. Drawing from our analysis, we introduce a novel cross-lingual alignment method that requires no additional training and is resource-efficient. The proposed method aligns the foundation LLM to perform comparably with the SFT model solely by utilizing prefix text as priors during generation. In the future, we aim to broaden our research to encompass additional alignment scenarios, such as those involving reinforcement learning from human feedback.

Limitations

The primary limitations of our study stem from the scope of model validation. Our research is limited to 7B models; future work should extend the validation to a broader range of models and incorporate various parameter scales to support the universality of our findings. Furthermore, the availability of language resources remains a practical problem, particularly for low-resource languages where access to SFT Prior and Refined Prior sources is limited. Despite these challenges, our experimental results indicate that Pseudo Prior tokens still exhibit promising potential. It is important to note, however, that the development of pseudo tags may require a dedicated investigation into the linguistic rules specific to each downstream task, a process that is inherently time-intensive and resource-demanding.

Acknowledgements

This work was supported in part by the Science and Technology Development Fund, Macau SAR (Grant Nos. FDCT/0070/2022/AMJ, FDCT/060/2022/AFJ), the Ministry of Science and Technology of China (Grant No. 2022YFE0204900), the National Natural Science Foundation of China (Grant No. 62261160648), the Multi-year Research Grant from the University of Macau (Grant No. MYRG-GRG2023-00006-FST-UMDF), and the Tencent AI Lab Rhino-Bird Gift Fund (Grant No. EF2023-00151-FST). This work was performed in part at SICC, which is supported by SKL-IOTSC, and at HPCC, supported by ICTO of the University of Macau." |