"gt": "Modern large language models (LLM) typically undergo additional fine-tuning to align with human expectations [29, 27, 28], including both supervised fine-tuning (SFT) on demonstration outputs [33, 40] and alignment training with human preference [5, 31]. Similar to the pre-training phase [15], the alignment of LLMs can also be continuously improved by increasing data and training steps [8, 5, 42]. However, in reality, alignment training is inevitably constrained by available resources and thus cannot grow indefinitely. Suppose we have a moderately-trained LLM in hand, is it possible to further exploit its potential and cheaply acquire a stronger model? We draw inspiration from the literature of model interpolation, also known as model/weight averaging. It aims to integrate different models fine-tuned from the same base model into a unified one by interpolating between their weights [38, 19, 41], relying on the mode connectivity of neural networks [11, 10]. Previous work showed that with the basic uniform interpolation (i.e., using the same interpolation ratio for all the model modules), the obtained new model usually achieves trade-off performance between the original ones [26, 41, 23]. We similarly observe this phenomenon when we interpolate between an SFT model and a model further trained by direct preference optimization (DPO) [31] or reinforcement learning from human feedback (RLHF) [49], as shown in Figure 2. Interpolation Extrapolation Figure 2: Calculating the reward scores (\u00a73.1) on the UltraFeedback [7] development set, we observe that model interpolation usually gives trade-off performance between the two original models (e.g., an SFT model and the model further trained by DPO/RLHF), as similarly observed in previous literature [26, 41, 23]. This observation motivates our proposal of EXPO that cheaply obtains a stronger model from weaker models via model extrapolation. We are intrigued by another question: If we treat the DPO/RLHF model as an intermediate result of interpolation between the SFT model and some stronger model, can we obtain this stronger model by reversely extrapolating from the former two models\u2019 weights? If so, we can actually start with two relatively weaker models from the training process and straightforwardly obtain a stronger one. As indicated by the gray arrow in Figure 2, this could also improve off-the-shelf already-aligned models, such as the many open-sourced LLMs on HuggingFace. Based on the above motivation, we propose a simple method called EXPO (model extrapolation) to boost LLMs\u2019 alignment with human preference (\u00a72). EXPO assumes a medium-aligned model M can be interpolated from a less-aligned (weaker) model Mw (e.g., the SFT model) and a better- aligned (stronger) one Ms. Then, we can directly obtain this stronger model Ms by extrapolating from the weights of the two relatively weaker models M and Mw, without any additional training on top of them. Despite its simplicity, we demonstrate that EXPO is quite effective in improving the alignment of various LLMs, as summarized in Figure 1. Specifically, for standard DPO training, we show that EXPO pushes the models trained with less data (e.g., 10% or 20%) to reach and even surpass the fully-trained one, as evaluated on the AlpacaEval 2.0 benchmark [22, 9] (\u00a73). Furthermore, EXPO also remarkably improves off-the-shelf DPO/RLHF models, by up to 6.8% on AlpacaEval 2.0 (\u00a74), and manifests satisfactory scalability across model sizes from 7B to 70B. 
Our work demonstrates model extrapolation as a promising method for boosting LLMs' alignment with human preference and better exploiting the capabilities of LLMs, which we believe deserves more future exploration.
"main_content": "2.1 Overview Inspired by the observation in Figure 2, we make the following assumption: A model M can be interpolated between a weaker model Mw and a stronger model Ms, which satisfy the relationship in terms of their alignment with human preference: Mw < M < Ms. Specifically, we suppose the medium-aligned model M (parameterized by \u03b8) to be one that has been moderately trained for human preference alignment. We also suppose the less-aligned weaker model Mw (parameterized by \u03b8w) simply to be the SFT model used for initializing M. The above assumption suggests that there exists a better-aligned stronger model Ms (parameterized by \u03b8s) and an interpolation coefficient \u03b3 \u2208[0, 1] such that: \u03b8 = (1 \u2212\u03b3)\u03b8w + \u03b3\u03b8s. (1) Here we consider the simplest form of uniform linear interpolation. With the substitution of \u03b1 = 1/\u03b3 \u22121 \u2208[0, +\u221e), we can obtain the assumed stronger model Ms by extrapolating from the weights of the relatively weaker Mw and M (i.e., weak-to-strong extrapolation). Our proposed EXPO method is formulated as follows: \u03b8s = (1 + \u03b1)\u03b8 \u2212\u03b1\u03b8w = \u03b8 + \u03b1(\u03b8 \u2212\u03b8w) = \u03b8 + \u03b1\u2206\u03b8, (2) where the coefficient \u03b1 serves as the hyperparameter that controls the length of extrapolation. In practice, \u03b1 can be cheaply tuned as a decoding hyperparameter (like the sampling temperature) on a development set with no model training involved. 2.2 Insights on EXPO Mw M e 3: EXPO can \u03b1\u2206\u03b8 \u2206\u03b8 Ms e view M M M Figure 3: EXPO can be viewed as a \u201cglobal gradient update\u201d that moves the model weight along the direction of \u2206\u03b8 in which the model\u2019s alignment with human preference is improved (measured by a reward score). We first use Figure 3 for an intuitive illustration of EXPO. Specifically, EXPO can be viewed as a \u201cglobal gradient update\u201d, based on the global weight change \u2206\u03b8 = \u03b8\u2212\u03b8w from the initial Mw to the final M. The weight change \u2206\u03b8 indicates a direction in the parameter space, in which the model\u2019s alignment with human preference is improved (measured by a reward score). Hence, EXPO essentially aims to amplify the learned reward signal through the extrapolation \u03b1\u2206\u03b8. \u201cglobal gradient update\u201d that moves the model weight along the direction of \u2206\u03b8 in which the model\u2019s alignment with human preference is improved (measured by a reward score). Based on the above illustration, we identify two prerequisites for EXPO. First, the model M should have not yet been trained to its optimality. This prerequisite is generally valid, as evidenced by the most powerful LLMs such as GPT-4 and Claude that are undergoing constant optimization for better alignment. We will show in \u00a74 that even the open-source models that have been extensively trained for human preference alignment still have significant room for further improvement. Second, also more importantly, the weight change \u2206\u03b8 from Mw to M should be of \u201chigh quality\u201d, meaning it should as accurately as possible indicate an extrapolation direction in which the alignment can get improved. 
2.2 Insights on EXPO

Figure 3: EXPO can be viewed as a "global gradient update" that moves the model weight along the direction of Δθ in which the model's alignment with human preference is improved (measured by a reward score).

We first use Figure 3 for an intuitive illustration of EXPO. Specifically, EXPO can be viewed as a "global gradient update" based on the global weight change Δθ = θ − θw from the initial Mw to the final M. The weight change Δθ indicates a direction in the parameter space along which the model's alignment with human preference is improved (measured by a reward score). Hence, EXPO essentially aims to amplify the learned reward signal through the extrapolation αΔθ.

Based on the above illustration, we identify two prerequisites for EXPO. First, the model M should not yet have been trained to optimality. This prerequisite is generally valid, as evidenced by the most powerful LLMs such as GPT-4 and Claude, which undergo constant optimization for better alignment. We will show in §4 that even open-source models that have been extensively trained for human preference alignment still have significant room for further improvement. Second, and more importantly, the weight change Δθ from Mw to M should be of "high quality", meaning it should indicate, as accurately as possible, an extrapolation direction in which alignment improves.

In mainstream preference alignment algorithms such as DPO or RLHF, this second prerequisite is also generally satisfied, as M is initialized from Mw (the SFT model) and is essentially trained to maximize the reward signal of human preference, learned either from preference data or from reward models. Nonetheless, the "quality" of Δθ can vary depending on the training configuration of M and the capability of Mw, as we will discuss in §3.3 and §4.2.

Combining the two prerequisites, when the model M initialized from its SFT checkpoint Mw has undergone moderate alignment training, it can potentially be further aligned by EXPO. We verify this experimentally in §3 and §4. However, other model combinations for Mw and M, such as a Base and an SFT model, or two separately trained RLHF models, usually cannot guarantee the second prerequisite. We discuss this further in §4.3 in conjunction with empirical results.

2.3 Highlights

We underline the following appealing properties of EXPO:

• Simplicity: EXPO is extremely simple and quick to implement. It merely involves performing extrapolation based on the weights of the two checkpoints Mw and M, which can be accomplished within just a few lines of code.
• Efficiency: EXPO needs no additional model training on top of Mw and M. The only hyperparameter α is also cheap to tune, as no training is involved. Moreover, we believe more efficient means of hyperparameter search can be developed in future work, as evidenced by the advances in adaptive model interpolation [17, 23].
• Scalability: EXPO is in principle applicable to various LLMs, including those of large sizes or those that have been extensively trained for human preference alignment. We show in §4 that EXPO can improve off-the-shelf already-aligned models of varying sizes and capabilities.

3 Experiments

We first demonstrate the effectiveness of EXPO in a controlled setting, i.e., training the model M with less preference data, so that we know in advance that M still has room for further improvement (corresponding to the first prerequisite in §2.2). We show that EXPO endows the models trained with less data (e.g., 10% or 20%) with performance equivalent or superior to the fully-trained one.

3.1 Experimental Setup

Models To train models for human preference alignment, we refer to the alignment handbook (https://github.com/huggingface/alignment-handbook) [36], a widely-used code base released by HuggingFace for alignment training of LLMs, and follow its setup for training the Mistral-based [20] zephyr-7b-sft-full and zephyr-7b-dpo-full models [37]. Specifically, we use the same preference dataset but varying data sizes to train the models. We employ the same mainstream DPO [31] algorithm for alignment training, where the SFT model zephyr-7b-sft-full serves as the reference model in DPO and is also used to initialize the policy models. We adopt the same hyperparameter configuration as zephyr-7b-dpo-full (see Appendix B) and train all the models on 4 A100 80GB GPUs. We use zephyr-7b-dpo-full as the fully-trained baseline (i.e., trained with 100% data).
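For reference, the core of the DPO objective used in this setup can be written in a few lines. The sketch below is a simplified illustration assuming per-sequence log-probabilities have already been computed for the preferred and dispreferred responses under the policy and the frozen reference (SFT) model; it is not the alignment-handbook training code itself.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss: widen the implicit reward margin of the preferred response over the dispreferred one.

    Each argument is a tensor of per-sequence log-probabilities of shape (batch,);
    beta is the DPO temperature (placeholder default).
    """
    # Implicit rewards are log-probability ratios against the frozen SFT reference model (M_w).
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximizing sigma(margin) is equivalent to minimizing -log sigma(margin).
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```

Training M this way from the Mw initialization is what makes the weight change Δθ = θ − θw point toward higher reward, which EXPO then amplifies.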
Data We use the same preprocessed UltraFeedback dataset (https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) [7] for DPO training. UltraFeedback is a large-scale preference dataset containing diverse instructions and response pairs with GPT-4-annotated preference labels, and it has been popularly used by the open-source community for training aligned LLMs [18, 37, 48]. The preprocessed version provided by HuggingFace contains 61K and 1K preference examples in the training and development sets, respectively. Each example consists of an instruction and a pair of responses, with one labeled as preferred.

Evaluation We evaluate the models on AlpacaEval 2.0 [22], a leading and popular benchmark that assesses LLMs' alignment with human preference. It contains 805 instructions representative of real user cases. For each instruction, the response of the evaluated model is compared head-to-head with that of the GPT-4 baseline. A GPT-4-based evaluator (gpt-4-1106-preview at the time of our work) produces the probability of preferring the evaluated model, which provides an affordable and replicable alternative to human preference annotation. The win rate over the GPT-4 baseline is then computed as the expected preference probability over all 805 instructions. AlpacaEval 2.0 has recently introduced the length-controlled (LC) win rate metric [9], which aims to alleviate the length bias of the GPT-4 evaluator (i.e., its prior preference toward longer responses) [30]. According to [9], the LC win rate currently has the highest correlation (a Spearman correlation of 0.98) with real-world human evaluation [47], which consolidates the reliability of AlpacaEval 2.0 evaluation.

For EXPO, we choose the optimal α based on performance on the UltraFeedback development set, as evaluated by an open-source reward model (https://huggingface.co/weqweasdas/RM-Mistral-7B). It ranks among the top models on RewardBench (https://huggingface.co/spaces/allenai/reward-bench) [21], a leaderboard that assesses the performance of reward models. More importantly, this reward model is not involved in either the preference annotation or the RLHF training of any of the models we experiment with in this work, which reduces the risk of reward hacking. In our experiments, we also report the average scores produced by this reward model on the 805 AlpacaEval 2.0 instructions as a reference.
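Since no training is involved, selecting α reduces to a small grid search: extrapolate, generate on the development prompts, and keep the α with the highest average reward. The sketch below illustrates this loop; the helpers passed in (build_expo_model, generate_responses, score_with_reward_model) are hypothetical placeholders for whatever extrapolation, decoding, and reward-scoring setup one uses, not functions defined in this paper.

```python
from typing import Callable, List

def select_alpha(candidate_alphas: List[float],
                 build_expo_model: Callable[[float], object],
                 generate_responses: Callable[[object, List[str]], List[str]],
                 score_with_reward_model: Callable[[List[str], List[str]], float],
                 dev_prompts: List[str]) -> float:
    """Pick the extrapolation coefficient alpha that maximizes the mean dev-set reward."""
    best_alpha, best_reward = candidate_alphas[0], float("-inf")
    for alpha in candidate_alphas:            # e.g., [0.1, 0.2, 0.3, 0.4, 0.5]
        model = build_expo_model(alpha)       # apply Equation 2 with this alpha
        responses = generate_responses(model, dev_prompts)
        mean_reward = score_with_reward_model(dev_prompts, responses)
        if mean_reward > best_reward:
            best_alpha, best_reward = alpha, mean_reward
    return best_alpha
```

Because each candidate α only requires decoding on the 1K-example development set, the search is closer in cost to tuning a sampling temperature than to any form of training.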
3.2 Results

Table 1: AlpacaEval 2.0 evaluation results of models trained with less preference data. The DPO models are all initialized from the SFT model zephyr-7b-sft-full.

                         Reward         Win Rate         LC Win Rate
SFT                      3.42           4.7%             8.7%
DPO (full data)          6.16           14.7%            17.3%
 + EXPO, no training     6.52 (+0.36)   18.0% (+3.3%)    20.2% (+2.9%)
DPO (10% data)           3.97           5.9%             10.4%
 + EXPO, no training     6.57 (+2.60)   17.9% (+12.0%)   16.3% (+5.9%)
DPO (20% data)           4.70           8.6%             12.9%
 + EXPO, no training     6.95 (+2.25)   22.7% (+14.1%)   21.3% (+8.4%)

Figure 4: The "quality" of Δθ and the effectiveness of EXPO can vary depending on the training configuration of M. Here, Δθ2 indicates a superior extrapolation direction to Δθ1.

In Table 1, we show the performance of the models trained with less (10% and 20%) preference data, as well as the results of further applying EXPO on top of them. As expected, training with less preference data results in lower-tier performance, as indicated by the LC win rates on AlpacaEval 2.0: compared to the 17.3% obtained with 100% of the data, using 10% and 20% achieves only 10.4% and 12.9%, respectively. After applying EXPO, however, using only 10% of the data achieves performance competitive with the fully-trained model (16.3% vs. 17.3%), while using 20% already surpasses it (21.3%), a remarkable advantage of 21.3% − 17.3% = 4%.

We also observe that the model trained with 20% data obtains a greater improvement from EXPO than the one trained with 10% data (8.4% vs. 5.9%). This implies that the former provides a better extrapolation direction Δθ than the latter, as illustrated in Figure 4. However, the "quality" of Δθ is not simply correlated with the amount of data: as shown in Table 1, using 20% data slightly outperforms using the full data when EXPO is applied to both (21.3% vs. 20.2%). This is because increasing the data size can also amplify the biases in the preference data, which the model M then becomes more likely to learn as shortcuts. We next analyze the impact of M's training configuration on Δθ in detail.

3.3 Analysis

Comparison between Data Sizes Figure 5 presents the reward scores and output lengths on the UltraFeedback development set versus the extrapolation coefficient α in EXPO. We make two main observations. First, for the models M trained with different data sizes, the optimal α of EXPO varies and generally decreases as the data size increases, as indicated by the vertical dashed lines in the left part of Figure 5. This is because a larger data size usually leads to more thorough convergence of training, even if only to a local optimum, which naturally narrows the viable range of the extrapolation coefficient α. Second, the globally optimal reward score (6.08) achieved by EXPO is obtained with a medium amount (20%) of training data, rather than with smaller (5% or 10%) or larger (40%) amounts. For the former (5% and 10% data), although EXPO significantly improves performance (from a reward score of 3.13 to 4.79, and from 3.59 to 5.82, respectively), the limited data still cannot provide an accurate Δθ, which caps the potential performance after model extrapolation. For the latter (40% data), we conjecture that the model may have learned the spurious features in the preference data as shortcuts, especially the length bias [30] (in the UltraFeedback training set, the average lengths of the preferred and unpreferred responses are 319 and 277 tokens, respectively). As shown in the right part of Figure 5, for the model trained with 40% data, even a very small α results in a dramatic increase in output length. In this case, Δθ becomes more likely to contain the spurious features, and the length bias in particular can be amplified by model extrapolation. Yet this does not lead to sustained performance improvement: the optimal rewards typically correspond to moderate output lengths between 500 and 600.

Figure 5: For the models M trained with varying data sizes, we plot the reward scores (left) and output lengths (right) on the UltraFeedback development set versus varying α values in EXPO.

Comparison with Hyperparameter Tuning As EXPO can be viewed as a "global gradient update" (§2.2), we also compare it with simply tuning the training hyperparameters. Specifically, we use the same 20% of training data but increase the learning rate or the number of training epochs. From the left part of Figure 6, we observe that increasing either hyperparameter indeed somewhat improves the original reward score. However, it remains inferior to the optimal reward score achieved by EXPO under the default configuration, and it also noticeably impairs the gains from model extrapolation (the peak points are lower than that of the default configuration).
This is probably because the model overfits the training data and similarly learns the spurious features (such as the length bias), thus failing to provide an accurate Δθ. The overfitting issue is also evidenced by the right part of Figure 6: the models trained with larger learning rates or for more epochs become prone to generating longer outputs at small α, yet obtain no noticeable reward improvement (left part of Figure 6), implying that Δθ very likely captures the spurious length feature rather than true human preference.

Figure 6: For the models trained using 20% data but with larger learning rates or for more epochs, we plot the reward scores (left) and output lengths (right) on the UltraFeedback development set versus varying α values in EXPO.

Based on the above empirical analysis, we emphasize the critical role of Δθ in EXPO. In particular, the "quality" of Δθ requires an appropriate choice of training configuration for M, covering both the preference data and the training hyperparameters. In §4.2, we further discuss the impact of Mw's capability on the effectiveness of EXPO.

4 Model Extrapolation Boosts Off-the-Shelf Models

We next demonstrate the efficacy of EXPO in improving off-the-shelf already-aligned LLMs from HuggingFace, based on their SFT and DPO/RLHF checkpoints. We particularly underscore the scalability of EXPO across different model sizes and capabilities.

4.1 Experimental Setup

When selecting open-source LLMs for experiments, we found that many well-known aligned LLMs, such as LLaMA-2/3 [35, 1], Gemma [34], and Qwen [4], do not release their corresponding SFT checkpoints. Such opacity hinders experimentation with these more representative models. To facilitate reproducible research, we select the following open-source DPO/RLHF models that (1) have publicly accessible SFT checkpoints, (2) have disclosed their training data, and (3) are popularly downloaded on HuggingFace or have been evaluated on the AlpacaEval 2.0 leaderboard:

• tulu-2-dpo-7/13/70b [18], a LLaMA-2-based model suite. Since the three model sizes undergo the same SFT and DPO training process (including both the data and the configuration), they serve as a reasonable testbed for the scalability of EXPO across model sizes.
• zephyr-7b-alpha/beta and zephyr-7b-dpo-full [37], three Mistral-based models. They are trained with different hyperparameter configurations and on slightly different preference data.
• Starling-LM-7B-alpha/beta [48], two Mistral-based models. They are trained with the RLHF algorithm using different reward models.

Similar to §3.1, we select the optimal α from [0.1, 0.2, 0.3, 0.4, 0.5] based on performance on the UltraFeedback development set, as evaluated by the aforementioned reward model.

4.2 Results

Table 2: AlpacaEval 2.0 evaluation results of off-the-shelf DPO/RLHF models. The scores of the first four (reference) models are copied from the official leaderboard. For the models that have been officially evaluated, we report the higher of our reproduced score (†) and the leaderboard score (‡).
                        Reward          Win Rate         LC Win Rate
Llama-2-70b-chat-hf                     13.9%            17.4%
gpt-3.5-turbo-0613                      14.1%            22.7%
Gemini Pro                              18.2%            24.4%
claude-2.1                              15.7%            25.3%
tulu-2-dpo-7b           5.09            8.5%†            10.2%†
 + EXPO                 5.42 (+0.33)    11.5% (+3.0%)    11.7% (+1.5%)
tulu-2-dpo-13b          5.37            11.2%†           15.5%†
 + EXPO                 5.89 (+0.52)    15.6% (+4.4%)    17.6% (+2.1%)
tulu-2-dpo-70b          5.84            16.0%‡           21.2%‡
 + EXPO                 6.12 (+0.28)    23.0% (+7.0%)    25.7% (+4.5%)
zephyr-7b-alpha         4.68            8.4%‡            10.3%‡
 + EXPO                 4.87 (+0.19)    10.6% (+2.2%)    13.6% (+3.3%)
zephyr-7b-beta          5.31            11.0%‡           13.2%‡
 + EXPO                 5.40 (+0.09)    11.1% (+0.1%)    14.0% (+0.8%)
zephyr-7b-dpo-full      6.16            14.7%            17.3%
 + EXPO                 6.52 (+0.36)    18.0% (+3.3%)    20.2% (+2.9%)
Starling-LM-7B-alpha    5.80            15.0%†           18.3%†
 + EXPO                 5.98 (+0.18)    18.2% (+3.2%)    19.5% (+1.2%)
Starling-LM-7B-beta     7.12            26.6%            25.8%
 + EXPO                 7.40 (+0.28)    29.6% (+3.0%)    26.4% (+0.6%)

The results in Table 2 demonstrate that EXPO enhances the performance of already-aligned LLMs, with impressive increases of up to 6.8% LC win rate and 10.5% basic win rate on AlpacaEval 2.0. The improvement holds across LLMs of various capabilities, from the weakest zephyr-7b-alpha and tulu-2-dpo-7b to the strongest Starling-LM-7B-beta and tulu-2-dpo-70b. This suggests that most open-source LLMs have not been optimally aligned with human preference, and that EXPO enables further exploitation of these models' capabilities. For the Tulu-2 model suite in particular, where the 7B/13B/70B models are trained with the same preference data and configuration, the enhancement from EXPO scales up nicely with model size. We conjecture that this is because a larger and stronger Mw learns the reward signal in the preference data or reward models better, leading to both a stronger M and a more accurate Δθ, which together result in a greater improvement for M after model extrapolation. Therefore, with the same preference data and training configuration, we optimistically expect the improvement from EXPO to also scale up as the capability of Mw increases.

4.3 Discussion

Finally, we discuss the impact of the choice of Mw and M on the effectiveness of EXPO. In the previous analyses and experiments, we choose Mw as an SFT model and M as the model further trained for human preference alignment on top of Mw. Can other types of combinations of Mw and M, such as a Base and an SFT model, or two separately trained RLHF models, produce meaningful extrapolated models? We experiment with the following combinations:

• Base + SFT: Mistral-7B-v0.1 [20] as Mw and Mistral-7B-Instruct-v0.1 as M.
• SFT 1 + SFT 2 (trained from different base models): Mistral-7B-Instruct-v0.1 as Mw and Mistral-7B-Instruct-v0.2 as M.
• SFT 1 + SFT 2 (same base): openchat_3.5 [39] as Mw and openchat-3.5-0106 as M.
• RLHF 1 + RLHF 2 (same base): gemma-7b-it [34] as Mw and gemma-1.1-7b-it as M. Note that it is not disclosed whether the two models are initialized from the same SFT model.

Figure 7: Reward scores of other types of model combinations on the UltraFeedback development set, with α varying from 0.1 to 0.5.

Figure 8: Extrapolation from two separately trained models may not improve alignment, as their weight difference (Δθ) usually cannot guarantee a direction along which the reward signal can be further amplified.
From the results shown in Figure 7, we find that extrapolating from two SFT models trained from different base models can easily lead to model collapse, probably because they do not meet the requirement of mode connectivity [11, 10], namely the same or close initialization. For the combination of a Base and an SFT model, extrapolation degrades performance. One cause is that the training from Base to SFT does not inherently reflect human preference, which is exactly why additional preference alignment training is needed. Another is that, compared to the Base model, the SFT one acquires instruction-following ability and is also adapted to specific input/output formats [45]. EXPO can amplify both learned features (§2.2), but the latter does not aid alignment and may instead similarly lead to model collapse. For the two separately trained SFT or RLHF models, we find that they also cannot benefit from model extrapolation. We speculate that this is because M is not initialized from Mw, so the path in parameter space from θw to θ does not follow a direction along which the reward signal can be amplified. As illustrated in Figure 8, even though M (θ2) has not yet reached optimality on its own optimization path, it still cannot be improved along the different direction Δθ. Overall, EXPO is currently applicable to the combination of an SFT model Mw and a model M further trained on top of it, which is a very realistic combination, as modern LLMs trained to align with human preference are almost all initialized from their SFT checkpoints.

5 Related Work

LLM Alignment Modern LLMs are typically first pre-trained on massive textual corpora (resulting in a Base model) [6, 35, 1] and then trained to align with human expectations [27, 28, 35]. The alignment process generally contains two stages. In the first stage, the LLM undergoes supervised fine-tuning (SFT) on demonstration outputs and learns to follow human instructions [40, 33]. In the second stage, the LLM is trained to learn human preference and to assign higher probabilities to human-preferred outputs over disfavored ones. This is usually implemented via reinforcement learning (RL) [29, 5] or contrastive learning [44, 46, 31], as exemplified by the reinforcement learning from human feedback (RLHF) [49] and direct preference optimization (DPO) [31] algorithms, respectively. Similar to the scaling law of the pre-training phase [15], recent work revealed that the capabilities of aligned models can be continually improved by scaling up the amount of alignment data [40, 33, 8] and increasing the training steps or iterations for human preference alignment [5, 42, 14]. However, the data and computation resources available in reality are always finite, which may prevent the full exploitation of models' capabilities. Our work proposes EXPO to boost LLMs' alignment with human preference in a simple, efficient, and scalable manner.

Model Merging and Interpolation Model merging has recently become a focal technique for building powerful LLMs from existing ones [2, 3]. It aims to integrate multiple models fine-tuned from the same base model into a unified one that retains their respective strengths [43, 12]. The simplest form of model merging is model interpolation, also known as model/weight averaging [26, 41, 23], which builds upon the mode connectivity of neural networks [11, 10].
In practice, uniform interpolation usually results in trade-off performance between the two original models, as observed in previous literature [26, 41, 23] and in our experiments in Figure 2. One approach to addressing this issue is to adaptively adjust the interpolation coefficient for different model modules (e.g., different model layers) [17, 23]. Our proposed EXPO method (§2) shares the idea of blending model weights to improve model capability, but operates under a distinct premise and goal: rather than integrating multiple strong models into a generalist, it uses two relatively weaker models to produce a stronger model that can even surpass the limits of the fully-trained one (§3 and §4).

6 Conclusion

We present EXPO, a simple method to boost LLMs' alignment with human preference. By extrapolating from the weights of an SFT model Mw and a further-trained one M, EXPO directly yields a better-aligned model without any additional training. We demonstrate the efficacy of EXPO across various LLMs, from those trained with limited preference data to off-the-shelf models from HuggingFace, where EXPO manifests decent scalability across model sizes and capabilities. Given its simplicity, efficiency, and scalability, we recommend EXPO as a promising approach for better exploiting LLMs' capabilities, which deserves more future exploration.

Limitations & Future Work Our work is limited by the public accessibility of SFT and DPO/RLHF model checkpoints; we are thus unfortunately unable to experiment with more representative LLMs such as LLaMA-2/3 [35, 1], Gemma [34], and Qwen [4]. We hope for more open-source efforts to increase LLMs' transparency and accessibility. Beyond the scope of this study, several problems may attract future research. First, since EXPO is based on the simplest uniform linear extrapolation (Equation 2, using the same α for all model modules), future work may devise methods to adaptively search for the optimal α of each model module. Second, while we currently rely on an external reward model for searching α, future work may remove this reliance by resorting to the capabilities of the models M and Mw themselves. Third, although our work provides intuitive illustrations of EXPO and empirically demonstrates its effectiveness, future work may establish theoretical explanations and analyses of its underlying mechanisms. Finally, it would also be interesting to apply EXPO to multi-modal LLMs like LLaVA [24] and to other model architectures like Mamba [13].

Acknowledgements

We thank the open-source community, including the HuggingFace, AllenAI, and Nexusflow teams, for promoting the transparency of LLMs by releasing model checkpoints and disclosing training details. This work would not be possible without these efforts from the open-source community. We thank Wei Xiong for releasing the reward models and for the valuable discussion.