AcademicEval / related_34K / test_related_short_2404.16385v1.json
[
{
"url": "http://arxiv.org/abs/2404.16385v1",
"title": "Efficiency in Focus: LayerNorm as a Catalyst for Fine-tuning Medical Visual Language Pre-trained Models",
"abstract": "In the realm of Medical Visual Language Models (Med-VLMs), the quest for\nuniversal efficient fine-tuning mechanisms remains paramount, especially given\nresearchers in interdisciplinary fields are often extremely short of training\nresources, yet largely unexplored. Given the unique challenges in the medical\ndomain, such as limited data scope and significant domain-specific\nrequirements, evaluating and adapting Parameter-Efficient Fine-Tuning (PEFT)\nmethods specifically for Med-VLMs is essential. Most of the current PEFT\nmethods on Med-VLMs have yet to be comprehensively investigated but mainly\nfocus on adding some components to the model's structure or input. However,\nfine-tuning intrinsic model components often yields better generality and\nconsistency, and its impact on the ultimate performance of Med-VLMs has been\nwidely overlooked and remains understudied. In this paper, we endeavour to\nexplore an alternative to traditional PEFT methods, especially the impact of\nfine-tuning LayerNorm layers, FFNs and Attention layers on the Med-VLMs. Our\ncomprehensive studies span both small-scale and large-scale Med-VLMs,\nevaluating their performance under various fine-tuning paradigms across tasks\nsuch as Medical Visual Question Answering and Medical Imaging Report\nGeneration. The findings reveal unique insights into the effects of intrinsic\nparameter fine-tuning methods on fine-tuning Med-VLMs to downstream tasks and\nexpose fine-tuning solely the LayerNorm layers not only surpasses the\nefficiency of traditional PEFT methods but also retains the model's accuracy\nand generalization capabilities across a spectrum of medical downstream tasks.\nThe experiments show LayerNorm fine-tuning's superior adaptability and\nscalability, particularly in the context of large-scale Med-VLMs.",
"authors": "Jiawei Chen, Dingkang Yang, Yue Jiang, Mingcheng Li, Jinjie Wei, Xiaolu Hou, Lihua Zhang",
"published": "2024-04-25",
"updated": "2024-04-25",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"label": "Original Paper",
"paper_cat": "Parameter AND Efficient AND Fine AND Tuning",
"gt": "2.1 Medical Visual Language Models In the medical domain, Med-VLMs play a pivotal role in automating visual-language tasks, such as VQA and IRG. Initially, these models [7, 10, 26, 31] leverage Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) to extract visual and linguistic features separately. Yet, such approaches frequently fell short in terms of generalizability and transferability across different tasks due to the limitations of their structure. Driven by learning-based technologies [9, 17, 23, 24, 39\u201349], modern Med-VLMs [2\u20134] primarily adopt the transformer architecture, following a pretrainingfinetuning paradigm. They undergo initial pretraining on extensive, generalized medical image-text pair datasets, followed by comprehensive fine-tuning on more focused, task-specific datasets. For example, MISS [3], utilizing the ALBEF [22] methodology, begins its training on 38,800 selectively curated image-text pairs from the MedICaT dataset[33] before undergoing fine-tuning for VQA tasks. Similarly, LLaVA-Med [19] employs a dual-phase pretraining strategy, starting with image-text feature alignment on two million pairs from PubMed, then enhancing conversational capabilities using instruction-format data, culminating in full-scale fine-tuning for VQA tasks. These approaches consistently rely on full-model fine-tuning for task adaptation, a method that, despite its efficacy, demands substantial resources, particularly for large-scale models such as LLaVA-Med. The restricted dataset sizes available for downstream task training further jeopardize the model\u2019s generalizability, leading to potential catastrophic forgetting and diminishing its broader applicability in medical contexts. 2.2 Efficient Fine-tuning Techniques The fine-tuning of large-scale Pre-trained Language Models (PLMs) is a demanding process [1, 51, 53], requiring extensive computational resources and data. To alleviate these burdens, PEFT techniques [11, 25, 30, 36, 38] have been introduced. These methods [13, 14, 29] typically incorporate trainable components into the PLMs while maintaining the rest of the model\u2019s parameters in a frozen state. Some strategies [18, 25, 30] also involve the nuanced manipulation of input embeddings across different layers to minimize or negate modifications to the original model\u2019s architecture. PEFT methods have demonstrated efficacy in transitioning largescale PLMs to new tasks or downstream applications and have been instrumental in converting LLMs into multimodal LLMs [5, 20, 34, 51, 53]. For instance, LLaVA [28] uses an MLP adapter to connect a vision feature extractor with a large language model, selectively training the MLP adapter while keeping both components static, thus adapting the LLM into a VLM. [52] introduces an efficient strategy where tuning LayerNorm layers suffices to yield strong performance to transform an LLM into an LVLM. Nonetheless, the capability of existing PEFT methods to efficiently adapt pre-trained VLMs to specialized, especially medical, tasks remains largely uninvestigated. With the diverse architectures of LVLMs, the most effective application of PEFT methods is uncertain, and their generalizability to non-textual encoders/decoders is limited [12] (e.g., prefix-tuning and p-tuning are not viable for Vision Transformers (ViT) [8]). Consequently, investigating the adjustment of a model\u2019s intrinsic parameters for efficient fine-tuning emerges as a critical necessity. 
In this paper, we propose a novel method that eschews adding components to the original model structure or input, focusing instead on fine-tuning the model\u2019s inherent parameters. This strategy is designed to ensure the method\u2019s broad applicability for efficient fine-tuning across various Med-VLM types.",
"pre_questions": [],
"main_content": "INTRODUCTION Visual language models (VLMs) have become pivotal in facilitating multimodal tasks within the medical domain, such as medical visual question answering (Med-VQA) and medical imaging report generation (Med-IRG). The pretraining-finetuning paradigm, heralded for its success in domain adaptation and transfer, now stands as the predominant training approach for Medical VLMs (Med-VLMs). Nonetheless, the substantial data and computational resource demands for VLMs\u2019 pretraining pose significant challenges. Despite the success of visual language pre-training paradigms like CLIP [32] and BLIP [21] fostering a series of open-source medical visual language pre-trained (VLP) models contributed by the community, adapting these models for specific downstream tasks remains a formidable task for those constrained by resource availability. Especially considering the inherent minor variability in medical imaging across different medical centers or imaging devices, which necessitates researchers to frequently and rapidly fine-tune medical VLP models (Med-VLPs). The recent surge in Large Visual Language Models (LVLMs) has exacerbated these challenges. Although a series of ParameterEfficient Fine-Tuning (PEFT) methods [13, 14, 25, 30] have been developed in the Large Language Model (LLM) domain, their applicability and effectiveness in the context of LVLMs are yet to be ascertained[2]. Some empirical studies[36] have shown that the performance of certain PEFT methods contradicts their demonstrated competitiveness in the original LLM domain when finetuning domain-specific LVLMs because fine-tuning parameters for different parts of the LVLM can have drastically different effects. Distinct from universal domains, the medical field presents unique arXiv:2404.16385v1 [cs.CV] 25 Apr 2024 Preprint, 2024, Conference Paper Jiawei Chen, et al challenges, such as limited dataset size and specialized data characteristics, which necessitate a tailored approach to fine-tuning. These domain-specific requirements underscore the need for a dedicated evaluation of PEFT methods on Med-VLMs to ensure their efficacy and appropriateness for medical tasks. Even so, research on the impact of imposing different PEFT methods on different scale Med-VLMs for different tasks remains completely blank. Meanwhile, current PEFT methods typically focus on domain adaptation by adding extra components to the model structure (i.e., Reparameterized Fine-tuning) [14, 50] or prefixes to the input (i.e., Additive Fine-tuning) [11, 25, 30], while the role of fine-tuning intrinsic structural parameters of models has been widely neglected, especially in vertical domain fine-tuning. As shown in Figure 1, extrinsic tuning methods focus on concatenating additional components to the dense layer or adding prefixes to the inputs while intrinsic tuning methods select the intrinsic units of the transformer to be tuned. In this paper, we focus on efficiently fine-tuning Med-VLPs for specific downstream tasks, aiming to explore an innovative method that achieves task adaptation by fine-tuning a small subset of the model\u2019s intrinsic parameters. To find a universally efficient finetuning method applicable to various types of Med-VLMs, regardless of their scale, we turn our attention to common foundational layers in transformer-based components, including Attention layers, Feed-Forward Neural Networks (FFN), and Layer Normalization (LayerNorm) layers. 
Through systematic experiments, we demonstrate that LayerNorm can serve as the most efficient method for fine-tuning Med-VLPs. To further explore the efficiency, adaptability, and performance of fine-tuning intrinsic parameters for transferring Med-VLPs to downstream tasks, we conduct extensive evaluations across both large-scale and small-scale Med-VLPs, encompassing core medical visual language tasks like Med-VQA and Med-IRG. Our findings expose the variability of intrinsic parameter fine-tuning methods in adapting Med-VLMs to a downstream task, which differs from traditional extrinsic parameter fine-tuning methods. We underscore not only the superior efficiency of LayerNorm fine-tuning over existing PEFT methods but also its remarkable adaptability and transferability for fine-tuning Med-VLPs across diverse downstream tasks. The main contributions of this article are as follows: \u2022 To the best of our knowledge, we are the first to centre on fine-tuning a small subset of the Med-VLP\u2019s inherent parameters to adapt to downstream tasks. \u2022 We conduct a comprehensive series of experiments fine-tuning foundational components of Med-VLMs, including systematic comparisons with existing PEFT methods centred on tuning extrinsic components. \u2022 Our research identifies LayerNorm fine-tuning as a highly efficient strategy for adapting Med-VLPs to specific downstream tasks within the medical domain. Contemporary generative Med-VLMs, irrespective of their scale, be it large-scale or small-scale, tend to follow a similar architectural framework. This typical structure comprises a vision feature extractor, a text feature extractor, a connector that integrates the former two, and a Language Model (LM) head. Most Med-VLMs opt for ViT [8] as the vision feature extractor, while the text encoder is based on mainstream frameworks such as BERT [6] or GPT [1]. Despite possible minor variations in their structural implementations, the transformer-based layer serves as their common denominator, with FFN, Attention mechanisms, and LayerNorm being indispensable core components. 3.2 Previous PEFT Methods Transitioning from the core mechanisms of attention and layer normalization, which provide stability and specificity within the model\u2019s architecture, we delve into the domain of extrinsic PEFT methods. These methods are categorized primarily into two types: Reparameterized Fine-tuning (i.e., LoRA (Low-Rank Adaptation)) and Additive Fine-tuning (i.e., Prefix-tuning). LoRA-Tuning: An LLM maps data into a high-dimensional space for processing. LoRA indirectly trains the dense layers in the network by optimizing rank-decomposition matrices of the changes that those layers undergo during adaptation, so that only the rank-decomposition matrices need to be optimized for fine-tuning. For the pretrained parameters \u03b8_0^D, the dense-layer weight matrix on a specific downstream task is defined as W_0 \u2208 R^{d\u00d7k} and its intrinsic rank is \u03b8^d; the parameters of the specific downstream task, \u03b8^D, are calculated as \u03b8^D = \u03b8_0^D + \u03b8^d M, where M is the rank-decomposition matrix.
For W_0 \u2208 R^{d\u00d7k}, LoRA updates it with the following equation: W_0 + \u0394W = W_0 + BA, B \u2208 R^{d\u00d7r}, A \u2208 R^{r\u00d7k}, (1) where d is the output dimension of the previous layer and k is the input dimension of the next layer. For an input x, the forward propagation is calculated as follows: h = W_0 x + \u0394W x = W_0 x + BAx. (2) Prefix-Tuning: Inspired by the In-Context Prompting method adopted by GPT-3 [1], Li et al. [25] propose the Prefix-tuning method for generation tasks. Instead of the discrete text used in prompt tuning, continuous vectors are prefixed to the input text. Specifically, the generation task is framed as a table-to-text task, where the input x is treated as a linearized table and the output y represents a short text. For an encoder-decoder model, different prefixes are attached to the beginning of the encoder and decoder inputs, with the input defined as z = [PREFIX, x, PREFIX\u2032], and the prefixes are generated by a trainable matrix P_\u03b8 \u2208 R^{|P_idx|\u00d7dim(h_i)}. The global training objective is defined as: max_\u03c6 log P_\u03c6(y|x) = max_\u03c6 \u2211_{i\u2208Y_idx} log P_\u03c6(z_i | h_{<i}). (3) 3.3 Medical Visual Language Tasks Medical Visual Question Answering: The primary objective of Med-VQA is to provide answers based on professional questions posed by the inquirer regarding medical images, enhancing the understanding of medical images, and facilitating patient care through the automated interpretation of visual data. The tasks are categorized into open-ended questions, which require detailed descriptive answers, and close-ended questions, which demand concise, often binary responses like \u201cyes\u201d or \u201cno\u201d. This interdisciplinary domain requires the Med-VLM to interpret and provide insights into complex medical imagery, such as X-rays, MRI scans, and CT images. Medical Imaging Report Generation: Medical IRG involves the automatic creation of textual descriptions for medical images using Med-VLMs. This task aims to analyze visual medical data and produce accurate, coherent, and clinically relevant reports. The goal is to assist radiologists and healthcare professionals by reducing their workload and improving diagnostic efficiency while maintaining high standards of patient care. 3.4 Difference between Universal VLMs and Med-VLMs\u2019 Downstream Tuning The differences between Med-VLMs and universal VLMs make it necessary to study the tuning of Med-VLMs separately, beyond simply reusing the PEFT approaches designed for universal VLMs.
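To make the LoRA reparameterization in Eqs. (1)-(2) concrete, below is a minimal PyTorch sketch; it is not the authors' implementation, and the layer size, rank r, and scaling factor alpha are illustrative assumptions. Only the low-rank factors A and B are trainable, while the pretrained dense weight W_0 stays frozen.

```python
# Minimal LoRA sketch for Eqs. (1)-(2): h = W0 x + B A x, with W0 frozen.
# Not the paper's code; dimensions, rank r, and alpha are illustrative assumptions.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # freeze W0 (and bias)
            p.requires_grad = False
        d, k = base.out_features, base.in_features
        self.A = nn.Parameter(torch.randn(r, k) * 0.01)  # A in R^(r x k)
        self.B = nn.Parameter(torch.zeros(d, r))         # B in R^(d x r); BA starts at zero
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * ((x @ self.A.T) @ self.B.T)  # W0 x + B A x

# Example: wrap the query projection of an attention block (value projection likewise).
q_proj = nn.Linear(768, 768)
q_lora = LoRALinear(q_proj, r=8)
out = q_lora(torch.randn(2, 16, 768))  # (batch, tokens, hidden)
```

Prefix-tuning, by contrast, would leave all weights frozen and instead learn the continuous prefix vectors that are concatenated to the keys and values, as in Eq. (3).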
From the data perspective, the datasets used for downstream-task fine-tuning in the medical domain are extremely narrow compared to the universal domain; for example, the current largest radiological image dataset used for the Med-VQA task includes only 14,028 image-question pairs, which makes the fine-tuning of LVLMs prone to overfitting and catastrophic forgetting. At the same time, the answers of the textual instruction pairs embedded in such datasets usually include only one or two simple words. Furthermore, the training loss currently used by generative models during fine-tuning makes it very easy to learn the data distribution mapping long question text to short answer text rather than the correct image-text association. From the model perspective, most Med-VLMs are obtained by transfer learning from VLMs in the universal domain, a process in which the visual encoder is usually frozen; however, the domain gap between natural images and medical images affects the performance of Med-VLMs on medical tasks, and most current PEFT methods are either not applicable to ViTs or have not demonstrated an effective impact on visual encoders when transferring from the general domain to the medical domain. Therefore, a separate study of efficient fine-tuning methods for Med-VLMs on downstream tasks is necessary. 4 TUNING SETTINGS 4.1 Baseline Model To explore a method that achieves task adaptation by fine-tuning a small subset of the model\u2019s own parameters, we choose two pre-trained Med-VLMs of different scales for different tasks: 1) the small-scale VLM MISS [3] and 2) the large-scale VLM LLaVA-Med [19], covering Med-VQA and Med-IRG. These baseline models cover generative Med-VLMs at different scales and for different tasks, so that we can provide comprehensive insights into the impact of different PEFT methods on fine-tuning Med-VLMs to downstream tasks. [Figure 2, right side: example responses generated under LayerNorm-, Attention-, FFN-, LoRA-, and Prefix-tuning for a report-generation instruction, an open-ended question, and a closed-ended question, together with the prompt given to the GPT-based evaluator.] Figure 2: The pipeline of our study. The flowchart details the step-by-step process from input reception to output generation, showcasing the model\u2019s method for processing medical images and questions to generate contextually relevant responses. The right side presents the fine-tuning results across different paradigms, including both Med-VQA and Med-IRG tasks. 4.2 Tuning within Transformer-based Layer The transformer-based layer, serving as a fundamental structure across vision encoders, language models, and certain VLM connectors, undergoes fine-tuning through both intrinsic parameter adjustments and the application of extrinsic component fine-tuning methods. This process aims to assess the impact of various tuning approaches on overall model performance. Strategic Intrinsic Adjustments: We emphasize the transformer-based layer\u2019s role as the computational core of the model and selectively fine-tune its intrinsic parameters. Attention layers, LayerNorm layers, and FFNs constitute the critical units of this layer.
As depicted in Figure 1, when one of these three components is set to be trainable, the remaining parameters within the transformer-based layer are kept frozen. Incorporation of PEFT Techniques: LoRA-tuning and Prefix-tuning, representing the forefront of PEFT methods, are chosen for comparison against intrinsic parameter adjustments. Figure 1 illustrates the application details of these PEFT methods within our study. For LoRA-tuning, low-rank matrices are selectively applied to the parameters of the query and value matrices within the attention layer, mapping data into a low-dimensional subspace for processing. In the case of Prefix-tuning, we follow prevalent practices by appending prefix vectors to the original input x of the key and value within the attention layer, and the final input embeddings can be defined as z = [PREFIX, x]. 4.3 Tuning within the VLM Architecture To uncover the nuanced impact of various modules within the full VLM\u2019s architecture on overall model performance, we selectively train specific modules within the VLM framework while applying efficient fine-tuning methods to certain components or keeping others frozen. Such a strategy allows us to dissect the individual contributions of each component to the model\u2019s efficacy in medical visual language tasks, offering insights into optimizing Med-VLMs for enhanced performance and efficiency. This selective fine-tuning approach aims to validate the hypothesis that certain components within the Med-VLM architecture wield more significant influence over the model\u2019s performance on medical tasks. By applying focused fine-tuning strategies to individual modules, we seek to delineate the performance impact of targeted adjustments versus broad model updates. Details for Small-scale Med-VLMs: For small-scale Med-VLMs, such as MISS, we experiment with applying efficient fine-tuning techniques to a subset of modules at a time, with the rest of the model\u2019s parameters remaining fully trainable. This is because, for small-scale Med-VLMs, freezing any module completely may make the model unable to transfer to downstream tasks [2]. For instance, when the language model undergoes LayerNorm-tuning (LN-tuning), the vision feature extractor, connector, and LM head are kept in a state that allows full parameter adjustments. This strategy allows us to compare the impact of different efficient fine-tuning methods on model performance while evaluating how fine-tuning the parameters of different modules of small-scale Med-VLMs affects overall task performance. Strategy for Large-scale Med-VLMs: In the case of large-scale Med-VLMs, like LLaVA-Med, our fine-tuning strategy is more nuanced, reflecting the diverse requirements of comparative analysis. As shown in Figure 2, \u201cSnowflakes or flames\u201d indicate that any module has the option to be adjusted or frozen. Depending on the specific experimental setup, modules within these larger models may be categorized into three states: fully trainable (T), efficiently fine-tuned on a subset of parameters (PEFT), and completely frozen (F).
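As a rough illustration of these settings, the sketch below freezes a model and re-enables gradients only for one intrinsic unit family, which is how LN-, Attention-, and FFN-tuning differ in practice. This is not the authors' code: the parameter-name patterns and the vision_tower/connector attribute names are hypothetical and depend on the concrete backbone.

```python
# Sketch of the intrinsic-tuning settings: exactly one family of units inside the
# transformer layers stays trainable. Name patterns below are assumptions and must
# be adapted to the actual backbone (BERT-style, LLaMA-style, ViT, ...).
import torch.nn as nn

PATTERNS = {
    "layernorm": ("layernorm", "layer_norm", "ln_"),   # LN-tuning
    "attention": ("attn", "attention"),                # Attention-tuning
    "ffn":       ("mlp", "ffn", "intermediate"),       # FFN-tuning
}

def set_intrinsic_trainable(model: nn.Module, unit: str = "layernorm") -> int:
    """Freeze everything, then unfreeze only the chosen intrinsic unit family."""
    keys = PATTERNS[unit]
    n_trainable = 0
    for name, param in model.named_parameters():
        param.requires_grad = any(k in name.lower() for k in keys)
        if param.requires_grad:
            n_trainable += param.numel()
    return n_trainable

# A module-level paradigm such as [F, T, PEFT, T] can then be composed by applying
# set_intrinsic_trainable to the language model while handling the other modules
# explicitly, e.g. (hypothetical attribute names):
#   for p in model.vision_tower.parameters(): p.requires_grad = False   # F
#   for p in model.connector.parameters():    p.requires_grad = True    # T
```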
This flexible approach comprehensively evaluates how different tuning states across various modules influence large-scale VLMs\u2019 performance on complex medical visual language tasks. 4.4 Downstream Fine-tuning Benchmarks: In this paper, we use a total of four datasets: the Slake dataset [27] and the VQA-RAD dataset [16] for Med-VQA, and the OpenI dataset [37] and the MIMIC dataset [15] for Med-IRG. The Slake dataset consists of 14,028 QA pairs, of which 70% are used for training, 15% for validation, and 15% for testing. The VQA-RAD dataset, used to assess the zero-shot performance of the model on the VQA task, includes 3,515 QA pairs, of which 451 pairs are used for testing. The OpenI dataset is used for training the Med-IRG task and includes 6,459 images and 3,955 reports; the instructions are the same as those adopted in [35]. The MIMIC test set was chosen for the evaluation of the model\u2019s Med-IRG performance and includes 5,159 images and 3,269 reports. For inference on the IRG task, we uniformly use the phrase \u201cDescribe the given chest x-ray image in detail.\u201d as the instruction. Application Details: For the fine-tuning of our chosen models (MISS and LLaVA-Med), a consistent set of hyperparameters is employed to ensure uniformity across our experiments. Each model is fine-tuned with an initial learning rate of 2e-5, utilizing the Adam optimizer for its well-regarded efficiency in handling the optimization landscapes of deep learning models. Specifically, MISS underwent training for 120 epochs with a batch size of 16, adopting a weight decay of 0.05 to encourage regularization. In contrast, LLaVA-Med\u2019s fine-tuning is characterized by a warmup ratio of 0.03 and a cosine learning rate scheduler, alongside specific adjustments such as enabling tensor float 32 for enhanced computational performance, and employing FSDP strategies for memory efficiency, with settings like \u201cfull_shard auto_wrap\u201d and targeting the \u201cLlamaDecoderLayer\u201d for wrapping. During all inferences, LLaVA-Med generates outputs using a set of predefined generation parameters, including sampling methods and beam search configurations (num-beams=1), and the temperature is kept at 0.2. Detailed information regarding the hyperparameter settings for each model, along with additional configurations and the rationale for their selection, is provided in the Appendix for further reference. 5 EXPERIMENT RESULTS AND DISCUSSION 5.1 Small-scale MISS Result As shown in Table 1, we employ both supervised fine-tuning (SFT) and performance assessment on the Slake dataset\u2019s training and testing sets. The reported performance metrics include accuracy rates for \u2018opened\u2019 and \u2018closed\u2019 types, i.e., open-ended and closed-ended questions, as well as a global accuracy rate that averages the performance across both types. Given the potentially catastrophic impact of freezing any module on the overall performance of small-scale models, when certain modules underwent efficient fine-tuning, the remaining modules were kept fully trainable. In the context of the MISS model, ViT, JTM, and DEC represent the visual encoder, joint text-multimodal encoder, and text decoder, respectively. The term \u201ctrainable params\u201d refers to the total volume of trainable model parameters, with \u201c#Params\u201d indicating the ratio of trainable to total parameters.
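For reference, the parameter statistics reported in the tables below (trainable params, #Params, PEFT params, #PEFT Params) can be computed with a few lines of PyTorch. This is a generic sketch rather than the authors' script, and the module handles passed in are placeholders:

```python
# Sketch of the parameter accounting used in Tables 1-2: "#Params" is the ratio of
# trainable to total parameters over the whole model; "#PEFT Params" is the ratio of
# PEFT-tuned parameters to the total parameters of the corresponding module.
import torch.nn as nn

def count_params(module: nn.Module):
    total = sum(p.numel() for p in module.parameters())
    trainable = sum(p.numel() for p in module.parameters() if p.requires_grad)
    return trainable, total

def report(model: nn.Module, peft_module=None):
    trainable, total = count_params(model)
    print(f"Trainable Params: {trainable:,} (#Params: {100 * trainable / total:.2f}%)")
    if peft_module is not None:
        peft_trainable, peft_total = count_params(peft_module)
        print(f"PEFT Params: {peft_trainable:,} "
              f"(#PEFT Params: {100 * peft_trainable / peft_total:.2f}%)")
```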
\u201cPEFT params\u201d denotes the proportion of parameters fine-tuned using PEFT methods, with \u201c#PEFT Params\u201d reflecting the proportion of PEFT-tuned parameters relative to the total parameters within the corresponding module. When the baseline model is fully fine-tuned, it achieves the highest open-ended question accuracy and global accuracy rates of 82.91% and 82%, respectively. Under the premise of maintaining the visual encoder fully trainable and only efficiently tuning one module at a time, keeping the JTM encoder fully trainable enabled the model to achieve optimal performance. Compared to scenarios where DEC underwent full parameter training while JTM was efficiently tuned, the model\u2019s global accuracy rates under LayerNorm, Attention, and FFN intrinsic parameter tuning methods were higher by 4%, 9%, and 12%, respectively. Maintaining the visual encoder fully trainable while efficiently tuning all the remaining modules resulted in significantly poor model performance, failing to correctly judge the close-ended questions under all the fine-tuning methods except Attention-tuning. Under the LayerNorm, FFN, LoRA, and Prefix-tuning methods, the model never answered \u2018yes\u2019 to any close-ended question, with accuracy rates lower than random guessing at 38.03%, 54.65%, 50.70%, and 32.95%, respectively. Comparing different fine-tuning methods, the effect of LN-tuning is remarkable, achieving the best accuracy on close-ended questions with the lowest PEFT Params, even surpassing full parameter tuning and reaching 84.51%. In contrast, although Attention-tuning and FFN-tuning slightly outperform LN-tuning in terms of global accuracy, this comes at the cost of tuning over 40% of the parameters in their respective modules. The model fine-tuned with LoRA under the [T, PEFT, T] paradigm tuned approximately five times more PEFT parameters than LN-tuning (only 56,823), and Prefix-tuning about twenty times more. This underscores the viability of LN-tuning as a method comparable to the most classical PEFT methods in small-scale fine-tuning scenarios that require saving a certain parameter volume. From a global parameter tuning perspective, Attention-tuning achieves performance closest to full fine-tuning while saving 23% of trainable parameters, marking it as another viable fine-tuning approach for small-scale Med-VLMs.
Table 1: Comparison of accuracy (ACC-%) of MISS on the Slake dataset using different methods of fine-tuning. \u2018T\u2019 stands for trainable while \u2018F\u2019 stands for frozen.
ViT | JTM | DEC | Opened \u2191 | Closed \u2191 | Global \u2191 | Trainable Params | #Params | PEFT Params | #PEFT Params
T | T | T | 82.91 | 81.47 | 82.00 | 361,478,972 | 100% | - | -
T | LayerNorm | LayerNorm | 40.79 | 38.03 | 39.87 | 86,454,528 | 23.92% | 115,200 | 0.04%
T | LayerNorm | T | 75.64 | 84.51 | 78.61 | 224,277,308 | 62.04% | 56,832 | 0.04%
T | T | LayerNorm | 73.65 | 77.46 | 74.93 | 223,656,192 | 61.87% | 58,368 | 0.04%
T | Attention | Attention | 64.51 | 74.65 | 71.25 | 199,806,720 | 55.27% | 113,467,392 | 41.24%
T | Attention | T | 78.47 | 85.92 | 80.96 | 280,954,172 | 77.72% | 56,733,696 | 41.33%
T | T | Attention | 75.50 | 64.23 | 71.72 | 280,331,520 | 77.55% | 56,733,696 | 41.15%
T | FFN | FFN | 74.79 | 54.65 | 68.05 | 199,677,696 | 55.24% | 113,338,368 | 41.19%
T | FFN | T | 76.63 | 84.51 | 79.26 | 280,889,660 | 77.71% | 56,669,184 | 41.27%
T | T | FFN | 76.20 | 49.86 | 67.39 | 280,267,008 | 77.53% | 56,669,184 | 41.10%
T | LoRA | LoRA | 68.14 | 50.70 | 62.29 | 86,929,152 | 24.05% | 589,824 | 0.21%
T | LoRA | T | 76.77 | 82.81 | 78.79 | 224,515,388 | 62.11% | 294,912 | 0.21%
T | T | LoRA | 78.52 | 79.44 | 78.83 | 223,892,736 | 61.94% | 294,912 | 0.21%
T | Prefix | Prefix | 41.50 | 32.95 | 38.61 | 115,884,288 | 32.06% | 29,544,960 | 10.74%
T | Prefix | T | 75.92 | 83.38 | 78.42 | 238,992,956 | 66.12% | 14,772,480 | 10.76%
T | T | Prefix | 76.82 | 82.25 | 78.65 | 238,370,304 | 65.94% | 14,772,480 | 10.71%
5.2 Large-scale LLaVA-Med Result Furthermore, we conduct comprehensive evaluations on LLaVA-Med, a large-scale model designated for Med-VQA tasks. Our approach encompassed four distinct training paradigms: [PEFT, F, PEFT, F], [PEFT, T, PEFT, T], [F, F, PEFT, F], and [F, T, PEFT, T]. Considering the substantial parameter size of LVLMs, we aimed to restrict the volume of fine-tuning parameters to within about 40%, thereby excluding full parameter training of the ViT and FFN-tuning methods that involve adjusting the ViT.
Table 2: Comparison of results of LLaVA-Med on the Slake dataset using different methods of fine-tuning.
Vision Tower | Connector | LLM | LM Head | Opened \u2191 | Closed \u2191 | Global \u2191 | #Params | Trainable Params | BERTScore-Recall | Mean Token
LayerNorm | F | LayerNorm | F | 59.53 | 69.95 | 63.62 | 3.79% | 266,737,664 | 46.35% | 28.27
F | F | LayerNorm | F | 58.76 | 69.71 | 63.05 | 0.00372% | 262,144 | 46.24% | 27.81
LayerNorm | T | LayerNorm | T | 59.84 | 67.55 | 62.87 | 3.79% | 266,737,664 | 46.93% | 26.81
F | T | LayerNorm | T | 60.31 | 66.11 | 62.58 | 3.78% | 266,637,312 | 46.93% | 26.50
Attention | F | Attention | F | 61.4 | 67.79 | 63.9 | 31.91% | 2,248,245,248 | 49.25% | 25.24
F | F | Attention | F | 61.71 | 68.03 | 64.18 | 30.48% | 2,147,483,648 | 49.11% | 25.95
Attention | T | Attention | T | 60.93 | 65.87 | 62.87 | 35.69% | 2,514,620,416 | 48.47% | 25.89
F | T | Attention | T | 58.76 | 66.83 | 61.92 | 34.26% | 2,413,858,816 | 48.49% | 25.85
F | T | FFN | T | 64.5 | 62.26 | 63.62 | 44.74% | 3,152,056,320 | 51.98% | 16.42
F | F | FFN | F | 64.34 | 66.59 | 65.22 | 40.96% | 2,885,943,296 | 52.07% | 17.37
F | F | LoRA | F | 58.14 | 64.42 | 60.6 | 0.14% | 9,994,240 | 47.55% | 25.43
F | T | LoRA | T | 58.76 | 65.38 | 61.36 | 3.92% | 276,369,408 | 47.26% | 25.72
F | F | Prefix | F | 56.9 | 67.07 | 60.89 | 15.48% | 1,090,805,760 | 46.19% | 26.61
F | T | Prefix | T | 59.22 | 70.19 | 63.52 | 19.26% | 1,357,180,928 | 46.28% | 26.58
Table 2 showcases the experimental results of LLaVA-Med, trained and tested on the Slake dataset, employing the aforementioned fine-tuning paradigms. When opting to keep both the connector and LM head trainable, the model\u2019s performance did not exhibit significant improvement, despite a substantial increase in the volume of adjusted parameters. Specifically, when fine-tuning adopted the [F, T, PEFT, T] paradigm, changes in global accuracy rates for LN, Attention, and FFN tuning compared to [F, F, PEFT, F] are -0.5%, -2.1%, and -3.3%, respectively. This contradicts the common notion that more parameter adjustments correlate with better SFT performance, indicating that full parameter adjustments of the connector and LM head during efficient fine-tuning of LLMs do not guarantee the expected outcomes. The performance changes are inconsistent under the [PEFT, F, PEFT, F] and [PEFT, T, PEFT, T] paradigms. For LN-tuning, fine-tuning the image encoder led to respective increases in global accuracy of 0.57% and 0.29%, while Attention-tuning resulted in changes of -0.28% and +0.85%.
Such subtle differences do not conclusively indicate whether adjusting the parameters of the image encoder benefits or hinders model performance, especially when considering recall metrics. The increase in ViT-adjusted parameter volume did not regularly alter recall, suggesting that larger adjustments to ViT parameters do not consistently improve model recall. Comparing different intrinsic parameter adjustments revealed that increasing the volume of fine-tuned parameters indeed enhances the model\u2019s recall of generated content: as the fine-tuned parameters shifted from 0.003% to 44%, recall correspondingly increased from 46.24% to 52.07%. This indicates that enlarging the volume of fine-tuned parameters allows the model to learn the distribution of ground-truth tokens in the vocabulary space more effectively, both quantitatively and spatially. However, considering accuracy, a gold standard in medical tasks, significant increases in parameter volume do not necessarily elevate all accuracy metrics concurrently. LN-tuning under the [PEFT, F, PEFT, F] paradigm once again achieved state-of-the-art (SOTA) accuracy for close-ended questions, which is consistent with observations in small-scale VLMs. Across two models of different scales, pre-trained on distinct datasets and tasks, LN-based fine-tuning consistently enhanced their accuracy on close-ended questions. (Figure 3: Loss curves for various methods under the [F, T, PEFT, T] fine-tuning paradigm.) While Attention-tuning and FFN-tuning marginally surpassed LN-tuning in global accuracy, achieving a peak open-ended question accuracy of 64.34% and a global accuracy of 65.22%, this came at the cost of escalating the volume of tuned parameters from 262,144 to 2,885,943,296, an increase of more than 10,000-fold. Furthermore, after reaching peak performance under the current fine-tuning paradigms, the model ceased learning intrinsic relations of the features, instead focusing on the quantitative distribution of ground-truth tokens. This shift manifested in minimal accuracy improvements and a dramatic reduction in average output length, with mean tokens dropping from 28.27 to 17.38. Figure 2 compares the generated outcomes across different intrinsic and extrinsic fine-tuning methods under the [F, F, PEFT, F] paradigm, illustrating this phenomenon driven by inherent large-model training patterns and suboptimal training data. Most datasets employed for fine-tuning Med-VQA tasks comprise answers in short text formats, with close-ended answers typically being \u2018yes\u2019 or \u2018no\u2019, and open-ended answers containing only a few words. For LLMs that are expected to produce interpretable answers, adjusting more parameters paradoxically impairs generative performance. When applying LoRA-tuning and Prefix-tuning to LLaVA-Med, the model\u2019s performance did not exhibit notable improvement. LoRA-tuning\u2019s recall reached 47.55% and 47.26%, indicating a closer alignment of the model\u2019s output with the ground-truth distribution in the vocabulary space. However, this did not translate to enhanced evaluation accuracy, with accuracies under the [F, F, PEFT, F] and [F, T, PEFT, T] fine-tuning paradigms reaching only 60.6% and 61.36%, respectively. This suggests that LoRA-tuning failed to deepen the multimodal model\u2019s understanding of joint image-text features, merely aligning the output closer to the ground-truth distribution.
Figure 3 displays the loss curves for various methods under the [F, T, PEFT, T] fine-tuning paradigm. Compared to LN-tuning, LoRA-tuning\u2019s minimum fine-tuned parameter volume is approximately fifty times larger, yet its accuracy is roughly 3% lower, with average output lengths of 25.43 and 25.72, trailing behind LN-tuning. These factors collectively indicate that existing PEFT methods may not directly enhance text-based accuracy in discerning medical images during multimodal downstream fine-tuning, underscoring the advantages of LN-tuning over traditional PEFT approaches.
Table 3: Comparison of LLaVA-Med competence on the MIMIC test dataset after SFT on the OpenI dataset.
Vision Tower | Connector | LLM | LM Head | METEOR Score | Rouge-L Recall | Rouge-L Precision | Rouge-L F1 | Mean Token
F | F | LayerNorm | F | 12.85% | 12.58% | 15.92% | 13.61% | 122.66
F | F | Attention | F | 12.85% | 12.58% | 15.92% | 13.61% | 122.66
F | F | FFN | F | 24.53% | 17.01% | 23.84% | 19.34% | 123.11
F | F | LoRA | F | 12.95% | 12.57% | 15.93% | 13.62% | 122.70
F | F | Prefix | F | 12.99% | 12.47% | 15.92% | 13.54% | 122.72
5.3 Large-scale VLM IRG Result To further explore the impact of various fine-tuning methods on the performance of large-scale Med-VLMs in the Med-IRG context, we employed the [F, F, PEFT, F] fine-tuning paradigm on the OpenI dataset to fine-tune LLaVA-Med. The model\u2019s OOD performance was then tested on the MIMIC dataset\u2019s test set to assess how it handles variations in input text domains. As shown in Table 3, the models fine-tuned with the LN, Attention, LoRA, and Prefix methods showed minimal differences in performance, with output text lengths averaging around 122.7. In contrast, FFN-tuning significantly outperformed the other fine-tuning approaches, demonstrating its superior capability in learning the underlying representations of the ground truth in long-text generation tasks like Med-IRG. 5.4 Out of Distribution Performance Testing To assess whether the performance of LLaVA-Med on a familiar dataset like Slake correlates with its performance on a novel dataset, we conducted OOD testing on the VQA-RAD dataset. This test serves to evaluate the model\u2019s robustness and flexibility by applying it to a different domain within the same field but with unseen data. More specifically, the images in the VQA-RAD dataset belong to a domain proximate to that of the Slake dataset, but the two datasets differ considerably in question formulation. Such experiments allow us to examine the ability of different fine-tuning methods to reason over non-proximate-domain text on similar medical images, in order to gauge how well models fine-tuned on a VQA dataset would apply to real Med-VQA scenarios. In this experiment, we observe various fine-tuning paradigms, focusing particularly on the role of the transformer-based LayerNorm, Attention, and FFN adjustments. The results show a notable variance in the model\u2019s ability to generalize the learned features to the VQA-RAD dataset. As Table 4 shows, fine-tuning methods that show comparable results on the Slake dataset exhibit significant performance variances on the OOD VQA-RAD dataset. Notably, under the [F, T, PEFT, T] training paradigm, LoRA-tuning underwent a remarkable reversal, surpassing the performances of Attention-tuning and FFN-tuning, which were previously effective on the Slake dataset. It achieves a global accuracy of 65.41% and matches the best close-ended question accuracy of 73.71%, initially noted with LN-tuning.
Conversely, FFN-tuning, despite being superior at learning adjacent training text representations, disappointed in its OOD performance. While it excels in Rouge-L metrics with scores of 26.35%, 9.80%, and 13.51%, significantly surpassing the other fine-tuning methods, it only managed accuracy scores of 56.50%, 64.54%, and 60.98%. Moreover, its mean output length plummeted to 18.51, the lowest among all methods. This combination of metrics further validates that although FFN-tuning can closely fit the training data during SFT, it predominantly learns the distribution of ground-truth tokens rather than enhancing the model\u2019s ability to generalize image-text reasoning.
Table 4: Comparison of LLaVA-Med competence on the VQA-RAD test dataset after SFT on the Slake dataset.
Vision Tower | Connector | LLM | LM Head | Opened \u2191 | Closed \u2191 | Global \u2191 | BERTScore Precision | BERTScore Recall | BERTScore F1 | METEOR Score | Rouge-L Recall | Rouge-L Precision | Rouge-L F1 | Mean Token
F | T | LayerNorm | T | 54.50 | 73.71 | 65.19 | 29.77% | 49.00% | 36.53% | 12.53% | 7.91% | 1.48% | 2.30% | 29.52
F | T | Attention | T | 50.50 | 63.35 | 57.65 | 29.97% | 49.27% | 36.73% | 11.87% | 7.97% | 1.46% | 2.30% | 31.12
F | T | FFN | T | 56.50 | 64.54 | 60.98 | 35.44% | 52.05% | 41.89% | 19.23% | 26.35% | 9.80% | 13.51% | 18.50
F | T | LoRA | T | 55.00 | 73.71 | 65.41 | 30.06% | 49.52% | 36.87% | 12.03% | 7.89% | 1.50% | 2.29% | 27.52
F | T | Prefix | T | 51.00 | 70.12 | 61.64 | 29.89% | 48.85% | 36.58% | 12.71% | 8.53% | 1.57% | 2.45% | 28.87
F | F | LayerNorm | F | 54.50 | 75.30 | 66.08 | 29.90% | 48.84% | 36.60% | 12.97% | 7.84% | 1.50% | 2.33% | 29.52
F | F | Attention | F | 55.50 | 71.71 | 64.52 | 30.25% | 49.64% | 37.08% | 8.44% | 8.44% | 1.66% | 2.58% | 29.45
F | F | FFN | F | 52.00 | 61.35 | 57.21 | 35.20% | 51.76% | 41.60% | 18.43% | 24.31% | 8.96% | 12.40% | 19.16
F | F | LoRA | F | 49.50 | 70.92 | 61.42 | 30.08% | 49.80% | 36.96% | 12.46% | 7.91% | 1.51% | 2.33% | 29.04
F | F | Prefix | F | 51.00 | 69.72 | 61.42 | 29.76% | 48.49% | 36.36% | 12.81% | 8.11% | 1.54% | 2.41% | 29.21
Table 5: Comparison of LLaVA-Med zero-shot competence on the MIMIC test dataset after SFT on the Slake dataset.
Vision Tower | Connector | LLM | LM Head | METEOR Score | Rouge-L Recall | Rouge-L Precision | Rouge-L F1 | Mean Token
F | T | LayerNorm | T | 11.86% | 11.27% | 18.54% | 13.54% | 71.78
F | T | Attention | T | 11.88% | 11.15% | 17.98% | 13.33% | 73.73
F | T | FFN | T | 12.12% | 11.24% | 18.07% | 13.42% | 73.23
F | T | LoRA | T | 11.57% | 10.99% | 17.91% | 13.18% | 71.51
F | F | LayerNorm | F | 12.07% | 11.40% | 18.26% | 13.56% | 75.00
F | F | Attention | F | 12.17% | 11.50% | 18.20% | 13.65% | 75.01
F | F | FFN | F | 12.81% | 11.48% | 17.84% | 13.53% | 76.88
F | F | LoRA | F | 11.91% | 11.25% | 18.07% | 13.40% | 72.93
A similar pattern is observed with Attention-tuning: its global accuracy decreases by 7.54% compared to LN-tuning when the connector and LM head are trained as well, although its output text length is the best in this setting. Taken together with the mean token length of Attention-tuning under the same paradigm in Table 1, this shows that Attention-tuning slightly overlearns the text of the adjacent domain during SFT, but does not greatly damage the diversity of the model\u2019s textual output when reasoning over non-adjacent-domain text. When enlarging the perspective to compare accuracy across the different fine-tuning methods, LN-tuning consistently displayed formidable strength. Under the [F, F, PEFT, F] tuning paradigm, LN-tuning, utilizing the smallest parameter adjustment, reached the highest scores in opened, closed, and global accuracy at 54.5%, 75.3%, and 66.08%, respectively. It also maintains a longer mean text output than any other method under the same tuning conditions.
These results, coupled with the Rouge-L metrics from Table 1, indicate that LN-tuning manages to enhance the model\u2019s understanding of multimodal feature interrelations, significantly minimizing the model\u2019s overemphasis on learning ground-truth text token distributions due to low training data quality. This is evidenced by the lowest recall rate of 1.5% and the highest global accuracy of 66.08%. In contrast, LoRA-tuning, despite adjusting 50 times more parameters, did not significantly outperform LN-tuning. 5.5 Zero-shot Capability Investigation To further investigate the zero-shot capabilities of different fine-tuning paradigms on Med-VLMs, we conduct extensive evaluations of the LLaVA-Med model on the MIMIC test dataset after SFT on the Slake dataset. This analysis aims to understand the impact of various intrinsic tuning methods on the model\u2019s ability to generalize and adapt to new tasks within the medical domain, particularly for IRG tasks. The evaluation employs metrics such as METEOR score, Rouge-L, and mean token length to measure factual accuracy, linguistic precision, and diversity of output in medical report generation. From Table 5, the experimental results indicate that different tuning methods exhibit varying impacts on the model\u2019s zero-shot performance. LN-tuning consistently showed robust performance across different configurations, achieving the highest precision scores (18.54% and 18.26% under different paradigms), which underscores its effectiveness in preserving the factualness of model outputs. In contrast, the Attention and FFN methods, although effective in some scenarios, demonstrate greater variability in their influence on model generalization. Notably, FFN-tuning, which previously excelled in VQA tasks, scored the lowest in precision (17.84%) under the [F, F, FFN, F] paradigm on the MIMIC dataset. This suggests that the model may have overlearned task-specific features from the Slake dataset, thus hindering its generalization and transfer capabilities. Furthermore, METEOR scores positively correlated with the number of adjusted parameters, increasing from 11.86% to a high of 12.81%, indicating that a larger volume of tuned parameters enhances the model\u2019s linguistic alignment capabilities in medical text generation tasks. Examining the effects of freezing versus tuning the connector and LM head reveals no clear pattern in performance metrics between the [F, T, PEFT, T] and [F, F, PEFT, F] configurations. Changes in Rouge-L score and precision are minimal, suggesting that extensive fine-tuning of the connector and LM head does not necessarily contribute to improved zero-shot performance across these metrics. Compared with intrinsic tuning, traditional methods such as LoRA-tuning did not exhibit standout performance in the zero-shot setting. Under the [F, T, PEFT, T] paradigm, LoRA-tuning shows lower METEOR scores, Rouge-L, and mean token length compared to the intrinsic methods, indicating that LoRA-tuning might not effectively maintain the overall transferability and generalization of the model in medical applications. Thus, intrinsic tuning methods, particularly LN-tuning with minimal parameter adjustments, might be a better choice, especially under the [F, F, PEFT, F] paradigm, where LN-tuning outperforms more parameter-intensive methods like Attention-tuning in maintaining generalization capabilities. These observations underscore the efficacy of LN-tuning in preserving the generalization of LVLMs for diverse medical tasks.
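As a point of reference for the report-generation metrics used in Sections 5.3-5.5, the sketch below computes Rouge-L recall/precision/F1 and mean output token length for a batch of generated reports. It is not the authors' evaluation script: the rouge-score package and whitespace tokenization are assumptions, and the paper may instead rely on a model tokenizer for token counts.

```python
# Sketch of two report-generation metrics used above: ROUGE-L (recall/precision/F1)
# and mean output token length. Assumes the third-party `rouge-score` package.
from rouge_score import rouge_scorer

def evaluate_reports(predictions, references):
    scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
    recalls, precisions, f1s, lengths = [], [], [], []
    for pred, ref in zip(predictions, references):
        s = scorer.score(ref, pred)["rougeL"]   # score(target, prediction)
        recalls.append(s.recall)
        precisions.append(s.precision)
        f1s.append(s.fmeasure)
        lengths.append(len(pred.split()))       # whitespace tokens as a rough proxy
    n = len(predictions)
    return {
        "rougeL_recall": 100 * sum(recalls) / n,
        "rougeL_precision": 100 * sum(precisions) / n,
        "rougeL_f1": 100 * sum(f1s) / n,
        "mean_token": sum(lengths) / n,
    }
```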
6 CONCLUSION This study presents a thorough examination of intrinsic parameter fine-tuning and exposes LN-tuning as a potent alternative to traditional PEFT methods for Med-VLMs. Our extensive experimental analysis across both small-scale and large-scale Med-VLMs demonstrated that fine-tuning the LayerNorm layers significantly enhances the models\u2019 adaptability, efficiency, and scalability in performing specialized medical tasks, such as Med-VQA and Med-IRG. We hope this work will enhance the clinical applicability of Med-VLMs in real-world medical settings. ACKNOWLEDGMENTS This work is supported in part by the National Key R&D Program of China (2021ZD0113503) and in part by the Shanghai Municipal Science and Technology Committee of Shanghai Outstanding Academic Leaders Plan (No. 21XD1430300). REFERENCES [1] Tom Brown, Benjamin Mann, Ryder, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems 33 (2020), 1877\u20131901. [2] Jiawei Chen, Yue Jiang, Dingkang Yang, Mingcheng Li, Jinjie Wei, Ziyun Qian, and Lihua Zhang. 2024. Can LLMs\u2019 Tuning Methods Work in Medical Multimodal Domain? arXiv preprint arXiv:2403.06407 (2024). [3] Jiawei Chen, Dingkang Yang, Yue Jiang, et al. 2024. MISS: A Generative Pretraining and Finetuning Approach for Med-VQA. arXiv preprint arXiv:2401.05163 (2024). [4] Zhihong Chen, Yuhao Du, Hu, et al. 2022. Multi-modal masked autoencoders for medical vision-and-language pre-training. In MICCAI. Springer, 679\u2013689. [5] W Dai, J Li, D Li, AMH Tiong, J Zhao, W Wang, B Li, P Fung, and S Hoi. [n. d.]. InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning. arXiv 2023. arXiv preprint arXiv:2305.06500 ([n. d.]). [6] Jacob Devlin, Ming-Wei Chang, Lee, et al. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018). [7] Tuong Do, Binh X. Nguyen, et al. 2021. Multiple Meta-model Quantifying for Medical Visual Question Answering. In MICCAI. Cham, 64\u201374. [8] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020). [9] Yangtao Du, Dingkang Yang, Peng Zhai, Mingchen Li, and Lihua Zhang. 2021. Learning Associative Representation for Facial Expression Recognition. In IEEE International Conference on Image Processing (ICIP). 889\u2013893. [10] Haifan Gong, Guanqi Chen, Mao, et al. 2022. VQAMix: Conditional Triplet Mixup for Medical Visual Question Answering. IEEE Transactions on Medical Imaging 41, 11 (2022), 3332\u20133343. https://doi.org/10.1109/TMI.2022.3185008 [11] Zeyu Han, Chao Gao, Jinyang Liu, Sai Qian Zhang, et al. 2024. Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey. arXiv preprint arXiv:2403.14608 (2024). [12] Xuehai He, Chunyuan Li, Pengchuan Zhang, Jianwei Yang, and Xin Eric Wang. 2023. Parameter-efficient model adaptation for vision transformers. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 37. 817\u2013825. [13] Neil Houlsby, Andrei Giurgiu, Jastrzebski, et al. 2019. Parameter-efficient transfer learning for NLP. In International Conference on Machine Learning. PMLR, 2790\u20132799.
[14] Edward J Hu, Shen, et al. 2021. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685 (2021). [15] Alistair EW Johnson, Tom J Pollard, Seth J Berkowitz, Nathaniel R Greenbaum, Matthew P Lungren, Chih-ying Deng, Roger G Mark, and Steven Horng. 2019. MIMIC-CXR, a de-identified publicly available database of chest radiographs with free-text reports. Scientific data 6, 1 (2019), 317. [16] Jason J Lau, Gayen, et al. 2018. A dataset of clinically generated visual questions and answers about radiology images. Scientific data 5, 1 (2018), 1\u201310. [17] Yuxuan Lei, Dingkang Yang, Mingcheng Li, Shunli Wang, Jiawei Chen, and Lihua Zhang. 2023. Text-oriented Modality Reinforcement Network for Multimodal Sentiment Analysis from Unaligned Multimodal Sequences. arXiv preprint arXiv:2307.13205 (2023). [18] Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:2104.08691 (2021). [19] Chunyuan Li, Cliff Wong, Zhang, et al. 2023. Llava-med: Training a large languageand-vision assistant for biomedicine in one day. arXiv preprint arXiv:2306.00890 (2023). [20] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. 2023. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597 (2023). [21] Junnan Li, Dongxu Li, Xiong, et al. 2022. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In ICCV. 12888\u201312900. [22] Junnan Li, Ramprasaath Selvaraju, Gotmare, et al. 2021. Align before fuse: Vision and language representation learning with momentum distillation. NIPS 34 (2021), 9694\u20139705. [23] Mingcheng Li, Dingkang Yang, Yuxuan Lei, Shunli Wang, Shuaibing Wang, Liuzhen Su, Kun Yang, Yuzheng Wang, Mingyang Sun, and Lihua Zhang. 2024. A Unified Self-Distillation Framework for Multimodal Sentiment Analysis with Uncertain Missing Modalities. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), Vol. 38. 10074\u201310082. [24] Mingcheng Li, Dingkang Yang, and Lihua Zhang. 2023. Towards Robust Multimodal Sentiment Analysis under Uncertain Signal Missing. IEEE Signal Processing Letters 30 (2023), 1497\u20131501. [25] Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190 (2021). [26] Bo Liu, Li-Ming Zhan, and Xiao-Ming Wu. 2021. Contrastive Pre-training and Representation Distillation for Medical Visual Question Answering Based on Radiology Images. In MICCAI 2021. Springer International Publishing, Cham, 210\u2013220. [27] Bo Liu, Li-Ming Zhan, Xu, et al. 2021. Slake: A semantically-labeled knowledgeenhanced dataset for medical visual question answering. In 2021 ISBI. 1650\u20131654. [28] Haotian Liu, Chunyuan Li, Wu, et al. 2023. Visual instruction tuning. arXiv preprint arXiv:2304.08485 (2023). [29] Shih-Yang Liu, Chien-Yi Wang, Hongxu Yin, Pavlo Molchanov, Yu-Chiang Frank Wang, Kwang-Ting Cheng, and Min-Hung Chen. 2024. DoRA: WeightDecomposed Low-Rank Adaptation. arXiv preprint arXiv:2402.09353 (2024). [30] Xiao Liu, Yanan Zheng, Zhengxiao Du, Ding, et al. 2023. GPT understands, too. AI Open (2023). [31] Binh D. Nguyen, Thanh-Toan Do, Binh X Nguyen, et al. 2019. Overcoming Data Limitation in Medical Visual Question Answering. In MICCAI. Cham, 522\u2013530. [32] Alec Radford, Jong Wook Kim, Hallacy, et al. 2021. 
Learning transferable visual models from natural language supervision. In International conference on machine learning. PMLR, 8748\u20138763. [33] Sanjay Subramanian, Lucy Lu Wang, Sachin Mehta, et al. 2020. MedICaT: A Dataset of Medical Images, Captions, and Textual References. arXiv preprint arXiv:2010.06000 (2020)."
},
{
"url": "http://arxiv.org/abs/2306.00890v1",
"title": "LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day",
"abstract": "Conversational generative AI has demonstrated remarkable promise for\nempowering biomedical practitioners, but current investigations focus on\nunimodal text. Multimodal conversational AI has seen rapid progress by\nleveraging billions of image-text pairs from the public web, but such\ngeneral-domain vision-language models still lack sophistication in\nunderstanding and conversing about biomedical images. In this paper, we propose\na cost-efficient approach for training a vision-language conversational\nassistant that can answer open-ended research questions of biomedical images.\nThe key idea is to leverage a large-scale, broad-coverage biomedical\nfigure-caption dataset extracted from PubMed Central, use GPT-4 to\nself-instruct open-ended instruction-following data from the captions, and then\nfine-tune a large general-domain vision-language model using a novel curriculum\nlearning method. Specifically, the model first learns to align biomedical\nvocabulary using the figure-caption pairs as is, then learns to master\nopen-ended conversational semantics using GPT-4 generated instruction-following\ndata, broadly mimicking how a layperson gradually acquires biomedical\nknowledge. This enables us to train a Large Language and Vision Assistant for\nBioMedicine (LLaVA-Med) in less than 15 hours (with eight A100s). LLaVA-Med\nexhibits excellent multimodal conversational capability and can follow\nopen-ended instruction to assist with inquiries about a biomedical image. On\nthree standard biomedical visual question answering datasets, LLaVA-Med\noutperforms previous supervised state-of-the-art on certain metrics. To\nfacilitate biomedical multimodal research, we will release our\ninstruction-following data and the LLaVA-Med model.",
"authors": "Chunyuan Li, Cliff Wong, Sheng Zhang, Naoto Usuyama, Haotian Liu, Jianwei Yang, Tristan Naumann, Hoifung Poon, Jianfeng Gao",
"published": "2023-06-01",
"updated": "2023-06-01",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.CL"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2010.06000v1",
"title": "MedICaT: A Dataset of Medical Images, Captions, and Textual References",
"abstract": "Understanding the relationship between figures and text is key to scientific\ndocument understanding. Medical figures in particular are quite complex, often\nconsisting of several subfigures (75% of figures in our dataset), with detailed\ntext describing their content. Previous work studying figures in scientific\npapers focused on classifying figure content rather than understanding how\nimages relate to the text. To address challenges in figure retrieval and\nfigure-to-text alignment, we introduce MedICaT, a dataset of medical images in\ncontext. MedICaT consists of 217K images from 131K open access biomedical\npapers, and includes captions, inline references for 74% of figures, and\nmanually annotated subfigures and subcaptions for a subset of figures. Using\nMedICaT, we introduce the task of subfigure to subcaption alignment in compound\nfigures and demonstrate the utility of inline references in image-text\nmatching. Our data and code can be accessed at\nhttps://github.com/allenai/medicat.",
"authors": "Sanjay Subramanian, Lucy Lu Wang, Sachin Mehta, Ben Bogin, Madeleine van Zuylen, Sravanthi Parasa, Sameer Singh, Matt Gardner, Hannaneh Hajishirzi",
"published": "2020-10-12",
"updated": "2020-10-12",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.CL"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2107.07651v2",
"title": "Align before Fuse: Vision and Language Representation Learning with Momentum Distillation",
"abstract": "Large-scale vision and language representation learning has shown promising\nimprovements on various vision-language tasks. Most existing methods employ a\ntransformer-based multimodal encoder to jointly model visual tokens\n(region-based image features) and word tokens. Because the visual tokens and\nword tokens are unaligned, it is challenging for the multimodal encoder to\nlearn image-text interactions. In this paper, we introduce a contrastive loss\nto ALign the image and text representations BEfore Fusing (ALBEF) them through\ncross-modal attention, which enables more grounded vision and language\nrepresentation learning. Unlike most existing methods, our method does not\nrequire bounding box annotations nor high-resolution images. In order to\nimprove learning from noisy web data, we propose momentum distillation, a\nself-training method which learns from pseudo-targets produced by a momentum\nmodel. We provide a theoretical analysis of ALBEF from a mutual information\nmaximization perspective, showing that different training tasks can be\ninterpreted as different ways to generate views for an image-text pair. ALBEF\nachieves state-of-the-art performance on multiple downstream vision-language\ntasks. On image-text retrieval, ALBEF outperforms methods that are pre-trained\non orders of magnitude larger datasets. On VQA and NLVR$^2$, ALBEF achieves\nabsolute improvements of 2.37% and 3.84% compared to the state-of-the-art,\nwhile enjoying faster inference speed. Code and pre-trained models are\navailable at https://github.com/salesforce/ALBEF/.",
"authors": "Junnan Li, Ramprasaath R. Selvaraju, Akhilesh Deepak Gotmare, Shafiq Joty, Caiming Xiong, Steven Hoi",
"published": "2021-07-16",
"updated": "2021-10-07",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.AI"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2303.16199v2",
"title": "LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention",
"abstract": "We present LLaMA-Adapter, a lightweight adaption method to efficiently\nfine-tune LLaMA into an instruction-following model. Using 52K self-instruct\ndemonstrations, LLaMA-Adapter only introduces 1.2M learnable parameters upon\nthe frozen LLaMA 7B model, and costs less than one hour for fine-tuning on 8\nA100 GPUs. Specifically, we adopt a set of learnable adaption prompts, and\nprepend them to the word tokens at higher transformer layers. Then, a\nzero-initialized attention mechanism with zero gating is proposed, which\nadaptively injects the new instructional cues into LLaMA, while effectively\npreserves its pre-trained knowledge. With our efficient training, LLaMA-Adapter\ncan generate high-quality responses, comparable to Alpaca with fully fine-tuned\n7B parameters. Besides language commands, our approach can be simply extended\nto multi-modal instructions for learning image-conditioned LLaMA model, which\nachieves superior reasoning performance on ScienceQA and COCO Caption\nbenchmarks. Furthermore, we also evaluate the zero-initialized attention\nmechanism for fine-tuning other pre-trained models (ViT, RoBERTa) on\ntraditional vision and language tasks, demonstrating the superior\ngeneralization capacity of our approach. Code is released at\nhttps://github.com/OpenGVLab/LLaMA-Adapter.",
"authors": "Renrui Zhang, Jiaming Han, Chris Liu, Peng Gao, Aojun Zhou, Xiangfei Hu, Shilin Yan, Pan Lu, Hongsheng Li, Yu Qiao",
"published": "2023-03-28",
"updated": "2023-06-14",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.AI",
"cs.CL",
"cs.LG",
"cs.MM"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2103.10385v2",
"title": "GPT Understands, Too",
"abstract": "Prompting a pretrained language model with natural language patterns has been\nproved effective for natural language understanding (NLU). However, our\npreliminary study reveals that manual discrete prompts often lead to unstable\nperformance -- e.g., changing a single word in the prompt might result in\nsubstantial performance drop. We propose a novel method P-Tuning that employs\ntrainable continuous prompt embeddings in concatenation with discrete prompts.\nEmpirically, P-Tuning not only stabilizes training by minimizing the gap\nbetween various discrete prompts, but also improves performance by a sizeable\nmargin on a wide range of NLU tasks including LAMA and SuperGLUE. P-Tuning is\ngenerally effective for both frozen and tuned language models, under both the\nfully-supervised and few-shot settings.",
"authors": "Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, Jie Tang",
"published": "2021-03-18",
"updated": "2023-10-25",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.LG"
],
"label": "Related Work"
},
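The P-Tuning record above combines trainable continuous prompt embeddings with the embeddings of a discrete prompt before a frozen language model reads them. Below is a minimal, hypothetical sketch of that combination in PyTorch; the small LSTM/MLP prompt encoder is one possible re-parameterisation choice, and all class names and dimensions are illustrative, not the paper's released code.

```python
# Minimal sketch: trainable continuous prompt vectors, optionally passed through a
# small prompt encoder, are concatenated with frozen discrete-prompt embeddings.
import torch
import torch.nn as nn

class ContinuousPrompt(nn.Module):
    def __init__(self, n_tokens: int = 8, d_model: int = 64):
        super().__init__()
        self.raw = nn.Parameter(torch.randn(n_tokens, d_model) * 0.02)
        # A small LSTM + MLP re-parameterises the prompt (an assumed design choice here).
        self.lstm = nn.LSTM(d_model, d_model, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                 nn.Linear(d_model, d_model))

    def forward(self, batch_size: int) -> torch.Tensor:
        h, _ = self.lstm(self.raw.unsqueeze(0))            # (1, n_tokens, d_model)
        return self.mlp(h).expand(batch_size, -1, -1)      # broadcast over the batch

prompt = ContinuousPrompt()
discrete_embeds = torch.randn(4, 16, 64)   # stand-in for frozen embeddings of a text template
lm_inputs = torch.cat([prompt(4), discrete_embeds], dim=1)
print(lm_inputs.shape)  # torch.Size([4, 24, 64])
```

Only the prompt module's parameters would be optimised; the backbone that consumes `lm_inputs` stays frozen.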
{
"url": "http://arxiv.org/abs/2312.11420v1",
"title": "Tuning LayerNorm in Attention: Towards Efficient Multi-Modal LLM Finetuning",
"abstract": "This paper introduces an efficient strategy to transform Large Language\nModels (LLMs) into Multi-Modal Large Language Models (MLLMs). By\nconceptualizing this transformation as a domain adaptation process, i.e.,\ntransitioning from text understanding to embracing multiple modalities, we\nintriguingly note that, within each attention block, tuning LayerNorm suffices\nto yield strong performance. Moreover, when benchmarked against other tuning\napproaches like full parameter finetuning or LoRA, its benefits on efficiency\nare substantial. For example, when compared to LoRA on a 13B model scale,\nperformance can be enhanced by an average of over 20% across five multi-modal\ntasks, and meanwhile, results in a significant reduction of trainable\nparameters by 41.9% and a decrease in GPU memory usage by 17.6%. On top of this\nLayerNorm strategy, we showcase that selectively tuning only with\nconversational data can improve efficiency further. Beyond these empirical\noutcomes, we provide a comprehensive analysis to explore the role of LayerNorm\nin adapting LLMs to the multi-modal domain and improving the expressive power\nof the model.",
"authors": "Bingchen Zhao, Haoqin Tu, Chen Wei, Jieru Mei, Cihang Xie",
"published": "2023-12-18",
"updated": "2023-12-18",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.CV"
],
"label": "Related Work"
},
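The record above (and the main paper's LN-tuning study) reduces fine-tuning to the LayerNorm parameters of an otherwise frozen model. As a rough illustration only, and not the authors' code, a minimal PyTorch sketch of that recipe with a toy Transformer standing in for the pre-trained backbone might look like this:

```python
# Minimal sketch: freeze a pre-trained backbone and leave only the LayerNorm
# affine parameters (weight/bias) trainable.
import torch
import torch.nn as nn

def freeze_all_but_layernorm(model: nn.Module) -> None:
    """Enable gradients only for LayerNorm weights/biases; freeze everything else."""
    for module in model.modules():
        is_ln = isinstance(module, nn.LayerNorm)
        for param in module.parameters(recurse=False):
            param.requires_grad = is_ln

layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
backbone = nn.TransformerEncoder(layer, num_layers=2)   # stand-in for a pre-trained model
freeze_all_but_layernorm(backbone)

trainable = sum(p.numel() for p in backbone.parameters() if p.requires_grad)
total = sum(p.numel() for p in backbone.parameters())
print(f"trainable parameters: {trainable} / {total}")   # only LayerNorm gains/biases remain

optimizer = torch.optim.AdamW(
    (p for p in backbone.parameters() if p.requires_grad), lr=1e-4
)
```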
{
"url": "http://arxiv.org/abs/2101.00190v1",
"title": "Prefix-Tuning: Optimizing Continuous Prompts for Generation",
"abstract": "Fine-tuning is the de facto way to leverage large pretrained language models\nto perform downstream tasks. However, it modifies all the language model\nparameters and therefore necessitates storing a full copy for each task. In\nthis paper, we propose prefix-tuning, a lightweight alternative to fine-tuning\nfor natural language generation tasks, which keeps language model parameters\nfrozen, but optimizes a small continuous task-specific vector (called the\nprefix). Prefix-tuning draws inspiration from prompting, allowing subsequent\ntokens to attend to this prefix as if it were \"virtual tokens\". We apply\nprefix-tuning to GPT-2 for table-to-text generation and to BART for\nsummarization. We find that by learning only 0.1\\% of the parameters,\nprefix-tuning obtains comparable performance in the full data setting,\noutperforms fine-tuning in low-data settings, and extrapolates better to\nexamples with topics unseen during training.",
"authors": "Xiang Lisa Li, Percy Liang",
"published": "2021-01-01",
"updated": "2021-01-01",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"label": "Related Work"
},
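The prefix-tuning record above keeps the language model frozen and optimises a small continuous prefix that later tokens can attend to as "virtual tokens". The sketch below is deliberately simplified: the actual method injects re-parameterised key/value prefixes at every layer, whereas here a single attention layer simply receives learned prefix states. Module names and sizes are assumptions.

```python
# Simplified sketch: learned "virtual token" states are prepended to the keys and
# values of a frozen attention layer, so real tokens can attend to them.
import torch
import torch.nn as nn

class PrefixedSelfAttention(nn.Module):
    def __init__(self, d_model: int = 64, n_heads: int = 4, prefix_len: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        for p in self.attn.parameters():      # pre-trained weights stay frozen
            p.requires_grad = False
        self.prefix_kv = nn.Parameter(torch.randn(prefix_len, d_model) * 0.02)  # only trainable part

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (batch, seq, d_model)
        prefix = self.prefix_kv.unsqueeze(0).expand(x.size(0), -1, -1)
        kv = torch.cat([prefix, x], dim=1)                 # tokens may attend to the prefix
        out, _ = self.attn(query=x, key=kv, value=kv, need_weights=False)
        return out

layer = PrefixedSelfAttention()
print(layer(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 10, 64])
```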
{
"url": "http://arxiv.org/abs/2010.11929v2",
"title": "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale",
"abstract": "While the Transformer architecture has become the de-facto standard for\nnatural language processing tasks, its applications to computer vision remain\nlimited. In vision, attention is either applied in conjunction with\nconvolutional networks, or used to replace certain components of convolutional\nnetworks while keeping their overall structure in place. We show that this\nreliance on CNNs is not necessary and a pure transformer applied directly to\nsequences of image patches can perform very well on image classification tasks.\nWhen pre-trained on large amounts of data and transferred to multiple mid-sized\nor small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision\nTransformer (ViT) attains excellent results compared to state-of-the-art\nconvolutional networks while requiring substantially fewer computational\nresources to train.",
"authors": "Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby",
"published": "2020-10-22",
"updated": "2021-06-03",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.AI",
"cs.LG"
],
"label": "Related Work"
},
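The ViT record above treats an image as a sequence of fixed-size patches fed to a standard Transformer. A minimal sketch of the patch-embedding step (a strided convolution acting as the linear patch projection, plus a class token and position embeddings) is shown below; the sizes follow the commonly quoted Base/16 defaults and are illustrative rather than the original codebase.

```python
# Minimal sketch: split an image into 16x16 patches, project each patch to a token,
# and prepend a class token with learned position embeddings.
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    def __init__(self, img_size=224, patch=16, in_ch=3, d_model=768):
        super().__init__()
        self.n_patches = (img_size // patch) ** 2
        # A conv with kernel=stride=patch is equivalent to flattening patches + a linear layer.
        self.proj = nn.Conv2d(in_ch, d_model, kernel_size=patch, stride=patch)
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))
        self.pos = nn.Parameter(torch.zeros(1, self.n_patches + 1, d_model))

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        x = self.proj(images).flatten(2).transpose(1, 2)   # (B, n_patches, d_model)
        cls = self.cls.expand(x.size(0), -1, -1)
        return torch.cat([cls, x], dim=1) + self.pos

tokens = PatchEmbedding()(torch.randn(2, 3, 224, 224))
print(tokens.shape)  # torch.Size([2, 197, 768])
```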
{
"url": "http://arxiv.org/abs/2402.09353v4",
"title": "DoRA: Weight-Decomposed Low-Rank Adaptation",
"abstract": "Among the widely used parameter-efficient finetuning (PEFT) methods, LoRA and\nits variants have gained considerable popularity because of avoiding additional\ninference costs. However, there still often exists an accuracy gap between\nthese methods and full fine-tuning (FT). In this work, we first introduce a\nnovel weight decomposition analysis to investigate the inherent differences\nbetween FT and LoRA. Aiming to resemble the learning capacity of FT from the\nfindings, we propose Weight-Decomposed LowRank Adaptation (DoRA). DoRA\ndecomposes the pre-trained weight into two components, magnitude and direction,\nfor fine-tuning, specifically employing LoRA for directional updates to\nefficiently minimize the number of trainable parameters. By employing DoRA, we\nenhance both the learning capacity and training stability of LoRA while\navoiding any additional inference overhead. DoRA consistently outperforms LoRA\non fine-tuning LLaMA, LLaVA, and VL-BART on various downstream tasks, such as\ncommonsense reasoning, visual instruction tuning, and image/video-text\nunderstanding. Code available at https://github.com/NVlabs/DoRA.",
"authors": "Shih-Yang Liu, Chien-Yi Wang, Hongxu Yin, Pavlo Molchanov, Yu-Chiang Frank Wang, Kwang-Ting Cheng, Min-Hung Chen",
"published": "2024-02-14",
"updated": "2024-04-28",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.CV"
],
"label": "Related Work"
},
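The DoRA record above describes splitting a pre-trained weight into magnitude and direction and applying a LoRA-style update to the direction only. The following is a simplified, hypothetical sketch of that decomposition for a single linear layer (per-column norms, zero-initialised low-rank delta); it is not the official implementation.

```python
# Simplified sketch: frozen weight -> per-column magnitude (trainable) + direction,
# with the direction updated by a low-rank B @ A delta and re-normalised.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DoRAStyleLinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.weight = nn.Parameter(base.weight.detach().clone(), requires_grad=False)
        self.bias = base.bias
        if self.bias is not None:
            self.bias.requires_grad = False
        out_f, in_f = base.weight.shape
        # Trainable magnitude, initialised to the column norms of the frozen weight.
        self.magnitude = nn.Parameter(self.weight.norm(p=2, dim=0, keepdim=True))
        # Low-rank directional update; B starts at zero so the initial delta is zero.
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        directional = self.weight + self.B @ self.A                     # (out_f, in_f)
        directional = directional / directional.norm(p=2, dim=0, keepdim=True)
        return F.linear(x, self.magnitude * directional, self.bias)

layer = DoRAStyleLinear(nn.Linear(32, 16))
print(layer(torch.randn(4, 32)).shape)  # torch.Size([4, 16])
```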
{
"url": "http://arxiv.org/abs/2106.09685v2",
"title": "LoRA: Low-Rank Adaptation of Large Language Models",
"abstract": "An important paradigm of natural language processing consists of large-scale\npre-training on general domain data and adaptation to particular tasks or\ndomains. As we pre-train larger models, full fine-tuning, which retrains all\nmodel parameters, becomes less feasible. Using GPT-3 175B as an example --\ndeploying independent instances of fine-tuned models, each with 175B\nparameters, is prohibitively expensive. We propose Low-Rank Adaptation, or\nLoRA, which freezes the pre-trained model weights and injects trainable rank\ndecomposition matrices into each layer of the Transformer architecture, greatly\nreducing the number of trainable parameters for downstream tasks. Compared to\nGPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable\nparameters by 10,000 times and the GPU memory requirement by 3 times. LoRA\nperforms on-par or better than fine-tuning in model quality on RoBERTa,\nDeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher\ntraining throughput, and, unlike adapters, no additional inference latency. We\nalso provide an empirical investigation into rank-deficiency in language model\nadaptation, which sheds light on the efficacy of LoRA. We release a package\nthat facilitates the integration of LoRA with PyTorch models and provide our\nimplementations and model checkpoints for RoBERTa, DeBERTa, and GPT-2 at\nhttps://github.com/microsoft/LoRA.",
"authors": "Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen",
"published": "2021-06-17",
"updated": "2021-10-16",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.LG"
],
"label": "Related Work"
},
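The LoRA record above freezes the pre-trained weights and injects trainable rank-decomposition matrices. Below is a minimal, hypothetical PyTorch sketch of that idea as a wrapper around one linear layer; the rank, scaling, and initialisation choices are illustrative and this is not the released package.

```python
# Minimal sketch: y = W x + (alpha / r) * B A x, where W (and bias) are frozen and
# only the low-rank factors A and B are trained (B = 0 so the update starts at zero).
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        out_f, in_f = base.weight.shape
        self.A = nn.Parameter(torch.empty(rank, in_f))
        self.B = nn.Parameter(torch.zeros(out_f, rank))
        nn.init.kaiming_uniform_(self.A, a=math.sqrt(5))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * F.linear(F.linear(x, self.A), self.B)

layer = LoRALinear(nn.Linear(64, 64), rank=4)
print(layer(torch.randn(2, 64)).shape)  # torch.Size([2, 64])
```

Because the update is additive, B @ A can be merged into the frozen weight after training, which is why this family of methods adds no inference latency.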
{
"url": "http://arxiv.org/abs/2203.16329v3",
"title": "Parameter-efficient Model Adaptation for Vision Transformers",
"abstract": "In computer vision, it has achieved great transfer learning performance via\nadapting large-scale pretrained vision models (e.g., vision transformers) to\ndownstream tasks. Common approaches for model adaptation either update all\nmodel parameters or leverage linear probes. In this paper, we aim to study\nparameter-efficient model adaptation strategies for vision transformers on the\nimage classification task. We formulate efficient model adaptation as a\nsubspace training problem and perform a comprehensive benchmarking over\ndifferent efficient adaptation methods. We conduct an empirical study on each\nefficient model adaptation method focusing on its performance alongside\nparameter cost. Furthermore, we propose a parameter-efficient model adaptation\nframework, which first selects submodules by measuring local intrinsic\ndimensions and then projects them into subspace for further decomposition via a\nnovel Kronecker Adaptation (KAdaptation) method. We analyze and compare our\nmethod with a diverse set of baseline model adaptation methods (including\nstate-of-the-art methods for pretrained language models). Our method performs\nthe best in terms of the tradeoff between accuracy and parameter efficiency\nacross 20 image classification datasets under the few-shot setting and 7 image\nclassification datasets under the full-shot setting.",
"authors": "Xuehai He, Chunyuan Li, Pengchuan Zhang, Jianwei Yang, Xin Eric Wang",
"published": "2022-03-29",
"updated": "2023-07-13",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.AI"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2305.08381v3",
"title": "Parameter-efficient Tuning of Large-scale Multimodal Foundation Model",
"abstract": "Driven by the progress of large-scale pre-training, parameter-efficient\ntransfer learning has gained immense popularity across different subfields of\nArtificial Intelligence. The core is to adapt the model to downstream tasks\nwith only a small set of parameters. Recently, researchers have leveraged such\nproven techniques in multimodal tasks and achieve promising results. However,\ntwo critical issues remain unresolved: how to further reduce the complexity\nwith lightweight design and how to boost alignment between modalities under\nextremely low parameters. In this paper, we propose A graceful prompt framework\nfor cross-modal transfer (Aurora) to overcome these challenges. Considering the\nredundancy in existing architectures, we first utilize the mode approximation\nto generate 0.1M trainable parameters to implement the multimodal prompt\ntuning, which explores the low intrinsic dimension with only 0.04% parameters\nof the pre-trained model. Then, for better modality alignment, we propose the\nInformative Context Enhancement and Gated Query Transformation module under\nextremely few parameters scenes. A thorough evaluation on six cross-modal\nbenchmarks shows that it not only outperforms the state-of-the-art but even\noutperforms the full fine-tuning approach. Our code is available at:\nhttps://github.com/WillDreamer/Aurora.",
"authors": "Haixin Wang, Xinlong Yang, Jianlong Chang, Dian Jin, Jinan Sun, Shikun Zhang, Xiao Luo, Qi Tian",
"published": "2023-05-15",
"updated": "2023-10-28",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2304.08485v2",
"title": "Visual Instruction Tuning",
"abstract": "Instruction tuning large language models (LLMs) using machine-generated\ninstruction-following data has improved zero-shot capabilities on new tasks,\nbut the idea is less explored in the multimodal field. In this paper, we\npresent the first attempt to use language-only GPT-4 to generate multimodal\nlanguage-image instruction-following data. By instruction tuning on such\ngenerated data, we introduce LLaVA: Large Language and Vision Assistant, an\nend-to-end trained large multimodal model that connects a vision encoder and\nLLM for general-purpose visual and language understanding.Our early experiments\nshow that LLaVA demonstrates impressive multimodel chat abilities, sometimes\nexhibiting the behaviors of multimodal GPT-4 on unseen images/instructions, and\nyields a 85.1% relative score compared with GPT-4 on a synthetic multimodal\ninstruction-following dataset. When fine-tuned on Science QA, the synergy of\nLLaVA and GPT-4 achieves a new state-of-the-art accuracy of 92.53%. We make\nGPT-4 generated visual instruction tuning data, our model and code base\npublicly available.",
"authors": "Haotian Liu, Chunyuan Li, Qingyang Wu, Yong Jae Lee",
"published": "2023-04-17",
"updated": "2023-12-11",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.AI",
"cs.CL",
"cs.LG"
],
"label": "Related Work"
},
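The LLaVA record above connects a vision encoder and an LLM for end-to-end visual and language understanding. As a rough, hypothetical sketch of such a connector, the code below projects frozen image features into the language embedding space and prepends them to the text embeddings; the dimensions and module names are assumptions, not the released model.

```python
# Hypothetical sketch: map frozen vision features into the LM embedding space and
# concatenate them with text embeddings before the (frozen or fine-tuned) LM.
import torch
import torch.nn as nn

class VisionLanguageConnector(nn.Module):
    def __init__(self, vision_dim: int = 1024, lm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Linear(vision_dim, lm_dim)   # the newly introduced, trainable piece

    def forward(self, image_feats: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
        # image_feats: (B, n_patches, vision_dim); text_embeds: (B, seq, lm_dim)
        visual_tokens = self.proj(image_feats)
        return torch.cat([visual_tokens, text_embeds], dim=1)

connector = VisionLanguageConnector()
fused = connector(torch.randn(2, 256, 1024), torch.randn(2, 32, 4096))
print(fused.shape)  # torch.Size([2, 288, 4096])
```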
{
"url": "http://arxiv.org/abs/2104.08691v2",
"title": "The Power of Scale for Parameter-Efficient Prompt Tuning",
"abstract": "In this work, we explore \"prompt tuning\", a simple yet effective mechanism\nfor learning \"soft prompts\" to condition frozen language models to perform\nspecific downstream tasks. Unlike the discrete text prompts used by GPT-3, soft\nprompts are learned through backpropagation and can be tuned to incorporate\nsignal from any number of labeled examples. Our end-to-end learned approach\noutperforms GPT-3's \"few-shot\" learning by a large margin. More remarkably,\nthrough ablations on model size using T5, we show that prompt tuning becomes\nmore competitive with scale: as models exceed billions of parameters, our\nmethod \"closes the gap\" and matches the strong performance of model tuning\n(where all model weights are tuned). This finding is especially relevant in\nthat large models are costly to share and serve, and the ability to reuse one\nfrozen model for multiple downstream tasks can ease this burden. Our method can\nbe seen as a simplification of the recently proposed \"prefix tuning\" of Li and\nLiang (2021), and we provide a comparison to this and other similar approaches.\nFinally, we show that conditioning a frozen model with soft prompts confers\nbenefits in robustness to domain transfer, as compared to full model tuning.",
"authors": "Brian Lester, Rami Al-Rfou, Noah Constant",
"published": "2021-04-18",
"updated": "2021-09-02",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"label": "Related Work"
},
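The prompt-tuning record above learns soft prompts that condition a frozen model through backpropagation alone. A small, hypothetical sketch of prepending trainable prompt vectors to frozen token embeddings is given below, using a toy encoder rather than the T5 setup of the paper.

```python
# Minimal sketch: k trainable soft-prompt vectors are prepended to frozen token
# embeddings; everything except the soft prompt stays frozen.
import torch
import torch.nn as nn

class SoftPromptedEncoder(nn.Module):
    def __init__(self, vocab_size: int = 1000, d_model: int = 64, prompt_len: int = 20):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        for p in self.parameters():
            p.requires_grad = False                    # freeze the "pre-trained" parts ...
        self.soft_prompt = nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)  # ... not this

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        tok = self.embed(input_ids)                    # (batch, seq, d_model)
        prompt = self.soft_prompt.unsqueeze(0).expand(tok.size(0), -1, -1)
        return self.encoder(torch.cat([prompt, tok], dim=1))

model = SoftPromptedEncoder()
out = model(torch.randint(0, 1000, (2, 16)))
print(out.shape)  # torch.Size([2, 36, 64])
```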
{
"url": "http://arxiv.org/abs/2301.12597v3",
"title": "BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models",
"abstract": "The cost of vision-and-language pre-training has become increasingly\nprohibitive due to end-to-end training of large-scale models. This paper\nproposes BLIP-2, a generic and efficient pre-training strategy that bootstraps\nvision-language pre-training from off-the-shelf frozen pre-trained image\nencoders and frozen large language models. BLIP-2 bridges the modality gap with\na lightweight Querying Transformer, which is pre-trained in two stages. The\nfirst stage bootstraps vision-language representation learning from a frozen\nimage encoder. The second stage bootstraps vision-to-language generative\nlearning from a frozen language model. BLIP-2 achieves state-of-the-art\nperformance on various vision-language tasks, despite having significantly\nfewer trainable parameters than existing methods. For example, our model\noutperforms Flamingo80B by 8.7% on zero-shot VQAv2 with 54x fewer trainable\nparameters. We also demonstrate the model's emerging capabilities of zero-shot\nimage-to-text generation that can follow natural language instructions.",
"authors": "Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi",
"published": "2023-01-30",
"updated": "2023-06-15",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2112.06825v2",
"title": "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks",
"abstract": "Recently, fine-tuning language models pre-trained on large text corpora have\nprovided huge improvements on vision-and-language (V&L) tasks as well as on\npure language tasks. However, fine-tuning the entire parameter set of\npre-trained models becomes impractical since the model size is growing rapidly.\nHence, in this paper, we introduce adapter-based parameter-efficient transfer\nlearning techniques to V&L models such as VL-BART and VLT5. We evaluate our\nmethods in a unified multi-task setup on both image-text and video-text\nbenchmarks. For the image-text tasks, we use four diverse V&L datasets: VQAv2,\nGQA, NLVR2 , and MSCOCO image captioning. For video-text tasks, we use TVQA,\nHow2QA, TVC, and YC2C. With careful training and thorough experiments, we\nbenchmark three popular adapter-based methods (Adapter, Hyperformer, Compacter)\nagainst the standard full fine-tuning and the recently proposed prompt-tuning\napproach. We also enhance the efficiency and performance of adapters by sharing\ntheir weights to attain knowledge across tasks. Our results demonstrate that\ntraining the adapter with the weight-sharing technique (4.18% of total\nparameters for image-text tasks and 3.39% for video-text tasks) can match the\nperformance of fine-tuning the entire model. Lastly, we present a comprehensive\nanalysis including the combination of adapter and task-specific prompts and the\nimpact of V&L pre-training on adapters. Our code is available at:\nhttps://github.com/ylsung/VL_adapter.",
"authors": "Yi-Lin Sung, Jaemin Cho, Mohit Bansal",
"published": "2021-12-13",
"updated": "2022-03-24",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.AI",
"cs.CL",
"cs.LG"
],
"label": "Related Work"
},
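The VL-Adapter record above benchmarks bottleneck adapter modules inserted into a frozen backbone. As a generic illustration (not the VL-Adapter codebase), one such adapter block, down-projection, nonlinearity, up-projection, and a residual connection, might look like this:

```python
# Generic bottleneck adapter sketch: only these small layers are trained; the
# surrounding Transformer block stays frozen. Zero-init makes it start as identity.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, d_model: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, d_model)
        nn.init.zeros_(self.up.weight)   # the adapter initially contributes nothing
        nn.init.zeros_(self.up.bias)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        return hidden + self.up(self.act(self.down(hidden)))

adapter = Adapter()
print(adapter(torch.randn(2, 10, 768)).shape)  # torch.Size([2, 10, 768])
```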
{
"url": "http://arxiv.org/abs/2304.10592v2",
"title": "MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models",
"abstract": "The recent GPT-4 has demonstrated extraordinary multi-modal abilities, such\nas directly generating websites from handwritten text and identifying humorous\nelements within images. These features are rarely observed in previous\nvision-language models. However, the technical details behind GPT-4 continue to\nremain undisclosed. We believe that the enhanced multi-modal generation\ncapabilities of GPT-4 stem from the utilization of sophisticated large language\nmodels (LLM). To examine this phenomenon, we present MiniGPT-4, which aligns a\nfrozen visual encoder with a frozen advanced LLM, Vicuna, using one projection\nlayer. Our work, for the first time, uncovers that properly aligning the visual\nfeatures with an advanced large language model can possess numerous advanced\nmulti-modal abilities demonstrated by GPT-4, such as detailed image description\ngeneration and website creation from hand-drawn drafts. Furthermore, we also\nobserve other emerging capabilities in MiniGPT-4, including writing stories and\npoems inspired by given images, teaching users how to cook based on food\nphotos, and so on. In our experiment, we found that the model trained on short\nimage caption pairs could produce unnatural language outputs (e.g., repetition\nand fragmentation). To address this problem, we curate a detailed image\ndescription dataset in the second stage to finetune the model, which\nconsequently improves the model's generation reliability and overall usability.\nOur code, pre-trained model, and collected dataset are available at\nhttps://minigpt-4.github.io/.",
"authors": "Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, Mohamed Elhoseiny",
"published": "2023-04-20",
"updated": "2023-10-02",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/1909.11867v1",
"title": "Overcoming Data Limitation in Medical Visual Question Answering",
"abstract": "Traditional approaches for Visual Question Answering (VQA) require large\namount of labeled data for training. Unfortunately, such large scale data is\nusually not available for medical domain. In this paper, we propose a novel\nmedical VQA framework that overcomes the labeled data limitation. The proposed\nframework explores the use of the unsupervised Denoising Auto-Encoder (DAE) and\nthe supervised Meta-Learning. The advantage of DAE is to leverage the large\namount of unlabeled images while the advantage of Meta-Learning is to learn\nmeta-weights that quickly adapt to VQA problem with limited labeled data. By\nleveraging the advantages of these techniques, it allows the proposed framework\nto be efficiently trained using a small labeled training set. The experimental\nresults show that our proposed method significantly outperforms the\nstate-of-the-art medical VQA.",
"authors": "Binh D. Nguyen, Thanh-Toan Do, Binh X. Nguyen, Tuong Do, Erman Tjiputra, Quang D. Tran",
"published": "2019-09-26",
"updated": "2019-09-26",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2312.12148v1",
"title": "Parameter-Efficient Fine-Tuning Methods for Pretrained Language Models: A Critical Review and Assessment",
"abstract": "With the continuous growth in the number of parameters of transformer-based\npretrained language models (PLMs), particularly the emergence of large language\nmodels (LLMs) with billions of parameters, many natural language processing\n(NLP) tasks have demonstrated remarkable success. However, the enormous size\nand computational demands of these models pose significant challenges for\nadapting them to specific downstream tasks, especially in environments with\nlimited computational resources. Parameter Efficient Fine-Tuning (PEFT) offers\nan effective solution by reducing the number of fine-tuning parameters and\nmemory usage while achieving comparable performance to full fine-tuning. The\ndemands for fine-tuning PLMs, especially LLMs, have led to a surge in the\ndevelopment of PEFT methods, as depicted in Fig. 1. In this paper, we present a\ncomprehensive and systematic review of PEFT methods for PLMs. We summarize\nthese PEFT methods, discuss their applications, and outline future directions.\nFurthermore, we conduct experiments using several representative PEFT methods\nto better understand their effectiveness in parameter efficiency and memory\nefficiency. By offering insights into the latest advancements and practical\napplications, this survey serves as an invaluable resource for researchers and\npractitioners seeking to navigate the challenges and opportunities presented by\nPEFT in the context of PLMs.",
"authors": "Lingling Xu, Haoran Xie, Si-Zhao Joe Qin, Xiaohui Tao, Fu Lee Wang",
"published": "2023-12-19",
"updated": "2023-12-19",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2105.08913v2",
"title": "Multiple Meta-model Quantifying for Medical Visual Question Answering",
"abstract": "Transfer learning is an important step to extract meaningful features and\novercome the data limitation in the medical Visual Question Answering (VQA)\ntask. However, most of the existing medical VQA methods rely on external data\nfor transfer learning, while the meta-data within the dataset is not fully\nutilized. In this paper, we present a new multiple meta-model quantifying\nmethod that effectively learns meta-annotation and leverages meaningful\nfeatures to the medical VQA task. Our proposed method is designed to increase\nmeta-data by auto-annotation, deal with noisy labels, and output meta-models\nwhich provide robust features for medical VQA tasks. Extensively experimental\nresults on two public medical VQA datasets show that our approach achieves\nsuperior accuracy in comparison with other state-of-the-art methods, while does\nnot require external data to train meta-models.",
"authors": "Tuong Do, Binh X. Nguyen, Erman Tjiputra, Minh Tran, Quang D. Tran, Anh Nguyen",
"published": "2021-05-19",
"updated": "2021-06-26",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"label": "Related Work"
},
{
"url": "http://arxiv.org/abs/2008.03156v1",
"title": "Better Fine-Tuning by Reducing Representational Collapse",
"abstract": "Although widely adopted, existing approaches for fine-tuning pre-trained\nlanguage models have been shown to be unstable across hyper-parameter settings,\nmotivating recent work on trust region methods. In this paper, we present a\nsimplified and efficient method rooted in trust region theory that replaces\npreviously used adversarial objectives with parametric noise (sampling from\neither a normal or uniform distribution), thereby discouraging representation\nchange during fine-tuning when possible without hurting performance. We also\nintroduce a new analysis to motivate the use of trust region methods more\ngenerally, by studying representational collapse; the degradation of\ngeneralizable representations from pre-trained models as they are fine-tuned\nfor a specific end task. Extensive experiments show that our fine-tuning method\nmatches or exceeds the performance of previous trust region methods on a range\nof understanding and generation tasks (including DailyMail/CNN, Gigaword,\nReddit TIFU, and the GLUE benchmark), while also being much faster. We also\nshow that it is less prone to representation collapse; the pre-trained models\nmaintain more generalizable representations every time they are fine-tuned.",
"authors": "Armen Aghajanyan, Akshat Shrivastava, Anchit Gupta, Naman Goyal, Luke Zettlemoyer, Sonal Gupta",
"published": "2020-08-06",
"updated": "2020-08-06",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.CL",
"stat.ML"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2312.03694v3",
"title": "Parameter-Efficient Transfer Learning of Audio Spectrogram Transformers",
"abstract": "The common modus operandi of fine-tuning large pre-trained Transformer models\nentails the adaptation of all their parameters (i.e., full fine-tuning). While\nachieving striking results on multiple tasks, this approach becomes unfeasible\nas the model size and the number of downstream tasks increase. In natural\nlanguage processing and computer vision, parameter-efficient approaches like\nprompt-tuning and adapters have emerged as solid alternatives by fine-tuning\nonly a small number of extra parameters, without sacrificing performance\naccuracy. For audio classification tasks, the Audio Spectrogram Transformer\nmodel shows impressive results. However, surprisingly, how to efficiently adapt\nit to several downstream tasks has not been tackled before. In this paper, we\nbridge this gap and present a detailed investigation of common\nparameter-efficient methods, revealing that adapters and LoRA consistently\noutperform the other methods across four benchmarks. Whereas adapters prove to\nbe more efficient in few-shot learning settings, LoRA turns out to scale better\nas we increase the number of learnable parameters. We finally carry out\nablation studies to find the best configuration for adapters and LoRA.",
"authors": "Umberto Cappellazzo, Daniele Falavigna, Alessio Brutti, Mirco Ravanelli",
"published": "2023-12-06",
"updated": "2024-01-11",
"primary_cat": "eess.AS",
"cats": [
"eess.AS"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2212.10650v1",
"title": "KronA: Parameter Efficient Tuning with Kronecker Adapter",
"abstract": "Fine-tuning a Pre-trained Language Model (PLM) on a specific downstream task\nhas been a well-known paradigm in Natural Language Processing. However, with\nthe ever-growing size of PLMs, training the entire model on several downstream\ntasks becomes very expensive and resource-hungry. Recently, different Parameter\nEfficient Tuning (PET) techniques are proposed to improve the efficiency of\nfine-tuning PLMs. One popular category of PET methods is the low-rank\nadaptation methods which insert learnable truncated SVD modules into the\noriginal model either sequentially or in parallel. However, low-rank\ndecomposition suffers from limited representation power. In this work, we\naddress this problem using the Kronecker product instead of the low-rank\nrepresentation. We introduce KronA, a Kronecker product-based adapter module\nfor efficient fine-tuning of Transformer-based PLMs. We apply the proposed\nmethods for fine-tuning T5 on the GLUE benchmark to show that incorporating the\nKronecker-based modules can outperform state-of-the-art PET methods.",
"authors": "Ali Edalati, Marzieh Tahaei, Ivan Kobyzev, Vahid Partovi Nia, James J. Clark, Mehdi Rezagholizadeh",
"published": "2022-12-20",
"updated": "2022-12-20",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
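The KronA record above replaces the low-rank update with a Kronecker-product update, which can be full-rank while still using few trainable parameters. A minimal, hypothetical sketch of adding kron(A, B) to a frozen linear layer follows; the factor shapes are arbitrary choices, not the paper's configuration.

```python
# Minimal sketch: delta_W = kron(A, B) is added to a frozen weight; only the
# small factors A and B are trained (A = 0 so the initial delta is zero).
import torch
import torch.nn as nn
import torch.nn.functional as F

class KroneckerLinear(nn.Module):
    def __init__(self, base: nn.Linear, a_out: int = 4, a_in: int = 4):
        super().__init__()
        out_f, in_f = base.weight.shape
        assert out_f % a_out == 0 and in_f % a_in == 0
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        self.A = nn.Parameter(torch.zeros(a_out, a_in))
        self.B = nn.Parameter(torch.randn(out_f // a_out, in_f // a_in) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        delta = torch.kron(self.A, self.B)          # (out_f, in_f)
        return self.base(x) + F.linear(x, delta)

layer = KroneckerLinear(nn.Linear(64, 32))
print(layer(torch.randn(2, 64)).shape)  # torch.Size([2, 32])
```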
{
"url": "http://arxiv.org/abs/2310.18602v1",
"title": "Device-Edge Cooperative Fine-Tuning of Foundation Models as a 6G Service",
"abstract": "Foundation models (FoMos), referring to large-scale AI models, possess\nhuman-like capabilities and are able to perform competitively in the domain of\nhuman intelligence. The breakthrough in FoMos has inspired researchers to\ndeploy such models in the sixth-generation (6G) mobile networks for automating\na broad range of tasks in next-generation mobile applications. While the sizes\nof FoMos are reaching their peaks, their next phase is expected to focus on\nfine-tuning the models to specific downstream tasks. This inspires us to\npropose the vision of FoMo fine-tuning as a 6G service. Its key feature is the\nexploitation of existing parameter-efficient fine-tuning (PEFT) techniques to\ntweak only a small fraction of model weights for a FoMo to become customized\nfor a specific task. To materialize the said vision, we survey the\nstate-of-the-art PEFT and then present a novel device-edge fine-tuning (DEFT)\nframework for providing efficient and privacy-preserving fine-tuning services\nat the 6G network edge. The framework consists of the following comprehensive\nset of techniques: 1) Control of fine-tuning parameter sizes in different\ntransformer blocks of a FoMo; 2) Over-the-air computation for realizing neural\nconnections in DEFT; 3) Federated DEFT in a multi-device system by downloading\na FoMo emulator or gradients; 4) On-the-fly prompt-ensemble tuning; 5)\nDevice-to-device prompt transfer among devices. Experiments are conducted using\npre-trained FoMos with up to 11 billion parameters to demonstrate the\neffectiveness of DEFT techniques. The article is concluded by presenting future\nresearch opportunities.",
"authors": "Hai Wu, Xu Chen, Kaibin Huang",
"published": "2023-10-28",
"updated": "2023-10-28",
"primary_cat": "cs.NI",
"cats": [
"cs.NI",
"cs.IT",
"math.IT"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2312.11875v2",
"title": "Sparse is Enough in Fine-tuning Pre-trained Large Language Models",
"abstract": "With the prevalence of pre-training-fine-tuning paradigm, how to efficiently\nadapt the pre-trained model to the downstream tasks has been an intriguing\nissue. Parameter-Efficient Fine-Tuning (PEFT) methods have been proposed for\nlow-cost adaptation. Although PEFT has demonstrated effectiveness and been\nwidely applied, the underlying principles are still unclear. In this paper, we\nadopt the PAC-Bayesian generalization error bound, viewing pre-training as a\nshift of prior distribution which leads to a tighter bound for generalization\nerror. We validate this shift from the perspectives of oscillations in the loss\nlandscape and the quasi-sparsity in gradient distribution. Based on this, we\npropose a gradient-based sparse fine-tuning algorithm, named Sparse Increment\nFine-Tuning (SIFT), and validate its effectiveness on a range of tasks\nincluding the GLUE Benchmark and Instruction-tuning. The code is accessible at\nhttps://github.com/song-wx/SIFT/.",
"authors": "Weixi Song, Zuchao Li, Lefei Zhang, Hai Zhao, Bo Du",
"published": "2023-12-19",
"updated": "2024-05-02",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI",
"cs.CL"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
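The SIFT record above fine-tunes with sparse gradient increments motivated by the quasi-sparsity of gradients. The following is a much-simplified, hypothetical sketch of the general idea, keep a mask of the largest-magnitude gradient entries and update only those, and is not the released SIFT algorithm.

```python
# Simplified sketch: build a top-magnitude gradient mask after one backward pass,
# then apply masked updates so only a small fraction of entries ever change.
import torch
import torch.nn as nn

def topk_mask(grad: torch.Tensor, density: float = 0.05) -> torch.Tensor:
    k = max(1, int(grad.numel() * density))
    threshold = grad.abs().flatten().topk(k).values.min()
    return (grad.abs() >= threshold).float()

model = nn.Linear(32, 8)
x, y = torch.randn(16, 32), torch.randn(16, 8)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
masks = {n: topk_mask(p.grad) for n, p in model.named_parameters()}

# One masked SGD step: roughly 5% of entries per tensor are updated.
with torch.no_grad():
    for n, p in model.named_parameters():
        p -= 0.01 * p.grad * masks[n]
```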
{
"url": "http://arxiv.org/abs/2203.12119v2",
"title": "Visual Prompt Tuning",
"abstract": "The current modus operandi in adapting pre-trained models involves updating\nall the backbone parameters, ie, full fine-tuning. This paper introduces Visual\nPrompt Tuning (VPT) as an efficient and effective alternative to full\nfine-tuning for large-scale Transformer models in vision. Taking inspiration\nfrom recent advances in efficiently tuning large language models, VPT\nintroduces only a small amount (less than 1% of model parameters) of trainable\nparameters in the input space while keeping the model backbone frozen. Via\nextensive experiments on a wide variety of downstream recognition tasks, we\nshow that VPT achieves significant performance gains compared to other\nparameter efficient tuning protocols. Most importantly, VPT even outperforms\nfull fine-tuning in many cases across model capacities and training data\nscales, while reducing per-task storage cost.",
"authors": "Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, Ser-Nam Lim",
"published": "2022-03-23",
"updated": "2022-07-20",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2312.08900v1",
"title": "Context-PEFT: Efficient Multi-Modal, Multi-Task Fine-Tuning",
"abstract": "This paper introduces a novel Parameter-Efficient Fine-Tuning (PEFT)\nframework for multi-modal, multi-task transfer learning with pre-trained\nlanguage models. PEFT techniques such as LoRA, BitFit and IA3 have demonstrated\ncomparable performance to full fine-tuning of pre-trained models for specific\ndownstream tasks, all while demanding significantly fewer trainable parameters\nand reduced GPU memory consumption. However, in the context of multi-modal\nfine-tuning, the need for architectural modifications or full fine-tuning often\nbecomes apparent. To address this we propose Context-PEFT, which learns\ndifferent groups of adaptor parameters based on the token's domain. This\napproach enables LoRA-like weight injection without requiring additional\narchitectural changes. Our method is evaluated on the COCO captioning task,\nwhere it outperforms full fine-tuning under similar data constraints while\nsimultaneously offering a substantially more parameter-efficient and\ncomputationally economical solution.",
"authors": "Avelina Asada Hadji-Kyriacou, Ognjen Arandjelovic",
"published": "2023-12-14",
"updated": "2023-12-14",
"primary_cat": "cs.LG",
"cats": [
"cs.LG"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2211.00585v1",
"title": "Adapter-Based Extension of Multi-Speaker Text-to-Speech Model for New Speakers",
"abstract": "Fine-tuning is a popular method for adapting text-to-speech (TTS) models to\nnew speakers. However this approach has some challenges. Usually fine-tuning\nrequires several hours of high quality speech per speaker. There is also that\nfine-tuning will negatively affect the quality of speech synthesis for\npreviously learnt speakers. In this paper we propose an alternative approach\nfor TTS adaptation based on using parameter-efficient adapter modules. In the\nproposed approach, a few small adapter modules are added to the original\nnetwork. The original weights are frozen, and only the adapters are fine-tuned\non speech for new speaker. The parameter-efficient fine-tuning approach will\nproduce a new model with high level of parameter sharing with original model.\nOur experiments on LibriTTS, HiFi-TTS and VCTK datasets validate the\neffectiveness of adapter-based method through objective and subjective metrics.",
"authors": "Cheng-Ping Hsieh, Subhankar Ghosh, Boris Ginsburg",
"published": "2022-11-01",
"updated": "2022-11-01",
"primary_cat": "eess.AS",
"cats": [
"eess.AS",
"cs.LG",
"cs.SD"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2303.08566v2",
"title": "Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning",
"abstract": "Visual Parameter-Efficient Fine-Tuning (PEFT) has become a powerful\nalternative for full fine-tuning so as to adapt pre-trained vision models to\ndownstream tasks, which only tunes a small number of parameters while freezing\nthe vast majority ones to ease storage burden and optimization difficulty.\nHowever, existing PEFT methods introduce trainable parameters to the same\npositions across different tasks depending solely on human heuristics and\nneglect the domain gaps. To this end, we study where to introduce and how to\nallocate trainable parameters by proposing a novel Sensitivity-aware visual\nParameter-efficient fine-Tuning (SPT) scheme, which adaptively allocates\ntrainable parameters to task-specific important positions given a desired\ntunable parameter budget. Specifically, our SPT first quickly identifies the\nsensitive parameters that require tuning for a given task in a data-dependent\nway. Next, our SPT further boosts the representational capability for the\nweight matrices whose number of sensitive parameters exceeds a pre-defined\nthreshold by utilizing existing structured tuning methods, e.g., LoRA [23] or\nAdapter [22], to replace directly tuning the selected sensitive parameters\n(unstructured tuning) under the budget. Extensive experiments on a wide range\nof downstream recognition tasks show that our SPT is complementary to the\nexisting PEFT methods and largely boosts their performance, e.g., SPT improves\nAdapter with supervised pre-trained ViT-B/16 backbone by 4.2% and 1.4% mean\nTop-1 accuracy, reaching SOTA performance on FGVC and VTAB-1k benchmarks,\nrespectively. Source code is at https://github.com/ziplab/SPT",
"authors": "Haoyu He, Jianfei Cai, Jing Zhang, Dacheng Tao, Bohan Zhuang",
"published": "2023-03-15",
"updated": "2023-08-31",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.AI",
"cs.LG"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2210.00036v2",
"title": "Differentially Private Bias-Term only Fine-tuning of Foundation Models",
"abstract": "We study the problem of differentially private (DP) fine-tuning of large\npre-trained models -- a recent privacy-preserving approach suitable for solving\ndownstream tasks with sensitive data. Existing work has demonstrated that high\naccuracy is possible under strong privacy constraint, yet requires significant\ncomputational overhead or modifications to the network architecture.\n We propose differentially private bias-term fine-tuning (DP-BiTFiT), which\nmatches the state-of-the-art accuracy for DP algorithms and the efficiency of\nthe standard BiTFiT. DP-BiTFiT is model agnostic (not modifying the network\narchitecture), parameter efficient (only training about $0.1\\%$ of the\nparameters), and computation efficient (almost removing the overhead caused by\nDP, in both the time and space complexity). On a wide range of tasks, DP-BiTFiT\nis $2\\sim 30\\times$ faster and uses $2\\sim 8\\times$ less memory than DP full\nfine-tuning, even faster than the standard full fine-tuning. This amazing\nefficiency enables us to conduct DP fine-tuning on language and vision tasks\nwith long-sequence texts and high-resolution images, which were computationally\ndifficult using existing methods.",
"authors": "Zhiqi Bu, Yu-Xiang Wang, Sheng Zha, George Karypis",
"published": "2022-09-30",
"updated": "2022-10-04",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.CL",
"cs.CR",
"cs.CV"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
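The DP-BiTFiT record above builds on bias-term-only fine-tuning. Ignoring the differential-privacy machinery (per-sample clipping and noise), a minimal sketch of selecting only bias parameters for training could look like the generic PyTorch snippet below; it is not the DP-BiTFiT implementation.

```python
# Minimal sketch: mark only parameters whose name ends in "bias" as trainable.
import torch
import torch.nn as nn

def bias_only(model: nn.Module):
    for name, param in model.named_parameters():
        param.requires_grad = name.endswith("bias")
    return [p for p in model.parameters() if p.requires_grad]

model = nn.Sequential(nn.Linear(128, 256), nn.GELU(), nn.Linear(256, 10))
bias_params = bias_only(model)
print(sum(p.numel() for p in bias_params), "of",
      sum(p.numel() for p in model.parameters()), "parameters are trainable")
optimizer = torch.optim.SGD(bias_params, lr=1e-3)
```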
{
"url": "http://arxiv.org/abs/2401.15207v2",
"title": "HiFT: A Hierarchical Full Parameter Fine-Tuning Strategy",
"abstract": "Full-parameter fine-tuning has become the go-to choice for adapting language\nmodels (LMs) to downstream tasks due to its excellent performance. As LMs grow\nin size, fine-tuning the full parameters of LMs requires a prohibitively large\namount of GPU memory. Existing approaches utilize zeroth-order optimizer to\nconserve GPU memory, which can potentially compromise the performance of LMs as\nnon-zero order optimizers tend to converge more readily on most downstream\ntasks. In this paper, we propose a novel optimizer-independent end-to-end\nhierarchical fine-tuning strategy, HiFT, which only updates a subset of\nparameters at each training step. HiFT can significantly reduce the amount of\ngradients and optimizer state parameters residing in GPU memory at the same\ntime, thereby reducing GPU memory usage. Our results demonstrate that: (1) HiFT\nachieves comparable performance to parameter-efficient fine-tuning and standard\nfull parameter fine-tuning. (2) HiFT supports various optimizers including\nAdamW, AdaGrad, SGD, etc. (3) HiFT can save more than 60\\% GPU memory compared\nwith standard full-parameter fine-tuning for 7B model. (4) HiFT enables\nfull-parameter fine-tuning of a 7B model on single 48G A6000 with a precision\nof 32 using the AdamW optimizer, without using any memory saving techniques.",
"authors": "Yongkang Liu, Yiqun Zhang, Qian Li, Tong Liu, Shi Feng, Daling Wang, Yifei Zhang, Hinrich Sch\u00fctze",
"published": "2024-01-26",
"updated": "2024-02-25",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.CL"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2110.06274v2",
"title": "LiST: Lite Prompted Self-training Makes Parameter-Efficient Few-shot Learners",
"abstract": "We present a new method LiST is short for Lite Prompted Self-Training for\nparameter-efficient fine-tuning of large pre-trained language models (PLMs) for\nfew-shot learning. LiST improves over recent methods that adopt prompt-based\nfine-tuning (FN) using two key techniques. The first is the use of\nself-training to leverage large amounts of unlabeled data for prompt-based FN\nin few-shot settings. We use self-training in conjunction with meta-learning\nfor re-weighting noisy pseudo-prompt labels. Self-training is expensive as it\nrequires updating all the model parameters repetitively. Therefore, we use a\nsecond technique for light-weight fine-tuning where we introduce a small number\nof task-specific parameters that are fine-tuned during self-training while\nkeeping the PLM encoder frozen. Our experiments show that LiST can effectively\nleverage unlabeled data to improve the model performance for few-shot learning.\nAdditionally, the fine-tuning is efficient as it only updates a small\npercentage of parameters and the overall model footprint is reduced since\nseveral tasks can share a common PLM encoder as backbone. A comprehensive study\non six NLU tasks demonstrate LiST to improve by 35% over classic fine-tuning\nand 6% over prompt-based FN with 96% reduction in number of trainable\nparameters when fine-tuned with no more than 30 labeled examples from each\ntask. With only 14M tunable parameters, LiST outperforms GPT-3 in-context\nlearning by 33% on few-shot NLU tasks.",
"authors": "Yaqing Wang, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, Jianfeng Gao",
"published": "2021-10-12",
"updated": "2022-05-18",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2310.17491v2",
"title": "FedPEAT: Convergence of Federated Learning, Parameter-Efficient Fine Tuning, and Emulator Assisted Tuning for Artificial Intelligence Foundation Models with Mobile Edge Computing",
"abstract": "The emergence of foundation models, including language and vision models, has\nreshaped AI's landscape, offering capabilities across various applications.\nDeploying and fine-tuning these large models, like GPT-3 and BERT, presents\nchallenges, especially in the current foundation model era. We introduce\nEmulator-Assisted Tuning (EAT) combined with Parameter-Efficient Fine-Tuning\n(PEFT) to form Parameter-Efficient Emulator-Assisted Tuning (PEAT). Further, we\nexpand this into federated learning as Federated PEAT (FedPEAT). FedPEAT uses\nadapters, emulators, and PEFT for federated model tuning, enhancing model\nprivacy and memory efficiency. Adapters adjust pre-trained models, while\nemulators give a compact representation of original models, addressing both\nprivacy and efficiency. Adaptable to various neural networks, our approach also\nuses deep reinforcement learning for hyper-parameter optimization. We tested\nFedPEAT in a unique scenario with a server participating in collaborative\nfederated tuning, showcasing its potential in tackling foundation model\nchallenges.",
"authors": "Terence Jie Chua, Wenhan Yu, Jun Zhao, Kwok-Yan Lam",
"published": "2023-10-26",
"updated": "2024-02-28",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.NI"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2312.06353v3",
"title": "Federated Full-Parameter Tuning of Billion-Sized Language Models with Communication Cost under 18 Kilobytes",
"abstract": "Pre-trained large language models (LLMs) need fine-tuning to improve their\nresponsiveness to natural language instructions. Federated learning offers a\nway to fine-tune LLMs using the abundant data on end devices without\ncompromising data privacy. Most existing federated fine-tuning methods for LLMs\nrely on parameter-efficient fine-tuning techniques, which may not reach the\nperformance height possible with full-parameter tuning. However, federated\nfull-parameter tuning of LLMs is a non-trivial problem due to the immense\ncommunication cost. This work introduces FedKSeed that employs zeroth-order\noptimization with a finite set of random seeds. It significantly reduces\ntransmission requirements between the server and clients to just a few random\nseeds and scalar gradients, amounting to only a few thousand bytes, making\nfederated full-parameter tuning of billion-sized LLMs possible on devices.\nBuilding on it, we develop a strategy enabling probability-differentiated seed\nsampling, prioritizing perturbations with greater impact on model accuracy.\nExperiments across six scenarios with various LLMs, datasets and data\npartitions demonstrate that our approach outperforms existing federated LLM\nfine-tuning methods in both communication efficiency and new task\ngeneralization.",
"authors": "Zhen Qin, Daoyuan Chen, Bingchen Qian, Bolin Ding, Yaliang Li, Shuiguang Deng",
"published": "2023-12-11",
"updated": "2024-01-31",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.DC"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2110.04366v3",
"title": "Towards a Unified View of Parameter-Efficient Transfer Learning",
"abstract": "Fine-tuning large pre-trained language models on downstream tasks has become\nthe de-facto learning paradigm in NLP. However, conventional approaches\nfine-tune all the parameters of the pre-trained model, which becomes\nprohibitive as the model size and the number of tasks grow. Recent work has\nproposed a variety of parameter-efficient transfer learning methods that only\nfine-tune a small number of (extra) parameters to attain strong performance.\nWhile effective, the critical ingredients for success and the connections among\nthe various methods are poorly understood. In this paper, we break down the\ndesign of state-of-the-art parameter-efficient transfer learning methods and\npresent a unified framework that establishes connections between them.\nSpecifically, we re-frame them as modifications to specific hidden states in\npre-trained models, and define a set of design dimensions along which different\nmethods vary, such as the function to compute the modification and the position\nto apply the modification. Through comprehensive empirical studies across\nmachine translation, text summarization, language understanding, and text\nclassification benchmarks, we utilize the unified view to identify important\ndesign choices in previous methods. Furthermore, our unified framework enables\nthe transfer of design elements across different approaches, and as a result we\nare able to instantiate new parameter-efficient fine-tuning methods that tune\nless parameters than previous methods while being more effective, achieving\ncomparable results to fine-tuning all parameters on all four tasks.",
"authors": "Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, Graham Neubig",
"published": "2021-10-08",
"updated": "2022-02-02",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.LG"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2404.17245v1",
"title": "Parameter Efficient Fine-tuning of Self-supervised ViTs without Catastrophic Forgetting",
"abstract": "Artificial neural networks often suffer from catastrophic forgetting, where\nlearning new concepts leads to a complete loss of previously acquired\nknowledge. We observe that this issue is particularly magnified in vision\ntransformers (ViTs), where post-pre-training and fine-tuning on new tasks can\nsignificantly degrade the model's original general abilities. For instance, a\nDINO ViT-Base/16 pre-trained on ImageNet-1k loses over 70% accuracy on\nImageNet-1k after just 10 iterations of fine-tuning on CIFAR-100. Overcoming\nthis stability-plasticity dilemma is crucial for enabling ViTs to continuously\nlearn and adapt to new domains while preserving their initial knowledge. In\nthis work, we study two new parameter-efficient fine-tuning strategies:\n(1)~Block Expansion, and (2) Low-rank adaptation (LoRA). Our experiments reveal\nthat using either Block Expansion or LoRA on self-supervised pre-trained ViTs\nsurpass fully fine-tuned ViTs in new domains while offering significantly\ngreater parameter efficiency. Notably, we find that Block Expansion experiences\nonly a minimal performance drop in the pre-training domain, thereby effectively\nmitigating catastrophic forgetting in pre-trained ViTs.",
"authors": "Reza Akbarian Bafghi, Nidhin Harilal, Claire Monteleoni, Maziar Raissi",
"published": "2024-04-26",
"updated": "2024-04-26",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2210.16032v1",
"title": "Parameter-efficient transfer learning of pre-trained Transformer models for speaker verification using adapters",
"abstract": "Recently, the pre-trained Transformer models have received a rising interest\nin the field of speech processing thanks to their great success in various\ndownstream tasks. However, most fine-tuning approaches update all the\nparameters of the pre-trained model, which becomes prohibitive as the model\nsize grows and sometimes results in overfitting on small datasets. In this\npaper, we conduct a comprehensive analysis of applying parameter-efficient\ntransfer learning (PETL) methods to reduce the required learnable parameters\nfor adapting to speaker verification tasks. Specifically, during the\nfine-tuning process, the pre-trained models are frozen, and only lightweight\nmodules inserted in each Transformer block are trainable (a method known as\nadapters). Moreover, to boost the performance in a cross-language low-resource\nscenario, the Transformer model is further tuned on a large intermediate\ndataset before directly fine-tuning it on a small dataset. With updating fewer\nthan 4% of parameters, (our proposed) PETL-based methods achieve comparable\nperformances with full fine-tuning methods (Vox1-O: 0.55%, Vox1-E: 0.82%,\nVox1-H:1.73%).",
"authors": "Junyi Peng, Themos Stafylakis, Rongzhi Gu, Old\u0159ich Plchot, Ladislav Mo\u0161ner, Luk\u00e1\u0161 Burget, Jan \u010cernock\u00fd",
"published": "2022-10-28",
"updated": "2022-10-28",
"primary_cat": "eess.AS",
"cats": [
"eess.AS",
"cs.SD",
"eess.SP"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2401.16137v1",
"title": "X-PEFT: eXtremely Parameter-Efficient Fine-Tuning for Extreme Multi-Profile Scenarios",
"abstract": "Parameter-efficient fine-tuning (PEFT) techniques, such as adapter tuning,\naim to fine-tune a pre-trained language model (PLM) using a minimal number of\nparameters for a specific task or profile. Although adapter tuning provides\nincreased parameter efficiency compared to full-model fine-tuning, it\nintroduces a small set of additional parameters attached to a PLM for each\nprofile. This can become problematic in practical applications with multiple\nprofiles, particularly when a significant increase in the number of profiles\nlinearly boosts the total number of additional parameters. To mitigate this\nissue, we introduce X-PEFT, a novel PEFT method that leverages a multitude of\ngiven adapters by fine-tuning an extremely small set of compact tensors for a\nnew profile, which serve as binary masks to adaptively select the given\nadapters. To efficiently validate our proposed method, we implement it using a\nlarge number of trained or untrained (random) adapters. We evaluate the\nperformance of X-PEFT through LaMP and GLUE tasks and demonstrate that it\neither matches or surpasses the effectiveness of conventional adapter tuning,\ndespite reducing the memory requirements per profile by a factor of 10,000\ncompared to it.",
"authors": "Namju Kwak, Taesup Kim",
"published": "2024-01-29",
"updated": "2024-01-29",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI",
"cs.CL"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2304.08109v2",
"title": "A Comparative Study between Full-Parameter and LoRA-based Fine-Tuning on Chinese Instruction Data for Instruction Following Large Language Model",
"abstract": "Recently, the instruction-tuning of large language models is a crucial area\nof research in the field of natural language processing. Due to resource and\ncost limitations, several researchers have employed parameter-efficient tuning\ntechniques, such as LoRA, for instruction tuning, and have obtained encouraging\nresults In comparison to full-parameter fine-tuning, LoRA-based tuning\ndemonstrates salient benefits in terms of training costs. In this study, we\nundertook experimental comparisons between full-parameter fine-tuning and\nLoRA-based tuning methods, utilizing LLaMA as the base model. The experimental\nresults show that the selection of the foundational model, training dataset\nscale, learnable parameter quantity, and model training cost are all important\nfactors. We hope that the experimental conclusions of this paper can provide\ninspiration for training large language models, especially in the field of\nChinese, and help researchers find a better trade-off strategy between training\ncost and model performance. To facilitate the reproduction of the paper's\nresults, the dataset, model and code will be released.",
"authors": "Xianghui Sun, Yunjie Ji, Baochang Ma, Xiangang Li",
"published": "2023-04-17",
"updated": "2023-04-18",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2401.16405v2",
"title": "Scaling Sparse Fine-Tuning to Large Language Models",
"abstract": "Large Language Models (LLMs) are difficult to fully fine-tune (e.g., with\ninstructions or human feedback) due to their sheer number of parameters. A\nfamily of parameter-efficient sparse fine-tuning methods have proven promising\nin terms of performance but their memory requirements increase proportionally\nto the size of the LLMs. In this work, we scale sparse fine-tuning to\nstate-of-the-art LLMs like LLaMA 2 7B and 13B. We propose SpIEL, a novel sparse\nfine-tuning method which, for a desired density level, maintains an array of\nparameter indices and the deltas of these parameters relative to their\npretrained values. It iterates over: (a) updating the active deltas, (b)\npruning indices (based on the change of magnitude of their deltas) and (c)\nregrowth of indices. For regrowth, we explore two criteria based on either the\naccumulated gradients of a few candidate parameters or their approximate\nmomenta estimated using the efficient SM3 optimizer. We experiment with\ninstruction-tuning of LLMs on standard dataset mixtures, finding that SpIEL is\noften superior to popular parameter-efficient fine-tuning methods like LoRA\n(low-rank adaptation) in terms of performance and comparable in terms of run\ntime. We additionally show that SpIEL is compatible with both quantization and\nefficient optimizers, to facilitate scaling to ever-larger model sizes. We\nrelease the code for SpIEL at https://github.com/AlanAnsell/peft and for the\ninstruction-tuning experiments at https://github.com/ducdauge/sft-llm.",
"authors": "Alan Ansell, Ivan Vuli\u0107, Hannah Sterz, Anna Korhonen, Edoardo M. Ponti",
"published": "2024-01-29",
"updated": "2024-02-02",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.LG"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2309.12109v1",
"title": "PEFTT: Parameter-Efficient Fine-Tuning for low-resource Tibetan pre-trained language models",
"abstract": "In this era of large language models (LLMs), the traditional training of\nmodels has become increasingly unimaginable for regular users and institutions.\nThe exploration of efficient fine-tuning for high-resource languages on these\nmodels is an undeniable trend that is gradually gaining popularity. However,\nthere has been very little exploration for various low-resource languages, such\nas Tibetan. Research in Tibetan NLP is inherently scarce and limited. While\nthere is currently no existing large language model for Tibetan due to its\nlow-resource nature, that day will undoubtedly arrive. Therefore, research on\nefficient fine-tuning for low-resource language models like Tibetan is highly\nnecessary. Our research can serve as a reference to fill this crucial gap.\nEfficient fine-tuning strategies for pre-trained language models (PLMs) in\nTibetan have seen minimal exploration. We conducted three types of efficient\nfine-tuning experiments on the publicly available TNCC-title dataset:\n\"prompt-tuning,\" \"Adapter lightweight fine-tuning,\" and \"prompt-tuning +\nAdapter fine-tuning.\" The experimental results demonstrate significant\nimprovements using these methods, providing valuable insights for advancing\nTibetan language applications in the context of pre-trained models.",
"authors": "Zhou Mingjun, Daiqing Zhuoma, Qun Nuo, Nyima Tashi",
"published": "2023-09-21",
"updated": "2023-09-21",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2304.10880v3",
"title": "Med-Tuning: Parameter-Efficient Transfer Learning with Fine-Grained Feature Enhancement for Medical Volumetric Segmentation",
"abstract": "Deep learning-based medical volumetric segmentation methods either train the\nmodel from scratch or follow the standard ``pre-training then fine-tuning\"\nparadigm. Although fine-tuning a pre-trained model on downstream tasks can\nharness its representation power, the standard full fine-tuning is costly in\nterms of computation and memory footprint. In this paper, we present the study\non parameter-efficient transfer learning for medical volumetric segmentation\nand propose a new framework named Med-Tuning based on intra-stage feature\nenhancement and inter-stage feature interaction. Additionally, aiming at\nexploiting the intrinsic global properties of Fourier Transform for\nparameter-efficient transfer learning, a new adapter block namely Med-Adapter\nwith a well-designed Fourier Transform branch is proposed for effectively and\nefficiently modeling the crucial global context for medical volumetric\nsegmentation. Given a large-scale pre-trained model on 2D natural images, our\nmethod can exploit both the crucial spatial multi-scale feature and volumetric\ncorrelations along slices for accurate segmentation. Extensive experiments on\nthree benchmark datasets (including CT and MRI) show that our method can\nachieve better results than previous parameter-efficient transfer learning\nmethods on segmentation tasks, with much less tuned parameter costs. Compared\nto full fine-tuning, our method reduces the fine-tuned model parameters by up\nto 4x, with even better segmentation performance. The code will be made\npublicly available at https://github.com/jessie-chen99/Med-Tuning.",
"authors": "Wenxuan Wang, Jiachen Shen, Chen Chen, Jianbo Jiao, Jing Liu, Yan Zhang, Shanshan Song, Jiangyun Li",
"published": "2023-04-21",
"updated": "2023-11-30",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2208.02070v1",
"title": "Efficient Fine-Tuning of Compressed Language Models with Learners",
"abstract": "Fine-tuning BERT-based models is resource-intensive in memory, computation,\nand time. While many prior works aim to improve inference efficiency via\ncompression techniques, e.g., pruning, these works do not explicitly address\nthe computational challenges of training to downstream tasks. We introduce\nLearner modules and priming, novel methods for fine-tuning that exploit the\noverparameterization of pre-trained language models to gain benefits in\nconvergence speed and resource utilization. Learner modules navigate the double\nbind of 1) training efficiently by fine-tuning a subset of parameters, and 2)\ntraining effectively by ensuring quick convergence and high metric scores. Our\nresults on DistilBERT demonstrate that learners perform on par with or surpass\nthe baselines. Learners train 7x fewer parameters than state-of-the-art methods\non GLUE. On CoLA, learners fine-tune 20% faster, and have significantly lower\nresource utilization.",
"authors": "Danilo Vucetic, Mohammadreza Tayaranian, Maryam Ziaeefard, James J. Clark, Brett H. Meyer, Warren J. Gross",
"published": "2022-08-03",
"updated": "2022-08-03",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.LG"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2004.14129v1",
"title": "How fine can fine-tuning be? Learning efficient language models",
"abstract": "State-of-the-art performance on language understanding tasks is now achieved\nwith increasingly large networks; the current record holder has billions of\nparameters. Given a language model pre-trained on massive unlabeled text\ncorpora, only very light supervised fine-tuning is needed to learn a task: the\nnumber of fine-tuning steps is typically five orders of magnitude lower than\nthe total parameter count. Does this mean that fine-tuning only introduces\nsmall differences from the pre-trained model in the parameter space? If so, can\none avoid storing and computing an entire model for each task? In this work, we\naddress these questions by using Bidirectional Encoder Representations from\nTransformers (BERT) as an example. As expected, we find that the fine-tuned\nmodels are close in parameter space to the pre-trained one, with the closeness\nvarying from layer to layer. We show that it suffices to fine-tune only the\nmost critical layers. Further, we find that there are surprisingly many good\nsolutions in the set of sparsified versions of the pre-trained model. As a\nresult, fine-tuning of huge language models can be achieved by simply setting a\ncertain number of entries in certain layers of the pre-trained parameters to\nzero, saving both task-specific parameter storage and computational cost.",
"authors": "Evani Radiya-Dixit, Xin Wang",
"published": "2020-04-24",
"updated": "2020-04-24",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.LG",
"stat.ML"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2302.04870v1",
"title": "Offsite-Tuning: Transfer Learning without Full Model",
"abstract": "Transfer learning is important for foundation models to adapt to downstream\ntasks. However, many foundation models are proprietary, so users must share\ntheir data with model owners to fine-tune the models, which is costly and raise\nprivacy concerns. Moreover, fine-tuning large foundation models is\ncomputation-intensive and impractical for most downstream users. In this paper,\nwe propose Offsite-Tuning, a privacy-preserving and efficient transfer learning\nframework that can adapt billion-parameter foundation models to downstream data\nwithout access to the full model. In offsite-tuning, the model owner sends a\nlight-weight adapter and a lossy compressed emulator to the data owner, who\nthen fine-tunes the adapter on the downstream data with the emulator's\nassistance. The fine-tuned adapter is then returned to the model owner, who\nplugs it into the full model to create an adapted foundation model.\nOffsite-tuning preserves both parties' privacy and is computationally more\nefficient than the existing fine-tuning methods that require access to the full\nmodel weights. We demonstrate the effectiveness of offsite-tuning on various\nlarge language and vision foundation models. Offsite-tuning can achieve\ncomparable accuracy as full model fine-tuning while being privacy-preserving\nand efficient, achieving 6.5x speedup and 5.6x memory reduction. Code is\navailable at https://github.com/mit-han-lab/offsite-tuning.",
"authors": "Guangxuan Xiao, Ji Lin, Song Han",
"published": "2023-02-09",
"updated": "2023-02-09",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.CV",
"cs.LG"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2312.15698v3",
"title": "RepairLLaMA: Efficient Representations and Fine-Tuned Adapters for Program Repair",
"abstract": "Automated Program Repair (APR) has evolved significantly with the advent of\nLarge Language Models (LLMs). Fine-tuning LLMs for program repair is a recent\navenue of research, with many dimensions which have not been explored. Existing\nwork mostly fine-tunes LLMs with naive code representations and is\nfundamentally limited in its ability to fine-tune larger LLMs. To address this\nproblem, we propose RepairLLaMA, a novel program repair approach that combines\n1) code representations for APR and 2) the state-of-the-art parameter-efficient\nLLM fine-tuning technique called LoRA. This results in RepairLLaMA producing a\nhighly effective `program repair adapter' for fixing bugs with language models.\nOur experiments demonstrate the validity of both concepts. First, fine-tuning\nadapters with program repair specific code representations enables the model to\nuse meaningful repair signals. Second, parameter-efficient fine-tuning helps\nfine-tuning to converge and contributes to the effectiveness of the repair\nadapter to fix data-points outside the fine-tuning data distribution. Overall,\nRepairLLaMA correctly fixes 125 Defects4J v2 and 82 HumanEval-Java bugs,\noutperforming all baselines.",
"authors": "Andr\u00e9 Silva, Sen Fang, Martin Monperrus",
"published": "2023-12-25",
"updated": "2024-03-11",
"primary_cat": "cs.SE",
"cats": [
"cs.SE",
"cs.LG"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2307.08122v2",
"title": "Tangent Transformers for Composition, Privacy and Removal",
"abstract": "We introduce Tangent Attention Fine-Tuning (TAFT), a method for fine-tuning\nlinearized transformers obtained by computing a First-order Taylor Expansion\naround a pre-trained initialization. We show that the Jacobian-Vector Product\nresulting from linearization can be computed efficiently in a single forward\npass, reducing training and inference cost to the same order of magnitude as\nits original non-linear counterpart, while using the same number of parameters.\nFurthermore, we show that, when applied to various downstream visual\nclassification tasks, the resulting Tangent Transformer fine-tuned with TAFT\ncan perform comparably with fine-tuning the original non-linear network. Since\nTangent Transformers are linear with respect to the new set of weights, and the\nresulting fine-tuning loss is convex, we show that TAFT enjoys several\nadvantages compared to non-linear fine-tuning when it comes to model\ncomposition, parallel training, machine unlearning, and differential privacy.",
"authors": "Tian Yu Liu, Aditya Golatkar, Stefano Soatto",
"published": "2023-07-16",
"updated": "2023-07-20",
"primary_cat": "cs.LG",
"cats": [
"cs.LG"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2312.10136v2",
"title": "Gradient-based Parameter Selection for Efficient Fine-Tuning",
"abstract": "With the growing size of pre-trained models, full fine-tuning and storing all\nthe parameters for various downstream tasks is costly and infeasible. In this\npaper, we propose a new parameter-efficient fine-tuning method, Gradient-based\nParameter Selection (GPS), demonstrating that only tuning a few selected\nparameters from the pre-trained model while keeping the remainder of the model\nfrozen can generate similar or better performance compared with the full model\nfine-tuning method. Different from the existing popular and state-of-the-art\nparameter-efficient fine-tuning approaches, our method does not introduce any\nadditional parameters and computational costs during both the training and\ninference stages. Another advantage is the model-agnostic and non-destructive\nproperty, which eliminates the need for any other design specific to a\nparticular model. Compared with the full fine-tuning, GPS achieves 3.33%\n(91.78% vs. 88.45%, FGVC) and 9.61% (73.1% vs. 65.57%, VTAB) improvement of the\naccuracy with tuning only 0.36% parameters of the pre-trained model on average\nover 24 image classification tasks; it also demonstrates a significant\nimprovement of 17% and 16.8% in mDice and mIoU, respectively, on medical image\nsegmentation task. Moreover, GPS achieves state-of-the-art performance compared\nwith existing PEFT methods.",
"authors": "Zhi Zhang, Qizhe Zhang, Zijun Gao, Renrui Zhang, Ekaterina Shutova, Shiji Zhou, Shanghang Zhang",
"published": "2023-12-15",
"updated": "2024-05-04",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2303.15647v1",
"title": "Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning",
"abstract": "This paper presents a systematic overview and comparison of\nparameter-efficient fine-tuning methods covering over 40 papers published\nbetween February 2019 and February 2023. These methods aim to resolve the\ninfeasibility and impracticality of fine-tuning large language models by only\ntraining a small set of parameters. We provide a taxonomy that covers a broad\nrange of methods and present a detailed method comparison with a specific focus\non real-life efficiency and fine-tuning multibillion-scale language models.",
"authors": "Vladislav Lialin, Vijeta Deshpande, Anna Rumshisky",
"published": "2023-03-28",
"updated": "2023-03-28",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2402.11417v1",
"title": "LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models",
"abstract": "Various parameter-efficient fine-tuning (PEFT) techniques have been proposed\nto enable computationally efficient fine-tuning while maintaining model\nperformance. However, existing PEFT methods are still limited by the growing\nnumber of trainable parameters with the rapid deployment of Large Language\nModels (LLMs). To address this challenge, we present LoRETTA, an\nultra-parameter-efficient framework that significantly reduces trainable\nparameters through tensor-train decomposition. Specifically, we propose two\nmethods, named {LoRETTA}$_{adp}$ and {LoRETTA}$_{rep}$. The former employs\ntensorized adapters, offering a high-performance yet lightweight approach for\nthe fine-tuning of LLMs. The latter emphasizes fine-tuning via weight\nparameterization with a set of small tensor factors. LoRETTA achieves\ncomparable or better performance than most widely used PEFT methods with up to\n$100\\times$ fewer parameters on the LLaMA-2-7B models. Furthermore, empirical\nresults demonstrate that the proposed method effectively improves training\nefficiency, enjoys better multi-task learning performance, and enhances the\nanti-overfitting capability. Plug-and-play codes built upon the Huggingface\nframework and PEFT library will be released.",
"authors": "Yifan Yang, Jiajun Zhou, Ngai Wong, Zheng Zhang",
"published": "2024-02-18",
"updated": "2024-02-18",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.LG"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2310.03123v1",
"title": "Efficient Federated Prompt Tuning for Black-box Large Pre-trained Models",
"abstract": "With the blowout development of pre-trained models (PTMs), the efficient\ntuning of these models for diverse downstream applications has emerged as a\npivotal research concern. Although recent investigations into prompt tuning\nhave provided promising avenues, three salient challenges persist: (1) memory\nconstraint: the continuous growth in the size of open-source PTMs renders\nfine-tuning, even a fraction of their parameters, challenging for many\npractitioners. (2) model privacy: existing PTMs often function as public API\nservices, with their parameters inaccessible for effective or tailored\nfine-tuning. (3) data privacy: the fine-tuning of PTMs necessitates\nhigh-quality datasets, which are typically localized and not shared to public.\nTo optimally harness each local dataset while navigating memory constraints and\npreserving privacy, we propose Federated Black-Box Prompt Tuning (Fed-BBPT).\nThis innovative approach eschews reliance on parameter architectures and\nprivate dataset access, instead capitalizing on a central server that aids\nlocal users in collaboratively training a prompt generator through regular\naggregation. Local users leverage API-driven learning via a zero-order\noptimizer, obviating the need for PTM deployment. Relative to extensive\nfine-tuning, Fed-BBPT proficiently sidesteps memory challenges tied to PTM\nstorage and fine-tuning on local machines, tapping into comprehensive,\nhigh-quality, yet private training datasets. A thorough evaluation across 40\ndatasets spanning CV and NLP tasks underscores the robustness of our proposed\nmodel.",
"authors": "Zihao Lin, Yan Sun, Yifan Shi, Xueqian Wang, Lifu Huang, Li Shen, Dacheng Tao",
"published": "2023-10-04",
"updated": "2023-10-04",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2305.15348v1",
"title": "READ: Recurrent Adaptation of Large Transformers",
"abstract": "Fine-tuning large-scale Transformers has led to the explosion of many AI\napplications across Natural Language Processing and Computer Vision tasks.\nHowever, fine-tuning all pre-trained model parameters becomes impractical as\nthe model size and number of tasks increase. Parameter-efficient transfer\nlearning (PETL) methods aim to address these challenges. While effective in\nreducing the number of trainable parameters, PETL methods still require\nsignificant energy and computational resources to fine-tune. In this paper, we\nintroduce \\textbf{RE}current \\textbf{AD}aption (READ) -- a lightweight and\nmemory-efficient fine-tuning method -- to overcome the limitations of the\ncurrent PETL approaches. Specifically, READ inserts a small RNN network\nalongside the backbone model so that the model does not have to back-propagate\nthrough the large backbone network. Through comprehensive empirical evaluation\nof the GLUE benchmark, we demonstrate READ can achieve a $56\\%$ reduction in\nthe training memory consumption and an $84\\%$ reduction in the GPU energy usage\nwhile retraining high model quality compared to full-tuning. Additionally, the\nmodel size of READ does not grow with the backbone model size, making it a\nhighly scalable solution for fine-tuning large Transformers.",
"authors": "Sid Wang, John Nguyen, Ke Li, Carole-Jean Wu",
"published": "2023-05-24",
"updated": "2023-05-24",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2304.05216v1",
"title": "Towards Efficient Fine-tuning of Pre-trained Code Models: An Experimental Study and Beyond",
"abstract": "Recently, fine-tuning pre-trained code models such as CodeBERT on downstream\ntasks has achieved great success in many software testing and analysis tasks.\nWhile effective and prevalent, fine-tuning the pre-trained parameters incurs a\nlarge computational cost. In this paper, we conduct an extensive experimental\nstudy to explore what happens to layer-wise pre-trained representations and\ntheir encoded code knowledge during fine-tuning. We then propose efficient\nalternatives to fine-tune the large pre-trained code model based on the above\nfindings. Our experimental study shows that (1) lexical, syntactic and\nstructural properties of source code are encoded in the lower, intermediate,\nand higher layers, respectively, while the semantic property spans across the\nentire model. (2) The process of fine-tuning preserves most of the code\nproperties. Specifically, the basic code properties captured by lower and\nintermediate layers are still preserved during fine-tuning. Furthermore, we\nfind that only the representations of the top two layers change most during\nfine-tuning for various downstream tasks. (3) Based on the above findings, we\npropose Telly to efficiently fine-tune pre-trained code models via layer\nfreezing. The extensive experimental results on five various downstream tasks\ndemonstrate that training parameters and the corresponding time cost are\ngreatly reduced, while performances are similar or better. Replication package\nincluding source code, datasets, and online Appendix is available at:\n\\url{https://github.com/DeepSoftwareAnalytics/Telly}.",
"authors": "Ensheng Shi, Yanlin Wang, Hongyu Zhang, Lun Du, Shi Han, Dongmei Zhang, Hongbin Sun",
"published": "2023-04-11",
"updated": "2023-04-11",
"primary_cat": "cs.SE",
"cats": [
"cs.SE",
"cs.AI",
"cs.CL"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2311.07820v1",
"title": "On the Analysis of Cross-Lingual Prompt Tuning for Decoder-based Multilingual Model",
"abstract": "An exciting advancement in the field of multilingual models is the emergence\nof autoregressive models with zero- and few-shot capabilities, a phenomenon\nwidely reported in large-scale language models. To further improve model\nadaptation to cross-lingual tasks, another trend is to further fine-tune the\nlanguage models with either full fine-tuning or parameter-efficient tuning.\nHowever, the interaction between parameter-efficient fine-tuning (PEFT) and\ncross-lingual tasks in multilingual autoregressive models has yet to be\nstudied. Specifically, we lack an understanding of the role of linguistic\ndistributions in multilingual models in the effectiveness of token-based prompt\ntuning. To address this question, we conduct experiments comparing prompt\ntuning and fine-tuning on the decoder-based multilingual model, XGLM, with four\ncross-lingual tasks (XNLI, PAWS-X, POS, NER). According to our study, prompt\ntuning achieves on par or better performance over fine-tuning across all\nlanguages while updating at most 0.13\\% of the model parameters. Moreover, we\nempirically show that prompt tuning is more effective in enhancing the\nperformance of low-resource languages than fine-tuning. Our further analysis\nshows that the phenomenon is related to the tokenization scheme of the\nmultilingual model.",
"authors": "Nohil Park, Joonsuk Park, Kang Min Yoo, Sungroh Yoon",
"published": "2023-11-14",
"updated": "2023-11-14",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2403.20284v1",
"title": "LayerNorm: A key component in parameter-efficient fine-tuning",
"abstract": "Fine-tuning a pre-trained model, such as Bidirectional Encoder\nRepresentations from Transformers (BERT), has been proven to be an effective\nmethod for solving many natural language processing (NLP) tasks. However, due\nto the large number of parameters in many state-of-the-art NLP models,\nincluding BERT, the process of fine-tuning is computationally expensive. One\nattractive solution to this issue is parameter-efficient fine-tuning, which\ninvolves modifying only a minimal segment of the model while keeping the\nremainder unchanged. Yet, it remains unclear which segment of the BERT model is\ncrucial for fine-tuning. In this paper, we first analyze different components\nin the BERT model to pinpoint which one undergoes the most significant changes\nafter fine-tuning. We find that output LayerNorm changes more than any other\ncomponents when fine-tuned for different General Language Understanding\nEvaluation (GLUE) tasks. Then we show that only fine-tuning the LayerNorm can\nreach comparable, or in some cases better, performance to full fine-tuning and\nother parameter-efficient fine-tuning methods. Moreover, we use Fisher\ninformation to determine the most critical subset of LayerNorm and demonstrate\nthat many NLP tasks in the GLUE benchmark can be solved by fine-tuning only a\nsmall portion of LayerNorm with negligible performance degradation.",
"authors": "Taha ValizadehAslani, Hualou Liang",
"published": "2024-03-29",
"updated": "2024-03-29",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.LG"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2203.06904v2",
"title": "Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models",
"abstract": "Despite the success, the process of fine-tuning large-scale PLMs brings\nprohibitive adaptation costs. In fact, fine-tuning all the parameters of a\ncolossal model and retaining separate instances for different tasks are\npractically infeasible. This necessitates a new branch of research focusing on\nthe parameter-efficient adaptation of PLMs, dubbed as delta tuning in this\npaper. In contrast with the standard fine-tuning, delta tuning only fine-tunes\na small portion of the model parameters while keeping the rest untouched,\nlargely reducing both the computation and storage costs. Recent studies have\ndemonstrated that a series of delta tuning methods with distinct tuned\nparameter selection could achieve performance on a par with full-parameter\nfine-tuning, suggesting a new promising way of stimulating large-scale PLMs. In\nthis paper, we first formally describe the problem of delta tuning and then\ncomprehensively review recent delta tuning approaches. We also propose a\nunified categorization criterion that divide existing delta tuning methods into\nthree groups: addition-based, specification-based, and reparameterization-based\nmethods. Though initially proposed as an efficient method to steer large\nmodels, we believe that some of the fascinating evidence discovered along with\ndelta tuning could help further reveal the mechanisms of PLMs and even deep\nneural networks. To this end, we discuss the theoretical principles underlying\nthe effectiveness of delta tuning and propose frameworks to interpret delta\ntuning from the perspective of optimization and optimal control, respectively.\nFurthermore, we provide a holistic empirical study of representative methods,\nwhere results on over 100 NLP tasks demonstrate a comprehensive performance\ncomparison of different approaches. The experimental results also cover the\nanalysis of combinatorial, scaling and transferable properties of delta tuning.",
"authors": "Ning Ding, Yujia Qin, Guang Yang, Fuchao Wei, Zonghan Yang, Yusheng Su, Shengding Hu, Yulin Chen, Chi-Min Chan, Weize Chen, Jing Yi, Weilin Zhao, Xiaozhi Wang, Zhiyuan Liu, Hai-Tao Zheng, Jianfei Chen, Yang Liu, Jie Tang, Juanzi Li, Maosong Sun",
"published": "2022-03-14",
"updated": "2022-03-15",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.LG"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2305.16742v1",
"title": "Parameter-Efficient Fine-Tuning without Introducing New Latency",
"abstract": "Parameter-efficient fine-tuning (PEFT) of pre-trained language models has\nrecently demonstrated remarkable achievements, effectively matching the\nperformance of full fine-tuning while utilizing significantly fewer trainable\nparameters, and consequently addressing the storage and communication\nconstraints. Nonetheless, various PEFT methods are limited by their inherent\ncharacteristics. In the case of sparse fine-tuning, which involves modifying\nonly a small subset of the existing parameters, the selection of fine-tuned\nparameters is task- and domain-specific, making it unsuitable for federated\nlearning. On the other hand, PEFT methods with adding new parameters typically\nintroduce additional inference latency. In this paper, we demonstrate the\nfeasibility of generating a sparse mask in a task-agnostic manner, wherein all\ndownstream tasks share a common mask. Our approach, which relies solely on the\nmagnitude information of pre-trained parameters, surpasses existing\nmethodologies by a significant margin when evaluated on the GLUE benchmark.\nAdditionally, we introduce a novel adapter technique that directly applies the\nadapter to pre-trained parameters instead of the hidden representation, thereby\nachieving identical inference speed to that of full fine-tuning. Through\nextensive experiments, our proposed method attains a new state-of-the-art\noutcome in terms of both performance and storage efficiency, storing only 0.03%\nparameters of full fine-tuning.",
"authors": "Baohao Liao, Yan Meng, Christof Monz",
"published": "2023-05-26",
"updated": "2023-05-26",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.LG"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2308.06522v1",
"title": "SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models",
"abstract": "Transfer learning via fine-tuning pre-trained transformer models has gained\nsignificant success in delivering state-of-the-art results across various NLP\ntasks. In the absence of centralized data, Federated Learning (FL) can benefit\nfrom distributed and private data of the FL edge clients for fine-tuning.\nHowever, due to the limited communication, computation, and storage\ncapabilities of edge devices and the huge sizes of popular transformer models,\nefficient fine-tuning is crucial to make federated training feasible. This work\nexplores the opportunities and challenges associated with applying parameter\nefficient fine-tuning (PEFT) methods in different FL settings for language\ntasks. Specifically, our investigation reveals that as the data across users\nbecomes more diverse, the gap between fully fine-tuning the model and employing\nPEFT methods widens. To bridge this performance gap, we propose a method called\nSLoRA, which overcomes the key limitations of LoRA in high heterogeneous data\nscenarios through a novel data-driven initialization technique. Our\nexperimental results demonstrate that SLoRA achieves performance comparable to\nfull fine-tuning, with significant sparse updates with approximately $\\sim 1\\%$\ndensity while reducing training time by up to $90\\%$.",
"authors": "Sara Babakniya, Ahmed Roushdy Elkordy, Yahya H. Ezzeldin, Qingfeng Liu, Kee-Bong Song, Mostafa El-Khamy, Salman Avestimehr",
"published": "2023-08-12",
"updated": "2023-08-12",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2402.18331v2",
"title": "FineDiffusion: Scaling up Diffusion Models for Fine-grained Image Generation with 10,000 Classes",
"abstract": "The class-conditional image generation based on diffusion models is renowned\nfor generating high-quality and diverse images. However, most prior efforts\nfocus on generating images for general categories, e.g., 1000 classes in\nImageNet-1k. A more challenging task, large-scale fine-grained image\ngeneration, remains the boundary to explore. In this work, we present a\nparameter-efficient strategy, called FineDiffusion, to fine-tune large\npre-trained diffusion models scaling to large-scale fine-grained image\ngeneration with 10,000 categories. FineDiffusion significantly accelerates\ntraining and reduces storage overhead by only fine-tuning tiered class\nembedder, bias terms, and normalization layers' parameters. To further improve\nthe image generation quality of fine-grained categories, we propose a novel\nsampling method for fine-grained image generation, which utilizes\nsuperclass-conditioned guidance, specifically tailored for fine-grained\ncategories, to replace the conventional classifier-free guidance sampling.\nCompared to full fine-tuning, FineDiffusion achieves a remarkable 1.56x\ntraining speed-up and requires storing merely 1.77% of the total model\nparameters, while achieving state-of-the-art FID of 9.776 on image generation\nof 10,000 classes. Extensive qualitative and quantitative experiments\ndemonstrate the superiority of our method compared to other parameter-efficient\nfine-tuning methods. The code and more generated results are available at our\nproject website: https://finediffusion.github.io/.",
"authors": "Ziying Pan, Kun Wang, Gang Li, Feihong He, Xiwang Li, Yongxuan Lai",
"published": "2024-02-28",
"updated": "2024-04-07",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2401.06432v2",
"title": "Heterogeneous LoRA for Federated Fine-tuning of On-Device Foundation Models",
"abstract": "Foundation models (FMs) adapt well to specific domains or tasks with\nfine-tuning, and federated learning (FL) enables the potential for\nprivacy-preserving fine-tuning of the FMs with on-device local data. For\nfederated fine-tuning of FMs, we consider the FMs with small to medium\nparameter sizes of single digit billion at maximum, referred to as on-device\nFMs (ODFMs) that can be deployed on devices for inference but can only be\nfine-tuned with parameter efficient methods. In our work, we tackle the data\nand system heterogeneity problem of federated fine-tuning of ODFMs by proposing\na novel method using heterogeneous low-rank approximations (LoRAs), namely\nHetLoRA. First, we show that the naive approach of using homogeneous LoRA ranks\nacross devices face a trade-off between overfitting and slow convergence, and\nthus propose HetLoRA, which allows heterogeneous ranks across client devices\nand efficiently aggregates and distributes these heterogeneous LoRA modules. By\napplying rank self-pruning locally and sparsity-weighted aggregation at the\nserver, HetLoRA combines the advantages of high and low-rank LoRAs, which\nachieves improved convergence speed and final performance compared to\nhomogeneous LoRA. Furthermore, HetLoRA offers enhanced computation efficiency\ncompared to full fine-tuning, making it suitable for federated fine-tuning\nacross heterogeneous devices.",
"authors": "Yae Jee Cho, Luyang Liu, Zheng Xu, Aldi Fahrezi, Gauri Joshi",
"published": "2024-01-12",
"updated": "2024-02-20",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.DC"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2401.04051v1",
"title": "Empirical Analysis of Efficient Fine-Tuning Methods for Large Pre-Trained Language Models",
"abstract": "Fine-tuning large pre-trained language models for downstream tasks remains a\ncritical challenge in natural language processing. This paper presents an\nempirical analysis comparing two efficient fine-tuning methods - BitFit and\nadapter modules - to standard full model fine-tuning. Experiments conducted on\nGLUE benchmark datasets (MRPC, COLA, STS-B) reveal several key insights. The\nBitFit approach, which trains only bias terms and task heads, matches full\nfine-tuning performance across varying amounts of training data and time\nconstraints. It demonstrates remarkable stability even with only 30\\% of data,\noutperforming full fine-tuning at intermediate data levels. Adapter modules\nexhibit high variability, with inconsistent gains over default models. The\nfindings indicate BitFit offers an attractive balance between performance and\nparameter efficiency. Our work provides valuable perspectives on model tuning,\nemphasizing robustness and highlighting BitFit as a promising alternative for\nresource-constrained or streaming task settings. The analysis offers actionable\nguidelines for efficient adaptation of large pre-trained models, while\nillustrating open challenges in stabilizing techniques like adapter modules.",
"authors": "Nigel Doering, Cyril Gorlla, Trevor Tuttle, Adhvaith Vijay",
"published": "2024-01-08",
"updated": "2024-01-08",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.CL"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2205.12453v2",
"title": "Know Where You're Going: Meta-Learning for Parameter-Efficient Fine-Tuning",
"abstract": "A recent family of techniques, dubbed lightweight fine-tuning methods,\nfacilitates parameter-efficient transfer learning by updating only a small set\nof additional parameters while keeping the parameters of the pretrained\nlanguage model frozen. While proven to be an effective method, there are no\nexisting studies on if and how such knowledge of the downstream fine-tuning\napproach should affect the pretraining stage. In this work, we show that taking\nthe ultimate choice of fine-tuning method into consideration boosts the\nperformance of parameter-efficient fine-tuning. By relying on\noptimization-based meta-learning using MAML with certain modifications for our\ndistinct purpose, we prime the pretrained model specifically for\nparameter-efficient fine-tuning, resulting in gains of up to 1.7 points on\ncross-lingual NER fine-tuning. Our ablation settings and analyses further\nreveal that the tweaks we introduce in MAML are crucial for the attained gains.",
"authors": "Mozhdeh Gheini, Xuezhe Ma, Jonathan May",
"published": "2022-05-25",
"updated": "2022-12-08",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2312.14327v1",
"title": "Parameter Efficient Tuning Allows Scalable Personalization of LLMs for Text Entry: A Case Study on Abbreviation Expansion",
"abstract": "Abbreviation expansion is a strategy used to speed up communication by\nlimiting the amount of typing and using a language model to suggest expansions.\nHere we look at personalizing a Large Language Model's (LLM) suggestions based\non prior conversations to enhance the relevance of predictions, particularly\nwhen the user data is small (~1000 samples). Specifically, we compare\nfine-tuning, prompt-tuning, and retrieval augmented generation of expanded text\nsuggestions for abbreviated inputs. Our case study with a deployed 8B parameter\nLLM on a real user living with ALS, and experiments on movie character\npersonalization indicates that (1) customization may be necessary in some\nscenarios and prompt-tuning generalizes well to those, (2) fine-tuning on\nin-domain data (with as few as 600 samples) still shows some gains, however (3)\nretrieval augmented few-shot selection also outperforms fine-tuning. (4)\nParameter efficient tuning allows for efficient and scalable personalization.\nFor prompt-tuning, we also find that initializing the learned \"soft-prompts\" to\nuser relevant concept tokens leads to higher accuracy than random\ninitialization.",
"authors": "Katrin Tomanek, Shanqing Cai, Subhashini Venugopalan",
"published": "2023-12-21",
"updated": "2023-12-21",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2405.00201v1",
"title": "SPAFIT: Stratified Progressive Adaptation Fine-tuning for Pre-trained Large Language Models",
"abstract": "Full fine-tuning is a popular approach to adapt Transformer-based pre-trained\nlarge language models to a specific downstream task. However, the substantial\nrequirements for computational power and storage have discouraged its\nwidespread use. Moreover, increasing evidence of catastrophic forgetting and\noverparameterization in the Transformer architecture has motivated researchers\nto seek more efficient fine-tuning (PEFT) methods. Commonly known\nparameter-efficient fine-tuning methods like LoRA and BitFit are typically\napplied across all layers of the model. We propose a PEFT method, called\nStratified Progressive Adaptation Fine-tuning (SPAFIT), based on the\nlocalization of different types of linguistic knowledge to specific layers of\nthe model. Our experiments, conducted on nine tasks from the GLUE benchmark,\nshow that our proposed SPAFIT method outperforms other PEFT methods while\nfine-tuning only a fraction of the parameters adjusted by other methods.",
"authors": "Samir Arora, Liangliang Wang",
"published": "2024-04-30",
"updated": "2024-04-30",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2212.10929v1",
"title": "SPT: Semi-Parametric Prompt Tuning for Multitask Prompted Learning",
"abstract": "Pre-trained large language models can efficiently interpolate human-written\nprompts in a natural way. Multitask prompted learning can help generalization\nthrough a diverse set of tasks at once, thus enhancing the potential for more\neffective downstream fine-tuning. To perform efficient multitask-inference in\nthe same batch, parameter-efficient fine-tuning methods such as prompt tuning\nhave been proposed. However, the existing prompt tuning methods may lack\ngeneralization. We propose SPT, a semi-parametric prompt tuning method for\nmultitask prompted learning. The novel component of SPT is a memory bank from\nwhere memory prompts are retrieved based on discrete prompts. Extensive\nexperiments, such as (i) fine-tuning a full language model with SPT on 31\ndifferent tasks from 8 different domains and evaluating zero-shot\ngeneralization on 9 heldout datasets under 5 NLP task categories and (ii)\npretraining SPT on the GLUE datasets and evaluating fine-tuning on the\nSuperGLUE datasets, demonstrate effectiveness of SPT.",
"authors": "M Saiful Bari, Aston Zhang, Shuai Zheng, Xingjian Shi, Yi Zhu, Shafiq Joty, Mu Li",
"published": "2022-12-21",
"updated": "2022-12-21",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.LG"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2305.07491v1",
"title": "A Comprehensive Analysis of Adapter Efficiency",
"abstract": "Adapters have been positioned as a parameter-efficient fine-tuning (PEFT)\napproach, whereby a minimal number of parameters are added to the model and\nfine-tuned. However, adapters have not been sufficiently analyzed to understand\nif PEFT translates to benefits in training/deployment efficiency and\nmaintainability/extensibility. Through extensive experiments on many adapters,\ntasks, and languages in supervised and cross-lingual zero-shot settings, we\nclearly show that for Natural Language Understanding (NLU) tasks, the parameter\nefficiency in adapters does not translate to efficiency gains compared to full\nfine-tuning of models. More precisely, adapters are relatively expensive to\ntrain and have slightly higher deployment latency. Furthermore, the\nmaintainability/extensibility benefits of adapters can be achieved with simpler\napproaches like multi-task training via full fine-tuning, which also provide\nrelatively faster training times. We, therefore, recommend that for moderately\nsized models for NLU tasks, practitioners should rely on full fine-tuning or\nmulti-task training rather than using adapters. Our code is available at\nhttps://github.com/AI4Bharat/adapter-efficiency.",
"authors": "Nandini Mundra, Sumanth Doddapaneni, Raj Dabre, Anoop Kunchukuttan, Ratish Puduppully, Mitesh M. Khapra",
"published": "2023-05-12",
"updated": "2023-05-12",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2106.04647v2",
"title": "Compacter: Efficient Low-Rank Hypercomplex Adapter Layers",
"abstract": "Adapting large-scale pretrained language models to downstream tasks via\nfine-tuning is the standard method for achieving state-of-the-art performance\non NLP benchmarks. However, fine-tuning all weights of models with millions or\nbillions of parameters is sample-inefficient, unstable in low-resource\nsettings, and wasteful as it requires storing a separate copy of the model for\neach task. Recent work has developed parameter-efficient fine-tuning methods,\nbut these approaches either still require a relatively large number of\nparameters or underperform standard fine-tuning. In this work, we propose\nCompacter, a method for fine-tuning large-scale language models with a better\ntrade-off between task performance and the number of trainable parameters than\nprior work. Compacter accomplishes this by building on top of ideas from\nadapters, low-rank optimization, and parameterized hypercomplex multiplication\nlayers. Specifically, Compacter inserts task-specific weight matrices into a\npretrained model's weights, which are computed efficiently as a sum of\nKronecker products between shared \"slow\" weights and \"fast\" rank-one matrices\ndefined per Compacter layer. By only training 0.047% of a pretrained model's\nparameters, Compacter performs on par with standard fine-tuning on GLUE and\noutperforms standard fine-tuning on SuperGLUE and low-resource settings. Our\ncode is publicly available at~\\url{https://github.com/rabeehk/compacter}.",
"authors": "Rabeeh Karimi Mahabadi, James Henderson, Sebastian Ruder",
"published": "2021-06-08",
"updated": "2021-11-27",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2205.11277v2",
"title": "When does Parameter-Efficient Transfer Learning Work for Machine Translation?",
"abstract": "Parameter-efficient fine-tuning methods (PEFTs) offer the promise of adapting\nlarge pre-trained models while only tuning a small number of parameters. They\nhave been shown to be competitive with full model fine-tuning for many\ndownstream tasks. However, prior work indicates that PEFTs may not work as well\nfor machine translation (MT), and there is no comprehensive study showing when\nPEFTs work for MT. We conduct a comprehensive empirical study of PEFTs for MT,\nconsidering (1) various parameter budgets, (2) a diverse set of language-pairs,\nand (3) different pre-trained models. We find that 'adapters', in which small\nfeed-forward networks are added after every layer, are indeed on par with full\nmodel fine-tuning when the parameter budget corresponds to 10% of total model\nparameters. Nevertheless, as the number of tuned parameters decreases, the\nperformance of PEFTs decreases. The magnitude of this decrease depends on the\nlanguage pair, with PEFTs particularly struggling for distantly related\nlanguage-pairs. We find that using PEFTs with a larger pre-trained model\noutperforms full fine-tuning with a smaller model, and for smaller training\ndata sizes, PEFTs outperform full fine-tuning for the same pre-trained model.",
"authors": "Ahmet \u00dcst\u00fcn, Asa Cooper Stickland",
"published": "2022-05-23",
"updated": "2022-10-24",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2312.08881v1",
"title": "AdaptIR: Parameter Efficient Multi-task Adaptation for Pre-trained Image Restoration Models",
"abstract": "Pre-training has shown promising results on various image restoration tasks,\nwhich is usually followed by full fine-tuning for each specific downstream task\n(e.g., image denoising). However, such full fine-tuning usually suffers from\nthe problems of heavy computational cost in practice, due to the massive\nparameters of pre-trained restoration models, thus limiting its real-world\napplications. Recently, Parameter Efficient Transfer Learning (PETL) offers an\nefficient alternative solution to full fine-tuning, yet still faces great\nchallenges for pre-trained image restoration models, due to the diversity of\ndifferent degradations. To address these issues, we propose AdaptIR, a novel\nparameter efficient transfer learning method for adapting pre-trained\nrestoration models. Specifically, the proposed method consists of a\nmulti-branch inception structure to orthogonally capture local spatial, global\nspatial, and channel interactions. In this way, it allows powerful\nrepresentations under a very low parameter budget. Extensive experiments\ndemonstrate that the proposed method can achieve comparable or even better\nperformance than full fine-tuning, while only using 0.6% parameters. Code is\navailable at https://github.com/csguoh/AdaptIR.",
"authors": "Hang Guo, Tao Dai, Yuanchao Bai, Bin Chen, Shu-Tao Xia, Zexuan Zhu",
"published": "2023-12-12",
"updated": "2023-12-12",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2303.15822v1",
"title": "One Adapter for All Programming Languages? Adapter Tuning for Code Search and Summarization",
"abstract": "As pre-trained models automate many code intelligence tasks, a widely used\nparadigm is to fine-tune a model on the task dataset for each programming\nlanguage. A recent study reported that multilingual fine-tuning benefits a\nrange of tasks and models. However, we find that multilingual fine-tuning leads\nto performance degradation on recent models UniXcoder and CodeT5.\n To alleviate the potentially catastrophic forgetting issue in multilingual\nmodels, we fix all pre-trained model parameters, insert the parameter-efficient\nstructure adapter, and fine-tune it. Updating only 0.6\\% of the overall\nparameters compared to full-model fine-tuning for each programming language,\nadapter tuning yields consistent improvements on code search and summarization\ntasks, achieving state-of-the-art results. In addition, we experimentally show\nits effectiveness in cross-lingual and low-resource scenarios. Multilingual\nfine-tuning with 200 samples per programming language approaches the results\nfine-tuned with the entire dataset on code summarization. Our experiments on\nthree probing tasks show that adapter tuning significantly outperforms\nfull-model fine-tuning and effectively overcomes catastrophic forgetting.",
"authors": "Deze Wang, Boxing Chen, Shanshan Li, Wei Luo, Shaoliang Peng, Wei Dong, Xiangke Liao",
"published": "2023-03-28",
"updated": "2023-03-28",
"primary_cat": "cs.SE",
"cats": [
"cs.SE",
"cs.AI"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2309.00363v1",
"title": "FederatedScope-LLM: A Comprehensive Package for Fine-tuning Large Language Models in Federated Learning",
"abstract": "LLMs have demonstrated great capabilities in various NLP tasks. Different\nentities can further improve the performance of those LLMs on their specific\ndownstream tasks by fine-tuning LLMs. When several entities have similar\ninterested tasks, but their data cannot be shared because of privacy concerns\nregulations, federated learning (FL) is a mainstream solution to leverage the\ndata of different entities. However, fine-tuning LLMs in federated learning\nsettings still lacks adequate support from existing FL frameworks because it\nhas to deal with optimizing the consumption of significant communication and\ncomputational resources, data preparation for different tasks, and distinct\ninformation protection demands. This paper first discusses these challenges of\nfederated fine-tuning LLMs, and introduces our package FS-LLM as a main\ncontribution, which consists of the following components: (1) we build an\nend-to-end benchmarking pipeline, automizing the processes of dataset\npreprocessing, federated fine-tuning execution, and performance evaluation on\nfederated LLM fine-tuning; (2) we provide comprehensive federated\nparameter-efficient fine-tuning algorithm implementations and versatile\nprogramming interfaces for future extension in FL scenarios with low\ncommunication and computation costs, even without accessing the full model; (3)\nwe adopt several accelerating and resource-efficient operators for fine-tuning\nLLMs with limited resources and the flexible pluggable sub-routines for\ninterdisciplinary study. We conduct extensive experiments to validate the\neffectiveness of FS-LLM and benchmark advanced LLMs with state-of-the-art\nparameter-efficient fine-tuning algorithms in FL settings, which also yields\nvaluable insights into federated fine-tuning LLMs for the research community.\nTo facilitate further research and adoption, we release FS-LLM at\nhttps://github.com/alibaba/FederatedScope/tree/llm.",
"authors": "Weirui Kuang, Bingchen Qian, Zitao Li, Daoyuan Chen, Dawei Gao, Xuchen Pan, Yuexiang Xie, Yaliang Li, Bolin Ding, Jingren Zhou",
"published": "2023-09-01",
"updated": "2023-09-01",
"primary_cat": "cs.LG",
"cats": [
"cs.LG"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2102.05207v3",
"title": "Transfer Reinforcement Learning across Homotopy Classes",
"abstract": "The ability for robots to transfer their learned knowledge to new tasks --\nwhere data is scarce -- is a fundamental challenge for successful robot\nlearning. While fine-tuning has been well-studied as a simple but effective\ntransfer approach in the context of supervised learning, it is not as\nwell-explored in the context of reinforcement learning. In this work, we study\nthe problem of fine-tuning in transfer reinforcement learning when tasks are\nparameterized by their reward functions, which are known beforehand. We\nconjecture that fine-tuning drastically underperforms when source and target\ntrajectories are part of different homotopy classes. We demonstrate that\nfine-tuning policy parameters across homotopy classes compared to fine-tuning\nwithin a homotopy class requires more interaction with the environment, and in\ncertain cases is impossible. We propose a novel fine-tuning algorithm,\nEase-In-Ease-Out fine-tuning, that consists of a relaxing stage and a\ncurriculum learning stage to enable transfer learning across homotopy classes.\nFinally, we evaluate our approach on several robotics-inspired simulated\nenvironments and empirically verify that the Ease-In-Ease-Out fine-tuning\nmethod can successfully fine-tune in a sample-efficient way compared to\nexisting baselines.",
"authors": "Zhangjie Cao, Minae Kwon, Dorsa Sadigh",
"published": "2021-02-10",
"updated": "2021-08-08",
"primary_cat": "cs.RO",
"cats": [
"cs.RO"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2210.17451v2",
"title": "AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning",
"abstract": "Standard fine-tuning of large pre-trained language models (PLMs) for\ndownstream tasks requires updating hundreds of millions to billions of\nparameters, and storing a large copy of the PLM weights for every task\nresulting in increased cost for storing, sharing and serving the models. To\naddress this, parameter-efficient fine-tuning (PEFT) techniques were introduced\nwhere small trainable components are injected in the PLM and updated during\nfine-tuning. We propose AdaMix as a general PEFT method that tunes a mixture of\nadaptation modules -- given the underlying PEFT method of choice -- introduced\nin each Transformer layer while keeping most of the PLM weights frozen. For\ninstance, AdaMix can leverage a mixture of adapters like Houlsby or a mixture\nof low rank decomposition matrices like LoRA to improve downstream task\nperformance over the corresponding PEFT methods for fully supervised and\nfew-shot NLU and NLG tasks. Further, we design AdaMix such that it matches the\nsame computational cost and the number of tunable parameters as the underlying\nPEFT method. By only tuning 0.1-0.2% of PLM parameters, we show that AdaMix\noutperforms SOTA parameter-efficient fine-tuning and full model fine-tuning for\nboth NLU and NLG tasks.",
"authors": "Yaqing Wang, Sahaj Agarwal, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, Jianfeng Gao",
"published": "2022-10-31",
"updated": "2022-11-02",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.LG"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2305.17333v3",
"title": "Fine-Tuning Language Models with Just Forward Passes",
"abstract": "Fine-tuning language models (LMs) has yielded success on diverse downstream\ntasks, but as LMs grow in size, backpropagation requires a prohibitively large\namount of memory. Zeroth-order (ZO) methods can in principle estimate gradients\nusing only two forward passes but are theorized to be catastrophically slow for\noptimizing large models. In this work, we propose a memory-efficient\nzerothorder optimizer (MeZO), adapting the classical ZO-SGD method to operate\nin-place, thereby fine-tuning LMs with the same memory footprint as inference.\nFor example, with a single A100 80GB GPU, MeZO can train a 30-billion parameter\nmodel, whereas fine-tuning with backpropagation can train only a 2.7B LM with\nthe same budget. We conduct comprehensive experiments across model types\n(masked and autoregressive LMs), model scales (up to 66B), and downstream tasks\n(classification, multiple-choice, and generation). Our results demonstrate that\n(1) MeZO significantly outperforms in-context learning and linear probing; (2)\nMeZO achieves comparable performance to fine-tuning with backpropagation across\nmultiple tasks, with up to 12x memory reduction and up to 2x GPU-hour reduction\nin our implementation; (3) MeZO is compatible with both full-parameter and\nparameter-efficient tuning techniques such as LoRA and prefix tuning; (4) MeZO\ncan effectively optimize non-differentiable objectives (e.g., maximizing\naccuracy or F1). We support our empirical findings with theoretical insights,\nhighlighting how adequate pre-training and task prompts enable MeZO to\nfine-tune huge models, despite classical ZO analyses suggesting otherwise.",
"authors": "Sadhika Malladi, Tianyu Gao, Eshaan Nichani, Alex Damian, Jason D. Lee, Danqi Chen, Sanjeev Arora",
"published": "2023-05-27",
"updated": "2024-01-11",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.CL"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2403.11621v1",
"title": "Let's Focus on Neuron: Neuron-Level Supervised Fine-tuning for Large Language Model",
"abstract": "Large Language Models (LLMs) are composed of neurons that exhibit various\nbehaviors and roles, which become increasingly diversified as models scale.\nRecent studies have revealed that not all neurons are active across different\ndatasets, and this sparsity correlates positively with the task-specific\nability, leading to advancements in model pruning and training efficiency.\nTraditional fine-tuning methods engage all parameters of LLMs, which is\ncomputationally expensive and may not be necessary. In contrast,\nParameter-Efficient Fine-Tuning (PEFT) approaches aim to minimize the number of\ntrainable parameters, yet they still operate at a relatively macro scale (e.g.,\nlayer-level). We introduce Neuron-Level Fine-Tuning (NeFT), a novel approach\nthat refines the granularity of parameter training down to the individual\nneuron, enabling more precise and computationally efficient model updates. The\nexperimental results show that NeFT not only exceeded the performance of\nfull-parameter fine-tuning and PEFT but also provided insights into the\nanalysis of neurons.",
"authors": "Haoyun Xu, Runzhe Zhan, Derek F. Wong, Lidia S. Chao",
"published": "2024-03-18",
"updated": "2024-03-18",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2403.08433v1",
"title": "An Empirical Study of Parameter Efficient Fine-tuning on Vision-Language Pre-train Model",
"abstract": "Recent studies applied Parameter Efficient Fine-Tuning techniques (PEFTs) to\nefficiently narrow the performance gap between pre-training and downstream.\nThere are two important factors for various PEFTs, namely, the accessible data\nsize and fine-tunable parameter size. A natural expectation for PEFTs is that\nthe performance of various PEFTs is positively related to the data size and\nfine-tunable parameter size. However, according to the evaluation of five PEFTs\non two downstream vision-language (VL) tasks, we find that such an intuition\nholds only if the downstream data and task are not consistent with\npre-training. For downstream fine-tuning consistent with pre-training, data\nsize no longer affects the performance, while the influence of fine-tunable\nparameter size is not monotonous. We believe such an observation could guide\nthe choice of training strategy for various PEFTs.",
"authors": "Yuxin Tian, Mouxing Yang, Yunfan Li, Dayiheng Liu, Xingzhang Ren, Xi Peng, Jiancheng Lv",
"published": "2024-03-13",
"updated": "2024-03-13",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2404.04316v1",
"title": "Parameter Efficient Quasi-Orthogonal Fine-Tuning via Givens Rotation",
"abstract": "With the increasingly powerful performances and enormous scales of Pretrained\nLanguage Models (PLMs), promoting parameter efficiency in fine-tuning has\nbecome a crucial need for effective and efficient adaptation to various\ndownstream tasks. One representative line of fine-tuning methods is Orthogonal\nFine-tuning (OFT), which rigorously preserves the angular distances within the\nparameter space to preserve the pretrained knowledge. Despite the empirical\neffectiveness, OFT still suffers low parameter efficiency at $\\mathcal{O}(d^2)$\nand limited capability of downstream adaptation. Inspired by Givens rotation,\nin this paper, we proposed quasi-Givens Orthogonal Fine-Tuning (qGOFT) to\naddress the problems. We first use $\\mathcal{O}(d)$ Givens rotations to\naccomplish arbitrary orthogonal transformation in $SO(d)$ with provable\nequivalence, reducing parameter complexity from $\\mathcal{O}(d^2)$ to\n$\\mathcal{O}(d)$. Then we introduce flexible norm and relative angular\nadjustments under soft orthogonality regularization to enhance the adaptation\ncapability of downstream semantic deviations. Extensive experiments on various\ntasks and PLMs validate the effectiveness of our methods.",
"authors": "Xinyu Ma, Xu Chu, Zhibang Yang, Yang Lin, Xin Gao, Junfeng Zhao",
"published": "2024-04-05",
"updated": "2024-04-05",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI",
"cs.CL"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2301.01821v1",
"title": "Parameter-Efficient Fine-Tuning Design Spaces",
"abstract": "Parameter-efficient fine-tuning aims to achieve performance comparable to\nfine-tuning, using fewer trainable parameters. Several strategies (e.g.,\nAdapters, prefix tuning, BitFit, and LoRA) have been proposed. However, their\ndesigns are hand-crafted separately, and it remains unclear whether certain\ndesign patterns exist for parameter-efficient fine-tuning. Thus, we present a\nparameter-efficient fine-tuning design paradigm and discover design patterns\nthat are applicable to different experimental settings. Instead of focusing on\ndesigning another individual tuning strategy, we introduce parameter-efficient\nfine-tuning design spaces that parameterize tuning structures and tuning\nstrategies. Specifically, any design space is characterized by four components:\nlayer grouping, trainable parameter allocation, tunable groups, and strategy\nassignment. Starting from an initial design space, we progressively refine the\nspace based on the model quality of each design choice and make greedy\nselection at each stage over these four components. We discover the following\ndesign patterns: (i) group layers in a spindle pattern; (ii) allocate the\nnumber of trainable parameters to layers uniformly; (iii) tune all the groups;\n(iv) assign proper tuning strategies to different groups. These design patterns\nresult in new parameter-efficient fine-tuning methods. We show experimentally\nthat these methods consistently and significantly outperform investigated\nparameter-efficient fine-tuning strategies across different backbone models and\ndifferent tasks in natural language processing.",
"authors": "Jiaao Chen, Aston Zhang, Xingjian Shi, Mu Li, Alex Smola, Diyi Yang",
"published": "2023-01-04",
"updated": "2023-01-04",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2402.02801v1",
"title": "KS-Lottery: Finding Certified Lottery Tickets for Multilingual Language Models",
"abstract": "The lottery ticket hypothesis posits the existence of ``winning tickets''\nwithin a randomly initialized neural network. Do winning tickets exist for LLMs\nin fine-tuning scenarios? How can we find such winning tickets? In this paper,\nwe propose KS-Lottery, a method to identify a small subset of LLM parameters\nhighly effective in multilingual fine-tuning. Our key idea is to use\nKolmogorov-Smirnov Test to analyze the distribution shift of parameters before\nand after fine-tuning. We further theoretically prove that KS-Lottery can find\nthe certified winning tickets in the embedding layer, fine-tuning on the found\nparameters is guaranteed to perform as well as full fine-tuning. Comparing\nKS-Lottery with other parameter-efficient tuning algorithms on translation\ntasks, the experimental results show that KS-Lottery finds a much smaller set\nof parameters for fine-tuning while achieving the comparable performance as\nfull fine-tuning LLM. Surprisingly, we find that fine-tuning 18 tokens'\nembedding of LLaMA suffices to reach the fine-tuning translation performance.\nCode and model will be released to the public.",
"authors": "Fei Yuan, Chang Ma, Shuai Yuan, Qiushi Sun, Lei Li",
"published": "2024-02-05",
"updated": "2024-02-05",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.LG"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2312.15681v1",
"title": "Partial Fine-Tuning: A Successor to Full Fine-Tuning for Vision Transformers",
"abstract": "Fine-tuning pre-trained foundation models has gained significant popularity\nin various research fields. Existing methods for fine-tuning can be roughly\ndivided into two categories, namely Parameter-Efficient Fine-Tuning and\nHigh-Performance Fine-Tuning. The former aims at improving efficiency, while\nthe latter focuses on enhancing performance. Beyond these methods, we\ndemonstrate that Partial Fine-Tuning can be an innovative and promising\ndirection capable of concurrently enhancing both efficiency and accuracy. We\nfirst validate eight manually-defined partial fine-tuning strategies across\nkinds of datasets and vision transformer architectures, and find that some\npartial fine-tuning strategies (e.g., ffn only or attention only) can achieve\nbetter performance with fewer tuned parameters than full fine-tuning, and\nselecting appropriate layers is critical to partial fine-tuning. Thus, we\npropose a novel fine-tuned angle metric to guide the selection of appropriate\nlayers for partial fine-tuning, making it flexible to be adapted to various\nscenarios for more practicable partial fine-tuning. Additionally, we show that\npartial fine-tuning can serve as a new dimension for Model Soups, improving\nboth the model performance and generalization with fewer tuned parameters.\nComprehensive experiments on a wide range of datasets and models validate the\ngreat potential of partial fine-tuning.",
"authors": "Peng Ye, Yongqi Huang, Chongjun Tu, Minglei Li, Tao Chen, Tong He, Wanli Ouyang",
"published": "2023-12-25",
"updated": "2023-12-25",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.AI"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2401.05605v1",
"title": "Scaling Laws for Forgetting When Fine-Tuning Large Language Models",
"abstract": "We study and quantify the problem of forgetting when fine-tuning pre-trained\nlarge language models (LLMs) on a downstream task. We find that\nparameter-efficient fine-tuning (PEFT) strategies, such as Low-Rank Adapters\n(LoRA), still suffer from catastrophic forgetting. In particular, we identify a\nstrong inverse linear relationship between the fine-tuning performance and the\namount of forgetting when fine-tuning LLMs with LoRA. We further obtain precise\nscaling laws that show forgetting increases as a shifted power law in the\nnumber of parameters fine-tuned and the number of update steps. We also examine\nthe impact of forgetting on knowledge, reasoning, and the safety guardrails\ntrained into Llama 2 7B chat. Our study suggests that forgetting cannot be\navoided through early stopping or by varying the number of parameters\nfine-tuned. We believe this opens up an important safety-critical direction for\nfuture research to evaluate and develop fine-tuning schemes which mitigate\nforgetting",
"authors": "Damjan Kalajdzievski",
"published": "2024-01-11",
"updated": "2024-01-11",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.LG",
"I.2.7"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2212.03916v2",
"title": "Transfer learning for chemically accurate interatomic neural network potentials",
"abstract": "Developing machine learning-based interatomic potentials from ab-initio\nelectronic structure methods remains a challenging task for computational\nchemistry and materials science. This work studies the capability of transfer\nlearning, in particular discriminative fine-tuning, for efficiently generating\nchemically accurate interatomic neural network potentials on organic molecules\nfrom the MD17 and ANI data sets. We show that pre-training the network\nparameters on data obtained from density functional calculations considerably\nimproves the sample efficiency of models trained on more accurate ab-initio\ndata. Additionally, we show that fine-tuning with energy labels alone can\nsuffice to obtain accurate atomic forces and run large-scale atomistic\nsimulations, provided a well-designed fine-tuning data set. We also investigate\npossible limitations of transfer learning, especially regarding the design and\nsize of the pre-training and fine-tuning data sets. Finally, we provide GM-NN\npotentials pre-trained and fine-tuned on the ANI-1x and ANI-1ccx data sets,\nwhich can easily be fine-tuned on and applied to organic molecules.",
"authors": "Viktor Zaverkin, David Holzm\u00fcller, Luca Bonfirraro, Johannes K\u00e4stner",
"published": "2022-12-07",
"updated": "2023-01-28",
"primary_cat": "physics.comp-ph",
"cats": [
"physics.comp-ph",
"stat.ML"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2211.08682v3",
"title": "Parameter-Efficient Tuning on Layer Normalization for Pre-trained Language Models",
"abstract": "Conventional fine-tuning encounters increasing difficulties given the size of\ncurrent Pre-trained Language Models, which makes parameter-efficient tuning\nbecome the focal point of frontier research. Previous methods in this field add\ntunable adapters into MHA or/and FFN of Transformer blocks to enable PLMs\nachieve transferability. However, as an important part of Transformer\narchitecture, the power of layer normalization for parameter-efficent tuning is\nignored. In this paper, we first propose LN-tuning, by tuning the gain and bias\nterm of Layer Normalization module with only 0.03\\% parameters, which is of\nhigh time-efficency and significantly superior to baselines which are less than\n0.1\\% tunable parameters. Further, we study the unified framework of combining\nLN-tuning with previous ones and we find that: (1) the unified framework of\ncombining prefix-tuning, the adapter-based method working on MHA, and LN-tuning\nachieves SOTA performance. (2) unified framework which tunes MHA and LayerNorm\nsimultaneously can get performance improvement but those which tune FFN and\nLayerNorm simultaneous will cause performance decrease. Ablation study\nvalidates LN-tuning is of no abundant parameters and gives a further\nunderstanding of it.",
"authors": "Wang Qi, Yu-Ping Ruan, Yuan Zuo, Taihao Li",
"published": "2022-11-16",
"updated": "2022-12-09",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2405.05493v1",
"title": "Parameter-Efficient Fine-Tuning With Adapters",
"abstract": "In the arena of language model fine-tuning, the traditional approaches, such\nas Domain-Adaptive Pretraining (DAPT) and Task-Adaptive Pretraining (TAPT),\nalthough effective, but computational intensive. This research introduces a\nnovel adaptation method utilizing the UniPELT framework as a base and added a\nPromptTuning Layer, which significantly reduces the number of trainable\nparameters while maintaining competitive performance across various benchmarks.\nOur method employs adapters, which enable efficient transfer of pretrained\nmodels to new tasks with minimal retraining of the base model parameters. We\nevaluate our approach using three diverse datasets: the GLUE benchmark, a\ndomain-specific dataset comprising four distinct areas, and the Stanford\nQuestion Answering Dataset 1.1 (SQuAD). Our results demonstrate that our\ncustomized adapter-based method achieves performance comparable to full model\nfine-tuning, DAPT+TAPT and UniPELT strategies while requiring fewer or\nequivalent amount of parameters. This parameter efficiency not only alleviates\nthe computational burden but also expedites the adaptation process. The study\nunderlines the potential of adapters in achieving high performance with\nsignificantly reduced resource consumption, suggesting a promising direction\nfor future research in parameter-efficient fine-tuning.",
"authors": "Keyu Chen, Yuan Pang, Zi Yang",
"published": "2024-05-09",
"updated": "2024-05-09",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2402.15179v2",
"title": "Advancing Parameter Efficiency in Fine-tuning via Representation Editing",
"abstract": "Parameter Efficient Fine-Tuning (PEFT) has gained significant attention for\nits ability to achieve competitive results while updating only a small subset\nof trainable parameters. Despite the promising performance of current PEFT\nmethods, they present challenges in hyperparameter selection, such as\ndetermining the rank of LoRA or Adapter, or specifying the length of soft\nprompts. In addressing these challenges, we propose a novel approach to\nfine-tuning neural models, termed Representation EDiting (RED), which scales\nand biases the representation produced at each layer. RED substantially reduces\nthe number of trainable parameters by a factor of $25,700$ compared to full\nparameter fine-tuning, and by a factor of $32$ compared to LoRA. Remarkably,\nRED achieves comparable or superior results to full parameter fine-tuning and\nother PEFT methods. Extensive experiments were conducted across models of\nvarying architectures and scales, including RoBERTa, GPT-2, T5, and Llama-2,\nand the results demonstrate the efficiency and efficacy of RED, positioning it\nas a promising PEFT approach for large neural models.",
"authors": "Muling Wu, Wenhao Liu, Xiaohua Wang, Tianlong Li, Changze Lv, Zixuan Ling, Jianhao Zhu, Cenyuan Zhang, Xiaoqing Zheng, Xuanjing Huang",
"published": "2024-02-23",
"updated": "2024-02-28",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.CL"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2310.18339v1",
"title": "MOELoRA: An MOE-based Parameter Efficient Fine-Tuning Method for Multi-task Medical Applications",
"abstract": "The recent surge in the field of Large Language Models (LLMs) has gained\nsignificant attention in numerous domains. In order to tailor an LLM to a\nspecific domain such as a web-based healthcare system, fine-tuning with domain\nknowledge is necessary. However, two issues arise during fine-tuning LLMs for\nmedical applications. The first is the problem of task variety, where there are\nnumerous distinct tasks in real-world medical scenarios. This diversity often\nresults in suboptimal fine-tuning due to data imbalance and seesawing problems.\nAdditionally, the high cost of fine-tuning can be prohibitive, impeding the\napplication of LLMs. The large number of parameters in LLMs results in enormous\ntime and computational consumption during fine-tuning, which is difficult to\njustify. To address these two issues simultaneously, we propose a novel\nparameter-efficient fine-tuning framework for multi-task medical applications\ncalled MOELoRA. The framework aims to capitalize on the benefits of both MOE\nfor multi-task learning and LoRA for parameter-efficient fine-tuning. To unify\nMOE and LoRA, we devise multiple experts as the trainable parameters, where\neach expert consists of a pair of low-rank matrices to maintain a small number\nof trainable parameters. Additionally, we propose a task-motivated gate\nfunction for all MOELoRA layers that can regulate the contributions of each\nexpert and generate distinct parameters for various tasks. To validate the\neffectiveness and practicality of the proposed method, we conducted\ncomprehensive experiments on a public multi-task Chinese medical dataset. The\nexperimental results demonstrate that MOELoRA outperforms existing\nparameter-efficient fine-tuning methods. The implementation is available online\nfor convenient reproduction of our experiments.",
"authors": "Qidong Liu, Xian Wu, Xiangyu Zhao, Yuanshao Zhu, Derong Xu, Feng Tian, Yefeng Zheng",
"published": "2023-10-21",
"updated": "2023-10-21",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2403.11366v2",
"title": "JORA: JAX Tensor-Parallel LoRA Library for Retrieval Augmented Fine-Tuning",
"abstract": "The scaling of Large Language Models (LLMs) for retrieval-based tasks,\nparticularly in Retrieval Augmented Generation (RAG), faces significant memory\nconstraints, especially when fine-tuning extensive prompt sequences. Current\nopen-source libraries support full-model inference and fine-tuning across\nmultiple GPUs but fall short of accommodating the efficient parameter\ndistribution required for retrieved context. Addressing this gap, we introduce\na novel framework for PEFT-compatible fine-tuning of Llama-2 models, leveraging\ndistributed training. Our framework uniquely utilizes JAX's just-in-time (JIT)\ncompilation and tensor-sharding for efficient resource management, thereby\nenabling accelerated fine-tuning with reduced memory requirements. This\nadvancement significantly improves the scalability and feasibility of\nfine-tuning LLMs for complex RAG applications, even on systems with limited GPU\nresources. Our experiments show more than 12x improvement in runtime compared\nto Hugging Face/DeepSpeed implementation with four GPUs while consuming less\nthan half the VRAM per GPU.",
"authors": "Anique Tahir, Lu Cheng, Huan Liu",
"published": "2024-03-17",
"updated": "2024-03-19",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.CL",
"cs.DC"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2401.13942v2",
"title": "StyleInject: Parameter Efficient Tuning of Text-to-Image Diffusion Models",
"abstract": "The ability to fine-tune generative models for text-to-image generation tasks\nis crucial, particularly facing the complexity involved in accurately\ninterpreting and visualizing textual inputs. While LoRA is efficient for\nlanguage model adaptation, it often falls short in text-to-image tasks due to\nthe intricate demands of image generation, such as accommodating a broad\nspectrum of styles and nuances. To bridge this gap, we introduce StyleInject, a\nspecialized fine-tuning approach tailored for text-to-image models. StyleInject\ncomprises multiple parallel low-rank parameter matrices, maintaining the\ndiversity of visual features. It dynamically adapts to varying styles by\nadjusting the variance of visual features based on the characteristics of the\ninput signal. This approach significantly minimizes the impact on the original\nmodel's text-image alignment capabilities while adeptly adapting to various\nstyles in transfer learning. StyleInject proves particularly effective in\nlearning from and enhancing a range of advanced, community-fine-tuned\ngenerative models. Our comprehensive experiments, including both small-sample\nand large-scale data fine-tuning as well as base model distillation, show that\nStyleInject surpasses traditional LoRA in both text-image semantic consistency\nand human preference evaluation, all while ensuring greater parameter\nefficiency.",
"authors": "Mohan Zhou, Yalong Bai, Qing Yang, Tiejun Zhao",
"published": "2024-01-25",
"updated": "2024-05-10",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2305.15212v1",
"title": "Towards Adaptive Prefix Tuning for Parameter-Efficient Language Model Fine-tuning",
"abstract": "Fine-tuning large pre-trained language models on various downstream tasks\nwith whole parameters is prohibitively expensive. Hence, Parameter-efficient\nfine-tuning has attracted attention that only optimizes a few task-specific\nparameters with the frozen pre-trained model. In this work, we focus on prefix\ntuning, which only optimizes continuous prefix vectors (i.e. pseudo tokens)\ninserted into Transformer layers. Based on the observation that the learned\nsyntax and semantics representation varies a lot at different layers, we argue\nthat the adaptive prefix will be further tailored to each layer than the fixed\none, enabling the fine-tuning more effective and efficient. Thus, we propose\nAdaptive Prefix Tuning (APT) to adjust the prefix in terms of both fine-grained\ntoken level and coarse-grained layer level with a gate mechanism. Experiments\non the SuperGLUE and NER datasets show the effectiveness of APT. In addition,\ntaking the gate as a probing, we validate the efficiency and effectiveness of\nthe variable prefix.",
"authors": "Zhen-Ru Zhang, Chuanqi Tan, Haiyang Xu, Chengyu Wang, Jun Huang, Songfang Huang",
"published": "2023-05-24",
"updated": "2023-05-24",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2403.02271v1",
"title": "RIFF: Learning to Rephrase Inputs for Few-shot Fine-tuning of Language Models",
"abstract": "Pre-trained Language Models (PLMs) can be accurately fine-tuned for\ndownstream text processing tasks. Recently, researchers have introduced several\nparameter-efficient fine-tuning methods that optimize input prompts or adjust a\nsmall number of model parameters (e.g LoRA). In this study, we explore the\nimpact of altering the input text of the original task in conjunction with\nparameter-efficient fine-tuning methods. To most effectively rewrite the input\ntext, we train a few-shot paraphrase model with a Maximum-Marginal Likelihood\nobjective. Using six few-shot text classification datasets, we show that\nenriching data with paraphrases at train and test time enhances the performance\nbeyond what can be achieved with parameter-efficient fine-tuning alone.",
"authors": "Saeed Najafi, Alona Fyshe",
"published": "2024-03-04",
"updated": "2024-03-04",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.LG"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2212.05901v1",
"title": "Parameter-Efficient Finetuning of Transformers for Source Code",
"abstract": "Pretrained Transformers achieve state-of-the-art performance in various\ncode-processing tasks but may be too large to be deployed. As software\ndevelopment tools often incorporate modules for various purposes which may\npotentially use a single instance of the pretrained model, it appears relevant\nto utilize parameter-efficient fine-tuning for the pretrained models of code.\nIn this work, we test two widely used approaches, adapters and LoRA, which were\ninitially tested on NLP tasks, on four code-processing tasks. We find that\nthough the efficient fine-tuning approaches may achieve comparable or higher\nperformance than the standard, full, fine-tuning in code understanding tasks,\nthey underperform full fine-tuning in code-generative tasks. These results\nunderline the importance of testing efficient fine-tuning approaches on other\ndomains than NLP and motivate future research in efficient fine-tuning for\nsource code.",
"authors": "Shamil Ayupov, Nadezhda Chirkova",
"published": "2022-12-12",
"updated": "2022-12-12",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.LG",
"cs.SE"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2308.14604v3",
"title": "SAM-PARSER: Fine-tuning SAM Efficiently by Parameter Space Reconstruction",
"abstract": "Segment Anything Model (SAM) has received remarkable attention as it offers a\npowerful and versatile solution for object segmentation in images. However,\nfine-tuning SAM for downstream segmentation tasks under different scenarios\nremains a challenge, as the varied characteristics of different scenarios\nnaturally requires diverse model parameter spaces. Most existing fine-tuning\nmethods attempt to bridge the gaps among different scenarios by introducing a\nset of new parameters to modify SAM's original parameter space. Unlike these\nworks, in this paper, we propose fine-tuning SAM efficiently by parameter space\nreconstruction (SAM-PARSER), which introduce nearly zero trainable parameters\nduring fine-tuning. In SAM-PARSER, we assume that SAM's original parameter\nspace is relatively complete, so that its bases are able to reconstruct the\nparameter space of a new scenario. We obtain the bases by matrix decomposition,\nand fine-tuning the coefficients to reconstruct the parameter space tailored to\nthe new scenario by an optimal linear combination of the bases. Experimental\nresults show that SAM-PARSER exhibits superior segmentation performance across\nvarious scenarios, while reducing the number of trainable parameters by\n$\\approx 290$ times compared with current parameter-efficient fine-tuning\nmethods.",
"authors": "Zelin Peng, Zhengqin Xu, Zhilin Zeng, Xiaokang Yang, Wei Shen",
"published": "2023-08-28",
"updated": "2023-12-18",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2403.08484v1",
"title": "Data-oriented Dynamic Fine-tuning Parameter Selection Strategy for FISH Mask based Efficient Fine-tuning",
"abstract": "In view of the huge number of parameters of Large language models (LLMs) ,\ntuning all parameters is very costly, and accordingly fine-tuning specific\nparameters is more sensible. Most of parameter efficient fine-tuning (PEFT)\nconcentrate on parameter selection strategies, such as additive method,\nselective method and reparametrization-based method. However, there are few\nmethods that consider the impact of data samples on parameter selecting, such\nas Fish Mask based method. Fish Mask randomly choose a part of data samples and\ntreat them equally during parameter selection, which is unable to dynamically\nselect optimal parameters for inconstant data distributions. In this work, we\nadopt a data-oriented perspective, then proposing an IRD ($\\mathrm{\\underline\nI}$terative sample-parameter $\\mathrm{\\underline R}$ange $\\mathrm{\\underline\nD}$ecreasing) algorithm to search the best setting of sample-parameter pair for\nFISH Mask. In each iteration, by searching the set of samples and parameters\nwith larger Fish information, IRD can find better sample-parameter pair in most\nscale. We demonstrate the effectiveness and rationality of proposed strategy by\nconducting experiments on GLUE benchmark. Experimental results show our\nstrategy optimizes the parameter selection and achieves preferable performance.",
"authors": "Ming Dong, Kang Xue, Bolong Zheng, Tingting He",
"published": "2024-03-13",
"updated": "2024-03-13",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2401.10544v1",
"title": "AAT: Adapting Audio Transformer for Various Acoustics Recognition Tasks",
"abstract": "Recently, Transformers have been introduced into the field of acoustics\nrecognition. They are pre-trained on large-scale datasets using methods such as\nsupervised learning and semi-supervised learning, demonstrating robust\ngenerality--It fine-tunes easily to downstream tasks and shows more robust\nperformance. However, the predominant fine-tuning method currently used is\nstill full fine-tuning, which involves updating all parameters during training.\nThis not only incurs significant memory usage and time costs but also\ncompromises the model's generality. Other fine-tuning methods either struggle\nto address this issue or fail to achieve matching performance. Therefore, we\nconducted a comprehensive analysis of existing fine-tuning methods and proposed\nan efficient fine-tuning approach based on Adapter tuning, namely AAT. The core\nidea is to freeze the audio Transformer model and insert extra learnable\nAdapters, efficiently acquiring downstream task knowledge without compromising\nthe model's original generality. Extensive experiments have shown that our\nmethod achieves performance comparable to or even superior to full fine-tuning\nwhile optimizing only 7.118% of the parameters. It also demonstrates\nsuperiority over other fine-tuning methods.",
"authors": "Yun Liang, Hai Lin, Shaojian Qiu, Yihang Zhang",
"published": "2024-01-19",
"updated": "2024-01-19",
"primary_cat": "cs.SD",
"cats": [
"cs.SD",
"cs.AI",
"eess.AS"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2303.17051v2",
"title": "Towards foundation models and few-shot parameter-efficient fine-tuning for volumetric organ segmentation",
"abstract": "With the recent raise of foundation models in computer vision and NLP, the\npretrain-and-adapt strategy, where a large-scale model is fine-tuned on\ndownstream tasks, is gaining popularity. However, traditional fine-tuning\napproaches may still require significant resources and yield sub-optimal\nresults when the labeled data of the target task is scarce. This is especially\nthe case in clinical settings. To address this challenge, we formalize few-shot\nefficient fine-tuning (FSEFT), a novel and realistic setting for medical image\nsegmentation. Furthermore, we introduce a novel parameter-efficient fine-tuning\nstrategy tailored to medical image segmentation, with (a) spatial adapter\nmodules that are more appropriate for dense prediction tasks; and (b) a\nconstrained transductive inference, which leverages task-specific prior\nknowledge. Our comprehensive experiments on a collection of public CT datasets\nfor organ segmentation reveal the limitations of standard fine-tuning methods\nin few-shot scenarios, point to the potential of vision adapters and\ntransductive inference, and confirm the suitability of foundation models.",
"authors": "Julio Silva-Rodr\u00edguez, Jose Dolz, Ismail Ben Ayed",
"published": "2023-03-29",
"updated": "2023-09-29",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2308.03303v1",
"title": "LoRA-FA: Memory-efficient Low-rank Adaptation for Large Language Models Fine-tuning",
"abstract": "The low-rank adaptation (LoRA) method can largely reduce the amount of\ntrainable parameters for fine-tuning large language models (LLMs), however, it\nstill requires expensive activation memory to update low-rank weights. Reducing\nthe number of LoRA layers or using activation recomputation could harm the\nfine-tuning performance or increase the computational overhead. In this work,\nwe present LoRA-FA, a memory-efficient fine-tuning method that reduces the\nactivation memory without performance degradation and expensive recomputation.\nLoRA-FA chooses to freeze the projection-down weight of $A$ and update the\nprojection-up weight of $B$ in each LoRA layer. It ensures the change of model\nweight reside in a low-rank space during LLMs fine-tuning, while eliminating\nthe requirement to store full-rank input activations. We conduct extensive\nexperiments across multiple model types (RoBERTa, T5, LLaMA) and model scales.\nOur results show that LoRA-FA can always achieve close fine-tuning accuracy\nacross different tasks compared to full parameter fine-tuning and LoRA.\nFurthermore, LoRA-FA can reduce the overall memory cost by up to 1.4$\\times$\ncompared to LoRA.",
"authors": "Longteng Zhang, Lin Zhang, Shaohuai Shi, Xiaowen Chu, Bo Li",
"published": "2023-08-07",
"updated": "2023-08-07",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2310.04742v3",
"title": "Parameter Efficient Multi-task Model Fusion with Partial Linearization",
"abstract": "Large pre-trained models have enabled significant advances in machine\nlearning and served as foundation components. Model fusion methods, such as\ntask arithmetic, have been proven to be powerful and scalable to incorporate\nfine-tuned weights from different tasks into a multi-task model. However,\nefficiently fine-tuning large pre-trained models on multiple downstream tasks\nremains challenging, leading to inefficient multi-task model fusion. In this\nwork, we propose a novel method to improve multi-task fusion for\nparameter-efficient fine-tuning techniques like LoRA fine-tuning. Specifically,\nour approach partially linearizes only the adapter modules and applies task\narithmetic over the linearized adapters. This allows us to leverage the the\nadvantages of model fusion over linearized fine-tuning, while still performing\nfine-tuning and inference efficiently. We demonstrate that our partial\nlinearization technique enables a more effective fusion of multiple tasks into\na single model, outperforming standard adapter tuning and task arithmetic\nalone. Experimental results demonstrate the capabilities of our proposed\npartial linearization technique to effectively construct unified multi-task\nmodels via the fusion of fine-tuned task vectors. We evaluate performance over\nan increasing number of tasks and find that our approach outperforms standard\nparameter-efficient fine-tuning techniques. The results highlight the benefits\nof partial linearization for scalable and efficient multi-task model fusion.\nThe code is available at https://github.com/tanganke/peta",
"authors": "Anke Tang, Li Shen, Yong Luo, Yibing Zhan, Han Hu, Bo Du, Yixin Chen, Dacheng Tao",
"published": "2023-10-07",
"updated": "2024-03-11",
"primary_cat": "cs.LG",
"cats": [
"cs.LG"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2404.09022v1",
"title": "Navigating the Landscape of Large Language Models: A Comprehensive Review and Analysis of Paradigms and Fine-Tuning Strategies",
"abstract": "With the surge of ChatGPT,the use of large models has significantly\nincreased,rapidly rising to prominence across the industry and sweeping across\nthe internet. This article is a comprehensive review of fine-tuning methods for\nlarge models. This paper investigates the latest technological advancements and\nthe application of advanced methods in aspects such as task-adaptive\nfine-tuning,domain-adaptive fine-tuning,few-shot learning,knowledge\ndistillation,multi-task learning,parameter-efficient fine-tuning,and dynamic\nfine-tuning.",
"authors": "Benjue Weng",
"published": "2024-04-13",
"updated": "2024-04-13",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI",
"cs.CL"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2310.07147v1",
"title": "QFT: Quantized Full-parameter Tuning of LLMs with Affordable Resources",
"abstract": "Large Language Models (LLMs) have showcased remarkable impacts across a wide\nspectrum of natural language processing tasks. Fine-tuning these pre-trained\nmodels on downstream datasets provides further significant performance gains,\nbut this process has been challenging due to its extraordinary resource\nrequirements. To this end, existing efforts focus on parameter-efficient\nfine-tuning, which, unfortunately, fail to capitalize on the powerful potential\nof full-parameter fine-tuning. In this work, we propose QFT, a novel Quantized\nFull-parameter Tuning framework for LLMs that enables memory-efficient\nfine-tuning without harming performance. Our framework incorporates two novel\nideas: (i) we adopt the efficient Lion optimizer, which only keeps track of the\nmomentum and has consistent update magnitudes for each parameter, an inherent\nadvantage for robust quantization; and (ii) we quantize all model states and\nstore them as integer values, and present a gradient flow and parameter update\nscheme for the quantized weights. As a result, QFT reduces the model state\nmemory to 21% of the standard solution while achieving comparable performance,\ne.g., tuning a LLaMA-7B model requires only <30GB of memory, satisfied by a\nsingle A6000 GPU.",
"authors": "Zhikai Li, Xiaoxuan Liu, Banghua Zhu, Zhen Dong, Qingyi Gu, Kurt Keutzer",
"published": "2023-10-11",
"updated": "2023-10-11",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.LG"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
},
{
"url": "http://arxiv.org/abs/2401.12200v1",
"title": "APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference",
"abstract": "Fine-tuning and inference with large Language Models (LM) are generally known\nto be expensive. Parameter-efficient fine-tuning over pretrained LMs reduces\ntraining memory by updating a small number of LM parameters but does not\nimprove inference efficiency. Structured pruning improves LM inference\nefficiency by removing consistent parameter blocks, yet often increases\ntraining memory and time. To improve both training and inference efficiency, we\nintroduce APT that adaptively prunes and tunes parameters for the LMs. At the\nearly stage of fine-tuning, APT dynamically adds salient tuning parameters for\nfast and accurate convergence while discarding unimportant parameters for\nefficiency. Compared to baselines, our experiments show that APT maintains up\nto 98% task performance when pruning RoBERTa and T5 models with 40% parameters\nleft while keeping 86.4% LLaMA models' performance with 70% parameters\nremained. Furthermore, APT speeds up LMs fine-tuning by up to 8x and reduces\nlarge LMs memory training footprint by up to 70%.",
"authors": "Bowen Zhao, Hannaneh Hajishirzi, Qingqing Cao",
"published": "2024-01-22",
"updated": "2024-01-22",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.LG"
],
"category": "Parameter AND Efficient AND Fine AND Tuning"
}
]