diff --git "a/related_34K/test_related_short_2404.16895v3.json" "b/related_34K/test_related_short_2404.16895v3.json" new file mode 100644--- /dev/null +++ "b/related_34K/test_related_short_2404.16895v3.json" @@ -0,0 +1,1409 @@ +[ + { + "url": "http://arxiv.org/abs/2404.16895v3", + "title": "QuERLoc: Towards Next-Generation Localization with Quantum-Enhanced Ranging", + "abstract": "Remarkable advances have been achieved in localization techniques in past\ndecades, rendering it one of the most important technologies indispensable to\nour daily lives. In this paper, we investigate a novel localization approach\nfor future computing by presenting QuERLoc, the first study on localization\nusing quantum-enhanced ranging. By fine-tuning the evolution of an entangled\nquantum probe, quantum ranging can output the information integrated in the\nprobe as a specific mapping of distance-related parameters. QuERLoc is inspired\nby this unique property to measure a special combination of distances between a\ntarget sensor and multiple anchors within one single physical measurement.\nLeveraging this capability, QuERLoc settles two drawbacks of classical\nlocalization approaches: (i) the target-anchor distances must be measured\nindividually and sequentially, and (ii) the resulting optimization problems are\nnon-convex and are sensitive to noise. We first present the theoretical\nformulation of preparing the probing quantum state and controlling its dynamic\nto induce a convexified localization problem, and then solve it efficiently via\noptimization. We conduct extensive numerical analysis of QuERLoc under various\nsettings. The results show that QuERLoc consistently outperforms classical\napproaches in accuracy and closely follows the theoretical lowerbound, while\nmaintaining low time complexity. It achieves a minimum reduction of 73% in RMSE\nand 97.6% in time consumption compared to baselines. By introducing range-based\nquantum localization to the mobile computing community and showing its superior\nperformance, QuERLoc sheds light on next-generation localization technologies\nand opens up new directions for future research.", + "authors": "Entong He, Yuxiang Yang, Chenshu Wu", + "published": "2024-04-25", + "updated": "2024-05-04", + "primary_cat": "cs.ET", + "cats": [ + "cs.ET" + ], + "label": "Original Paper", + "paper_cat": "Parameter AND Efficient AND Fine AND Tuning", + "gt": "Range-based Localization Range-based localization has been a subject of intense study, which involves two key problems: ranging and localization. Ranging is usually done by reversing the propagation distances from various signals, e.g., GPS, WiFi, mmWave, ultrasound, etc, with different ranging models, e.g., AoA, TDoA, RSSI, etc. Many efforts have been made towards localization with certain structure of distance information, including the trilaterationbased algorithms, conic relaxation, and MDS-MAP. Solving problems induced by trilateration often requires the use of linearization or pseudo-linearization [21], and the performance deteriorates significantly due to inaccurate distance measurements and error accumulation, thus further refinement is required [3]. In [38, 41], noise-tolerant trilateration-based algorithms are proposed. Localization by conic relaxation converts non-convex constraints in the problem formulation into convex ones. In [35], So and Ye studied the theory of semidefinite programming (SDP) in sensor network localization, while in [40], Luo et al. applied SDP technique to TDoA localization. 
Tseng [36] proposed a second-order cone programming (SOCP) method as an efficient variant of SDP. Although the relaxation method achieves high accuracy in estimating sensor locations, its complexity is in general not satisfactory [1], and is thus only applicable to small-scale problems. Multidimensional scaling (MDS) is a special technique aimed at finding low-dimensional representations for high-dimensional data. MDS-MAP [32] constructs a relative map through the distance matrix, and localizes nodes by transforming the map into an absolute map with sufficient and accurate distance measurements. Quantum Metrology Quantum metrology [12\u201314] has emerged as an increasingly important research area, where quantum entanglement and coherence are harnessed to boost the precision of sensing beyond the limit of classical sensors in various fundamental scenarios, including thermometry [26], reference frame alignment [6], and distance measurement [11]. Controlled evolution of quantum systems is also widely studied, largely based on control theory so as to create certain state evolution in realizing different sensing tasks [15, 33]. Besides theoretical works, a primitive quantum sensor network has lately been implemented [24], while a recent experiment has demonstrated the feasibility of generating a large Greenberger\u2013Horne\u2013Zeilinger (GHZ) state [44]. Experimental works demonstrate the feasibility of preparing widely-used probes in quantum metrology, including the ones utilized by QuERLoc. Quantum-assisted Localization Little work has been presented in the interdisciplinary field of quantum information and localization. A few existing works enhance fingerprint-based localization by accelerating computation in fingerprint database searching using quantum algorithms. Grover [16] improves the asymptotic time of searching in an unstructured dataset from \ud835\udc42(\ud835\udc5b) to \ud835\udc42(\u221a\ud835\udc5b). Buhrman et al. [4] introduce the concept of quantum fingerprints and prove its exponential improvement in storage complexity compared to the classical one. Subsequent works include quantum fingerprint localization [34], a two-stage transmitter localization method with a quantum sensor network [43], and machine learning-based WiFi sensing localization augmented with quantum transfer learning [19]. To the best of our knowledge, there is no prior work on range-based localization with quantum ranging.", + "pre_questions": [], + "main_content": "Introduction to Quantum Control and Dynamics. Chapman and Hall/CRC. [11] Vittorio Giovannetti, Seth Lloyd, and Lorenzo Maccone. 2001. Quantum-enhanced positioning and clock synchronization. Nature 412, 6845 (July 2001), 417\u2013419. [12] Vittorio Giovannetti, Seth Lloyd, and Lorenzo Maccone. 2004. Quantum-Enhanced Measurements: Beating the Standard Quantum Limit. Science 306, 5700 (Nov. 2004), 1330\u20131336. [13] Vittorio Giovannetti, Seth Lloyd, and Lorenzo Maccone. 2006. Quantum Metrology. Phys. Rev. Lett. 96 (Jan 2006), 010401. Issue 1. [14] Vittorio Giovannetti, Seth Lloyd, and Lorenzo Maccone. 2011. Advances in quantum metrology. Nature Photonics 5, 4 (March 2011), 222\u2013229. [15] Sandeep K. Goyal, Subhashish Banerjee, and Sibasish Ghosh. 2012. Effect of control procedures on the evolution of entanglement in open quantum systems. Physical Review A 85, 1 (Jan. 2012). [16] Lov K. Grover. 1997. Quantum Mechanics Helps in Searching for a Needle in a Haystack. Phys. Rev. Lett. 79 (Jul 1997), 325\u2013328. Issue 2. [17] Wayne M. Itano, D.
J. Heinzen, J. J. Bollinger, and D. J. Wineland. 1990. Quantum Zeno effect. Phys. Rev. A 41 (Mar 1990), 2295\u20132300. Issue 5. [18] Haotian Jiang, Tarun Kathuria, Yin Tat Lee, Swati Padmanabhan, and Zhao Song. 2020. A Faster Interior Point Method for Semidefinite Programming. In 2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS). IEEE. [19] Toshiaki Koike-Akino, Pu Wang, and Ye Wang. 2022. Quantum Transfer Learning for Wi-Fi Sensing. In ICC 2022 IEEE International Conference on Communications. [20] Manikanta Kotaru, Kiran Joshi, Dinesh Bharadia, and Sachin Katti. 2015. SpotFi: Decimeter Level Localization Using WiFi. In Proceedings of the 2015 ACM Conference on Special Interest Group on Data Communication (SIGCOMM \u201915). ACM, 269\u2013282. [21] Lanxin Lin, H.C. So, Frankie K.W. Chan, Y.T. Chan, and K.C. Ho. 2013. A new constrained weighted least squares algorithm for TDOA-based localization. Signal Processing 93, 11 (Nov. 2013), 2872\u20132878. [22] Jie Liu, Bodhi Priyantha, Ted Hart, Heitor S. Ramos, Antonio A. F. Loureiro, and Qiang Wang. 2012. Energy efficient GPS sensing with cloud offloading. In Proceedings of the 10th ACM Conference on Embedded Network Sensor Systems (SenSys \u201912). ACM, 85\u201398. [23] Jie Liu, Biao Wu, and Qian Niu. 2003. Nonlinear Evolution of Quantum States in the Adiabatic Regime. Phys. Rev. Lett. 90 (May 2003), 170404. Issue 17. [24] Li-Zheng Liu, Yu-Zhe Zhang, Zheng-Da Li, Rui Zhang, Xu-Fei Yin, Yue-Yang Fei, Li Li, Nai-Le Liu, Feihu Xu, Yu-Ao Chen, and Jian-Wei Pan. 2020. Distributed quantum phase estimation with entangled photons. Nature Photonics 15, 2 (Nov. 2020), 137\u2013142. [25] Guoqiang Mao, Bar\u0131\u015f Fidan, and Brian D.O. Anderson. 2007. Wireless sensor network localization techniques. Computer Networks 51, 10 (July 2007), 2529\u20132553. [26] Mohammad Mehboudi, Anna Sanpera, and Luis A Correa. 2019. Thermometry in the quantum regime: recent theoretical progress. Journal of Physics A: Mathematical and Theoretical 52, 30 (July 2019), 303001. [27] Kosuke Mitarai, Kiichiro Toyoizumi, and Wataru Mizukami. 2023. Perturbation theory with quantum signal processing. Quantum 7 (May 2023), 1000. [28] Michael A. Nielsen and Isaac L. Chuang. 2012. Quantum Computation and Quantum Information: 10th Anniversary Edition. Cambridge University Press. [29] Luca Pezz\u00e9 and Augusto Smerzi. 2014. Quantum theory of phase estimation. arXiv preprint arXiv:1411.5164 (2014). [30] Anshul Rai, Krishna Kant Chintalapudi, Venkata N. Padmanabhan, and Rijurekha Sen. 2012. Zee: zero-effort crowdsourcing for indoor localization. In Proceedings of the 18th annual international conference on Mobile computing and networking (Mobicom\u201912). ACM, 293\u2013304. [31] Jorma J Rissanen. 1996. Fisher information and stochastic complexity. IEEE transactions on information theory 42, 1 (1996), 40\u201347. [32] Yi Shang, Wheeler Ruml, Ying Zhang, and Markus P. J. Fromherz. 2003. Localization from mere connectivity. In Proceedings of the 4th ACM International Symposium on Mobile Ad Hoc Networking & Computing (MobiHoc \u201903). ACM, 201\u2013212. [33] Moshe Shapiro and Paul Brumer. 2006. Quantum control of bound and continuum state dynamics. Physics Reports 425, 4 (March 2006), 195\u2013264. [34] Ahmed Shokry and Moustafa Youssef. 2022. A Quantum Algorithm for RFbased Fingerprinting Localization Systems. In 2022 IEEE 47th Conference on Local Computer Networks (LCN). IEEE. [35] Anthony Man-Cho So and Yinyu Ye. 2006. 
Theory of semidefinite programming for Sensor Network Localization. Mathematical Programming 109, 2\u20133 (Sept. 2006), 367\u2013384. [36] Paul Tseng. 2007. Second-Order Cone Programming Relaxation of Sensor Network Localization. SIAM Journal on Optimization 18, 1 (Jan. 2007), 156\u2013185. [37] Sabine Van Huffel and Hongyuan Zha. 1993. The total least squares problem. Elsevier, 377\u2013408. [38] Fu Xiao, Lei Chen, Chaoheng Sha, Lijuan Sun, Ruchuan Wang, Alex X. Liu, and Faraz Ahmed. 2018. Noise Tolerant Localization for Sensor Networks. IEEE/ACM Transactions on Networking 26, 4 (Aug. 2018), 1701\u20131714. [39] Qingjun Xiao, Bin Xiao, Kai Bu, and Jiannong Cao. 2014. Iterative Localization of Wireless Sensor Networks: An Accurate and Robust Approach. IEEE/ACM Transactions on Networking 22, 2 (April 2014), 608\u2013621. [40] Kehu Yang, Gang Wang, and Zhi-Quan Luo. 2009. Efficient Convex Relaxation Methods for Robust Target Localization by a Sensor Network Using Time Differences of Arrivals. IEEE Transactions on Signal Processing 57, 7 (2009), 2775\u20132784. [41] Z. Yang, Y. Liu, and X.-Y. Li. 2009. Beyond trilateration: On the localizability of wireless ad-hoc networks. In IEEE INFOCOM 2009. IEEE. [42] Kegen Yu, Ian Sharp, and Y Jay Guo. 2009. Ground-based wireless positioning. [43] Caitao Zhan and Himanshu Gupta. 2023. Quantum Sensor Network Algorithms for Transmitter Localization. In 2023 IEEE International Conference on Quantum Computing and Engineering (QCE). IEEE. [44] Sheng Zhang, Yu-Kai Wu, Chang Li, Nan Jiang, Yun-Fei Pu, and Lu-Ming Duan. 2022. Quantum-Memory-Enhanced Preparation of Nonlocal Graph States. Phys. Rev. Lett. 128 (Feb 2022), 080501. Issue 8. [45] Junyi Zhou and Jing Shi. 2008. RFID localization algorithms and applications\u2014a review. Journal of Intelligent Manufacturing 20, 6 (Aug. 2008), 695\u2013707. 10" + }, + { + "url": "http://arxiv.org/abs/2205.08590v1", + "title": "Quantum Transfer Learning for Wi-Fi Sensing", + "abstract": "Beyond data communications, commercial-off-the-shelf Wi-Fi devices can be\nused to monitor human activities, track device locomotion, and sense the\nambient environment. In particular, spatial beam attributes that are inherently\navailable in the 60-GHz IEEE 802.11ad/ay standards have shown to be effective\nin terms of overhead and channel measurement granularity for these indoor\nsensing tasks. In this paper, we investigate transfer learning to mitigate\ndomain shift in human monitoring tasks when Wi-Fi settings and environments\nchange over time. As a proof-of-concept study, we consider quantum neural\nnetworks (QNN) as well as classical deep neural networks (DNN) for the future\nquantum-ready society. The effectiveness of both DNN and QNN is validated by an\nin-house experiment for human pose recognition, achieving greater than 90%\naccuracy with a limited data size.", + "authors": "Toshiaki Koike-Akino, Pu Wang, Ye Wang", + "published": "2022-05-17", + "updated": "2022-05-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NI", + "eess.SP", + "quant-ph" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/quant-ph/9706033v2", + "title": "Quantum Mechanics helps in searching for a needle in a haystack", + "abstract": "Quantum mechanics can speed up a range of search applications over unsorted\ndata. For example imagine a phone directory containing N names arranged in\ncompletely random order. 
To find someone's phone number with a probability of\n50%, any classical algorithm (whether deterministic or probabilistic) will need\nto access the database a minimum of O(N) times. Quantum mechanical systems can\nbe in a superposition of states and simultaneously examine multiple names. By\nproperly adjusting the phases of various operations, successful computations\nreinforce each other while others interfere randomly. As a result, the desired\nphone number can be obtained in only O(sqrt(N)) accesses to the database.", + "authors": "Lov K. Grover", + "published": "1997-06-13", + "updated": "1997-07-17", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1811.03988v3", + "title": "Thermometry in the quantum regime: Recent theoretical progress", + "abstract": "Controlling and measuring the temperature in different devices and platforms\nthat operate in the quantum regime is, without any doubt, essential for any\npotential application. In this review, we report the most recent theoretical\ndevelopments dealing with accurate estimation of very low temperatures in\nquantum systems. Together with the emerging experimental techniques and\ndevelopments of measurement protocols, the theory of quantum thermometry will\ndecisively impinge and shape the forthcoming quantum technologies. While\ncurrent quantum thermometric methods differ greatly depending on the\nexperimental platform, the achievable precision, and the temperature range of\ninterest, the theory of quantum thermometry is built under a unifying framework\nat the crossroads of quantum metrology, open quantum systems, and quantum\nmany-body physics. At a fundamental level, theoretical quantum thermometry is\nconcerned with finding the ultimate bounds and scaling laws that limit the\nprecision of temperature estimation for systems in and out-of-thermal\nequilibrium. At a more practical level, it provides tools to formulate precise,\nyet feasible, thermometric protocols for relevant experimental architectures.\nLast but not least, the theory of quantum thermometry examines genuine quantum\nfeatures, like entanglement and coherence, for their exploitation in\nenhanced-resolution thermometry.", + "authors": "Mohammad Mehboudi, Anna Sanpera, Luis A. Correa", + "published": "2018-11-09", + "updated": "2019-07-03", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph", + "cond-mat.stat-mech", + "math-ph", + "math.MP" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0405095v2", + "title": "Efficient use of quantum resources for the transmission of a reference frame", + "abstract": "We propose a covariant protocol for transmitting reference frames encoded on\n$N$ spins, achieving sensitivity $N^{-2}$ without the need of a pre-established\nreference frame and without using entanglement between sender and receiver. The\nprotocol exploits the use of equivalent representations, which were overlooked\nin the previous literature.", + "authors": "G. Chiribella, G. M. D'Ariano, P. Perinotti, M. F. Sacchi", + "published": "2004-05-17", + "updated": "2004-08-24", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0103006v3", + "title": "Quantum enhanced positioning and clock synchronization", + "abstract": "A wide variety of positioning and ranging procedures are based on repeatedly\nsending electromagnetic pulses through space and measuring their time of\narrival. 
This paper shows that quantum entanglement and squeezing can be\nemployed to overcome the classical power/bandwidth limits on these procedures,\nenhancing their accuracy. Frequency entangled pulses could be used to construct\nquantum positioning systems (QPS), to perform clock synchronization, or to do\nranging (quantum radar): all of these techniques exhibit a similar enhancement\ncompared with analogous protocols that use classical light. Quantum\nentanglement and squeezing have been exploited in the context of\ninterferometry, frequency measurements, lithography, and algorithms. Here, the\nproblem of positioning a party (say Alice) with respect to a fixed array of\nreference points will be analyzed.", + "authors": "V. Giovannetti, S. Lloyd, L. Maccone", + "published": "2001-03-02", + "updated": "2001-06-01", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2102.11679v1", + "title": "Distributed quantum phase estimation with entangled photons", + "abstract": "Distributed quantum metrology can enhance the sensitivity for sensing\nspatially distributed parameters beyond the classical limits. Here we\ndemonstrate distributed quantum phase estimation with discrete variables to\nachieve Heisenberg limit phase measurements. Based on parallel entanglement in\nmodes and particles, we demonstrate distributed quantum sensing for both\nindividual phase shifts and an averaged phase shift, with an error reduction up\nto 1.4 dB and 2.7 dB below the shot-noise limit. Furthermore, we demonstrate a\ncombined strategy with parallel mode entanglement and multiple passes of the\nphase shifter in each mode. In particular, our experiment uses six entangled\nphotons with each photon passing the phase shifter up to six times, and\nachieves a total number of photon passes N=21 at an error reduction up to 4.7\ndB below the shot-noise limit. Our research provides a faithful verification of\nthe benefit of entanglement and coherence for distributed quantum sensing in\ngeneral quantum networks.", + "authors": "Li-Zheng Liu, Yu-Zhe Zhang, Zheng-Da Li, Rui Zhang, Xu-Fei Yin, Yue-Yang Fei, Li Li, Nai-Le Liu, Feihu Xu, Yu-Ao Chen, Jian-Wei Pan", + "published": "2021-02-23", + "updated": "2021-02-23", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.02260v4", + "title": "Quantum Sensor Network Algorithms for Transmitter Localization", + "abstract": "A quantum sensor (QS) is able to measure various physical phenomena with\nextreme sensitivity. QSs have been used in several applications such as atomic\ninterferometers, but few applications of a quantum sensor network (QSN) have\nbeen proposed or developed. We look at a natural application of QSN --\nlocalization of an event (in particular, of a wireless signal transmitter). In\nthis paper, we develop effective quantum-based techniques for the localization\nof a transmitter using a QSN. Our approaches pose the localization problem as a\nwell-studied quantum state discrimination (QSD) problem and address the\nchallenges in its application to the localization problem. In particular, a\nquantum state discrimination solution can suffer from a high probability of\nerror, especially when the number of states (i.e., the number of potential\ntransmitter locations in our case) can be high. 
We address this challenge by\ndeveloping a two-level localization approach, which localizes the transmitter\nat a coarser granularity in the first level, and then, in a finer granularity\nin the second level. We address the additional challenge of the impracticality\nof general measurements by developing new schemes that replace the QSD's\nmeasurement operator with a trained parameterized hybrid quantum-classical\ncircuit. Our evaluation results using a custom-built simulator show that our\nbest scheme is able to achieve meter-level (1-5m) localization accuracy; in the\ncase of discrete locations, it achieves near-perfect (99-100\\%) classification\naccuracy.", + "authors": "Caitao Zhan, Himanshu Gupta", + "published": "2022-11-04", + "updated": "2023-08-01", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1102.4403v2", + "title": "Effect of control procedures on the evolution of entanglement in open quantum systems", + "abstract": "The effect of a number of mechanisms designed to suppress decoherence in open\nquantum systems are studied with respect to their effectiveness at slowing down\nthe loss of entanglement. The effect of photonic band-gap materials and\nfrequency modulation of the system-bath coupling are along expected lines in\nthis regard. However, other control schemes, like resonance fluorescence,\nachieve quite the contrary: increasing the strength of the control kills\nentanglement off faster. The effect of dynamic decoupling schemes on two\nqualitatively different system-bath interactions are studied in depth. Dynamic\ndecoupling control has the expected effect of slowing down the decay of\nentanglement in a two-qubit system coupled to a harmonic oscillator bath under\nnon-demolition interaction. However, non-trivial phenomena are observed when a\nJosephson charge qubit, strongly coupled to a random telegraph noise bath, is\nsubject to decoupling pulses. The most striking of these reflects the resonance\nfluorescence scenario in that an increase in the pulse strength decreases\ndecoherence but also speeds up the sudden death of entanglement. This\ndemonstrates that the behaviour of decoherence and entanglement in time can be\nqualitatively different in the strong-coupling non-Markovian regime.", + "authors": "Sandeep K Goyal, Subhashish Banerjee, Sibasish Ghosh", + "published": "2011-02-22", + "updated": "2013-07-07", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph", + "cond-mat.other" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2202.13386v1", + "title": "Quantum-Memory-Enhanced Preparation of Nonlocal Graph States", + "abstract": "Graph states are an important class of multipartite entangled states.\nPrevious experimental generation of graph states and in particular the\nGreenberger-Horne-Zeilinger (GHZ) states in linear optics quantum information\nschemes is subjected to an exponential decay in efficiency versus the system\nsize, which limits its large-scale applications in quantum networks. Here we\ndemonstrate an efficient scheme to prepare graph states with only a polynomial\noverhead using long-lived atomic quantum memories. We generate atom-photon\nentangled states in two atomic ensembles asynchronously, retrieve the stored\natomic excitations only when both sides succeed, and further project them into\na four-photon GHZ state. 
We measure the fidelity of this GHZ state and further\ndemonstrate its applications in the violation of Bell-type inequalities and in\nquantum cryptography. Our work demonstrates the prospect of efficient\ngeneration of multipartite entangled states in large-scale distributed systems\nwith applications in quantum information processing and metrology.", + "authors": "Sheng Zhang, Yu-Kai Wu, Chang Li, Nan Jiang, Yun-Fei Pu, Lu-Ming Duan", + "published": "2022-02-27", + "updated": "2022-02-27", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph", + "physics.atom-ph", + "physics.optics" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0102001v1", + "title": "Quantum fingerprinting", + "abstract": "Classical fingerprinting associates with each string a shorter string (its\nfingerprint), such that, with high probability, any two distinct strings can be\ndistinguished by comparing their fingerprints alone. The fingerprints can be\nexponentially smaller than the original strings if the parties preparing the\nfingerprints share a random key, but not if they only have access to\nuncorrelated random sources. In this paper we show that fingerprints consisting\nof quantum information can be made exponentially smaller than the original\nstrings without any correlations or entanglement between the parties: we give a\nscheme where the quantum fingerprints are exponentially shorter than the\noriginal strings and we give a test that distinguishes any two unknown quantum\nfingerprints with high probability. Our scheme implies an exponential\nquantum/classical gap for the equality problem in the simultaneous message\npassing model of communication complexity. We optimize several aspects of our\nscheme.", + "authors": "Harry Buhrman, Richard Cleve, John Watrous, Ronald de Wolf", + "published": "2001-02-01", + "updated": "2001-02-01", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2403.06407v1", + "title": "Can LLMs' Tuning Methods Work in Medical Multimodal Domain?", + "abstract": "While large language models (LLMs) excel in world knowledge understanding,\nadapting them to specific subfields requires precise adjustments. Due to the\nmodel's vast scale, traditional global fine-tuning methods for large models can\nbe computationally expensive and impact generalization. To address this\nchallenge, a range of innovative Parameters-Efficient Fine-Tuning (PEFT)\nmethods have emerged and achieved remarkable success in both LLMs and Large\nVision-Language Models (LVLMs). In the medical domain, fine-tuning a medical\nVision-Language Pretrained (VLP) model is essential for adapting it to specific\ntasks. Can the fine-tuning methods for large models be transferred to the\nmedical field to enhance transfer learning efficiency? In this paper, we delve\ninto the fine-tuning methods of LLMs and conduct extensive experiments to\ninvestigate the impact of fine-tuning methods for large models on existing\nmultimodal models in the medical domain from the training data level and the\nmodel structure level. We show the different impacts of fine-tuning methods for\nlarge models on medical VLMs and develop the most efficient ways to fine-tune\nmedical VLP models. We hope this research can guide medical domain researchers\nin optimizing VLMs' training costs, fostering the broader application of VLMs\nin healthcare fields. 
Code and dataset will be released upon acceptance.", + "authors": "Jiawei Chen, Yue Jiang, Dingkang Yang, Mingcheng Li, Jinjie Wei, Ziyun Qian, Lihua Zhang", + "published": "2024-03-11", + "updated": "2024-03-11", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2310.04742v3", + "title": "Parameter Efficient Multi-task Model Fusion with Partial Linearization", + "abstract": "Large pre-trained models have enabled significant advances in machine\nlearning and served as foundation components. Model fusion methods, such as\ntask arithmetic, have been proven to be powerful and scalable to incorporate\nfine-tuned weights from different tasks into a multi-task model. However,\nefficiently fine-tuning large pre-trained models on multiple downstream tasks\nremains challenging, leading to inefficient multi-task model fusion. In this\nwork, we propose a novel method to improve multi-task fusion for\nparameter-efficient fine-tuning techniques like LoRA fine-tuning. Specifically,\nour approach partially linearizes only the adapter modules and applies task\narithmetic over the linearized adapters. This allows us to leverage the the\nadvantages of model fusion over linearized fine-tuning, while still performing\nfine-tuning and inference efficiently. We demonstrate that our partial\nlinearization technique enables a more effective fusion of multiple tasks into\na single model, outperforming standard adapter tuning and task arithmetic\nalone. Experimental results demonstrate the capabilities of our proposed\npartial linearization technique to effectively construct unified multi-task\nmodels via the fusion of fine-tuned task vectors. We evaluate performance over\nan increasing number of tasks and find that our approach outperforms standard\nparameter-efficient fine-tuning techniques. The results highlight the benefits\nof partial linearization for scalable and efficient multi-task model fusion.\nThe code is available at https://github.com/tanganke/peta", + "authors": "Anke Tang, Li Shen, Yong Luo, Yibing Zhan, Han Hu, Bo Du, Yixin Chen, Dacheng Tao", + "published": "2023-10-07", + "updated": "2024-03-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2312.14327v1", + "title": "Parameter Efficient Tuning Allows Scalable Personalization of LLMs for Text Entry: A Case Study on Abbreviation Expansion", + "abstract": "Abbreviation expansion is a strategy used to speed up communication by\nlimiting the amount of typing and using a language model to suggest expansions.\nHere we look at personalizing a Large Language Model's (LLM) suggestions based\non prior conversations to enhance the relevance of predictions, particularly\nwhen the user data is small (~1000 samples). Specifically, we compare\nfine-tuning, prompt-tuning, and retrieval augmented generation of expanded text\nsuggestions for abbreviated inputs. Our case study with a deployed 8B parameter\nLLM on a real user living with ALS, and experiments on movie character\npersonalization indicates that (1) customization may be necessary in some\nscenarios and prompt-tuning generalizes well to those, (2) fine-tuning on\nin-domain data (with as few as 600 samples) still shows some gains, however (3)\nretrieval augmented few-shot selection also outperforms fine-tuning. 
(4)\nParameter efficient tuning allows for efficient and scalable personalization.\nFor prompt-tuning, we also find that initializing the learned \"soft-prompts\" to\nuser relevant concept tokens leads to higher accuracy than random\ninitialization.", + "authors": "Katrin Tomanek, Shanqing Cai, Subhashini Venugopalan", + "published": "2023-12-21", + "updated": "2023-12-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2302.04870v1", + "title": "Offsite-Tuning: Transfer Learning without Full Model", + "abstract": "Transfer learning is important for foundation models to adapt to downstream\ntasks. However, many foundation models are proprietary, so users must share\ntheir data with model owners to fine-tune the models, which is costly and raise\nprivacy concerns. Moreover, fine-tuning large foundation models is\ncomputation-intensive and impractical for most downstream users. In this paper,\nwe propose Offsite-Tuning, a privacy-preserving and efficient transfer learning\nframework that can adapt billion-parameter foundation models to downstream data\nwithout access to the full model. In offsite-tuning, the model owner sends a\nlight-weight adapter and a lossy compressed emulator to the data owner, who\nthen fine-tunes the adapter on the downstream data with the emulator's\nassistance. The fine-tuned adapter is then returned to the model owner, who\nplugs it into the full model to create an adapted foundation model.\nOffsite-tuning preserves both parties' privacy and is computationally more\nefficient than the existing fine-tuning methods that require access to the full\nmodel weights. We demonstrate the effectiveness of offsite-tuning on various\nlarge language and vision foundation models. Offsite-tuning can achieve\ncomparable accuracy as full model fine-tuning while being privacy-preserving\nand efficient, achieving 6.5x speedup and 5.6x memory reduction. Code is\navailable at https://github.com/mit-han-lab/offsite-tuning.", + "authors": "Guangxuan Xiao, Ji Lin, Song Han", + "published": "2023-02-09", + "updated": "2023-02-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.CV", + "cs.LG" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2305.13235v2", + "title": "SPARSEFIT: Few-shot Prompting with Sparse Fine-tuning for Jointly Generating Predictions and Natural Language Explanations", + "abstract": "Explaining the decisions of neural models is crucial for ensuring their\ntrustworthiness at deployment time. Using Natural Language Explanations (NLEs)\nto justify a model's predictions has recently gained increasing interest.\nHowever, this approach usually demands large datasets of human-written NLEs for\nthe ground-truth answers, which are expensive and potentially infeasible for\nsome applications. For models to generate high-quality NLEs when only a few\nNLEs are available, the fine-tuning of Pre-trained Language Models (PLMs) in\nconjunction with prompt-based learning recently emerged. However, PLMs\ntypically have billions of parameters, making fine-tuning expensive. We propose\nSparseFit, a sparse few-shot fine-tuning strategy that leverages discrete\nprompts to jointly generate predictions and NLEs. We experiment with SparseFit\non the T5 model and four datasets and compare it against state-of-the-art\nparameter-efficient fine-tuning techniques. 
We perform automatic and human\nevaluations to assess the quality of the model-generated NLEs, finding that\nfine-tuning only 6.8% of the model parameters leads to competitive results for\nboth the task performance and the quality of the NLEs.", + "authors": "Jesus Solano, Oana-Maria Camburu, Pasquale Minervini", + "published": "2023-05-22", + "updated": "2023-05-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2402.11709v1", + "title": "GNNavi: Navigating the Information Flow in Large Language Models by Graph Neural Network", + "abstract": "Large Language Models (LLMs) exhibit strong In-Context Learning (ICL)\ncapabilities when prompts with demonstrations are applied to them. However,\nfine-tuning still remains crucial to further enhance their adaptability.\nPrompt-based fine-tuning proves to be an effective fine-tuning method in\nlow-data scenarios, but high demands on computing resources limit its\npracticality. We address this issue by introducing a prompt-based\nparameter-efficient fine-tuning (PEFT) approach. GNNavi leverages insights into\nICL's information flow dynamics, which indicates that label words act in\nprompts as anchors for information propagation. GNNavi employs a Graph Neural\nNetwork (GNN) layer to precisely guide the aggregation and distribution of\ninformation flow during the processing of prompts by hardwiring the desired\ninformation flow into the GNN. Our experiments on text classification tasks\nwith GPT-2 and Llama2 shows GNNavi surpasses standard prompt-based fine-tuning\nmethods in few-shot settings by updating just 0.2% to 0.5% of parameters. We\ncompare GNNavi with prevalent PEFT approaches, such as prefix tuning, LoRA and\nAdapter in terms of performance and efficiency. Our analysis reveals that\nGNNavi enhances information flow and ensures a clear aggregation process.", + "authors": "Shuzhou Yuan, Ercong Nie, Michael F\u00e4rber, Helmut Schmid, Hinrich Sch\u00fctze", + "published": "2024-02-18", + "updated": "2024-02-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2304.10880v3", + "title": "Med-Tuning: Parameter-Efficient Transfer Learning with Fine-Grained Feature Enhancement for Medical Volumetric Segmentation", + "abstract": "Deep learning-based medical volumetric segmentation methods either train the\nmodel from scratch or follow the standard ``pre-training then fine-tuning\"\nparadigm. Although fine-tuning a pre-trained model on downstream tasks can\nharness its representation power, the standard full fine-tuning is costly in\nterms of computation and memory footprint. In this paper, we present the study\non parameter-efficient transfer learning for medical volumetric segmentation\nand propose a new framework named Med-Tuning based on intra-stage feature\nenhancement and inter-stage feature interaction. Additionally, aiming at\nexploiting the intrinsic global properties of Fourier Transform for\nparameter-efficient transfer learning, a new adapter block namely Med-Adapter\nwith a well-designed Fourier Transform branch is proposed for effectively and\nefficiently modeling the crucial global context for medical volumetric\nsegmentation. 
Given a large-scale pre-trained model on 2D natural images, our\nmethod can exploit both the crucial spatial multi-scale feature and volumetric\ncorrelations along slices for accurate segmentation. Extensive experiments on\nthree benchmark datasets (including CT and MRI) show that our method can\nachieve better results than previous parameter-efficient transfer learning\nmethods on segmentation tasks, with much less tuned parameter costs. Compared\nto full fine-tuning, our method reduces the fine-tuned model parameters by up\nto 4x, with even better segmentation performance. The code will be made\npublicly available at https://github.com/jessie-chen99/Med-Tuning.", + "authors": "Wenxuan Wang, Jiachen Shen, Chen Chen, Jianbo Jiao, Jing Liu, Yan Zhang, Shanshan Song, Jiangyun Li", + "published": "2023-04-21", + "updated": "2023-11-30", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2401.16405v2", + "title": "Scaling Sparse Fine-Tuning to Large Language Models", + "abstract": "Large Language Models (LLMs) are difficult to fully fine-tune (e.g., with\ninstructions or human feedback) due to their sheer number of parameters. A\nfamily of parameter-efficient sparse fine-tuning methods have proven promising\nin terms of performance but their memory requirements increase proportionally\nto the size of the LLMs. In this work, we scale sparse fine-tuning to\nstate-of-the-art LLMs like LLaMA 2 7B and 13B. We propose SpIEL, a novel sparse\nfine-tuning method which, for a desired density level, maintains an array of\nparameter indices and the deltas of these parameters relative to their\npretrained values. It iterates over: (a) updating the active deltas, (b)\npruning indices (based on the change of magnitude of their deltas) and (c)\nregrowth of indices. For regrowth, we explore two criteria based on either the\naccumulated gradients of a few candidate parameters or their approximate\nmomenta estimated using the efficient SM3 optimizer. We experiment with\ninstruction-tuning of LLMs on standard dataset mixtures, finding that SpIEL is\noften superior to popular parameter-efficient fine-tuning methods like LoRA\n(low-rank adaptation) in terms of performance and comparable in terms of run\ntime. We additionally show that SpIEL is compatible with both quantization and\nefficient optimizers, to facilitate scaling to ever-larger model sizes. We\nrelease the code for SpIEL at https://github.com/AlanAnsell/peft and for the\ninstruction-tuning experiments at https://github.com/ducdauge/sft-llm.", + "authors": "Alan Ansell, Ivan Vuli\u0107, Hannah Sterz, Anna Korhonen, Edoardo M. Ponti", + "published": "2024-01-29", + "updated": "2024-02-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2309.08513v5", + "title": "SCT: A Simple Baseline for Parameter-Efficient Fine-Tuning via Salient Channels", + "abstract": "Pre-trained vision transformers have strong representation benefits to\nvarious downstream tasks. Recently, many parameter-efficient fine-tuning (PEFT)\nmethods have been proposed, and their experiments demonstrate that tuning only\n1\\% extra parameters could surpass full fine-tuning in low-data resource\nscenarios. However, these methods overlook the task-specific information when\nfine-tuning diverse downstream tasks. 
In this paper, we propose a simple yet\neffective method called \"Salient Channel Tuning\" (SCT) to leverage the\ntask-specific information by forwarding the model with the task images to\nselect partial channels in a feature map that enables us to tune only 1/8\nchannels leading to significantly lower parameter costs. Experiments on 19\nvisual transfer learning downstream tasks demonstrate that our SCT outperforms\nfull fine-tuning on 18 out of 19 tasks by adding only 0.11M parameters of the\nViT-B, which is 780$\\times$ fewer than its full fine-tuning counterpart.\nFurthermore, experiments on domain generalization and few-shot classification\nfurther demonstrate the effectiveness and generic of our approach. The code is\navailable at https://github.com/showlab/SCT.", + "authors": "Henry Hengyuan Zhao, Pichao Wang, Yuyang Zhao, Hao Luo, Fan Wang, Mike Zheng Shou", + "published": "2023-09-15", + "updated": "2024-04-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2205.11277v2", + "title": "When does Parameter-Efficient Transfer Learning Work for Machine Translation?", + "abstract": "Parameter-efficient fine-tuning methods (PEFTs) offer the promise of adapting\nlarge pre-trained models while only tuning a small number of parameters. They\nhave been shown to be competitive with full model fine-tuning for many\ndownstream tasks. However, prior work indicates that PEFTs may not work as well\nfor machine translation (MT), and there is no comprehensive study showing when\nPEFTs work for MT. We conduct a comprehensive empirical study of PEFTs for MT,\nconsidering (1) various parameter budgets, (2) a diverse set of language-pairs,\nand (3) different pre-trained models. We find that 'adapters', in which small\nfeed-forward networks are added after every layer, are indeed on par with full\nmodel fine-tuning when the parameter budget corresponds to 10% of total model\nparameters. Nevertheless, as the number of tuned parameters decreases, the\nperformance of PEFTs decreases. The magnitude of this decrease depends on the\nlanguage pair, with PEFTs particularly struggling for distantly related\nlanguage-pairs. We find that using PEFTs with a larger pre-trained model\noutperforms full fine-tuning with a smaller model, and for smaller training\ndata sizes, PEFTs outperform full fine-tuning for the same pre-trained model.", + "authors": "Ahmet \u00dcst\u00fcn, Asa Cooper Stickland", + "published": "2022-05-23", + "updated": "2022-10-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2401.05605v1", + "title": "Scaling Laws for Forgetting When Fine-Tuning Large Language Models", + "abstract": "We study and quantify the problem of forgetting when fine-tuning pre-trained\nlarge language models (LLMs) on a downstream task. We find that\nparameter-efficient fine-tuning (PEFT) strategies, such as Low-Rank Adapters\n(LoRA), still suffer from catastrophic forgetting. In particular, we identify a\nstrong inverse linear relationship between the fine-tuning performance and the\namount of forgetting when fine-tuning LLMs with LoRA. We further obtain precise\nscaling laws that show forgetting increases as a shifted power law in the\nnumber of parameters fine-tuned and the number of update steps. 
We also examine\nthe impact of forgetting on knowledge, reasoning, and the safety guardrails\ntrained into Llama 2 7B chat. Our study suggests that forgetting cannot be\navoided through early stopping or by varying the number of parameters\nfine-tuned. We believe this opens up an important safety-critical direction for\nfuture research to evaluate and develop fine-tuning schemes which mitigate\nforgetting", + "authors": "Damjan Kalajdzievski", + "published": "2024-01-11", + "updated": "2024-01-11", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG", + "I.2.7" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2208.09847v1", + "title": "Scattered or Connected? An Optimized Parameter-efficient Tuning Approach for Information Retrieval", + "abstract": "Pre-training and fine-tuning have achieved significant advances in the\ninformation retrieval (IR). A typical approach is to fine-tune all the\nparameters of large-scale pre-trained models (PTMs) on downstream tasks. As the\nmodel size and the number of tasks increase greatly, such approach becomes less\nfeasible and prohibitively expensive. Recently, a variety of\nparameter-efficient tuning methods have been proposed in natural language\nprocessing (NLP) that only fine-tune a small number of parameters while still\nattaining strong performance. Yet there has been little effort to explore\nparameter-efficient tuning for IR.\n In this work, we first conduct a comprehensive study of existing\nparameter-efficient tuning methods at both the retrieval and re-ranking stages.\nUnlike the promising results in NLP, we find that these methods cannot achieve\ncomparable performance to full fine-tuning at both stages when updating less\nthan 1\\% of the original model parameters. More importantly, we find that the\nexisting methods are just parameter-efficient, but not learning-efficient as\nthey suffer from unstable training and slow convergence. To analyze the\nunderlying reason, we conduct a theoretical analysis and show that the\nseparation of the inserted trainable modules makes the optimization difficult.\nTo alleviate this issue, we propose to inject additional modules alongside the\n\\acp{PTM} to make the original scattered modules connected. In this way, all\nthe trainable modules can form a pathway to smooth the loss surface and thus\nhelp stabilize the training process. Experiments at both retrieval and\nre-ranking stages show that our method outperforms existing parameter-efficient\nmethods significantly, and achieves comparable or even better performance over\nfull fine-tuning.", + "authors": "Xinyu Ma, Jiafeng Guo, Ruqing Zhang, Yixing Fan, Xueqi Cheng", + "published": "2022-08-21", + "updated": "2022-08-21", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "H.3.3" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2401.12200v1", + "title": "APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference", + "abstract": "Fine-tuning and inference with large Language Models (LM) are generally known\nto be expensive. Parameter-efficient fine-tuning over pretrained LMs reduces\ntraining memory by updating a small number of LM parameters but does not\nimprove inference efficiency. Structured pruning improves LM inference\nefficiency by removing consistent parameter blocks, yet often increases\ntraining memory and time. 
To improve both training and inference efficiency, we\nintroduce APT that adaptively prunes and tunes parameters for the LMs. At the\nearly stage of fine-tuning, APT dynamically adds salient tuning parameters for\nfast and accurate convergence while discarding unimportant parameters for\nefficiency. Compared to baselines, our experiments show that APT maintains up\nto 98% task performance when pruning RoBERTa and T5 models with 40% parameters\nleft while keeping 86.4% LLaMA models' performance with 70% parameters\nremained. Furthermore, APT speeds up LMs fine-tuning by up to 8x and reduces\nlarge LMs memory training footprint by up to 70%.", + "authors": "Bowen Zhao, Hannaneh Hajishirzi, Qingqing Cao", + "published": "2024-01-22", + "updated": "2024-01-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2309.12109v1", + "title": "PEFTT: Parameter-Efficient Fine-Tuning for low-resource Tibetan pre-trained language models", + "abstract": "In this era of large language models (LLMs), the traditional training of\nmodels has become increasingly unimaginable for regular users and institutions.\nThe exploration of efficient fine-tuning for high-resource languages on these\nmodels is an undeniable trend that is gradually gaining popularity. However,\nthere has been very little exploration for various low-resource languages, such\nas Tibetan. Research in Tibetan NLP is inherently scarce and limited. While\nthere is currently no existing large language model for Tibetan due to its\nlow-resource nature, that day will undoubtedly arrive. Therefore, research on\nefficient fine-tuning for low-resource language models like Tibetan is highly\nnecessary. Our research can serve as a reference to fill this crucial gap.\nEfficient fine-tuning strategies for pre-trained language models (PLMs) in\nTibetan have seen minimal exploration. We conducted three types of efficient\nfine-tuning experiments on the publicly available TNCC-title dataset:\n\"prompt-tuning,\" \"Adapter lightweight fine-tuning,\" and \"prompt-tuning +\nAdapter fine-tuning.\" The experimental results demonstrate significant\nimprovements using these methods, providing valuable insights for advancing\nTibetan language applications in the context of pre-trained models.", + "authors": "Zhou Mingjun, Daiqing Zhuoma, Qun Nuo, Nyima Tashi", + "published": "2023-09-21", + "updated": "2023-09-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2110.06274v2", + "title": "LiST: Lite Prompted Self-training Makes Parameter-Efficient Few-shot Learners", + "abstract": "We present a new method LiST is short for Lite Prompted Self-Training for\nparameter-efficient fine-tuning of large pre-trained language models (PLMs) for\nfew-shot learning. LiST improves over recent methods that adopt prompt-based\nfine-tuning (FN) using two key techniques. The first is the use of\nself-training to leverage large amounts of unlabeled data for prompt-based FN\nin few-shot settings. We use self-training in conjunction with meta-learning\nfor re-weighting noisy pseudo-prompt labels. Self-training is expensive as it\nrequires updating all the model parameters repetitively. 
Therefore, we use a\nsecond technique for light-weight fine-tuning where we introduce a small number\nof task-specific parameters that are fine-tuned during self-training while\nkeeping the PLM encoder frozen. Our experiments show that LiST can effectively\nleverage unlabeled data to improve the model performance for few-shot learning.\nAdditionally, the fine-tuning is efficient as it only updates a small\npercentage of parameters and the overall model footprint is reduced since\nseveral tasks can share a common PLM encoder as backbone. A comprehensive study\non six NLU tasks demonstrate LiST to improve by 35% over classic fine-tuning\nand 6% over prompt-based FN with 96% reduction in number of trainable\nparameters when fine-tuned with no more than 30 labeled examples from each\ntask. With only 14M tunable parameters, LiST outperforms GPT-3 in-context\nlearning by 33% on few-shot NLU tasks.", + "authors": "Yaqing Wang, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, Jianfeng Gao", + "published": "2021-10-12", + "updated": "2022-05-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2106.04647v2", + "title": "Compacter: Efficient Low-Rank Hypercomplex Adapter Layers", + "abstract": "Adapting large-scale pretrained language models to downstream tasks via\nfine-tuning is the standard method for achieving state-of-the-art performance\non NLP benchmarks. However, fine-tuning all weights of models with millions or\nbillions of parameters is sample-inefficient, unstable in low-resource\nsettings, and wasteful as it requires storing a separate copy of the model for\neach task. Recent work has developed parameter-efficient fine-tuning methods,\nbut these approaches either still require a relatively large number of\nparameters or underperform standard fine-tuning. In this work, we propose\nCompacter, a method for fine-tuning large-scale language models with a better\ntrade-off between task performance and the number of trainable parameters than\nprior work. Compacter accomplishes this by building on top of ideas from\nadapters, low-rank optimization, and parameterized hypercomplex multiplication\nlayers. Specifically, Compacter inserts task-specific weight matrices into a\npretrained model's weights, which are computed efficiently as a sum of\nKronecker products between shared \"slow\" weights and \"fast\" rank-one matrices\ndefined per Compacter layer. By only training 0.047% of a pretrained model's\nparameters, Compacter performs on par with standard fine-tuning on GLUE and\noutperforms standard fine-tuning on SuperGLUE and low-resource settings. Our\ncode is publicly available at~\\url{https://github.com/rabeehk/compacter}.", + "authors": "Rabeeh Karimi Mahabadi, James Henderson, Sebastian Ruder", + "published": "2021-06-08", + "updated": "2021-11-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2303.17051v2", + "title": "Towards foundation models and few-shot parameter-efficient fine-tuning for volumetric organ segmentation", + "abstract": "With the recent raise of foundation models in computer vision and NLP, the\npretrain-and-adapt strategy, where a large-scale model is fine-tuned on\ndownstream tasks, is gaining popularity. 
However, traditional fine-tuning\napproaches may still require significant resources and yield sub-optimal\nresults when the labeled data of the target task is scarce. This is especially\nthe case in clinical settings. To address this challenge, we formalize few-shot\nefficient fine-tuning (FSEFT), a novel and realistic setting for medical image\nsegmentation. Furthermore, we introduce a novel parameter-efficient fine-tuning\nstrategy tailored to medical image segmentation, with (a) spatial adapter\nmodules that are more appropriate for dense prediction tasks; and (b) a\nconstrained transductive inference, which leverages task-specific prior\nknowledge. Our comprehensive experiments on a collection of public CT datasets\nfor organ segmentation reveal the limitations of standard fine-tuning methods\nin few-shot scenarios, point to the potential of vision adapters and\ntransductive inference, and confirm the suitability of foundation models.", + "authors": "Julio Silva-Rodr\u00edguez, Jose Dolz, Ismail Ben Ayed", + "published": "2023-03-29", + "updated": "2023-09-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2205.12453v2", + "title": "Know Where You're Going: Meta-Learning for Parameter-Efficient Fine-Tuning", + "abstract": "A recent family of techniques, dubbed lightweight fine-tuning methods,\nfacilitates parameter-efficient transfer learning by updating only a small set\nof additional parameters while keeping the parameters of the pretrained\nlanguage model frozen. While proven to be an effective method, there are no\nexisting studies on if and how such knowledge of the downstream fine-tuning\napproach should affect the pretraining stage. In this work, we show that taking\nthe ultimate choice of fine-tuning method into consideration boosts the\nperformance of parameter-efficient fine-tuning. By relying on\noptimization-based meta-learning using MAML with certain modifications for our\ndistinct purpose, we prime the pretrained model specifically for\nparameter-efficient fine-tuning, resulting in gains of up to 1.7 points on\ncross-lingual NER fine-tuning. Our ablation settings and analyses further\nreveal that the tweaks we introduce in MAML are crucial for the attained gains.", + "authors": "Mozhdeh Gheini, Xuezhe Ma, Jonathan May", + "published": "2022-05-25", + "updated": "2022-12-08", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2304.08109v2", + "title": "A Comparative Study between Full-Parameter and LoRA-based Fine-Tuning on Chinese Instruction Data for Instruction Following Large Language Model", + "abstract": "Recently, the instruction-tuning of large language models is a crucial area\nof research in the field of natural language processing. Due to resource and\ncost limitations, several researchers have employed parameter-efficient tuning\ntechniques, such as LoRA, for instruction tuning, and have obtained encouraging\nresults In comparison to full-parameter fine-tuning, LoRA-based tuning\ndemonstrates salient benefits in terms of training costs. In this study, we\nundertook experimental comparisons between full-parameter fine-tuning and\nLoRA-based tuning methods, utilizing LLaMA as the base model. 
The experimental\nresults show that the selection of the foundational model, training dataset\nscale, learnable parameter quantity, and model training cost are all important\nfactors. We hope that the experimental conclusions of this paper can provide\ninspiration for training large language models, especially in the field of\nChinese, and help researchers find a better trade-off strategy between training\ncost and model performance. To facilitate the reproduction of the paper's\nresults, the dataset, model and code will be released.", + "authors": "Xianghui Sun, Yunjie Ji, Baochang Ma, Xiangang Li", + "published": "2023-04-17", + "updated": "2023-04-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2405.05493v1", + "title": "Parameter-Efficient Fine-Tuning With Adapters", + "abstract": "In the arena of language model fine-tuning, the traditional approaches, such\nas Domain-Adaptive Pretraining (DAPT) and Task-Adaptive Pretraining (TAPT),\nalthough effective, but computational intensive. This research introduces a\nnovel adaptation method utilizing the UniPELT framework as a base and added a\nPromptTuning Layer, which significantly reduces the number of trainable\nparameters while maintaining competitive performance across various benchmarks.\nOur method employs adapters, which enable efficient transfer of pretrained\nmodels to new tasks with minimal retraining of the base model parameters. We\nevaluate our approach using three diverse datasets: the GLUE benchmark, a\ndomain-specific dataset comprising four distinct areas, and the Stanford\nQuestion Answering Dataset 1.1 (SQuAD). Our results demonstrate that our\ncustomized adapter-based method achieves performance comparable to full model\nfine-tuning, DAPT+TAPT and UniPELT strategies while requiring fewer or\nequivalent amount of parameters. This parameter efficiency not only alleviates\nthe computational burden but also expedites the adaptation process. The study\nunderlines the potential of adapters in achieving high performance with\nsignificantly reduced resource consumption, suggesting a promising direction\nfor future research in parameter-efficient fine-tuning.", + "authors": "Keyu Chen, Yuan Pang, Zi Yang", + "published": "2024-05-09", + "updated": "2024-05-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2309.00363v1", + "title": "FederatedScope-LLM: A Comprehensive Package for Fine-tuning Large Language Models in Federated Learning", + "abstract": "LLMs have demonstrated great capabilities in various NLP tasks. Different\nentities can further improve the performance of those LLMs on their specific\ndownstream tasks by fine-tuning LLMs. When several entities have similar\ninterested tasks, but their data cannot be shared because of privacy concerns\nregulations, federated learning (FL) is a mainstream solution to leverage the\ndata of different entities. However, fine-tuning LLMs in federated learning\nsettings still lacks adequate support from existing FL frameworks because it\nhas to deal with optimizing the consumption of significant communication and\ncomputational resources, data preparation for different tasks, and distinct\ninformation protection demands. 
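The comparative study above treats LoRA-based tuning as the parameter-efficient alternative to full-parameter fine-tuning. For reference, a generic LoRA linear layer looks roughly as follows; this is a sketch under common conventions, and `rank` and `alpha` are the usual hyper-parameters rather than values taken from the paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pre-trained linear layer plus a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # freeze W and bias
            p.requires_grad = False
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        # y = W x + (alpha / r) * B A x ; only A and B receive gradients
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

# usage sketch: wrap a projection of a pre-trained model
layer = LoRALinear(nn.Linear(768, 768), rank=8)
out = layer(torch.randn(4, 16, 768))
```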
This paper first discusses these challenges of\nfederated fine-tuning LLMs, and introduces our package FS-LLM as a main\ncontribution, which consists of the following components: (1) we build an\nend-to-end benchmarking pipeline, automizing the processes of dataset\npreprocessing, federated fine-tuning execution, and performance evaluation on\nfederated LLM fine-tuning; (2) we provide comprehensive federated\nparameter-efficient fine-tuning algorithm implementations and versatile\nprogramming interfaces for future extension in FL scenarios with low\ncommunication and computation costs, even without accessing the full model; (3)\nwe adopt several accelerating and resource-efficient operators for fine-tuning\nLLMs with limited resources and the flexible pluggable sub-routines for\ninterdisciplinary study. We conduct extensive experiments to validate the\neffectiveness of FS-LLM and benchmark advanced LLMs with state-of-the-art\nparameter-efficient fine-tuning algorithms in FL settings, which also yields\nvaluable insights into federated fine-tuning LLMs for the research community.\nTo facilitate further research and adoption, we release FS-LLM at\nhttps://github.com/alibaba/FederatedScope/tree/llm.", + "authors": "Weirui Kuang, Bingchen Qian, Zitao Li, Daoyuan Chen, Dawei Gao, Xuchen Pan, Yuexiang Xie, Yaliang Li, Bolin Ding, Jingren Zhou", + "published": "2023-09-01", + "updated": "2023-09-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2402.18331v2", + "title": "FineDiffusion: Scaling up Diffusion Models for Fine-grained Image Generation with 10,000 Classes", + "abstract": "The class-conditional image generation based on diffusion models is renowned\nfor generating high-quality and diverse images. However, most prior efforts\nfocus on generating images for general categories, e.g., 1000 classes in\nImageNet-1k. A more challenging task, large-scale fine-grained image\ngeneration, remains the boundary to explore. In this work, we present a\nparameter-efficient strategy, called FineDiffusion, to fine-tune large\npre-trained diffusion models scaling to large-scale fine-grained image\ngeneration with 10,000 categories. FineDiffusion significantly accelerates\ntraining and reduces storage overhead by only fine-tuning tiered class\nembedder, bias terms, and normalization layers' parameters. To further improve\nthe image generation quality of fine-grained categories, we propose a novel\nsampling method for fine-grained image generation, which utilizes\nsuperclass-conditioned guidance, specifically tailored for fine-grained\ncategories, to replace the conventional classifier-free guidance sampling.\nCompared to full fine-tuning, FineDiffusion achieves a remarkable 1.56x\ntraining speed-up and requires storing merely 1.77% of the total model\nparameters, while achieving state-of-the-art FID of 9.776 on image generation\nof 10,000 classes. Extensive qualitative and quantitative experiments\ndemonstrate the superiority of our method compared to other parameter-efficient\nfine-tuning methods. 
The code and more generated results are available at our\nproject website: https://finediffusion.github.io/.", + "authors": "Ziying Pan, Kun Wang, Gang Li, Feihong He, Xiwang Li, Yongxuan Lai", + "published": "2024-02-28", + "updated": "2024-04-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2008.03156v1", + "title": "Better Fine-Tuning by Reducing Representational Collapse", + "abstract": "Although widely adopted, existing approaches for fine-tuning pre-trained\nlanguage models have been shown to be unstable across hyper-parameter settings,\nmotivating recent work on trust region methods. In this paper, we present a\nsimplified and efficient method rooted in trust region theory that replaces\npreviously used adversarial objectives with parametric noise (sampling from\neither a normal or uniform distribution), thereby discouraging representation\nchange during fine-tuning when possible without hurting performance. We also\nintroduce a new analysis to motivate the use of trust region methods more\ngenerally, by studying representational collapse; the degradation of\ngeneralizable representations from pre-trained models as they are fine-tuned\nfor a specific end task. Extensive experiments show that our fine-tuning method\nmatches or exceeds the performance of previous trust region methods on a range\nof understanding and generation tasks (including DailyMail/CNN, Gigaword,\nReddit TIFU, and the GLUE benchmark), while also being much faster. We also\nshow that it is less prone to representation collapse; the pre-trained models\nmaintain more generalizable representations every time they are fine-tuned.", + "authors": "Armen Aghajanyan, Akshat Shrivastava, Anchit Gupta, Naman Goyal, Luke Zettlemoyer, Sonal Gupta", + "published": "2020-08-06", + "updated": "2020-08-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL", + "stat.ML" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2403.11366v2", + "title": "JORA: JAX Tensor-Parallel LoRA Library for Retrieval Augmented Fine-Tuning", + "abstract": "The scaling of Large Language Models (LLMs) for retrieval-based tasks,\nparticularly in Retrieval Augmented Generation (RAG), faces significant memory\nconstraints, especially when fine-tuning extensive prompt sequences. Current\nopen-source libraries support full-model inference and fine-tuning across\nmultiple GPUs but fall short of accommodating the efficient parameter\ndistribution required for retrieved context. Addressing this gap, we introduce\na novel framework for PEFT-compatible fine-tuning of Llama-2 models, leveraging\ndistributed training. Our framework uniquely utilizes JAX's just-in-time (JIT)\ncompilation and tensor-sharding for efficient resource management, thereby\nenabling accelerated fine-tuning with reduced memory requirements. This\nadvancement significantly improves the scalability and feasibility of\nfine-tuning LLMs for complex RAG applications, even on systems with limited GPU\nresources. 
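The representational-collapse paper above replaces adversarial trust-region objectives with parametric noise. Below is a hedged sketch of one way to realize that idea, assuming the regularizer is a symmetric KL divergence between predictions on clean and noise-perturbed embeddings; consult the paper for the exact objective and noise schedule.

```python
import torch
import torch.nn.functional as F

def noise_regularized_loss(model_head, embeddings, labels, eps=1e-5, lam=1.0):
    """Task loss plus a symmetric-KL penalty between clean and noised outputs.

    `model_head` maps embeddings to logits; noise is drawn from a normal
    distribution with std `eps` (a uniform distribution is another option).
    """
    logits = model_head(embeddings)
    noised = embeddings + torch.randn_like(embeddings) * eps
    logits_n = model_head(noised)

    task = F.cross_entropy(logits, labels)
    p = F.log_softmax(logits, dim=-1)
    q = F.log_softmax(logits_n, dim=-1)
    sym_kl = (F.kl_div(p, q, log_target=True, reduction="batchmean")
              + F.kl_div(q, p, log_target=True, reduction="batchmean"))
    return task + lam * sym_kl
```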
Our experiments show more than 12x improvement in runtime compared\nto Hugging Face/DeepSpeed implementation with four GPUs while consuming less\nthan half the VRAM per GPU.", + "authors": "Anique Tahir, Lu Cheng, Huan Liu", + "published": "2024-03-17", + "updated": "2024-03-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL", + "cs.DC" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2203.06904v2", + "title": "Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models", + "abstract": "Despite the success, the process of fine-tuning large-scale PLMs brings\nprohibitive adaptation costs. In fact, fine-tuning all the parameters of a\ncolossal model and retaining separate instances for different tasks are\npractically infeasible. This necessitates a new branch of research focusing on\nthe parameter-efficient adaptation of PLMs, dubbed as delta tuning in this\npaper. In contrast with the standard fine-tuning, delta tuning only fine-tunes\na small portion of the model parameters while keeping the rest untouched,\nlargely reducing both the computation and storage costs. Recent studies have\ndemonstrated that a series of delta tuning methods with distinct tuned\nparameter selection could achieve performance on a par with full-parameter\nfine-tuning, suggesting a new promising way of stimulating large-scale PLMs. In\nthis paper, we first formally describe the problem of delta tuning and then\ncomprehensively review recent delta tuning approaches. We also propose a\nunified categorization criterion that divide existing delta tuning methods into\nthree groups: addition-based, specification-based, and reparameterization-based\nmethods. Though initially proposed as an efficient method to steer large\nmodels, we believe that some of the fascinating evidence discovered along with\ndelta tuning could help further reveal the mechanisms of PLMs and even deep\nneural networks. To this end, we discuss the theoretical principles underlying\nthe effectiveness of delta tuning and propose frameworks to interpret delta\ntuning from the perspective of optimization and optimal control, respectively.\nFurthermore, we provide a holistic empirical study of representative methods,\nwhere results on over 100 NLP tasks demonstrate a comprehensive performance\ncomparison of different approaches. The experimental results also cover the\nanalysis of combinatorial, scaling and transferable properties of delta tuning.", + "authors": "Ning Ding, Yujia Qin, Guang Yang, Fuchao Wei, Zonghan Yang, Yusheng Su, Shengding Hu, Yulin Chen, Chi-Min Chan, Weize Chen, Jing Yi, Weilin Zhao, Xiaozhi Wang, Zhiyuan Liu, Hai-Tao Zheng, Jianfei Chen, Yang Liu, Jie Tang, Juanzi Li, Maosong Sun", + "published": "2022-03-14", + "updated": "2022-03-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2210.00036v2", + "title": "Differentially Private Bias-Term only Fine-tuning of Foundation Models", + "abstract": "We study the problem of differentially private (DP) fine-tuning of large\npre-trained models -- a recent privacy-preserving approach suitable for solving\ndownstream tasks with sensitive data. 
Existing work has demonstrated that high\naccuracy is possible under strong privacy constraint, yet requires significant\ncomputational overhead or modifications to the network architecture.\n We propose differentially private bias-term fine-tuning (DP-BiTFiT), which\nmatches the state-of-the-art accuracy for DP algorithms and the efficiency of\nthe standard BiTFiT. DP-BiTFiT is model agnostic (not modifying the network\narchitecture), parameter efficient (only training about $0.1\\%$ of the\nparameters), and computation efficient (almost removing the overhead caused by\nDP, in both the time and space complexity). On a wide range of tasks, DP-BiTFiT\nis $2\\sim 30\\times$ faster and uses $2\\sim 8\\times$ less memory than DP full\nfine-tuning, even faster than the standard full fine-tuning. This amazing\nefficiency enables us to conduct DP fine-tuning on language and vision tasks\nwith long-sequence texts and high-resolution images, which were computationally\ndifficult using existing methods.", + "authors": "Zhiqi Bu, Yu-Xiang Wang, Sheng Zha, George Karypis", + "published": "2022-09-30", + "updated": "2022-10-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL", + "cs.CR", + "cs.CV" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2405.02710v1", + "title": "Enhancing News Summarization with ELearnFit through Efficient In-Context Learning and Efficient Fine-Tuning", + "abstract": "With the deluge of information delivered by the daily news cycle, there is a\ngrowing need to effectively and efficiently summarize news feeds for quick\nconsumption. We leverage large language models (LLMs), with their advanced\nlearning and generative abilities as compared to conventional language models,\nto generate concise and coherent summaries for news articles from the XSum\ndataset. Our paper focuses on two key aspects of LLMs: Efficient in-context\nLearning (ELearn) and Parameter Efficient Fine-tuning (EFit). Under ELearn, we\nfind that increasing the number of shots in prompts and utilizing simple\ntemplates generally improve the quality of summaries. We also find that\nutilizing relevant examples in few-shot learning for ELearn does not improve\nmodel performance. In addition, we studied EFit using different methods and\ndemonstrate that fine-tuning the first layer of LLMs produces better outcomes\nas compared to fine-tuning other layers or utilizing LoRA. We also find that\nleveraging more relevant training samples using selective layers does not\nresult in better performance. By combining ELearn and EFit, we create a new\nmodel (ELearnFit) that leverages the benefits of both few-shot learning and\nfine-tuning and produces superior performance to either model alone. We also\nuse ELearnFit to highlight the trade-offs between prompting and fine-tuning,\nespecially for situations where only a limited number of annotated samples are\navailable. 
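DP-BiTFiT above builds on bias-term-only fine-tuning. Leaving aside the differential-privacy machinery (per-sample gradient clipping and noise addition are omitted here), the parameter selection itself reduces to freezing everything except bias terms; a minimal sketch with an illustrative checkpoint:

```python
from transformers import AutoModelForSequenceClassification

# illustrative checkpoint; any PyTorch model works the same way
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

for name, param in model.named_parameters():
    param.requires_grad = name.endswith("bias")   # train bias terms only
    # (a newly initialized task head is usually kept trainable as well)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable fraction: {trainable / total:.4%}")   # on the order of 0.1% for BERT
```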
Ultimately, our research provides practical techniques to optimize\nnews summarization during the prompting and fine-tuning stages and enhances the\nsynthesis of news articles.", + "authors": "Che Guan, Andrew Chin, Puya Vahabi", + "published": "2024-05-04", + "updated": "2024-05-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2312.08881v1", + "title": "AdaptIR: Parameter Efficient Multi-task Adaptation for Pre-trained Image Restoration Models", + "abstract": "Pre-training has shown promising results on various image restoration tasks,\nwhich is usually followed by full fine-tuning for each specific downstream task\n(e.g., image denoising). However, such full fine-tuning usually suffers from\nthe problems of heavy computational cost in practice, due to the massive\nparameters of pre-trained restoration models, thus limiting its real-world\napplications. Recently, Parameter Efficient Transfer Learning (PETL) offers an\nefficient alternative solution to full fine-tuning, yet still faces great\nchallenges for pre-trained image restoration models, due to the diversity of\ndifferent degradations. To address these issues, we propose AdaptIR, a novel\nparameter efficient transfer learning method for adapting pre-trained\nrestoration models. Specifically, the proposed method consists of a\nmulti-branch inception structure to orthogonally capture local spatial, global\nspatial, and channel interactions. In this way, it allows powerful\nrepresentations under a very low parameter budget. Extensive experiments\ndemonstrate that the proposed method can achieve comparable or even better\nperformance than full fine-tuning, while only using 0.6% parameters. Code is\navailable at https://github.com/csguoh/AdaptIR.", + "authors": "Hang Guo, Tao Dai, Yuanchao Bai, Bin Chen, Shu-Tao Xia, Zexuan Zhu", + "published": "2023-12-12", + "updated": "2023-12-12", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2308.06522v1", + "title": "SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models", + "abstract": "Transfer learning via fine-tuning pre-trained transformer models has gained\nsignificant success in delivering state-of-the-art results across various NLP\ntasks. In the absence of centralized data, Federated Learning (FL) can benefit\nfrom distributed and private data of the FL edge clients for fine-tuning.\nHowever, due to the limited communication, computation, and storage\ncapabilities of edge devices and the huge sizes of popular transformer models,\nefficient fine-tuning is crucial to make federated training feasible. This work\nexplores the opportunities and challenges associated with applying parameter\nefficient fine-tuning (PEFT) methods in different FL settings for language\ntasks. Specifically, our investigation reveals that as the data across users\nbecomes more diverse, the gap between fully fine-tuning the model and employing\nPEFT methods widens. To bridge this performance gap, we propose a method called\nSLoRA, which overcomes the key limitations of LoRA in high heterogeneous data\nscenarios through a novel data-driven initialization technique. 
Our\nexperimental results demonstrate that SLoRA achieves performance comparable to\nfull fine-tuning, with significant sparse updates with approximately $\\sim 1\\%$\ndensity while reducing training time by up to $90\\%$.", + "authors": "Sara Babakniya, Ahmed Roushdy Elkordy, Yahya H. Ezzeldin, Qingfeng Liu, Kee-Bong Song, Mostafa El-Khamy, Salman Avestimehr", + "published": "2023-08-12", + "updated": "2023-08-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2212.03916v2", + "title": "Transfer learning for chemically accurate interatomic neural network potentials", + "abstract": "Developing machine learning-based interatomic potentials from ab-initio\nelectronic structure methods remains a challenging task for computational\nchemistry and materials science. This work studies the capability of transfer\nlearning, in particular discriminative fine-tuning, for efficiently generating\nchemically accurate interatomic neural network potentials on organic molecules\nfrom the MD17 and ANI data sets. We show that pre-training the network\nparameters on data obtained from density functional calculations considerably\nimproves the sample efficiency of models trained on more accurate ab-initio\ndata. Additionally, we show that fine-tuning with energy labels alone can\nsuffice to obtain accurate atomic forces and run large-scale atomistic\nsimulations, provided a well-designed fine-tuning data set. We also investigate\npossible limitations of transfer learning, especially regarding the design and\nsize of the pre-training and fine-tuning data sets. Finally, we provide GM-NN\npotentials pre-trained and fine-tuned on the ANI-1x and ANI-1ccx data sets,\nwhich can easily be fine-tuned on and applied to organic molecules.", + "authors": "Viktor Zaverkin, David Holzm\u00fcller, Luca Bonfirraro, Johannes K\u00e4stner", + "published": "2022-12-07", + "updated": "2023-01-28", + "primary_cat": "physics.comp-ph", + "cats": [ + "physics.comp-ph", + "stat.ML" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2203.12119v2", + "title": "Visual Prompt Tuning", + "abstract": "The current modus operandi in adapting pre-trained models involves updating\nall the backbone parameters, ie, full fine-tuning. This paper introduces Visual\nPrompt Tuning (VPT) as an efficient and effective alternative to full\nfine-tuning for large-scale Transformer models in vision. Taking inspiration\nfrom recent advances in efficiently tuning large language models, VPT\nintroduces only a small amount (less than 1% of model parameters) of trainable\nparameters in the input space while keeping the model backbone frozen. Via\nextensive experiments on a wide variety of downstream recognition tasks, we\nshow that VPT achieves significant performance gains compared to other\nparameter efficient tuning protocols. 
Most importantly, VPT even outperforms\nfull fine-tuning in many cases across model capacities and training data\nscales, while reducing per-task storage cost.", + "authors": "Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, Ser-Nam Lim", + "published": "2022-03-23", + "updated": "2022-07-20", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2307.11764v2", + "title": "Sensi-BERT: Towards Sensitivity Driven Fine-Tuning for Parameter-Efficient BERT", + "abstract": "Large pre-trained language models have recently gained significant traction\ndue to their improved performance on various down-stream tasks like text\nclassification and question answering, requiring only few epochs of\nfine-tuning. However, their large model sizes often prohibit their applications\non resource-constrained edge devices. Existing solutions of yielding\nparameter-efficient BERT models largely rely on compute-exhaustive training and\nfine-tuning. Moreover, they often rely on additional compute heavy models to\nmitigate the performance gap. In this paper, we present Sensi-BERT, a\nsensitivity driven efficient fine-tuning of BERT models that can take an\noff-the-shelf pre-trained BERT model and yield highly parameter-efficient\nmodels for downstream tasks. In particular, we perform sensitivity analysis to\nrank each individual parameter tensor, that then is used to trim them\naccordingly during fine-tuning for a given parameter or FLOPs budget. Our\nexperiments show the efficacy of Sensi-BERT across different downstream tasks\nincluding MNLI, QQP, QNLI, SST-2 and SQuAD, showing better performance at\nsimilar or smaller parameter budget compared to various alternatives.", + "authors": "Souvik Kundu, Sharath Nittur Sridhar, Maciej Szankin, Sairam Sundaresan", + "published": "2023-07-14", + "updated": "2023-08-31", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2401.10544v1", + "title": "AAT: Adapting Audio Transformer for Various Acoustics Recognition Tasks", + "abstract": "Recently, Transformers have been introduced into the field of acoustics\nrecognition. They are pre-trained on large-scale datasets using methods such as\nsupervised learning and semi-supervised learning, demonstrating robust\ngenerality--It fine-tunes easily to downstream tasks and shows more robust\nperformance. However, the predominant fine-tuning method currently used is\nstill full fine-tuning, which involves updating all parameters during training.\nThis not only incurs significant memory usage and time costs but also\ncompromises the model's generality. Other fine-tuning methods either struggle\nto address this issue or fail to achieve matching performance. Therefore, we\nconducted a comprehensive analysis of existing fine-tuning methods and proposed\nan efficient fine-tuning approach based on Adapter tuning, namely AAT. The core\nidea is to freeze the audio Transformer model and insert extra learnable\nAdapters, efficiently acquiring downstream task knowledge without compromising\nthe model's original generality. Extensive experiments have shown that our\nmethod achieves performance comparable to or even superior to full fine-tuning\nwhile optimizing only 7.118% of the parameters. 
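The VPT abstract above adds a small set of learnable tokens in the input space of a frozen Transformer. A rough sketch of shallow visual prompt tuning follows; the wrapper name, prompt count, and the choice to insert prompts right after the [CLS] token are assumptions for illustration rather than the authors' code.

```python
import torch
import torch.nn as nn

class VisualPromptWrapper(nn.Module):
    """Prepends learnable prompt tokens to the patch sequence of a frozen ViT."""
    def __init__(self, vit_encoder: nn.Module, embed_dim: int, num_prompts: int = 10):
        super().__init__()
        self.encoder = vit_encoder
        for p in self.encoder.parameters():
            p.requires_grad = False                   # backbone stays frozen
        self.prompts = nn.Parameter(torch.zeros(1, num_prompts, embed_dim))
        nn.init.trunc_normal_(self.prompts, std=0.02)

    def forward(self, tokens):                        # tokens: (B, N, D) incl. [CLS]
        cls_tok, patches = tokens[:, :1], tokens[:, 1:]
        prompts = self.prompts.expand(tokens.size(0), -1, -1)
        return self.encoder(torch.cat([cls_tok, prompts, patches], dim=1))
```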
It also demonstrates\nsuperiority over other fine-tuning methods.", + "authors": "Yun Liang, Hai Lin, Shaojian Qiu, Yihang Zhang", + "published": "2024-01-19", + "updated": "2024-01-19", + "primary_cat": "cs.SD", + "cats": [ + "cs.SD", + "cs.AI", + "eess.AS" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2404.17245v1", + "title": "Parameter Efficient Fine-tuning of Self-supervised ViTs without Catastrophic Forgetting", + "abstract": "Artificial neural networks often suffer from catastrophic forgetting, where\nlearning new concepts leads to a complete loss of previously acquired\nknowledge. We observe that this issue is particularly magnified in vision\ntransformers (ViTs), where post-pre-training and fine-tuning on new tasks can\nsignificantly degrade the model's original general abilities. For instance, a\nDINO ViT-Base/16 pre-trained on ImageNet-1k loses over 70% accuracy on\nImageNet-1k after just 10 iterations of fine-tuning on CIFAR-100. Overcoming\nthis stability-plasticity dilemma is crucial for enabling ViTs to continuously\nlearn and adapt to new domains while preserving their initial knowledge. In\nthis work, we study two new parameter-efficient fine-tuning strategies:\n(1)~Block Expansion, and (2) Low-rank adaptation (LoRA). Our experiments reveal\nthat using either Block Expansion or LoRA on self-supervised pre-trained ViTs\nsurpass fully fine-tuned ViTs in new domains while offering significantly\ngreater parameter efficiency. Notably, we find that Block Expansion experiences\nonly a minimal performance drop in the pre-training domain, thereby effectively\nmitigating catastrophic forgetting in pre-trained ViTs.", + "authors": "Reza Akbarian Bafghi, Nidhin Harilal, Claire Monteleoni, Maziar Raissi", + "published": "2024-04-26", + "updated": "2024-04-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2310.18339v1", + "title": "MOELoRA: An MOE-based Parameter Efficient Fine-Tuning Method for Multi-task Medical Applications", + "abstract": "The recent surge in the field of Large Language Models (LLMs) has gained\nsignificant attention in numerous domains. In order to tailor an LLM to a\nspecific domain such as a web-based healthcare system, fine-tuning with domain\nknowledge is necessary. However, two issues arise during fine-tuning LLMs for\nmedical applications. The first is the problem of task variety, where there are\nnumerous distinct tasks in real-world medical scenarios. This diversity often\nresults in suboptimal fine-tuning due to data imbalance and seesawing problems.\nAdditionally, the high cost of fine-tuning can be prohibitive, impeding the\napplication of LLMs. The large number of parameters in LLMs results in enormous\ntime and computational consumption during fine-tuning, which is difficult to\njustify. To address these two issues simultaneously, we propose a novel\nparameter-efficient fine-tuning framework for multi-task medical applications\ncalled MOELoRA. The framework aims to capitalize on the benefits of both MOE\nfor multi-task learning and LoRA for parameter-efficient fine-tuning. To unify\nMOE and LoRA, we devise multiple experts as the trainable parameters, where\neach expert consists of a pair of low-rank matrices to maintain a small number\nof trainable parameters. 
Additionally, we propose a task-motivated gate\nfunction for all MOELoRA layers that can regulate the contributions of each\nexpert and generate distinct parameters for various tasks. To validate the\neffectiveness and practicality of the proposed method, we conducted\ncomprehensive experiments on a public multi-task Chinese medical dataset. The\nexperimental results demonstrate that MOELoRA outperforms existing\nparameter-efficient fine-tuning methods. The implementation is available online\nfor convenient reproduction of our experiments.", + "authors": "Qidong Liu, Xian Wu, Xiangyu Zhao, Yuanshao Zhu, Derong Xu, Feng Tian, Yefeng Zheng", + "published": "2023-10-21", + "updated": "2023-10-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2312.15698v3", + "title": "RepairLLaMA: Efficient Representations and Fine-Tuned Adapters for Program Repair", + "abstract": "Automated Program Repair (APR) has evolved significantly with the advent of\nLarge Language Models (LLMs). Fine-tuning LLMs for program repair is a recent\navenue of research, with many dimensions which have not been explored. Existing\nwork mostly fine-tunes LLMs with naive code representations and is\nfundamentally limited in its ability to fine-tune larger LLMs. To address this\nproblem, we propose RepairLLaMA, a novel program repair approach that combines\n1) code representations for APR and 2) the state-of-the-art parameter-efficient\nLLM fine-tuning technique called LoRA. This results in RepairLLaMA producing a\nhighly effective `program repair adapter' for fixing bugs with language models.\nOur experiments demonstrate the validity of both concepts. First, fine-tuning\nadapters with program repair specific code representations enables the model to\nuse meaningful repair signals. Second, parameter-efficient fine-tuning helps\nfine-tuning to converge and contributes to the effectiveness of the repair\nadapter to fix data-points outside the fine-tuning data distribution. Overall,\nRepairLLaMA correctly fixes 125 Defects4J v2 and 82 HumanEval-Java bugs,\noutperforming all baselines.", + "authors": "Andr\u00e9 Silva, Sen Fang, Martin Monperrus", + "published": "2023-12-25", + "updated": "2024-03-11", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.LG" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2110.04366v3", + "title": "Towards a Unified View of Parameter-Efficient Transfer Learning", + "abstract": "Fine-tuning large pre-trained language models on downstream tasks has become\nthe de-facto learning paradigm in NLP. However, conventional approaches\nfine-tune all the parameters of the pre-trained model, which becomes\nprohibitive as the model size and the number of tasks grow. Recent work has\nproposed a variety of parameter-efficient transfer learning methods that only\nfine-tune a small number of (extra) parameters to attain strong performance.\nWhile effective, the critical ingredients for success and the connections among\nthe various methods are poorly understood. 
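MOELoRA above mixes several low-rank expert pairs through a task-motivated gate. The sketch below shows one plausible realization, with a learned task embedding feeding a linear gate; names, shapes, and the gating form are assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn

class MoELoRALayer(nn.Module):
    """Frozen base linear + K low-rank experts mixed by a task-conditioned gate."""
    def __init__(self, base: nn.Linear, num_tasks: int, k: int = 4, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        d_in, d_out = base.in_features, base.out_features
        self.A = nn.Parameter(torch.randn(k, rank, d_in) * 0.01)   # down-projections
        self.B = nn.Parameter(torch.zeros(k, d_out, rank))         # up-projections
        self.task_emb = nn.Embedding(num_tasks, 32)
        self.gate = nn.Linear(32, k)

    def forward(self, x, task_id):                                 # x: (B, T, d_in)
        g = torch.softmax(self.gate(self.task_emb(task_id)), dim=-1)   # (B, k)
        # per-expert low-rank update B_k A_k x, shape (B, T, k, d_out)
        upd = torch.einsum("btd,krd,kor->btko", x, self.A, self.B)
        return self.base(x) + torch.einsum("btko,bk->bto", upd, g)
```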
In this paper, we break down the\ndesign of state-of-the-art parameter-efficient transfer learning methods and\npresent a unified framework that establishes connections between them.\nSpecifically, we re-frame them as modifications to specific hidden states in\npre-trained models, and define a set of design dimensions along which different\nmethods vary, such as the function to compute the modification and the position\nto apply the modification. Through comprehensive empirical studies across\nmachine translation, text summarization, language understanding, and text\nclassification benchmarks, we utilize the unified view to identify important\ndesign choices in previous methods. Furthermore, our unified framework enables\nthe transfer of design elements across different approaches, and as a result we\nare able to instantiate new parameter-efficient fine-tuning methods that tune\nless parameters than previous methods while being more effective, achieving\ncomparable results to fine-tuning all parameters on all four tasks.", + "authors": "Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, Graham Neubig", + "published": "2021-10-08", + "updated": "2022-02-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2106.03164v1", + "title": "On the Effectiveness of Adapter-based Tuning for Pretrained Language Model Adaptation", + "abstract": "Adapter-based tuning has recently arisen as an alternative to fine-tuning. It\nworks by adding light-weight adapter modules to a pretrained language model\n(PrLM) and only updating the parameters of adapter modules when learning on a\ndownstream task. As such, it adds only a few trainable parameters per new task,\nallowing a high degree of parameter sharing. Prior studies have shown that\nadapter-based tuning often achieves comparable results to fine-tuning. However,\nexisting work only focuses on the parameter-efficient aspect of adapter-based\ntuning while lacking further investigation on its effectiveness. In this paper,\nwe study the latter. We first show that adapter-based tuning better mitigates\nforgetting issues than fine-tuning since it yields representations with less\ndeviation from those generated by the initial PrLM. We then empirically compare\nthe two tuning methods on several downstream NLP tasks and settings. We\ndemonstrate that 1) adapter-based tuning outperforms fine-tuning on\nlow-resource and cross-lingual tasks; 2) it is more robust to overfitting and\nless sensitive to changes in learning rates.", + "authors": "Ruidan He, Linlin Liu, Hai Ye, Qingyu Tan, Bosheng Ding, Liying Cheng, Jia-Wei Low, Lidong Bing, Luo Si", + "published": "2021-06-06", + "updated": "2021-06-06", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2404.09022v1", + "title": "Navigating the Landscape of Large Language Models: A Comprehensive Review and Analysis of Paradigms and Fine-Tuning Strategies", + "abstract": "With the surge of ChatGPT,the use of large models has significantly\nincreased,rapidly rising to prominence across the industry and sweeping across\nthe internet. This article is a comprehensive review of fine-tuning methods for\nlarge models. 
This paper investigates the latest technological advancements and\nthe application of advanced methods in aspects such as task-adaptive\nfine-tuning,domain-adaptive fine-tuning,few-shot learning,knowledge\ndistillation,multi-task learning,parameter-efficient fine-tuning,and dynamic\nfine-tuning.", + "authors": "Benjue Weng", + "published": "2024-04-13", + "updated": "2024-04-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2401.13942v2", + "title": "StyleInject: Parameter Efficient Tuning of Text-to-Image Diffusion Models", + "abstract": "The ability to fine-tune generative models for text-to-image generation tasks\nis crucial, particularly facing the complexity involved in accurately\ninterpreting and visualizing textual inputs. While LoRA is efficient for\nlanguage model adaptation, it often falls short in text-to-image tasks due to\nthe intricate demands of image generation, such as accommodating a broad\nspectrum of styles and nuances. To bridge this gap, we introduce StyleInject, a\nspecialized fine-tuning approach tailored for text-to-image models. StyleInject\ncomprises multiple parallel low-rank parameter matrices, maintaining the\ndiversity of visual features. It dynamically adapts to varying styles by\nadjusting the variance of visual features based on the characteristics of the\ninput signal. This approach significantly minimizes the impact on the original\nmodel's text-image alignment capabilities while adeptly adapting to various\nstyles in transfer learning. StyleInject proves particularly effective in\nlearning from and enhancing a range of advanced, community-fine-tuned\ngenerative models. Our comprehensive experiments, including both small-sample\nand large-scale data fine-tuning as well as base model distillation, show that\nStyleInject surpasses traditional LoRA in both text-image semantic consistency\nand human preference evaluation, all while ensuring greater parameter\nefficiency.", + "authors": "Mohan Zhou, Yalong Bai, Qing Yang, Tiejun Zhao", + "published": "2024-01-25", + "updated": "2024-05-10", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2305.16742v1", + "title": "Parameter-Efficient Fine-Tuning without Introducing New Latency", + "abstract": "Parameter-efficient fine-tuning (PEFT) of pre-trained language models has\nrecently demonstrated remarkable achievements, effectively matching the\nperformance of full fine-tuning while utilizing significantly fewer trainable\nparameters, and consequently addressing the storage and communication\nconstraints. Nonetheless, various PEFT methods are limited by their inherent\ncharacteristics. In the case of sparse fine-tuning, which involves modifying\nonly a small subset of the existing parameters, the selection of fine-tuned\nparameters is task- and domain-specific, making it unsuitable for federated\nlearning. On the other hand, PEFT methods with adding new parameters typically\nintroduce additional inference latency. In this paper, we demonstrate the\nfeasibility of generating a sparse mask in a task-agnostic manner, wherein all\ndownstream tasks share a common mask. 
Our approach, which relies solely on the\nmagnitude information of pre-trained parameters, surpasses existing\nmethodologies by a significant margin when evaluated on the GLUE benchmark.\nAdditionally, we introduce a novel adapter technique that directly applies the\nadapter to pre-trained parameters instead of the hidden representation, thereby\nachieving identical inference speed to that of full fine-tuning. Through\nextensive experiments, our proposed method attains a new state-of-the-art\noutcome in terms of both performance and storage efficiency, storing only 0.03%\nparameters of full fine-tuning.", + "authors": "Baohao Liao, Yan Meng, Christof Monz", + "published": "2023-05-26", + "updated": "2023-05-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2212.05901v1", + "title": "Parameter-Efficient Finetuning of Transformers for Source Code", + "abstract": "Pretrained Transformers achieve state-of-the-art performance in various\ncode-processing tasks but may be too large to be deployed. As software\ndevelopment tools often incorporate modules for various purposes which may\npotentially use a single instance of the pretrained model, it appears relevant\nto utilize parameter-efficient fine-tuning for the pretrained models of code.\nIn this work, we test two widely used approaches, adapters and LoRA, which were\ninitially tested on NLP tasks, on four code-processing tasks. We find that\nthough the efficient fine-tuning approaches may achieve comparable or higher\nperformance than the standard, full, fine-tuning in code understanding tasks,\nthey underperform full fine-tuning in code-generative tasks. These results\nunderline the importance of testing efficient fine-tuning approaches on other\ndomains than NLP and motivate future research in efficient fine-tuning for\nsource code.", + "authors": "Shamil Ayupov, Nadezhda Chirkova", + "published": "2022-12-12", + "updated": "2022-12-12", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG", + "cs.SE" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2312.15681v1", + "title": "Partial Fine-Tuning: A Successor to Full Fine-Tuning for Vision Transformers", + "abstract": "Fine-tuning pre-trained foundation models has gained significant popularity\nin various research fields. Existing methods for fine-tuning can be roughly\ndivided into two categories, namely Parameter-Efficient Fine-Tuning and\nHigh-Performance Fine-Tuning. The former aims at improving efficiency, while\nthe latter focuses on enhancing performance. Beyond these methods, we\ndemonstrate that Partial Fine-Tuning can be an innovative and promising\ndirection capable of concurrently enhancing both efficiency and accuracy. We\nfirst validate eight manually-defined partial fine-tuning strategies across\nkinds of datasets and vision transformer architectures, and find that some\npartial fine-tuning strategies (e.g., ffn only or attention only) can achieve\nbetter performance with fewer tuned parameters than full fine-tuning, and\nselecting appropriate layers is critical to partial fine-tuning. Thus, we\npropose a novel fine-tuned angle metric to guide the selection of appropriate\nlayers for partial fine-tuning, making it flexible to be adapted to various\nscenarios for more practicable partial fine-tuning. 
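The latency-free PEFT abstract above derives a task-agnostic sparse mask solely from the magnitude of pre-trained parameters. The sketch below illustrates the generic mechanics of masked sparse fine-tuning via gradient hooks; keeping the largest-magnitude entries trainable is an assumption made for illustration and is not necessarily the paper's exact criterion.

```python
import torch
import torch.nn as nn

def apply_magnitude_mask(model: nn.Module, density: float = 0.0003):
    """Restrict updates to a `density` fraction of weights, chosen by magnitude.

    The 0.03% default mirrors the density reported in the abstract. Gradients
    outside the mask are zeroed with a hook, so no new parameters (and no extra
    inference latency) are introduced.
    """
    for param in model.parameters():
        k = max(1, int(param.numel() * density))
        threshold = param.detach().abs().flatten().topk(k).values.min()
        mask = (param.detach().abs() >= threshold).float()
        param.register_hook(lambda grad, m=mask: grad * m)   # zero masked-out grads
```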
Additionally, we show that\npartial fine-tuning can serve as a new dimension for Model Soups, improving\nboth the model performance and generalization with fewer tuned parameters.\nComprehensive experiments on a wide range of datasets and models validate the\ngreat potential of partial fine-tuning.", + "authors": "Peng Ye, Yongqi Huang, Chongjun Tu, Minglei Li, Tao Chen, Tong He, Wanli Ouyang", + "published": "2023-12-25", + "updated": "2023-12-25", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2403.20284v1", + "title": "LayerNorm: A key component in parameter-efficient fine-tuning", + "abstract": "Fine-tuning a pre-trained model, such as Bidirectional Encoder\nRepresentations from Transformers (BERT), has been proven to be an effective\nmethod for solving many natural language processing (NLP) tasks. However, due\nto the large number of parameters in many state-of-the-art NLP models,\nincluding BERT, the process of fine-tuning is computationally expensive. One\nattractive solution to this issue is parameter-efficient fine-tuning, which\ninvolves modifying only a minimal segment of the model while keeping the\nremainder unchanged. Yet, it remains unclear which segment of the BERT model is\ncrucial for fine-tuning. In this paper, we first analyze different components\nin the BERT model to pinpoint which one undergoes the most significant changes\nafter fine-tuning. We find that output LayerNorm changes more than any other\ncomponents when fine-tuned for different General Language Understanding\nEvaluation (GLUE) tasks. Then we show that only fine-tuning the LayerNorm can\nreach comparable, or in some cases better, performance to full fine-tuning and\nother parameter-efficient fine-tuning methods. Moreover, we use Fisher\ninformation to determine the most critical subset of LayerNorm and demonstrate\nthat many NLP tasks in the GLUE benchmark can be solved by fine-tuning only a\nsmall portion of LayerNorm with negligible performance degradation.", + "authors": "Taha ValizadehAslani, Hualou Liang", + "published": "2024-03-29", + "updated": "2024-03-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2401.15207v2", + "title": "HiFT: A Hierarchical Full Parameter Fine-Tuning Strategy", + "abstract": "Full-parameter fine-tuning has become the go-to choice for adapting language\nmodels (LMs) to downstream tasks due to its excellent performance. As LMs grow\nin size, fine-tuning the full parameters of LMs requires a prohibitively large\namount of GPU memory. Existing approaches utilize zeroth-order optimizer to\nconserve GPU memory, which can potentially compromise the performance of LMs as\nnon-zero order optimizers tend to converge more readily on most downstream\ntasks. In this paper, we propose a novel optimizer-independent end-to-end\nhierarchical fine-tuning strategy, HiFT, which only updates a subset of\nparameters at each training step. HiFT can significantly reduce the amount of\ngradients and optimizer state parameters residing in GPU memory at the same\ntime, thereby reducing GPU memory usage. Our results demonstrate that: (1) HiFT\nachieves comparable performance to parameter-efficient fine-tuning and standard\nfull parameter fine-tuning. (2) HiFT supports various optimizers including\nAdamW, AdaGrad, SGD, etc. 
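The LayerNorm study above finds that tuning only LayerNorm parameters can approach full fine-tuning on GLUE. The selection step itself is a few lines of PyTorch; the checkpoint name is illustrative.

```python
import torch.nn as nn
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

for param in model.parameters():
    param.requires_grad = False
for module in model.modules():
    if isinstance(module, nn.LayerNorm):        # unfreeze LayerNorm weight and bias
        for param in module.parameters():
            param.requires_grad = True
for name, param in model.named_parameters():
    if "classifier" in name:                    # the task head is typically trained too
        param.requires_grad = True
```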
(3) HiFT can save more than 60\\% GPU memory compared\nwith standard full-parameter fine-tuning for 7B model. (4) HiFT enables\nfull-parameter fine-tuning of a 7B model on single 48G A6000 with a precision\nof 32 using the AdamW optimizer, without using any memory saving techniques.", + "authors": "Yongkang Liu, Yiqun Zhang, Qian Li, Tong Liu, Shi Feng, Daling Wang, Yifei Zhang, Hinrich Sch\u00fctze", + "published": "2024-01-26", + "updated": "2024-02-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2304.05216v1", + "title": "Towards Efficient Fine-tuning of Pre-trained Code Models: An Experimental Study and Beyond", + "abstract": "Recently, fine-tuning pre-trained code models such as CodeBERT on downstream\ntasks has achieved great success in many software testing and analysis tasks.\nWhile effective and prevalent, fine-tuning the pre-trained parameters incurs a\nlarge computational cost. In this paper, we conduct an extensive experimental\nstudy to explore what happens to layer-wise pre-trained representations and\ntheir encoded code knowledge during fine-tuning. We then propose efficient\nalternatives to fine-tune the large pre-trained code model based on the above\nfindings. Our experimental study shows that (1) lexical, syntactic and\nstructural properties of source code are encoded in the lower, intermediate,\nand higher layers, respectively, while the semantic property spans across the\nentire model. (2) The process of fine-tuning preserves most of the code\nproperties. Specifically, the basic code properties captured by lower and\nintermediate layers are still preserved during fine-tuning. Furthermore, we\nfind that only the representations of the top two layers change most during\nfine-tuning for various downstream tasks. (3) Based on the above findings, we\npropose Telly to efficiently fine-tune pre-trained code models via layer\nfreezing. The extensive experimental results on five various downstream tasks\ndemonstrate that training parameters and the corresponding time cost are\ngreatly reduced, while performances are similar or better. Replication package\nincluding source code, datasets, and online Appendix is available at:\n\\url{https://github.com/DeepSoftwareAnalytics/Telly}.", + "authors": "Ensheng Shi, Yanlin Wang, Hongyu Zhang, Lun Du, Shi Han, Dongmei Zhang, Hongbin Sun", + "published": "2023-04-11", + "updated": "2023-04-11", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI", + "cs.CL" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2401.16137v1", + "title": "X-PEFT: eXtremely Parameter-Efficient Fine-Tuning for Extreme Multi-Profile Scenarios", + "abstract": "Parameter-efficient fine-tuning (PEFT) techniques, such as adapter tuning,\naim to fine-tune a pre-trained language model (PLM) using a minimal number of\nparameters for a specific task or profile. Although adapter tuning provides\nincreased parameter efficiency compared to full-model fine-tuning, it\nintroduces a small set of additional parameters attached to a PLM for each\nprofile. This can become problematic in practical applications with multiple\nprofiles, particularly when a significant increase in the number of profiles\nlinearly boosts the total number of additional parameters. 
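The study of pre-trained code models above (Telly) fine-tunes efficiently by freezing lower layers, since mostly the top layers' representations change during fine-tuning. A generic sketch that unfreezes only the top two encoder layers follows; the layer-name pattern assumed here matches BERT/RoBERTa-style models such as CodeBERT.

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("microsoft/codebert-base")   # illustrative checkpoint
num_layers = model.config.num_hidden_layers                    # 12 for this model
keep = {f"encoder.layer.{i}." for i in (num_layers - 1, num_layers - 2)}

for name, param in model.named_parameters():
    param.requires_grad = any(k in name for k in keep)          # tune top two layers only
```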
To mitigate this\nissue, we introduce X-PEFT, a novel PEFT method that leverages a multitude of\ngiven adapters by fine-tuning an extremely small set of compact tensors for a\nnew profile, which serve as binary masks to adaptively select the given\nadapters. To efficiently validate our proposed method, we implement it using a\nlarge number of trained or untrained (random) adapters. We evaluate the\nperformance of X-PEFT through LaMP and GLUE tasks and demonstrate that it\neither matches or surpasses the effectiveness of conventional adapter tuning,\ndespite reducing the memory requirements per profile by a factor of 10,000\ncompared to it.", + "authors": "Namju Kwak, Taesup Kim", + "published": "2024-01-29", + "updated": "2024-01-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2403.02271v1", + "title": "RIFF: Learning to Rephrase Inputs for Few-shot Fine-tuning of Language Models", + "abstract": "Pre-trained Language Models (PLMs) can be accurately fine-tuned for\ndownstream text processing tasks. Recently, researchers have introduced several\nparameter-efficient fine-tuning methods that optimize input prompts or adjust a\nsmall number of model parameters (e.g LoRA). In this study, we explore the\nimpact of altering the input text of the original task in conjunction with\nparameter-efficient fine-tuning methods. To most effectively rewrite the input\ntext, we train a few-shot paraphrase model with a Maximum-Marginal Likelihood\nobjective. Using six few-shot text classification datasets, we show that\nenriching data with paraphrases at train and test time enhances the performance\nbeyond what can be achieved with parameter-efficient fine-tuning alone.", + "authors": "Saeed Najafi, Alona Fyshe", + "published": "2024-03-04", + "updated": "2024-03-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2312.00700v1", + "title": "GIFT: Generative Interpretable Fine-Tuning Transformers", + "abstract": "We present GIFT (Generative Interpretable Fine-tuning Transformers) for\nfine-tuning pretrained (often large) Transformer models at downstream tasks in\na parameter-efficient way with built-in interpretability. Our GIFT is a deep\nparameter-residual learning method, which addresses two problems in fine-tuning\na pretrained Transformer model: Where to apply the parameter-efficient\nfine-tuning (PEFT) to be extremely lightweight yet sufficiently expressive, and\nHow to learn the PEFT to better exploit the knowledge of the pretrained model\nin a direct way? For the former, we select the final projection (linear) layer\nin the multi-head self-attention of a Transformer model, and verify its\neffectiveness. For the latter, in contrast to the prior art that directly\nintroduce new model parameters (often in low-rank approximation form) to be\nlearned in fine-tuning with downstream data, we propose a method for learning\nto generate the fine-tuning parameters. Our GIFT is a hyper-Transformer which\ntake as input the pretrained parameters of the projection layer to generate its\nfine-tuning parameters using a proposed Parameter-to-Cluster Attention (PaCa).\nThe PaCa results in a simple clustering-based forward explainer that plays the\nrole of semantic segmentation in testing. 
In experiments, our proposed GIFT is\ntested on the VTAB benchmark and the fine-grained visual classification (FGVC)\nbenchmark. It obtains significantly better performance than the prior art. Our\ncode is available at https://github.com/savadikarc/gift", + "authors": "Chinmay Savadikar, Xi Song, Tianfu Wu", + "published": "2023-12-01", + "updated": "2023-12-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2305.15348v1", + "title": "READ: Recurrent Adaptation of Large Transformers", + "abstract": "Fine-tuning large-scale Transformers has led to the explosion of many AI\napplications across Natural Language Processing and Computer Vision tasks.\nHowever, fine-tuning all pre-trained model parameters becomes impractical as\nthe model size and number of tasks increase. Parameter-efficient transfer\nlearning (PETL) methods aim to address these challenges. While effective in\nreducing the number of trainable parameters, PETL methods still require\nsignificant energy and computational resources to fine-tune. In this paper, we\nintroduce \\textbf{RE}current \\textbf{AD}aption (READ) -- a lightweight and\nmemory-efficient fine-tuning method -- to overcome the limitations of the\ncurrent PETL approaches. Specifically, READ inserts a small RNN network\nalongside the backbone model so that the model does not have to back-propagate\nthrough the large backbone network. Through comprehensive empirical evaluation\nof the GLUE benchmark, we demonstrate READ can achieve a $56\\%$ reduction in\nthe training memory consumption and an $84\\%$ reduction in the GPU energy usage\nwhile retraining high model quality compared to full-tuning. Additionally, the\nmodel size of READ does not grow with the backbone model size, making it a\nhighly scalable solution for fine-tuning large Transformers.", + "authors": "Sid Wang, John Nguyen, Ke Li, Carole-Jean Wu", + "published": "2023-05-24", + "updated": "2023-05-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2210.17451v2", + "title": "AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning", + "abstract": "Standard fine-tuning of large pre-trained language models (PLMs) for\ndownstream tasks requires updating hundreds of millions to billions of\nparameters, and storing a large copy of the PLM weights for every task\nresulting in increased cost for storing, sharing and serving the models. To\naddress this, parameter-efficient fine-tuning (PEFT) techniques were introduced\nwhere small trainable components are injected in the PLM and updated during\nfine-tuning. We propose AdaMix as a general PEFT method that tunes a mixture of\nadaptation modules -- given the underlying PEFT method of choice -- introduced\nin each Transformer layer while keeping most of the PLM weights frozen. For\ninstance, AdaMix can leverage a mixture of adapters like Houlsby or a mixture\nof low rank decomposition matrices like LoRA to improve downstream task\nperformance over the corresponding PEFT methods for fully supervised and\nfew-shot NLU and NLG tasks. Further, we design AdaMix such that it matches the\nsame computational cost and the number of tunable parameters as the underlying\nPEFT method. 
By only tuning 0.1-0.2% of PLM parameters, we show that AdaMix\noutperforms SOTA parameter-efficient fine-tuning and full model fine-tuning for\nboth NLU and NLG tasks.", + "authors": "Yaqing Wang, Sahaj Agarwal, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, Jianfeng Gao", + "published": "2022-10-31", + "updated": "2022-11-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2405.04126v1", + "title": "Refining Joint Text and Source Code Embeddings for Retrieval Task with Parameter-Efficient Fine-Tuning", + "abstract": "The latest developments in Natural Language Processing (NLP) have\ndemonstrated remarkable progress in a code-text retrieval problem. As the\nTransformer-based models used in this task continue to increase in size, the\ncomputational costs and time required for end-to-end fine-tuning become\nsubstantial. This poses a significant challenge for adapting and utilizing\nthese models when computational resources are limited. Motivated by these\nconcerns, we propose a fine-tuning framework that leverages Parameter-Efficient\nFine-Tuning (PEFT) techniques. Moreover, we adopt contrastive learning\nobjectives to improve the quality of bimodal representations learned by\ntransformer models. Additionally, for PEFT methods we provide extensive\nbenchmarking, the lack of which has been highlighted as a crucial problem in\nthe literature. Based on the thorough experimentation with the CodeT5+ model\nconducted on two datasets, we demonstrate that the proposed fine-tuning\nframework has the potential to improve code-text retrieval performance by\ntuning only 0.4% parameters at most.", + "authors": "Karim Galliamov, Leila Khaertdinova, Karina Denisova", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SE" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2403.08433v1", + "title": "An Empirical Study of Parameter Efficient Fine-tuning on Vision-Language Pre-train Model", + "abstract": "Recent studies applied Parameter Efficient Fine-Tuning techniques (PEFTs) to\nefficiently narrow the performance gap between pre-training and downstream.\nThere are two important factors for various PEFTs, namely, the accessible data\nsize and fine-tunable parameter size. A natural expectation for PEFTs is that\nthe performance of various PEFTs is positively related to the data size and\nfine-tunable parameter size. However, according to the evaluation of five PEFTs\non two downstream vision-language (VL) tasks, we find that such an intuition\nholds only if the downstream data and task are not consistent with\npre-training. For downstream fine-tuning consistent with pre-training, data\nsize no longer affects the performance, while the influence of fine-tunable\nparameter size is not monotonous. 
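AdaMix above tunes a mixture of adaptation modules while matching the compute and tunable-parameter count of a single module. One common way to realize that is sketched below under explicit assumptions: random routing to one expert during training, and weight-space averaging of the experts at inference.

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureOfAdapters(nn.Module):
    """AdaMix-style sketch: several bottleneck adapters behind one layer."""
    def __init__(self, dim: int, bottleneck: int = 16, num_experts: int = 4):
        super().__init__()
        self.down = nn.ModuleList(nn.Linear(dim, bottleneck) for _ in range(num_experts))
        self.up = nn.ModuleList(nn.Linear(bottleneck, dim) for _ in range(num_experts))
        self.act = nn.ReLU()

    def forward(self, x):
        if self.training:
            i = random.randrange(len(self.down))      # stochastic routing: one expert
            return x + self.up[i](self.act(self.down[i](x)))
        # inference: average expert weights into a single adapter (assumed merging step)
        w_d = torch.stack([m.weight for m in self.down]).mean(0)
        b_d = torch.stack([m.bias for m in self.down]).mean(0)
        w_u = torch.stack([m.weight for m in self.up]).mean(0)
        b_u = torch.stack([m.bias for m in self.up]).mean(0)
        h = self.act(F.linear(x, w_d, b_d))
        return x + F.linear(h, w_u, b_u)
```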
We believe such an observation could guide\nthe choice of training strategy for various PEFTs.", + "authors": "Yuxin Tian, Mouxing Yang, Yunfan Li, Dayiheng Liu, Xingzhang Ren, Xi Peng, Jiancheng Lv", + "published": "2024-03-13", + "updated": "2024-03-13", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2210.08823v3", + "title": "Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning", + "abstract": "Existing fine-tuning methods either tune all parameters of the pre-trained\nmodel (full fine-tuning), which is not efficient, or only tune the last linear\nlayer (linear probing), which suffers a significant accuracy drop compared to\nthe full fine-tuning. In this paper, we propose a new parameter-efficient\nfine-tuning method termed as SSF, representing that researchers only need to\nScale and Shift the deep Features extracted by a pre-trained model to catch up\nwith the performance of full fine-tuning. In this way, SSF also surprisingly\noutperforms other parameter-efficient fine-tuning approaches even with a\nsmaller number of tunable parameters. Furthermore, different from some existing\nparameter-efficient fine-tuning methods (e.g., Adapter or VPT) that introduce\nthe extra parameters and computational cost in the training and inference\nstages, SSF only adds learnable parameters during the training stage, and these\nadditional parameters can be merged into the original pre-trained model weights\nvia re-parameterization in the inference phase. With the proposed SSF, our\nmodel obtains 2.46% (90.72% vs. 88.54%) and 11.48% (73.10% vs. 65.57%)\nperformance improvement on FGVC and VTAB-1k in terms of Top-1 accuracy compared\nto the full fine-tuning but only fine-tuning about 0.3M parameters. We also\nconduct amounts of experiments in various model families (CNNs, Transformers,\nand MLPs) and datasets. Results on 26 image classification datasets in total\nand 3 robustness & out-of-distribution datasets show the effectiveness of SSF.\nCode is available at https://github.com/dongzelian/SSF.", + "authors": "Dongze Lian, Daquan Zhou, Jiashi Feng, Xinchao Wang", + "published": "2022-10-17", + "updated": "2023-01-15", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2310.05393v2", + "title": "Hierarchical Side-Tuning for Vision Transformers", + "abstract": "Fine-tuning pre-trained Vision Transformers (ViT) has consistently\ndemonstrated promising performance in the realm of visual recognition. However,\nadapting large pre-trained models to various tasks poses a significant\nchallenge. This challenge arises from the need for each model to undergo an\nindependent and comprehensive fine-tuning process, leading to substantial\ncomputational and memory demands. While recent advancements in\nParameter-efficient Transfer Learning (PETL) have demonstrated their ability to\nachieve superior performance compared to full fine-tuning with a smaller subset\nof parameter updates, they tend to overlook dense prediction tasks such as\nobject detection and segmentation. In this paper, we introduce Hierarchical\nSide-Tuning (HST), a novel PETL approach that enables ViT transfer to various\ndownstream tasks effectively. 
Diverging from existing methods that exclusively\nfine-tune parameters within input spaces or certain modules connected to the\nbackbone, we tune a lightweight and hierarchical side network (HSN) that\nleverages intermediate activations extracted from the backbone and generates\nmulti-scale features to make predictions. To validate HST, we conducted\nextensive experiments encompassing diverse visual tasks, including\nclassification, object detection, instance segmentation, and semantic\nsegmentation. Notably, our method achieves state-of-the-art average Top-1\naccuracy of 76.0% on VTAB-1k, all while fine-tuning a mere 0.78M parameters.\nWhen applied to object detection tasks on COCO testdev benchmark, HST even\nsurpasses full fine-tuning and obtains better performance with 49.7 box AP and\n43.2 mask AP using Cascade Mask R-CNN.", + "authors": "Weifeng Lin, Ziheng Wu, Jiayu Chen, Wentao Yang, Mingxin Huang, Jun Huang, Lianwen Jin", + "published": "2023-10-09", + "updated": "2023-10-10", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2310.17491v2", + "title": "FedPEAT: Convergence of Federated Learning, Parameter-Efficient Fine Tuning, and Emulator Assisted Tuning for Artificial Intelligence Foundation Models with Mobile Edge Computing", + "abstract": "The emergence of foundation models, including language and vision models, has\nreshaped AI's landscape, offering capabilities across various applications.\nDeploying and fine-tuning these large models, like GPT-3 and BERT, presents\nchallenges, especially in the current foundation model era. We introduce\nEmulator-Assisted Tuning (EAT) combined with Parameter-Efficient Fine-Tuning\n(PEFT) to form Parameter-Efficient Emulator-Assisted Tuning (PEAT). Further, we\nexpand this into federated learning as Federated PEAT (FedPEAT). FedPEAT uses\nadapters, emulators, and PEFT for federated model tuning, enhancing model\nprivacy and memory efficiency. Adapters adjust pre-trained models, while\nemulators give a compact representation of original models, addressing both\nprivacy and efficiency. Adaptable to various neural networks, our approach also\nuses deep reinforcement learning for hyper-parameter optimization. We tested\nFedPEAT in a unique scenario with a server participating in collaborative\nfederated tuning, showcasing its potential in tackling foundation model\nchallenges.", + "authors": "Terence Jie Chua, Wenhan Yu, Jun Zhao, Kwok-Yan Lam", + "published": "2023-10-26", + "updated": "2024-02-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NI" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2307.08122v2", + "title": "Tangent Transformers for Composition, Privacy and Removal", + "abstract": "We introduce Tangent Attention Fine-Tuning (TAFT), a method for fine-tuning\nlinearized transformers obtained by computing a First-order Taylor Expansion\naround a pre-trained initialization. We show that the Jacobian-Vector Product\nresulting from linearization can be computed efficiently in a single forward\npass, reducing training and inference cost to the same order of magnitude as\nits original non-linear counterpart, while using the same number of parameters.\nFurthermore, we show that, when applied to various downstream visual\nclassification tasks, the resulting Tangent Transformer fine-tuned with TAFT\ncan perform comparably with fine-tuning the original non-linear network. 
Since\nTangent Transformers are linear with respect to the new set of weights, and the\nresulting fine-tuning loss is convex, we show that TAFT enjoys several\nadvantages compared to non-linear fine-tuning when it comes to model\ncomposition, parallel training, machine unlearning, and differential privacy.", + "authors": "Tian Yu Liu, Aditya Golatkar, Stefano Soatto", + "published": "2023-07-16", + "updated": "2023-07-20", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2210.16032v1", + "title": "Parameter-efficient transfer learning of pre-trained Transformer models for speaker verification using adapters", + "abstract": "Recently, the pre-trained Transformer models have received a rising interest\nin the field of speech processing thanks to their great success in various\ndownstream tasks. However, most fine-tuning approaches update all the\nparameters of the pre-trained model, which becomes prohibitive as the model\nsize grows and sometimes results in overfitting on small datasets. In this\npaper, we conduct a comprehensive analysis of applying parameter-efficient\ntransfer learning (PETL) methods to reduce the required learnable parameters\nfor adapting to speaker verification tasks. Specifically, during the\nfine-tuning process, the pre-trained models are frozen, and only lightweight\nmodules inserted in each Transformer block are trainable (a method known as\nadapters). Moreover, to boost the performance in a cross-language low-resource\nscenario, the Transformer model is further tuned on a large intermediate\ndataset before directly fine-tuning it on a small dataset. With updating fewer\nthan 4% of parameters, (our proposed) PETL-based methods achieve comparable\nperformances with full fine-tuning methods (Vox1-O: 0.55%, Vox1-E: 0.82%,\nVox1-H:1.73%).", + "authors": "Junyi Peng, Themos Stafylakis, Rongzhi Gu, Old\u0159ich Plchot, Ladislav Mo\u0161ner, Luk\u00e1\u0161 Burget, Jan \u010cernock\u00fd", + "published": "2022-10-28", + "updated": "2022-10-28", + "primary_cat": "eess.AS", + "cats": [ + "eess.AS", + "cs.SD", + "eess.SP" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2302.06600v2", + "title": "Task-Specific Skill Localization in Fine-tuned Language Models", + "abstract": "Pre-trained language models can be fine-tuned to solve diverse NLP tasks,\nincluding in few-shot settings. Thus fine-tuning allows the model to quickly\npick up task-specific ``skills,'' but there has been limited study of where\nthese newly-learnt skills reside inside the massive model. This paper\nintroduces the term skill localization for this problem and proposes a\nsolution. Given the downstream task and a model fine-tuned on that task, a\nsimple optimization is used to identify a very small subset of parameters\n($\\sim0.01$% of model parameters) responsible for ($>95$%) of the model's\nperformance, in the sense that grafting the fine-tuned values for just this\ntiny subset onto the pre-trained model gives performance almost as well as the\nfine-tuned model. While reminiscent of recent works on parameter-efficient\nfine-tuning, the novel aspects here are that: (i) No further re-training is\nneeded on the subset (unlike, say, with lottery tickets). 
(ii) Notable\nimprovements are seen over vanilla fine-tuning with respect to calibration of\npredictions in-distribution ($40$-$90$% error reduction) as well as the quality\nof predictions out-of-distribution (OOD). In models trained on multiple tasks,\na stronger notion of skill localization is observed, where the sparse regions\ncorresponding to different tasks are almost disjoint, and their overlap (when\nit happens) is a proxy for task similarity. Experiments suggest that\nlocalization via grafting can assist certain forms of continual learning.", + "authors": "Abhishek Panigrahi, Nikunj Saunshi, Haoyu Zhao, Sanjeev Arora", + "published": "2023-02-13", + "updated": "2023-07-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2301.01821v1", + "title": "Parameter-Efficient Fine-Tuning Design Spaces", + "abstract": "Parameter-efficient fine-tuning aims to achieve performance comparable to\nfine-tuning, using fewer trainable parameters. Several strategies (e.g.,\nAdapters, prefix tuning, BitFit, and LoRA) have been proposed. However, their\ndesigns are hand-crafted separately, and it remains unclear whether certain\ndesign patterns exist for parameter-efficient fine-tuning. Thus, we present a\nparameter-efficient fine-tuning design paradigm and discover design patterns\nthat are applicable to different experimental settings. Instead of focusing on\ndesigning another individual tuning strategy, we introduce parameter-efficient\nfine-tuning design spaces that parameterize tuning structures and tuning\nstrategies. Specifically, any design space is characterized by four components:\nlayer grouping, trainable parameter allocation, tunable groups, and strategy\nassignment. Starting from an initial design space, we progressively refine the\nspace based on the model quality of each design choice and make greedy\nselection at each stage over these four components. We discover the following\ndesign patterns: (i) group layers in a spindle pattern; (ii) allocate the\nnumber of trainable parameters to layers uniformly; (iii) tune all the groups;\n(iv) assign proper tuning strategies to different groups. These design patterns\nresult in new parameter-efficient fine-tuning methods. We show experimentally\nthat these methods consistently and significantly outperform investigated\nparameter-efficient fine-tuning strategies across different backbone models and\ndifferent tasks in natural language processing.", + "authors": "Jiaao Chen, Aston Zhang, Xingjian Shi, Mu Li, Alex Smola, Diyi Yang", + "published": "2023-01-04", + "updated": "2023-01-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2402.11417v1", + "title": "LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models", + "abstract": "Various parameter-efficient fine-tuning (PEFT) techniques have been proposed\nto enable computationally efficient fine-tuning while maintaining model\nperformance. However, existing PEFT methods are still limited by the growing\nnumber of trainable parameters with the rapid deployment of Large Language\nModels (LLMs). To address this challenge, we present LoRETTA, an\nultra-parameter-efficient framework that significantly reduces trainable\nparameters through tensor-train decomposition. 
Specifically, we propose two\nmethods, named {LoRETTA}$_{adp}$ and {LoRETTA}$_{rep}$. The former employs\ntensorized adapters, offering a high-performance yet lightweight approach for\nthe fine-tuning of LLMs. The latter emphasizes fine-tuning via weight\nparameterization with a set of small tensor factors. LoRETTA achieves\ncomparable or better performance than most widely used PEFT methods with up to\n$100\\times$ fewer parameters on the LLaMA-2-7B models. Furthermore, empirical\nresults demonstrate that the proposed method effectively improves training\nefficiency, enjoys better multi-task learning performance, and enhances the\nanti-overfitting capability. Plug-and-play codes built upon the Huggingface\nframework and PEFT library will be released.", + "authors": "Yifan Yang, Jiajun Zhou, Ngai Wong, Zheng Zhang", + "published": "2024-02-18", + "updated": "2024-02-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2307.13770v1", + "title": "E^2VPT: An Effective and Efficient Approach for Visual Prompt Tuning", + "abstract": "As the size of transformer-based models continues to grow, fine-tuning these\nlarge-scale pretrained vision models for new tasks has become increasingly\nparameter-intensive. Parameter-efficient learning has been developed to reduce\nthe number of tunable parameters during fine-tuning. Although these methods\nshow promising results, there is still a significant performance gap compared\nto full fine-tuning. To address this challenge, we propose an Effective and\nEfficient Visual Prompt Tuning (E^2VPT) approach for large-scale\ntransformer-based model adaptation. Specifically, we introduce a set of\nlearnable key-value prompts and visual prompts into self-attention and input\nlayers, respectively, to improve the effectiveness of model fine-tuning.\nMoreover, we design a prompt pruning procedure to systematically prune low\nimportance prompts while preserving model performance, which largely enhances\nthe model's efficiency. Empirical results demonstrate that our approach\noutperforms several state-of-the-art baselines on two benchmarks, with\nconsiderably low parameter usage (e.g., 0.32% of model parameters on VTAB-1k).\nOur code is available at https://github.com/ChengHan111/E2VPT.", + "authors": "Cheng Han, Qifan Wang, Yiming Cui, Zhiwen Cao, Wenguan Wang, Siyuan Qi, Dongfang Liu", + "published": "2023-07-25", + "updated": "2023-07-25", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2310.03123v1", + "title": "Efficient Federated Prompt Tuning for Black-box Large Pre-trained Models", + "abstract": "With the blowout development of pre-trained models (PTMs), the efficient\ntuning of these models for diverse downstream applications has emerged as a\npivotal research concern. Although recent investigations into prompt tuning\nhave provided promising avenues, three salient challenges persist: (1) memory\nconstraint: the continuous growth in the size of open-source PTMs renders\nfine-tuning, even a fraction of their parameters, challenging for many\npractitioners. (2) model privacy: existing PTMs often function as public API\nservices, with their parameters inaccessible for effective or tailored\nfine-tuning. 
(3) data privacy: the fine-tuning of PTMs necessitates\nhigh-quality datasets, which are typically localized and not shared to public.\nTo optimally harness each local dataset while navigating memory constraints and\npreserving privacy, we propose Federated Black-Box Prompt Tuning (Fed-BBPT).\nThis innovative approach eschews reliance on parameter architectures and\nprivate dataset access, instead capitalizing on a central server that aids\nlocal users in collaboratively training a prompt generator through regular\naggregation. Local users leverage API-driven learning via a zero-order\noptimizer, obviating the need for PTM deployment. Relative to extensive\nfine-tuning, Fed-BBPT proficiently sidesteps memory challenges tied to PTM\nstorage and fine-tuning on local machines, tapping into comprehensive,\nhigh-quality, yet private training datasets. A thorough evaluation across 40\ndatasets spanning CV and NLP tasks underscores the robustness of our proposed\nmodel.", + "authors": "Zihao Lin, Yan Sun, Yifan Shi, Xueqian Wang, Lifu Huang, Li Shen, Dacheng Tao", + "published": "2023-10-04", + "updated": "2023-10-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2310.17041v1", + "title": "On Surgical Fine-tuning for Language Encoders", + "abstract": "Fine-tuning all the layers of a pre-trained neural language encoder (either\nusing all the parameters or using parameter-efficient methods) is often the\nde-facto way of adapting it to a new task. We show evidence that for different\ndownstream language tasks, fine-tuning only a subset of layers is sufficient to\nobtain performance that is close to and often better than fine-tuning all the\nlayers in the language encoder. We propose an efficient metric based on the\ndiagonal of the Fisher information matrix (FIM score), to select the candidate\nlayers for selective fine-tuning. We show, empirically on GLUE and SuperGLUE\ntasks and across distinct language encoders, that this metric can effectively\nselect layers leading to a strong downstream performance. Our work highlights\nthat task-specific information corresponding to a given downstream task is\noften localized within a few layers, and tuning only those is sufficient for\nstrong performance. Additionally, we demonstrate the robustness of the FIM\nscore to rank layers in a manner that remains constant during the optimization\nprocess.", + "authors": "Abhilasha Lodha, Gayatri Belapurkar, Saloni Chalkapurkar, Yuanming Tao, Reshmi Ghosh, Samyadeep Basu, Dmitrii Petrov, Soundararajan Srinivasan", + "published": "2023-10-25", + "updated": "2023-10-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.IR" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2401.06432v2", + "title": "Heterogeneous LoRA for Federated Fine-tuning of On-Device Foundation Models", + "abstract": "Foundation models (FMs) adapt well to specific domains or tasks with\nfine-tuning, and federated learning (FL) enables the potential for\nprivacy-preserving fine-tuning of the FMs with on-device local data. For\nfederated fine-tuning of FMs, we consider the FMs with small to medium\nparameter sizes of single digit billion at maximum, referred to as on-device\nFMs (ODFMs) that can be deployed on devices for inference but can only be\nfine-tuned with parameter efficient methods. 
In our work, we tackle the data\nand system heterogeneity problem of federated fine-tuning of ODFMs by proposing\na novel method using heterogeneous low-rank approximations (LoRAs), namely\nHetLoRA. First, we show that the naive approach of using homogeneous LoRA ranks\nacross devices face a trade-off between overfitting and slow convergence, and\nthus propose HetLoRA, which allows heterogeneous ranks across client devices\nand efficiently aggregates and distributes these heterogeneous LoRA modules. By\napplying rank self-pruning locally and sparsity-weighted aggregation at the\nserver, HetLoRA combines the advantages of high and low-rank LoRAs, which\nachieves improved convergence speed and final performance compared to\nhomogeneous LoRA. Furthermore, HetLoRA offers enhanced computation efficiency\ncompared to full fine-tuning, making it suitable for federated fine-tuning\nacross heterogeneous devices.", + "authors": "Yae Jee Cho, Luyang Liu, Zheng Xu, Aldi Fahrezi, Gauri Joshi", + "published": "2024-01-12", + "updated": "2024-02-20", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.DC" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2403.08484v1", + "title": "Data-oriented Dynamic Fine-tuning Parameter Selection Strategy for FISH Mask based Efficient Fine-tuning", + "abstract": "In view of the huge number of parameters of Large language models (LLMs) ,\ntuning all parameters is very costly, and accordingly fine-tuning specific\nparameters is more sensible. Most of parameter efficient fine-tuning (PEFT)\nconcentrate on parameter selection strategies, such as additive method,\nselective method and reparametrization-based method. However, there are few\nmethods that consider the impact of data samples on parameter selecting, such\nas Fish Mask based method. Fish Mask randomly choose a part of data samples and\ntreat them equally during parameter selection, which is unable to dynamically\nselect optimal parameters for inconstant data distributions. In this work, we\nadopt a data-oriented perspective, then proposing an IRD ($\\mathrm{\\underline\nI}$terative sample-parameter $\\mathrm{\\underline R}$ange $\\mathrm{\\underline\nD}$ecreasing) algorithm to search the best setting of sample-parameter pair for\nFISH Mask. In each iteration, by searching the set of samples and parameters\nwith larger Fish information, IRD can find better sample-parameter pair in most\nscale. We demonstrate the effectiveness and rationality of proposed strategy by\nconducting experiments on GLUE benchmark. Experimental results show our\nstrategy optimizes the parameter selection and achieves preferable performance.", + "authors": "Ming Dong, Kang Xue, Bolong Zheng, Tingting He", + "published": "2024-03-13", + "updated": "2024-03-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2205.12410v2", + "title": "AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning", + "abstract": "Standard fine-tuning of large pre-trained language models (PLMs) for\ndownstream tasks requires updating hundreds of millions to billions of\nparameters, and storing a large copy of the PLM weights for every task\nresulting in increased cost for storing, sharing and serving the models. To\naddress this, parameter-efficient fine-tuning (PEFT) techniques were introduced\nwhere small trainable components are injected in the PLM and updated during\nfine-tuning. 
We propose AdaMix as a general PEFT method that tunes a mixture of\nadaptation modules -- given the underlying PEFT method of choice -- introduced\nin each Transformer layer while keeping most of the PLM weights frozen. For\ninstance, AdaMix can leverage a mixture of adapters like Houlsby or a mixture\nof low rank decomposition matrices like LoRA to improve downstream task\nperformance over the corresponding PEFT methods for fully supervised and\nfew-shot NLU and NLG tasks. Further, we design AdaMix such that it matches the\nsame computational cost and the number of tunable parameters as the underlying\nPEFT method. By only tuning 0.1-0.2% of PLM parameters, we show that AdaMix\noutperforms SOTA parameter-efficient fine-tuning and full model fine-tuning for\nboth NLU and NLG tasks.", + "authors": "Yaqing Wang, Sahaj Agarwal, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, Jianfeng Gao", + "published": "2022-05-24", + "updated": "2022-11-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2312.11875v2", + "title": "Sparse is Enough in Fine-tuning Pre-trained Large Language Models", + "abstract": "With the prevalence of pre-training-fine-tuning paradigm, how to efficiently\nadapt the pre-trained model to the downstream tasks has been an intriguing\nissue. Parameter-Efficient Fine-Tuning (PEFT) methods have been proposed for\nlow-cost adaptation. Although PEFT has demonstrated effectiveness and been\nwidely applied, the underlying principles are still unclear. In this paper, we\nadopt the PAC-Bayesian generalization error bound, viewing pre-training as a\nshift of prior distribution which leads to a tighter bound for generalization\nerror. We validate this shift from the perspectives of oscillations in the loss\nlandscape and the quasi-sparsity in gradient distribution. Based on this, we\npropose a gradient-based sparse fine-tuning algorithm, named Sparse Increment\nFine-Tuning (SIFT), and validate its effectiveness on a range of tasks\nincluding the GLUE Benchmark and Instruction-tuning. The code is accessible at\nhttps://github.com/song-wx/SIFT/.", + "authors": "Weixi Song, Zuchao Li, Lefei Zhang, Hai Zhao, Bo Du", + "published": "2023-12-19", + "updated": "2024-05-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2402.15179v2", + "title": "Advancing Parameter Efficiency in Fine-tuning via Representation Editing", + "abstract": "Parameter Efficient Fine-Tuning (PEFT) has gained significant attention for\nits ability to achieve competitive results while updating only a small subset\nof trainable parameters. Despite the promising performance of current PEFT\nmethods, they present challenges in hyperparameter selection, such as\ndetermining the rank of LoRA or Adapter, or specifying the length of soft\nprompts. In addressing these challenges, we propose a novel approach to\nfine-tuning neural models, termed Representation EDiting (RED), which scales\nand biases the representation produced at each layer. RED substantially reduces\nthe number of trainable parameters by a factor of $25,700$ compared to full\nparameter fine-tuning, and by a factor of $32$ compared to LoRA. Remarkably,\nRED achieves comparable or superior results to full parameter fine-tuning and\nother PEFT methods. 
Extensive experiments were conducted across models of\nvarying architectures and scales, including RoBERTa, GPT-2, T5, and Llama-2,\nand the results demonstrate the efficiency and efficacy of RED, positioning it\nas a promising PEFT approach for large neural models.", + "authors": "Muling Wu, Wenhao Liu, Xiaohua Wang, Tianlong Li, Changze Lv, Zixuan Ling, Jianhao Zhu, Cenyuan Zhang, Xiaoqing Zheng, Xuanjing Huang", + "published": "2024-02-23", + "updated": "2024-02-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2312.08900v1", + "title": "Context-PEFT: Efficient Multi-Modal, Multi-Task Fine-Tuning", + "abstract": "This paper introduces a novel Parameter-Efficient Fine-Tuning (PEFT)\nframework for multi-modal, multi-task transfer learning with pre-trained\nlanguage models. PEFT techniques such as LoRA, BitFit and IA3 have demonstrated\ncomparable performance to full fine-tuning of pre-trained models for specific\ndownstream tasks, all while demanding significantly fewer trainable parameters\nand reduced GPU memory consumption. However, in the context of multi-modal\nfine-tuning, the need for architectural modifications or full fine-tuning often\nbecomes apparent. To address this we propose Context-PEFT, which learns\ndifferent groups of adaptor parameters based on the token's domain. This\napproach enables LoRA-like weight injection without requiring additional\narchitectural changes. Our method is evaluated on the COCO captioning task,\nwhere it outperforms full fine-tuning under similar data constraints while\nsimultaneously offering a substantially more parameter-efficient and\ncomputationally economical solution.", + "authors": "Avelina Asada Hadji-Kyriacou, Ognjen Arandjelovic", + "published": "2023-12-14", + "updated": "2023-12-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2303.15647v1", + "title": "Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning", + "abstract": "This paper presents a systematic overview and comparison of\nparameter-efficient fine-tuning methods covering over 40 papers published\nbetween February 2019 and February 2023. These methods aim to resolve the\ninfeasibility and impracticality of fine-tuning large language models by only\ntraining a small set of parameters. We provide a taxonomy that covers a broad\nrange of methods and present a detailed method comparison with a specific focus\non real-life efficiency and fine-tuning multibillion-scale language models.", + "authors": "Vladislav Lialin, Vijeta Deshpande, Anna Rumshisky", + "published": "2023-03-28", + "updated": "2023-03-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2310.18602v1", + "title": "Device-Edge Cooperative Fine-Tuning of Foundation Models as a 6G Service", + "abstract": "Foundation models (FoMos), referring to large-scale AI models, possess\nhuman-like capabilities and are able to perform competitively in the domain of\nhuman intelligence. The breakthrough in FoMos has inspired researchers to\ndeploy such models in the sixth-generation (6G) mobile networks for automating\na broad range of tasks in next-generation mobile applications. 
While the sizes\nof FoMos are reaching their peaks, their next phase is expected to focus on\nfine-tuning the models to specific downstream tasks. This inspires us to\npropose the vision of FoMo fine-tuning as a 6G service. Its key feature is the\nexploitation of existing parameter-efficient fine-tuning (PEFT) techniques to\ntweak only a small fraction of model weights for a FoMo to become customized\nfor a specific task. To materialize the said vision, we survey the\nstate-of-the-art PEFT and then present a novel device-edge fine-tuning (DEFT)\nframework for providing efficient and privacy-preserving fine-tuning services\nat the 6G network edge. The framework consists of the following comprehensive\nset of techniques: 1) Control of fine-tuning parameter sizes in different\ntransformer blocks of a FoMo; 2) Over-the-air computation for realizing neural\nconnections in DEFT; 3) Federated DEFT in a multi-device system by downloading\na FoMo emulator or gradients; 4) On-the-fly prompt-ensemble tuning; 5)\nDevice-to-device prompt transfer among devices. Experiments are conducted using\npre-trained FoMos with up to 11 billion parameters to demonstrate the\neffectiveness of DEFT techniques. The article is concluded by presenting future\nresearch opportunities.", + "authors": "Hai Wu, Xu Chen, Kaibin Huang", + "published": "2023-10-28", + "updated": "2023-10-28", + "primary_cat": "cs.NI", + "cats": [ + "cs.NI", + "cs.IT", + "math.IT" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2208.02070v1", + "title": "Efficient Fine-Tuning of Compressed Language Models with Learners", + "abstract": "Fine-tuning BERT-based models is resource-intensive in memory, computation,\nand time. While many prior works aim to improve inference efficiency via\ncompression techniques, e.g., pruning, these works do not explicitly address\nthe computational challenges of training to downstream tasks. We introduce\nLearner modules and priming, novel methods for fine-tuning that exploit the\noverparameterization of pre-trained language models to gain benefits in\nconvergence speed and resource utilization. Learner modules navigate the double\nbind of 1) training efficiently by fine-tuning a subset of parameters, and 2)\ntraining effectively by ensuring quick convergence and high metric scores. Our\nresults on DistilBERT demonstrate that learners perform on par with or surpass\nthe baselines. Learners train 7x fewer parameters than state-of-the-art methods\non GLUE. On CoLA, learners fine-tune 20% faster, and have significantly lower\nresource utilization.", + "authors": "Danilo Vucetic, Mohammadreza Tayaranian, Maryam Ziaeefard, James J. Clark, Brett H. Meyer, Warren J. Gross", + "published": "2022-08-03", + "updated": "2022-08-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2308.03303v1", + "title": "LoRA-FA: Memory-efficient Low-rank Adaptation for Large Language Models Fine-tuning", + "abstract": "The low-rank adaptation (LoRA) method can largely reduce the amount of\ntrainable parameters for fine-tuning large language models (LLMs), however, it\nstill requires expensive activation memory to update low-rank weights. Reducing\nthe number of LoRA layers or using activation recomputation could harm the\nfine-tuning performance or increase the computational overhead. 
In this work,\nwe present LoRA-FA, a memory-efficient fine-tuning method that reduces the\nactivation memory without performance degradation and expensive recomputation.\nLoRA-FA chooses to freeze the projection-down weight of $A$ and update the\nprojection-up weight of $B$ in each LoRA layer. It ensures the change of model\nweight reside in a low-rank space during LLMs fine-tuning, while eliminating\nthe requirement to store full-rank input activations. We conduct extensive\nexperiments across multiple model types (RoBERTa, T5, LLaMA) and model scales.\nOur results show that LoRA-FA can always achieve close fine-tuning accuracy\nacross different tasks compared to full parameter fine-tuning and LoRA.\nFurthermore, LoRA-FA can reduce the overall memory cost by up to 1.4$\\times$\ncompared to LoRA.", + "authors": "Longteng Zhang, Lin Zhang, Shaohuai Shi, Xiaowen Chu, Bo Li", + "published": "2023-08-07", + "updated": "2023-08-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2004.14129v1", + "title": "How fine can fine-tuning be? Learning efficient language models", + "abstract": "State-of-the-art performance on language understanding tasks is now achieved\nwith increasingly large networks; the current record holder has billions of\nparameters. Given a language model pre-trained on massive unlabeled text\ncorpora, only very light supervised fine-tuning is needed to learn a task: the\nnumber of fine-tuning steps is typically five orders of magnitude lower than\nthe total parameter count. Does this mean that fine-tuning only introduces\nsmall differences from the pre-trained model in the parameter space? If so, can\none avoid storing and computing an entire model for each task? In this work, we\naddress these questions by using Bidirectional Encoder Representations from\nTransformers (BERT) as an example. As expected, we find that the fine-tuned\nmodels are close in parameter space to the pre-trained one, with the closeness\nvarying from layer to layer. We show that it suffices to fine-tune only the\nmost critical layers. Further, we find that there are surprisingly many good\nsolutions in the set of sparsified versions of the pre-trained model. As a\nresult, fine-tuning of huge language models can be achieved by simply setting a\ncertain number of entries in certain layers of the pre-trained parameters to\nzero, saving both task-specific parameter storage and computational cost.", + "authors": "Evani Radiya-Dixit, Xin Wang", + "published": "2020-04-24", + "updated": "2020-04-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG", + "stat.ML" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2312.03694v3", + "title": "Parameter-Efficient Transfer Learning of Audio Spectrogram Transformers", + "abstract": "The common modus operandi of fine-tuning large pre-trained Transformer models\nentails the adaptation of all their parameters (i.e., full fine-tuning). While\nachieving striking results on multiple tasks, this approach becomes unfeasible\nas the model size and the number of downstream tasks increase. In natural\nlanguage processing and computer vision, parameter-efficient approaches like\nprompt-tuning and adapters have emerged as solid alternatives by fine-tuning\nonly a small number of extra parameters, without sacrificing performance\naccuracy. 
For audio classification tasks, the Audio Spectrogram Transformer\nmodel shows impressive results. However, surprisingly, how to efficiently adapt\nit to several downstream tasks has not been tackled before. In this paper, we\nbridge this gap and present a detailed investigation of common\nparameter-efficient methods, revealing that adapters and LoRA consistently\noutperform the other methods across four benchmarks. Whereas adapters prove to\nbe more efficient in few-shot learning settings, LoRA turns out to scale better\nas we increase the number of learnable parameters. We finally carry out\nablation studies to find the best configuration for adapters and LoRA.", + "authors": "Umberto Cappellazzo, Daniele Falavigna, Alessio Brutti, Mirco Ravanelli", + "published": "2023-12-06", + "updated": "2024-01-11", + "primary_cat": "eess.AS", + "cats": [ + "eess.AS" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2305.07491v1", + "title": "A Comprehensive Analysis of Adapter Efficiency", + "abstract": "Adapters have been positioned as a parameter-efficient fine-tuning (PEFT)\napproach, whereby a minimal number of parameters are added to the model and\nfine-tuned. However, adapters have not been sufficiently analyzed to understand\nif PEFT translates to benefits in training/deployment efficiency and\nmaintainability/extensibility. Through extensive experiments on many adapters,\ntasks, and languages in supervised and cross-lingual zero-shot settings, we\nclearly show that for Natural Language Understanding (NLU) tasks, the parameter\nefficiency in adapters does not translate to efficiency gains compared to full\nfine-tuning of models. More precisely, adapters are relatively expensive to\ntrain and have slightly higher deployment latency. Furthermore, the\nmaintainability/extensibility benefits of adapters can be achieved with simpler\napproaches like multi-task training via full fine-tuning, which also provide\nrelatively faster training times. We, therefore, recommend that for moderately\nsized models for NLU tasks, practitioners should rely on full fine-tuning or\nmulti-task training rather than using adapters. Our code is available at\nhttps://github.com/AI4Bharat/adapter-efficiency.", + "authors": "Nandini Mundra, Sumanth Doddapaneni, Raj Dabre, Anoop Kunchukuttan, Ratish Puduppully, Mitesh M. Khapra", + "published": "2023-05-12", + "updated": "2023-05-12", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2305.15212v1", + "title": "Towards Adaptive Prefix Tuning for Parameter-Efficient Language Model Fine-tuning", + "abstract": "Fine-tuning large pre-trained language models on various downstream tasks\nwith whole parameters is prohibitively expensive. Hence, Parameter-efficient\nfine-tuning has attracted attention that only optimizes a few task-specific\nparameters with the frozen pre-trained model. In this work, we focus on prefix\ntuning, which only optimizes continuous prefix vectors (i.e. pseudo tokens)\ninserted into Transformer layers. Based on the observation that the learned\nsyntax and semantics representation varies a lot at different layers, we argue\nthat the adaptive prefix will be further tailored to each layer than the fixed\none, enabling the fine-tuning more effective and efficient. 
Thus, we propose\nAdaptive Prefix Tuning (APT) to adjust the prefix in terms of both fine-grained\ntoken level and coarse-grained layer level with a gate mechanism. Experiments\non the SuperGLUE and NER datasets show the effectiveness of APT. In addition,\ntaking the gate as a probing, we validate the efficiency and effectiveness of\nthe variable prefix.", + "authors": "Zhen-Ru Zhang, Chuanqi Tan, Haiyang Xu, Chengyu Wang, Jun Huang, Songfang Huang", + "published": "2023-05-24", + "updated": "2023-05-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2401.04051v1", + "title": "Empirical Analysis of Efficient Fine-Tuning Methods for Large Pre-Trained Language Models", + "abstract": "Fine-tuning large pre-trained language models for downstream tasks remains a\ncritical challenge in natural language processing. This paper presents an\nempirical analysis comparing two efficient fine-tuning methods - BitFit and\nadapter modules - to standard full model fine-tuning. Experiments conducted on\nGLUE benchmark datasets (MRPC, COLA, STS-B) reveal several key insights. The\nBitFit approach, which trains only bias terms and task heads, matches full\nfine-tuning performance across varying amounts of training data and time\nconstraints. It demonstrates remarkable stability even with only 30\\% of data,\noutperforming full fine-tuning at intermediate data levels. Adapter modules\nexhibit high variability, with inconsistent gains over default models. The\nfindings indicate BitFit offers an attractive balance between performance and\nparameter efficiency. Our work provides valuable perspectives on model tuning,\nemphasizing robustness and highlighting BitFit as a promising alternative for\nresource-constrained or streaming task settings. The analysis offers actionable\nguidelines for efficient adaptation of large pre-trained models, while\nillustrating open challenges in stabilizing techniques like adapter modules.", + "authors": "Nigel Doering, Cyril Gorlla, Trevor Tuttle, Adhvaith Vijay", + "published": "2024-01-08", + "updated": "2024-01-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2303.15822v1", + "title": "One Adapter for All Programming Languages? Adapter Tuning for Code Search and Summarization", + "abstract": "As pre-trained models automate many code intelligence tasks, a widely used\nparadigm is to fine-tune a model on the task dataset for each programming\nlanguage. A recent study reported that multilingual fine-tuning benefits a\nrange of tasks and models. However, we find that multilingual fine-tuning leads\nto performance degradation on recent models UniXcoder and CodeT5.\n To alleviate the potentially catastrophic forgetting issue in multilingual\nmodels, we fix all pre-trained model parameters, insert the parameter-efficient\nstructure adapter, and fine-tune it. Updating only 0.6\\% of the overall\nparameters compared to full-model fine-tuning for each programming language,\nadapter tuning yields consistent improvements on code search and summarization\ntasks, achieving state-of-the-art results. In addition, we experimentally show\nits effectiveness in cross-lingual and low-resource scenarios. Multilingual\nfine-tuning with 200 samples per programming language approaches the results\nfine-tuned with the entire dataset on code summarization. 
Our experiments on\nthree probing tasks show that adapter tuning significantly outperforms\nfull-model fine-tuning and effectively overcomes catastrophic forgetting.", + "authors": "Deze Wang, Boxing Chen, Shanshan Li, Wei Luo, Shaoliang Peng, Wei Dong, Xiangke Liao", + "published": "2023-03-28", + "updated": "2023-03-28", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2405.00732v1", + "title": "LoRA Land: 310 Fine-tuned LLMs that Rival GPT-4, A Technical Report", + "abstract": "Low Rank Adaptation (LoRA) has emerged as one of the most widely adopted\nmethods for Parameter Efficient Fine-Tuning (PEFT) of Large Language Models\n(LLMs). LoRA reduces the number of trainable parameters and memory usage while\nachieving comparable performance to full fine-tuning. We aim to assess the\nviability of training and serving LLMs fine-tuned with LoRA in real-world\napplications. First, we measure the quality of LLMs fine-tuned with quantized\nlow rank adapters across 10 base models and 31 tasks for a total of 310 models.\nWe find that 4-bit LoRA fine-tuned models outperform base models by 34 points\nand GPT-4 by 10 points on average. Second, we investigate the most effective\nbase models for fine-tuning and assess the correlative and predictive\ncapacities of task complexity heuristics in forecasting the outcomes of\nfine-tuning. Finally, we evaluate the latency and concurrency capabilities of\nLoRAX, an open-source Multi-LoRA inference server that facilitates the\ndeployment of multiple LoRA fine-tuned models on a single GPU using shared base\nmodel weights and dynamic adapter loading. LoRAX powers LoRA Land, a web\napplication that hosts 25 LoRA fine-tuned Mistral-7B LLMs on a single NVIDIA\nA100 GPU with 80GB memory. LoRA Land highlights the quality and\ncost-effectiveness of employing multiple specialized LLMs over a single,\ngeneral-purpose LLM.", + "authors": "Justin Zhao, Timothy Wang, Wael Abid, Geoffrey Angus, Arnav Garg, Jeffery Kinnison, Alex Sherstinsky, Piero Molino, Travis Addair, Devvret Rishi", + "published": "2024-04-29", + "updated": "2024-04-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2404.04316v1", + "title": "Parameter Efficient Quasi-Orthogonal Fine-Tuning via Givens Rotation", + "abstract": "With the increasingly powerful performances and enormous scales of Pretrained\nLanguage Models (PLMs), promoting parameter efficiency in fine-tuning has\nbecome a crucial need for effective and efficient adaptation to various\ndownstream tasks. One representative line of fine-tuning methods is Orthogonal\nFine-tuning (OFT), which rigorously preserves the angular distances within the\nparameter space to preserve the pretrained knowledge. Despite the empirical\neffectiveness, OFT still suffers low parameter efficiency at $\\mathcal{O}(d^2)$\nand limited capability of downstream adaptation. Inspired by Givens rotation,\nin this paper, we proposed quasi-Givens Orthogonal Fine-Tuning (qGOFT) to\naddress the problems. We first use $\\mathcal{O}(d)$ Givens rotations to\naccomplish arbitrary orthogonal transformation in $SO(d)$ with provable\nequivalence, reducing parameter complexity from $\\mathcal{O}(d^2)$ to\n$\\mathcal{O}(d)$. 
Then we introduce flexible norm and relative angular\nadjustments under soft orthogonality regularization to enhance the adaptation\ncapability of downstream semantic deviations. Extensive experiments on various\ntasks and PLMs validate the effectiveness of our methods.", + "authors": "Xinyu Ma, Xu Chu, Zhibang Yang, Yang Lin, Xin Gao, Junfeng Zhao", + "published": "2024-04-05", + "updated": "2024-04-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2401.04151v1", + "title": "Chain of LoRA: Efficient Fine-tuning of Language Models via Residual Learning", + "abstract": "Fine-tuning is the primary methodology for tailoring pre-trained large\nlanguage models to specific tasks. As the model's scale and the diversity of\ntasks expand, parameter-efficient fine-tuning methods are of paramount\nimportance. One of the most widely used family of methods is low-rank\nadaptation (LoRA) and its variants. LoRA encodes weight update as the product\nof two low-rank matrices. Despite its advantages, LoRA falls short of\nfull-parameter fine-tuning in terms of generalization error for certain tasks.\n We introduce Chain of LoRA (COLA), an iterative optimization framework\ninspired by the Frank-Wolfe algorithm, to bridge the gap between LoRA and full\nparameter fine-tuning, without incurring additional computational costs or\nmemory overheads. COLA employs a residual learning procedure where it merges\nlearned LoRA modules into the pre-trained language model parameters and\nre-initilize optimization for new born LoRA modules. We provide theoretical\nconvergence guarantees as well as empirical results to validate the\neffectiveness of our algorithm. Across various models (OPT and llama-2) and\nseven benchmarking tasks, we demonstrate that COLA can consistently outperform\nLoRA without additional computational or memory costs.", + "authors": "Wenhan Xia, Chengwei Qin, Elad Hazan", + "published": "2024-01-08", + "updated": "2024-01-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2403.11621v1", + "title": "Let's Focus on Neuron: Neuron-Level Supervised Fine-tuning for Large Language Model", + "abstract": "Large Language Models (LLMs) are composed of neurons that exhibit various\nbehaviors and roles, which become increasingly diversified as models scale.\nRecent studies have revealed that not all neurons are active across different\ndatasets, and this sparsity correlates positively with the task-specific\nability, leading to advancements in model pruning and training efficiency.\nTraditional fine-tuning methods engage all parameters of LLMs, which is\ncomputationally expensive and may not be necessary. In contrast,\nParameter-Efficient Fine-Tuning (PEFT) approaches aim to minimize the number of\ntrainable parameters, yet they still operate at a relatively macro scale (e.g.,\nlayer-level). We introduce Neuron-Level Fine-Tuning (NeFT), a novel approach\nthat refines the granularity of parameter training down to the individual\nneuron, enabling more precise and computationally efficient model updates. The\nexperimental results show that NeFT not only exceeded the performance of\nfull-parameter fine-tuning and PEFT but also provided insights into the\nanalysis of neurons.", + "authors": "Haoyun Xu, Runzhe Zhan, Derek F. Wong, Lidia S. 
Chao", + "published": "2024-03-18", + "updated": "2024-03-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2312.06353v3", + "title": "Federated Full-Parameter Tuning of Billion-Sized Language Models with Communication Cost under 18 Kilobytes", + "abstract": "Pre-trained large language models (LLMs) need fine-tuning to improve their\nresponsiveness to natural language instructions. Federated learning offers a\nway to fine-tune LLMs using the abundant data on end devices without\ncompromising data privacy. Most existing federated fine-tuning methods for LLMs\nrely on parameter-efficient fine-tuning techniques, which may not reach the\nperformance height possible with full-parameter tuning. However, federated\nfull-parameter tuning of LLMs is a non-trivial problem due to the immense\ncommunication cost. This work introduces FedKSeed that employs zeroth-order\noptimization with a finite set of random seeds. It significantly reduces\ntransmission requirements between the server and clients to just a few random\nseeds and scalar gradients, amounting to only a few thousand bytes, making\nfederated full-parameter tuning of billion-sized LLMs possible on devices.\nBuilding on it, we develop a strategy enabling probability-differentiated seed\nsampling, prioritizing perturbations with greater impact on model accuracy.\nExperiments across six scenarios with various LLMs, datasets and data\npartitions demonstrate that our approach outperforms existing federated LLM\nfine-tuning methods in both communication efficiency and new task\ngeneralization.", + "authors": "Zhen Qin, Daoyuan Chen, Bingchen Qian, Bolin Ding, Yaliang Li, Shuiguang Deng", + "published": "2023-12-11", + "updated": "2024-01-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.DC" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2304.12272v1", + "title": "AMR Parsing with Instruction Fine-tuned Pre-trained Language Models", + "abstract": "Instruction fine-tuned language models on a collection of instruction\nannotated datasets (FLAN) have shown highly effective to improve model\nperformance and generalization to unseen tasks. However, a majority of standard\nparsing tasks including abstract meaning representation (AMR), universal\ndependency (UD), semantic role labeling (SRL) has been excluded from the FLAN\ncollections for both model training and evaluations. In this paper, we take one\nof such instruction fine-tuned pre-trained language models, i.e. FLAN-T5, and\nfine-tune them for AMR parsing. Our extensive experiments on various AMR\nparsing tasks including AMR2.0, AMR3.0 and BioAMR indicate that FLAN-T5\nfine-tuned models out-perform previous state-of-the-art models across all\ntasks. 
In addition, full fine-tuning followed by the parameter efficient\nfine-tuning, LoRA, further improves the model performances, setting new\nstate-of-the-arts in Smatch on AMR2.0 (86.4), AMR3.0 (84.9) and BioAMR (82.3).", + "authors": "Young-Suk Lee, Ram\u00f3n Fernandez Astudillo, Radu Florian, Tahira Naseem, Salim Roukos", + "published": "2023-04-24", + "updated": "2023-04-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2404.16385v1", + "title": "Efficiency in Focus: LayerNorm as a Catalyst for Fine-tuning Medical Visual Language Pre-trained Models", + "abstract": "In the realm of Medical Visual Language Models (Med-VLMs), the quest for\nuniversal efficient fine-tuning mechanisms remains paramount, especially given\nresearchers in interdisciplinary fields are often extremely short of training\nresources, yet largely unexplored. Given the unique challenges in the medical\ndomain, such as limited data scope and significant domain-specific\nrequirements, evaluating and adapting Parameter-Efficient Fine-Tuning (PEFT)\nmethods specifically for Med-VLMs is essential. Most of the current PEFT\nmethods on Med-VLMs have yet to be comprehensively investigated but mainly\nfocus on adding some components to the model's structure or input. However,\nfine-tuning intrinsic model components often yields better generality and\nconsistency, and its impact on the ultimate performance of Med-VLMs has been\nwidely overlooked and remains understudied. In this paper, we endeavour to\nexplore an alternative to traditional PEFT methods, especially the impact of\nfine-tuning LayerNorm layers, FFNs and Attention layers on the Med-VLMs. Our\ncomprehensive studies span both small-scale and large-scale Med-VLMs,\nevaluating their performance under various fine-tuning paradigms across tasks\nsuch as Medical Visual Question Answering and Medical Imaging Report\nGeneration. The findings reveal unique insights into the effects of intrinsic\nparameter fine-tuning methods on fine-tuning Med-VLMs to downstream tasks and\nexpose fine-tuning solely the LayerNorm layers not only surpasses the\nefficiency of traditional PEFT methods but also retains the model's accuracy\nand generalization capabilities across a spectrum of medical downstream tasks.\nThe experiments show LayerNorm fine-tuning's superior adaptability and\nscalability, particularly in the context of large-scale Med-VLMs.", + "authors": "Jiawei Chen, Dingkang Yang, Yue Jiang, Mingcheng Li, Jinjie Wei, Xiaolu Hou, Lihua Zhang", + "published": "2024-04-25", + "updated": "2024-04-25", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2310.07147v1", + "title": "QFT: Quantized Full-parameter Tuning of LLMs with Affordable Resources", + "abstract": "Large Language Models (LLMs) have showcased remarkable impacts across a wide\nspectrum of natural language processing tasks. Fine-tuning these pre-trained\nmodels on downstream datasets provides further significant performance gains,\nbut this process has been challenging due to its extraordinary resource\nrequirements. To this end, existing efforts focus on parameter-efficient\nfine-tuning, which, unfortunately, fail to capitalize on the powerful potential\nof full-parameter fine-tuning. 
In this work, we propose QFT, a novel Quantized\nFull-parameter Tuning framework for LLMs that enables memory-efficient\nfine-tuning without harming performance. Our framework incorporates two novel\nideas: (i) we adopt the efficient Lion optimizer, which only keeps track of the\nmomentum and has consistent update magnitudes for each parameter, an inherent\nadvantage for robust quantization; and (ii) we quantize all model states and\nstore them as integer values, and present a gradient flow and parameter update\nscheme for the quantized weights. As a result, QFT reduces the model state\nmemory to 21% of the standard solution while achieving comparable performance,\ne.g., tuning a LLaMA-7B model requires only <30GB of memory, satisfied by a\nsingle A6000 GPU.", + "authors": "Zhikai Li, Xiaoxuan Liu, Banghua Zhu, Zhen Dong, Qingyi Gu, Kurt Keutzer", + "published": "2023-10-11", + "updated": "2023-10-11", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2312.10136v2", + "title": "Gradient-based Parameter Selection for Efficient Fine-Tuning", + "abstract": "With the growing size of pre-trained models, full fine-tuning and storing all\nthe parameters for various downstream tasks is costly and infeasible. In this\npaper, we propose a new parameter-efficient fine-tuning method, Gradient-based\nParameter Selection (GPS), demonstrating that only tuning a few selected\nparameters from the pre-trained model while keeping the remainder of the model\nfrozen can generate similar or better performance compared with the full model\nfine-tuning method. Different from the existing popular and state-of-the-art\nparameter-efficient fine-tuning approaches, our method does not introduce any\nadditional parameters and computational costs during both the training and\ninference stages. Another advantage is the model-agnostic and non-destructive\nproperty, which eliminates the need for any other design specific to a\nparticular model. Compared with the full fine-tuning, GPS achieves 3.33%\n(91.78% vs. 88.45%, FGVC) and 9.61% (73.1% vs. 65.57%, VTAB) improvement of the\naccuracy with tuning only 0.36% parameters of the pre-trained model on average\nover 24 image classification tasks; it also demonstrates a significant\nimprovement of 17% and 16.8% in mDice and mIoU, respectively, on medical image\nsegmentation task. Moreover, GPS achieves state-of-the-art performance compared\nwith existing PEFT methods.", + "authors": "Zhi Zhang, Qizhe Zhang, Zijun Gao, Renrui Zhang, Ekaterina Shutova, Shiji Zhou, Shanghang Zhang", + "published": "2023-12-15", + "updated": "2024-05-04", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + }, + { + "url": "http://arxiv.org/abs/2305.17333v3", + "title": "Fine-Tuning Language Models with Just Forward Passes", + "abstract": "Fine-tuning language models (LMs) has yielded success on diverse downstream\ntasks, but as LMs grow in size, backpropagation requires a prohibitively large\namount of memory. Zeroth-order (ZO) methods can in principle estimate gradients\nusing only two forward passes but are theorized to be catastrophically slow for\noptimizing large models. 
In this work, we propose a memory-efficient\nzeroth-order optimizer (MeZO), adapting the classical ZO-SGD method to operate\nin-place, thereby fine-tuning LMs with the same memory footprint as inference.\nFor example, with a single A100 80GB GPU, MeZO can train a 30-billion parameter\nmodel, whereas fine-tuning with backpropagation can train only a 2.7B LM with\nthe same budget. We conduct comprehensive experiments across model types\n(masked and autoregressive LMs), model scales (up to 66B), and downstream tasks\n(classification, multiple-choice, and generation). Our results demonstrate that\n(1) MeZO significantly outperforms in-context learning and linear probing; (2)\nMeZO achieves comparable performance to fine-tuning with backpropagation across\nmultiple tasks, with up to 12x memory reduction and up to 2x GPU-hour reduction\nin our implementation; (3) MeZO is compatible with both full-parameter and\nparameter-efficient tuning techniques such as LoRA and prefix tuning; (4) MeZO\ncan effectively optimize non-differentiable objectives (e.g., maximizing\naccuracy or F1). We support our empirical findings with theoretical insights,\nhighlighting how adequate pre-training and task prompts enable MeZO to\nfine-tune huge models, despite classical ZO analyses suggesting otherwise.", + "authors": "Sadhika Malladi, Tianyu Gao, Eshaan Nichani, Alex Damian, Jason D. Lee, Danqi Chen, Sanjeev Arora", + "published": "2023-05-27", + "updated": "2024-01-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "category": "Parameter AND Efficient AND Fine AND Tuning" + } +] \ No newline at end of file