{
"url": "http://arxiv.org/abs/2404.16621v1",
"title": "Hippocrates: An Open-Source Framework for Advancing Large Language Models in Healthcare",
"abstract": "The integration of Large Language Models (LLMs) into healthcare promises to\ntransform medical diagnostics, research, and patient care. Yet, the progression\nof medical LLMs faces obstacles such as complex training requirements, rigorous\nevaluation demands, and the dominance of proprietary models that restrict\nacademic exploration. Transparent, comprehensive access to LLM resources is\nessential for advancing the field, fostering reproducibility, and encouraging\ninnovation in healthcare AI. We present Hippocrates, an open-source LLM\nframework specifically developed for the medical domain. In stark contrast to\nprevious efforts, it offers unrestricted access to its training datasets,\ncodebase, checkpoints, and evaluation protocols. This open approach is designed\nto stimulate collaborative research, allowing the community to build upon,\nrefine, and rigorously evaluate medical LLMs within a transparent ecosystem.\nAlso, we introduce Hippo, a family of 7B models tailored for the medical\ndomain, fine-tuned from Mistral and LLaMA2 through continual pre-training,\ninstruction tuning, and reinforcement learning from human and AI feedback. Our\nmodels outperform existing open medical LLMs models by a large-margin, even\nsurpassing models with 70B parameters. Through Hippocrates, we aspire to unlock\nthe full potential of LLMs not just to advance medical knowledge and patient\ncare but also to democratize the benefits of AI research in healthcare, making\nthem available across the globe.",
"authors": "Emre Can Acikgoz, Osman Batur \u0130nce, Rayene Bench, Arda An\u0131l Boz, \u0130lker Kesen, Aykut Erdem, Erkut Erdem",
"published": "2024-04-25",
"updated": "2024-04-25",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI",
"cs.CL"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "The remarkable success of Large Language Models (LLMs) across diverse NLP tasks has revolutionized artificial intelligence (Touvron et al., 2023b; Bai et al., 2023; Jiang et al., 2023; OpenAI, 2023; Google, 2023). Despite their impressive generalization capabilities, LLMs encounter challenges in clinical contexts, primarily due to a deficiency in domain-specific knowledge and the intricacies of medical terminology. Bridging this gap, in this work, we introduce Hippocrates (named after the Ancient Greek \u201cFather of Medicine\u201d), a state-of- the-art, fully open-source framework designed to elevate LLMs\u2019 proficiency in medical reasoning. We publicly share our training data, complete training and evaluations codes, along with intermediate model checkpoints. Our framework marks an important step towards democratizing advancements in medical LLMs. Previous attempts to develop advanced medical LLMs yielded promising results by further training them (Labrak et al., 2024), supervised fine-tuning them (Li et al., 2023; Han et al., 2023; Toma et al., 2023), or both (Wu et al., 2023; Chen et al., 2023), via special medical- text corpus and medical instruction datasets. However, the data collection, pre-training, \u2217Corresponding author, [email protected] 1 arXiv:2404.16621v1 [cs.LG] 25 Apr 2024 Hippocrates Oct 2022 Apr 2023 Jul 2023 Aug 2023 Sep 2023 Nov 2023 Dec 2023 Mar 2024 30 40 50 60 MedQA Accuracy (%) BioGPT 1.5B (27.2) MedAlpaca 7B (36.6) LLaMA-2 7B (39.5) PMC-LLaMA 13B (46.3) Mistral 7B (48.9) Qwen 72B (53.4) Meditron 70B (58.5) Hippo- 7B (50.8) Hippo- 7B (59.9) Figure 1: The evolution of medical LLM performances on the MedQA dataset. Our 7B Hippo- and Hippo- models achieve 50.8% and 59.9% 5-shot accuracy, respectively. Hippo- outperforms all existing open models, including even those with 70B parameters. and finetuning stages may include considerable complexity, which makes reproducing, analyzing, and comparing the recent LLMs in that domain challenging. On the other hand, closed models, e.g. GPT4 (OpenAI, 2023), Gemini (Google, 2023), Med-PaLM (Singhal et al., 2023b), trained on closed-domain datasets make their results non-reproducible, not to mention substantial computational costs and further complicate the understanding of which components are crucial to the success of these advanced medical frameworks. In this work, we provide full access to our framework, from the data sources to the training configurations and the reproducible evaluation protocols. We conduct a detailed empirical analysis to identify the impact of various design elements on LLM performance, leading to a domain-adapted framework that demonstrates superior performance on multiple medical benchmarks. Based on these insights, we develop a step-by-step guide for the efficient training of medical-LLMs. Our research efforts yield two advanced 7B parameter models, Hippo- and Hippo- . As shown in Fig. 1, our models not only outperform existing 7B and 13B models by a significant margin but also deliver results on par with, and in some cases exceeding, those of 70B models. We argue that the development of a broad, varied collection of open models is crucial for deepening our knowledge of language models and enhancing their applicability across various domains. In addition, we adopt a novel strategy for structuring our instruction tuning (IT) dataset, dividing it into two distinct components: the General Instruction Dataset and the Evaluation Instruction Dataset. 
The General dataset is designed to enable unbiased assessments by avoiding overlap with downstream task data, marking a departure from previous methodologies. On the other hand, the Evaluation Instruction Dataset, which incorporates training splits from evaluation benchmarks, facilitates direct comparisons with existing models (Chen et al., 2023). Notably, for the first time in the medical domain, our approach incorporates preference learning from medical professionals into the model development process, utilizing RLAIF (Lee et al., 2023b) and GPT4 for annotating preferences. For model evaluation, we employ the well-established EleutherAI evaluation framework (Gao et al., 2021; https://github.com/EleutherAI/lm-evaluation-harness), conducting tests across a set of six varied medical downstream tasks. These include MedMCQA (Pal et al., 2022), PubMedQA (Jin et al., 2019), MedQA (Jin et al., 2021), and USMLE-step1, USMLE-step2, and USMLE-step3. Leveraging this framework allows for straightforward replication of any LLM\u2019s results, eliminating the necessity for additional fine-tuning or the repetitive execution of evaluation scripts for each new model.",
"main_content": "Fig. 2 shows the overall workflow of the Hippocrates framework, starting from domainspecific pre-training and progressing through supervised fine-tuning and reinforcement 1https://github.com/EleutherAI/lm-evaluation-harness 2 Hippocrates Medical Knowledge Injection Medical Instruction Tuning Medical Preference Learning 298M Tokens Medical Guidelines, PMC-Patients, PubMedQA-train Language Modeling Predict next token Domain Adapted Model 696K Samples Flashcards, GenMedGPT, Platypus, HealthCareMagic, UMLS, Relations, Wikidoc, Patient-Info, MedicationQA \u2022 Query Answer 1 Answer 2 Prompt GPT-4 Preference Dataset Preference-Dataset Pre-training Data Instruction Data Model LLaMA2 ( ) 7B Mistral ( ) 7B Training Method 15K Samples Language Modeling { Instruction Finetuning Predict next token for responses Model Domain Adapted Model Training Method Supervised Fine-tuning Medical SFT Model { Reinforcement Learning Optimize for medical preferences Model Medical SFT Model Training Method DPO Medical Preference Model Evaluation Benchmark Data Inference Evaluation Framework MedMCQA MedQA PubMedQA USMLE-step1 USMLE-step2 USMLE-step3 Dataset Format Question + Answer Question + Answer Abs + Question + Answer Question + Answer Question + Answer Question + Answer Eleuther AI\u2019s Language Model Evaluation Harness Objective Log-Likelihood Evaluation Method Choose answer with the highest likelihood score Prompting In-Context Learning (ICL) strategies Approach Zero-Shot Few-Shot Chain-of-Thought (CoT) Methods MedMCQA-train, MedQA-train, PubMedQA-train General Eval Figure 2: An overview of the Hippocrates framework, illustrating the four critical phases including (1) continued pre-training, (2) supervised fine-tuning, (3) reinforcement learning from AI-generated feedback, and (4) the comprehensive evaluation pipeline. learning from AI-generated feedback to an extensive evaluation phase. This pipeline ensures our models are precisely tailored and rigorously tested for the medical domain. 2.1 Continued Pre-training Data A key aspect of our methodology is the integration of specialized medical knowledge through an extensive pre-training corpus, assembled from three specialized datasets: Medical Guidelines, PMC-Patients, and PubMedQA-contexts. The Medical Guidelines dataset comprises clinical practice guidelines, is used for training Meditron models (Chen et al., 2023). The PMC-Patients dataset (Zhao et al., 2023) consists of patient summaries extracted from case reports within PubMed Central (PMC). Additionally, the PubMedQA-contexts dataset is constructed by extracting the context field of each sample in the training split of the benchmark (Jin et al., 2019). Detailed descriptions and specifications of each dataset are available in Table 1. This extensive corpus, consisting of roughly 300M training tokens, forms the foundation of our models, ensuring their proficiency in navigating medical terminology and practices. We systematically assessed the impact of each dataset, both individually and in combination, to optimize our model\u2019s performance. Dataset Source License Size (MB) #Samples #Tokens Medical Guidelines Meditron Apache 2.0 License 382.6 37,970 96M PMC-Patients Pubmed Central CC BY-NC-SA 4.0 462.3 167,034 122M PubMedQA-train PubMedQA MIT License 290.2 211,269 80M Total 1,135.1 416,273 298M Table 1: Summary of the datasets used for continued pre-training, showing their sources, licence information and data statistics. 
2.2 Supervised Fine-Tuning Data

Developing effective medical LLMs requires blending domain-specific knowledge with sophisticated reasoning abilities. Previous models often utilized instruction data consisting of samples from the training or test sets of evaluation benchmarks. We also considered this setup, but additionally investigated an alternative involving generic medical data. Consequently, we constructed two sets of IT datasets: the General Instructions Data and the Evaluation Instructions Data.

General Instructions Data. This dataset aggregates more than 400K samples from nine different datasets, each derived from the instruction corpora of previous studies (Li et al., 2023; Han et al., 2023; Wu et al., 2023; Lee et al., 2023a). By excluding data from the training or test splits of downstream QA benchmarks, we aim to minimize bias and improve the model\u2019s generalization capabilities across different reasoning tasks. A pre-processing protocol was employed to remove superfluous words and web URLs, ensuring the data\u2019s quality and relevance. The detailed statistics of the dataset are presented in Table 2.

Dataset | Source | License | Size (MB) | #Samples | #Tokens
Medical Flashcards | MedAlpaca | No commercialized use | 18.8 | 33,955 | 3.9M
GenMedGPT-5k | ChatDoctor | Apache 2.0 | 3.1 | 5,452 | 0.6M
Open-Platypus | Platypus | CC BY-NC-SA 4.0 | 32.9 | 24,926 | 9.5M
HealthCareMagic-100k | ChatDoctor | Apache 2.0 | 143.8 | 112,165 | 32.3M
UMLS | PMC-LLaMA | CC BY 4.0 | 23.0 | 49,057 | 4.6M
UMLS-Relations | PMC-LLaMA | CC BY 4.0 | 21.7 | 50,000 | 4.3M
WikiDoc | MedAlpaca | CC BY-SA 4.0 | 11.0 | 10,000 | 2.6M
WikiDoc-Patient-Info | MedAlpaca | CC BY-SA 4.0 | 3.7 | 5,942 | 0.8M
MedicationQA | PMC-LLaMA | CC BY 4.0 | 0.4 | 552 | 0.1M
Total | | | 258.4 | 292,049 | 58.7M

Table 2: Summary of the General Instructions Data, describing the datasets used, their sources, licence information, and size.

Evaluation Instructions Data. This dataset was formed to examine the effects of including instruction samples directly from downstream tasks, a common practice in existing studies (Chen et al., 2023; Han et al., 2023; Wu et al., 2023). Instruction-response pairs were crafted using the training splits of various benchmarks, following the templates established in Meditron (Chen et al., 2023). We conducted a series of experiments to assess the distinct influence of each split on each task, both individually and collectively. Details about the Evaluation Instructions Data are given in Table 3.

Dataset | Source | License | Size (MB) | #Samples | #Tokens
MedMCQA-train | MedMCQA | MIT License | 114.4 | 182,822 | 24.9M
MedQA-train | MedQA | MIT License | 14.2 | 10,178 | 3.4M
PubMedQA-train | PubMedQA | MIT License | 76.3 | 211,269 | 95.9M
Total | | | 204.9 | 404,269 | 124.2M

Table 3: Summary of the Evaluation Instructions Data, showing which training splits of the downstream tasks they are derived from and their data statistics.

Beyond independently utilizing these datasets for supervised fine-tuning, we also examined the impact of individual datasets as well as the collective effect of combining them on model performance (refer to Appendix G).
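To make the construction of the Evaluation Instructions Data more tangible, here is a small illustrative sketch of turning a MedQA-style training sample into an instruction-response pair. The field names and the template are simplified assumptions; they are not the exact Meditron templates referenced above.

```python
# Illustrative sketch of turning a MedQA-style training sample into an instruction-response
# pair for the Evaluation Instructions Data. The template below is a simplified stand-in,
# not the exact Meditron template referenced in the text.
def build_instruction_pair(sample: dict) -> dict:
    # sample is assumed to look like:
    # {"question": "...", "options": {"A": "...", "B": "...", ...}, "answer": "C"}
    options = "\n".join(f"{key}. {text}" for key, text in sorted(sample["options"].items()))
    instruction = (
        "The following is a multiple choice question about medical knowledge. "
        "Answer with the single most appropriate option.\n\n"
        f"Question: {sample['question']}\n{options}"
    )
    response = f"{sample['answer']}. {sample['options'][sample['answer']]}"
    return {"instruction": instruction, "output": response}

example = {
    "question": "Which vitamin deficiency causes scurvy?",
    "options": {"A": "Vitamin A", "B": "Vitamin B12", "C": "Vitamin C", "D": "Vitamin D"},
    "answer": "C",
}
print(build_instruction_pair(example))
```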
2.3 Medical Preference Data

Constructing a preference dataset typically involves generating diverse responses to identical queries using LLMs, which are subsequently evaluated by human annotators to identify the most accurate response. This method, however, can become prohibitively expensive, both in terms of computation for generating responses and the financial and time investments required for manual annotation. To circumvent these issues, we leveraged the iCliniq-10k dataset (Li et al., 2023), containing 10K authentic patient-doctor dialogues from icliniq.com. Each dialogue features a patient question accompanied by three different answers: one from an actual doctor, and the others from ChatGPT and ChatDoctor (Li et al., 2023). We conducted a thorough preprocessing of this dataset to eliminate any irrelevant or extraneous information.

Medical RLAIF. To reduce annotation costs, we adopted the RLAIF methodology (Lee et al., 2023b) in the medical domain for the first time. Utilizing detailed prompts based on patient inquiries from the iCliniq-10k dataset, we used GPT4 (OpenAI, 2023) to determine the optimal response based on predefined instructions. These instructions were derived from those used in qualitative assessments by medical professionals in Med-PaLM (Singhal et al., 2022; 2023a), with minor modifications. This annotation approach amounted to a cost of $120. The exact prompt structure for applying RLAIF with GPT4 is given in Appendix J, Figure 7.

Validation. To test the reliability of GPT4\u2019s capacity to replicate medical expert annotations, we subjected 250 samples from our dataset to careful examination by two medical doctors, giving them the same instructions that we provided in the prompt to GPT4. Our analysis revealed compelling results. When comparing GPT4\u2019s annotations against those of MD-1, GPT4 demonstrated a Kappa Score of 0.376, indicating moderate agreement, and an accuracy of 68.9%. The comparison with MD-2 showed even stronger results, with GPT4 achieving a Kappa Score of 0.672, suggesting substantial agreement, alongside an 83.6% accuracy. Interestingly, the inter-annotator agreement between the two doctors themselves yielded a Kappa Score of 0.416 and an accuracy of 70.8%, situating GPT4\u2019s performance firmly within the range of human expert variability. These findings not only affirm GPT4\u2019s aptitude for medical annotation but also highlight its potential to serve as a cost-effective alternative to human annotators in medical research and application settings, suggesting that GPT4 can effectively mimic medical doctor preferences and potentially eliminate the need for costly doctor annotations. Consequently, we compiled a comprehensive medical doctor preference dataset, consisting of 15,258 samples, to further align our LLMs with real-world clinical decision-making processes and enhance their accuracy in interpreting and responding to medical queries.

2.4 Training Methodology

Our training strategy includes several phases: injection of medical knowledge through continued pre-training, domain-specific instruction tuning, and reinforcement learning from AI-generated feedback for improved alignment with medical experts. Employing the LLaMA Factory framework (hiyouga, 2023), we adhere to replicable and high-performance training standards. Moreover, we adopt the Low-Rank Adaptation (LoRA) technique (Hu et al., 2021) for training efficiency and precision. LoRA enhances LLMs by selectively updating weights within additional trainable layers, thereby accelerating the training process, minimizing memory usage, and mitigating overfitting and catastrophic forgetting. Our foundational models, LLaMA2 7B (Touvron et al., 2023b) and Mistral 7B (Jiang et al., 2023), are selected based on their robust performance across medical benchmarks, demonstrating their capacity to excel without extensive training modifications.
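A minimal sketch of the LoRA setup described in this section, expressed with the peft library rather than the authors' LLaMA Factory configuration. The target_modules list is an assumption standing in for the fully connected layers mentioned above, and the dropout value is not reported in the text; the rank and alpha values match the continued pre-training settings given below (r=8, alpha=16).

```python
# Minimal sketch of attaching LoRA adapters as described in this section, using the peft
# library rather than the authors' LLaMA Factory configuration. The target_modules list
# is an assumption standing in for "all fully connected layers".
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.bfloat16
)

lora_config = LoraConfig(
    r=8,                # rank used for continued pre-training (r and alpha are 32 for SFT)
    lora_alpha=16,
    lora_dropout=0.05,  # assumed value; not reported in the text
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the low-rank adapter weights remain trainable
```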
The zero-shot performances of these generic baseline models are presented at the beginning of Table 5.

Continued pre-training. To equip our base LLMs with domain-specific medical expertise, we extend their pre-training on a carefully curated medical text corpus as described in Section 2.1. This stage employs traditional language modeling, focusing on next-token prediction. During this phase, both models undergo continued pre-training using LoRA, specifically adapting the fully connected layers. The parameters for LoRA are carefully set, with the rank (r) at 8 and alpha (\u03b1) at 16, to optimize learning. We use the AdamW optimizer and adjust the learning rate with a cosine schedule, starting from an initial value of 1e-4. The per-device batch size is 8 with gradient accumulation of 2, culminating in an effective global batch size of 16, and the models are trained for a single epoch. The rationale and empirical support for our choices regarding the dataset, LoRA configurations, and overall optimization strategy are comprehensively analyzed in Appendix G.

Supervised Fine-tuning. After continued pre-training, models undergo fine-tuning with an Instruction Tuning (IT) dataset to closely mirror medical directives, aligning model outputs with clinical requirements. We tested the datasets described in Section 2.2 and found that the MedQA-train IT dataset works better than the other options. This fine-tuning phase also applies LoRA to all fully connected layers, with both rank (r) and alpha (\u03b1) set to 32 for balanced efficiency and computational overhead. The AdamW optimizer is used with a learning rate of 1e-4. To prevent model overfitting, loss calculation focuses solely on the responses. The training spanned 3 epochs with a per-device batch size of 8 and gradient accumulation set to 2. We also conducted experiments on direct fine-tuning of the base LLMs to evaluate the impact of continued pre-training (see Section 4.1) and performed a comprehensive analysis on dataset splits and fine-tuning hyperparameters (see Appendix G).

Medical Preference Learning. Finally, the instruction-tuned models are further trained with a recent and popular technique called direct preference optimization (DPO) (Rafailov et al., 2023). In DPO, reinforcement learning is bypassed, which allows for direct optimization based on preference data. Unlike RLHF, the responses in DPO need not be derived from the LLM being optimized. Central to DPO is a loss function that evaluates the likelihood of a preferred response over a less preferred one, steering the LLM towards this goal. This makes DPO more stable and significantly reduces computational demands.

The outcome of all this is our pair of medical LLMs, Hippo (LLaMA2-based) and Hippo (Mistral-based), built upon the pre-trained LLaMA2 7B and Mistral 7B models. These models were refined through a comprehensive process that included continued pre-training and/or instruction tuning using our carefully curated medical datasets. Following this, we also explored the impact of aligning the models with clinical preferences by conducting further training on medical preference data.
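For reference, the preference-learning objective described above can be written compactly. The sketch below is a generic implementation of the standard DPO loss from Rafailov et al. (2023), not the authors' training code; the beta temperature is a free hyperparameter that the text does not report.

```python
# Generic sketch of the DPO objective described above (Rafailov et al., 2023), not the
# authors' training code. Inputs are summed log-probabilities of whole responses under
# the policy being trained and the frozen reference (SFT) model.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    # How much more the policy prefers each response than the reference model does
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # Reward the margin between the preferred and the dispreferred response
    logits = beta * (chosen_logratio - rejected_logratio)
    return -F.logsigmoid(logits).mean()

# Toy usage with a batch of two preference pairs (values are made up)
loss = dpo_loss(torch.tensor([-12.0, -9.5]), torch.tensor([-15.0, -11.0]),
                torch.tensor([-13.0, -10.0]), torch.tensor([-14.0, -10.5]))
print(loss)
```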
3 Main Results

For an objective evaluation of domain-specific knowledge and reasoning capabilities in LLMs, a detailed and fair evaluation framework is essential. In alignment with methodologies adopted in prior research (Singhal et al., 2022; Han et al., 2023; Wu et al., 2023; Toma et al., 2023; Singhal et al., 2023a; Chen et al., 2023), we selected six widely recognized medical question-answering datasets, namely MedMCQA (Pal et al., 2022), MedQA (Jin et al., 2021), PubMedQA (Jin et al., 2019), and USMLE Steps 1-3 (Han et al., 2023), to assess model performance (see Table 4 for details). Performance metrics were derived through the use of the EleutherAI evaluation framework (Gao et al., 2021), ensuring a standardized approach to measuring model effectiveness in handling domain-specific queries.

Dataset | Source | Format | #Samples | #Choices | License
MedMCQA-test | MedMCQA | Question + Answer | 4,183 | 4 | MIT
MedQA-test | MedQA | Question + Answer | 1,273 | 5 | MIT
PubMedQA-test | PubMedQA | Abstract + Question + Answer | 1,000 | 3 | MIT
USMLE-step1 | USMLE | Question + Answer | 94 | 5 | MIT
USMLE-step2 | USMLE | Question + Answer | 109 | 6 | MIT
USMLE-step3 | USMLE | Question + Answer | 122 | 5 | MIT

Table 4: Summary of the evaluation benchmark datasets, describing the format, the number of test samples, the number of choices, and the licence info.

3.1 Experimental Setup

In our evaluation, we included a spectrum of leading LLMs, spanning general and medical LLMs and varying in scale from 1.5B to 70B parameters. Here we report the performances of our top-performing models for an accurate comparison. To ensure a fair and easily replicable assessment of these medical models, we utilized the Eleuther AI Language Model Evaluation Harness (Gao et al., 2021), a unified evaluation framework specifically designed for evaluating generative LLMs. This framework also serves as the evaluation tool for the Open LLM Leaderboard (Beeching et al., 2023; https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
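For replication, the harness can also be driven programmatically. The sketch below reflects the Python API of a recent lm-evaluation-harness release and is only illustrative: the checkpoint path is a placeholder, and the task names must match the task registry of the installed harness version, which may differ from the version used in the paper.

```python
# Hedged sketch of driving the evaluation harness from Python instead of the CLI.
# The checkpoint path is a placeholder and the task names are illustrative; they must
# exist in the installed harness's task registry.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=path/to/hippo-checkpoint,dtype=bfloat16",
    tasks=["medmcqa", "pubmedqa"],
    num_fewshot=5,
)
print(results["results"])
```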
Model | MedMCQA | MedQA | PubMedQA | USMLE-1 | USMLE-2 | USMLE-3 | Avg.
(each cell reports 0-shot/5-shot accuracy)
Gemma 2b | 26.2/27.7 | 27.8/30.6 | 59.1/60.8 | 20.2/16.0 | 18.4/30.3 | 24.6/20.5 | 29.4/31.0
LLaMA-2 7b | 34.4/39.4 | 29.3/39.5 | 72.3/72.4 | 18.1/22.3 | 22.9/33.0 | 27.1/32.0 | 34.0/39.8
Falcon 7b | 30.5/31.8 | 27.9/31.0 | 65.3/64.4 | 18.1/25.5 | 26.6/20.2 | 23.8/25.4 | 32.0/33.0
Vicuna 7b | 35.9/39.0 | 35.1/41.2 | 70.9/74.5 | 25.5/31.9 | 27.5/31.2 | 33.6/35.3 | 38.1/42.2
Mistral 7b | 39.3/48.5 | 36.8/48.9 | 76.3/77.8 | 24.5/50.0 | 31.2/42.2 | 27.9/43.4 | 39.3/51.8
BioMedLM | 32.2/29.6 | 29.3/30.6 | 55.2/55.2 | 15.9/22.3 | 19.3/18.4 | 23.0/31.2 | 25.9/31.2
BioGPT-Large | 33.1/30.1 | 31.3/27.2 | 60.1/47.7 | 22.3/19.2 | 22.0/14.7 | 23.0/23.0 | 32.0/27.0
MedAlpaca 7b | 35.8/37.5 | 36.1/36.6 | 73.2/70.6 | 22.3/27.7 | 27.5/32.1 | 29.5/37.7 | 37.4/40.4
PMC-LLaMA 7b | 31.5/33.0 | 28.0/29.5 | 66.5/68.4 | 21.3/19.2 | 23.9/19.3 | 22.1/22.1 | 32.2/31.9
Meditron 7b | 34.0/38.2 | 32.0/39.3 | 71.6/75.7 | 16.0/29.8 | 25.7/30.3 | 23.8/32.0 | 33.9/40.9
Bio-Mistral 7b | 36.4/42.4 | 35.0/42.1 | 73.4/75.1 | 24.5/28.7 | 27.5/34.9 | 27.9/44.3 | 37.5/31.9
LLaMA-2 13b | 38.2/43.9 | 34.3/43.3 | 75.9/71.9 | 20.2/38.3 | 22.0/29.4 | 23.0/38.5 | 35.6/40.9
Vicuna 13b | 39.7/44.3 | 35.9/45.9 | 75.6/75.0 | 24.5/40.4 | 26.6/35.8 | 23.8/46.7 | 37.7/44.6
MedAlpaca 13b | 32.5/33.3 | 31.8/34.3 | 72.6/72.5 | 24.5/23.4 | 24.5/26.6 | 30.3/29.5 | 36.0/44.2
PMC-LLaMA 13b | 39.1/44.5 | 37.8/46.3 | 76.8/76.5 | 30.9/35.1 | 22.9/36.7 | 26.2/29.5 | 39.0/44.8
LLaMA-2 70b | 42.8/52.0 | 44.9/56.1 | 73.2/77.8 | 31.9/59.6 | 44.0/57.8 | 44.3/53.3 | 46.8/59.4
Qwen 72b | 50.5/59.2 | 47.7/53.4 | 77.2/76.8 | 45.7/67.0 | 43.1/56.9 | 38.5/61.5 | 50.5/62.5
ClinicalCamel 70b | 43.7/53.4 | 45.5/58.5 | 73.6/77.6 | 40.4/59.6 | 43.1/60.6 | 42.6/60.7 | 48.2/61.7
Meditron 70b | 43.4/51.9 | 44.9/58.5 | 76.4/80.0 | 35.1/57.5 | 41.3/56.9 | 37.7/59.8 | 46.5/60.8
Hippo 7b (LLaMA2-based) | 54.3/53.9 | 50.6/50.8 | 74.7/76.6 | 46.8/40.4 | 41.3/39.5 | 50.0/43.4 | 53.0/50.8
Hippo 7b (Mistral-based) | 49.7/51.8 | 59.2/59.9 | 77.1/78.1 | 60.6/61.7 | 66.1/64.2 | 56.6/56.6 | 61.6/62.1

Table 5: Comparative analysis of generic and medical LLMs across downstream medical tasks in 0-shot and 5-shot learning settings. The best and the second-best performance are highlighted in bold and underline, respectively.

The LM-Evaluation-Harness operates on a log-likelihood objective, which calculates the negative log-likelihood for each potential answer in response to a given query. The answer is then chosen based on the highest likelihood score, indicating it as the most probable choice. During evaluation, each prompt includes a question and corresponding choices, separated by a new line. For PubMedQA, the abstract provides contextual grounding for the model\u2019s decision-making process. Examples of these prompts are provided in Appendix I.

3.2 Results

We present a comparative analysis of our novel models, Hippo (LLaMA2-based) and Hippo (Mistral-based), against a set of established base LLMs and medical-specific LLMs in Table 5. Our evaluation includes both zero-shot and few-shot (specifically, 5-shot) learning scenarios. Demonstrating superior performance, our Hippo models outperform traditional pretrained models in zero-shot evaluations and maintain their superiority in the 5-shot context. Remarkably, the Hippo models not only beat models with 7 billion and 13 billion parameters but also exceed the capabilities of those with 70 billion parameters. This outstanding performance highlights the adaptability and precision of our models, showing their remarkable ability to significantly boost prediction accuracy with minimal input examples.
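The log-likelihood selection rule described above can be sketched outside the harness as follows. The prompt format is simplified and illustrative, not the exact harness template, and the checkpoint name is a stand-in; the idea is simply to score each candidate answer by the total log-probability of its tokens given the question and pick the argmax.

```python
# Illustrative sketch of the log-likelihood scoring described above: each answer option is
# appended to the question and the option with the highest total log-probability wins.
# The prompt format is simplified; it is not the exact lm-evaluation-harness template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-v0.1"  # stand-in; any causal LM checkpoint works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model.eval()

def option_loglikelihood(prompt: str, option: str) -> float:
    # Assumes the prompt tokenizes to a prefix of prompt + option (true in the common case)
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + option, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-probabilities of each token conditioned on everything before it
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    target = full_ids[:, 1:]
    token_logps = log_probs.gather(-1, target.unsqueeze(-1)).squeeze(-1)
    # Sum only over the option tokens, not the shared prompt
    return token_logps[:, prompt_ids.shape[1] - 1:].sum().item()

question = "Question: Which vitamin deficiency causes scurvy?\nAnswer:"
options = [" Vitamin A", " Vitamin B12", " Vitamin C", " Vitamin D"]
scores = [option_loglikelihood(question, opt) for opt in options]
print(options[int(torch.tensor(scores).argmax())])
```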
4 Analysis

4.1 Contribution of Each Training Stage

Hippo (LLaMA2-based). Our evaluation methodology for the LLaMA2 7B model covers successive training stages: Continued Pre-training (CP), Instruction Tuning (SFT), and Direct Preference Optimization (DPO). As listed in Table 6, the base LLaMA2 7B model initially achieves an average accuracy of 34.0 across benchmarks. The CP stage marginally increases accuracy to 34.4, indicating initial benefits from domain-focused continued pre-training. The subsequent introduction of SFT yields a substantial performance boost to an average accuracy of 50.3, demonstrating the critical role of customized instruction in enhancing the model\u2019s capabilities in understanding and answering medical queries. Integrating CP with SFT further improves this performance to 53.0, highlighting the combined value of domain knowledge and specific instruction tuning. The final DPO stage slightly decreases the model\u2019s performance to 52.5, albeit with a slight increase in accuracy for MedMCQA and PubMedQA, illustrating DPO\u2019s refined impact on model preference alignment. This sequence delineates the incremental enhancements attributable to each training phase, with SFT marking a pivotal improvement. The composite model, LLaMA2 + CP + SFT, is thus designated as Hippo (LLaMA2-based) for its distinguished performance across our benchmarks.

Model | MedMCQA | MedQA | PubMedQA | USMLE-1 | USMLE-2 | USMLE-3 | Avg.
LLaMA2 7b | 34.4 | 29.3 | 72.3 | 18.1 | 22.9 | 27.1 | 34.0
+ CP | 34.6 | 31.9 | 72.8 | 20.2 | 25.7 | 21.3 | 34.4
+ SFT | 52.7 | 49.7 | 75.7 | 37.2 | 42.2 | 44.3 | 50.3
+ CP + SFT | 54.3 | 50.6 | 74.7 | 46.8 | 41.3 | 50.0 | 53.0
+ CP + SFT + DPO | 54.4 | 50.4 | 74.8 | 46.8 | 39.5 | 49.2 | 52.5
+ CP + SFT + DPO + CoT | 54.0 | 50.3 | 73.3 | 48.9 | 43.7 | 45.1 | 52.6
Mistral 7b | 39.3 | 36.8 | 76.3 | 24.5 | 31.2 | 27.9 | 39.3
+ CP | 40.5 | 37.2 | 74.9 | 29.8 | 33.9 | 29.5 | 41.0
+ SFT | 49.7 | 59.2 | 77.1 | 60.6 | 66.1 | 56.6 | 61.6
+ CP + SFT | 51.5 | 60.9 | 76.5 | 55.3 | 65.1 | 57.4 | 61.1
+ CP + SFT + DPO | 49.3 | 57.3 | 77.3 | 56.4 | 62.4 | 54.9 | 59.6
+ CP + SFT + DPO + CoT | 51.0 | 60.9 | 63.5 | 59.6 | 59.6 | 63.9 | 59.8

Table 6: Hippo (LLaMA2-based) and Hippo (Mistral-based): analysis of Continued Pretraining (CP), Instruction Tuning (SFT), and Direct Preference Optimization (DPO). The table demonstrates the incremental impact of CP on medical text data, SFT, and DPO on the zero-shot capabilities of the LLaMA2 7B and Mistral 7B models across a range of medical benchmarks, including MedMCQA, MedQA, PubMedQA, and the USMLE series, providing a granular view of performance improvements at each stage of model optimization.

Hippo (Mistral-based). Following the approach for Hippo (LLaMA2-based), the training evolution for the Mistral 7B model reveals gradual improvement in the model\u2019s proficiency in medical question-answering. Initial results from the baseline Mistral 7B model, as shown in Table 6, show an average benchmark accuracy of 39.3. Implementing CP slightly improves this to 41.0, reflecting the positive yet modest impact of domain-specific continued pre-training. The pivotal SFT stage significantly raises the performance, achieving an average accuracy of 61.6, emphasizing the critical role of customized instruction in enhancing the model\u2019s interpretative and response capabilities for medical inquiries. Interestingly, combining CP and SFT results in a slight reduction to 61.1, suggesting a complex interaction between domain pre-training and instruction tuning.
The subsequent application of DPO slightly lowers the overall score to 59.6, similar to the pattern observed for Hippo (LLaMA2-based), with targeted performance adjustment. Based on comprehensive analysis, Mistral 7B + SFT is selected to represent Hippo (Mistral-based), credited for its exceptional performance across all benchmarks.

4.2 Chain-of-Thought (CoT) Prompting

The CoT prompting technique (Wei et al., 2023) enhances an LLM\u2019s ability to tackle complex queries by guiding it to articulate intermediate reasoning steps. This method improves the model\u2019s responses by structuring its problem-solving process. In our study, we applied CoT prompting for in-context learning, adopting a slightly altered version of the instruction used by Pal & Sankarasubbu (2024b): \u201cThe following is a multiple choice question about medical knowledge. Solve it in a step-by-step fashion, starting by summarizing the available information. Output a single option from the four options as the final answer.\u201d However, the application of CoT prompting in our experiments with downstream medical tasks did not consistently enhance our models\u2019 performance, as shown in Table 6.

4.3 Influencing Examples

We explore the application of Influence Functions to understand the behavior of LLMs (Grosse et al., 2023), in our context particularly those trained with domain-specific datasets like medical text. This technique quantifies the effect of single training instances on the model\u2019s predictions, improving the transparency of AI models. This is increasingly important as the field of Explainable AI (XAI) grows to make AI systems more interpretable and accountable. However, the complexity of LLMs, which process vast amounts of data, highlights the necessity for efficient methods to perform this analysis. We believe incorporating this tool into our evaluation framework will prove useful for future studies. In the supplementary material (Appendix H), we present our analysis results, highlighting the most and least influential training examples for a MedQA dataset question and its model response. Notably, the most influential example shares overlapping medical concepts with the question, whereas the least influential training example shares none.

4.4 Uncertainty Quantification

In our study, we conducted an uncertainty quantification experiment on Hippo to understand its performance on the MedMCQA, MedQA, and PubMedQA datasets, as shown in Fig. 3. Our findings reveal that our model consistently assigns higher probabilities to questions it answers correctly across all datasets, suggesting an ability to self-calibrate its certainty. The model\u2019s confidence is notably higher on MedMCQA, possibly reflecting the dataset\u2019s relative simplicity. In contrast, its confidence on PubMedQA is comparatively lower, likely due to the dataset\u2019s complexity. Additionally, the model\u2019s confidence changes with different training stages: CPT leads to more conservative estimates, SFT boosts confidence, and adding DPO leads to variable confidence, with noticeable effects in MedMCQA and MedQA. These outcomes emphasize a complex relationship between training approaches and confidence calibration in the model.
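A small sketch of the confidence measure behind this analysis, under the assumption that per-option log-likelihoods have already been computed as in the evaluation setup: the option scores are normalized with a softmax, the chosen option's probability is taken as the model's confidence, and the distributions for correctly and incorrectly answered questions are then compared (the toy values below are made up).

```python
# Sketch of the uncertainty analysis described above: turn per-option log-likelihoods into
# a normalized probability for the chosen answer, then compare the resulting confidence for
# correctly and incorrectly answered questions. Inputs are assumed to be precomputed.
import numpy as np

def chosen_probability(option_loglikelihoods: list) -> float:
    # Softmax over option scores; the model's "confidence" is the winning option's share
    scores = np.array(option_loglikelihoods, dtype=float)
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return float(probs.max())

# Toy records: (per-option log-likelihoods, whether the argmax option was correct)
records = [([-4.1, -1.2, -5.0, -4.8], True),
           ([-2.9, -2.7, -3.0, -2.8], False),
           ([-6.2, -0.8, -5.9, -6.5], True)]

correct = [chosen_probability(s) for s, ok in records if ok]
incorrect = [chosen_probability(s) for s, ok in records if not ok]
print(f"mean confidence | correct: {np.mean(correct):.2f}, incorrect: {np.mean(incorrect):.2f}")
```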
Figure 3: Uncertainty quantification for our best-performing 5-shot Hippo model, where we plot the probability distributions assigned by the model to correct and incorrect predictions on the MedMCQA, MedQA, and PubMedQA datasets at the CPT, CPT + SFT, and CPT + SFT + DPO stages.

We present additional negative results in Appendix J, which we anticipate will be beneficial for the community. By sharing these findings, we aim to encourage further investigations.

5 Conclusion

In this study, we have introduced Hippocrates, a comprehensive and open-source framework tailored for the medical domain, addressing a wide array of challenges faced by medical LLMs. We provide openly available datasets and establish an intuitive benchmark using the LM-Evaluation-Harness tool. We also introduce Hippo (LLaMA2-based) and Hippo (Mistral-based), two 7B models demonstrating superior performance. Our work makes substantial contributions to the field by combining in-depth empirical research with a structured training methodology, offering invaluable insights and tools for future research not only in healthcare but in any area requiring domain-specific adaptation of LLMs."
"additional_graph_info": {
"graph": [],
"node_feat": {
"Emre Can Acikgoz": [
{
"url": "http://arxiv.org/abs/2404.16621v1",
"title": "Hippocrates: An Open-Source Framework for Advancing Large Language Models in Healthcare",
"abstract": "The integration of Large Language Models (LLMs) into healthcare promises to\ntransform medical diagnostics, research, and patient care. Yet, the progression\nof medical LLMs faces obstacles such as complex training requirements, rigorous\nevaluation demands, and the dominance of proprietary models that restrict\nacademic exploration. Transparent, comprehensive access to LLM resources is\nessential for advancing the field, fostering reproducibility, and encouraging\ninnovation in healthcare AI. We present Hippocrates, an open-source LLM\nframework specifically developed for the medical domain. In stark contrast to\nprevious efforts, it offers unrestricted access to its training datasets,\ncodebase, checkpoints, and evaluation protocols. This open approach is designed\nto stimulate collaborative research, allowing the community to build upon,\nrefine, and rigorously evaluate medical LLMs within a transparent ecosystem.\nAlso, we introduce Hippo, a family of 7B models tailored for the medical\ndomain, fine-tuned from Mistral and LLaMA2 through continual pre-training,\ninstruction tuning, and reinforcement learning from human and AI feedback. Our\nmodels outperform existing open medical LLMs models by a large-margin, even\nsurpassing models with 70B parameters. Through Hippocrates, we aspire to unlock\nthe full potential of LLMs not just to advance medical knowledge and patient\ncare but also to democratize the benefits of AI research in healthcare, making\nthem available across the globe.",
"authors": "Emre Can Acikgoz, Osman Batur \u0130nce, Rayene Bench, Arda An\u0131l Boz, \u0130lker Kesen, Aykut Erdem, Erkut Erdem",
"published": "2024-04-25",
"updated": "2024-04-25",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI",
"cs.CL"
],
"main_content": "Fig. 2 shows the overall workflow of the Hippocrates framework, starting from domainspecific pre-training and progressing through supervised fine-tuning and reinforcement 1https://github.com/EleutherAI/lm-evaluation-harness 2 Hippocrates Medical Knowledge Injection Medical Instruction Tuning Medical Preference Learning 298M Tokens Medical Guidelines, PMC-Patients, PubMedQA-train Language Modeling Predict next token Domain Adapted Model 696K Samples Flashcards, GenMedGPT, Platypus, HealthCareMagic, UMLS, Relations, Wikidoc, Patient-Info, MedicationQA \u2022 Query Answer 1 Answer 2 Prompt GPT-4 Preference Dataset Preference-Dataset Pre-training Data Instruction Data Model LLaMA2 ( ) 7B Mistral ( ) 7B Training Method 15K Samples Language Modeling { Instruction Finetuning Predict next token for responses Model Domain Adapted Model Training Method Supervised Fine-tuning Medical SFT Model { Reinforcement Learning Optimize for medical preferences Model Medical SFT Model Training Method DPO Medical Preference Model Evaluation Benchmark Data Inference Evaluation Framework MedMCQA MedQA PubMedQA USMLE-step1 USMLE-step2 USMLE-step3 Dataset Format Question + Answer Question + Answer Abs + Question + Answer Question + Answer Question + Answer Question + Answer Eleuther AI\u2019s Language Model Evaluation Harness Objective Log-Likelihood Evaluation Method Choose answer with the highest likelihood score Prompting In-Context Learning (ICL) strategies Approach Zero-Shot Few-Shot Chain-of-Thought (CoT) Methods MedMCQA-train, MedQA-train, PubMedQA-train General Eval Figure 2: An overview of the Hippocrates framework, illustrating the four critical phases including (1) continued pre-training, (2) supervised fine-tuning, (3) reinforcement learning from AI-generated feedback, and (4) the comprehensive evaluation pipeline. learning from AI-generated feedback to an extensive evaluation phase. This pipeline ensures our models are precisely tailored and rigorously tested for the medical domain. 2.1 Continued Pre-training Data A key aspect of our methodology is the integration of specialized medical knowledge through an extensive pre-training corpus, assembled from three specialized datasets: Medical Guidelines, PMC-Patients, and PubMedQA-contexts. The Medical Guidelines dataset comprises clinical practice guidelines, is used for training Meditron models (Chen et al., 2023). The PMC-Patients dataset (Zhao et al., 2023) consists of patient summaries extracted from case reports within PubMed Central (PMC). Additionally, the PubMedQA-contexts dataset is constructed by extracting the context field of each sample in the training split of the benchmark (Jin et al., 2019). Detailed descriptions and specifications of each dataset are available in Table 1. This extensive corpus, consisting of roughly 300M training tokens, forms the foundation of our models, ensuring their proficiency in navigating medical terminology and practices. We systematically assessed the impact of each dataset, both individually and in combination, to optimize our model\u2019s performance. Dataset Source License Size (MB) #Samples #Tokens Medical Guidelines Meditron Apache 2.0 License 382.6 37,970 96M PMC-Patients Pubmed Central CC BY-NC-SA 4.0 462.3 167,034 122M PubMedQA-train PubMedQA MIT License 290.2 211,269 80M Total 1,135.1 416,273 298M Table 1: Summary of the datasets used for continued pre-training, showing their sources, licence information and data statistics. 
2.2 Supervised Fine-Tuning Data Developing effective medical LLMs requires blending domain-specific knowledge with sophisticated reasoning abilities. Previous models often utilized instruction data consisting of samples from the training or test sets of evaluation benchmarks. We also considered this setup, but additionally investigated an alternative involving generic medical data. Consequently, we constructed two sets of IT datasets: the General Instructions Data and the Evaluation Instructions Data. 3 Hippocrates General Instructions Data. This dataset aggregates more than 400K samples from nine different datasets, each derived from the instruction corpora of previous studies (Li et al., 2023; Han et al., 2023; Wu et al., 2023; Lee et al., 2023a). By excluding data from the training or test splits of downstream QA benchmarks, we aim to minimize bias and improve the model\u2019s generalization capabilities across different reasoning tasks. A pre-processing protocol was employed to remove superfluous words and web URLs, ensuring the data\u2019s quality and relevance. The detailed statistics of the dataset are presented in Table 2. Dataset Source License Size (MB) #Samples #Tokens Medical Flashcards MedAlpaca No commercialized use 18.8 33,955 3.9M GenMedGPT-5k ChatDoctor Apache 2.0 3.1 5,452 0.6M Open-Platypus Platypus CC BY-NC-SA 4.0 32.9 24,926 9.5M HealthCareMagic-100k ChatDoctor Apache 2.0 143.8 112,165 32.3M UMLS PMC-LLaMA CC BY 4.0 23.0 49,057 4.6M UMLS-Relations PMC-LLaMA CC BY 4.0 21.7 50,000 4.3M WikiDoc MedAlpaca CC BY-SA 4.0 11.0 10,000 2.6M WikiDoc-Patient-Info MedAlpaca CC BY-SA 4.0 3.7 5,942 0.8M MedicationQA PMC-LLaMA CC BY 4.0 0.4 552 0.1M Total 258.4 292,049 58.7M Table 2: Summary of General Instructions Data, describing the datasets used, their sources, together with their licence information, and size. Evaluation Instructions Data. This dataset was formed to examine the effects of including instruction samples directly from downstream tasks, a common practice in existing studies (Chen et al., 2023; Han et al., 2023; Wu et al., 2023). Instruction-response pairs were crafted using the training splits of various benchmarks, following the templates established in Meditron (Chen et al., 2023). We conducted a series of experiments to assess the distinct influence of each split on each task, both individually and collectively. The details about the Evaluation Instruction Data is given in Table 3. Dataset Source License Size (MB) #Samples #Tokens MedMCQA-train MedMCQA MIT License 114.4 182,822 24.9M MedQA-train MedQA MIT License 14.2 10,178 3.4M PubMedQA-train PubMedQA MIT License 76.3 211,269 95.9M Total 204.9 404,269 124.2M Table 3: Summary of Evaluation Instructions dataset, showing which training splits of the downstream tasks they are derived from and their data statistics. Beyond independently utilizing these datasets for supervised fine-tuning, we also examined the impact of individual datasets as well as the collective effect of combining them on model performance (refer to Appendix G). 2.3 Medical Preference Data Constructing a preference dataset typically involves generating diverse responses to identical queries using LLMs, which are subsequently evaluated by human annotators to identify the most accurate response. This method, however, can become prohibitively expensive, both in terms of computation for generating responses and the financial and time investments required for manual annotation. 
To circumvent these issues, we leveraged the iCliniq-10k dataset (Li et al., 2023), containing 10K authentic patient-doctor dialogues from icliniq.com. Each dialogue features a patient question accompanied by three different answers: one from an actual doctor, and the others from ChatGPT and ChatDoctor (Li et al., 2023). We conducted a thorough preprocessing of this dataset to eliminate any irrelevant or extraneous information. 4 Hippocrates Medical RLAIF. To reduce annotation costs, we adopted the RLAIF methodology (Lee et al., 2023b) in the medical domain for the first time. Utilizing detailed prompts based on patient inquiries from the iCliniq-10k dataset, we used GPT4 (OpenAI, 2023) to determine the optimal response based on predefined instructions. These instructions were derived from those used in qualitative assessments by medical professionals in Med-PaLM (Singhal et al., 2022; 2023a), with minor modifications. This annotation approach amounted to a cost of $120. The exact prompt structure for applying RLAIF with GPT4 is given in Appendix J, Figure 7. Validation. To test the reliability of GPT4\u2019s capacity to replicate medical expert annotations, we subjected 250 samples from our dataset to careful examination by two medical doctors, given them the same instructions that we provided in the prompt to GPT4. Our analysis revealed compelling results. When comparing GPT4\u2019s annotations against those of MD-1, GPT4 demonstrated a Kappa Score of 0.376, indicating moderate agreement, and an accuracy of 68.9%. The comparison with MD-2 showed even stronger results, with GPT4 achieving a Kappa Score of 0.672, suggesting substantial agreement, alongside an 83.6% accuracy. Interestingly, the inter-annotator agreement between the two doctors themselves yielded a Kappa Score of 0.416 and an accuracy of 70.8%, situating GPT4\u2019s performance firmly within the range of human expert variability. These findings not only affirm GPT4\u2019s aptitude for medical annotation but also highlight its potential to serve as a cost-effective alternative to human annotators in medical research and application settings. These findings suggest that GPT4 is capable of effectively mimicking medical doctor preferences, potentially eliminating the need for costly doctor annotations. Consequently, we compiled a comprehensive medical doctor preference dataset, consisting of 15,258 samples, to further align our LLMs with real-world clinical decision-making processes and enhance their accuracy in interpreting and responding to medical queries. 2.4 Training Methodology Our training strategy includes several phases: injection of medical knowledge through continued pre-training, domain-specific instruction tuning, and reinforcement learning from AI-generated feedback for improved alignment with medical experts. Employing the LLaMA Factory framework (hiyouga, 2023), we adhere to replicable and high-performance training standards. Moreover, we adopt the Low-Rank Adaptation (LoRA) technique Hu et al. (2021) for training efficiency and precision. LoRA enhances LLMs by selectively updating weights within additional trainable layers, thereby accelerating the training process, minimizing memory usage, and mitigating overfitting and catastrophic forgetting. Our foundational models, LLaMA2 7B (Touvron et al., 2023b) and Mistral 7B (Jiang et al., 2023), are selected based on their robust performance across medical benchmarks, demonstrating their capacity to excel without extensive training modifications. 
The zero-shot performances of these generic baseline models is presented at the beginning of Table 5. Continued pre-training. To equip our base LLMs with domain-specific medical expertise, we extend their pre-training on a carefully curated medical text corpus as described in Section 2.1. This stage employs traditional language modeling, focusing on next-token prediction. During this phase, both models undergo continued pre-training using LoRA, specifically adapting the fully connected layers. The parameters for LoRA are carefully set, with the rank (r) at 8 and alpha (\u03b1) at 16, to optimize learning. We use the AdamW optimizer and adjust the learning rate using a cosine scheduling, starting from an initial value of 1e-4. The batch size per device was initialized to be 8, with gradient accumulations of 2, culminating in an effective global batch size of 16, and the models are trained for a single epoch. The rationale and empirical support for our choices regarding the dataset, LoRA configurations, and overall optimization strategy are comprehensively analyzed in Appendix G. Supervised Finetuning. After continued pre-training, models undergo fine-tuning with an Instruction Tuning (IT) dataset to closely mirror medical directives, aligning model 5 Hippocrates outputs with clinical requirements. We have tested with the datasets described in Section 2.2 and found that MedQA-train IT works better than the other options. This fine-tuning phase also employs LoRA to all fully connected layers with both rank (r) and alpha (\u03b1) set to 32 for balanced efficiency and computational overhead. AdamW optimizer is used with a learning rate of 1e \u22124. To prevent model overfitting, loss calculation focuses solely on the responses. The training spanned 3 epochs with a batch size of 8 per-device and gradient accumulation set to 2. We also conducted experiments on direct fine-tuning of the base LLMs to evaluate the impact of continued pre-training (see Section 4.1) and performed a comprehensive analysis on dataset splits and fine-tuning hyperparameters (see Appendix G). Medical Preference Learning. Finally, the instruction-tuned models are further trained with a recent and popular technique called direct preference optimization (DPO) (Rafailov et al., 2023). In DPO, reinforcement learning is bypassed which allows for direct optimization based on preference data. Unlike RLHF, the responses in DPO need not be derived from the LLM being optimized. Central to DPO is the development of a loss function that evaluates the likelihood of a preferred response over a less preferred one, steering the LLM towards this goal. This makes DPO more stable and significantly reduces computational demands. The outcome of all this are our medical LLMs, named Hippoand Hippo, built upon the pre-trained LLaMA2 7B and Mistral 7B models. These models were refined through a comprehensive process that included continued pre-training and/or instruction tuning using our carefully curated medical datasets. Following this, we also explored the impact of aligning the models with clinical preferences by conducting further training on medical preference data. 3 Main Results For an objective evaluation of domain-specific knowledge and reasoning capabilities in LLMs, a detailed and fair evaluation framework is essential. 
In alignment with methodologies adopted in prior research (Singhal et al., 2022; Han et al., 2023; Wu et al., 2023; Toma et al., 2023; Singhal et al., 2023a; Chen et al., 2023), we selected six widely recognized medical question-answering datasets, namely MedMCQA (Pal et al., 2022), MedQA (Jin et al., 2021), PubMedQA (Jin et al., 2019) and USMLE Step 1-3 (Han et al., 2023), to assess models performances (See Table 4 for details). Performance metrics were derived through the use of the EleutherAI evaluation framework (Gao et al., 2021), ensuring a standardized approach to measuring model effectiveness in handling domain-specific queries. Dataset Source Format #Samples #Choices License MedMCQA-test MedMCQA Question + Answer 4,183 4 MIT MedQA-test MedQA Question + Answer 1,273 5 MIT PubMedQA-test PubMedQA Abstract + Question + Answer 1,000 3 MIT USMLE-step1 USMLE Question + Answer 94 5 MIT USMLE-step2 USMLE Question + Answer 109 6 MIT USMLE-step3 USMLE Question + Answer 122 5 MIT Table 4: Summary of the evaluation benchmark datasets, describing the format, the number of test samples, the number of choices, and the licence info. 3.1 Experimental Setup In our evaluation, we included a spectrum of leading LLMs, spanning general and medical LLMs, varying in scale from 1.5B to an advanced 70B parameters. Here we report the performances of our top-performing models for an accurate comparison. To ensure a fair and easily replicable assessment of these medical models, we utilized the Eleuther AI Language Model Evaluation Harness (Gao et al., 2021), a unified evaluation framework specifically designed for evaluating generative LLMs. This framework also serves as the evaluation tool for the Open LLM Leaderboard2 (Beeching et al., 2023). 2https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard 6 Hippocrates Model MedMCQA MedQA PubmedQA USMLE-1 USMLE-2 USMLE-3 Avg. 
0-shot/5-shot 0-shot/5-shot 0-shot/5-shot 0-shot/5-shot 0-shot/5-shot 0-shot/5-shot 0-shot/5-shot Gemma 2b 26.2/27.7 27.8/30.6 59.1/60.8 20.2/16.0 18.4/30.3 24.6/20.5 29.4/31.0 LLaMA-2 7b 34.4/39.4 29.3/39.5 72.3/72.4 18.1/22.3 22.9/33.0 27.1/32.0 34.0/39.8 Falcon 7b 30.5/31.8 27.9/31.0 65.3/64.4 18.1/25.5 26.6/20.2 23.8/25.4 32.0/33.0 Vicuna 7b 35.9/39.0 35.1/41.2 70.9/74.5 25.5/31.9 27.5/31.2 33.6/35.3 38.1/42.2 Mistral 7b 39.3/48.5 36.8/48.9 76.3/77.8 24.5/50.0 31.2/42.2 27.9/43.4 39.3/51.8 BioMedLM 32.2/29.6 29.3/30.6 55.2/55.2 15.9/22.3 19.3/18.4 23.0/31.2 25.9/31.2 BioGPT-Large 33.1/30.1 31.3/27.2 60.1/47.7 22.3/19.2 22.0/14.7 23.0/23.0 32.0/27.0 MedAlpaca 7b 35.8/37.5 36.1/36.6 73.2/70.6 22.3/27.7 27.5/32.1 29.5/37.7 37.4/40.4 PMC-LLaMA 7b 31.5/33.0 28.0/29.5 66.5/68.4 21.3/19.2 23.9/19.3 22.1/22.1 32.2/31.9 Meditron 7b 34.0/38.2 32.0/39.3 71.6/75.7 16.0/29.8 25.7/30.3 23.8/32.0 33.9/40.9 Bio-Mistral 7b 36.4/42.4 35.0/42.1 73.4/75.1 24.5/28.7 27.5/34.9 27.9/44.3 37.5/31.9 LLaMA-2 13b 38.2/43.9 34.3/43.3 75.9/71.9 20.2/38.3 22.0/29.4 23.0/38.5 35.6/40.9 Vicuna 13b 39.7/44.3 35.9/45.9 75.6/75.0 24.5/40.4 26.6/35.8 23.8/46.7 37.7/44.6 MedAlpaca 13b 32.5/33.3 31.8/34.3 72.6/72.5 24.5/23.4 24.5/26.6 30.3/29.5 36.0/44.2 PMC-LLaMA 13b 39.1/44.5 37.8/46.3 76.8/76.5 30.9/35.1 22.9/36.7 26.2/29.5 39.0/44.8 LLaMA-2 70b 42.8/ 52.0 44.9/56.1 73.2/77.8 31.9/59.6 44.0/57.8 44.3/53.3 46.8/59.4 Qwen 72b 50.5/59.2 47.7/53.4 77.2/76.8 45.7/67.0 43.1/56.9 38.5/61.5 50.5/62.5 ClinicalCamel 70b 43.7/53.4 45.5/58.5 73.6/77.6 40.4/59.6 43.1/60.6 42.6/60.7 48.2/61.7 Meditron 70b 43.4/51.9 44.9/58.5 76.4/80.0 35.1/57.5 41.3/56.9 37.7/59.8 46.5/60.8 Hippo7b 54.3/53.9 50.6/50.8 74.7/76.6 46.8/40.4 41.3/39.5 50.0/43.4 53.0/50.8 Hippo7b 49.7/51.8 59.2/59.9 77.1/78.1 60.6/61.7 66.1/64.2 56.6/56.6 61.6/62.1 Table 5: Comparative analysis of generic and medical LLMs across downstream medical tasks in 0-shot and 5-shot learning settings. The best and the second-best performance are highlighted in bold and underline, respectively. LM-Evaluation-Harness operates on a Log-Likelihood objective, which calculates the negative log-likelihood for each potential answer in response to a given query. The answer is then chosen based on the highest likelihood score, indicating it as the most probable choice. During evaluation, each prompt includes a question and corresponding choices, separated by a new line. For PubMedQA, the abstract provides contextual grounding for the model\u2019s decision-making process. Examples of these prompts are provided in the Appendix I. 3.2 Results We present a comparative analysis of our novel models, Hippoand Hippo, against a set of established base LLMs and medical-specific LLMs, in Table 5. Our evaluation includes both zero-shot and few-shot (specifically, 5-shot) learning scenarios. Demonstrating superior performance, our Hippo models outperform traditional pretrained models in zero-shot evaluations and maintain their superiority in the 5-shot context. Remarkably, Hippoand Hipponot only beat models with 7 billion and 13 billion parameters but also exceed the capabilities of those with 70 billion parameters. This outstanding performance highlights the adaptability and precision of our models, showing their remarkable ability to significantly boost prediction accuracy with minimal input examples. 4 Analysis 4.1 Contribution of Each Training Stage Hippo. 
Our evaluation methodology for the LLaMA2 7B model covers successive training stages: Continued Pre-training (CP), Instruction Tuning (SFT), and Direct Preference Optimization (DPO). As listed in Table 6, the base model LLaMA2 7B initially achieves an average accuracy of 34.0 across benchmarks. The CP stage marginally increases accuracy to 34.4, indicating initial benefits from domain-focused continued pre-training. The subsequent introduction of SFT yields a substantial performance boost to an average accuracy of 50.3, demonstrating the critical role of customized instruction in enhancing the model\u2019s capabilities in understanding and answering medical queries. Integrating CP with SFT 7 Hippocrates Model MedMCQA MedQA PubmedQA USMLE-1 USMLE-2 USMLE-3 Avg. LLaMA2 7b 34.4 29.3 72.3 18.1 22.9 27.1 34.0 + CP 34.6 31.9 72.8 20.2 25.7 21.3 34.4 + SFT 52.7 49.7 75.7 37.2 42.2 44.3 50.3 + CP + SFT 54.3 50.6 74.7 46.8 41.3 50.0 53.0 + CP + SFT + DPO 54.4 50.4 74.8 46.8 39.5 49.2 52.5 + CP + SFT + DPO + CoT 54.0 50.3 73.3 48.9 43.7 45.1 52.6 Mistral 7b 39.3 36.8 76.3 24.5 31.2 27.9 39.3 + CP 40.5 37.2 74.9 29.8 33.9 29.5 41.0 + SFT 49.7 59.2 77.1 60.6 66.1 56.6 61.6 + CP + SFT 51.5 60.9 76.5 55.3 65.1 57.4 61.1 + CP + SFT + DPO 49.3 57.3 77.3 56.4 62.4 54.9 59.6 + CP + SFT + DPO + CoT 51.0 60.9 63.5 59.6 59.6 63.9 59.8 Table 6: Hippoand Hippo: Analysis of Continued Pretraining, Instruction Tuning, and Direct Preference Optimization. This table demonstrates the incremental impact of Continued Pretraining (CP) on medical text data, Instruction Tuning (SFT), and Direct Preference Optimization (DPO) on the zero-shot capabilities of the LLaMA2 7B and Mistral 7B models across a range of medical benchmarks, including MedMCQA, MedQA, PubmedQA, and the USMLE series. The results, aggregated and individual, underline the significance of each methodological advancement in enhancing the model\u2019s proficiency in interpreting and responding to complex medical queries, thereby providing a granular view of performance improvements at each stage of model optimization. further improves this performance to 53.0, highlighting the combined value of domain knowledge and specific instruction tuning. The final DPO stage slightly decreases the model\u2019s performance to 52.5, albeit with a slight increase in accuracy for MedMCQA and PubMedQA, illustrating DPO\u2019s refined impact on model preference alignment. This sequence delineates the incremental enhancements attributable to each training phase, with SFT marking a pivotal improvement. The composite model, LLaMA2 + CP + SFT, is thus designated as Hippofor its distinguished performance across our benchmarks. Hippo. Following the approach for Hippo, the training evolution for the Mistral 7B model reveals gradual improvement in the model\u2019s proficiency in medical questionanswering. Initial results from the baseline Mistral 7B model, as shown in Table 6, show an average benchmark accuracy of 39.3. Implementing CP slightly improves this to 41.0, reflecting the positive yet modest impact of domain-specific continued pre-training. The pivotal SFT stage significantly raises the performance, achieving an average accuracy of 61.6, emphasizing the critical role of customized instruction in enhancing the model\u2019s interpretative and response capabilities for medical inquiries. Interestingly, combining CP and SFT results in a slight reduction to 61.1, suggesting a complex interaction between domain pre-training and instruction tuning. 
The subsequent application of DPO slightly lowers the overall score to 59.6, similar to the pattern observed for Hippo- , with targeted performance adjustment. Based on this comprehensive analysis, Mistral 7b + SFT is selected to represent Hippo- , credited for its exceptional performance across all benchmarks.
4.2 Chain-of-Thought (CoT) Prompting
The CoT prompting technique (Wei et al., 2023) enhances an LLM\u2019s ability to tackle complex queries by guiding it to articulate intermediate reasoning steps. This method improves the model\u2019s responses by structuring its problem-solving process. In our study, we applied CoT prompting for in-context learning, adopting a slightly altered version of the instruction utilized in (Pal & Sankarasubbu, 2024b): \u201cThe following is a multiple choice question about medical knowledge. Solve it in a step-by-step fashion, starting by summarizing the available information. Output a single option from the four options as the final answer.\u201d However, the application of CoT prompting in our experiments with downstream medical tasks did not consistently enhance our models\u2019 performance, as shown in Table 6.
4.3 Influencing Examples
We explore the application of Influence Functions to understand the behavior of LLMs (Grosse et al., 2023), in our context particularly models trained with domain-specific datasets such as medical text. This technique quantifies the effect of a single training instance on the model\u2019s predictions, improving the transparency of AI models. This is increasingly important as the field of Explainable AI (XAI) grows to make AI systems more interpretable and accountable. However, the complexity of LLMs, which process vast amounts of data, highlights the necessity for efficient methods to perform this analysis. We believe incorporating this tool into our evaluation framework will prove useful for future studies. In the supplementary material (Appendix H), we present our analysis results, highlighting the most and least influential training examples for a MedQA dataset question and its model response. Notably, the most influential example shares overlapping medical concepts with the question, in contrast to no shared concepts with the least influential training example.
4.4 Uncertainty Quantification
In our study, we conducted an uncertainty quantification experiment on Hippo- to understand its performance on the MedMCQA, MedQA, and PubMedQA datasets, as shown in Fig. 3. Our findings reveal that our model consistently assigns higher probabilities to questions it answers correctly across all datasets, suggesting an ability to self-calibrate its certainty. The model\u2019s confidence is notably higher on MedMCQA, possibly reflecting the dataset\u2019s relative simplicity. In contrast, its confidence on PubMedQA is comparatively lower, likely due to the dataset\u2019s complexity. Additionally, the model\u2019s confidence changes with different training stages: CPT leads to more conservative estimates, SFT boosts confidence, and adding DPO leads to variable confidence, with noticeable effects on MedMCQA and MedQA. These outcomes emphasize a complex relationship between training approaches and confidence calibration in the model.
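The text does not spell out exactly how the per-question probabilities behind Figure 3 are computed; one natural reading, sketched below, renormalizes the per-choice log-likelihood scores into a probability distribution over the options and records the probability of the predicted option, split by whether the prediction was correct. The helper choice_loglikelihood is the scoring sketch given earlier, and eval_examples is an assumed list of benchmark items; all names here are illustrative.
```python
# Sketch of the uncertainty analysis behind Figure 3: turn per-choice
# log-likelihoods into a probability over options, then bucket the predicted
# option's probability by correctness.
import math

def answer_confidence(question, choices):
    prompt = question + "\n" + "\n".join(choices) + "\nAnswer: "
    lls = [choice_loglikelihood(prompt, c) for c in choices]
    m = max(lls)
    exps = [math.exp(ll - m) for ll in lls]   # numerically stable softmax
    z = sum(exps)
    probs = [e / z for e in exps]
    pred = max(range(len(choices)), key=lambda i: probs[i])
    return pred, probs[pred]

correct_conf, incorrect_conf = [], []
for ex in eval_examples:  # items with "question", "choices", and a gold "label" index
    pred, conf = answer_confidence(ex["question"], ex["choices"])
    (correct_conf if pred == ex["label"] else incorrect_conf).append(conf)
# correct_conf / incorrect_conf are the two populations whose densities are
# plotted per dataset in Figure 3.
```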
Figure 3: Uncertainty quantification for our best-performing 5-shot Hippo- model, where we plot the probability distributions assigned by the model to both correct predictions and incorrect predictions on the MedMCQA, MedQA, and PubMedQA datasets (panels per dataset; curves for CPT, CPT + SFT, and CPT + SFT + DPO).
We present additional negative results in Appendix J, which we anticipate will be beneficial for the community. By sharing these findings, we aim to encourage further investigations.
5 Conclusion
In this study, we have introduced Hippocrates, a comprehensive and open-source framework tailored for the medical domain, addressing a wide array of challenges faced by medical LLMs. We provide openly available datasets and establish an intuitive benchmark using the LM-Evaluation-Harness tool. We also introduce Hippo- and Hippo- , two 7B models demonstrating superior performance. Our work makes substantial contributions to the field by combining in-depth empirical research with a structured training methodology, offering invaluable insights and tools for future research not only in healthcare but in any area requiring domain-specific adaptation of LLMs.",
"introduction": "The remarkable success of Large Language Models (LLMs) across diverse NLP tasks has revolutionized artificial intelligence (Touvron et al., 2023b; Bai et al., 2023; Jiang et al., 2023; OpenAI, 2023; Google, 2023). Despite their impressive generalization capabilities, LLMs encounter challenges in clinical contexts, primarily due to a deficiency in domain-specific knowledge and the intricacies of medical terminology. Bridging this gap, in this work, we introduce Hippocrates (named after the Ancient Greek \u201cFather of Medicine\u201d), a state-of- the-art, fully open-source framework designed to elevate LLMs\u2019 proficiency in medical reasoning. We publicly share our training data, complete training and evaluations codes, along with intermediate model checkpoints. Our framework marks an important step towards democratizing advancements in medical LLMs. Previous attempts to develop advanced medical LLMs yielded promising results by further training them (Labrak et al., 2024), supervised fine-tuning them (Li et al., 2023; Han et al., 2023; Toma et al., 2023), or both (Wu et al., 2023; Chen et al., 2023), via special medical- text corpus and medical instruction datasets. However, the data collection, pre-training, \u2217Corresponding author, [email protected] 1 arXiv:2404.16621v1 [cs.LG] 25 Apr 2024 Hippocrates Oct 2022 Apr 2023 Jul 2023 Aug 2023 Sep 2023 Nov 2023 Dec 2023 Mar 2024 30 40 50 60 MedQA Accuracy (%) BioGPT 1.5B (27.2) MedAlpaca 7B (36.6) LLaMA-2 7B (39.5) PMC-LLaMA 13B (46.3) Mistral 7B (48.9) Qwen 72B (53.4) Meditron 70B (58.5) Hippo- 7B (50.8) Hippo- 7B (59.9) Figure 1: The evolution of medical LLM performances on the MedQA dataset. Our 7B Hippo- and Hippo- models achieve 50.8% and 59.9% 5-shot accuracy, respectively. Hippo- outperforms all existing open models, including even those with 70B parameters. and finetuning stages may include considerable complexity, which makes reproducing, analyzing, and comparing the recent LLMs in that domain challenging. On the other hand, closed models, e.g. GPT4 (OpenAI, 2023), Gemini (Google, 2023), Med-PaLM (Singhal et al., 2023b), trained on closed-domain datasets make their results non-reproducible, not to mention substantial computational costs and further complicate the understanding of which components are crucial to the success of these advanced medical frameworks. In this work, we provide full access to our framework, from the data sources to the training configurations and the reproducible evaluation protocols. We conduct a detailed empirical analysis to identify the impact of various design elements on LLM performance, leading to a domain-adapted framework that demonstrates superior performance on multiple medical benchmarks. Based on these insights, we develop a step-by-step guide for the efficient training of medical-LLMs. Our research efforts yield two advanced 7B parameter models, Hippo- and Hippo- . As shown in Fig. 1, our models not only outperform existing 7B and 13B models by a significant margin but also deliver results on par with, and in some cases exceeding, those of 70B models. We argue that the development of a broad, varied collection of open models is crucial for deepening our knowledge of language models and enhancing their applicability across various domains. In addition, we adopt a novel strategy for structuring our instruction tuning (IT) dataset, dividing it into two distinct components: the General Instruction Dataset and the Evaluation Instruction Dataset. 
The General dataset is designed to enable unbiased assessments by avoiding overlap with downstream task data, marking a departure from previous methodologies. On the other hand, the Evaluation Instruction Dataset, which incorporates training splits from evaluation benchmarks, facilitates direct comparisons with existing models (Chen et al., 2023). Notably, for the first time in the medical domain, our approach incorporates preference learning from medical professionals into the model development process, utilizing RLAIF (Lee et al., 2023b) and GPT4 for annotating preferences. For model evaluation, we employ the well-established EleutherAI framework (Gao et al., 2021), conducting tests across a set of six varied medical downstream tasks. These include MedMCQA (Pal et al., 2022), PubmedQA (Jin et al., 2019), MedQA (Jin et al., 2021), and the USMLE-step1, USMLE-step2, and USMLE-step3. Leveraging this framework allows for straightforward replication of any LLM\u2019s results, eliminating the necessity for additional fine-tuning or the repetitive execution of evaluation scripts for each new model."
},
{
"url": "http://arxiv.org/abs/2211.01736v2",
"title": "Transformers on Multilingual Clause-Level Morphology",
"abstract": "This paper describes our winning systems in MRL: The 1st Shared Task on\nMultilingual Clause-level Morphology (EMNLP 2022 Workshop) designed by KUIS AI\nNLP team. We present our work for all three parts of the shared task:\ninflection, reinflection, and analysis. We mainly explore transformers with two\napproaches: (i) training models from scratch in combination with data\naugmentation, and (ii) transfer learning with prefix-tuning at multilingual\nmorphological tasks. Data augmentation significantly improves performance for\nmost languages in the inflection and reinflection tasks. On the other hand,\nPrefix-tuning on a pre-trained mGPT model helps us to adapt analysis tasks in\nlow-data and multilingual settings. While transformer architectures with data\naugmentation achieved the most promising results for inflection and\nreinflection tasks, prefix-tuning on mGPT received the highest results for the\nanalysis task. Our systems received 1st place in all three tasks in MRL 2022.",
"authors": "Emre Can Acikgoz, Tilek Chubakov, M\u00fcge Kural, G\u00f6zde G\u00fcl \u015eahin, Deniz Yuret",
"published": "2022-11-03",
"updated": "2022-11-13",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.LG"
],
"main_content": "In this section, first we cover the model architectures and training strategies that we have used (Vaswani et al., 2017; Shliazhko et al., 2022; Li and Liang, 2021), and then discuss our data augmentation strategies in details (Anastasopoulos and Neubig, 2019). 2.1 Vanilla Transformer We used a modified version of vanilla Transformer architecture in Vaswani et al. (2017) which contains 4 layers of encoder and decoder with 4 multihead attentions. The embedding size and the feedforward dimension is set to 256 and 1024, respectively. As suggested in Wu et al. (2021), we used layer normalization before the self-attention and feed-forward layers of the network that leads to slightly better results. We used these in inflection and reinflections tasks. 2.2 Prefix-Tuning Using prefix-tuning reduces computational costs by optimizing a small continuous task-specific vectors, called prefixes, while keeping frozen all the other parameters of the LLM. We added two prefixes, called virtual tokens in Li and Liang (2021), the gradient optimization made across these prefixes that is described in the Figure 1. We used Shliazhko et al. (2022) weights during prompting. Prefix-tuning method outperforms other fine-tuning approaches in low-data resources and better adapts to unseen topics during prompting (Li and Liang, 2021). 2.3 Data Augmentation Hallucinating the data for low-resource languages results with a remarkable performance increase for inflection Anastasopoulos and Neubig (2019). The hallucinated data is generated by replacing the stem Figure 2: In order to create the hallucinated samples, we first align the characters of the lemma and the inflected forms. After that, we substitute the stem parts of the input with random characters that comes from the validation set and test set, as shown in the figure. characters of the aligned word with random characters by using the validation or test sets (see Fig. 2). This way, the amount increase in the training data helps the model to learn and generalize rare seen samples. On the other hand, the amount of hallucinated data that will be added to the training set, hyperparameter N, is also another parameter that directly effects our accuracy. Therefore, hyperparameter N needs to be decided specifically for each language according to corresponding language\u2019s complexity and topology. 3 Experimental Settings 3.1 Dataset In the shared task, there are eight different languages with varying linguistic complexity which comes from different language families: English, French, German, Hebrew, Russian, Swahili, Spanish, Turkish. For Hebrew there are two versions as Hebrew-vocalized and Hebrew-unvocalized. Training data contains 10,000 instances for each language and there are 1,000 samples both in development set and test set. Swahili and Spanish are the surprise languages that announced two weeks before the final submission day, together with the unlabeled test data for each language. 3.2 Evaluation Models are evaluated according to Exact Match (EM), Edit Distance (ED), and F1 accuracy. For task1 (inflection) and task2 (reinflection) ED is the leaderboard metric. For task3 (analysis), F1 score is the objective. EM accuracy represents the ratio of correctly predicted lemma and features, and ED is calculated based on Levenshtein Distance which indicates how different two strings are, (the ground truth and prediction for our case) from each other. F1 accuracy is the harmonic mean of the precision and recall. 
3 Experimental Settings
3.1 Dataset
In the shared task, there are eight different languages with varying linguistic complexity that come from different language families: English, French, German, Hebrew, Russian, Swahili, Spanish, and Turkish. For Hebrew, there are two versions: Hebrew-vocalized and Hebrew-unvocalized. The training data contains 10,000 instances for each language, and there are 1,000 samples in both the development set and the test set. Swahili and Spanish are the surprise languages that were announced two weeks before the final submission day, together with the unlabeled test data for each language.
3.2 Evaluation
Models are evaluated according to Exact Match (EM), Edit Distance (ED), and F1 accuracy. For Task 1 (inflection) and Task 2 (reinflection), ED is the leaderboard metric. For Task 3 (analysis), the F1 score is the objective. EM accuracy represents the ratio of correctly predicted lemmas and features, and ED is calculated based on the Levenshtein distance, which indicates how different two strings (the ground truth and the prediction in our case) are from each other. F1 accuracy is the harmonic mean of precision and recall. F1 accuracy is upweighted for the lemma score in our task. In the leaderboard, the results are averaged across each language.
Task1: Inflection | Task2: Reinflection | Task3: Analysis
Model Transformer + D.A. | Transformer | Prefix Tuning
Metrics F1\u2191 EM\u2191 ED\u2193 | F1\u2191 EM\u2191 ED\u2193 | F1\u2191 EM\u2191 ED\u2193
Deu 97.71 91.80 0.241 92.40 66.50 0.788 95.89 83.40 0.991
Eng 98.02 88.90 0.221 95.42 72.30 0.477 99.61 98.50 0.064
Fra 98.59 93.20 0.124 92.64 68.30 0.758 95.63 81.90 0.933
Heb 97.73 89.80 0.550 94.00 83.30 0.796 92.84 73.50 1.322
Heb-Unvoc 97.96 94.20 0.113 86.70 57.70 1.002 82.09 36.20 2.044
Rus 97.57 87.70 0.828 97.29 84.90 0.854 97.51 88.60 3.252
Swa 99.72 99.61 0.019 92.05 84.47 0.182 90.51 62.63 3.114
Spa 98.79 92.00 0.199 96.42 77.60 0.480 98.11 89.40 0.560
Tur 97.50 89.80 0.333 95.36 84.70 0.593 95.36 84.70 0.593
Average 91.89 98.18 0.292 93.14 74.72 0.705 94.17 77.65 1.430
Table 2: Results on the test sets for all tasks and languages with the corresponding models. Edit Distance is the leaderboard ranking metric for Task1: Inflection and Task2: Reinflection, and F1 score is used for leaderboard ranking in Task3: Analysis. D.A. indicates data augmentation.
3.3 Shared Task
Multilingual Clause-level Morphology (MRL 2022) contains three different tasks: Task1: Inflection, Task2: Reinflection, and Task3: Analysis. As the KUIS AI team, we attended each of them separately.
3.3.1 Task1: Inflection
The goal of the task is to produce the output clause and its features for a given verbal lemma and a set of morphological features; see Table 1. For the inflection task, we trained a vanilla Transformer model from scratch, adding some hallucinated data to the training set. The data hallucination method, discussed in 2.3, improved our results significantly. As suggested in Wu et al. (2021), we observed the effect of large batch sizes, which results in an increase in accuracy. Thus, we set the batch size to 400 and trained our model for 20 epochs. We used the Adam optimizer, setting \u03b21 to 0.9 and \u03b22 to 0.98. We started with a learning rate of 0.001 with 4,000 warm-up steps. Then, we decreased it with the inverse of the square root for the remaining steps. We used label smoothing with a factor of 0.1 and applied a dropout rate of 0.3.
3.3.2 Task2: Reinflection
In reinflection, the task is to generate the desired output format as in inflection; however, the input consists of an inflected clause, its corresponding features, and a new set of features that represents the desired output form. We again use the same vanilla Transformer architecture and exactly the same training parameters that we used in the inflection task. We tried both (i) giving all the source data as input, and (ii) using only the inflected clause and its desired features. We observed that both our EM and ED scores improved considerably when we ignore the source clause\u2019s features in the input before feeding it to the model.
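The from-scratch training recipe shared by Tasks 1 and 2 can be summarized in code. The sketch below is our PyTorch rendering of the stated hyperparameters (Adam with \u03b21=0.9, \u03b22=0.98, peak learning rate 0.001 with 4,000 warm-up steps and inverse square-root decay, label smoothing 0.1, batch size of roughly 400, 20 epochs); the model, data loader, and padding id are assumed to exist, and the exact schedule shape is our interpretation, not the authors' code.
```python
# Sketch of the from-scratch Transformer training setup for Tasks 1 and 2.
# `model`, `train_loader`, and PAD_ID are assumed; the schedule is one common
# reading of "lr 0.001, 4,000 warm-up steps, then inverse square-root decay".
import torch

PAD_ID = 0  # hypothetical padding token id
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.98))

warmup_steps = 4000
def inv_sqrt(step: int) -> float:
    step = max(step, 1)
    if step < warmup_steps:
        return step / warmup_steps          # linear warm-up to the peak lr
    return (warmup_steps / step) ** 0.5     # inverse square-root decay

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=inv_sqrt)
criterion = torch.nn.CrossEntropyLoss(label_smoothing=0.1, ignore_index=PAD_ID)

for epoch in range(20):
    for src, tgt in train_loader:           # batches of roughly 400 examples
        logits = model(src, tgt[:, :-1])    # teacher forcing on the decoder
        loss = criterion(logits.reshape(-1, logits.size(-1)), tgt[:, 1:].reshape(-1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        scheduler.step()
```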
3.3.3 Task3: Analysis
The analysis task can be seen as the opposite of the inflection task: for a given clause and its features, we try to generate the lemma and the corresponding morphological features. We used the prefix-tuning method for the analysis task. The prefix template was given as the source and the features were masked. During prompting, we gave the clause-level input, and the target lemma together with its features was expected as the output, like a machine translation task. The source and target are given together with the trainable prefixes, i.e. continuous prompt vectors, and the gradient optimization is made across these prefixes. For the mGPT-based prefix-tuning model, we used the Huggingface library (Wolf et al., 2019) and the corresponding model weights sberbank-ai/mGPT. The prefixes were trained for 10 epochs with a batch size of 5 due to computational resource constraints. We used the Adam optimizer with weight decay fix, introduced in Loshchilov and Hutter (2017), with \u03b21=0.9 and \u03b22=0.999. The learning rate is initialized to 5 \u00d7 10\u22125 and a linear scheduler is used without any warm-up steps.
System Inflection Reinflection Analysis
Transformer Baseline 3.278 4.642 80.00
mT5 Baseline 2.577 2.826 84.50
KUIS AI 0.292 0.705 94.17
Table 3: Submitted results for the MRL shared task, averaged across 9 languages. The metric for the inflection and reinflection tasks is the edit distance, and for analysis the metric is the averaged F1 score with the lemma being treated as an up-weighted feature.
3.4 Results
Our submitted results are provided in Table 2. The results announced by the shared task organizers are given in Table 3, evaluated on the provided unlabeled test set. For the inflection task, with the help of data augmentation, we achieved the best average edit distance across languages. Especially for Swahili, the edit distance is nearly perfect, as is the exact match. It is followed by Hebrew-Unvoc and French. We observed the highest edit distance and the lowest exact match scores for Russian. In the end, we observed that reducing edit distance does not always bring a better exact match. For the reinflection task, using Transformer models trained from scratch, we again see the best results for Swahili with the lowest edit distance. This time, the highest edit distance belongs to Hebrew-Unvoc, as well as the lowest exact match. The number of words and characters in the examples of the task datasets may be contributing factors and should also be considered. Finally, for the analysis task, with the help of prefix-tuning, we achieved the best results for English with the highest F1 score. The ease of finding English pre-trained models led us to experiment with English-only GPT models, and we subsequently discovered that multilingual GPT gives better results when using prefix-tuning. Tuning on mGPT has the lowest performance on Hebrew-Unvoc, due to the low ratio of training samples in Hebrew during pre-training compared to other languages.
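For readers who want to reproduce the prefix-tuning setup of Section 3.3.3, the sketch below uses the Hugging Face peft library as a stand-in for the original prefix-tuning code (peft is our substitution, not necessarily what was used). The hyperparameters mirror the description above; the data loader and the number of virtual tokens (taken from the "two prefixes" mentioned earlier) remain assumptions.
```python
# Sketch of prefix-tuning sberbank-ai/mGPT for the analysis task with peft.
# Hyperparameters follow the text (10 epochs, batch size 5, AdamW, lr 5e-5,
# linear schedule, no warm-up); `train_loader` is assumed to yield tokenized
# clause -> "lemma + features" pairs with labels.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, get_linear_schedule_with_warmup
from peft import PrefixTuningConfig, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("sberbank-ai/mGPT")
base_model = AutoModelForCausalLM.from_pretrained("sberbank-ai/mGPT")

peft_config = PrefixTuningConfig(task_type="CAUSAL_LM", num_virtual_tokens=2)
model = get_peft_model(base_model, peft_config)  # only the prefix parameters are trainable
model.print_trainable_parameters()

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, betas=(0.9, 0.999))
num_training_steps = 10 * len(train_loader)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=0,
                                            num_training_steps=num_training_steps)

model.train()
for epoch in range(10):
    for batch in train_loader:        # batch size 5
        loss = model(**batch).loss    # standard LM loss with labels in the batch
        loss.backward()
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
```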
4 Related Work
Word-level morphological tasks have been studied to a great extent with LSTMs (Wu and Cotterell, 2019; Cotterell et al., 2016; Malaviya et al., 2019; Sahin and Steedman, 2018), GRUs (Conforti et al., 2018), variants of the Transformer (Vaswani et al., 2017; Wu et al., 2021), and other neural models (e.g., invertible neural networks (Sahin and Gurevych, 2020)). Unlike the word level, there is limited work on clause-level morpho-syntactic modeling. Goldman and Tsarfaty (2022) present a new dataset for clause-level morphology covering 4 typologically different languages (English, German, Turkish, and Hebrew); motivate redefining the problem at the clause level to enable the cross-linguistic study of neural morphological modeling; and derive clause-level inflection, reinflection, and analysis tasks together with baseline model results. Pre-trained LLMs have been successfully applied to downstream tasks like sentiment analysis, question answering, named entity recognition, and part-of-speech (POS) tagging (Devlin et al., 2019; Yang et al., 2019; Raffel et al., 2020). Even though there is limited work on applications of LLMs to morphological tasks, it has been demonstrated that using pre-trained contextualized word embeddings can significantly improve the performance of models for downstream morphological tasks. Inoue et al. (2022) explored BERT-based classifiers for training morphosyntactic tagging models for Arabic and its dialects. Anastasyev (2020) explored the usage of ELMo and BERT embeddings to improve the performance of a joint morpho-syntactic parser for Russian. Hofmann et al. (2020) used a fine-tuning approach to BERT for the derivational morphology generation task. Finally, Seker et al. (2022) presented a large pre-trained language model for Modern Hebrew that shows promising results on several tasks. On the other hand, fine-tuning LLMs requires modifying and storing all the parameters of an LM, which results in a huge computational cost. Rebuffi et al. (2017) and Houlsby et al. (2019) used adapter-tuning, which adds task-specific layers (adapters) between each layer of a pre-trained language model and tunes only 2%-4% of the parameters of the LM. Similarly, Li and Liang (2021) proposed prefix-tuning, a lightweight alternative to adapter-tuning that is inspired by prompting.
5 Conclusion
In this paper, we described our winning methods for the multilingual clause-level morphology shared task, covering inflection, reinflection, and analysis. Due to the differing complexity of the tasks and the varying morphological characteristics of the languages, there is no single best model that achieves the best results for each task in each language. Thus, we implemented different types of systems with different objectives. For inflection, we used a vanilla Transformer adapted from Vaswani et al. (2017), and applying data hallucination substantially improves accuracy (Anastasopoulos and Neubig, 2019). The reinflection task is more challenging compared to the other tasks due to its complex input form. To overcome this issue, we removed the original feature tags from the input; we only used the inflected clause and the target features in the input. We again used a vanilla Transformer as the model choice. Finally, for the analysis task, we used the prefix-tuning method based on mGPT. On average, we achieved the best results for all three tasks among all participants.
Acknowledgements
This work is supported by the KUIS AI Center at Ko\u00e7 University, Istanbul. We gratefully acknowledge this support. Last but not least, we would like to kindly thank the organizers for answering our questions and for the effort they made to fix the issues that we struggled with during the competition process.",
"introduction": "The shared task on multilingual clause-level mor- phology was designed to provide a benchmark for morphological analysis and generation at the level of clauses for various typologically diverse lan- guages. The shared task is composed of three sub- tasks: in\ufb02ection, rein\ufb02ection and analysis. For the in\ufb02ection task, participants are required to gener- ate an output clause, given a verbal lemma and a speci\ufb01c set of morphological tags (features) as an input. In the rein\ufb02ection task the input is an in- \ufb02ected clause, accompanied by its features (tags). Participants need to predict the target word given a new set of tags (features). Finally, the analysis task requires predicting the underlying lemma and tags (features) given the clauses. 1https://github.com/emrecanacikgoz/ mrl2022 Task1: In\ufb02ection Source Lemma give Features IND;FUT;NOM(1,SG); ACC(3,SG,MASC);DAT(3,SG,FEM) Target Clause I will give him to her Task2: Rein\ufb02ection Source Clause I will give him to her Features IND;FUT;NOM(1,SG); ACC(3,SG,MASC);DAT(3,SG,FEM) Desired Features IND;PRS;NOM(1,PL); ACC(2);DAT(3,PL);NEG Target Desired Clause We don\u2019t give you to them Task3: Analysis Source Clause I will give him to her Target Lemma give Features IND;FUT;NOM(1,SG); ACC(3,SG,MASC);DAT(3,SG,FEM) Table 1: Description of the each three task: in\ufb02ec- tion, rein\ufb02ection, analysis. Task1 (In\ufb02ection). For the given lemma and the features, target is the desired clause.Task2 (Rein\ufb02ection). Input is the clause, its fea- tures, and the desired output features. Target is the de- sired clause that represented by the desired features in the source. Task3 (Analysis). For a given clause, out- put is the corresponding lemma and the morphological features. Literature has examined morphology mainly at the word level, but morphological processes are not con\ufb01ned to words. Phonetic, syntactic, or se- mantic relations can be studied at phrase-level to explain these processes. Thus, this shared task examines phrase-level morphology and questions the generalization of the relations between the lay- ers of language among languages with different morphological features. The shared task includes eight languages with different complexity and vary- ing morphological characteristics: English, French, German, Hebrew, Russian, Spanish, Swahili, and Turkish. In our work, we explored two main approaches: (1) training character-based transformer architec- tures from scratch with data augmentation, (2) adapting a recent pre\ufb01x-tuning method for lan- guage models at multilingual morphological tasks. arXiv:2211.01736v2 [cs.CL] 13 Nov 2022 Figure 1: Task3 (Analysis) example by using pre\ufb01x- tuning method. We freeze all the parameters of the pre-trained mGPT model and only optimize the pre\ufb01x, which are shown inside the red block. Each vertical block denote transformer activations at one time step."
}
]
},
"edge_feat": {}
}
}