{
"url": "http://arxiv.org/abs/2404.16621v1",
"title": "Hippocrates: An Open-Source Framework for Advancing Large Language Models in Healthcare",
"abstract": "The integration of Large Language Models (LLMs) into healthcare promises to\ntransform medical diagnostics, research, and patient care. Yet, the progression\nof medical LLMs faces obstacles such as complex training requirements, rigorous\nevaluation demands, and the dominance of proprietary models that restrict\nacademic exploration. Transparent, comprehensive access to LLM resources is\nessential for advancing the field, fostering reproducibility, and encouraging\ninnovation in healthcare AI. We present Hippocrates, an open-source LLM\nframework specifically developed for the medical domain. In stark contrast to\nprevious efforts, it offers unrestricted access to its training datasets,\ncodebase, checkpoints, and evaluation protocols. This open approach is designed\nto stimulate collaborative research, allowing the community to build upon,\nrefine, and rigorously evaluate medical LLMs within a transparent ecosystem.\nAlso, we introduce Hippo, a family of 7B models tailored for the medical\ndomain, fine-tuned from Mistral and LLaMA2 through continual pre-training,\ninstruction tuning, and reinforcement learning from human and AI feedback. Our\nmodels outperform existing open medical LLMs models by a large-margin, even\nsurpassing models with 70B parameters. Through Hippocrates, we aspire to unlock\nthe full potential of LLMs not just to advance medical knowledge and patient\ncare but also to democratize the benefits of AI research in healthcare, making\nthem available across the globe.",
"authors": "Emre Can Acikgoz, Osman Batur \u0130nce, Rayene Bench, Arda An\u0131l Boz, \u0130lker Kesen, Aykut Erdem, Erkut Erdem",
"published": "2024-04-25",
"updated": "2024-04-25",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI",
"cs.CL"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "The remarkable success of Large Language Models (LLMs) across diverse NLP tasks has revolutionized artificial intelligence (Touvron et al., 2023b; Bai et al., 2023; Jiang et al., 2023; OpenAI, 2023; Google, 2023). Despite their impressive generalization capabilities, LLMs encounter challenges in clinical contexts, primarily due to a deficiency in domain-specific knowledge and the intricacies of medical terminology. Bridging this gap, in this work, we introduce Hippocrates (named after the Ancient Greek \u201cFather of Medicine\u201d), a state-of- the-art, fully open-source framework designed to elevate LLMs\u2019 proficiency in medical reasoning. We publicly share our training data, complete training and evaluations codes, along with intermediate model checkpoints. Our framework marks an important step towards democratizing advancements in medical LLMs. Previous attempts to develop advanced medical LLMs yielded promising results by further training them (Labrak et al., 2024), supervised fine-tuning them (Li et al., 2023; Han et al., 2023; Toma et al., 2023), or both (Wu et al., 2023; Chen et al., 2023), via special medical- text corpus and medical instruction datasets. However, the data collection, pre-training, \u2217Corresponding author, [email protected] 1 arXiv:2404.16621v1 [cs.LG] 25 Apr 2024 Hippocrates Oct 2022 Apr 2023 Jul 2023 Aug 2023 Sep 2023 Nov 2023 Dec 2023 Mar 2024 30 40 50 60 MedQA Accuracy (%) BioGPT 1.5B (27.2) MedAlpaca 7B (36.6) LLaMA-2 7B (39.5) PMC-LLaMA 13B (46.3) Mistral 7B (48.9) Qwen 72B (53.4) Meditron 70B (58.5) Hippo- 7B (50.8) Hippo- 7B (59.9) Figure 1: The evolution of medical LLM performances on the MedQA dataset. Our 7B Hippo- and Hippo- models achieve 50.8% and 59.9% 5-shot accuracy, respectively. Hippo- outperforms all existing open models, including even those with 70B parameters. and finetuning stages may include considerable complexity, which makes reproducing, analyzing, and comparing the recent LLMs in that domain challenging. On the other hand, closed models, e.g. GPT4 (OpenAI, 2023), Gemini (Google, 2023), Med-PaLM (Singhal et al., 2023b), trained on closed-domain datasets make their results non-reproducible, not to mention substantial computational costs and further complicate the understanding of which components are crucial to the success of these advanced medical frameworks. In this work, we provide full access to our framework, from the data sources to the training configurations and the reproducible evaluation protocols. We conduct a detailed empirical analysis to identify the impact of various design elements on LLM performance, leading to a domain-adapted framework that demonstrates superior performance on multiple medical benchmarks. Based on these insights, we develop a step-by-step guide for the efficient training of medical-LLMs. Our research efforts yield two advanced 7B parameter models, Hippo- and Hippo- . As shown in Fig. 1, our models not only outperform existing 7B and 13B models by a significant margin but also deliver results on par with, and in some cases exceeding, those of 70B models. We argue that the development of a broad, varied collection of open models is crucial for deepening our knowledge of language models and enhancing their applicability across various domains. In addition, we adopt a novel strategy for structuring our instruction tuning (IT) dataset, dividing it into two distinct components: the General Instruction Dataset and the Evaluation Instruction Dataset. 
The General dataset is designed to enable unbiased assessments by avoiding overlap with downstream task data, marking a departure from previous methodologies. On the other hand, the Evaluation Instruction Dataset, which incorporates training splits from evaluation benchmarks, facilitates direct comparisons with existing models (Chen et al., 2023). Notably, for the first time in the medical domain, our approach incorporates preference learning from medical professionals into the model development process, utilizing RLAIF (Lee et al., 2023b) and GPT4 for annotating preferences. For model evaluation, we employ the well-established EleutherAI framework (Gao et al., 2021; https://github.com/EleutherAI/lm-evaluation-harness), conducting tests across a set of six varied medical downstream tasks. These include MedMCQA (Pal et al., 2022), PubmedQA (Jin et al., 2019), MedQA (Jin et al., 2021), and USMLE-step1, USMLE-step2, and USMLE-step3. Leveraging this framework allows for straightforward replication of any LLM\u2019s results, eliminating the necessity for additional fine-tuning or the repetitive execution of evaluation scripts for each new model.",
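To make the evaluation protocol concrete, below is a minimal sketch of invoking the EleutherAI harness from Python. It assumes a recent release of the lm-evaluation-harness package; the task identifiers vary across harness versions and should be checked against the installed release, so the names below are assumptions.

# Minimal sketch: evaluating a model with EleutherAI's lm-evaluation-harness.
# Assumes a recent `lm-eval` release; the task names are assumptions that
# should be verified against `lm-eval --tasks list` for the installed version.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=mistralai/Mistral-7B-v0.1",  # or a fine-tuned checkpoint
    tasks=["medmcqa", "pubmedqa", "medqa_4options"],
    num_fewshot=5,  # the paper reports both 0-shot and 5-shot accuracy
)
print(results["results"])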
"main_content": "Fig. 2 shows the overall workflow of the Hippocrates framework, starting from domainspecific pre-training and progressing through supervised fine-tuning and reinforcement 1https://github.com/EleutherAI/lm-evaluation-harness 2 Hippocrates Medical Knowledge Injection Medical Instruction Tuning Medical Preference Learning 298M Tokens Medical Guidelines, PMC-Patients, PubMedQA-train Language Modeling Predict next token Domain Adapted Model 696K Samples Flashcards, GenMedGPT, Platypus, HealthCareMagic, UMLS, Relations, Wikidoc, Patient-Info, MedicationQA \u2022 Query Answer 1 Answer 2 Prompt GPT-4 Preference Dataset Preference-Dataset Pre-training Data Instruction Data Model LLaMA2 ( ) 7B Mistral ( ) 7B Training Method 15K Samples Language Modeling { Instruction Finetuning Predict next token for responses Model Domain Adapted Model Training Method Supervised Fine-tuning Medical SFT Model { Reinforcement Learning Optimize for medical preferences Model Medical SFT Model Training Method DPO Medical Preference Model Evaluation Benchmark Data Inference Evaluation Framework MedMCQA MedQA PubMedQA USMLE-step1 USMLE-step2 USMLE-step3 Dataset Format Question + Answer Question + Answer Abs + Question + Answer Question + Answer Question + Answer Question + Answer Eleuther AI\u2019s Language Model Evaluation Harness Objective Log-Likelihood Evaluation Method Choose answer with the highest likelihood score Prompting In-Context Learning (ICL) strategies Approach Zero-Shot Few-Shot Chain-of-Thought (CoT) Methods MedMCQA-train, MedQA-train, PubMedQA-train General Eval Figure 2: An overview of the Hippocrates framework, illustrating the four critical phases including (1) continued pre-training, (2) supervised fine-tuning, (3) reinforcement learning from AI-generated feedback, and (4) the comprehensive evaluation pipeline. learning from AI-generated feedback to an extensive evaluation phase. This pipeline ensures our models are precisely tailored and rigorously tested for the medical domain. 2.1 Continued Pre-training Data A key aspect of our methodology is the integration of specialized medical knowledge through an extensive pre-training corpus, assembled from three specialized datasets: Medical Guidelines, PMC-Patients, and PubMedQA-contexts. The Medical Guidelines dataset comprises clinical practice guidelines, is used for training Meditron models (Chen et al., 2023). The PMC-Patients dataset (Zhao et al., 2023) consists of patient summaries extracted from case reports within PubMed Central (PMC). Additionally, the PubMedQA-contexts dataset is constructed by extracting the context field of each sample in the training split of the benchmark (Jin et al., 2019). Detailed descriptions and specifications of each dataset are available in Table 1. This extensive corpus, consisting of roughly 300M training tokens, forms the foundation of our models, ensuring their proficiency in navigating medical terminology and practices. We systematically assessed the impact of each dataset, both individually and in combination, to optimize our model\u2019s performance. Dataset Source License Size (MB) #Samples #Tokens Medical Guidelines Meditron Apache 2.0 License 382.6 37,970 96M PMC-Patients Pubmed Central CC BY-NC-SA 4.0 462.3 167,034 122M PubMedQA-train PubMedQA MIT License 290.2 211,269 80M Total 1,135.1 416,273 298M Table 1: Summary of the datasets used for continued pre-training, showing their sources, licence information and data statistics. 
2.2 Supervised Fine-Tuning Data
Developing effective medical LLMs requires blending domain-specific knowledge with sophisticated reasoning abilities. Previous models often utilized instruction data consisting of samples from the training or test sets of evaluation benchmarks. We also considered this setup, but additionally investigated an alternative involving generic medical data. Consequently, we constructed two sets of IT datasets: the General Instructions Data and the Evaluation Instructions Data.
General Instructions Data. This dataset aggregates more than 400K samples from nine different datasets, each derived from the instruction corpora of previous studies (Li et al., 2023; Han et al., 2023; Wu et al., 2023; Lee et al., 2023a). By excluding data from the training or test splits of downstream QA benchmarks, we aim to minimize bias and improve the model\u2019s generalization capabilities across different reasoning tasks. A pre-processing protocol was employed to remove superfluous words and web URLs, ensuring the data\u2019s quality and relevance (see the sketch at the end of this subsection). The detailed statistics of the dataset are presented in Table 2.
Dataset | Source | License | Size (MB) | #Samples | #Tokens
Medical Flashcards | MedAlpaca | No commercial use | 18.8 | 33,955 | 3.9M
GenMedGPT-5k | ChatDoctor | Apache 2.0 | 3.1 | 5,452 | 0.6M
Open-Platypus | Platypus | CC BY-NC-SA 4.0 | 32.9 | 24,926 | 9.5M
HealthCareMagic-100k | ChatDoctor | Apache 2.0 | 143.8 | 112,165 | 32.3M
UMLS | PMC-LLaMA | CC BY 4.0 | 23.0 | 49,057 | 4.6M
UMLS-Relations | PMC-LLaMA | CC BY 4.0 | 21.7 | 50,000 | 4.3M
WikiDoc | MedAlpaca | CC BY-SA 4.0 | 11.0 | 10,000 | 2.6M
WikiDoc-Patient-Info | MedAlpaca | CC BY-SA 4.0 | 3.7 | 5,942 | 0.8M
MedicationQA | PMC-LLaMA | CC BY 4.0 | 0.4 | 552 | 0.1M
Total | | | 258.4 | 292,049 | 58.7M
Table 2: Summary of General Instructions Data, describing the datasets used, their sources, license information, and size.
Evaluation Instructions Data. This dataset was formed to examine the effects of including instruction samples directly from downstream tasks, a common practice in existing studies (Chen et al., 2023; Han et al., 2023; Wu et al., 2023). Instruction-response pairs were crafted using the training splits of various benchmarks, following the templates established in Meditron (Chen et al., 2023). We conducted a series of experiments to assess the distinct influence of each split on each task, both individually and collectively. The details about the Evaluation Instructions Data are given in Table 3.
Dataset | Source | License | Size (MB) | #Samples | #Tokens
MedMCQA-train | MedMCQA | MIT | 114.4 | 182,822 | 24.9M
MedQA-train | MedQA | MIT | 14.2 | 10,178 | 3.4M
PubMedQA-train | PubMedQA | MIT | 76.3 | 211,269 | 95.9M
Total | | | 204.9 | 404,269 | 124.2M
Table 3: Summary of the Evaluation Instructions dataset, showing which training splits of the downstream tasks it is derived from and their data statistics.
Beyond independently utilizing these datasets for supervised fine-tuning, we also examined the impact of individual datasets as well as the collective effect of combining them on model performance (refer to Appendix G).
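As an illustration of the pre-processing protocol mentioned above, the sketch below removes web URLs and normalizes whitespace. The exact rules used by the authors are not published, so the patterns here are assumptions.

# Illustrative pre-processing pass for the instruction data. The paper
# states that superfluous words and web URLs were removed; the exact rules
# are not published, so the patterns below are assumptions.
import re

URL_RE = re.compile(r"https?://\S+|www\.\S+")

def clean(text: str) -> str:
    text = URL_RE.sub("", text)                # drop web URLs
    return re.sub(r"\s+", " ", text).strip()   # collapse extra whitespace

sample = {"instruction": "What causes fever? More at www.example.com",
          "output": "Fever is most often caused by infection."}
cleaned = {k: clean(v) for k, v in sample.items()}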
2.3 Medical Preference Data
Constructing a preference dataset typically involves generating diverse responses to identical queries using LLMs, which are subsequently evaluated by human annotators to identify the most accurate response. This method, however, can become prohibitively expensive, both in terms of computation for generating responses and the financial and time investments required for manual annotation. To circumvent these issues, we leveraged the iCliniq-10k dataset (Li et al., 2023), containing 10K authentic patient-doctor dialogues from icliniq.com. Each dialogue features a patient question accompanied by three different answers: one from an actual doctor, and the others from ChatGPT and ChatDoctor (Li et al., 2023). We conducted a thorough preprocessing of this dataset to eliminate any irrelevant or extraneous information.
Medical RLAIF. To reduce annotation costs, we adopted the RLAIF methodology (Lee et al., 2023b) in the medical domain for the first time. Utilizing detailed prompts based on patient inquiries from the iCliniq-10k dataset, we used GPT4 (OpenAI, 2023) to determine the optimal response based on predefined instructions. These instructions were derived from those used in qualitative assessments by medical professionals in Med-PaLM (Singhal et al., 2022; 2023a), with minor modifications. This annotation approach amounted to a cost of $120. The exact prompt structure for applying RLAIF with GPT4 is given in Appendix J, Figure 7.
Validation. To test the reliability of GPT4\u2019s capacity to replicate medical expert annotations, we subjected 250 samples from our dataset to careful examination by two medical doctors, giving them the same instructions that we provided in the prompt to GPT4. Our analysis revealed compelling results. When comparing GPT4\u2019s annotations against those of MD-1, GPT4 demonstrated a Kappa Score of 0.376, indicating moderate agreement, and an accuracy of 68.9%. The comparison with MD-2 showed even stronger results, with GPT4 achieving a Kappa Score of 0.672, suggesting substantial agreement, alongside an 83.6% accuracy. Interestingly, the inter-annotator agreement between the two doctors themselves yielded a Kappa Score of 0.416 and an accuracy of 70.8%, situating GPT4\u2019s performance firmly within the range of human expert variability. These findings not only affirm GPT4\u2019s aptitude for medical annotation but also highlight its potential to serve as a cost-effective alternative to human annotators in medical research and application settings, potentially eliminating the need for costly doctor annotations. Consequently, we compiled a comprehensive medical doctor preference dataset, consisting of 15,258 samples, to further align our LLMs with real-world clinical decision-making processes and enhance their accuracy in interpreting and responding to medical queries.
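The agreement numbers above follow from a standard Cohen's kappa computation. The sketch below assumes the 250 audited preferences are stored as parallel label lists; the variable contents shown are toy values, not the study's data.

# Sketch of the annotator-agreement analysis with Cohen's kappa.
# The label lists are toy placeholders; in the study each entry would be the
# answer (doctor / ChatGPT / ChatDoctor) preferred for one audited sample.
from sklearn.metrics import accuracy_score, cohen_kappa_score

gpt4 = [0, 1, 1, 2, 0, 1, 2, 0]  # GPT-4's preferred response per sample
md1  = [0, 1, 2, 2, 0, 1, 2, 1]  # medical doctor 1
md2  = [0, 1, 1, 2, 1, 1, 2, 0]  # medical doctor 2

for name, ref in [("MD-1", md1), ("MD-2", md2)]:
    print(f"GPT-4 vs {name}: kappa={cohen_kappa_score(gpt4, ref):.3f}, "
          f"accuracy={accuracy_score(ref, gpt4):.3f}")
print(f"MD-1 vs MD-2: kappa={cohen_kappa_score(md1, md2):.3f}, "
      f"accuracy={accuracy_score(md1, md2):.3f}")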
2.4 Training Methodology
Our training strategy includes several phases: injection of medical knowledge through continued pre-training, domain-specific instruction tuning, and reinforcement learning from AI-generated feedback for improved alignment with medical experts. Employing the LLaMA Factory framework (hiyouga, 2023), we adhere to replicable and high-performance training standards. Moreover, we adopt the Low-Rank Adaptation (LoRA) technique (Hu et al., 2021) for training efficiency and precision. LoRA enhances LLMs by selectively updating weights within additional trainable layers, thereby accelerating the training process, minimizing memory usage, and mitigating overfitting and catastrophic forgetting. Our foundational models, LLaMA2 7B (Touvron et al., 2023b) and Mistral 7B (Jiang et al., 2023), are selected based on their robust performance across medical benchmarks, demonstrating their capacity to excel without extensive training modifications. The zero-shot performances of these generic baseline models are presented at the beginning of Table 5.
Continued pre-training. To equip our base LLMs with domain-specific medical expertise, we extend their pre-training on a carefully curated medical text corpus, as described in Section 2.1. This stage employs traditional language modeling, focusing on next-token prediction. During this phase, both models undergo continued pre-training using LoRA, specifically adapting the fully connected layers. The LoRA parameters are set with the rank (r) at 8 and alpha (\u03b1) at 16 to optimize learning. We use the AdamW optimizer and adjust the learning rate using a cosine schedule, starting from an initial value of 1e-4. The per-device batch size is 8 with gradient accumulation of 2, yielding an effective global batch size of 16, and the models are trained for a single epoch. The rationale and empirical support for our choices regarding the dataset, LoRA configurations, and overall optimization strategy are comprehensively analyzed in Appendix G.
Supervised Fine-tuning. After continued pre-training, the models undergo fine-tuning with an Instruction Tuning (IT) dataset to closely mirror medical directives, aligning model outputs with clinical requirements. We experimented with the datasets described in Section 2.2 and found that the MedQA-train IT data works better than the other options. This fine-tuning phase also applies LoRA to all fully connected layers, with both rank (r) and alpha (\u03b1) set to 32 for balanced efficiency and computational overhead. The AdamW optimizer is used with a learning rate of 1e-4. To prevent overfitting, the loss is computed solely on the responses. Training spanned 3 epochs with a per-device batch size of 8 and gradient accumulation set to 2. We also conducted experiments on direct fine-tuning of the base LLMs to evaluate the impact of continued pre-training (see Section 4.1) and performed a comprehensive analysis of dataset splits and fine-tuning hyperparameters (see Appendix G).
Medical Preference Learning. Finally, the instruction-tuned models are further trained with a recent and popular technique called direct preference optimization (DPO) (Rafailov et al., 2023). DPO bypasses explicit reinforcement learning, allowing direct optimization from preference data. Unlike in RLHF, the responses in DPO need not be derived from the LLM being optimized. Central to DPO is a loss function that evaluates the likelihood of a preferred response over a less preferred one, steering the LLM towards this objective (see the sketch after this section). This makes DPO more stable and significantly reduces computational demands. The outcome of this process is our medical LLMs, Hippo- and Hippo- , built upon the pre-trained LLaMA2 7B and Mistral 7B models. These models were refined through a comprehensive process that included continued pre-training and/or instruction tuning using our carefully curated medical datasets. Following this, we also explored the impact of aligning the models with clinical preferences by conducting further training on medical preference data.
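To make the adapter setup concrete, here is a minimal sketch using the Hugging Face PEFT library with the continued pre-training values stated above (rank 8, alpha 16; the SFT stage uses 32/32). The target_modules list is an assumption standing in for "all fully connected layers", since the exact module names are not spelled out in the text.

# Minimal LoRA sketch with Hugging Face PEFT, mirroring the stated CP
# settings (r=8, lora_alpha=16). target_modules is an assumption covering
# the attention and MLP projections of a LLaMA/Mistral-style model.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
cp_config = LoraConfig(
    r=8,
    lora_alpha=16,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
model = get_peft_model(base, cp_config)
model.print_trainable_parameters()  # only the low-rank adapters are trainable

For reference, the DPO objective from Rafailov et al. (2023) that the preference stage optimizes is

L_DPO(\pi_\theta; \pi_{ref}) = -\mathbb{E}_{(x, y_w, y_l) \sim D} \left[ \log \sigma \left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{ref}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{ref}(y_l \mid x)} \right) \right],

where y_w and y_l are the preferred and rejected responses, \pi_{ref} is the frozen SFT model, and \beta controls the strength of the regularization toward \pi_{ref}.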
3 Main Results
For an objective evaluation of domain-specific knowledge and reasoning capabilities in LLMs, a detailed and fair evaluation framework is essential. In alignment with methodologies adopted in prior research (Singhal et al., 2022; Han et al., 2023; Wu et al., 2023; Toma et al., 2023; Singhal et al., 2023a; Chen et al., 2023), we selected six widely recognized medical question-answering datasets, namely MedMCQA (Pal et al., 2022), MedQA (Jin et al., 2021), PubMedQA (Jin et al., 2019), and USMLE Steps 1-3 (Han et al., 2023), to assess model performance (see Table 4 for details). Performance metrics were derived through the use of the EleutherAI evaluation framework (Gao et al., 2021), ensuring a standardized approach to measuring model effectiveness in handling domain-specific queries.
Dataset | Source | Format | #Samples | #Choices | License
MedMCQA-test | MedMCQA | Question + Answer | 4,183 | 4 | MIT
MedQA-test | MedQA | Question + Answer | 1,273 | 5 | MIT
PubMedQA-test | PubMedQA | Abstract + Question + Answer | 1,000 | 3 | MIT
USMLE-step1 | USMLE | Question + Answer | 94 | 5 | MIT
USMLE-step2 | USMLE | Question + Answer | 109 | 6 | MIT
USMLE-step3 | USMLE | Question + Answer | 122 | 5 | MIT
Table 4: Summary of the evaluation benchmark datasets, describing the format, the number of test samples, the number of choices, and the license info.
3.1 Experimental Setup
In our evaluation, we included a spectrum of leading LLMs, spanning general and medical LLMs and varying in scale from 1.5B to 70B parameters. Here we report the performances of our top-performing models for an accurate comparison. To ensure a fair and easily replicable assessment of these medical models, we utilized the Eleuther AI Language Model Evaluation Harness (Gao et al., 2021), a unified evaluation framework specifically designed for evaluating generative LLMs. This framework also serves as the evaluation tool for the Open LLM Leaderboard (Beeching et al., 2023; https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
Model | MedMCQA | MedQA | PubmedQA | USMLE-1 | USMLE-2 | USMLE-3 | Avg. (all cells 0-shot/5-shot)
Gemma 2b | 26.2/27.7 | 27.8/30.6 | 59.1/60.8 | 20.2/16.0 | 18.4/30.3 | 24.6/20.5 | 29.4/31.0
LLaMA-2 7b | 34.4/39.4 | 29.3/39.5 | 72.3/72.4 | 18.1/22.3 | 22.9/33.0 | 27.1/32.0 | 34.0/39.8
Falcon 7b | 30.5/31.8 | 27.9/31.0 | 65.3/64.4 | 18.1/25.5 | 26.6/20.2 | 23.8/25.4 | 32.0/33.0
Vicuna 7b | 35.9/39.0 | 35.1/41.2 | 70.9/74.5 | 25.5/31.9 | 27.5/31.2 | 33.6/35.3 | 38.1/42.2
Mistral 7b | 39.3/48.5 | 36.8/48.9 | 76.3/77.8 | 24.5/50.0 | 31.2/42.2 | 27.9/43.4 | 39.3/51.8
BioMedLM | 32.2/29.6 | 29.3/30.6 | 55.2/55.2 | 15.9/22.3 | 19.3/18.4 | 23.0/31.2 | 25.9/31.2
BioGPT-Large | 33.1/30.1 | 31.3/27.2 | 60.1/47.7 | 22.3/19.2 | 22.0/14.7 | 23.0/23.0 | 32.0/27.0
MedAlpaca 7b | 35.8/37.5 | 36.1/36.6 | 73.2/70.6 | 22.3/27.7 | 27.5/32.1 | 29.5/37.7 | 37.4/40.4
PMC-LLaMA 7b | 31.5/33.0 | 28.0/29.5 | 66.5/68.4 | 21.3/19.2 | 23.9/19.3 | 22.1/22.1 | 32.2/31.9
Meditron 7b | 34.0/38.2 | 32.0/39.3 | 71.6/75.7 | 16.0/29.8 | 25.7/30.3 | 23.8/32.0 | 33.9/40.9
Bio-Mistral 7b | 36.4/42.4 | 35.0/42.1 | 73.4/75.1 | 24.5/28.7 | 27.5/34.9 | 27.9/44.3 | 37.5/31.9
LLaMA-2 13b | 38.2/43.9 | 34.3/43.3 | 75.9/71.9 | 20.2/38.3 | 22.0/29.4 | 23.0/38.5 | 35.6/40.9
Vicuna 13b | 39.7/44.3 | 35.9/45.9 | 75.6/75.0 | 24.5/40.4 | 26.6/35.8 | 23.8/46.7 | 37.7/44.6
MedAlpaca 13b | 32.5/33.3 | 31.8/34.3 | 72.6/72.5 | 24.5/23.4 | 24.5/26.6 | 30.3/29.5 | 36.0/44.2
PMC-LLaMA 13b | 39.1/44.5 | 37.8/46.3 | 76.8/76.5 | 30.9/35.1 | 22.9/36.7 | 26.2/29.5 | 39.0/44.8
LLaMA-2 70b | 42.8/52.0 | 44.9/56.1 | 73.2/77.8 | 31.9/59.6 | 44.0/57.8 | 44.3/53.3 | 46.8/59.4
Qwen 72b | 50.5/59.2 | 47.7/53.4 | 77.2/76.8 | 45.7/67.0 | 43.1/56.9 | 38.5/61.5 | 50.5/62.5
ClinicalCamel 70b | 43.7/53.4 | 45.5/58.5 | 73.6/77.6 | 40.4/59.6 | 43.1/60.6 | 42.6/60.7 | 48.2/61.7
Meditron 70b | 43.4/51.9 | 44.9/58.5 | 76.4/80.0 | 35.1/57.5 | 41.3/56.9 | 37.7/59.8 | 46.5/60.8
Hippo- 7b | 54.3/53.9 | 50.6/50.8 | 74.7/76.6 | 46.8/40.4 | 41.3/39.5 | 50.0/43.4 | 53.0/50.8
Hippo- 7b | 49.7/51.8 | 59.2/59.9 | 77.1/78.1 | 60.6/61.7 | 66.1/64.2 | 56.6/56.6 | 61.6/62.1
Table 5: Comparative analysis of generic and medical LLMs across downstream medical tasks in 0-shot and 5-shot learning settings, reported as 0-shot/5-shot accuracy. In the original layout, the best and second-best performances are highlighted in bold and underline, respectively.
LM-Evaluation-Harness operates on a log-likelihood objective: it calculates the negative log-likelihood of each potential answer in response to a given query, and the answer with the highest likelihood score (i.e., the lowest negative log-likelihood) is chosen as the most probable. During evaluation, each prompt includes a question and the corresponding choices, separated by a new line. For PubMedQA, the abstract provides contextual grounding for the model\u2019s decision-making process. Examples of these prompts are provided in Appendix I.
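The scoring loop the harness performs can be approximated in a few lines. The sketch below, with illustrative names and a toy question, computes the summed log-probability of each candidate answer under a causal LM and picks the argmax; tokenization boundary effects at the prompt/answer seam are ignored for simplicity.

# Illustrative re-implementation of log-likelihood answer scoring:
# score each candidate by the summed log-probability of its tokens given
# the prompt, then pick the argmax.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model.eval()

def answer_loglik(prompt: str, answer: str) -> float:
    """Summed log-probability of the answer tokens, conditioned on the prompt."""
    prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
    ids = tok(prompt + " " + answer, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)  # row i predicts token i+1
    return sum(logprobs[i, ids[0, i + 1]].item()
               for i in range(prompt_len - 1, ids.shape[1] - 1))

question = ("The following is a multiple choice question about medical "
            "knowledge.\nWhich vitamin deficiency causes scurvy?\nAnswer:")
choices = ["Vitamin A", "Vitamin B12", "Vitamin C", "Vitamin D"]
prediction = max(choices, key=lambda c: answer_loglik(question, c))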
3.2 Results
We present a comparative analysis of our novel models, Hippo- and Hippo- , against a set of established base LLMs and medical-specific LLMs in Table 5. Our evaluation includes both zero-shot and few-shot (specifically, 5-shot) learning scenarios. Demonstrating superior performance, our Hippo models outperform traditional pre-trained models in zero-shot evaluations and maintain their superiority in the 5-shot context. Remarkably, Hippo- and Hippo- not only beat models with 7 billion and 13 billion parameters but also exceed the capabilities of those with 70 billion parameters. This outstanding performance highlights the adaptability and precision of our models, showing their remarkable ability to significantly boost prediction accuracy with minimal input examples.
4 Analysis
4.1 Contribution of Each Training Stage
Hippo- . Our evaluation methodology for the LLaMA2 7B model covers successive training stages: Continued Pre-training (CP), Instruction Tuning (SFT), and Direct Preference Optimization (DPO). As listed in Table 6, the base LLaMA2 7B model initially achieves an average accuracy of 34.0 across benchmarks. The CP stage marginally increases accuracy to 34.4, indicating initial benefits from domain-focused continued pre-training. The subsequent introduction of SFT yields a substantial performance boost to an average accuracy of 50.3, demonstrating the critical role of customized instruction in enhancing the model\u2019s capabilities in understanding and answering medical queries. Integrating CP with SFT further improves this performance to 53.0, highlighting the combined value of domain knowledge and specific instruction tuning. The final DPO stage slightly decreases the model\u2019s performance to 52.5, albeit with a slight increase in accuracy for MedMCQA and PubMedQA, illustrating DPO\u2019s refined impact on model preference alignment. This sequence delineates the incremental enhancements attributable to each training phase, with SFT marking a pivotal improvement. The composite model, LLaMA2 + CP + SFT, is thus designated as Hippo- for its distinguished performance across our benchmarks.
Model | MedMCQA | MedQA | PubmedQA | USMLE-1 | USMLE-2 | USMLE-3 | Avg.
LLaMA2 7b | 34.4 | 29.3 | 72.3 | 18.1 | 22.9 | 27.1 | 34.0
+ CP | 34.6 | 31.9 | 72.8 | 20.2 | 25.7 | 21.3 | 34.4
+ SFT | 52.7 | 49.7 | 75.7 | 37.2 | 42.2 | 44.3 | 50.3
+ CP + SFT | 54.3 | 50.6 | 74.7 | 46.8 | 41.3 | 50.0 | 53.0
+ CP + SFT + DPO | 54.4 | 50.4 | 74.8 | 46.8 | 39.5 | 49.2 | 52.5
+ CP + SFT + DPO + CoT | 54.0 | 50.3 | 73.3 | 48.9 | 43.7 | 45.1 | 52.6
Mistral 7b | 39.3 | 36.8 | 76.3 | 24.5 | 31.2 | 27.9 | 39.3
+ CP | 40.5 | 37.2 | 74.9 | 29.8 | 33.9 | 29.5 | 41.0
+ SFT | 49.7 | 59.2 | 77.1 | 60.6 | 66.1 | 56.6 | 61.6
+ CP + SFT | 51.5 | 60.9 | 76.5 | 55.3 | 65.1 | 57.4 | 61.1
+ CP + SFT + DPO | 49.3 | 57.3 | 77.3 | 56.4 | 62.4 | 54.9 | 59.6
+ CP + SFT + DPO + CoT | 51.0 | 60.9 | 63.5 | 59.6 | 59.6 | 63.9 | 59.8
Table 6: Hippo- and Hippo- : the incremental impact of Continued Pre-training (CP) on medical text data, Instruction Tuning (SFT), and Direct Preference Optimization (DPO) on the zero-shot capabilities of the LLaMA2 7B and Mistral 7B models across medical benchmarks, including MedMCQA, MedQA, PubmedQA, and the USMLE series, providing a granular view of performance improvements at each stage of model optimization.
Hippo- . Following the approach for Hippo- , the training evolution for the Mistral 7B model reveals gradual improvement in the model\u2019s proficiency in medical question answering. Initial results from the baseline Mistral 7B model, as shown in Table 6, show an average benchmark accuracy of 39.3. Implementing CP slightly improves this to 41.0, reflecting the positive yet modest impact of domain-specific continued pre-training. The pivotal SFT stage significantly raises the performance, achieving an average accuracy of 61.6, emphasizing the critical role of customized instruction in enhancing the model\u2019s interpretative and response capabilities for medical inquiries. Interestingly, combining CP and SFT results in a slight reduction to 61.1, suggesting a complex interaction between domain pre-training and instruction tuning.
The subsequent application of DPO slightly lowers the overall score to 59.6, similar to the pattern observed for Hippo- , with targeted performance adjustment. Based on this comprehensive analysis, Mistral 7b + SFT is selected to represent Hippo- , credited for its exceptional performance across all benchmarks.
4.2 Chain-of-Thought (CoT) Prompting
The CoT prompting technique (Wei et al., 2023) enhances an LLM\u2019s ability to tackle complex queries by guiding it to articulate intermediate reasoning steps. This method improves the model\u2019s responses by structuring its problem-solving process. In our study, we applied CoT prompting for in-context learning, adopting a slightly altered instruction from Pal & Sankarasubbu (2024b): \u201cThe following is a multiple choice question about medical knowledge. Solve it in a step-by-step fashion, starting by summarizing the available information. Output a single option from the four options as the final answer.\u201d However, the application of CoT prompting in our experiments with downstream medical tasks did not consistently enhance our models\u2019 performance, as shown in Table 6.
4.3 Influencing Examples
We explore the application of Influence Functions to understand the behavior of LLMs (Grosse et al., 2023), in our context particularly those trained with domain-specific datasets like medical text. This technique quantifies the effect of single training instances on the model\u2019s predictions, improving the transparency of AI models. This is increasingly important as the field of Explainable AI (XAI) grows to make AI systems more interpretable and accountable. However, the complexity of LLMs, which process vast amounts of data, highlights the necessity for efficient methods to perform this analysis. We believe incorporating this tool into our evaluation framework will prove useful for future studies. In the supplementary material (Appendix H), we present our analysis results, highlighting the most and least influential training examples for a MedQA dataset question and its model response. Notably, the most influential example shares overlapping medical concepts, in contrast to no shared concepts with the least influential training example.
4.4 Uncertainty Quantification
In our study, we conducted an uncertainty quantification experiment on Hippo- to understand its performance on the MedMCQA, MedQA, and PubMedQA datasets, as shown in Fig. 3. Our findings reveal that our model consistently assigns higher probabilities to questions it answers correctly across all datasets, suggesting an ability to self-calibrate its certainty. The model\u2019s confidence is notably higher on MedMCQA, possibly reflecting the dataset\u2019s relative simplicity. In contrast, its confidence on PubMedQA is comparatively lower, likely due to the dataset\u2019s complexity. Additionally, the model\u2019s confidence changes with different training stages: CPT leads to more conservative estimates, SFT boosts confidence, and adding DPO leads to variable confidence, with noticeable effects on MedMCQA and MedQA. These outcomes emphasize a complex relationship between training approaches and confidence calibration in the model.
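A calibration plot like Fig. 3 can be produced once per-question answer probabilities and correctness flags have been collected during evaluation; the sketch below uses hypothetical toy records, not the study's data.

# Sketch of the calibration analysis: compare the probability the model
# assigned to its chosen answer for correct vs. incorrect predictions.
# `records` is a hypothetical list of (chosen_answer_prob, is_correct) pairs.
import matplotlib.pyplot as plt

records = [(0.92, True), (0.55, False), (0.81, True), (0.47, False),
           (0.88, True), (0.63, True), (0.39, False), (0.74, True)]

correct = [p for p, ok in records if ok]
incorrect = [p for p, ok in records if not ok]

plt.hist(correct, bins=10, range=(0, 1), density=True, alpha=0.5, label="Correct")
plt.hist(incorrect, bins=10, range=(0, 1), density=True, alpha=0.5, label="Incorrect")
plt.xlabel("Probability assigned to chosen answer")
plt.ylabel("Density")
plt.legend()
plt.savefig("uncertainty.png")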
[Figure 3: Uncertainty quantification for our best-performing 5-shot Hippo- model, plotting the probability distributions the model assigns to correct and incorrect predictions on the MedMCQA, MedQA, and PubMedQA datasets, for the CPT, CPT + SFT, and CPT + SFT + DPO variants.]
We present additional negative results in Appendix J, which we anticipate will be beneficial for the community. By sharing these findings, we aim to encourage further investigations.
5 Conclusion
In this study, we have introduced Hippocrates, a comprehensive and open-source framework tailored for the medical domain, addressing a wide array of challenges faced by medical LLMs. We provide openly available datasets and establish an intuitive benchmark using the LM-Evaluation-Harness tool. We also introduce Hippo- and Hippo- , two 7B models demonstrating superior performance. Our work makes substantial contributions to the field by combining in-depth empirical research with a structured training methodology, offering invaluable insights and tools for future research not only in healthcare but in any area requiring domain-specific adaptation of LLMs."
}