diff --git "a/related_34K/test_related_short_2404.17723v2.json" "b/related_34K/test_related_short_2404.17723v2.json" new file mode 100644--- /dev/null +++ "b/related_34K/test_related_short_2404.17723v2.json" @@ -0,0 +1,1442 @@ +[ + { + "url": "http://arxiv.org/abs/2404.17723v2", + "title": "Retrieval-Augmented Generation with Knowledge Graphs for Customer Service Question Answering", + "abstract": "In customer service technical support, swiftly and accurately retrieving\nrelevant past issues is critical for efficiently resolving customer inquiries.\nThe conventional retrieval methods in retrieval-augmented generation (RAG) for\nlarge language models (LLMs) treat a large corpus of past issue tracking\ntickets as plain text, ignoring the crucial intra-issue structure and\ninter-issue relations, which limits performance. We introduce a novel customer\nservice question-answering method that amalgamates RAG with a knowledge graph\n(KG). Our method constructs a KG from historical issues for use in retrieval,\nretaining the intra-issue structure and inter-issue relations. During the\nquestion-answering phase, our method parses consumer queries and retrieves\nrelated sub-graphs from the KG to generate answers. This integration of a KG\nnot only improves retrieval accuracy by preserving customer service structure\ninformation but also enhances answering quality by mitigating the effects of\ntext segmentation. Empirical assessments on our benchmark datasets, utilizing\nkey retrieval (MRR, Recall@K, NDCG@K) and text generation (BLEU, ROUGE, METEOR)\nmetrics, reveal that our method outperforms the baseline by 77.6% in MRR and by\n0.32 in BLEU. Our method has been deployed within LinkedIn's customer service\nteam for approximately six months and has reduced the median per-issue\nresolution time by 28.6%.", + "authors": "Zhentao Xu, Mark Jerome Cruz, Matthew Guevara, Tie Wang, Manasi Deshpande, Xiaofeng Wang, Zheng Li", + "published": "2024-04-26", + "updated": "2024-05-06", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL", + "cs.LG", + "I.2" + ], + "label": "Original Paper", + "paper_cat": "Retrieval AND Augmented AND Generation AND RAG", + "gt": "Question answering (QA) with knowledge graphs (KGs) can be broadly classified into retrieval-based, template-based, and semantic parsing-based methods. Retrieval-based approaches utilize relation extraction [19] or distributed representations [5] to derive answers from KGs, but they face difficulties with questions involving multiple entities. Template-based strategies depend on manually-created templates for encoding complex queries, yet are limited by the scope of available templates [16]. Semantic parsing methods map text to logical forms containing predicates from KGs [4] [14] [21]. Recent advancements in large language models (LLMs) integration with Knowledge Graphs (KGs) have demonstrated notable progress. Jin et al. [7] provide a comprehensive review of this integration, categorizing the roles of LLMs as Predictors, Encoders, and Aligners. For graph-based reasoning, Think-on-Graph [15] and Reasoning-on-Graph [10] enhance LLMs\u2019 reasoning abilities by integrating KGs. Yang et al. [20] propose augmenting LLMs\u2019 factual reasoning across various training phases using KGs. For LLM-based question answering, Wen et al.\u2019s Mindmap [18] and Qi et al. [13] employ KGs to boost LLM inference capabilities in specialized domains such as medicine and food. 
These contributions underscore the increasing efficacy of LLM and KG combinations in enhancing information retrieval and reasoning tasks.", + "pre_questions": [], + "main_content": "INTRODUCTION Effective technical support in customer service underpins product success, directly influencing customer satisfaction and loyalty. Given the frequent similarity of customer inquiries to previously resolved issues, the rapid and accurate retrieval of relevant past instances is crucial for the efficient resolution of such inquiries. Recent advancements in embedding-based retrieval (EBR), large language models (LLMs), and retrieval-augmented generation (RAG) [8] have significantly enhanced retrieval performance and question-answering capabilities for the technical support of customer service. This process typically unfolds in two stages: first, historical issue tickets are treated as plain text, segmented into smaller chunks to accommodate the context length constraints of embedding models; each chunk is then converted into an embedding vector for retrieval. Second, during the question-answering phase, the system retrieves the most relevant chunks and feeds them as contexts for LLMs to generate answers in response to queries. Despite its straightforward approach, this method encounters several limitations: \u2022 Limitation 1 Compromised Retrieval Accuracy from Ignoring Structures: Issue tracking documents such as Jira [2] possess inherent structure and are interconnected, with references such as "issue A is related to/copied from/caused by issue B." The conventional approach of compressing documents into text chunks leads to the loss of vital information. Our approach parses issue tickets into trees and further connects individual issue tickets to form an interconnected graph, which maintains this intrinsic relationship among entities, achieving high retrieval performance. \u2022 Limitation 2 Reduced Answer Quality from Segmentation: Segmenting extensive issue tickets into fixed-length segments to accommodate the context length constraints of embedding models can result in the disconnection of related content, leading to incomplete answers. For example, an issue ticket describing an issue at its beginning and its solution at the end may be split during the text segmentation process, resulting in the omission of critical parts of the solution. Our graph-based parsing method overcomes this by preserving the logical coherence of ticket sections, ensuring the delivery of complete and high-quality responses. We introduce an LLM-based customer service question answering system that seamlessly integrates retrieval-augmented generation (RAG) with a knowledge graph (KG). Our system (Figure 1) comprises two phases. First, during the KG construction phase, our system constructs a comprehensive knowledge graph from historical customer service issue tickets. It integrates a tree-structured representation of each issue and interlinks them based on relational context. It also generates an embedding for each node to facilitate later semantic search. Second, during the question-answering phase, our method parses consumer queries to identify named entities and intents. It then navigates within the KG to identify related sub-graphs for generating answers. 
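As a minimal sketch of this two-phase design before the formal definitions in Section 3.1 (Python, using the networkx library; the ticket ID, section names, and relation labels are illustrative values drawn from Figure 1, not the production schema):

import networkx as nx

# One parsed ticket: section name -> section text (illustrative values).
sections = {
    "summary": "CSV upload error, updating user email",
    "priority": "Major",
    "root_cause": "Data Issue",
}

g = nx.DiGraph()
ticket_id = "ENT-22970"
g.add_node(ticket_id, kind="ticket")
for name, text in sections.items():
    node = (ticket_id, name)            # node id is the (i, s) combination
    g.add_node(node, text=text)
    g.add_edge(ticket_id, node, rel="HAS_" + name.upper())

# Inter-issue relations: an explicit clone link from the tracker; implicit
# SIMILAR_TO edges are added later from title-embedding similarity.
g.add_node("PORT-133061", kind="ticket")
g.add_edge(ticket_id, "PORT-133061", rel="CLONE_FROM")

In the deployed system the graph is persisted in a graph database and queried with Cypher (Section 3.2.2); networkx here is only for illustration.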
3.1 Knowledge Graph Construction 3.1.1 Graph Structure Definition. In defining the knowledge graph structure for historical issue representation, we employ a dual-level architecture that segregates intra-issue and inter-issue relations, as illustrated in Figure 1. The Intra-issue Tree T_i(N, E, R) models each ticket t_i as a tree, where each node n \u2208 N, identified by a unique combination (i, s), corresponds to a distinct section s of ticket t_i, and each edge e \u2208 E and r \u2208 R signifies the hierarchical connection and type of relation between these sections. The Inter-issue Graph G(T, E, R) represents the network of connections across different tickets, incorporating both explicit links E_exp, defined in issue tracking tickets, and implicit connections E_imp, derived from semantic similarity between tickets. For implicit connections, we leverage cosine similarity between the embedding vectors of ticket titles, a method adaptable to specific use cases. For instance, Figure 1 portrays ticket ENT-22970 as a tree structure with nodes representing sections such as Summary, Description, and Priority. It exhibits a direct clone linkage to PORT-133061, indicating an explicit clone relationship. Additionally, it is implicitly connected with ENT-1744 and ENT-3547 due to their semantic similarity. 3.1.2 Knowledge Graph Construction. Graph construction is delineated into two phases: intra-ticket parsing and inter-ticket connection. 1) Intra-Ticket Parsing Phase: This phase transforms each text-based ticket t_i into a tree representation T_i. We employ a hybrid methodology, initially utilizing rule-based extraction for predefined fields, such as code sections identified via keywords. Subsequently, for text not amenable to rule-based parsing, we engage an LLM for parsing. The LLM is directed by a YAML template T_template, representing in graph form the ticket sections routinely utilized by customer support. 2) Inter-Ticket Connection Phase: Here, individual trees T_i are amalgamated into a comprehensive graph G. Explicit connections E_exp are delineated as specified within tickets, exemplified by designated fields in Jira [2]. Implicit connections E_imp are inferred from textual-semantic similarities across ticket titles, employing embedding techniques and a threshold mechanism to discern the most relevant tickets for each issue ticket.
t_i = t_{i,rule} \u222a t_{i,llm}
T_i = RuleParse(t_{i,rule}) + LLMParse(t_{i,llm}, T_template, prompt)
E_exp = {(T_i, T_j) | T_i explicitly connected to T_j}
E_imp = {(T_i, T_j) | cos(embed(T_i), embed(T_j)) \u2265 \u03b8}
3.1.3 Embedding Generation. To support online embedding-based retrieval, we generate embeddings for graph node values using pre-trained text-embedding models like BERT [6] and E5 [17], specifically targeting nodes for text-rich sections such as "issue summary", "issue description", and "steps to reproduce". These embeddings are then stored in a vector database (for instance, QDrant [12]). 
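The E_imp construction above reduces to a few lines of Python. In the sketch below, embed is only a stand-in for a pre-trained text-embedding model such as E5 or BERT (it returns pseudo-random unit vectors keyed on the text, stable within one process), and the titles and threshold value are illustrative:

import numpy as np

def embed(text, dim=8):
    # Stand-in for a pre-trained text-embedding model (e.g., E5 or BERT):
    # a pseudo-random unit vector derived from the text.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

titles = {
    "ENT-22970": "CSV upload error, updating user email",
    "ENT-1744": "HTTP POST csv upload error internal error",
    "ENT-3547": "Learning 'upload csv' option fails",
}
theta = 0.85  # similarity threshold; an illustrative value
vecs = {tid: embed(title) for tid, title in titles.items()}
ids = list(titles)
implicit_edges = [
    (a, b)
    for i, a in enumerate(ids)
    for b in ids[i + 1:]
    if float(vecs[a] @ vecs[b]) >= theta  # cosine, since vectors are unit norm
]

In practice embed would be one of the pre-trained models from Section 3.1.3, and the threshold \u03b8 would be tuned per use case.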
In most cases, the text length within each node meets the text-embedding model\u2019s context length constraints; for certain lengthy texts, we can safely divide the text into smaller chunks for individual embedding without a loss of quality, since all of the text belongs to the same section. Figure 1: An overview of our proposed retrieval-augmented generation with knowledge graph framework. The left side of this diagram illustrates the knowledge graph construction; the right side shows the retrieval and question answering process. 3.2 Retrieval and Question Answering 3.2.1 Query Entity Identification and Intent Detection. 
In this step, we extract the named entities P of type Map(N \u2192 V) and the query intent set I from each user query q. The method involves parsing each query q into key-value pairs, where each key n, mentioned within the query, corresponds to an element in the graph template T_template, and the value v represents the information extracted from the query. Concurrently, the query intents I comprise the entities mentioned in the graph template T_template that the query aims to address. We leverage an LLM with a suitable prompt in this parsing process. For instance, given the query q = "How to reproduce the login issue where a user can\u2019t log in to LinkedIn?", the extracted entity is P = Map("issue summary" \u2192 "login issue", "issue description" \u2192 "user can\u2019t log in to LinkedIn"), and the intent set is I = Set("fix solution"). This method demonstrates notable flexibility in accommodating varied query formulations by leveraging the LLM\u2019s extensive understanding and interpretive capabilities.
P, I = LLM(q, T_template, prompt)
3.2.2 Embedding-based Retrieval of Sub-graphs. Our method extracts pertinent sub-graphs from the knowledge graph, aligned with user-provided specifics such as "issue description" and "issue summary", as well as user intentions like "fix solution". This process consists of two primary steps: EBR-based ticket identification and LLM-driven subgraph extraction. In the EBR-based ticket identification step, the top K_ticket most relevant historical issue tickets are pinpointed by harnessing the named entity set P derived from user queries. For each entity pair (k, v) \u2208 P, cosine similarity is computed between the entity value v and all graph nodes n corresponding to section k via pretrained text embeddings. Aggregating these node-level scores to the ticket level by summing contributions from nodes belonging to the same ticket, we rank and select the top K_ticket tickets. This method presupposes that the occurrence of multiple query entities is indicative of pertinent links, thus improving retrieval precision.
S_{T_i} = \u2211_{(k,v) \u2208 P} [ \u2211_{n \u2208 T_i} I{n.sec = k} \u00b7 cos(embed(v), embed(n.text)) ], where I{\u00b7} is the indicator function.
In the LLM-driven subgraph extraction step, the system first rephrases the original user query q to include the retrieved ticket ID; the modified query q\u2032 is then translated into a graph database language, such as Cypher for Neo4j, for question answering. For instance, from the initial query q = "how to reproduce the issue where user saw \u2019csv upload error in updating user email\u2019 with major priority due to a data issue", the query is reformulated to "how to reproduce \u2019ENT-22970\u2019" and thereafter transposed into the Cypher query MATCH (j:Ticket {ticket_ID: \u2019ENT-22970\u2019}) -[:HAS_DESCRIPTION]-> (description:Description) -[:HAS_STEPS_TO_REPRODUCE]-> (steps_to_reproduce:StepsToReproduce) RETURN steps_to_reproduce.value. 
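The ticket-level score S_{T_i} above amounts to a short aggregation. The following Python sketch reuses the same placeholder embed function as in the Section 3.1 sketch; the query entities and node layout are hypothetical examples, not the production data model:

import numpy as np

def embed(text, dim=8):
    # Same placeholder embedding as in the Section 3.1 sketch.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def ticket_score(entities, nodes):
    # S_{T_i}: for each extracted entity (k, v), add the cosine similarity
    # between v and every node of this ticket whose section equals k.
    return sum(
        float(embed(v) @ embed(text))
        for k, v in entities.items()
        for sec, text in nodes
        if sec == k
    )

# Hypothetical parsed query entities and one ticket's (section, text) nodes.
entities = {"summary": "CSV upload error in updating user email",
            "priority": "Major"}
nodes = [("summary", "CSV upload error, updating user email"),
         ("priority", "Major"),
         ("root_cause", "Data Issue")]
score = ticket_score(entities, nodes)

Tickets are then ranked by this aggregate score, and the top K_ticket are passed to the LLM-driven subgraph extraction step.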
It is noteworthy that the LLM-driven query formulation is sufficiently versatile to retrieve information across subgraphs, whether they originate from the same tree or distinct trees within the knowledge graph. 3.2.3 Answer Generation. Answers are synthesized by correlating retrieved data from Section 3.2.2 with the initial query. The LLM serves as a decoder to formulate responses to user inquiries given the retrieved information. For robust online serving, if query execution encounters issues, a fallback mechanism reverts to a baseline text-based retrieval method. 4 EXPERIMENT 4.1 Experiment Design Our evaluation employed a curated "golden" dataset comprising typical queries, support tickets, and their authoritative solutions. The control group operated with conventional text-based EBR, while the experimental group applied the methodology outlined in this study. For both groups, we utilized the same LLM, specifically GPT-4 [1], and the same embedding model, E5 [17]. We measured retrieval efficacy using Mean Reciprocal Rank (MRR), recall@K, and NDCG@K. MRR gauges the average inverse rank of the first correct response, recall@K determines the likelihood of a relevant item appearing within the top K selections, and NDCG@K appraises rank quality by considering both the position and the pertinence of items. For question-answering performance, we compared the "golden" solutions against the generated responses, utilizing BLEU [11], ROUGE [9], and METEOR [3] scores. 4.2 Results and Analysis The retrieval and question-answering performances are presented in Table 1 and Table 2, respectively. Across all metrics, our method demonstrates consistent improvements. Notably, it surpasses the baseline by 77.6% in MRR and by 0.32 in BLEU score, substantiating its superior retrieval efficacy and question-answering accuracy.
Table 1: Retrieval Performance
            MRR    Recall@1  Recall@3  NDCG@1  NDCG@3
Baseline    0.522  0.400     0.640     0.400   0.520
Experiment  0.927  0.860     1.000     0.860   0.946
Table 2: Question Answering Performance
            BLEU   METEOR  ROUGE
Baseline    0.057  0.279   0.183
Experiment  0.377  0.613   0.546
5 PRODUCTION USE CASE We deployed our method within LinkedIn\u2019s customer service team, covering multiple product lines. The team was split randomly into two groups: one used our system, while the other adhered to traditional manual methods. As shown in Table 3, the group using our system achieved significant gains, reducing the median resolution time per issue by 28.6%. This highlights our system\u2019s effectiveness in enhancing customer service efficiency.
Table 3: Customer Support Issue Resolution Time
               Mean      P50      P90
Tool Not Used  40 hours  7 hours  87 hours
Tool Used      15 hours  5 hours  47 hours
6 CONCLUSIONS AND FUTURE WORK In conclusion, our research significantly advances automated question answering systems for customer service. Integrating retrieval-augmented generation (RAG) with a knowledge graph (KG) has improved retrieval and answering metrics, and overall service effectiveness. Future work will focus on: developing an automated mechanism for extracting graph templates, enhancing system adaptability; investigating dynamic updates to the knowledge graph based on user queries to improve real-time responsiveness; and exploring the system\u2019s applicability in other contexts beyond customer service. 7 COMPANY PORTRAIT About LinkedIn: Founded in 2003, LinkedIn connects the world\u2019s professionals to make them more productive and successful. 
With more than 1 billion members worldwide, including executives from every Fortune 500 company, LinkedIn is the world\u2019s largest professional network. The company has a diversified business model with revenue coming from Talent Solutions, Marketing Solutions, Sales Solutions and Premium Subscriptions products. Headquartered in Silicon Valley, LinkedIn has offices across the globe. Please visit https://www.linkedin.com/company/linkedin/about/ for more information. 8 PRESENTER BIO Zhentao Xu is a Senior Software Engineer at LinkedIn. He received his M.S. in Robotics and B.S. in Electrical Engineering and Computer Science (EECS) from the University of Michigan. His research interests lie in large language models and natural language generation." + }, + { + "url": "http://arxiv.org/abs/2308.10173v1", + "title": "FoodGPT: A Large Language Model in Food Testing Domain with Incremental Pre-training and Knowledge Graph Prompt", + "abstract": "Currently, the construction of large language models in specific domains is\ndone by fine-tuning on a base model. Some models also incorporate knowledge\nbases without the need for pre-training. This is because the base model already\ncontains domain-specific knowledge during the pre-training process. We build a\nlarge language model for food testing. Unlike the above approach, a significant\namount of data in this domain exists in scanned format for domain standard\ndocuments. In addition, there is a large amount of untrained structured\nknowledge. Therefore, we introduce an incremental pre-training step to inject\nthis knowledge into a large language model. In this paper, we propose a method\nfor handling structured knowledge and scanned documents in incremental\npre-training. To overcome the problem of machine hallucination, we construct a\nknowledge graph to serve as an external knowledge base for supporting retrieval\nin the large language model. It is worth mentioning that this paper is a\ntechnical report of our pre-release version, and we will report our specific\nexperimental data in future versions.", + "authors": "Zhixiao Qi, Yijiong Yu, Meiqi Tu, Junyi Tan, Yongfeng Huang", + "published": "2023-08-20", + "updated": "2023-08-20", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2312.02783v2", + "title": "Large Language Models on Graphs: A Comprehensive Survey", + "abstract": "Large language models (LLMs), such as GPT4 and LLaMA, are creating\nsignificant advancements in natural language processing, due to their strong\ntext encoding/decoding ability and newly found emergent capability (e.g.,\nreasoning). While LLMs are mainly designed to process pure texts, there are\nmany real-world scenarios where text data is associated with rich structure\ninformation in the form of graphs (e.g., academic networks, and e-commerce\nnetworks) or scenarios where graph data is paired with rich textual information\n(e.g., molecules with descriptions). Besides, although LLMs have shown their\npure text-based reasoning ability, it is underexplored whether such ability can\nbe generalized to graphs (i.e., graph-based reasoning). In this paper, we\nprovide a systematic review of scenarios and techniques related to large\nlanguage models on graphs. 
We first summarize potential scenarios of adopting\nLLMs on graphs into three categories, namely pure graphs, text-attributed\ngraphs, and text-paired graphs. We then discuss detailed techniques for\nutilizing LLMs on graphs, including LLM as Predictor, LLM as Encoder, and LLM\nas Aligner, and compare the advantages and disadvantages of different schools\nof models. Furthermore, we discuss the real-world applications of such methods\nand summarize open-source codes and benchmark datasets. Finally, we conclude\nwith potential future research directions in this fast-growing field. The\nrelated source can be found at\nhttps://github.com/PeterGriffinJin/Awesome-Language-Model-on-Graphs.", + "authors": "Bowen Jin, Gang Liu, Chi Han, Meng Jiang, Heng Ji, Jiawei Han", + "published": "2023-12-05", + "updated": "2024-02-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1404.4326v1", + "title": "Open Question Answering with Weakly Supervised Embedding Models", + "abstract": "Building computers able to answer questions on any subject is a long standing\ngoal of artificial intelligence. Promising progress has recently been achieved\nby methods that learn to map questions to logical forms or database queries.\nSuch approaches can be effective but at the cost of either large amounts of\nhuman-labeled data or by defining lexicons and grammars tailored by\npractitioners. In this paper, we instead take the radical approach of learning\nto map questions to vectorial feature representations. By mapping answers into\nthe same space one can query any knowledge base independent of its schema,\nwithout requiring any grammar or lexicon. Our method is trained with a new\noptimization procedure combining stochastic gradient descent followed by a\nfine-tuning step using the weak supervision provided by blending automatically\nand collaboratively generated resources. We empirically demonstrate that our\nmodel can capture meaningful signals from its noisy supervision leading to\nmajor improvements over paralex, the only existing method able to be trained on\nsimilar weakly labeled data.", + "authors": "Antoine Bordes, Jason Weston, Nicolas Usunier", + "published": "2014-04-16", + "updated": "2014-04-16", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.01061v2", + "title": "Reasoning on Graphs: Faithful and Interpretable Large Language Model Reasoning", + "abstract": "Large language models (LLMs) have demonstrated impressive reasoning abilities\nin complex tasks. However, they lack up-to-date knowledge and experience\nhallucinations during reasoning, which can lead to incorrect reasoning\nprocesses and diminish their performance and trustworthiness. Knowledge graphs\n(KGs), which capture vast amounts of facts in a structured format, offer a\nreliable source of knowledge for reasoning. Nevertheless, existing KG-based LLM\nreasoning methods only treat KGs as factual knowledge bases and overlook the\nimportance of their structural information for reasoning. In this paper, we\npropose a novel method called reasoning on graphs (RoG) that synergizes LLMs\nwith KGs to enable faithful and interpretable reasoning. Specifically, we\npresent a planning-retrieval-reasoning framework, where RoG first generates\nrelation paths grounded by KGs as faithful plans. 
These plans are then used to\nretrieve valid reasoning paths from the KGs for LLMs to conduct faithful\nreasoning. Furthermore, RoG not only distills knowledge from KGs to improve the\nreasoning ability of LLMs through training but also allows seamless integration\nwith any arbitrary LLMs during inference. Extensive experiments on two\nbenchmark KGQA datasets demonstrate that RoG achieves state-of-the-art\nperformance on KG reasoning tasks and generates faithful and interpretable\nreasoning results.", + "authors": "Linhao Luo, Yuan-Fang Li, Gholamreza Haffari, Shirui Pan", + "published": "2023-10-02", + "updated": "2024-02-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2308.09729v5", + "title": "MindMap: Knowledge Graph Prompting Sparks Graph of Thoughts in Large Language Models", + "abstract": "Large language models (LLMs) have achieved remarkable performance in natural\nlanguage understanding and generation tasks. However, they often suffer from\nlimitations such as difficulty in incorporating new knowledge, generating\nhallucinations, and explaining their reasoning process. To address these\nchallenges, we propose a novel prompting pipeline, named \\method, that\nleverages knowledge graphs (KGs) to enhance LLMs' inference and transparency.\nOur method enables LLMs to comprehend KG inputs and infer with a combination of\nimplicit and external knowledge. Moreover, our method elicits the mind map of\nLLMs, which reveals their reasoning pathways based on the ontology of\nknowledge. We evaluate our method on diverse question \\& answering tasks,\nespecially in medical domains, and show significant improvements over\nbaselines. We also introduce a new hallucination evaluation benchmark and\nanalyze the effects of different components of our method. Our results\ndemonstrate the effectiveness and robustness of our method in merging knowledge\nfrom LLMs and KGs for combined inference. To reproduce our results and extend\nthe framework further, we make our codebase available at\nhttps://github.com/wyl-willing/MindMap.", + "authors": "Yilin Wen, Zifeng Wang, Jimeng Sun", + "published": "2023-08-17", + "updated": "2024-03-02", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1809.00782v1", + "title": "Open Domain Question Answering Using Early Fusion of Knowledge Bases and Text", + "abstract": "Open Domain Question Answering (QA) is evolving from complex pipelined\nsystems to end-to-end deep neural networks. Specialized neural models have been\ndeveloped for extracting answers from either text alone or Knowledge Bases\n(KBs) alone. In this paper we look at a more practical setting, namely QA over\nthe combination of a KB and entity-linked text, which is appropriate when an\nincomplete KB is available with a large text corpus. Building on recent\nadvances in graph representation learning we propose a novel model, GRAFT-Net,\nfor extracting answers from a question-specific subgraph containing text and KB\nentities and relations. We construct a suite of benchmark tasks for this\nproblem, varying the difficulty of questions, the amount of training data, and\nKB completeness. We show that GRAFT-Net is competitive with the\nstate-of-the-art when tested using either KBs or text alone, and vastly\noutperforms existing methods in the combined setting. 
Source code is available\nat https://github.com/OceanskySun/GraftNet .", + "authors": "Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Kathryn Mazaitis, Ruslan Salakhutdinov, William W. Cohen", + "published": "2018-09-04", + "updated": "2018-09-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2307.07697v6", + "title": "Think-on-Graph: Deep and Responsible Reasoning of Large Language Model on Knowledge Graph", + "abstract": "Although large language models (LLMs) have achieved significant success in\nvarious tasks, they often struggle with hallucination problems, especially in\nscenarios requiring deep and responsible reasoning. These issues could be\npartially addressed by introducing external knowledge graphs (KG) in LLM\nreasoning. In this paper, we propose a new LLM-KG integrating paradigm\n``$\\hbox{LLM}\\otimes\\hbox{KG}$'' which treats the LLM as an agent to\ninteractively explore related entities and relations on KGs and perform\nreasoning based on the retrieved knowledge. We further implement this paradigm\nby introducing a new approach called Think-on-Graph (ToG), in which the LLM\nagent iteratively executes beam search on KG, discovers the most promising\nreasoning paths, and returns the most likely reasoning results. We use a number\nof well-designed experiments to examine and illustrate the following advantages\nof ToG: 1) compared with LLMs, ToG has better deep reasoning power; 2) ToG has\nthe ability of knowledge traceability and knowledge correctability by\nleveraging LLMs reasoning and expert feedback; 3) ToG provides a flexible\nplug-and-play framework for different LLMs, KGs and prompting strategies\nwithout any additional training cost; 4) the performance of ToG with small LLM\nmodels could exceed large LLM such as GPT-4 in certain scenarios and this\nreduces the cost of LLM deployment and application. As a training-free method\nwith lower computational cost and better generality, ToG achieves overall SOTA\nin 6 out of 9 datasets where most previous SOTAs rely on additional training.", + "authors": "Jiashuo Sun, Chengjin Xu, Lumingyuan Tang, Saizhuo Wang, Chen Lin, Yeyun Gong, Lionel M. Ni, Heung-Yeung Shum, Jian Guo", + "published": "2023-07-15", + "updated": "2024-03-24", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2403.10059v1", + "title": "Repoformer: Selective Retrieval for Repository-Level Code Completion", + "abstract": "Recent advances in retrieval-augmented generation (RAG) have initiated a new\nera in repository-level code completion. However, the invariable use of\nretrieval in existing methods exposes issues in both efficiency and robustness,\nwith a large proportion of the retrieved contexts proving unhelpful or harmful\nto code language models (code LMs). To tackle the challenges, this paper\nproposes a selective RAG framework where retrieval is avoided when unnecessary.\nTo power this framework, we design a self-supervised learning approach that\nenables a code LM to accurately self-evaluate whether retrieval can improve its\noutput quality and robustly leverage the potentially noisy retrieved contexts.\nUsing this LM as both the selective retrieval policy and the generation model,\nour framework consistently outperforms the state-of-the-art prompting with an\ninvariable retrieval approach on diverse benchmarks including RepoEval,\nCrossCodeEval, and a new benchmark. 
Meanwhile, our selective retrieval strategy\nresults in strong efficiency improvements by as much as 70% inference speedup\nwithout harming the performance. We demonstrate that our framework effectively\naccommodates different generation models, retrievers, and programming\nlanguages. These advancements position our framework as an important step\ntowards more accurate and efficient repository-level code completion.", + "authors": "Di Wu, Wasi Uddin Ahmad, Dejiao Zhang, Murali Krishna Ramanathan, Xiaofei Ma", + "published": "2024-03-15", + "updated": "2024-03-15", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.CL" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2309.01431v2", + "title": "Benchmarking Large Language Models in Retrieval-Augmented Generation", + "abstract": "Retrieval-Augmented Generation (RAG) is a promising approach for mitigating\nthe hallucination of large language models (LLMs). However, existing research\nlacks rigorous evaluation of the impact of retrieval-augmented generation on\ndifferent large language models, which make it challenging to identify the\npotential bottlenecks in the capabilities of RAG for different LLMs. In this\npaper, we systematically investigate the impact of Retrieval-Augmented\nGeneration on large language models. We analyze the performance of different\nlarge language models in 4 fundamental abilities required for RAG, including\nnoise robustness, negative rejection, information integration, and\ncounterfactual robustness. To this end, we establish Retrieval-Augmented\nGeneration Benchmark (RGB), a new corpus for RAG evaluation in both English and\nChinese. RGB divides the instances within the benchmark into 4 separate\ntestbeds based on the aforementioned fundamental abilities required to resolve\nthe case. Then we evaluate 6 representative LLMs on RGB to diagnose the\nchallenges of current LLMs when applying RAG. Evaluation reveals that while\nLLMs exhibit a certain degree of noise robustness, they still struggle\nsignificantly in terms of negative rejection, information integration, and\ndealing with false information. The aforementioned assessment outcomes indicate\nthat there is still a considerable journey ahead to effectively apply RAG to\nLLMs.", + "authors": "Jiawei Chen, Hongyu Lin, Xianpei Han, Le Sun", + "published": "2023-09-04", + "updated": "2023-12-20", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2404.04302v1", + "title": "CBR-RAG: Case-Based Reasoning for Retrieval Augmented Generation in LLMs for Legal Question Answering", + "abstract": "Retrieval-Augmented Generation (RAG) enhances Large Language Model (LLM)\noutput by providing prior knowledge as context to input. This is beneficial for\nknowledge-intensive and expert reliant tasks, including legal\nquestion-answering, which require evidence to validate generated text outputs.\nWe highlight that Case-Based Reasoning (CBR) presents key opportunities to\nstructure retrieval as part of the RAG process in an LLM. We introduce CBR-RAG,\nwhere CBR cycle's initial retrieval stage, its indexing vocabulary, and\nsimilarity knowledge containers are used to enhance LLM queries with\ncontextually relevant cases. This integration augments the original LLM query,\nproviding a richer prompt. We present an evaluation of CBR-RAG, and examine\ndifferent representations (i.e. 
general and domain-specific embeddings) and\nmethods of comparison (i.e. inter, intra and hybrid similarity) on the task of\nlegal question-answering. Our results indicate that the context provided by\nCBR's case reuse enforces similarity between relevant components of the\nquestions and the evidence base leading to significant improvements in the\nquality of generated answers.", + "authors": "Nirmalie Wiratunga, Ramitha Abeyratne, Lasal Jayawardena, Kyle Martin, Stewart Massie, Ikechukwu Nkisi-Orji, Ruvan Weerasinghe, Anne Liret, Bruno Fleisch", + "published": "2024-04-04", + "updated": "2024-04-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2312.07559v2", + "title": "PaperQA: Retrieval-Augmented Generative Agent for Scientific Research", + "abstract": "Large Language Models (LLMs) generalize well across language tasks, but\nsuffer from hallucinations and uninterpretability, making it difficult to\nassess their accuracy without ground-truth. Retrieval-Augmented Generation\n(RAG) models have been proposed to reduce hallucinations and provide provenance\nfor how an answer was generated. Applying such models to the scientific\nliterature may enable large-scale, systematic processing of scientific\nknowledge. We present PaperQA, a RAG agent for answering questions over the\nscientific literature. PaperQA is an agent that performs information retrieval\nacross full-text scientific articles, assesses the relevance of sources and\npassages, and uses RAG to provide answers. Viewing this agent as a question\nanswering model, we find it exceeds performance of existing LLMs and LLM agents\non current science QA benchmarks. To push the field closer to how humans\nperform research on scientific literature, we also introduce LitQA, a more\ncomplex benchmark that requires retrieval and synthesis of information from\nfull-text scientific papers across the literature. Finally, we demonstrate\nPaperQA's matches expert human researchers on LitQA.", + "authors": "Jakub L\u00e1la, Odhran O'Donoghue, Aleksandar Shtedritski, Sam Cox, Samuel G. Rodriques, Andrew D. White", + "published": "2023-12-08", + "updated": "2023-12-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2404.12309v1", + "title": "iRAG: An Incremental Retrieval Augmented Generation System for Videos", + "abstract": "Retrieval augmented generation (RAG) systems combine the strengths of\nlanguage generation and information retrieval to power many real-world\napplications like chatbots. Use of RAG for combined understanding of multimodal\ndata such as text, images and videos is appealing but two critical limitations\nexist: one-time, upfront capture of all content in large multimodal data as\ntext descriptions entails high processing times, and not all information in the\nrich multimodal data is typically in the text descriptions. Since the user\nqueries are not known apriori, developing a system for multimodal to text\nconversion and interactive querying of multimodal data is challenging.\n To address these limitations, we propose iRAG, which augments RAG with a\nnovel incremental workflow to enable interactive querying of large corpus of\nmultimodal data. 
Unlike traditional RAG, iRAG quickly indexes large\nrepositories of multimodal data, and in the incremental workflow, it uses the\nindex to opportunistically extract more details from select portions of the\nmultimodal data to retrieve context relevant to an interactive user query. Such\nan incremental workflow avoids long multimodal to text conversion times,\novercomes information loss issues by doing on-demand query-specific extraction\nof details in multimodal data, and ensures high quality of responses to\ninteractive user queries that are often not known apriori. To the best of our\nknowledge, iRAG is the first system to augment RAG with an incremental workflow\nto support efficient interactive querying of large, real-world multimodal data.\nExperimental results on real-world long videos demonstrate 23x to 25x faster\nvideo to text ingestion, while ensuring that quality of responses to\ninteractive user queries is comparable to responses from a traditional RAG\nwhere all video data is converted to text upfront before any querying.", + "authors": "Md Adnan Arefeen, Biplob Debnath, Md Yusuf Sarwar Uddin, Srimat Chakradhar", + "published": "2024-04-18", + "updated": "2024-04-18", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.IR", + "cs.LG" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2403.00820v1", + "title": "Retrieval Augmented Generation Systems: Automatic Dataset Creation, Evaluation and Boolean Agent Setup", + "abstract": "Retrieval Augmented Generation (RAG) systems have seen huge popularity in\naugmenting Large-Language Model (LLM) outputs with domain specific and time\nsensitive data. Very recently a shift is happening from simple RAG setups that\nquery a vector database for additional information with every user input to\nmore sophisticated forms of RAG. However, different concrete approaches compete\non mostly anecdotal evidence at the moment. In this paper we present a rigorous\ndataset creation and evaluation workflow to quantitatively compare different\nRAG strategies. We use a dataset created this way for the development and\nevaluation of a boolean agent RAG setup: A system in which a LLM can decide\nwhether to query a vector database or not, thus saving tokens on questions that\ncan be answered with internal knowledge. We publish our code and generated\ndataset online.", + "authors": "Tristan Kenneweg, Philip Kenneweg, Barbara Hammer", + "published": "2024-02-26", + "updated": "2024-02-26", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2404.01037v1", + "title": "ARAGOG: Advanced RAG Output Grading", + "abstract": "Retrieval-Augmented Generation (RAG) is essential for integrating external\nknowledge into Large Language Model (LLM) outputs. While the literature on RAG\nis growing, it primarily focuses on systematic reviews and comparisons of new\nstate-of-the-art (SoTA) techniques against their predecessors, with a gap in\nextensive experimental comparisons. This study begins to address this gap by\nassessing various RAG methods' impacts on retrieval precision and answer\nsimilarity. We found that Hypothetical Document Embedding (HyDE) and LLM\nreranking significantly enhance retrieval precision. However, Maximal Marginal\nRelevance (MMR) and Cohere rerank did not exhibit notable advantages over a\nbaseline Naive RAG system, and Multi-query approaches underperformed. 
Sentence\nWindow Retrieval emerged as the most effective for retrieval precision, despite\nits variable performance on answer similarity. The study confirms the potential\nof the Document Summary Index as a competent retrieval approach. All resources\nrelated to this research are publicly accessible for further investigation\nthrough our GitHub repository ARAGOG (https://github.com/predlico/ARAGOG). We\nwelcome the community to further this exploratory study in RAG systems.", + "authors": "Matou\u0161 Eibich, Shivay Nagpal, Alexander Fred-Ojala", + "published": "2024-04-01", + "updated": "2024-04-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.IR", + "I.2.7" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2307.04642v2", + "title": "TRAQ: Trustworthy Retrieval Augmented Question Answering via Conformal Prediction", + "abstract": "When applied to open-domain question answering, large language models (LLMs)\nfrequently generate incorrect responses based on made-up facts, which are\ncalled $\\textit{hallucinations}$. Retrieval augmented generation (RAG) is a\npromising strategy to avoid hallucinations, but it does not provide guarantees\non its correctness. To address this challenge, we propose the Trustworthy\nRetrieval Augmented Question Answering, or $\\textit{TRAQ}$, which provides the\nfirst end-to-end statistical correctness guarantee for RAG. TRAQ uses conformal\nprediction, a statistical technique for constructing prediction sets that are\nguaranteed to contain the semantically correct response with high probability.\nAdditionally, TRAQ leverages Bayesian optimization to minimize the size of the\nconstructed sets. In an extensive experimental evaluation, we demonstrate that\nTRAQ provides the desired correctness guarantee while reducing prediction set\nsize by 16.2% on average compared to an ablation. The implementation is\navailable at $\\href{https://github.com/shuoli90/TRAQ.git}{TRAQ}$.", + "authors": "Shuo Li, Sangdon Park, Insup Lee, Osbert Bastani", + "published": "2023-07-07", + "updated": "2024-04-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2404.12065v1", + "title": "RAGAR, Your Falsehood RADAR: RAG-Augmented Reasoning for Political Fact-Checking using Multimodal Large Language Models", + "abstract": "The escalating challenge of misinformation, particularly in the context of\npolitical discourse, necessitates advanced solutions for fact-checking. We\nintroduce innovative approaches to enhance the reliability and efficiency of\nmultimodal fact-checking through the integration of Large Language Models\n(LLMs) with Retrieval-augmented Generation (RAG)- based advanced reasoning\ntechniques. This work proposes two novel methodologies, Chain of RAG (CoRAG)\nand Tree of RAG (ToRAG). The approaches are designed to handle multimodal\nclaims by reasoning the next questions that need to be answered based on\nprevious evidence. Our approaches improve the accuracy of veracity predictions\nand the generation of explanations over the traditional fact-checking approach\nof sub-question generation with chain of thought veracity prediction. By\nemploying multimodal LLMs adept at analyzing both text and images, this\nresearch advances the capability of automated systems in identifying and\ncountering misinformation.", + "authors": "M. Abdul Khaliq, P. Chang, M. Ma, B. Pflugfelder, F. 
Mileti\u0107", + "published": "2024-04-18", + "updated": "2024-04-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.ET", + "cs.MA" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2402.18150v1", + "title": "Unsupervised Information Refinement Training of Large Language Models for Retrieval-Augmented Generation", + "abstract": "Retrieval-augmented generation (RAG) enhances large language models (LLMs) by\nincorporating additional information from retrieval. However, studies have\nshown that LLMs still face challenges in effectively using the retrieved\ninformation, even ignoring it or being misled by it. The key reason is that the\ntraining of LLMs does not clearly make LLMs learn how to utilize input\nretrieved texts with varied quality. In this paper, we propose a novel\nperspective that considers the role of LLMs in RAG as ``Information Refiner'',\nwhich means that regardless of correctness, completeness, or usefulness of\nretrieved texts, LLMs can consistently integrate knowledge within the retrieved\ntexts and model parameters to generate the texts that are more concise,\naccurate, and complete than the retrieved texts. To this end, we propose an\ninformation refinement training method named InFO-RAG that optimizes LLMs for\nRAG in an unsupervised manner. InFO-RAG is low-cost and general across various\ntasks. Extensive experiments on zero-shot prediction of 11 datasets in diverse\ntasks including Question Answering, Slot-Filling, Language Modeling, Dialogue,\nand Code Generation show that InFO-RAG improves the performance of LLaMA2 by an\naverage of 9.39\\% relative points. InFO-RAG also shows advantages in in-context\nlearning and robustness of RAG.", + "authors": "Shicheng Xu, Liang Pang, Mo Yu, Fandong Meng, Huawei Shen, Xueqi Cheng, Jie Zhou", + "published": "2024-02-28", + "updated": "2024-02-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.IR" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2405.02816v1", + "title": "Stochastic RAG: End-to-End Retrieval-Augmented Generation through Expected Utility Maximization", + "abstract": "This paper introduces Stochastic RAG--a novel approach for end-to-end\noptimization of retrieval-augmented generation (RAG) models that relaxes the\nsimplifying assumptions of marginalization and document independence, made in\nmost prior work. Stochastic RAG casts the retrieval process in RAG as a\nstochastic sampling without replacement process. Through this formulation, we\nemploy straight-through Gumbel-top-k that provides a differentiable\napproximation for sampling without replacement and enables effective end-to-end\noptimization for RAG. 
We conduct extensive experiments on seven diverse\ndatasets on a wide range of tasks, from open-domain question answering to fact\nverification to slot-filling for relation extraction and to dialogue systems.\nBy applying this optimization method to a recent and effective RAG model, we\nadvance state-of-the-art results on six out of seven datasets.", + "authors": "Hamed Zamani, Michael Bendersky", + "published": "2024-05-05", + "updated": "2024-05-05", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.IR", + "cs.LG" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2402.19473v4", + "title": "Retrieval-Augmented Generation for AI-Generated Content: A Survey", + "abstract": "Advancements in model algorithms, the growth of foundational models, and\naccess to high-quality datasets have propelled the evolution of Artificial\nIntelligence Generated Content (AIGC). Despite its notable successes, AIGC\nstill faces hurdles such as updating knowledge, handling long-tail data,\nmitigating data leakage, and managing high training and inference costs.\nRetrieval-Augmented Generation (RAG) has recently emerged as a paradigm to\naddress such challenges. In particular, RAG introduces the information\nretrieval process, which enhances the generation process by retrieving relevant\nobjects from available data stores, leading to higher accuracy and better\nrobustness. In this paper, we comprehensively review existing efforts that\nintegrate RAG technique into AIGC scenarios. We first classify RAG foundations\naccording to how the retriever augments the generator, distilling the\nfundamental abstractions of the augmentation methodologies for various\nretrievers and generators. This unified perspective encompasses all RAG\nscenarios, illuminating advancements and pivotal technologies that help with\npotential future progress. We also summarize additional enhancements methods\nfor RAG, facilitating effective engineering and implementation of RAG systems.\nThen from another view, we survey on practical applications of RAG across\ndifferent modalities and tasks, offering valuable references for researchers\nand practitioners. Furthermore, we introduce the benchmarks for RAG, discuss\nthe limitations of current RAG systems, and suggest potential directions for\nfuture research. Github: https://github.com/PKU-DAIR/RAG-Survey.", + "authors": "Penghao Zhao, Hailin Zhang, Qinhan Yu, Zhengren Wang, Yunteng Geng, Fangcheng Fu, Ling Yang, Wentao Zhang, Jie Jiang, Bin Cui", + "published": "2024-02-29", + "updated": "2024-05-02", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2403.01432v2", + "title": "Fine Tuning vs. Retrieval Augmented Generation for Less Popular Knowledge", + "abstract": "Large language models (LLMs) memorize a vast amount of factual knowledge,\nexhibiting strong performance across diverse tasks and domains. However, it has\nbeen observed that the performance diminishes when dealing with less-popular or\nlow-frequency concepts and entities, for example in domain specific\napplications. The two prominent approaches to enhance the performance of LLMs\non low-frequent topics are: Retrieval Augmented Generation (RAG) and\nfine-tuning (FT) over synthetic data. This paper explores and evaluates the\nimpact of RAG and FT on customizing LLMs in handling low-frequency entities on\nquestion answering task. 
Our findings indicate that FT significantly boosts the\nperformance across entities of varying popularity, especially in the most and\nleast popular groups, while RAG surpasses other methods. Additionally, the\nsuccess of both RAG and FT approaches is amplified by advancements in retrieval\nand data augmentation techniques. We release our data and code at\nhttps://github.com/informagi/RAGvsFT.", + "authors": "Heydar Soudani, Evangelos Kanoulas, Faegheh Hasibi", + "published": "2024-03-03", + "updated": "2024-03-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2401.12599v1", + "title": "Revolutionizing Retrieval-Augmented Generation with Enhanced PDF Structure Recognition", + "abstract": "With the rapid development of Large Language Models (LLMs),\nRetrieval-Augmented Generation (RAG) has become a predominant method in the\nfield of professional knowledge-based question answering. Presently, major\nfoundation model companies have opened up Embedding and Chat API interfaces,\nand frameworks like LangChain have already integrated the RAG process. It\nappears that the key models and steps in RAG have been resolved, leading to the\nquestion: are professional knowledge QA systems now approaching perfection?\nThis article discovers that current primary methods depend on the premise of\naccessing high-quality text corpora. However, since professional documents are\nmainly stored in PDFs, the low accuracy of PDF parsing significantly impacts\nthe effectiveness of professional knowledge-based QA. We conducted an empirical\nRAG experiment across hundreds of questions from the corresponding real-world\nprofessional documents. The results show that, ChatDOC, a RAG system equipped\nwith a panoptic and pinpoint PDF parser, retrieves more accurate and complete\nsegments, and thus better answers. Empirical experiments show that ChatDOC is\nsuperior to baseline on nearly 47% of questions, ties for 38% of cases, and\nfalls short on only 15% of cases. It shows that we may revolutionize RAG with\nenhanced PDF structure recognition.", + "authors": "Demiao Lin", + "published": "2024-01-23", + "updated": "2024-01-23", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2401.06954v2", + "title": "Bridging the Preference Gap between Retrievers and LLMs", + "abstract": "Large Language Models (LLMs) have demonstrated superior results across a wide\nrange of tasks, and Retrieval-augmented Generation (RAG) is an effective way to\nenhance the performance by locating relevant information and placing it into\nthe context window of the LLM. However, the relationship between retrievers and\nLLMs in a RAG is still under-investigated. Most existing work treats the\nretriever and the LLM as independent components and leaves a gap between\nretrieving human-\"friendly\" information and assembling a LLM-\"friendly\"\ncontext. In this work, we examine a novel bridge mechanism. We validate the\nranking and selection assumptions of retrievers in the context of RAG and\npropose a framework that chains together supervised and reinforcement learning\nto train a bridge model that optimizes the connection between the retriever and\nthe LLM. 
Empirical results demonstrate the effectiveness of our method in both
question-answering and personalized generation tasks.",
    "authors": "Zixuan Ke, Weize Kong, Cheng Li, Mingyang Zhang, Qiaozhu Mei, Michael Bendersky",
    "published": "2024-01-13",
    "updated": "2024-02-20",
    "primary_cat": "cs.CL",
    "cats": [
      "cs.CL"
    ],
    "category": "Retrieval AND Augmented AND Generation AND RAG"
  },
  {
    "url": "http://arxiv.org/abs/2402.16874v1",
    "title": "Enhancing Retrieval Processes for Language Generation with Augmented Queries",
    "abstract": "In the rapidly changing world of smart technology, searching for documents
has become more challenging due to the rise of advanced language models. These
models sometimes face difficulties, like providing inaccurate information,
commonly known as \"hallucination.\" This research focuses on addressing this
issue through Retrieval-Augmented Generation (RAG), a technique that guides
models to give accurate responses based on real facts. To overcome scalability
issues, the study explores connecting user queries with sophisticated language
models such as BERT and Orca2, using an innovative query optimization process.
The study unfolds in three scenarios: first, without RAG, second, without
additional assistance, and finally, with extra help. Choosing the compact yet
efficient Orca2 7B model demonstrates a smart use of computing resources. The
empirical results indicate a significant improvement in the initial language
model's performance under RAG, particularly when assisted with prompt
augmenters. Consistency in document retrieval across different encodings
highlights the effectiveness of using language model-generated queries. The
introduction of UMAP for BERT further simplifies document retrieval while
maintaining strong results.",
    "authors": "Julien Pierre Edmond Ghali, Kosuke Shima, Koichi Moriyama, Atsuko Mutoh, Nobuhiro Inuzuka",
    "published": "2024-02-06",
    "updated": "2024-02-06",
    "primary_cat": "cs.IR",
    "cats": [
      "cs.IR",
      "cs.AI",
      "cs.CL"
    ],
    "category": "Retrieval AND Augmented AND Generation AND RAG"
  },
  {
    "url": "http://arxiv.org/abs/2402.05131v3",
    "title": "Financial Report Chunking for Effective Retrieval Augmented Generation",
    "abstract": "Chunking information is a key step in Retrieval Augmented Generation (RAG).
Current research primarily centers on paragraph-level chunking. This approach
treats all texts as equal and neglects the information contained in the
structure of documents. We propose an expanded approach to chunk documents by
moving beyond mere paragraph-level chunking to chunk primarily by structural
element components of documents. Dissecting documents into these constituent
elements creates a new way to chunk documents that yields the best chunk size
without tuning. We introduce a novel framework that evaluates how chunking
based on element types annotated by document understanding models contributes
to the overall context and accuracy of the information retrieved. We also
demonstrate how this approach impacts RAG-assisted Question & Answer task
performance. Our research includes a comprehensive analysis of various element
types, their role in effective information retrieval, and the impact they have
on the quality of RAG outputs. Findings support that element-type-based
chunking largely improves RAG results on financial reporting.
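A minimal sketch of element-type-based chunking as just described; the `(element_type, text)` pairs are assumed to come from an upstream document-understanding model, and the specific grouping rules below are illustrative, not the paper's algorithm.

```python
def chunk_by_element(elements):
    """Group a parsed document into chunks that respect element
    boundaries: a title starts a new chunk and a table stays whole.
    elements: list of (element_type, text) pairs (assumed upstream)."""
    chunks, current = [], []
    for etype, text in elements:
        if etype == "title" and current:
            chunks.append("\n".join(current))
            current = []
        if etype == "table":
            if current:
                chunks.append("\n".join(current))
                current = []
            chunks.append(text)  # a table is one indivisible chunk
        else:
            current.append(text)
    if current:
        chunks.append("\n".join(current))
    return chunks

print(chunk_by_element([("title", "Revenue"), ("paragraph", "Revenue grew."),
                        ("table", "Q1 | Q2"), ("title", "Costs"),
                        ("paragraph", "Costs fell.")]))
# ['Revenue\nRevenue grew.', 'Q1 | Q2', 'Costs\nCosts fell.']
```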
Through this
research, we also show how to achieve highly accurate RAG outputs.",
    "authors": "Antonio Jimeno Yepes, Yao You, Jan Milczek, Sebastian Laverde, Renyu Li",
    "published": "2024-02-05",
    "updated": "2024-03-16",
    "primary_cat": "cs.CL",
    "cats": [
      "cs.CL"
    ],
    "category": "Retrieval AND Augmented AND Generation AND RAG"
  },
  {
    "url": "http://arxiv.org/abs/2309.15217v1",
    "title": "RAGAS: Automated Evaluation of Retrieval Augmented Generation",
    "abstract": "We introduce RAGAs (Retrieval Augmented Generation Assessment), a framework
for reference-free evaluation of Retrieval Augmented Generation (RAG)
pipelines. RAG systems are composed of a retrieval and an LLM based generation
module, and provide LLMs with knowledge from a reference textual database,
which enables them to act as a natural language layer between a user and
textual databases, reducing the risk of hallucinations. Evaluating RAG
architectures is, however, challenging because there are several dimensions to
consider: the ability of the retrieval system to identify relevant and focused
context passages, the ability of the LLM to exploit such passages in a faithful
way, or the quality of the generation itself. With RAGAs, we put forward a
suite of metrics which can be used to evaluate these different dimensions
without having to rely on ground truth human annotations. We posit
that such a framework can crucially contribute to faster evaluation cycles of
RAG architectures, which is especially important given the fast adoption of
LLMs.",
    "authors": "Shahul Es, Jithin James, Luis Espinosa-Anke, Steven Schockaert",
    "published": "2023-09-26",
    "updated": "2023-09-26",
    "primary_cat": "cs.CL",
    "cats": [
      "cs.CL"
    ],
    "category": "Retrieval AND Augmented AND Generation AND RAG"
  },
  {
    "url": "http://arxiv.org/abs/2210.02627v1",
    "title": "Improving the Domain Adaptation of Retrieval Augmented Generation (RAG) Models for Open Domain Question Answering",
    "abstract": "Retrieval Augmented Generation (RAG) is a recent advancement in Open-Domain
Question Answering (ODQA). RAG has only been trained and explored with a
Wikipedia-based external knowledge base and is not optimized for use in other
specialized domains such as healthcare and news. In this paper, we evaluate the
impact of joint training of the retriever and generator components of RAG for
the task of domain adaptation in ODQA. We propose RAG-end2end, an
extension to RAG, that can adapt to a domain-specific knowledge base by
updating all components of the external knowledge base during training. In
addition, we introduce an auxiliary training signal to inject more
domain-specific knowledge. This auxiliary signal forces RAG-end2end to
reconstruct a given sentence by accessing the relevant information from the
external knowledge base. Our novel contribution is that, unlike RAG, RAG-end2end
jointly trains the retriever and generator for the end QA task and domain
adaptation. We evaluate our approach with datasets from three domains:
COVID-19, News, and Conversations, and achieve significant performance
improvements compared to the original RAG model.
Our work has been open-sourced
through the Huggingface Transformers library, attesting to our work's
credibility and technical consistency.",
    "authors": "Shamane Siriwardhana, Rivindu Weerasekera, Elliott Wen, Tharindu Kaluarachchi, Rajib Rana, Suranga Nanayakkara",
    "published": "2022-10-06",
    "updated": "2022-10-06",
    "primary_cat": "cs.CL",
    "cats": [
      "cs.CL",
      "cs.IR"
    ],
    "category": "Retrieval AND Augmented AND Generation AND RAG"
  },
  {
    "url": "http://arxiv.org/abs/2404.16130v1",
    "title": "From Local to Global: A Graph RAG Approach to Query-Focused Summarization",
    "abstract": "The use of retrieval-augmented generation (RAG) to retrieve relevant
information from an external knowledge source enables large language models
(LLMs) to answer questions over private and/or previously unseen document
collections. However, RAG fails on global questions directed at an entire text
corpus, such as \"What are the main themes in the dataset?\", since this is
inherently a query-focused summarization (QFS) task, rather than an explicit
retrieval task. Prior QFS methods, meanwhile, fail to scale to the quantities
of text indexed by typical RAG systems. To combine the strengths of these
contrasting methods, we propose a Graph RAG approach to question answering over
private text corpora that scales with both the generality of user questions and
the quantity of source text to be indexed. Our approach uses an LLM to build a
graph-based text index in two stages: first to derive an entity knowledge graph
from the source documents, then to pregenerate community summaries for all
groups of closely-related entities. Given a question, each community summary is
used to generate a partial response, before all partial responses are again
summarized in a final response to the user. For a class of global sensemaking
questions over datasets in the 1 million token range, we show that Graph RAG
leads to substantial improvements over a naïve RAG baseline for both the
comprehensiveness and diversity of generated answers. An open-source,
Python-based implementation of both global and local Graph RAG approaches is
forthcoming at https://aka.ms/graphrag.",
    "authors": "Darren Edge, Ha Trinh, Newman Cheng, Joshua Bradley, Alex Chao, Apurva Mody, Steven Truitt, Jonathan Larson",
    "published": "2024-04-24",
    "updated": "2024-04-24",
    "primary_cat": "cs.CL",
    "cats": [
      "cs.CL",
      "cs.AI",
      "cs.IR",
      "H.3.3; I.2.7"
    ],
    "category": "Retrieval AND Augmented AND Generation AND RAG"
  },
  {
    "url": "http://arxiv.org/abs/2402.09760v1",
    "title": "Grounding Language Model with Chunking-Free In-Context Retrieval",
    "abstract": "This paper presents a novel Chunking-Free In-Context (CFIC) retrieval
approach, specifically tailored for Retrieval-Augmented Generation (RAG)
systems. Traditional RAG systems often struggle with grounding responses using
precise evidence text due to the challenges of processing lengthy documents and
filtering out irrelevant content. Commonly employed solutions, such as document
chunking and adapting language models to handle longer contexts, have their
limitations. These methods either disrupt the semantic coherence of the text or
fail to effectively address the issues of noise and inaccuracy in evidence
retrieval.
 CFIC addresses these challenges by circumventing the conventional chunking
process.
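The Graph RAG map-reduce pattern summarized above (community summaries yield partial answers, which are then fused) can be sketched in a few lines; `llm` is a hypothetical text-completion callable and the prompts are illustrative, not the paper's.

```python
def graph_rag_global_answer(question, community_summaries, llm):
    """Map step: each pre-generated community summary yields a partial
    answer. Reduce step: one final call fuses the partials."""
    partials = [llm(f"Using this summary:\n{s}\n\nAnswer: {question}")
                for s in community_summaries]
    joined = "\n---\n".join(partials)
    return llm(f"Combine these partial answers into one final response to "
               f"'{question}':\n{joined}")
```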
It utilizes the encoded hidden states of documents for in-context
retrieval, employing auto-regressive decoding to accurately identify the
specific evidence text required for user queries, eliminating the need for
chunking. CFIC is further enhanced by incorporating two decoding strategies,
namely Constrained Sentence Prefix Decoding and Skip Decoding. These strategies
not only improve the efficiency of the retrieval process but also ensure that
the fidelity of the generated grounding text evidence is maintained. Our
evaluations of CFIC on a range of open QA datasets demonstrate its superiority
in retrieving relevant and accurate evidence, offering a significant
improvement over traditional methods. By doing away with the need for document
chunking, CFIC presents a more streamlined, effective, and efficient retrieval
solution, making it a valuable advancement in the field of RAG systems.",
    "authors": "Hongjin Qian, Zheng Liu, Kelong Mao, Yujia Zhou, Zhicheng Dou",
    "published": "2024-02-15",
    "updated": "2024-02-15",
    "primary_cat": "cs.CL",
    "cats": [
      "cs.CL",
      "cs.AI",
      "cs.IR"
    ],
    "category": "Retrieval AND Augmented AND Generation AND RAG"
  },
  {
    "url": "http://arxiv.org/abs/2402.08416v1",
    "title": "Pandora: Jailbreak GPTs by Retrieval Augmented Generation Poisoning",
    "abstract": "Large Language Models (LLMs) have gained immense popularity and are being
increasingly applied in various domains. Consequently, ensuring the security of
these models is of paramount importance. Jailbreak attacks, which manipulate
LLMs to generate malicious content, are recognized as a significant
vulnerability. While existing research has predominantly focused on direct
jailbreak attacks on LLMs, there has been limited exploration of indirect
methods. The integration of various plugins into LLMs, notably Retrieval
Augmented Generation (RAG), which enables LLMs to incorporate external
knowledge bases into their response generation, such as GPTs, introduces new
avenues for indirect jailbreak attacks.
 To fill this gap, we investigate indirect jailbreak attacks on LLMs,
particularly GPTs, introducing a novel attack vector named Retrieval Augmented
Generation Poisoning. This method, Pandora, exploits the synergy between LLMs
and RAG through prompt manipulation to generate unexpected responses. Pandora
uses maliciously crafted content to influence the RAG process, effectively
initiating jailbreak attacks. Our preliminary tests show that Pandora
successfully conducts jailbreak attacks in four different scenarios, achieving
higher success rates than direct attacks, with 64.3% for GPT-3.5 and 34.8%
for GPT-4.",
    "authors": "Gelei Deng, Yi Liu, Kailong Wang, Yuekang Li, Tianwei Zhang, Yang Liu",
    "published": "2024-02-13",
    "updated": "2024-02-13",
    "primary_cat": "cs.CR",
    "cats": [
      "cs.CR"
    ],
    "category": "Retrieval AND Augmented AND Generation AND RAG"
  },
  {
    "url": "http://arxiv.org/abs/2403.11366v2",
    "title": "JORA: JAX Tensor-Parallel LoRA Library for Retrieval Augmented Fine-Tuning",
    "abstract": "The scaling of Large Language Models (LLMs) for retrieval-based tasks,
particularly in Retrieval Augmented Generation (RAG), faces significant memory
constraints, especially when fine-tuning extensive prompt sequences. Current
open-source libraries support full-model inference and fine-tuning across
multiple GPUs but fall short of accommodating the efficient parameter
distribution required for retrieved context.
Addressing this gap, we introduce
a novel framework for PEFT-compatible fine-tuning of Llama-2 models, leveraging
distributed training. Our framework uniquely utilizes JAX's just-in-time (JIT)
compilation and tensor-sharding for efficient resource management, thereby
enabling accelerated fine-tuning with reduced memory requirements. This
advancement significantly improves the scalability and feasibility of
fine-tuning LLMs for complex RAG applications, even on systems with limited GPU
resources. Our experiments show more than 12x improvement in runtime compared
to Hugging Face/DeepSpeed implementation with four GPUs while consuming less
than half the VRAM per GPU.",
    "authors": "Anique Tahir, Lu Cheng, Huan Liu",
    "published": "2024-03-17",
    "updated": "2024-03-19",
    "primary_cat": "cs.LG",
    "cats": [
      "cs.LG",
      "cs.CL",
      "cs.DC"
    ],
    "category": "Retrieval AND Augmented AND Generation AND RAG"
  },
  {
    "url": "http://arxiv.org/abs/2402.07179v1",
    "title": "Prompt Perturbation in Retrieval-Augmented Generation based Large Language Models",
    "abstract": "The robustness of large language models (LLMs) becomes increasingly important
as their use rapidly grows in a wide range of domains. Retrieval-Augmented
Generation (RAG) is considered as a means to improve the trustworthiness of
text generation from LLMs. However, how the outputs from RAG-based LLMs are
affected by slightly different inputs is not well studied. In this work, we
find that the insertion of even a short prefix to the prompt leads to the
generation of outputs far away from factually correct answers. We
systematically evaluate the effect of such prefixes on RAG by introducing a
novel optimization technique called Gradient Guided Prompt Perturbation (GGPP).
GGPP achieves a high success rate in steering outputs of RAG-based LLMs to
targeted wrong answers. It can also cope with instructions in the prompts
requesting to ignore irrelevant context. We also exploit LLMs' neuron
activation difference between prompts with and without GGPP perturbations to
develop a method that improves the robustness of RAG-based LLMs through a highly
effective detector trained on neuron activation triggered by GGPP-generated
prompts. Our evaluation on open-sourced LLMs demonstrates the effectiveness of
our methods.",
    "authors": "Zhibo Hu, Chen Wang, Yanfeng Shu, Helen Paik, Liming Zhu",
    "published": "2024-02-11",
    "updated": "2024-02-11",
    "primary_cat": "cs.CL",
    "cats": [
      "cs.CL",
      "cs.IR",
      "I.2.7; H.3.3"
    ],
    "category": "Retrieval AND Augmented AND Generation AND RAG"
  },
  {
    "url": "http://arxiv.org/abs/2404.00610v1",
    "title": "RQ-RAG: Learning to Refine Queries for Retrieval Augmented Generation",
    "abstract": "Large Language Models (LLMs) exhibit remarkable capabilities but are prone to
generating inaccurate or hallucinatory responses. This limitation stems from
their reliance on vast pretraining datasets, making them susceptible to errors
in unseen scenarios. To tackle these challenges, Retrieval-Augmented Generation
(RAG) addresses this by incorporating external, relevant documents into the
response generation process, thus leveraging non-parametric knowledge alongside
LLMs' in-context learning abilities. However, existing RAG implementations
primarily focus on initial input for context retrieval, overlooking the nuances
of ambiguous or complex queries that necessitate further clarification or
decomposition for accurate responses.
To this end, we propose learning to
Refine Query for Retrieval Augmented Generation (RQ-RAG) in this paper,
endeavoring to enhance the model by equipping it with capabilities for explicit
rewriting, decomposition, and disambiguation. Our experimental results indicate
that our method, when applied to a 7B Llama2 model, surpasses the previous
state-of-the-art (SOTA) by an average of 1.9% across three single-hop QA
datasets, and also demonstrates enhanced performance in handling complex,
multi-hop QA datasets. Our code is available at
https://github.com/chanchimin/RQ-RAG.",
    "authors": "Chi-Min Chan, Chunpu Xu, Ruibin Yuan, Hongyin Luo, Wei Xue, Yike Guo, Jie Fu",
    "published": "2024-03-31",
    "updated": "2024-03-31",
    "primary_cat": "cs.CL",
    "cats": [
      "cs.CL"
    ],
    "category": "Retrieval AND Augmented AND Generation AND RAG"
  },
  {
    "url": "http://arxiv.org/abs/2308.00479v1",
    "title": "Retrieval Augmented Generation and Representative Vector Summarization for large unstructured textual data in Medical Education",
    "abstract": "Large Language Models are increasingly being used for various tasks including
content generation and as chatbots. Despite their impressive performances in
general tasks, LLMs need to be aligned when applied to domain-specific tasks
to mitigate the problems of hallucination and producing harmful answers.
Retrieval Augmented Generation (RAG) makes it easy to attach and manipulate
non-parametric knowledge bases for LLMs. Applications of RAG in the field of
medical education are discussed in this paper. A combined extractive and
abstractive summarization method for large unstructured textual data using
representative vectors is proposed.",
    "authors": "S. S. Manathunga, Y. A. Illangasekara",
    "published": "2023-08-01",
    "updated": "2023-08-01",
    "primary_cat": "cs.CL",
    "cats": [
      "cs.CL",
      "cs.AI",
      "H.3.1; J.3"
    ],
    "category": "Retrieval AND Augmented AND Generation AND RAG"
  },
  {
    "url": "http://arxiv.org/abs/2401.07883v1",
    "title": "The Chronicles of RAG: The Retriever, the Chunk and the Generator",
    "abstract": "Retrieval Augmented Generation (RAG) has become one of the most popular
paradigms for enabling LLMs to access external data, and also as a mechanism
for grounding to mitigate against hallucinations. When implementing RAG you can
face several challenges like effective integration of retrieval models,
efficient representation learning, data diversity, computational efficiency
optimization, evaluation, and quality of text generation. Given all these
challenges, every day a new technique to improve RAG appears, making it
unfeasible to experiment with all combinations for your problem. In this
context, this paper presents good practices to implement, optimize, and
evaluate RAG for the Brazilian Portuguese language, focusing on the
establishment of a simple pipeline for inference and experiments. We explored a
diverse set of methods to answer questions about the first Harry Potter book.
To generate the answers we used OpenAI's gpt-4, gpt-4-1106-preview,
gpt-3.5-turbo-1106, and Google's Gemini Pro. Focusing on the quality of the
retriever, our approach achieved an improvement of MRR@10 by 35.4% compared to
the baseline. When optimizing the input size in the application, we observed
that it is possible to further enhance it by 2.4%. Finally, we present the
complete architecture of the RAG with our recommendations.
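A sketch of the query-refinement step that RQ-RAG describes above (rewrite, decompose, and disambiguate before retrieval). Everything here is an assumption for illustration: `llm` and `search` are hypothetical callables and the prompt format is made up.

```python
def refine_and_retrieve(query, llm, search, k=5):
    """Refine the query (rewrite / decompose / disambiguate), then
    retrieve for the original and every refined variant, deduplicating."""
    refined = llm(
        "Rewrite this question and, if it is multi-hop, also split it "
        f"into sub-questions (one per line):\n{query}"
    ).splitlines()
    passages, seen = [], set()
    for sub in [query] + [r.strip() for r in refined if r.strip()]:
        for p in search(sub, k=k):
            if p not in seen:
                seen.add(p)
                passages.append(p)
    return passages
```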
As a result, we moved
from a baseline of 57.88% to a maximum relative score of 98.61%.",
    "authors": "Paulo Finardi, Leonardo Avila, Rodrigo Castaldoni, Pedro Gengo, Celio Larcher, Marcos Piau, Pablo Costa, Vinicius Caridá",
    "published": "2024-01-15",
    "updated": "2024-01-15",
    "primary_cat": "cs.LG",
    "cats": [
      "cs.LG",
      "cs.AI",
      "cs.CL",
      "cs.IR"
    ],
    "category": "Retrieval AND Augmented AND Generation AND RAG"
  },
  {
    "url": "http://arxiv.org/abs/2402.03181v3",
    "title": "C-RAG: Certified Generation Risks for Retrieval-Augmented Language Models",
    "abstract": "Despite the impressive capabilities of large language models (LLMs) across
diverse applications, they still suffer from trustworthiness issues, such as
hallucinations and misalignments. Retrieval-augmented language models (RAG)
have been proposed to enhance the credibility of generations by grounding
external knowledge, but the theoretical understanding of their generation
risks remains unexplored. In this paper, we answer: 1) whether RAG can indeed
lead to low generation risks, 2) how to provide provable guarantees on the
generation risks of RAG and vanilla LLMs, and 3) what sufficient conditions
enable RAG models to reduce generation risks. We propose C-RAG, the first
framework to certify generation risks for RAG models. Specifically, we provide
conformal risk analysis for RAG models and certify an upper confidence bound of
generation risks, which we refer to as conformal generation risk. We also
provide theoretical guarantees on conformal generation risks for general
bounded risk functions under test distribution shifts. We prove that RAG
achieves a lower conformal generation risk than that of a single LLM when the
quality of the retrieval model and transformer is non-trivial. Our intensive
empirical results demonstrate the soundness and tightness of our conformal
generation risk guarantees across four widely-used NLP datasets on four
state-of-the-art retrieval models.",
    "authors": "Mintong Kang, Nezihe Merve Gürel, Ning Yu, Dawn Song, Bo Li",
    "published": "2024-02-05",
    "updated": "2024-03-03",
    "primary_cat": "cs.AI",
    "cats": [
      "cs.AI",
      "cs.CL",
      "cs.IR"
    ],
    "category": "Retrieval AND Augmented AND Generation AND RAG"
  },
  {
    "url": "http://arxiv.org/abs/2401.00396v1",
    "title": "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models",
    "abstract": "Retrieval-augmented generation (RAG) has become a main technique for
alleviating hallucinations in large language models (LLMs). Despite the
integration of RAG, LLMs may still present unsupported or contradictory claims
to the retrieved contents. In order to develop effective hallucination
prevention strategies under RAG, it is important to create benchmark datasets
that can measure the extent of hallucination. This paper presents RAGTruth, a
corpus tailored for analyzing word-level hallucinations in various domains and
tasks within the standard RAG frameworks for LLM applications. RAGTruth
comprises nearly 18,000 naturally generated responses from diverse LLMs using
RAG. These responses have undergone meticulous manual annotations at both the
individual case and word levels, incorporating evaluations of hallucination
intensity. We not only benchmark hallucination frequencies across different
LLMs, but also critically assess the effectiveness of several existing
hallucination detection methodologies.
Furthermore, we show that using a\nhigh-quality dataset such as RAGTruth, it is possible to finetune a relatively\nsmall LLM and achieve a competitive level of performance in hallucination\ndetection when compared to the existing prompt-based approaches using\nstate-of-the-art large language models such as GPT-4.", + "authors": "Yuanhao Wu, Juno Zhu, Siliang Xu, Kashun Shum, Cheng Niu, Randy Zhong, Juntong Song, Tong Zhang", + "published": "2023-12-31", + "updated": "2023-12-31", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2401.11246v1", + "title": "Prompt-RAG: Pioneering Vector Embedding-Free Retrieval-Augmented Generation in Niche Domains, Exemplified by Korean Medicine", + "abstract": "We propose a natural language prompt-based retrieval augmented generation\n(Prompt-RAG), a novel approach to enhance the performance of generative large\nlanguage models (LLMs) in niche domains. Conventional RAG methods mostly\nrequire vector embeddings, yet the suitability of generic LLM-based embedding\nrepresentations for specialized domains remains uncertain. To explore and\nexemplify this point, we compared vector embeddings from Korean Medicine (KM)\nand Conventional Medicine (CM) documents, finding that KM document embeddings\ncorrelated more with token overlaps and less with human-assessed document\nrelatedness, in contrast to CM embeddings. Prompt-RAG, distinct from\nconventional RAG models, operates without the need for embedding vectors. Its\nperformance was assessed through a Question-Answering (QA) chatbot application,\nwhere responses were evaluated for relevance, readability, and informativeness.\nThe results showed that Prompt-RAG outperformed existing models, including\nChatGPT and conventional vector embedding-based RAGs, in terms of relevance and\ninformativeness. Despite challenges like content structuring and response\nlatency, the advancements in LLMs are expected to encourage the use of\nPrompt-RAG, making it a promising tool for other domains in need of RAG\nmethods.", + "authors": "Bongsu Kang, Jundong Kim, Tae-Rim Yun, Chang-Eop Kim", + "published": "2024-01-20", + "updated": "2024-01-20", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.IR", + "I.2.7; H.3.3; J.3" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2404.08940v1", + "title": "Introducing Super RAGs in Mistral 8x7B-v1", + "abstract": "The relentless pursuit of enhancing Large Language Models (LLMs) has led to\nthe advent of Super Retrieval-Augmented Generation (Super RAGs), a novel\napproach designed to elevate the performance of LLMs by integrating external\nknowledge sources with minimal structural modifications. This paper presents\nthe integration of Super RAGs into the Mistral 8x7B v1, a state-of-the-art LLM,\nand examines the resultant improvements in accuracy, speed, and user\nsatisfaction. Our methodology uses a fine-tuned instruct model setup and a\ncache tuning fork system, ensuring efficient and relevant data retrieval. The\nevaluation, conducted over several epochs, demonstrates significant\nenhancements across all metrics. The findings suggest that Super RAGs can\neffectively augment LLMs, paving the way for more sophisticated and reliable AI\nsystems. 
This research contributes to the field by providing empirical evidence
of the benefits of Super RAGs and offering insights into their potential
applications.",
    "authors": "Ayush Thakur, Raghav Gupta",
    "published": "2024-04-13",
    "updated": "2024-04-13",
    "primary_cat": "cs.IR",
    "cats": [
      "cs.IR",
      "cs.CL",
      "cs.LG"
    ],
    "category": "Retrieval AND Augmented AND Generation AND RAG"
  },
  {
    "url": "http://arxiv.org/abs/2401.06800v1",
    "title": "Reinforcement Learning for Optimizing RAG for Domain Chatbots",
    "abstract": "With the advent of Large Language Models (LLM), conversational assistants
have become prevalent for domain use cases. LLMs acquire the ability to perform
contextual question answering through training, and Retrieval Augmented
Generation (RAG) further enables the bot to answer domain-specific questions.
This paper describes a RAG-based approach for building a chatbot that answers
user's queries using Frequently Asked Questions (FAQ) data. We train an
in-house retrieval embedding model using infoNCE loss, and experimental results
demonstrate that the in-house model works significantly better than the
well-known general-purpose public embedding model, both in terms of retrieval
accuracy and Out-of-Domain (OOD) query detection. As an LLM, we use an open
API-based paid ChatGPT model. We noticed that previously retrieved context
could be used to generate an answer for specific patterns/sequences of queries
(e.g., follow-up queries). Hence, there is scope to optimize the number of
LLM tokens and the cost. Assuming a fixed retrieval model and an LLM, we optimize
the number of LLM tokens using Reinforcement Learning (RL). Specifically, we
propose a policy-based model external to the RAG, which interacts with the RAG
pipeline through policy actions and updates the policy to optimize the cost.
The policy model can perform two actions: to fetch FAQ context or skip
retrieval. We use the open API-based GPT-4 as the reward model. We then train a
policy model using policy gradient on multiple training chat sessions. As a
policy model, we experimented with a public gpt-2 model and an in-house BERT
model. With the proposed RL-based optimization combined with a similarity
threshold, we are able to achieve significant cost savings while getting a
slightly improved accuracy. Though we demonstrate results for the FAQ chatbot,
the proposed RL approach is generic and can be experimented with any existing
RAG pipeline.",
    "authors": "Mandar Kulkarni, Praveen Tangarajan, Kyung Kim, Anusua Trivedi",
    "published": "2024-01-10",
    "updated": "2024-01-10",
    "primary_cat": "cs.CL",
    "cats": [
      "cs.CL",
      "cs.AI"
    ],
    "category": "Retrieval AND Augmented AND Generation AND RAG"
  },
  {
    "url": "http://arxiv.org/abs/2405.01585v1",
    "title": "Tabular Embedding Model (TEM): Finetuning Embedding Models For Tabular RAG Applications",
    "abstract": "In recent times, Large Language Models have exhibited tremendous capabilities,
especially in the areas of mathematics, code generation and general-purpose
reasoning. However, for specialized domains, especially in applications that
require parsing and analyzing large chunks of numeric or tabular data, even
state-of-the-art (SOTA) models struggle. In this paper, we introduce a new
approach to solving domain-specific tabular data analysis tasks by presenting a
unique RAG workflow that mitigates the scalability issues of existing tabular
LLM solutions.
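The fetch-or-skip decision described above can be sketched as a simple inference-time gate; `policy`, `retrieve`, and `llm` are hypothetical callables, and the dict-based conversation state is an illustrative simplification of the paper's RL setup.

```python
def answer_with_gate(query, history, policy, retrieve, llm):
    """Cost-aware gate (sketch): a trained policy decides whether to
    fetch fresh FAQ context or reuse the previous turn's context,
    saving LLM tokens on follow-up queries. policy: (query, history)
    -> 'fetch' or 'skip' (assumed)."""
    if policy(query, history) == "fetch" or not history.get("context"):
        history["context"] = retrieve(query)
    prompt = f"Context:\n{history['context']}\n\nUser: {query}\nAssistant:"
    return llm(prompt)
```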
Specifically, we present Tabular Embedding Model (TEM), a novel
approach to fine-tune embedding models for tabular Retrieval-Augmented
Generation (RAG) applications. Embedding models form a crucial component in the
RAG workflow and even current SOTA embedding models struggle as they are
predominantly trained on textual datasets and thus underperform in scenarios
involving complex tabular data. The evaluation results showcase that our
approach not only outperforms current SOTA embedding models in this domain but
also does so with a notably smaller and more efficient model structure.",
    "authors": "Sujit Khanna, Shishir Subedi",
    "published": "2024-04-28",
    "updated": "2024-04-28",
    "primary_cat": "cs.AI",
    "cats": [
      "cs.AI",
      "cs.CL",
      "cs.IR"
    ],
    "category": "Retrieval AND Augmented AND Generation AND RAG"
  },
  {
    "url": "http://arxiv.org/abs/2404.13948v1",
    "title": "Typos that Broke the RAG's Back: Genetic Attack on RAG Pipeline by Simulating Documents in the Wild via Low-level Perturbations",
    "abstract": "The robustness of recent Large Language Models (LLMs) has become increasingly
crucial as their applicability expands across various domains and real-world
applications. Retrieval-Augmented Generation (RAG) is a promising solution for
addressing the limitations of LLMs, yet existing studies on the robustness of
RAG often overlook the interconnected relationships between RAG components or
the potential threats prevalent in real-world databases, such as minor textual
errors. In this work, we investigate two underexplored aspects when assessing
the robustness of RAG: 1) vulnerability to noisy documents through low-level
perturbations and 2) a holistic evaluation of RAG robustness. Furthermore, we
introduce a novel attack method, the Genetic Attack on RAG (GARAG),
which targets these aspects. Specifically, GARAG is designed to reveal
vulnerabilities within each component and test the overall system functionality
against noisy documents. We validate RAG robustness by applying our
GARAG to standard QA datasets, incorporating diverse retrievers and
LLMs. The experimental results show that GARAG consistently achieves high
attack success rates. Also, it severely degrades the performance of each
component and their synergy, highlighting the substantial risk that minor
textual inaccuracies pose in disrupting RAG systems in the real world.",
    "authors": "Sukmin Cho, Soyeong Jeong, Jeongyeon Seo, Taeho Hwang, Jong C. Park",
    "published": "2024-04-22",
    "updated": "2024-04-22",
    "primary_cat": "cs.CL",
    "cats": [
      "cs.CL"
    ],
    "category": "Retrieval AND Augmented AND Generation AND RAG"
  },
  {
    "url": "http://arxiv.org/abs/2402.17840v1",
    "title": "Follow My Instruction and Spill the Beans: Scalable Data Extraction from Retrieval-Augmented Generation Systems",
    "abstract": "Retrieval-Augmented Generation (RAG) improves pre-trained models by
incorporating external knowledge at test time to enable customized adaptation.
We study the risk of datastore leakage in Retrieval-In-Context RAG Language
Models (LMs). We show that an adversary can exploit LMs' instruction-following
capabilities to easily extract text data verbatim from the datastore of RAG
systems built with instruction-tuned LMs via prompt injection. The
vulnerability exists for a wide range of modern LMs that span Llama2,
Mistral/Mixtral, Vicuna, SOLAR, WizardLM, Qwen1.5, and Platypus2, and the
exploitability worsens as the model size scales up.
Extending our study to
production RAG models, GPTs, we design an attack that can cause datastore
leakage with a 100% success rate on 25 randomly selected customized GPTs with
at most 2 queries, and we extract text data verbatim at a rate of 41% from a
book of 77,000 words and 3% from a corpus of 1,569,000 words by prompting the
GPTs with only 100 queries generated by themselves.",
    "authors": "Zhenting Qi, Hanlin Zhang, Eric Xing, Sham Kakade, Himabindu Lakkaraju",
    "published": "2024-02-27",
    "updated": "2024-02-27",
    "primary_cat": "cs.CL",
    "cats": [
      "cs.CL",
      "cs.AI",
      "cs.CR",
      "cs.LG"
    ],
    "category": "Retrieval AND Augmented AND Generation AND RAG"
  },
  {
    "url": "http://arxiv.org/abs/2312.07796v1",
    "title": "Harnessing Retrieval-Augmented Generation (RAG) for Uncovering Knowledge Gaps",
    "abstract": "The paper presents a methodology for uncovering knowledge gaps on the
internet using the Retrieval Augmented Generation (RAG) model. By simulating
user search behaviour, the RAG system identifies and addresses gaps in
information retrieval systems. The study demonstrates the effectiveness of the
RAG system in generating relevant suggestions with a consistent accuracy of
93%. The methodology can be applied in various fields such as scientific
discovery, educational enhancement, research development, market analysis,
search engine optimisation, and content development. The results highlight the
value of identifying and understanding knowledge gaps to guide future
endeavours.",
    "authors": "Joan Figuerola Hurtado",
    "published": "2023-12-12",
    "updated": "2023-12-12",
    "primary_cat": "cs.IR",
    "cats": [
      "cs.IR",
      "cs.AI",
      "cs.CL"
    ],
    "category": "Retrieval AND Augmented AND Generation AND RAG"
  },
  {
    "url": "http://arxiv.org/abs/2404.17196v1",
    "title": "Human-Imperceptible Retrieval Poisoning Attacks in LLM-Powered Applications",
    "abstract": "Presently, with the assistance of advanced LLM application development
frameworks, more and more LLM-powered applications can effortlessly augment the
LLMs' knowledge with external content using the retrieval augmented generation
(RAG) technique. However, these frameworks' designs do not sufficiently
consider the risk of external content, thereby allowing attackers to
undermine the applications developed with these frameworks. In this paper, we
reveal a new threat to LLM-powered applications, termed retrieval poisoning,
where attackers can guide the application to yield malicious responses during
the RAG process. Specifically, through the analysis of LLM application
frameworks, attackers can craft documents visually indistinguishable from
benign ones. Despite the documents providing correct information, once they are
used as reference sources for RAG, the application is misled into generating
incorrect responses.
Our preliminary experiments indicate that attackers can
mislead LLMs with an 88.33% success rate, and achieve a 66.67% success rate
in the real-world application, demonstrating the potential impact of retrieval
poisoning.",
    "authors": "Quan Zhang, Binqi Zeng, Chijin Zhou, Gwihwan Go, Heyuan Shi, Yu Jiang",
    "published": "2024-04-26",
    "updated": "2024-04-26",
    "primary_cat": "cs.CR",
    "cats": [
      "cs.CR",
      "cs.AI"
    ],
    "category": "Retrieval AND Augmented AND Generation AND RAG"
  },
  {
    "url": "http://arxiv.org/abs/2404.04287v1",
    "title": "CONFLARE: CONFormal LArge language model REtrieval",
    "abstract": "Retrieval-augmented generation (RAG) frameworks enable large language models
(LLMs) to retrieve relevant information from a knowledge base and incorporate
it into the context for generating responses. This mitigates hallucinations and
allows for the updating of knowledge without retraining the LLM. However, RAG
does not guarantee valid responses if retrieval fails to identify the necessary
information as the context for response generation. Also, if there is
contradictory content, the RAG response will likely reflect only one of the two
possible responses. Therefore, quantifying uncertainty in the retrieval process
is crucial for ensuring RAG trustworthiness. In this report, we introduce a
four-step framework for applying conformal prediction to quantify retrieval
uncertainty in RAG frameworks. First, a calibration set of questions answerable
from the knowledge base is constructed. Each question's embedding is compared
against document embeddings to identify the most relevant document chunks
containing the answer and record their similarity scores. Given a
user-specified error rate (α), these similarity scores are then analyzed
to determine a similarity score cutoff threshold. During inference, all chunks
with similarity exceeding this threshold are retrieved to provide context to
the LLM, ensuring the true answer is captured in the context with a
(1-α) confidence level. We provide a Python package that enables users
to implement the entire workflow proposed in our work, only using LLMs and
without human intervention.",
    "authors": "Pouria Rouzrokh, Shahriar Faghani, Cooper U. Gamble, Moein Shariatnia, Bradley J. Erickson",
    "published": "2024-04-04",
    "updated": "2024-04-04",
    "primary_cat": "cs.CL",
    "cats": [
      "cs.CL",
      "cs.AI"
    ],
    "category": "Retrieval AND Augmented AND Generation AND RAG"
  },
  {
    "url": "http://arxiv.org/abs/2312.05708v1",
    "title": "Context Tuning for Retrieval Augmented Generation",
    "abstract": "Large language models (LLMs) have the remarkable ability to solve new tasks
with just a few examples, but they need access to the right tools. Retrieval
Augmented Generation (RAG) addresses this problem by retrieving a list of
relevant tools for a given task. However, RAG's tool retrieval step requires
all the required information to be explicitly present in the query. This is a
limitation, as semantic search, the widely adopted tool retrieval method, can
fail when the query is incomplete or lacks context. To address this limitation,
we propose Context Tuning for RAG, which employs a smart context retrieval
system to fetch relevant information that improves both tool retrieval and plan
generation. Our lightweight context retrieval model uses numerical,
categorical, and habitual usage signals to retrieve and rank context items.
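The calibration step that CONFLARE describes above reduces to choosing a similarity cutoff from calibration data. A minimal sketch under stated assumptions: the scores are similarities of the answer-bearing chunks for calibration questions, and the quantile rule below is a standard conformal-style choice, not the package's exact code.

```python
import math

def calibrate_threshold(calibration_scores, alpha=0.1):
    """Pick a cutoff so that roughly (1 - alpha) of future answer-bearing
    chunks score above it: the lower alpha-quantile of the calibration
    scores, with a simple finite-sample correction."""
    scores = sorted(calibration_scores)            # ascending
    n = len(scores)
    idx = max(math.floor(alpha * (n + 1)) - 1, 0)  # lower-tail index
    return scores[idx]

def retrieve_above_threshold(chunk_similarities, threshold):
    """At inference, keep every chunk whose similarity clears the cutoff."""
    return [i for i, s in enumerate(chunk_similarities) if s >= threshold]

# Example: with alpha = 0.1, ~90% of future answer chunks should clear t.
t = calibrate_threshold([0.62, 0.71, 0.55, 0.80, 0.67, 0.74, 0.59, 0.69,
                         0.77, 0.64], alpha=0.1)
print(t, retrieve_above_threshold([0.9, 0.5, 0.7], t))  # 0.55 [0, 2]
```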
Our
empirical results demonstrate that context tuning significantly enhances
semantic search, achieving a 3.5-fold and 1.5-fold improvement in Recall@K for
context retrieval and tool retrieval tasks respectively, and resulting in an
11.6% increase in LLM-based planner accuracy. Additionally, we show that our
proposed lightweight model using Reciprocal Rank Fusion (RRF) with LambdaMART
outperforms GPT-4 based retrieval. Moreover, we observe that context augmentation at
plan generation, even after tool retrieval, reduces hallucination.",
    "authors": "Raviteja Anantha, Tharun Bethi, Danil Vodianik, Srinivas Chappidi",
    "published": "2023-12-09",
    "updated": "2023-12-09",
    "primary_cat": "cs.IR",
    "cats": [
      "cs.IR",
      "cs.AI",
      "cs.LG"
    ],
    "category": "Retrieval AND Augmented AND Generation AND RAG"
  },
  {
    "url": "http://arxiv.org/abs/2311.09476v2",
    "title": "ARES: An Automated Evaluation Framework for Retrieval-Augmented Generation Systems",
    "abstract": "Evaluating retrieval-augmented generation (RAG) systems traditionally relies
on hand annotations for input queries, passages to retrieve, and responses to
generate. We introduce ARES, an Automated RAG Evaluation System, for evaluating
RAG systems along the dimensions of context relevance, answer faithfulness, and
answer relevance. By creating its own synthetic training data, ARES finetunes
lightweight LM judges to assess the quality of individual RAG components. To
mitigate potential prediction errors, ARES utilizes a small set of
human-annotated datapoints for prediction-powered inference (PPI). Across eight
different knowledge-intensive tasks in KILT, SuperGLUE, and AIS, ARES
accurately evaluates RAG systems while using only a few hundred human
annotations during evaluation. Furthermore, ARES judges remain effective across
domain shifts, proving accurate even after changing the type of queries and/or
documents used in the evaluated RAG systems. We make our code and datasets
publicly available on Github.",
    "authors": "Jon Saad-Falcon, Omar Khattab, Christopher Potts, Matei Zaharia",
    "published": "2023-11-16",
    "updated": "2024-03-31",
    "primary_cat": "cs.CL",
    "cats": [
      "cs.CL",
      "cs.AI",
      "cs.IR"
    ],
    "category": "Retrieval AND Augmented AND Generation AND RAG"
  },
  {
    "url": "http://arxiv.org/abs/2402.03367v2",
    "title": "RAG-Fusion: a New Take on Retrieval-Augmented Generation",
    "abstract": "Infineon has identified a need for engineers, account managers, and customers
to rapidly obtain product information. This problem is traditionally addressed
with retrieval-augmented generation (RAG) chatbots, but in this study, I
evaluated the use of the newly popularized RAG-Fusion method. RAG-Fusion
combines RAG and reciprocal rank fusion (RRF) by generating multiple queries,
reranking them with reciprocal scores and fusing the documents and scores.
Through manually evaluating answers on accuracy, relevance, and
comprehensiveness, I found that RAG-Fusion was able to provide accurate and
comprehensive answers due to the generated queries contextualizing the original
query from various perspectives. However, some answers strayed off topic when
the generated queries' relevance to the original query is insufficient.
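Reciprocal rank fusion, which RAG-Fusion builds on as described above, has a standard closed form: each document scores the sum of 1/(k + rank) over the ranked lists. A small self-contained sketch (k = 60 is the commonly used smoothing constant; the document IDs are made up):

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Standard RRF: a document's score is sum(1 / (k + rank)) over all
    ranked lists it appears in; documents ranked well everywhere win."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Fusing the result lists of three LLM-generated query variants:
fused = reciprocal_rank_fusion([["d1", "d2", "d3"],
                                ["d2", "d1", "d4"],
                                ["d2", "d5"]])
print(fused)  # 'd2' rises to the top: it is ranked highly in every list
```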
This
research marks significant progress in artificial intelligence (AI) and natural
language processing (NLP) applications and demonstrates transformations in a
global and multi-industry context.",
    "authors": "Zackary Rackauckas",
    "published": "2024-01-31",
    "updated": "2024-02-21",
    "primary_cat": "cs.IR",
    "cats": [
      "cs.IR",
      "cs.LG"
    ],
    "category": "Retrieval AND Augmented AND Generation AND RAG"
  },
  {
    "url": "http://arxiv.org/abs/2402.07483v1",
    "title": "T-RAG: Lessons from the LLM Trenches",
    "abstract": "Large Language Models (LLMs) have shown remarkable language capabilities
fueling attempts to integrate them into applications across a wide range of
domains. An important application area is question answering over private
enterprise documents where the main considerations are data security, which
necessitates applications that can be deployed on-prem, limited computational
resources and the need for a robust application that correctly responds to
queries. Retrieval-Augmented Generation (RAG) has emerged as the most prominent
framework for building LLM-based applications. While building a RAG is
relatively straightforward, making it a robust and reliable application
requires extensive customization and relatively deep knowledge of the
application domain. We share our experiences building and deploying an LLM
application for question answering over private organizational documents. Our
application combines the use of RAG with a finetuned open-source LLM.
Additionally, our system, which we call Tree-RAG (T-RAG), uses a tree structure
to represent entity hierarchies within the organization. This is used to
generate a textual description to augment the context when responding to user
queries pertaining to entities within the organization's hierarchy. Our
evaluations show that this combination performs better than a simple RAG or
finetuning implementation. Finally, we share some lessons learned based on our
experiences building an LLM application for real-world use.",
    "authors": "Masoomali Fatehkia, Ji Kim Lucas, Sanjay Chawla",
    "published": "2024-02-12",
    "updated": "2024-02-12",
    "primary_cat": "cs.AI",
    "cats": [
      "cs.AI",
      "cs.CL"
    ],
    "category": "Retrieval AND Augmented AND Generation AND RAG"
  },
  {
    "url": "http://arxiv.org/abs/2404.12879v1",
    "title": "Unlocking Multi-View Insights in Knowledge-Dense Retrieval-Augmented Generation",
    "abstract": "While Retrieval-Augmented Generation (RAG) plays a crucial role in the
application of Large Language Models (LLMs), existing retrieval methods in
knowledge-dense domains like law and medicine still suffer from a lack of
multi-perspective views, which are essential for improving interpretability and
reliability. Previous research on multi-view retrieval often focused solely on
different semantic forms of queries, neglecting the expression of specific
domain knowledge perspectives. This paper introduces a novel multi-view RAG
framework, MVRAG, tailored for knowledge-dense domains that utilizes
intention-aware query rewriting from multiple domain viewpoints to enhance
retrieval precision, thereby improving the effectiveness of the final
inference. Experiments conducted on legal and medical case retrieval
demonstrate significant improvements in recall and precision rates with our
framework.
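T-RAG's tree-to-text idea described above (render an entity's place in the hierarchy as prose that augments the context) can be sketched in a few lines; the `tree` mapping and the sentence templates are illustrative assumptions, not the paper's implementation.

```python
def describe_entity(tree, name, parent=None):
    """Render one entity's place in an organizational hierarchy as text
    to append to the RAG context. tree maps an entity name to a list of
    its direct children (assumed input format)."""
    children = tree.get(name, [])
    parts = [f"{name} is part of {parent}." if parent
             else f"{name} is a top-level unit."]
    if children:
        parts.append(f"{name} contains: {', '.join(children)}.")
    return " ".join(parts)

org = {"Company": ["Finance", "Engineering"], "Engineering": ["Platform"]}
print(describe_entity(org, "Engineering", parent="Company"))
# Engineering is part of Company. Engineering contains: Platform.
```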
Our multi-perspective retrieval approach unleashes the potential of
multi-view information to enhance RAG tasks, accelerating the further
application of LLMs in knowledge-intensive fields.",
    "authors": "Guanhua Chen, Wenhan Yu, Lei Sha",
    "published": "2024-04-19",
    "updated": "2024-04-19",
    "primary_cat": "cs.CL",
    "cats": [
      "cs.CL"
    ],
    "category": "Retrieval AND Augmented AND Generation AND RAG"
  },
  {
    "url": "http://arxiv.org/abs/2404.06910v1",
    "title": "Superposition Prompting: Improving and Accelerating Retrieval-Augmented Generation",
    "abstract": "Despite the successes of large language models (LLMs), they exhibit
significant drawbacks, particularly when processing long contexts. Their
inference cost scales quadratically with respect to sequence length, making it
expensive for deployment in some real-world text processing applications, such
as retrieval-augmented generation (RAG). Additionally, LLMs also exhibit the
\"distraction phenomenon,\" where irrelevant context in the prompt degrades
output quality. To address these drawbacks, we propose a novel RAG prompting
methodology, superposition prompting, which can be directly applied to
pre-trained transformer-based LLMs without the need for fine-tuning. At a high
level, superposition prompting allows the LLM to process input documents in
parallel prompt paths, discarding paths once they are deemed irrelevant. We
demonstrate the capability of our method to simultaneously enhance time
efficiency across a variety of question-answering benchmarks using multiple
pre-trained LLMs. Furthermore, our technique significantly improves accuracy
when the retrieved context is large relative to the context the model was trained
on. For example, our approach facilitates a 93x reduction in compute time
while improving accuracy by 43% on the NaturalQuestions-Open dataset with the
MPT-7B instruction-tuned model over naive RAG.",
    "authors": "Thomas Merth, Qichen Fu, Mohammad Rastegari, Mahyar Najibi",
    "published": "2024-04-10",
    "updated": "2024-04-10",
    "primary_cat": "cs.CL",
    "cats": [
      "cs.CL",
      "cs.AI",
      "cs.LG"
    ],
    "category": "Retrieval AND Augmented AND Generation AND RAG"
  },
  {
    "url": "http://arxiv.org/abs/2310.11511v1",
    "title": "Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection",
    "abstract": "Despite their remarkable capabilities, large language models (LLMs) often
produce responses containing factual inaccuracies due to their sole reliance on
the parametric knowledge they encapsulate. Retrieval-Augmented Generation
(RAG), an ad hoc approach that augments LMs with retrieval of relevant
knowledge, decreases such issues. However, indiscriminately retrieving and
incorporating a fixed number of retrieved passages, regardless of whether
retrieval is necessary, or passages are relevant, diminishes LM versatility or
can lead to unhelpful response generation. We introduce a new framework called
Self-Reflective Retrieval-Augmented Generation (Self-RAG) that enhances an LM's
quality and factuality through retrieval and self-reflection. Our framework
trains a single arbitrary LM that adaptively retrieves passages on-demand, and
generates and reflects on retrieved passages and its own generations using
special tokens, called reflection tokens. Generating reflection tokens makes
the LM controllable during the inference phase, enabling it to tailor its
behavior to diverse task requirements.
Experiments show that Self-RAG (7B and
13B parameters) significantly outperforms state-of-the-art LLMs and
retrieval-augmented models on a diverse set of tasks. Specifically, Self-RAG
outperforms ChatGPT and retrieval-augmented Llama2-chat on Open-domain QA,
reasoning and fact verification tasks, and it shows significant gains in
improving factuality and citation accuracy for long-form generations relative
to these models.",
    "authors": "Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, Hannaneh Hajishirzi",
    "published": "2023-10-17",
    "updated": "2023-10-17",
    "primary_cat": "cs.CL",
    "cats": [
      "cs.CL",
      "cs.AI",
      "cs.LG"
    ],
    "category": "Retrieval AND Augmented AND Generation AND RAG"
  },
  {
    "url": "http://arxiv.org/abs/2404.17347v1",
    "title": "InspectorRAGet: An Introspection Platform for RAG Evaluation",
    "abstract": "Large Language Models (LLMs) have become a popular approach for implementing
Retrieval Augmented Generation (RAG) systems, and a significant amount of
effort has been spent on building good models and metrics. In spite of
increased recognition of the need for rigorous evaluation of RAG systems, few
tools exist that go beyond the creation of model output and automatic
calculation. We present InspectorRAGet, an introspection platform for RAG
evaluation. InspectorRAGet allows the user to analyze aggregate and
instance-level performance of RAG systems, using both human and algorithmic
metrics as well as annotator quality. InspectorRAGet is suitable for multiple
use cases and is available publicly to the community. The demo video is
available at https://youtu.be/MJhe8QIXcEc",
    "authors": "Kshitij Fadnis, Siva Sankalp Patel, Odellia Boni, Yannis Katsis, Sara Rosenthal, Benjamin Sznajder, Marina Danilevsky",
    "published": "2024-04-26",
    "updated": "2024-04-26",
    "primary_cat": "cs.SE",
    "cats": [
      "cs.SE",
      "cs.HC"
    ],
    "category": "Retrieval AND Augmented AND Generation AND RAG"
  },
  {
    "url": "http://arxiv.org/abs/2402.01717v1",
    "title": "From RAG to QA-RAG: Integrating Generative AI for Pharmaceutical Regulatory Compliance Process",
    "abstract": "Regulatory compliance in the pharmaceutical industry entails navigating
through complex and voluminous guidelines, often requiring significant human
resources. To address these challenges, our study introduces a chatbot model
that utilizes generative AI and the Retrieval Augmented Generation (RAG)
method. This chatbot is designed to search for guideline documents relevant to
the user inquiries and provide answers based on the retrieved guidelines.
Recognizing the inherent need for high reliability in this domain, we propose
the Question and Answer Retrieval Augmented Generation (QA-RAG) model. In
comparative experiments, the QA-RAG model demonstrated a significant
improvement in accuracy, outperforming all other baselines including
conventional RAG methods. This paper details QA-RAG's structure and performance
evaluation, emphasizing its potential for the regulatory compliance domain in
the pharmaceutical industry and beyond.
We have made our work publicly
available for further research and development.",
    "authors": "Jaewoong Kim, Moohong Min",
    "published": "2024-01-26",
    "updated": "2024-01-26",
    "primary_cat": "cs.CL",
    "cats": [
      "cs.CL",
      "cs.AI",
      "cs.IR",
      "I.2.7; I.2.1; J.3"
    ],
    "category": "Retrieval AND Augmented AND Generation AND RAG"
  },
  {
    "url": "http://arxiv.org/abs/2403.01193v2",
    "title": "RAGged Edges: The Double-Edged Sword of Retrieval-Augmented Chatbots",
    "abstract": "Large language models (LLMs) like ChatGPT demonstrate the remarkable progress
of artificial intelligence. However, their tendency to hallucinate -- generate
plausible but false information -- poses a significant challenge. This issue is
critical, as seen in recent court cases where ChatGPT's use led to citations of
non-existent legal rulings. This paper explores how Retrieval-Augmented
Generation (RAG) can counter hallucinations by integrating external knowledge
with prompts. We empirically evaluate RAG against standard LLMs using prompts
designed to induce hallucinations. Our results show that RAG increases accuracy
in some cases, but can still be misled when prompts directly contradict the
model's pre-trained understanding. These findings highlight the complex nature
of hallucinations and the need for more robust solutions to ensure LLM
reliability in real-world applications. We offer practical recommendations for
RAG deployment and discuss implications for the development of more trustworthy
LLMs.",
    "authors": "Philip Feldman, James R. Foulds, Shimei Pan",
    "published": "2024-03-02",
    "updated": "2024-03-13",
    "primary_cat": "cs.CL",
    "cats": [
      "cs.CL",
      "cs.AI",
      "H.3.3; I.2.7"
    ],
    "category": "Retrieval AND Augmented AND Generation AND RAG"
  },
  {
    "url": "http://arxiv.org/abs/2401.02333v3",
    "title": "Beyond Extraction: Contextualising Tabular Data for Efficient Summarisation by Language Models",
    "abstract": "The conventional use of the Retrieval-Augmented Generation (RAG) architecture
has proven effective for retrieving information from diverse documents.
However, challenges arise in handling complex table queries, especially within
PDF documents containing intricate tabular structures. This research introduces
an innovative approach to enhance the accuracy of complex table queries in
RAG-based systems. Our methodology involves storing PDFs in the retrieval
database and extracting tabular content separately. The extracted tables
undergo a process of context enrichment, concatenating headers with
corresponding values. To ensure a comprehensive understanding of the enriched
data, we employ a fine-tuned version of the Llama-2-chat language model for
summarisation within the RAG architecture. Furthermore, we augment the tabular
data with contextual sense using the ChatGPT 3.5 API through a one-shot prompt.
This enriched data is then fed into the retrieval database alongside other
PDFs.
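The header-with-value enrichment step described above can be illustrated with a minimal sketch; the serialization (one self-describing line per row) is an assumption for clarity, not the paper's exact format.

```python
def enrich_table(headers, rows):
    """Context enrichment for extracted tables: pair every cell with its
    column header so each row becomes self-describing text before it is
    embedded and indexed."""
    lines = []
    for row in rows:
        pairs = [f"{h}: {v}" for h, v in zip(headers, row)]
        lines.append("; ".join(pairs))
    return "\n".join(lines)

print(enrich_table(["Year", "Revenue"], [["2022", "$10M"], ["2023", "$14M"]]))
# Year: 2022; Revenue: $10M
# Year: 2023; Revenue: $14M
```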
Our approach aims to significantly improve the precision of complex table\nqueries, offering a promising solution to a longstanding challenge in\ninformation retrieval.", + "authors": "Uday Allu, Biddwan Ahmed, Vishesh Tripathi", + "published": "2024-01-04", + "updated": "2024-02-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2309.01105v2", + "title": "A Study on the Implementation of Generative AI Services Using an Enterprise Data-Based LLM Application Architecture", + "abstract": "This study presents a method for implementing generative AI services by\nutilizing the Large Language Models (LLM) application architecture. With recent\nadvancements in generative AI technology, LLMs have gained prominence across\nvarious domains. In this context, the research addresses the challenge of\ninformation scarcity and proposes specific remedies by harnessing LLM\ncapabilities. The investigation delves into strategies for mitigating the issue\nof inadequate data, offering tailored solutions. The study delves into the\nefficacy of employing fine-tuning techniques and direct document integration to\nalleviate data insufficiency. A significant contribution of this work is the\ndevelopment of a Retrieval-Augmented Generation (RAG) model, which tackles the\naforementioned challenges. The RAG model is carefully designed to enhance\ninformation storage and retrieval processes, ensuring improved content\ngeneration. The research elucidates the key phases of the information storage\nand retrieval methodology underpinned by the RAG model. A comprehensive\nanalysis of these steps is undertaken, emphasizing their significance in\naddressing the scarcity of data. The study highlights the efficacy of the\nproposed method, showcasing its applicability through illustrative instances.\nBy implementing the RAG model for information storage and retrieval, the\nresearch not only contributes to a deeper comprehension of generative AI\ntechnology but also facilitates its practical usability within enterprises\nutilizing LLMs. This work holds substantial value in advancing the field of\ngenerative AI, offering insights into enhancing data-driven content generation\nand fostering active utilization of LLM-based services within corporate\nsettings.", + "authors": "Cheonsu Jeong", + "published": "2023-09-03", + "updated": "2023-09-18", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2404.08189v1", + "title": "Reducing hallucination in structured outputs via Retrieval-Augmented Generation", + "abstract": "A common and fundamental limitation of Generative AI (GenAI) is its\npropensity to hallucinate. While large language models (LLM) have taken the\nworld by storm, without eliminating or at least reducing hallucinations,\nreal-world GenAI systems may face challenges in user adoption. In the process\nof deploying an enterprise application that produces workflows based on natural\nlanguage requirements, we devised a system leveraging Retrieval Augmented\nGeneration (RAG) to greatly improve the quality of the structured output that\nrepresents such workflows. Thanks to our implementation of RAG, our proposed\nsystem significantly reduces hallucinations in the output and improves the\ngeneralization of our LLM in out-of-domain settings. 
In addition, we show that\nusing a small, well-trained retriever encoder can reduce the size of the\naccompanying LLM, thereby making deployments of LLM-based systems less\nresource-intensive.", + "authors": "Patrice B\u00e9chard, Orlando Marquez Ayala", + "published": "2024-04-12", + "updated": "2024-04-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL", + "cs.IR" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2307.05915v2", + "title": "Prompt Generate Train (PGT): Few-shot Domain Adaption of Retrieval Augmented Generation Models for Open Book Question-Answering", + "abstract": "We propose a framework - Prompt, Generate, Train (PGT) - to efficiently\ndevelop a generative question-answering model for open-book question-answering\nover a proprietary collection of text documents. The framework adapts a\nretriever augmented generation (RAG) model to the target domain using\nsupervised fine-tuning and reinforcement learning with synthetic feedback in a\nfew-shot setting. This, we hypothesize, will yield an aligned, uncertainty\ncalibrated model that is competitive with GPT-4 based in-context retrieval\naugmented generation in generating relevant answers at lower serving costs. The\nframework's synthetic generation pipeline will generate synthetic training data\ncomprising tuples using an open-source LLM and a\nnovel consistency filtering scheme. The pipeline will be designed to generate\nboth abstractive and extractive questions that span the entire corpus. The\nframework proposes to fine-tune a smaller RAG model comprising a dense\nretriever (ColBERTv2) and a smaller sized LLM on the synthetic dataset. In\nparallel, the framework will train a Reward model to score domain grounded\nanswers higher than hallucinated answers using an a priori relevance ordering\nof synthetically assembled samples. In the next phase, the framework will align\nthe RAG model with the target domain using reinforcement learning (Proximal\nPolicy Optimization). This step may improve the RAG model's ability to generate\ngrounded answers and ignore out of domain questions. In the final phase, the\nframework will calibrate the model's uncertainty for extractive\nquestion-answers.", + "authors": "C. S. Krishna", + "published": "2023-07-12", + "updated": "2023-07-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2310.20158v1", + "title": "GAR-meets-RAG Paradigm for Zero-Shot Information Retrieval", + "abstract": "Given a query and a document corpus, the information retrieval (IR) task is\nto output a ranked list of relevant documents. Combining large language models\n(LLMs) with embedding-based retrieval models, recent work shows promising\nresults on the zero-shot retrieval problem, i.e., no access to labeled data\nfrom the target domain. Two such popular paradigms are generation-augmented\nretrieval or GAR (generate additional context for the query and then retrieve),\nand retrieval-augmented generation or RAG (retrieve relevant documents as\ncontext and then generate answers). The success of these paradigms hinges on\n(i) high-recall retrieval models, which are difficult to obtain in the\nzero-shot setting, and (ii) high-precision (re-)ranking models which typically\nneed a good initialization. In this work, we propose a novel GAR-meets-RAG\nrecurrence formulation that overcomes the challenges of existing paradigms. 
Our\nmethod iteratively improves retrieval (via GAR) and rewrite (via RAG) stages in\nthe zero-shot setting. A key design principle is that the rewrite-retrieval\nstages improve the recall of the system and a final re-ranking stage improves\nthe precision. We conduct extensive experiments on zero-shot passage retrieval\nbenchmarks, BEIR and TREC-DL. Our method establishes a new state-of-the-art in\nthe BEIR benchmark, outperforming previous best results in Recall@100 and\nnDCG@10 metrics on 6 out of 8 datasets, with up to 17% relative gains over the\nprevious best.", + "authors": "Daman Arora, Anush Kini, Sayak Ray Chowdhury, Nagarajan Natarajan, Gaurav Sinha, Amit Sharma", + "published": "2023-10-31", + "updated": "2023-10-31", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2403.14374v1", + "title": "FIT-RAG: Black-Box RAG with Factual Information and Token Reduction", + "abstract": "Due to the extraordinarily large number of parameters, fine-tuning Large\nLanguage Models (LLMs) to update long-tail or out-of-date knowledge is\nimpractical in lots of applications. To avoid fine-tuning, we can alternatively\ntreat a LLM as a black-box (i.e., freeze the parameters of the LLM) and augment\nit with a Retrieval-Augmented Generation (RAG) system, namely black-box RAG.\nRecently, black-box RAG has achieved success in knowledge-intensive tasks and\nhas gained much attention. Existing black-box RAG methods typically fine-tune\nthe retriever to cater to LLMs' preferences and concatenate all the retrieved\ndocuments as the input, which suffers from two issues: (1) Ignorance of Factual\nInformation. The LLM preferred documents may not contain the factual\ninformation for the given question, which can mislead the retriever and hurt\nthe effectiveness of black-box RAG; (2) Waste of Tokens. Simply concatenating\nall the retrieved documents brings large amounts of unnecessary tokens for\nLLMs, which degenerates the efficiency of black-box RAG. To address these\nissues, this paper proposes a novel black-box RAG framework which utilizes the\nfactual information in the retrieval and reduces the number of tokens for\naugmentation, dubbed FIT-RAG. FIT-RAG utilizes the factual information by\nconstructing a bi-label document scorer. Besides, it reduces the tokens by\nintroducing a self-knowledge recognizer and a sub-document-level token reducer.\nFIT-RAG achieves both superior effectiveness and efficiency, which is validated\nby extensive experiments across three open-domain question-answering datasets:\nTriviaQA, NQ and PopQA. FIT-RAG can improve the answering accuracy of\nLlama2-13B-Chat by 14.3\\% on TriviaQA, 19.9\\% on NQ and 27.5\\% on PopQA,\nrespectively. 
Furthermore, it can save approximately half of the tokens on\naverage across the three datasets.", + "authors": "Yuren Mao, Xuemei Dong, Wenyi Xu, Yunjun Gao, Bin Wei, Ying Zhang", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.IR" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2401.14887v4", + "title": "The Power of Noise: Redefining Retrieval for RAG Systems", + "abstract": "Retrieval-Augmented Generation (RAG) has recently emerged as a method to\nextend beyond the pre-trained knowledge of Large Language Models by augmenting\nthe original prompt with relevant passages or documents retrieved by an\nInformation Retrieval (IR) system. RAG has become increasingly important for\nGenerative AI solutions, especially in enterprise settings or in any domain in\nwhich knowledge is constantly refreshed and cannot be memorized in the LLM. We\nargue here that the retrieval component of RAG systems, be it dense or sparse,\ndeserves increased attention from the research community, and accordingly, we\nconduct the first comprehensive and systematic examination of the retrieval\nstrategy of RAG systems. We focus, in particular, on the type of passages IR\nsystems within a RAG solution should retrieve. Our analysis considers multiple\nfactors, such as the relevance of the passages included in the prompt context,\ntheir position, and their number. One counter-intuitive finding of this work is\nthat the retriever's highest-scoring documents that are not directly relevant\nto the query (e.g., do not contain the answer) negatively impact the\neffectiveness of the LLM. Even more surprising, we discovered that adding\nrandom documents in the prompt improves the LLM accuracy by up to 35%. These\nresults highlight the need to investigate the appropriate strategies when\nintegrating retrieval with LLMs, thereby laying the groundwork for future\nresearch in this area.", + "authors": "Florin Cuconasu, Giovanni Trappolini, Federico Siciliano, Simone Filice, Cesare Campagnano, Yoelle Maarek, Nicola Tonellotto, Fabrizio Silvestri", + "published": "2024-01-26", + "updated": "2024-05-01", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2402.11891v1", + "title": "FeB4RAG: Evaluating Federated Search in the Context of Retrieval Augmented Generation", + "abstract": "Federated search systems aggregate results from multiple search engines,\nselecting appropriate sources to enhance result quality and align with user\nintent. With the increasing uptake of Retrieval-Augmented Generation (RAG)\npipelines, federated search can play a pivotal role in sourcing relevant\ninformation across heterogeneous data sources to generate informed responses.\nHowever, existing datasets, such as those developed in the past TREC FedWeb\ntracks, predate the RAG paradigm shift and lack representation of modern\ninformation retrieval challenges. To bridge this gap, we present FeB4RAG, a\nnovel dataset specifically designed for federated search within RAG frameworks.\nThis dataset, derived from 16 sub-collections of the widely used BEIR\nbenchmarking collection, includes 790 information requests (akin to\nconversational queries) tailored for chatbot applications, along with top\nresults returned by each resource and associated LLM-derived relevance\njudgements. 
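Returning to FIT-RAG's design above, the bi-label scoring and sub-document token reduction can be pictured with a hedged sketch; the scorer callables, thresholds, and sentence-level trimming are placeholders, not the paper's implementation:

```python
# Hedged sketch of FIT-RAG-style selection: each candidate document gets two
# labels (contains factual info / preferred by the LLM), and selected
# documents are trimmed at sub-document level to reduce prompt tokens.
# score_factual, score_llm_preference and the 0.5 thresholds are placeholders.

def select_and_reduce(question, docs, score_factual, score_llm_preference,
                      max_tokens_per_doc=128):
    selected = []
    for doc in docs:
        has_fact = score_factual(question, doc) > 0.5          # label 1
        preferred = score_llm_preference(question, doc) > 0.5  # label 2
        if has_fact or preferred:
            # crude sub-document token reducer: keep sentences until the
            # per-document budget is exhausted
            kept, budget = [], max_tokens_per_doc
            for sent in doc.split(". "):
                cost = len(sent.split())
                if cost > budget:
                    break
                kept.append(sent)
                budget -= cost
            selected.append(". ".join(kept))
    return selected
```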
Additionally, to support the need for this collection, we\ndemonstrate the impact on response generation of a high quality federated\nsearch system for RAG compared to a naive approach to federated search. We do\nso through a qualitative, side-by-side comparison of the answers generated by\nthe RAG pipeline under each approach. Our collection fosters and supports the\ndevelopment and evaluation of new federated search methods, especially in the\ncontext of RAG pipelines.", + "authors": "Shuai Wang, Ekaterina Khramtsova, Shengyao Zhuang, Guido Zuccon", + "published": "2024-02-19", + "updated": "2024-02-19", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2403.11413v1", + "title": "Dynamic Contexts for Generating Suggestion Questions in RAG Based Conversational Systems", + "abstract": "When interacting with Retrieval-Augmented Generation (RAG)-based\nconversational agents, the users must carefully craft their queries to be\nunderstood correctly. Yet, understanding the system's capabilities can be\nchallenging for the users, leading to ambiguous questions that necessitate\nfurther clarification. This work aims to bridge the gap by developing a\nsuggestion question generator. To generate suggestion questions, our approach\ninvolves utilizing dynamic context, which includes both dynamic few-shot\nexamples and dynamically retrieved contexts. Through experiments, we show that\nthe dynamic contexts approach can generate better suggestion questions as\ncompared to other prompting approaches.", + "authors": "Anuja Tayal, Aman Tyagi", + "published": "2024-03-18", + "updated": "2024-03-18", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2402.07867v1", + "title": "PoisonedRAG: Knowledge Poisoning Attacks to Retrieval-Augmented Generation of Large Language Models", + "abstract": "Large language models (LLMs) have achieved remarkable success due to their\nexceptional generative capabilities. Despite their success, they also have\ninherent limitations such as a lack of up-to-date knowledge and hallucination.\nRetrieval-Augmented Generation (RAG) is a state-of-the-art technique to\nmitigate those limitations. In particular, given a question, RAG retrieves\nrelevant knowledge from a knowledge database to augment the input of the LLM.\nFor instance, the retrieved knowledge could be a set of top-k texts that are\nmost semantically similar to the given question when the knowledge database\ncontains millions of texts collected from Wikipedia. As a result, the LLM could\nutilize the retrieved knowledge as the context to generate an answer for the\ngiven question. Existing studies mainly focus on improving the accuracy or\nefficiency of RAG, leaving its security largely unexplored. We aim to bridge\nthe gap in this work. Particularly, we propose PoisonedRAG, a set of knowledge\npoisoning attacks to RAG, where an attacker could inject a few poisoned texts\ninto the knowledge database such that the LLM generates an attacker-chosen\ntarget answer for an attacker-chosen target question. We formulate knowledge\npoisoning attacks as an optimization problem, whose solution is a set of\npoisoned texts. Depending on the background knowledge (e.g., black-box and\nwhite-box settings) of an attacker on the RAG, we propose two solutions to\nsolve the optimization problem, respectively. 
Our results on multiple benchmark\ndatasets and LLMs show our attacks could achieve 90% attack success rates when\ninjecting 5 poisoned texts for each target question into a database with\nmillions of texts. We also evaluate recent defenses and our results show they\nare insufficient to defend against our attacks, highlighting the need for new\ndefenses.", + "authors": "Wei Zou, Runpeng Geng, Binghui Wang, Jinyuan Jia", + "published": "2024-02-12", + "updated": "2024-02-12", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR", + "cs.LG" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2401.08406v3", + "title": "RAG vs Fine-tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture", + "abstract": "There are two common ways in which developers are incorporating proprietary\nand domain-specific data when building applications of Large Language Models\n(LLMs): Retrieval-Augmented Generation (RAG) and Fine-Tuning. RAG augments the\nprompt with the external data, while fine-tuning incorporates the additional\nknowledge into the model itself. However, the pros and cons of both approaches\nare not well understood. In this paper, we propose a pipeline for fine-tuning\nand RAG, and present the tradeoffs of both for multiple popular LLMs, including\nLlama2-13B, GPT-3.5, and GPT-4. Our pipeline consists of multiple stages,\nincluding extracting information from PDFs, generating questions and answers,\nusing them for fine-tuning, and leveraging GPT-4 for evaluating the results. We\npropose metrics to assess the performance of different stages of the RAG and\nfine-tuning pipeline. We conduct an in-depth study on an agricultural dataset.\nAgriculture as an industry has not seen much penetration of AI, and we study a\npotentially disruptive application - what if we could provide location-specific\ninsights to a farmer? Our results show the effectiveness of our dataset\ngeneration pipeline in capturing geographic-specific knowledge, and the\nquantitative and qualitative benefits of RAG and fine-tuning. We see an\naccuracy increase of over 6 p.p. when fine-tuning the model and this is\ncumulative with RAG, which increases accuracy by 5 p.p. further. In one\nparticular experiment, we also demonstrate that the fine-tuned model leverages\ninformation from across geographies to answer specific questions, increasing\nanswer similarity from 47% to 72%. Overall, the results point to how systems\nbuilt using LLMs can be adapted to respond and incorporate knowledge across a\ndimension that is critical for a specific industry, paving the way for further\napplications of LLMs in other industrial domains.", + "authors": "Angels Balaguer, Vinamra Benara, Renato Luiz de Freitas Cunha, Roberto de M. Estev\u00e3o Filho, Todd Hendry, Daniel Holstein, Jennifer Marsman, Nick Mecklenburg, Sara Malvar, Leonardo O. Nunes, Rafael Padilha, Morris Sharp, Bruno Silva, Swati Sharma, Vijay Aski, Ranveer Chandra", + "published": "2024-01-16", + "updated": "2024-01-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2401.17043v2", + "title": "CRUD-RAG: A Comprehensive Chinese Benchmark for Retrieval-Augmented Generation of Large Language Models", + "abstract": "Retrieval-Augmented Generation (RAG) is a technique that enhances the\ncapabilities of large language models (LLMs) by incorporating external\nknowledge sources. 
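The PoisonedRAG threat model described above can be illustrated with a deliberately naive sketch; `embed` is a placeholder, and the paper's actual attack solves an optimization problem rather than this simple concatenation:

```python
# Conceptual sketch of the PoisonedRAG threat model: a poisoned passage must
# (a) rank high for the target question under similarity retrieval and
# (b) carry the attacker-chosen answer. embed() is a placeholder encoder.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve_top_k(query, corpus, embed, k=3):
    scores = [(cosine(embed(query), embed(doc)), doc) for doc in corpus]
    return [doc for _, doc in sorted(scores, reverse=True)[:k]]

def naive_poison(target_question, target_answer):
    # Prepending the target question verbatim maximizes similarity to that
    # query; the paper frames this crafting step as an optimization problem
    # and reports ~90% success with only 5 such texts per question.
    return f"{target_question} {target_answer}"
```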
This method addresses common LLM limitations, including\noutdated information and the tendency to produce inaccurate \"hallucinated\"\ncontent. However, the evaluation of RAG systems is challenging, as existing\nbenchmarks are limited in scope and diversity. Most of the current benchmarks\npredominantly assess question-answering applications, overlooking the broader\nspectrum of situations where RAG could prove advantageous. Moreover, they only\nevaluate the performance of the LLM component of the RAG pipeline in the\nexperiments, and neglect the influence of the retrieval component and the\nexternal knowledge database. To address these issues, this paper constructs a\nlarge-scale and more comprehensive benchmark, and evaluates all the components\nof RAG systems in various RAG application scenarios. Specifically, we have\ncategorized the range of RAG applications into four distinct types-Create,\nRead, Update, and Delete (CRUD), each representing a unique use case. \"Create\"\nrefers to scenarios requiring the generation of original, varied content.\n\"Read\" involves responding to intricate questions in knowledge-intensive\nsituations. \"Update\" focuses on revising and rectifying inaccuracies or\ninconsistencies in pre-existing texts. \"Delete\" pertains to the task of\nsummarizing extensive texts into more concise forms. For each of these CRUD\ncategories, we have developed comprehensive datasets to evaluate the\nperformance of RAG systems. We also analyze the effects of various components\nof the RAG system, such as the retriever, the context length, the knowledge\nbase construction, and the LLM. Finally, we provide useful insights for\noptimizing the RAG technology for different scenarios.", + "authors": "Yuanjie Lyu, Zhiyu Li, Simin Niu, Feiyu Xiong, Bo Tang, Wenjin Wang, Hao Wu, Huanyong Liu, Tong Xu, Enhong Chen, Yi Luo, Peng Cheng, Haiying Deng, Zhonghao Wang, Zijia Lu", + "published": "2024-01-30", + "updated": "2024-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2404.00657v1", + "title": "Observations on Building RAG Systems for Technical Documents", + "abstract": "Retrieval augmented generation (RAG) for technical documents creates\nchallenges as embeddings do not often capture domain information. We review\nprior art for important factors affecting RAG and perform experiments to\nhighlight best practices and potential challenges to build RAG systems for\ntechnical documents.", + "authors": "Sumit Soman, Sujoy Roychowdhury", + "published": "2024-03-31", + "updated": "2024-03-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL", + "I.2.7" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2404.13781v1", + "title": "Evaluating Retrieval Quality in Retrieval-Augmented Generation", + "abstract": "Evaluating retrieval-augmented generation (RAG) presents challenges,\nparticularly for retrieval models within these systems. Traditional end-to-end\nevaluation methods are computationally expensive. Furthermore, evaluation of\nthe retrieval model's performance based on query-document relevance labels\nshows a small correlation with the RAG system's downstream performance. We\npropose a novel evaluation approach, eRAG, where each document in the retrieval\nlist is individually utilized by the large language model within the RAG\nsystem. 
The output generated for each document is then evaluated based on the\ndownstream task ground truth labels. In this manner, the downstream performance\nfor each document serves as its relevance label. We employ various downstream\ntask metrics to obtain document-level annotations and aggregate them using\nset-based or ranking metrics. Extensive experiments on a wide range of datasets\ndemonstrate that eRAG achieves a higher correlation with downstream RAG\nperformance compared to baseline methods, with improvements in Kendall's $\\tau$\ncorrelation ranging from 0.168 to 0.494. Additionally, eRAG offers significant\ncomputational advantages, improving runtime and consuming up to 50 times less\nGPU memory than end-to-end evaluation.", + "authors": "Alireza Salemi, Hamed Zamani", + "published": "2024-04-21", + "updated": "2024-04-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.IR" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2403.05676v1", + "title": "PipeRAG: Fast Retrieval-Augmented Generation via Algorithm-System Co-design", + "abstract": "Retrieval-augmented generation (RAG) can enhance the generation quality of\nlarge language models (LLMs) by incorporating external token databases.\nHowever, retrievals from large databases can constitute a substantial portion\nof the overall generation time, particularly when retrievals are periodically\nperformed to align the retrieved content with the latest states of generation.\nIn this paper, we introduce PipeRAG, a novel algorithm-system co-design\napproach to reduce generation latency and enhance generation quality. PipeRAG\nintegrates (1) pipeline parallelism to enable concurrent retrieval and\ngeneration processes, (2) flexible retrieval intervals to maximize the\nefficiency of pipeline parallelism, and (3) a performance model to\nautomatically balance retrieval quality and latency based on the generation\nstates and underlying hardware. Our evaluation shows that, by combining the\nthree aforementioned methods, PipeRAG achieves up to 2.6$\\times$ speedup in\nend-to-end generation latency while improving generation quality. These\npromising results showcase the effectiveness of co-designing algorithms with\nunderlying systems, paving the way for the adoption of PipeRAG in future RAG\nsystems.", + "authors": "Wenqi Jiang, Shuai Zhang, Boran Han, Jie Wang, Bernie Wang, Tim Kraska", + "published": "2024-03-08", + "updated": "2024-03-08", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2404.07221v1", + "title": "Improving Retrieval for RAG based Question Answering Models on Financial Documents", + "abstract": "The effectiveness of Large Language Models (LLMs) in generating accurate\nresponses relies heavily on the quality of input provided, particularly when\nemploying Retrieval Augmented Generation (RAG) techniques. RAG enhances LLMs by\nsourcing the most relevant text chunk(s) to base queries upon. Despite the\nsignificant advancements in LLMs' response quality in recent years, users may\nstill encounter inaccuracies or irrelevant answers; these issues often stem\nfrom suboptimal text chunk retrieval by RAG rather than the inherent\ncapabilities of LLMs. To augment the efficacy of LLMs, it is crucial to refine\nthe RAG process. This paper explores the existing constraints of RAG pipelines\nand introduces methodologies for enhancing text retrieval. 
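The eRAG procedure summarized above reduces to a simple loop: each retrieved document is used alone as context, and the downstream metric on the resulting answer becomes that document's relevance label. A minimal sketch, with `llm_answer` and `metric` as placeholder callables:

```python
# Sketch of eRAG-style document-level evaluation: each retrieved document is
# passed to the LLM on its own, and the downstream metric on that
# single-document answer serves as the document's relevance label.

def erag_labels(question, retrieved_docs, gold_answer, llm_answer, metric):
    labels = []
    for doc in retrieved_docs:
        answer = llm_answer(question, context=[doc])  # one document at a time
        labels.append(metric(answer, gold_answer))    # downstream performance
    return labels

def aggregate(labels):
    # Set-based aggregation (mean); a ranking metric such as nDCG over the
    # retrieval order is an equally valid choice per the paper.
    return sum(labels) / len(labels) if labels else 0.0
```

Because each document is evaluated independently, the per-document calls are short and parallelizable, which is consistent with the reported runtime and GPU-memory advantages over end-to-end evaluation.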
It delves into\nstrategies such as sophisticated chunking techniques, query expansion, the\nincorporation of metadata annotations, the application of re-ranking\nalgorithms, and the fine-tuning of embedding algorithms. Implementing these\napproaches can substantially improve the retrieval quality, thereby elevating\nthe overall performance and reliability of LLMs in processing and responding to\nqueries.", + "authors": "Spurthi Setty, Katherine Jijo, Eden Chung, Natan Vidra", + "published": "2024-03-23", + "updated": "2024-03-23", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL", + "cs.LG", + "q-fin.GN" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2404.12457v2", + "title": "RAGCache: Efficient Knowledge Caching for Retrieval-Augmented Generation", + "abstract": "Retrieval-Augmented Generation (RAG) has shown significant improvements in\nvarious natural language processing tasks by integrating the strengths of large\nlanguage models (LLMs) and external knowledge databases. However, RAG\nintroduces long sequence generation and leads to high computation and memory\ncosts. We propose RAGCache, a novel multilevel dynamic caching system tailored\nfor RAG. Our analysis benchmarks current RAG systems, pinpointing the\nperformance bottleneck (i.e., long sequence due to knowledge injection) and\noptimization opportunities (i.e., caching knowledge's intermediate states).\nBased on these insights, we design RAGCache, which organizes the intermediate\nstates of retrieved knowledge in a knowledge tree and caches them in the GPU\nand host memory hierarchy. RAGCache proposes a replacement policy that is aware\nof LLM inference characteristics and RAG retrieval patterns. It also\ndynamically overlaps the retrieval and inference steps to minimize the\nend-to-end latency. We implement RAGCache and evaluate it on vLLM, a\nstate-of-the-art LLM inference system and Faiss, a state-of-the-art vector\ndatabase. The experimental results show that RAGCache reduces the time to first\ntoken (TTFT) by up to 4x and improves the throughput by up to 2.1x compared to\nvLLM integrated with Faiss.", + "authors": "Chao Jin, Zili Zhang, Xuanlin Jiang, Fangyue Liu, Xin Liu, Xuanzhe Liu, Xin Jin", + "published": "2024-04-18", + "updated": "2024-04-25", + "primary_cat": "cs.DC", + "cats": [ + "cs.DC", + "cs.CL", + "cs.LG" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2401.15884v2", + "title": "Corrective Retrieval Augmented Generation", + "abstract": "Large language models (LLMs) inevitably exhibit hallucinations since the\naccuracy of generated texts cannot be secured solely by the parametric\nknowledge they encapsulate. Although retrieval-augmented generation (RAG) is a\npracticable complement to LLMs, it relies heavily on the relevance of retrieved\ndocuments, raising concerns about how the model behaves if retrieval goes\nwrong. To this end, we propose the Corrective Retrieval Augmented Generation\n(CRAG) to improve the robustness of generation. Specifically, a lightweight\nretrieval evaluator is designed to assess the overall quality of retrieved\ndocuments for a query, returning a confidence degree based on which different\nknowledge retrieval actions can be triggered. Since retrieval from static and\nlimited corpora can only return sub-optimal documents, large-scale web searches\nare utilized as an extension for augmenting the retrieval results. 
Besides, a\ndecompose-then-recompose algorithm is designed for retrieved documents to\nselectively focus on key information and filter out irrelevant information in\nthem. CRAG is plug-and-play and can be seamlessly coupled with various\nRAG-based approaches. Experiments on four datasets covering short- and\nlong-form generation tasks show that CRAG can significantly improve the\nperformance of RAG-based approaches.", + "authors": "Shi-Qi Yan, Jia-Chen Gu, Yun Zhu, Zhen-Hua Ling", + "published": "2024-01-29", + "updated": "2024-02-16", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2405.03085v1", + "title": "Compressing Long Context for Enhancing RAG with AMR-based Concept Distillation", + "abstract": "Large Language Models (LLMs) have made significant strides in information\nacquisition. However, their overreliance on potentially flawed parametric\nknowledge leads to hallucinations and inaccuracies, particularly when handling\nlong-tail, domain-specific queries. Retrieval Augmented Generation (RAG)\naddresses this limitation by incorporating external, non-parametric knowledge.\nNevertheless, the retrieved long-context documents often contain noisy,\nirrelevant information alongside vital knowledge, negatively diluting LLMs'\nattention. Inspired by the supportive role of essential concepts in\nindividuals' reading comprehension, we propose a novel concept-based RAG\nframework with the Abstract Meaning Representation (AMR)-based concept\ndistillation algorithm. The proposed algorithm compresses the cluttered raw\nretrieved documents into a compact set of crucial concepts distilled from the\ninformative nodes of AMR by referring to reliable linguistic features. The\nconcepts explicitly constrain LLMs to focus solely on vital information in the\ninference process. We conduct extensive experiments on open-domain\nquestion-answering datasets to empirically evaluate the proposed method's\neffectiveness. The results indicate that the concept-based RAG framework\noutperforms other baseline methods, particularly as the number of supporting\ndocuments increases, while also exhibiting robustness across various backbone\nLLMs. This emphasizes that the distilled concepts are informative for augmenting the\nRAG process by filtering out interference information. To the best of our\nknowledge, this is the first work introducing AMR to enhance the RAG,\npresenting a potential solution to augment inference performance with\nsemantic-based context compression.", + "authors": "Kaize Shi, Xueyao Sun, Qing Li, Guandong Xu", + "published": "2024-05-06", + "updated": "2024-05-06", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2401.05856v1", + "title": "Seven Failure Points When Engineering a Retrieval Augmented Generation System", + "abstract": "Software engineers are increasingly adding semantic search capabilities to\napplications using a strategy known as Retrieval Augmented Generation (RAG). A\nRAG system involves finding documents that semantically match a query and then\npassing the documents to a large language model (LLM) such as ChatGPT to\nextract the right answer. RAG systems aim to: a) reduce the\nproblem of hallucinated responses from LLMs, b) link sources/references to\ngenerated responses, and c) remove the need for annotating documents with\nmeta-data. 
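CRAG's corrective control flow, as described above, can be sketched as a confidence-gated dispatch; the thresholds and the `evaluate`, `refine`, and `web_search` callables are illustrative assumptions, not the paper's implementation:

```python
# Sketch of CRAG's corrective loop: a lightweight evaluator scores the
# retrieved set, and the confidence triggers Correct / Ambiguous / Incorrect
# actions. All callables and thresholds are placeholders; refine() stands in
# for the decompose-then-recompose filtering step.

def corrective_rag(query, docs, evaluate, refine, web_search,
                   hi=0.7, lo=0.3):
    conf = evaluate(query, docs)
    if conf >= hi:                       # Correct: trust internal retrieval,
        knowledge = refine(query, docs)  # but filter it down to key content
    elif conf <= lo:                     # Incorrect: fall back to web search
        knowledge = web_search(query)
    else:                                # Ambiguous: combine both sources
        knowledge = refine(query, docs) + web_search(query)
    return knowledge                     # fed to the generator as context
```

The plug-and-play claim follows from the shape of this function: it sits between any retriever and any generator without touching either.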
However, RAG systems suffer from limitations inherent to information\nretrieval systems and from reliance on LLMs. In this paper, we present an\nexperience report on the failure points of RAG systems from three case studies\nfrom separate domains: research, education, and biomedical. We share the\nlessons learned and present 7 failure points to consider when designing a RAG\nsystem. The two key takeaways arising from our work are: 1) validation of a RAG\nsystem is only feasible during operation, and 2) the robustness of a RAG system\nevolves rather than being designed in at the start. We conclude with a list of\npotential research directions on RAG systems for the software engineering\ncommunity.", + "authors": "Scott Barnett, Stefanus Kurniawan, Srikanth Thudumu, Zach Brannelly, Mohamed Abdelrazek", + "published": "2024-01-11", + "updated": "2024-01-11", + "primary_cat": "cs.SE", + "cats": [ + "cs.SE", + "cs.AI" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2402.12177v4", + "title": "Mafin: Enhancing Black-Box Embeddings with Model Augmented Fine-Tuning", + "abstract": "Retrieval Augmented Generation (RAG) has emerged as an effective solution for\nmitigating hallucinations in Large Language Models (LLMs). The retrieval stage\nin RAG typically involves a pre-trained embedding model, which converts queries\nand passages into vectors to capture their semantics. However, a standard\npre-trained embedding model may exhibit sub-optimal performance when applied to\nspecific domain knowledge, necessitating fine-tuning. This paper addresses\nscenarios where the embeddings are only available from a black-box model. We\nintroduce Model augmented fine-tuning (Mafin) -- a novel approach for\nfine-tuning a black-box embedding model by augmenting it with a trainable\nembedding model. Our results demonstrate that Mafin significantly enhances the\nperformance of the black-box embeddings by only requiring the training of a\nsmall augmented model. We validate the effectiveness of our method on both\nlabeled and unlabeled datasets, illustrating its broad applicability and\nefficiency.", + "authors": "Mingtian Zhang, Shawn Lan, Peter Hayes, David Barber", + "published": "2024-02-19", + "updated": "2024-03-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2401.01511v1", + "title": "Enhancing Multilingual Information Retrieval in Mixed Human Resources Environments: A RAG Model Implementation for Multicultural Enterprise", + "abstract": "The advent of Large Language Models has revolutionized information retrieval,\nushering in a new era of expansive knowledge accessibility. While these models\nexcel in providing open-world knowledge, effectively extracting answers in\ndiverse linguistic environments with varying levels of literacy remains a\nformidable challenge. Retrieval Augmented Generation (RAG) emerges as a\npromising solution, bridging the gap between information availability and\nmultilingual comprehension. However, deploying RAG models in real-world\nscenarios demands careful consideration of various factors. This paper\naddresses the critical challenges associated with implementing RAG models in\nmulticultural environments. We delve into essential considerations, including\ndata feeding strategies, timely updates, mitigation of hallucinations,\nprevention of erroneous responses, and optimization of delivery speed. 
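A hedged sketch of the Mafin idea discussed above: a frozen black-box embedding is concatenated with a small trainable encoder and the combination is used for retrieval. The dimensions, the feature input, and the class name are assumptions for illustration, not the paper's code:

```python
# Sketch of augmenting a frozen black-box embedding with a trainable one.
# blackbox_embed stands in for an API-only encoder whose weights we cannot
# touch; only the small augmentation layer receives gradients.
import torch
import torch.nn as nn

class MafinStyleEmbedder(nn.Module):
    def __init__(self, blackbox_embed, feature_dim=768, aug_dim=128):
        super().__init__()
        self.blackbox_embed = blackbox_embed            # frozen, black-box
        self.augment = nn.Linear(feature_dim, aug_dim)  # the trainable part

    def forward(self, text: str, text_features: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            base = self.blackbox_embed(text)   # no gradients flow here
        aug = self.augment(text_features)      # tuned on domain data
        combined = torch.cat([base, aug], dim=-1)
        return nn.functional.normalize(combined, dim=-1)
```

Training then only updates the small augmentation layer, which is what makes the approach viable when the main embedding model is behind an API.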
Our work\ninvolves the integration of a diverse array of tools, meticulously combined to\nfacilitate the seamless adoption of RAG models across languages and literacy\nlevels within a multicultural organizational context. Through strategic tweaks\nin our approaches, we achieve not only effectiveness but also efficiency,\nensuring the accelerated and accurate delivery of information in a manner that\nis tailored to the unique requirements of multilingual and multicultural\nsettings.", + "authors": "Syed Rameel Ahmad", + "published": "2024-01-03", + "updated": "2024-01-03", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2404.10981v1", + "title": "A Survey on Retrieval-Augmented Text Generation for Large Language Models", + "abstract": "Retrieval-Augmented Generation (RAG) merges retrieval methods with deep\nlearning advancements to address the static limitations of large language\nmodels (LLMs) by enabling the dynamic integration of up-to-date external\ninformation. This methodology, focusing primarily on the text domain, provides\na cost-effective solution to the generation of plausible but incorrect\nresponses by LLMs, thereby enhancing the accuracy and reliability of their\noutputs through the use of real-world data. As RAG grows in complexity and\nincorporates multiple concepts that can influence its performance, this paper\norganizes the RAG paradigm into four categories: pre-retrieval, retrieval,\npost-retrieval, and generation, offering a detailed perspective from the\nretrieval viewpoint. It outlines RAG's evolution and discusses the field's\nprogression through the analysis of significant studies. Additionally, the\npaper introduces evaluation methods for RAG, addressing the challenges faced\nand proposing future research directions. By offering an organized framework\nand categorization, the study aims to consolidate existing research on RAG,\nclarify its technological underpinnings, and highlight its potential to broaden\nthe adaptability and applications of LLMs.", + "authors": "Yizheng Huang, Jimmy Huang", + "published": "2024-04-17", + "updated": "2024-04-17", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2401.15391v1", + "title": "MultiHop-RAG: Benchmarking Retrieval-Augmented Generation for Multi-Hop Queries", + "abstract": "Retrieval-augmented generation (RAG) augments large language models (LLM) by\nretrieving relevant knowledge, showing promising potential in mitigating LLM\nhallucinations and enhancing response quality, thereby facilitating the great\nadoption of LLMs in practice. However, we find that existing RAG systems are\ninadequate in answering multi-hop queries, which require retrieving and\nreasoning over multiple pieces of supporting evidence. Furthermore, to our\nknowledge, no existing RAG benchmarking dataset focuses on multi-hop queries.\nIn this paper, we develop a novel dataset, MultiHop-RAG, which consists of a\nknowledge base, a large collection of multi-hop queries, their ground-truth\nanswers, and the associated supporting evidence. We detail the procedure of\nbuilding the dataset, utilizing an English news article dataset as the\nunderlying RAG knowledge base. We demonstrate the benchmarking utility of\nMultiHop-RAG in two experiments. 
The first experiment compares different\nembedding models for retrieving evidence for multi-hop queries. In the second\nexperiment, we examine the capabilities of various state-of-the-art LLMs,\nincluding GPT-4, PaLM, and Llama2-70B, in reasoning and answering multi-hop\nqueries given the evidence. Both experiments reveal that existing RAG methods\nperform unsatisfactorily in retrieving and answering multi-hop queries. We hope\nMultiHop-RAG will be a valuable resource for the community in developing\neffective RAG systems, thereby facilitating greater adoption of LLMs in\npractice. The MultiHop-RAG dataset and the implemented RAG system are publicly available at\nhttps://github.com/yixuantt/MultiHop-RAG/.", + "authors": "Yixuan Tang, Yi Yang", + "published": "2024-01-27", + "updated": "2024-01-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2106.11517v1", + "title": "Fine-tune the Entire RAG Architecture (including DPR retriever) for Question-Answering", + "abstract": "In this paper, we illustrate how to fine-tune the entire Retrieval Augmented\nGeneration (RAG) architecture in an end-to-end manner. We highlight the main\nengineering challenges that needed to be addressed to achieve this objective.\nWe also show how the end-to-end RAG architecture outperforms the original RAG\narchitecture for the task of question answering. We have open-sourced our\nimplementation in the HuggingFace Transformers library.", + "authors": "Shamane Siriwardhana, Rivindu Weerasekera, Elliott Wen, Suranga Nanayakkara", + "published": "2021-06-22", + "updated": "2021-06-22", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.CL" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2403.10446v1", + "title": "Enhancing LLM Factual Accuracy with RAG to Counter Hallucinations: A Case Study on Domain-Specific Queries in Private Knowledge-Bases", + "abstract": "We propose an end-to-end system design towards utilizing Retrieval Augmented\nGeneration (RAG) to improve the factual accuracy of Large Language Models\n(LLMs) for domain-specific and time-sensitive queries related to private\nknowledge-bases. Our system integrates the RAG pipeline with upstream datasets\nprocessing and downstream performance evaluation. Addressing the challenge of\nLLM hallucinations, we finetune models with a curated dataset which originates\nfrom CMU's extensive resources and is annotated with the teacher model. Our\nexperiments demonstrate the system's effectiveness in generating more accurate\nanswers to domain-specific and time-sensitive inquiries. The results also\nrevealed the limitations of fine-tuning LLMs with small-scale and skewed\ndatasets. This research highlights the potential of RAG systems in augmenting\nLLMs with external datasets for improved performance in knowledge-intensive\ntasks. 
Our code and models are available on GitHub.", + "authors": "Jiarui Li, Ye Yuan, Zehua Zhang", + "published": "2024-03-15", + "updated": "2024-03-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2403.10081v1", + "title": "DRAGIN: Dynamic Retrieval Augmented Generation based on the Real-time Information Needs of Large Language Models", + "abstract": "The dynamic retrieval augmented generation (RAG) paradigm actively decides when\nand what to retrieve during the text generation process of Large Language\nModels (LLMs). There are two key elements of this paradigm: identifying the\noptimal moment to activate the retrieval module (deciding when to retrieve) and\ncrafting the appropriate query once retrieval is triggered (determining what to\nretrieve). However, current dynamic RAG methods fall short in both aspects.\nFirstly, the strategies for deciding when to retrieve often rely on static\nrules. Moreover, the strategies for deciding what to retrieve typically limit\nthemselves to the LLM's most recent sentence or the last few tokens, while the\nLLM's real-time information needs may span across the entire context. To\novercome these limitations, we introduce a new framework, DRAGIN, i.e., Dynamic\nRetrieval Augmented Generation based on the real-time Information Needs of\nLLMs. Our framework is specifically designed to make decisions on when and what\nto retrieve based on the LLM's real-time information needs during the text\ngeneration process. We evaluate DRAGIN along with existing methods\ncomprehensively over 4 knowledge-intensive generation datasets. Experimental\nresults show that DRAGIN achieves superior performance on all tasks,\ndemonstrating the effectiveness of our method. We have open-sourced all the\ncode, data, and models on GitHub: https://github.com/oneal2000/DRAGIN/tree/main", + "authors": "Weihang Su, Yichen Tang, Qingyao Ai, Zhijing Wu, Yiqun Liu", + "published": "2024-03-15", + "updated": "2024-03-15", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.IR" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2403.04256v1", + "title": "Federated Recommendation via Hybrid Retrieval Augmented Generation", + "abstract": "Federated Recommendation (FR) emerges as a novel paradigm that enables\nprivacy-preserving recommendations. However, traditional FR systems usually\nrepresent users/items with discrete identities (IDs), suffering from\nperformance degradation due to the data sparsity and heterogeneity in FR. On\nthe other hand, Large Language Models (LLMs) as recommenders have proven\neffective across various recommendation scenarios. Yet, LLM-based recommenders\nencounter challenges such as low inference efficiency and potential\nhallucination, compromising their performance in real-world scenarios. To this\nend, we propose GPT-FedRec, a federated recommendation framework leveraging\nChatGPT and a novel hybrid Retrieval Augmented Generation (RAG) mechanism.\nGPT-FedRec is a two-stage solution. The first stage is a hybrid retrieval\nprocess, mining ID-based user patterns and text-based item features. Next, the\nretrieved results are converted into text prompts and fed into GPT for\nre-ranking. Our proposed hybrid retrieval mechanism and LLM-based re-rank aim\nto extract generalized features from data and exploit pretrained knowledge\nwithin LLM, overcoming data sparsity and heterogeneity in FR. 
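A DRAGIN-style trigger, as described above, can be approximated by firing retrieval when next-token uncertainty spikes and by forming the query from high-attention context tokens rather than just the last sentence; the threshold and inputs below are illustrative assumptions, not the paper's exact formulation:

```python
# Sketch of dynamic retrieval triggering: retrieve when the entropy of the
# next-token distribution exceeds a threshold, and build the query from the
# context tokens the model attends to most. All inputs are placeholders.
import math

def token_entropy(probs):
    """Shannon entropy of a next-token probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def should_retrieve(next_token_probs, threshold=2.5):
    # High entropy is read as "the model is unsure here" -> fire retrieval.
    return token_entropy(next_token_probs) > threshold

def build_query(context_tokens, attention_weights, top_n=8):
    # Query from the most-attended tokens anywhere in the context, not
    # merely the most recent sentence.
    ranked = sorted(zip(attention_weights, context_tokens), reverse=True)
    return " ".join(tok for _, tok in ranked[:top_n])
```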
In addition, the\nRAG approach also prevents LLM hallucination, improving the recommendation\nperformance for real-world users. Experimental results on diverse benchmark\ndatasets demonstrate the superior performance of GPT-FedRec against\nstate-of-the-art baseline methods.", + "authors": "Huimin Zeng, Zhenrui Yue, Qian Jiang, Dong Wang", + "published": "2024-03-07", + "updated": "2024-03-07", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2311.04177v1", + "title": "Enhancing LLM Intelligence with ARM-RAG: Auxiliary Rationale Memory for Retrieval Augmented Generation", + "abstract": "Large Language Models (LLMs) are smart but forgetful. Recent studies (e.g.,\nBubeck et al., 2023) on modern LLMs have shown that they are capable of\nperforming amazing tasks typically necessitating human-level intelligence.\nHowever, unlike humans, frozen LLMs do not improve over time; they neither\nacquire new knowledge nor learn from their successes or failures. Some\napproaches to improving the intelligence of LLMs include fine-tuning models\nbased on problem-solving performance (Zelikman et al., 2022), and building\nbigger and more sophisticated models (Bubeck et al., 2023). However, these\nmethods have the drawback of requiring substantial data and computational\nresources to retrain existing models. In this paper, we explore the use of\nRetrieval Augmented Generation, also known as RAG (Lewis et al., 2021) to\nimprove problem-solving performance. We propose ARM-RAG (Auxiliary Rationale\nMemory for Retrieval Augmented Generation), a system that learns from its\nsuccesses without incurring high training costs. We demonstrate that the\nstorage and subsequent retrieval of reasoning chains have a positive influence\non performance in grade-school math problems.", + "authors": "Eric Melz", + "published": "2023-11-07", + "updated": "2023-11-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2403.09727v1", + "title": "Investigating the performance of Retrieval-Augmented Generation and fine-tuning for the development of AI-driven knowledge-based systems", + "abstract": "The development of generative large language models (G-LLM) opened up new\nopportunities for the development of new types of knowledge-based systems\nsimilar to ChatGPT, Bing, or Gemini. Fine-tuning (FN) and Retrieval-Augmented\nGeneration (RAG) are the techniques that can be used to implement domain\nadaptation for the development of G-LLM-based knowledge systems. In our study,\nusing ROUGE, BLEU, METEOR scores, and cosine similarity, we compare and examine\nthe performance of RAG and FN for the GPT-J-6B, OPT-6.7B, LlaMA, LlaMA-2\nlanguage models. Based on measurements shown on different datasets, we\ndemonstrate that RAG-based constructions are more efficient than models\nproduced with FN. We point out that connecting RAG and FN is not trivial,\nbecause connecting FN models with RAG can cause a decrease in performance.\nFurthermore, we outline a simple RAG-based architecture which, on average,\noutperforms the FN models by 16% in terms of the ROUGE score, 15% in the case\nof the BLEU score, and 53% based on the cosine similarity. 
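The ARM-RAG rationale memory described above amounts to storing successful reasoning chains and recalling the nearest ones when a new problem arrives. A minimal sketch, with `embed` as a placeholder encoder and a linear scan standing in for a real vector index:

```python
# Sketch of an auxiliary rationale memory: (problem, rationale) pairs are
# stored after successful solves, and the closest stored rationales are
# retrieved to prime the prompt for a new, similar problem.
import numpy as np

class RationaleMemory:
    def __init__(self, embed):
        self.embed = embed       # placeholder text encoder
        self.entries = []        # list of (problem_vector, rationale)

    def store(self, problem: str, rationale: str) -> None:
        self.entries.append((self.embed(problem), rationale))

    def recall(self, problem: str, k: int = 2) -> list[str]:
        q = self.embed(problem)
        scored = [(float(np.dot(q, v)), r) for v, r in self.entries]
        return [r for _, r in sorted(scored, reverse=True)[:k]]
```

The retrieved rationales are simply prepended to the prompt, which is why the approach improves problem solving without any gradient updates to the frozen LLM.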
This shows the\nsignificant advantage of RAG over FN in terms of hallucination, which is not\noffset by the fact that the average 8% better METEOR score of FN models\nindicates greater creativity compared to RAG.", + "authors": "Robert Lakatos, Peter Pollner, Andras Hajdu, Tamas Joo", + "published": "2024-03-12", + "updated": "2024-03-12", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2402.12352v1", + "title": "Graph-Based Retriever Captures the Long Tail of Biomedical Knowledge", + "abstract": "Large language models (LLMs) are transforming the way information is\nretrieved with vast amounts of knowledge being summarized and presented via\nnatural language conversations. Yet, LLMs are prone to highlight the most\nfrequently seen pieces of information from the training set and to neglect the\nrare ones. In the field of biomedical research, latest discoveries are key to\nacademic and industrial actors and are obscured by the abundance of an\never-increasing literature corpus (the information overload problem). Surfacing\nnew associations between biomedical entities, e.g., drugs, genes, diseases,\nwith LLMs becomes a challenge of capturing the long-tail knowledge of the\nbiomedical scientific production. To overcome this challenge, Retrieval\nAugmented Generation (RAG) has been proposed to alleviate some of the\nshortcomings of LLMs by augmenting the prompts with context retrieved from\nexternal datasets. RAG methods typically select the context via maximum\nsimilarity search over text embeddings. In this study, we show that RAG methods\nleave out a significant proportion of relevant information due to clusters of\nover-represented concepts in the biomedical literature. We introduce a novel\ninformation-retrieval method that leverages a knowledge graph to downsample\nthese clusters and mitigate the information overload problem. Its retrieval\nperformance is about twice better than embedding similarity alternatives on\nboth precision and recall. Finally, we demonstrate that both embedding\nsimilarity and knowledge graph retrieval methods can be advantageously combined\ninto a hybrid model that outperforms both, enabling potential improvements to\nbiomedical question-answering models.", + "authors": "Julien Delile, Srayanta Mukherjee, Anton Van Pamel, Leonid Zhukov", + "published": "2024-02-19", + "updated": "2024-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.IR" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2404.02103v1", + "title": "CLAPNQ: Cohesive Long-form Answers from Passages in Natural Questions for RAG systems", + "abstract": "Retrieval Augmented Generation (RAG) has become a popular application for\nlarge language models. It is preferable that successful RAG systems provide\naccurate answers that are supported by being grounded in a passage without any\nhallucinations. While considerable work is required for building a full RAG\npipeline, being able to benchmark performance is also necessary. We present\nClapNQ, a benchmark Long-form Question Answering dataset for the full RAG\npipeline. ClapNQ includes long answers with grounded gold passages from Natural\nQuestions (NQ) and a corpus to perform either retrieval, generation, or the\nfull RAG pipeline. 
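The graph-based downsampling idea in Delile et al. above, capping how many retrieved passages any one over-represented entity cluster may contribute, can be sketched as follows; `entity_of`, the passage-to-entity mapping, and the cap values are assumptions for illustration:

```python
# Sketch of knowledge-graph-guided downsampling: limit the number of
# passages per KG entity so frequently studied concepts stop crowding
# long-tail knowledge out of the top-k context.
from collections import defaultdict

def downsample_by_entity(ranked_passages, entity_of, per_entity_cap=2, k=10):
    counts = defaultdict(int)
    kept = []
    for passage in ranked_passages:        # assumed sorted by similarity
        ent = entity_of(passage)
        if counts[ent] < per_entity_cap:
            kept.append(passage)
            counts[ent] += 1
        if len(kept) == k:
            break
    return kept
```

Combining this cap with plain embedding similarity yields the hybrid retrieval the authors report as outperforming either method alone.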
The ClapNQ answers are concise, 3x smaller than the full\npassage, and cohesive, with multiple pieces of the passage that are not\ncontiguous. RAG models must adapt to these properties to be successful at\nClapNQ. We present baseline experiments and analysis for ClapNQ that highlight\nareas where there is still significant room for improvement in grounded RAG.\nCLAPNQ is publicly available at https://github.com/primeqa/clapnq", + "authors": "Sara Rosenthal, Avirup Sil, Radu Florian, Salim Roukos", + "published": "2024-04-02", + "updated": "2024-04-02", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2312.10997v5", + "title": "Retrieval-Augmented Generation for Large Language Models: A Survey", + "abstract": "Large Language Models (LLMs) showcase impressive capabilities but encounter\nchallenges like hallucination, outdated knowledge, and non-transparent,\nuntraceable reasoning processes. Retrieval-Augmented Generation (RAG) has\nemerged as a promising solution by incorporating knowledge from external\ndatabases. This enhances the accuracy and credibility of the generation,\nparticularly for knowledge-intensive tasks, and allows for continuous knowledge\nupdates and integration of domain-specific information. RAG synergistically\nmerges LLMs' intrinsic knowledge with the vast, dynamic repositories of\nexternal databases. This comprehensive review paper offers a detailed\nexamination of the progression of RAG paradigms, encompassing the Naive RAG,\nthe Advanced RAG, and the Modular RAG. It meticulously scrutinizes the\ntripartite foundation of RAG frameworks, which includes the retrieval, the\ngeneration and the augmentation techniques. The paper highlights the\nstate-of-the-art technologies embedded in each of these critical components,\nproviding a profound understanding of the advancements in RAG systems.\nFurthermore, this paper introduces an up-to-date evaluation framework and\nbenchmark. At the end, this article delineates the challenges currently faced\nand points out prospective avenues for research and development.", + "authors": "Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, Meng Wang, Haofen Wang", + "published": "2023-12-18", + "updated": "2024-03-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2405.03963v1", + "title": "ERATTA: Extreme RAG for Table To Answers with Large Language Models", + "abstract": "Large language models (LLMs) with retrieval-augmented generation (RAG) have\nbeen the optimal choice for scalable generative AI solutions in the recent\npast. However, the choice of use-cases that incorporate RAG with LLMs has been\neither generic or extremely domain specific, thereby questioning the\nscalability and generalizability of RAG-LLM approaches. In this work, we\npropose a unique LLM-based system where multiple LLMs can be invoked to enable\ndata authentication, user query routing, data retrieval and custom prompting\nfor question answering capabilities from data tables that are highly varying\nand large in size. Our system is tuned to extract information from\nEnterprise-level data products and furnish real time responses under 10\nseconds. One prompt manages user-to-data authentication, followed by three\nprompts to route the query, fetch data, and generate customizable natural-language\nresponses. 
Additionally, we propose a five-metric scoring module that\ndetects and reports hallucinations in the LLM responses. Our proposed system\nand scoring metrics achieve >90% confidence scores across hundreds of user\nqueries in the sustainability, financial health and social media domains.\nExtensions to the proposed extreme RAG architectures can enable heterogeneous\nsource querying using LLMs.", + "authors": "Sohini Roychowdhury, Marko Krema, Anvar Mahammad, Brian Moore, Arijit Mukherjee, Punit Prakashchandra", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2310.13848v1", + "title": "FABULA: Intelligence Report Generation Using Retrieval-Augmented Narrative Construction", + "abstract": "Narrative construction is the process of organizing disparate event\ninformation into a logical plot structure that models an end-to-end story.\nIntelligence analysis is an example of a domain that can benefit tremendously\nfrom narrative construction techniques, particularly in aiding analysts during\nthe largely manual and costly process of synthesizing event information into\ncomprehensive intelligence reports. Manual intelligence report generation is\noften prone to challenges such as integrating dynamic event information,\nwriting fine-grained queries, and closing information gaps. This motivates the\ndevelopment of a system that retrieves and represents critical aspects of\nevents in a form that aids in automatic generation of intelligence reports.\n We introduce a Retrieval Augmented Generation (RAG) approach to augment\nprompting of an autoregressive decoder by retrieving structured information\nasserted in a knowledge graph to generate targeted information based on a\nnarrative plot model. We apply our approach to the problem of neural\nintelligence report generation and introduce FABULA, a framework to augment\nintelligence analysis workflows using RAG. An analyst can use FABULA to query\nan Event Plot Graph (EPG) to retrieve relevant event plot points, which can be\nused to augment prompting of a Large Language Model (LLM) during intelligence\nreport generation. Our evaluation studies show that the plot points included in\nthe generated intelligence reports have high semantic relevance, high\ncoherency, and low data redundancy.", + "authors": "Priyanka Ranade, Anupam Joshi", + "published": "2023-10-20", + "updated": "2023-10-20", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2405.00175v1", + "title": "Towards a Search Engine for Machines: Unified Ranking for Multiple Retrieval-Augmented Large Language Models", + "abstract": "This paper introduces uRAG--a framework with a unified retrieval engine that\nserves multiple downstream retrieval-augmented generation (RAG) systems. Each\nRAG system consumes the retrieval results for a unique purpose, such as\nopen-domain question answering, fact verification, entity linking, and relation\nextraction. We introduce a generic training guideline that standardizes the\ncommunication between the search engine and the downstream RAG systems that\nengage in optimizing the retrieval model. 
This lays the groundwork for us to\nbuild a large-scale experimentation ecosystem consisting of 18 RAG systems that\nengage in training and 18 unknown RAG systems that use uRAG as new\nusers of the search engine. Using this experimentation ecosystem, we answer a\nnumber of fundamental research questions that improve our understanding of the\npromises and challenges in developing search engines for machines.", + "authors": "Alireza Salemi, Hamed Zamani", + "published": "2024-04-30", + "updated": "2024-04-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.IR" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2405.04700v1", + "title": "Robust Implementation of Retrieval-Augmented Generation on Edge-based Computing-in-Memory Architectures", + "abstract": "Large Language Models (LLMs) deployed on edge devices learn through\nfine-tuning and updating a certain portion of their parameters. Although such\nlearning methods can be optimized to reduce resource utilization, the overall\nrequired resources remain a heavy burden on edge devices. Instead,\nRetrieval-Augmented Generation (RAG), a resource-efficient LLM learning method,\ncan improve the quality of the LLM-generated content without updating model\nparameters. However, the RAG-based LLM may involve repetitive searches on the\nprofile data in every user-LLM interaction. This search can lead to significant\nlatency along with the accumulation of user data. Conventional efforts to\ndecrease latency result in restricting the size of saved user data, thus\nreducing the scalability of RAG as user data continuously grows. It remains an\nopen question: how can RAG be freed from the constraints of latency and scalability\non edge devices? In this paper, we propose a novel framework to accelerate RAG\nvia Computing-in-Memory (CiM) architectures. It accelerates matrix\nmultiplications by performing in-situ computation inside the memory while\navoiding the expensive data transfer between the computing unit and memory. Our\nframework, Robust CiM-backed RAG (RoCR), utilizing a novel contrastive\nlearning-based training method and noise-aware training, can enable RAG to\nefficiently search profile data with CiM. To the best of our knowledge, this is\nthe first work utilizing CiM to accelerate RAG.", + "authors": "Ruiyang Qin, Zheyu Yan, Dewen Zeng, Zhenge Jia, Dancheng Liu, Jianbo Liu, Zhi Zheng, Ningyuan Cao, Kai Ni, Jinjun Xiong, Yiyu Shi", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.DC", + "cs.IR" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2402.17497v1", + "title": "REAR: A Relevance-Aware Retrieval-Augmented Framework for Open-Domain Question Answering", + "abstract": "Considering the limited internal parametric knowledge, retrieval-augmented\ngeneration (RAG) has been widely used to extend the knowledge scope of large\nlanguage models (LLMs). Despite the extensive efforts on RAG research, in\nexisting methods, LLMs cannot precisely assess the relevance of retrieved\ndocuments, thus likely leading to misleading or even incorrect utilization of\nexternal knowledge (i.e., retrieved documents). To address this issue, in this\npaper, we propose REAR, a RElevance-Aware Retrieval-augmented approach for\nopen-domain question answering (QA). 
As the key motivation, we aim to enhance\nthe self-awareness of source relevance for LLMs, so as to adaptively utilize\nexternal knowledge in RAG systems. Specifically, we develop a new architecture for\nLLM-based RAG systems, by incorporating a specially designed rank head that\nprecisely assesses the relevance of retrieved documents. Furthermore, we\npropose an improved training method based on bi-granularity relevance fusion\nand noise-resistant training. By combining the improvements in both\narchitecture and training, our proposed REAR can better utilize external\nknowledge by effectively perceiving the relevance of retrieved documents.\nExperiments on four open-domain QA tasks show that REAR significantly\noutperforms a number of previous competitive RAG approaches. Our code and data\ncan be accessed at https://github.com/RUCAIBox/REAR.", + "authors": "Yuhao Wang, Ruiyang Ren, Junyi Li, Wayne Xin Zhao, Jing Liu, Ji-Rong Wen", + "published": "2024-02-27", + "updated": "2024-02-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.IR" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2402.13178v2", + "title": "Benchmarking Retrieval-Augmented Generation for Medicine", + "abstract": "While large language models (LLMs) have achieved state-of-the-art performance\non a wide range of medical question answering (QA) tasks, they still face\nchallenges with hallucinations and outdated knowledge. Retrieval-augmented\ngeneration (RAG) is a promising solution and has been widely adopted. However,\na RAG system can involve multiple flexible components, and there is a lack of\nbest practices regarding the optimal RAG setting for various medical purposes.\nTo systematically evaluate such systems, we propose the Medical Information\nRetrieval-Augmented Generation Evaluation (MIRAGE), a first-of-its-kind\nbenchmark including 7,663 questions from five medical QA datasets. Using\nMIRAGE, we conducted large-scale experiments with over 1.8 trillion prompt\ntokens on 41 combinations of different corpora, retrievers, and backbone LLMs\nthrough the MedRAG toolkit introduced in this work. Overall, MedRAG improves\nthe accuracy of six different LLMs by up to 18% over chain-of-thought\nprompting, elevating the performance of GPT-3.5 and Mixtral to GPT-4-level. Our\nresults show that the combination of various medical corpora and retrievers\nachieves the best performance. In addition, we discovered a log-linear scaling\nproperty and the \"lost-in-the-middle\" effects in medical RAG. We believe our\ncomprehensive evaluations can serve as practical guidelines for implementing\nRAG systems for medicine.", + "authors": "Guangzhi Xiong, Qiao Jin, Zhiyong Lu, Aidong Zhang", + "published": "2024-02-20", + "updated": "2024-02-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2404.07220v1", + "title": "Blended RAG: Improving RAG (Retriever-Augmented Generation) Accuracy with Semantic Search and Hybrid Query-Based Retrievers", + "abstract": "Retrieval-Augmented Generation (RAG) is a prevalent approach to infuse Large\nLanguage Models (LLMs) with a private knowledge base of documents to build\nGenerative Q\\&A (Question-Answering) systems. 
However, maintaining RAG accuracy becomes\nincreasingly challenging as the corpus of documents scales up, with Retrievers\nplaying an outsized role in the overall RAG accuracy by extracting the most\nrelevant document from the corpus to provide context to the LLM. In this paper,\nwe propose the 'Blended RAG' method of leveraging semantic search techniques,\nsuch as Dense Vector indexes and Sparse Encoder indexes, blended with hybrid\nquery strategies. Our study achieves better retrieval results and sets new\nbenchmarks on IR (Information Retrieval) datasets such as NQ and\nTREC-COVID. We further extend such a 'Blended Retriever' to the RAG system to\ndemonstrate far superior results on Generative Q\\&A datasets like SQUAD, even\nsurpassing fine-tuning performance.", + "authors": "Kunal Sawarkar, Abhilasha Mangal, Shivam Raj Solanki", + "published": "2024-03-22", + "updated": "2024-03-22", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.CL" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2404.17897v1", + "title": "Tool Calling: Enhancing Medication Consultation via Retrieval-Augmented Large Language Models", + "abstract": "Large-scale language models (LLMs) have achieved remarkable success across\nvarious language tasks but suffer from hallucinations and temporal\nmisalignment. To mitigate these shortcomings, Retrieval-augmented generation\n(RAG) has been utilized to provide external knowledge to facilitate answer\ngeneration. However, applying such models to the medical domain faces several\nchallenges due to the lack of domain-specific knowledge and the intricacy of\nreal-world scenarios. In this study, we explore LLMs with a RAG framework for\nknowledge-intensive tasks in the medical field. To evaluate the capabilities of\nLLMs, we introduce MedicineQA, a multi-round dialogue benchmark that simulates\nthe real-world medication consultation scenario and requires LLMs to answer\nwith retrieved evidence from the medicine database. MedicineQA contains 300\nmulti-round question-answering pairs, each embedded within a detailed dialogue\nhistory, highlighting the challenge posed by this knowledge-intensive task to\ncurrent LLMs. We further propose a new \\textit{Distill-Retrieve-Read} framework\ninstead of the previous \\textit{Retrieve-then-Read}. Specifically, the\ndistillation and retrieval process utilizes a tool calling mechanism to\nformulate search queries that emulate the keyword-based inquiries used by\nsearch engines. Experimental results show that our framework brings\nnotable performance improvements and surpasses previous counterparts in\nevidence retrieval accuracy. 
This\nadvancement sheds light on applying RAG to the medical domain.", + "authors": "Zhongzhen Huang, Kui Xue, Yongqi Fan, Linjie Mu, Ruoyu Liu, Tong Ruan, Shaoting Zhang, Xiaofan Zhang", + "published": "2024-04-27", + "updated": "2024-04-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2402.13547v1", + "title": "ActiveRAG: Revealing the Treasures of Knowledge via Active Learning", + "abstract": "Retrieval Augmented Generation (RAG) has introduced a new paradigm for Large\nLanguage Models (LLMs), aiding in the resolution of knowledge-intensive tasks.\nHowever, current RAG models position LLMs as passive knowledge receptors,\nthereby restricting their capacity for learning and comprehending external\nknowledge. In this paper, we present ActiveRAG, an innovative RAG framework\nthat shifts from passive knowledge acquisition to an active learning mechanism.\nThis approach utilizes the Knowledge Construction mechanism to develop a deeper\nunderstanding of external knowledge by associating it with previously acquired\nor memorized knowledge. Subsequently, it designs the Cognitive Nexus mechanism\nto incorporate the outcomes from both chains of thought and knowledge\nconstruction, thereby calibrating the intrinsic cognition of LLMs. Our\nexperimental results demonstrate that ActiveRAG surpasses previous RAG models,\nachieving a 5% improvement on question-answering datasets. All data and codes\nare available at https://github.com/OpenMatch/ActiveRAG.", + "authors": "Zhipeng Xu, Zhenghao Liu, Yibin Liu, Chenyan Xiong, Yukun Yan, Shuo Wang, Shi Yu, Zhiyuan Liu, Ge Yu", + "published": "2024-02-21", + "updated": "2024-02-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2404.15939v2", + "title": "Telco-RAG: Navigating the Challenges of Retrieval-Augmented Language Models for Telecommunications", + "abstract": "The application of Large Language Models (LLMs) and Retrieval-Augmented\nGeneration (RAG) systems in the telecommunication domain presents unique\nchallenges, primarily due to the complex nature of telecom standard documents\nand the rapid evolution of the field. The paper introduces Telco-RAG, an\nopen-source RAG framework designed to handle the specific needs of\ntelecommunications standards, particularly 3rd Generation Partnership Project\n(3GPP) documents. Telco-RAG addresses the critical challenges of implementing a\nRAG pipeline on highly technical content, paving the way for applying LLMs in\ntelecommunications and offering guidelines for RAG implementation in other\ntechnical domains.", + "authors": "Andrei-Laurentiu Bornea, Fadhel Ayed, Antonio De Domenico, Nicola Piovesan, Ali Maatouk", + "published": "2024-04-24", + "updated": "2024-04-26", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "eess.SP" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2402.16893v1", + "title": "The Good and The Bad: Exploring Privacy Issues in Retrieval-Augmented Generation (RAG)", + "abstract": "Retrieval-augmented generation (RAG) is a powerful technique to augment\nlanguage models with proprietary and private data, where data privacy is a\npivotal concern. 
Whereas extensive research has demonstrated the privacy risks\nof large language models (LLMs), the RAG technique could potentially reshape\nthe inherent behaviors of LLM generation, posing new privacy issues that are\ncurrently under-explored. In this work, we conduct extensive empirical studies\nwith novel attack methods, which demonstrate the vulnerability of RAG systems\nto leaking the private retrieval database. Despite the new risks RAG brings\nto the retrieval data, we further reveal that RAG can mitigate the leakage of\nthe LLMs' training data. Overall, we provide new insights in this paper for\nprivacy protection of retrieval-augmented LLMs, which benefit both LLM and RAG\nsystem builders. Our code is available at\nhttps://github.com/phycholosogy/RAG-privacy.", + "authors": "Shenglai Zeng, Jiankun Zhang, Pengfei He, Yue Xing, Yiding Liu, Han Xu, Jie Ren, Shuaiqiang Wang, Dawei Yin, Yi Chang, Jiliang Tang", + "published": "2024-02-23", + "updated": "2024-02-23", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR", + "cs.AI", + "cs.CL" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + }, + { + "url": "http://arxiv.org/abs/2402.01733v1", + "title": "Development and Testing of Retrieval Augmented Generation in Large Language Models -- A Case Study Report", + "abstract": "Purpose: Large Language Models (LLMs) hold significant promise for medical\napplications. Retrieval Augmented Generation (RAG) emerges as a promising\napproach for customizing domain knowledge in LLMs. This case study presents the\ndevelopment and evaluation of an LLM-RAG pipeline tailored for healthcare,\nfocusing specifically on preoperative medicine.\n Methods: We developed an LLM-RAG model using 35 preoperative guidelines and\ntested it against human-generated responses, with a total of 1260 responses\nevaluated. The RAG process involved converting clinical documents into text\nusing Python-based frameworks like LangChain and LlamaIndex, and processing\nthese texts into chunks for embedding and retrieval. Vector storage techniques\nand embedding models were selected to optimize data retrieval, using Pinecone for\nvector storage with a dimensionality of 1536 and cosine similarity as the\nsimilarity metric. Human-generated answers, provided by junior doctors, were used as a\ncomparison.\n Results: The LLM-RAG model generated answers within an average of 15-20\nseconds, significantly faster than the 10 minutes typically required by humans.\nAmong the basic LLMs, GPT4.0 exhibited the best accuracy of 80.1%. This\naccuracy was further increased to 91.4% when the model was enhanced with RAG.\nCompared to the human-generated instructions, which had an accuracy of 86.3%,\nthe performance of the GPT4.0 RAG model demonstrated non-inferiority (p=0.610).\n Conclusions: In this case study, we demonstrated an LLM-RAG model for\nhealthcare implementation. The pipeline shows the advantages of grounded\nknowledge, upgradability, and scalability as important aspects of healthcare\nLLM deployment.", + "authors": "YuHe Ke, Liyuan Jin, Kabilan Elangovan, Hairil Rizal Abdullah, Nan Liu, Alex Tiong Heng Sia, Chai Rick Soh, Joshua Yi Min Tung, Jasmine Chiat Ling Ong, Daniel Shu Wei Ting", + "published": "2024-01-29", + "updated": "2024-01-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Retrieval AND Augmented AND Generation AND RAG" + } +] \ No newline at end of file