diff --git "a/related_53K/test_related_long_2404.16300v1.json" "b/related_53K/test_related_long_2404.16300v1.json" new file mode 100644--- /dev/null +++ "b/related_53K/test_related_long_2404.16300v1.json" @@ -0,0 +1,8851 @@ +[ + { + "url": "http://arxiv.org/abs/2404.16300v1", + "title": "Reinforcement Learning with Generative Models for Compact Support Sets", + "abstract": "Foundation models contain a wealth of information from their vast number of\ntraining samples. However, most prior arts fail to extract this information in\na precise and efficient way for small sample sizes. In this work, we propose a\nframework utilizing reinforcement learning as a control for foundation models,\nallowing for the granular generation of small, focused synthetic support sets\nto augment the performance of neural network models on real data classification\ntasks. We first allow a reinforcement learning agent access to a novel context\nbased dictionary; the agent then uses this dictionary with a novel prompt\nstructure to form and optimize prompts as inputs to generative models,\nreceiving feedback based on a reward function combining the change in\nvalidation accuracy and entropy. A support set is formed this way over several\nexploration steps. Our framework produced excellent results, increasing\nclassification accuracy by significant margins for no additional labelling or\ndata cost.", + "authors": "Nico Schiavone, Xingyu Li", + "published": "2024-04-25", + "updated": "2024-04-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Model AND Based AND Reinforcement AND Learning", + "gt": "2.1. Reinforcement Learning Reinforcement learning [14] defines an agent and an environment with rules on how they can interact. The agent receives rewards based on how their actions affect the environment, with one of several reward schemes. The rewards inform the optimal behaviour of the agent, and thus the desirable properties of the end model. Popular reward schemes include exploration-based, which incentivizes exploring the action space, and goal-based, which explores to achieve set goals. Past works have attempted to use reinforcement learning directly in classification algorithms, but this generally yields lacklustre results for the amount of effort and training time required [4]. This is due to the long convergence time of conventional reinforcement learning algorithms, and the relative ease of using simple deep learning models when a well-labelled dataset is available, rather than optimizing the loss with an agent. In our framework, we circumvent this issue by using a deep learning model for classification and optimizing it by altering the training set, rather than directly making the predictions using the agent. 2.2. Generative Models Generative models have shown unprecedented success in many tasks in natural language processing and computer vision [1, 13]. Such models are often trained on datasets with in excess of one billion images, which stores a large wealth of knowledge that can be accessed through their generation capabilities [1]. These generative models have been widely used in contemporary research for image synthesis, such as augmentation of existing samples to artificially simulate a larger dataset [19, 20]. Replacing the dataset entirely with synthetic images is also a topic of interest, with excellent preliminary results despite no real data [22]. Finally, the generation of large support sets to supplement real data has Figure 1. 
Overall framework also been explored, but this mainly utilizes synthesis over a large scale to shore up the weaknesses of a dataset [11]. Contemporary generative models usually require text prompts to guide their behaviour. General prompting is successful in simple tasks, such as guided image synthesis, but complex and specific prompts often lead to unexpected results. This leads to an area of research known as prompt engineering, which is the focus of much of the recent literature in the topic of large models [2]. Common approaches generally utilize a fixed set of prompts that have been carefully engineered to produce certain results; in our framework, we allow the prompts to evolve naturally from a general structure to their optimal state using reinforcement learning to choose the subjects and the model performance as feedback.", + "pre_questions": [], + "main_content": "Introduction Deep learning [10] is one of the most popular and successful methods for any task where a large dataset can be procured, including fundamental computer vision tasks like classification. However, large, well-balanced, well-labelled datasets are often difficult and prohibitively expensive to acquire. Consequently, much of contemporary image classification utilizes a high quality source dataset and support sets with highly relevant data to the target task. The generation of such support sets has been a focus of contemporary research, and recently utilizes the output of the unprecedented success of large pretrained generative models like Stable Diffusion [13]. The advancements in generative models have led to the rise of synthetic datasets, where images are generated in large scale according to the target task and used in place of a real training dataset, yielding excellent results [6, 11, 22]. Despite these advancements, the body of research relating to synthetic datasets remains primarily focused on largebatch image synthesis. In this way, any issues caused by the unpredictable behaviour of modern generative models can easily be smoothed out. However, this results in the majority of successful applications requiring tens of thousands of images generated for a single task [6, 11], which is inefficient in time and cost. The goal of creating specific, highly focused support sets composed of several hundred images rather than several thousand is currently an open problem at the forefront of generative computer vision research. Consequently, it raises the question of if synthetic data can supplement real data, making up a very small portion of the overall dataset to shore up specific weaknesses, or whether synthetic data must make up a significant amount of the dataset if it is to be used at all. Reinforcement learning [14] is a popular control scheme that has an agent learn the optimal behaviour given an environment and a reward for desirable interactions. Recent studies have found reinforcement learning effective at writing and re-writing prompts [3, 7], but the use of reinforcement learning to guide the evolution of prompts has yet to be explored. Reinforcement learning is an excellent framework for imposing specific learned behaviours upon the resulting agent, and we posit that combining reinforcement learning with pretrained generative models will impart that much-needed specificity on the synthesized images, resulting in significant performance gains for a relatively small number of synthetic images. 
In this work, we introduce a framework utilizing reinforcement learning as a control for large generative models to synthesize precise support sets, intended to bolster the lacking aspects of real datasets without overwriting them for increased model performance at no extra data or labelling costs. To accomplish this, we utilize a dictionary based on the features of the original training dataset, and allow a reinforcement learning agent to learn the optimal structures and word choice to generate high quality, specific prompts for Stable Diffusion. The controlled output of Stable Diffusion is then used to supplement the existing training data for a neural network model, and the performance of this model on a validation set is given as feedback to the agent. 1 arXiv:2404.16300v1 [cs.LG] 25 Apr 2024 In this way, the framework allows Stable Diffusion to act as an extension of the reinforcement learning agent, acting directly to improve the performance of the model by tweaking the prompts that make up the support set. We evaluate this framework on several datasets, including CIFAR10 [8], and Tiny-ImageNet [9], showing free improvements on neural networks of \u223c1% for less than 500 total images in the support set. The main contributions for this work are: \u2022 A novel framework combining reinforcement learning and large pretrained generative models for the construction of small, focused, and effective synthetic support sets. \u2022 A new reward scheme that facilitates a better interaction sets. \u2022 A new reward scheme that facilitates a better interaction between reinforcement learning and classification. 3.1. Problem Formulation Initially, there is a well-labelled dataset D, consisting of N training samples, and a synthetic support set S, consisting of k\u2217m samples, where k is the current step number, and m is the number of samples generated per step. In this work, we impose an extra limit Nsyn on the number of samples in S. There is also a validation set V, and a test set T . Our goal in this study is to train a reinforcement learning agent A to optimally control a pretrained generative model, such as Stable Diffusion, to optimally populate S with at most Nsyn synthetic images, where Nsyn << N. As shown in Fig. 1, in each step, the agent forms a prompt, feeds it to Stable Diffusion, and the resulting images are added to S. The resulting dataset D+S is used to train a model M , and its performance on V is passed back to A as feedback. This 2 Figure 2. Images generated using our framework using CIFAR10 [8] labels. continues until a total of Nsyn images are contained within S, at which point the exploration thread terminates. When all exploration threads within the preset exploration budget are explored, the resulting framework is tested on the test set T yielding the final performance. 3.2. Image Synthesis For image synthesis, we are using Stable Diffusion [13], a successful text-to-image model that is trained on billions of text-image pairs.Stable Diffusion has already been used to great effect in contemporary works when the aim is to replace a real dataset [18, 22], and to augment existing samples [19, 20], but with comparatively fewer works focusing on consistently generating small, effective support sets. 3.3. Controlling the Synthesis with RL Reinforcement learning (RL) defines an agent and an environment, and gives a set of actions that the agent can take to affect the environment. In our framework, we take a classification model and its training dataset as the Environment. 
The reinforcement learning agent adaptively selects text prompts for the generative model to drive image synthesis, which supplements the training set for classification performance improvement. The agent then receives feedback based on the change in the model’s performance, which is taken as the State in our reinforcement framework. In this study, we adopt the policy-based method for agent optimization, building a policy π : s → a that maps states to optimal actions [14]. The specific objective function is: L(θ) = Ê[min(r_t(θ) Â_t, clip(r_t(θ), 1 − ε, 1 + ε) Â_t)], (1) where r_t(θ) = π_θ(a_t|s_t) / π_θ_old(a_t|s_t) is the probability ratio, Â_t is the estimator of the advantage function at step t, and ε is a small clipping value. Action space: Our framework allows the reinforcement learning agent to interact with Stable Diffusion by forming prompts. Prompts of unlimited length are subject to unmanageable time complexity, so we utilize a set dictionary based on the dataset. We formulate the interaction with a basic sentence structure with enough expression to accurately place the image, and pose the following format: “A {domain} of a {class}, {class}, and {class}”. Domains include photographs, digital artwork, paintings, mosaics, and other clear representations of the class. Next, three class names are chosen from the list of classes in the dataset. We notice that Stable Diffusion usually puts more attention on the first “class” term and generates the corresponding theme in the resulting image. Thus, our prompt design allows the agent to position the generated images at the boundaries between classes, which is where new images are most effective for improving classification performance [12]. This is in contrast to traditional prompting methods, where the prompt describes the primary subject of interest with qualifiers for other subjects. We instead follow contemporary diversity research, prioritizing brevity and maximal control [15]. One benefit of our approach is that single-class representative samples can be generated easily, as in “A {domain} of a car, car, and car”, where the repetition has the added benefit of including more representative features from the chosen class. Multi-class samples can be generated just as easily by including two or three different class names, and the significance of each class can be altered by changing the order in which the classes appear. In this way, our method allows the agent an unprecedented degree of control over the output of Stable Diffusion, resulting in significantly improved precision. Reward function: The agent’s desired behaviour is to increase the accuracy of the classification model as much as possible with limited image synthesis. In our framework, we use a combined reward function, utilizing the validation-set accuracy and the entropy to bias our model towards high, balanced accuracy. Under the assumption of a well-labelled training dataset, the former (i.e. classification accuracy on the validation set) offers the most unfiltered access to the state changes in the model’s performance. It is noteworthy that, unlike previous works utilizing reinforcement learning for classification in which the accuracy alone is used, the addition of entropy in our reward allows the framework to simultaneously reward the improvement of weak classes, which improves the overall model performance on underrepresented classes.
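To make the action space concrete, the following sketch maps one agent action (a domain index and three class indices) to a prompt of the form “A {domain} of a {class}, {class}, and {class}” and queries Stable Diffusion for a batch of support images. It is a minimal illustration rather than the authors’ code: the Hugging Face diffusers pipeline, the checkpoint name, and the use of the CIFAR-10 class list are assumptions.

import torch
from diffusers import StableDiffusionPipeline

DOMAINS = ['photograph', 'painting', 'still-life', 'image', 'digital image']   # domain dictionary from Sec. 4.2
CIFAR10_CLASSES = ['airplane', 'automobile', 'bird', 'cat', 'deer',
                   'dog', 'frog', 'horse', 'ship', 'truck']

def action_to_prompt(domain_idx, class_idxs):
    # An action is one domain index plus three class indices; repeating an index yields a
    # single-class prompt, and the order of the classes encodes their relative significance.
    c1, c2, c3 = (CIFAR10_CLASSES[i] for i in class_idxs)
    return f'A {DOMAINS[domain_idx]} of a {c1}, {c2}, and {c3}'

pipe = StableDiffusionPipeline.from_pretrained(
    'runwayml/stable-diffusion-v1-5', torch_dtype=torch.float16).to('cuda')
prompt = action_to_prompt(0, [3, 3, 5])                           # 'A photograph of a cat, cat, and dog'
support_images = pipe(prompt, num_images_per_prompt=10).images    # m = 10 images added to S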
The formulation of our reward function is shown in Eq. 2, where the entropy under a state s can be calculated following Eq. 3. r(s, s\u2032) = \u2206Acc(s \u2192s\u2032) \u2212\u2206\u03c3entropy(s \u2192s\u2032), (2) \u03c3entropy(x, M) = \u2212\u03a3k i=1pM(yi|x) log pM(yi|x), (3) where s\u2032 is the state after performing action a, and s is the state before performing action a, and pM(\u02c6 y|x) represents the class probability of sample x under model M. 3 Pretrained Rand Syn Ours ResNet-18 92.0 92.3 92.7 ResNet-50 93.9 94.2 94.5 VGG-16 93.9 94.1 94.9 ShuffleNetV2 93.3 93.6 94.1 EfficientNetV2S 94.1 94.3 95.2 Table 1. Classification accuracy (%) on CIFAR-10 [8]. Pretrained Rand Syn Ours ResNet-18 54.3 54.4 54.7 ResNet-50 71.1 71.1 71.5 VGG-16 63.2 63.4 63.9 ShuffleNetV2 48.6 48.6 48.8 EfficientNetV2S 69.9 70.0 70.4 Table 2. Classification accuracy (%) on Tiny ImageNet [9]. 3.4. Full Algorithm One training step for the agent A consists of the following processes, in order: 1. A chooses a domain and three classes in the prompt to represent the generated images. 2. m images are generated following the prompt, which are added to S. 3. M is trained on D + S, and tested again on V, reporting the accuracy and entropy of the predictions. 4. The reward r(s, s\u2032) is given back to the agent. If k = 1, then the pretrained statistics are used in place of the data from the previous state s. This sequence is optimized using Proximal-PolicyOptimization [14] to find the optimal set of Nsyn synthetic samples contained in S. After the training process is completed, the algorithm has found the optimal prompts for to generate the optimal support set, and runs a final time without feedback to form S, the desired support set. 4. Results & Discussion 4.1. Datasets We evaluate our framework on two popular natural image datasets, CIFAR-10 [8] and Tiny ImageNet [9]. We chose these datasets due to computational reasons \u2013 the action space complexity scales as n3, where n is the number of classes in the dataset. Tiny ImageNet is a 200 class balanced dataset of 100 000 64x64 coloured images, and CIFAR-10 is a 10 class balanced dataset of 60 000 32x32 coloured images. In each case, we split the datasets using an 80:10:10 ratio of train:validation:test. 4.2. Experimental Protocol We follow the setup laid out in Section 3. For both datasets, we use a domain dictionary of {\u201dphotograph\u201d, \u201dpainting\u201d, \u201dstill-life\u201d, \u201dimage\u201d, \u201ddigital image\u201d} and a class dictionary composed of each class name once. In experiments, we select k = 10 to generate 10 images per step and our algorithm will run until a maximum of Nsyn = 400 images. Various models, including ResNet18, ResNet50 [5], ShuffleNetV2 [17], VGG-16 [16], and EfficientNetV2 [21], are evaluated in our experiments. We compare the results of our framework against vanilla trained models and the models trained with random synthetic images in equal number. The \u2019Random Synthesis\u2019 setting adds to the training set 400 images synthesized by selecting random classes to fill the blanks in the prompt, and our method uses the full reinforcement learning framework. 4.3. Main Results and Discussion The results of applying our framework are reported in Tables 1 and 2. In addition, example images generated off of the CIFAR-10 dataset are demonstrated in Fig. 2. 
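Before turning to the results, the reward of Eqs. (2) and (3) can be made concrete with a short sketch; the helper names and the PyTorch evaluation loop below are assumptions rather than the authors’ implementation, and the entropy of Eq. (3) is averaged over the validation set.

import torch
import torch.nn.functional as F

@torch.no_grad()
def accuracy_and_entropy(model, val_loader, device='cuda'):
    # Validation accuracy together with the mean of Eq. (3) over the validation set V.
    model.eval()
    correct, ent_sum, n = 0, 0.0, 0
    for x, y in val_loader:
        x, y = x.to(device), y.to(device)
        p = F.softmax(model(x), dim=-1)                  # p_M(y_i | x)
        correct += (p.argmax(dim=-1) == y).sum().item()
        ent_sum += (-(p * p.clamp_min(1e-12).log()).sum(dim=-1)).sum().item()
        n += y.numel()
    return correct / n, ent_sum / n

def step_reward(acc_old, ent_old, acc_new, ent_new):
    # Eq. (2): r(s, s') = delta Acc(s -> s') - delta sigma_entropy(s -> s')
    return (acc_new - acc_old) - (ent_new - ent_old)

In one exploration step of Section 3.4, the classifier M is retrained on D + S after the m new images are added, accuracy_and_entropy is re-evaluated on V, and step_reward is fed back to the agent (with the pretrained statistics used in place of the previous state when k = 1).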
From these results, we can see that our framework is superior to random synthesis for small-batch support set synthesis, increasing the accuracy by as much as 0.9% over the random synthesis method, and 1.1% over the baseline model. Notably, for two backbones on Tiny ImageNet, random synthesis fails to improve the performance of the model by > 0.1%, while our framework increases the accuracy by \u223c0.2%. In addition, our method adds only 0.33% extra images for CIFAR-10, and 0.2% for Tiny-ImageNet. Our experimental results show that the proposed framework has a high performance gain relative to the number of samples synthesized, a characteristic not seen in prior arts. We attribute this gain to the fine control that our designed reinforcement learning agent gives over the output of the large pretrained model, and the effectiveness of the feedback given back to the agent. Our framework currently requires some amount of information about the target dataset in order to work: class names, and a rough domain. This could be bypassed by forming the dictionary using an image-to-text encoder on representative samples after clustering by an unsupervised learning algorithm, but we leave the pursuit of this direction for future work. 5. Conclusions In this work, we proposed a framework allowing for the granular generation of small, focused synthetic support sets to augment the performance of general backbone networks on real data classification tasks. Our framework exploits the wealth of information present in large pretrained models by controlling their output using reinforcement learning agents, so that optimal, explainable prompts can be generated over many training steps. Our framework produced excellent results on a variety of backbones, increasing classification accuracy by significant margins for no additional labelling or data cost. 4", + "additional_info": [ + [ + { + "url": "http://arxiv.org/abs/2402.15164v2", + "title": "EasyRL4Rec: An Easy-to-use Library for Reinforcement Learning Based Recommender Systems", + "abstract": "Reinforcement Learning (RL)-Based Recommender Systems (RSs) have gained\nrising attention for their potential to enhance long-term user engagement.\nHowever, research in this field faces challenges, including the lack of\nuser-friendly frameworks, inconsistent evaluation metrics, and difficulties in\nreproducing existing studies. To tackle these issues, we introduce EasyRL4Rec,\nan easy-to-use code library designed specifically for RL-based RSs. This\nlibrary provides lightweight and diverse RL environments based on five public\ndatasets and includes core modules with rich options, simplifying model\ndevelopment. It provides unified evaluation standards focusing on long-term\noutcomes and offers tailored designs for state modeling and action\nrepresentation for recommendation scenarios. Furthermore, we share our findings\nfrom insightful experiments with current methods. EasyRL4Rec seeks to\nfacilitate the model development and experimental process in the domain of\nRL-based RSs. 
The library is available for public use.", + "authors": "Yuanqing Yu, Chongming Gao, Jiawei Chen, Heng Tang, Yuefeng Sun, Qian Chen, Weizhi Ma, Min Zhang", + "published": "2024-02-23", + "updated": "2024-04-27", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Model AND Based AND Reinforcement AND Learning", + "gt": "2.1 Development of RL-Based RSs Reinforcement Learning (RL), a branch of machine learning, focuses on agents learning the ability of decision-making through environmental feedback2. Recently, RL-Based Recommender Systems (RSs) have gained considerable attention due to their ability to model recommendation as a multi-step decision-making process and enhance the long-term bene\ufb01ts. Numerous studies have investigated the application of RL in recommender systems. Shani et al. [44] \ufb01rst formulated the recommendation process as a Markov Decision Process (MDP) and utilized model-based RL methods. Zhao et al. [62] adapt a DQN [37] architecture to incorporate positive and negative feedback from users. Zheng et al. [63] applies the Dueling DQN algorithm to news recommendation. Chen et al. [9, 10] extend REINFORCE [54] and o\ufb00-policy actor-critic algorithm to recommendation. Xin et al. [55] proposed to utilize self-supervision signals to empower RL-based RSs. Unlike most methods based on discrete actions, Cai et al. [4], 2RL in this work speci\ufb01cally refers to deep reinforcement learning. EasyRL4Rec: An Easy-to-use Library for Reinforcement Learning Based Recommender Systems SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Liu et al. [34], Xue et al. [57] investigate the applications of continuous action-based RL and improvements of long-term user engagement in short-video scenarios. In addition, Ren et al. [38] focus on e\ufb00ective state representations, while recent work [16, 19] addresses speci\ufb01c issues when applying RL in RSs. The above work mainly focuses on optimizing RL policy, state modeling, or speci\ufb01c issues in recommendation scenarios. However, over 60% of existing work is not open-source, inspiring us to develop a library that encompasses these aspects and facilitates the reproduction of existing work. 2.2 Resources for RL-Based RSs Existing resources for RL-based RSs can be categorized into two groups: Simulators & Datasets, and Frameworks & Libraries. In Table 1, we summarize the characteristics of existing resources and our EasyRL4Rec regarding modules and experimental procedures. 2.2.1 Simulators and Datasets. Numerous studies have focused on the development of datasets or simulation platforms to create interactive RL environments. RecoGym [39] focuses on simulation bandit environments under e-commerce recommendation setting, while Virtual-Taobao [45] provides a user simulator trained on historical behavior data. RecSim [23] is a simulation platform supporting sequential user interactions and easy con\ufb01guration of environments. Additionally, SOFA [22] is the \ufb01rst simulator that accounts for interaction biases for optimization and evaluation. In the latest work, RL4RS [51] provides a validated simulation environment, advanced evaluation methods, and a real dataset. KuaiSim [60] o\ufb00ers an environment with multi-behavior feedback, supporting three levels of recommendation tasks. 
However, most of these studies merely provide an interactive environment, lacking integrated experimental procedures and other crucial components in RL-based RSs, such as diverse policies and state trackers. RL4RS is the most similar work to our library, with the following distinctions: 1) They focus on datasets and evaluation, whereas our goal is to o\ufb00er an easy-to-use framework for developing new models and facilitating experiments. 2) They support only speci\ufb01c slate recommendation scenarios, whereas our library caters to broader and more commonly used scenarios. 2.2.2 Frameworks and Libraries. As far as we know, there are few frameworks or libraries that directly address RL-based RSs. However, some high-quality libraries exist for Multi-ArmedBandit (MAB) algorithms [1, 6, 30, 49] with interactive training environments3. BEARS [2] serves as an evaluation framework that facilitates easy testing of bandit-based RS solutions and supports reproducible of\ufb02ine assessments. MABWiser [47] is a parallelizable library that supports traditional MAB solutions and Contextual Bandits. Open Bandit Pipeline (OBP) [40] focuses on o\ufb00-policy evaluation (OPE) and provides a streamlined and standardized library for implementing batch bandit algorithms and OPE. iRec [46] proposes an interactive recommender systems framework that also caters to multiarmed bandits models. These libraries primarily serve bandit-based RSs and do not incorporate RL policies. Moreover, some well-established libraries 3In Table 1, MAB algorithms is not classi\ufb01ed as RL policies, within the scope of deep reinforcement learning. Figure 1: Architecture of EasyRL4Rec. The library is structured around four core modules: Environment (abbreviated as Envs), Policy, StateTracker, and Collector. Buffer serves as a fundamental data structure for organizing raw data trajectories, while the Trainer and Evaluator act as executors, managing the entire process. designed for classic RL scenarios(e.g. rllib [31], tianshou [53]) cannot be directly applied to recommendations due to the lack of recommendation environment construction and state modeling.", + "pre_questions": [], + "main_content": "INTRODUCTION Recommender systems (RSs) are increasingly becoming integral to various domains, such as e-commerce, social media, and online streaming services. Traditional RSs usually rely on supervised learning methods like collaborative \ufb01ltering [41] to learn and predict users\u2019 interests. However, these methods often fail to capture the long-term e\ufb00ects, leading to potential issues like Feedback Loops [35], Filter Bubbles [12], and other undesirable biases [7, 8]. Reinforcement Learning (RL)-Based Recommender Systems have gained rising attention due to their ability to optimize long-term user engagement. In this setting, the recommendation process is perceived as a multi-step decision-making problem. An agent (recommender) interacts with the environment (users) and receives feedback on actions (items). The main goal of RL-based RSs is to learn an optimal recommendation policy that maximizes cumulative rewards (users\u2019 long-term engagement) through trial and error. Currently, RL-based RSs have been extensively applied in scenarios such as short video recommendations [4, 5, 34, 57], e-commerce recommendations [39, 61], and beyond. With increasing attention to RL-based RSs, research in this \ufb01eld is confronted with the following three challenges. \u2022 Absence of an easy-to-use framework. 
Currently, there is a lack of easy-to-use frameworks for academic research. Research for RL-based RSs requires interactive environment building. However, existing resources neither use mass datasets nor simulated data generated by simulators, making them inconvenient for usage. In addition, it is di\ufb03cult to directly apply RL libraries (e.g. rllib [31]) to recommendation scenarios due to the absence of suitable environments and state modeling. \u2022 Distinct evaluationstrategiesin di\ufb00erent work. The absence of standardized evaluation metrics complicates the comparison of model performances across di\ufb00erent research teams. While some studies follow traditional RS metrics such as Normalized Discounted Cumulative Gain (NDCG) and Hit Rate (HR), others rely on RL-speci\ufb01c measures like cumulative reward and interaction length, leading to inconsistencies in performance assessment. \u2022 Poor reproducibility of previous studies. Thirdly, poor reproducibility of some previous studies increases the di\ufb03culty of further studies. Researchers and practitioners in this \ufb01eld often SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Yuanqing Yu, et al. Table 1: A comparison between EasyRL4Rec and existing resources. Real Data refers to using data collected from the real world and not generated by simulators. Disc.Act. indicates support for discrete action-based policies, while Cont.Act. is short for continuous action. Interactive training indicates learning with instant feedback. Type Resource Modules Training Evaluation Real Data State Encoder Disc.Act. Cont.Act. RL Policy Offline Logs Interactive Long-term Simulators & Datasets RecoGym [39] \ufffd \ufffd \ufffd RecSim [23] \ufffd \ufffd \ufffd \ufffd Virtual-Taobao [45] \ufffd \ufffd \ufffd \ufffd \ufffd \ufffd SOFA [22] \ufffd \ufffd \ufffd \ufffd \ufffd \ufffd KuaiSim [60] \ufffd \ufffd \ufffd \ufffd \ufffd RL4RS [51] \ufffd \ufffd \ufffd \ufffd \ufffd \ufffd \ufffd Frameworks & Libraries BEARS [2] \ufffd \ufffd Only have Bandits methods. ABWiser [47] \ufffd \ufffd OBP [40] \ufffd \ufffd \ufffd \ufffd Most Bandits methods do not have an explicit training. BEARS [2] \ufffd \ufffd Only have Bandits methods. Most Bandits methods do not have an explicit training. \ufffd MABWiser [47] \ufffd \ufffd \ufffd OBP [40] \ufffd \ufffd \ufffd iRec [46] \ufffd \ufffd EasyRL4Rec \ufffd \ufffd \ufffd \ufffd \ufffd \ufffd \ufffd \ufffd face challenges in reproducing methods and ensuring the effectiveness of self-implemented baselines due to variations in details, hindering the field\u2019s development. To tackle the above issues, we implement a comprehensive code library for RL-based RSs, named EasyRL4Rec. It provides an easyto-use framework with core modules with rich choices and a unified training and evaluating process, aiming to simplify the model development and experimental process in the domain of RL-based RS. The library is composed of four core modules: Environment, Policy, StateTracker, and Collector, which cater to different stages of the RL interaction process. Environment, built from lightweight static datasets, provides feedback on upcoming actions. The Policy module selects the optimal action based on the current state, which is encoded by StateTracker.Collector bridges the interactions between Environmentand Policy. 
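As a rough sketch of how these four modules cooperate in one interaction step (the method and attribute names below are assumptions, not EasyRL4Rec’s actual API):

def collect_one_step(env, policy, state_tracker, buffer, obs, history):
    # The Collector encodes the raw observation into a state, asks the Policy for an item,
    # applies that action to the Environment, and stores the transition in the Buffer.
    state = state_tracker.encode(obs, history)
    action = policy.select_action(state)
    next_obs, reward, terminated, truncated, info = env.step(action)
    buffer.add(obs, action, reward, terminated or truncated)
    history.append((action, reward))
    return next_obs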
To address the difficulty of obtaining user states from environments, we implement multiple StateTrackers for state modeling, encompassing today\u2019s popularmethods in sequential modeling [21, 25, 48, 58]. Since items in recommendation systems are discrete, EasyRL4Rec includes a mechanism to convert continuous actions to discrete items, allowing for continuous action-based policies. Moreover, EasyRL4Rec provides a unified training and evaluation procedureby Trainerand Evaluatorexecutors. Evaluator supports three modes, allowing for the removal of recommended items and a quit mechanism. The Trainer module offers two training paradigms: learning directly from offline logs or training with a pre-trained user model. With this unified framework, we conduct comprehensive experiments on classic RL models and several recent work [16, 55] and present insightful results. The main contributions of this work can be summarized as follows: \u2022 Easy-to-use Framework. EasyRL4Rec provides an easy-to-use framework for RL-based RSs. We construct lightweight RL environments based on five public datasets encompassing diverse domains, which are easy to follow for researchers. Moreover, the design of core modules with rich options reduces the complexity of developing a new model. \u2022 Unified Evaluation Standards. EasyRL4Rec offers a unified experimental pipeline, evaluating models with various metrics from the perspective of long-term benefits (e.g. Cumulative Reward). Furthermore, the library offers two training paradigms and three evaluation settings, giving users multiple choices. \u2022 Tailored Designsfor RecommendationScenarios. In response to challenges when applying RL algorithms in practical recommender systems, we have developed customizable modules for state modeling and action representation, with a conversion mechanism to support continuous action-based policies. \u2022 Insightful Experimentsfor RL-based RSs. With EasyRL4Rec, we conduct comprehensive experiments to compare the performance of classic RL models and some recent work. We present insightful experimental results from various perspectives. In RL-based RSs, the sequential interactions are formulated as a Markov decision process (MDP) \ud440= (S, A, P, \ud445,\ud6fe) where \u2022 S, state space, \ud460\ud461\u2208S represents the current state of user at timestamp \ud461. In recommendation scenarios, user characteristics and user history are usually modeled as states, where user history refers to the actions and corresponding feedback of each step in the previous interaction process. \u2022 A, action space, \ud44e\ud461\u2208A represents the action that RL agent take at timestamp \ud461. In recommendation scenarios, the recommended product is generally chosen as the action. \u2022 P: state transition probability function, \ud443(\ud460,\ud44e,\ud460\u2032) = \ud443(\ud460\ud461+1 = \ud460\u2032|\ud460\ud461= \ud460, \ud44e\ud461= \ud44e) represents the transition probability from (\ud460,\ud44e) to \ud460\u2032. \u2022 \ud45f: reward function, \ud45f(\ud460,\ud44e) denotes the reward by taking action \ud44eat state \ud460. Rewards in recommendation scenarios are usually set to feedback signals provided by users, such as click behavior, favorite behavior, or dwell time. \u2022 \ud6fe: the discount factor for future rewards. 
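Together, these elements determine the discounted return that the agent maximizes (formalized in Eq. (1) below); as a minimal illustration:

def discounted_return(rewards, gamma):
    # G_T = sum over t of gamma**t * r_t for a single interaction trajectory.
    return sum(gamma ** t * r for t, r in enumerate(rewards))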
The main goal of RL is to learn an optimal decision-making policy π_θ(a|s) to maximize the cumulative reward G_T: max_π E[G_T] = max_π E[ Σ_{t=0}^{T} γ^t r(s_t, a_t) ]. (1) 4 THE LIBRARY-EASYRL4REC 4.1 Library Overview The overall architecture of EasyRL4Rec is illustrated in Figure 1. The library is composed of four core modules: Environment, Policy, StateTracker, and Collector, each addressing distinct stages of the reinforcement learning interaction process. These modules build upon the foundations provided by the Gymnasium and Tianshou [53] libraries. The Buffer serves as a fundamental data structure for organizing raw data trajectories, while the Trainer and Evaluator act as executors, managing the entire process. In the following, we will delve into the designs of these core modules. The details of the training and evaluation pipeline can be found in Section 5. 4.2 Environment The Environment module is responsible for constructing RL environments from static datasets and providing feedback on upcoming actions. Compared to simulation platforms, environments built from public datasets are much lighter and allow for faster training. Previous work [16] has pointed out that some recommendation datasets are too sparse or lack necessary information (e.g., timestamps, explicit feedback, item categories) to build RL environments. In EasyRL4Rec, we choose five public datasets suitable for the RL task to construct environments, encompassing diverse recommendation scenarios, such as e-commerce, movies, and short-video recommendations. The statistics of datasets after preprocessing are summarized in Table 2. Consistent with prior research [22, 51, 60], we implement RL environments using the APIs of OpenAI Gymnasium [50]. The central function of the environment is the step() method, which returns observed states and rewards for the current action. Rewards can be defined as clicks, ratings, etc., and can be sourced from static datasets. For offline evaluation, we employ an MF model trained on the test set to predict the rating/reward of vacant user-item pairs. Table 2: Datasets currently involved in the EasyRL4Rec. These datasets vary in size and encompass diverse domains. Columns: Dataset, Domain, Usage, #user, #item, #inter. Coat [42] (Product): Train 290 / 300 / 7.0k; Test 290 / 300 / 4.6k. YahooR3 [36] (Music): Train 15,400 / 1,000 / 311.7k; Test 5,400 / 1,000 / 54.0k. MovieLens (https://grouplens.org/datasets/movielens/1m) (Movie): Train 6,040 / 3,952 / 800.4k; Test 6,040 / 3,952 / 200.5k. KuaiRec [17] (Video): Train 7,176 / 10,728 / 12530.8k; Test 1,411 / 3,327 / 4676.6k. KuaiRand [18] (Video): Train 26,210 / 7,538 / 1141.6k; Test 27,285 / 7,583 / 1186.1k. 4.3 Policy The Policy module applies a reinforcement learning algorithm to select the optimal action based on the current state. We implement this module by extending RL policies in Tianshou [53], incorporating the following tailored designs for recommendation scenarios. Firstly, we support both discrete and continuous action-based policies. Given that items in RSs are discrete and more suitable for discrete action-based policies, EasyRL4Rec includes a mechanism to convert continuous actions to discrete items, supporting continuous action-based policies. Secondly, we customize policies by encoding the state via StateTracker since states cannot be directly obtained from environments.
These encoded state embeddings are optimized simultaneously with policies. Thirdly, we introduce a Remove Recommended Items option when interacting with environments, addressing the common need for multi-round recommendation. This feature is implemented by adding a mask for logits. Policies supported by EasyRL4Rec can be categorized as follows: • Batch RL: Learn policies from offline logs collected in advance, also known as OfflineRL. EasyRL4Rec supports classical batch RL algorithms, including BCQ [13], CQL [29], and CRR [52]. • Model-free Off-policy RL: Learn from trajectory data generated by a different policy. Classic algorithms such as DQN [37], C51 [3], DDPG [32], and TD3 [14] have been included in EasyRL4Rec. • Model-free On-policy RL: Learn from trajectory data generated by the current policy being learned. EasyRL4Rec supports algorithms like PG [54], A2C [26], PPO [43], etc. 4.4 StateTracker Unlike traditional application fields such as games, the state in recommendation scenarios cannot be directly obtained from the environment, which requires artificial modeling of the state. The StateTracker module is responsible for modeling and encoding states. In most research work [10, 19, 55, 59], users’ characteristics and interaction history are usually transformed into state representations to capture user preferences and action sequence values at the current moment. This setting will be used for state encoding in this study. We have implemented five different StateTrackers in EasyRL4Rec, which are all classical models for sequential recommendation: • Average [33]: Average concatenates the user embedding and the average pooling result of historical actions as the state representation. • GRU [21]: GRU is a seminal method using RNNs to model user action sequences for session-based recommendation. • Caser [48]: Caser learns sequential patterns using a Convolutional Neural Network (CNN), modeling historical actions as an “image” among time and latent dimensions. • SASRec [25]: SASRec is a self-attention-based sequential model that can balance short-term intent and long-term preference. • NextItNet [58]: NextItNet is an effective generative model that is capable of learning high-level representations from both short- and long-range item dependencies. 4.5 Collector The Collector module serves as a crucial link facilitating interactions between Environment and Policy, responsible for collecting interaction trajectories into Buffer. Collector plays a pivotal role in both the Training and Evaluation stages. To be specific, at time t, the collector would call the Policy module to execute an action a_t according to observed information o_t. Subsequently, it conveys this action to Environment and gets the corresponding reward r_t. Figure 2: Visualization of data/trajectories stored in Buffer. To support simultaneous interactions in multiple environments, Buffer comprises interaction data from b environments, with different trajectories in each environment represented by distinct colors and the presence of the start symbol.
Considering a complete interaction from time 1 to time\ud447, the observations, actions, and rewards at each timestamp, denoted as {(\ud45c1, \ud44e1,\ud45f1), ..., (\ud45c\ud447, \ud44e\ud447,\ud45f\ud447)}, would be considered as one single trajectory and stored in Buffer. These trajectories would be sampled for policy learning and subsequent updates. For e\ufb00ective construction and utilization, EasyRL4Rec supports simultaneous interaction in isolative multiple environments. As visualization in Figure 2, data in the Buffer are stored in a streaming manner. Data collected at timestamp \ud461would be stored as a block containing all pertinent information (like (\ud45c\ud461, \ud44e\ud461,\ud45f\ud461)). These blocks are organized sequentially, and trajectories are di\ufb00erentiated by the presence of the start symbol. 5 TRAINING AND EVALUATION PIPELINE In this section, we will introduce the whole pipeline of training and evaluation applied to RL-based RSs in our library. Following training and evaluation settings exhibit substantial distinctions from conventional recommender systems. 5.1 Training Di\ufb00erent from multi-armed bandit (MAB) algorithms [1, 6, 30, 49], which often have simpler problem structure and operational mechanisms, deep RL uses deep neural networks to approximate the optimal policy and/or value functions, requiring training before deployment. Typically, training involves an iterative process where the model interacts with an environment to update the network\u2019s weights and learn optimal policies through trial and error. EasyRL4Rec o\ufb00ers two training settings: (1) Learning from Of\ufb02ine Logs and (2) Learning with a User Model. In the former setting, the policy directly learns from o\ufb04ine logs, which have been collected in the Buffer in advance. As the blue lines shown in Figure 3, o\ufb04ine logs from the dataset would be split into trajectories and used to build Buffer for the training process. Then, Policy would iteratively get a batch of trajectories and learn the relationships between actions and outcomes involved. In EasyRL4Rec, we implement three bu\ufb00er construction methods: (1) Sequential: logs would be split in chronological order. (2) Convolution: logs would be augmented through convolution. (3) Counterfactual: logs would Figure 3: Two Training Settings. Blue lines represent the process of learning from o\ufb00line logs, while red lines represent the process of learning with a user model. be randomly shu\ufb04ed over time. This setting is suitable for classic batch RL methods such as BCQ [15], CQL [29], and CRR [52]. The Learning with a User Model setting follows a paradigm similar to ChatGPT\u2019s RLHF learning [11]. As the red lines shown in Figure 3, a user model (or reward model) is pre-trained using training data to capture users\u2019 preferences. Then, a behavior policy would interact with this user model and Collector collects feedback on a series of actions into Buffer. Subsequently, the target policy would learn from trajectories stored in Buffer. The distinction between on-policy and o\ufb00-policy methods hinges on whether the behavior policy is the same as the target policy. One epoch in the training process can have multiple above loops. 5.2 Evaluation The evaluation process and metrics vary across current studies, making it challenging to compare model performance from di\ufb00erent research teams fairly. In EasyRL4Rec, we execute evaluation in an o\ufb04ine manner, due to less timeand money-consuming than online tests. 
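As an illustration of the Sequential construction described above, the sketch below splits each user’s log into chronologically ordered trajectories; the pandas-based code and the column names are assumptions, not EasyRL4Rec’s implementation.

import pandas as pd

def build_sequential_trajectories(log_df, max_len=30):
    # Assumed columns: user_id, item_id, reward, timestamp.
    # Each user log is sorted by time and cut into trajectories of at most max_len steps,
    # which can then be written into the Buffer as (o_t, a_t, r_t) blocks.
    trajectories = []
    for _, user_log in log_df.sort_values('timestamp').groupby('user_id'):
        steps = list(zip(user_log['user_id'], user_log['item_id'], user_log['reward']))
        for start in range(0, len(steps), max_len):
            trajectories.append(steps[start:start + max_len])
    return trajectories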
More importantly, a policy requires thorough evaluation in an o\ufb04ine environment before deployment online. Due to the sparsity of the dataset and missing interaction data in the test set, we follow the simulation methodology proposed by previous research [22, 51]. To be speci\ufb01c, a simulated environment is established to provide missing feedback from users on speci\ufb01c items recommended by the policy. In our library, we adopt a similar approach to [22], leveraging a Matrix Factorization (MF) [27, 28] model to predict missing values in the user-item matrix. To better simulate real online user behavior for evaluation, we introduce a quit mechanism following work [16, 19, 56]. In this setting, users will interrupt the process of interaction and quit when the termination condition is triggered. The termination condition can be customized according to datasets and requirements of researchers, like considering the mentality of boredom. Moreover, we o\ufb00er the option of allowing repeated recommendations, catering to the needs of online applications. Combining these two mechanisms, EasyRL4Rec o\ufb00er three modes for evaluation: FreeB, NX_0_, NX_10_, which are described in detail as follows: \u2022 FreeB: allow repeated recommendations, interactions are terminated by quit mechanism. SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Yuanqing Yu, et al. Table 3: Metrics currently involved in the EasyRL4Rec. Metrics measuring long-term e\ufb00ects are used to evaluate RL policies, while others can be used to evaluate user models. Scenarios Metrics Reinforcement Learning Long-term e\ufb00ects Cumulative Reward (Rcumu), Average Reward (Ravg), Interaction Length (\ud43f\ud452\ud45b\ud454\ud461\u210e) Recommander Systems Prediction MAE, MSE, RMSE Top-K Recall, Precision, NDCG, HitRate, MAP, MRR Others Coverage, Diversity, Novelty \u2022 NX_0_: prohibit repeated recommendations, interactions are terminated by quit mechanism. \u2022 NX_X_: prohibit repeated recommendations, interactions are \ufb01xed as X rounds without quit mechanism. Furthermore, EasyRL4Rec provides abundant evaluation metrics that are commonly used in the \ufb01eld of Reinforcement Learning and Recommender Systems, which are summarised in Table 3. To evaluate the long-term e\ufb00ects of RL policies, we introduce Cumulative Reward (Rcumu) \u00cd \ud461\ud45f\ud461to measure the cumulative gain of one interaction trajectory, which is usually applied in the RL scenario. In addition, Interaction Length (\ud43f\ud452\ud45b\ud454\ud461\u210e) and Average Reward (Ravg measure the length of the interaction trajectory and the single-round reward, respectively. We also provide commonly used metrics in traditional RSs for evaluating user models or other recommendation models, such as Normalized Discounted Cumulative Gain (NDCG [24]), HitRate, etc. 6 APPLICATION EXAMPLES OF EASYRL4REC In this section, we present the typical usage example of our library, which includes three stages: Preparation, Training, and Evaluation. To implement a new algorithm using EasyRL4Rec, researchers can modify our core modules easily. 6.1 Initialization & Preparation In this, we must prepare the chosen dataset and train the user model, which will be used to construct environments. The following code snippet demonstrates the usage of our library to pre-train a user model. Firstly, one must specify the save path and the dataset. 
After con\ufb01guring the learning task and initializing the user model, the model can be \ufb01tted to the training data. All intermediate results and model parameters will be saved. 1 # 1. Prepare the saved path. 2 MODEL_SAVE_PATH, logger_path = prepare_dir_log(args) 3 4 # 2. Prepare dataset 5 env, dataset, kwargs_um = get_true_env(args) 6 dataset_train, dataset_val = prepare_dataset(args, dataset,...) 7 8 # 3. Setup user model 9 task, task_logit_dim, is_ranking = get_task(args.env, args.yfeat) 10 ensemble_models = setup_user_model(args,...) 11 12 # 4. Learn and evaluate model 13 ensemble_models.fit_data(dataset_train, dataset_val,...) 6.2 Policy Training As mentioned in section 5.1, EasyRL4Rec supports two types of training settings. Here, we take the Learning with a User Model setting as an example, with the main code outlined below. We call the Policy module to give a batch of actions according to current states, then obtain feedback from Environment. With trajectories collected in Buffer, we can update the parameters in Policy. 1 # Collect training data in Buffer 2 result = self.policy(self.data, self.buffer, ...) # inference 3 act = to_numpy(result.act) 4 if self.exploration_noise: # exploration 5 act = self.policy.exploration_noise(act, self.data) 6 # Obtain feedback from Environment 7 obs_next, rew, terminated, truncated, info = self.env.step(...) 8 ... 9 # Update Policy 10 losses = self.policy.update(self.batch_size, ...) 6.3 Policy Evaluation During the evaluation process, we will evaluate the performance of the trained policy, measuring the long-term e\ufb00ects. As the following code snippet shows, we \ufb01rst reset all parameters in Environment and Buffer. Then, the Collector will call the Policy and Environment module to collect test trajectories, similar to the training process. For collected data, we calculate metrics through callback functions5. 1 # Reset the Environment and Buffer 2 collector.reset_env() 3 collector.reset_buffer() 4 policy.eval() 5 # Collect test trajectories (call Policy and obtain feedback) 6 test_result = collector.collect(n_episode) # similar as training 7 ... 8 # Callback functions to calculate metrics 9 for callback in self.policy.callbacks: 10 callback.on_epoch_end(self.epoch, test_result) 7 EXPERIMENTS In this section, we conduct comprehensive experiments on models based on classic RL policies and some recent work. To ensure a balanced comparison, we separately evaluate the overall performance of model-free RL policies (see Section 7.2) and batch RL policies (see Section 7.3) under di\ufb00erent training conditions. We then detail our insights regarding the e\ufb00ectiveness of RL policies in terms of coverage, diversity, and novelty factors often overlooked in prior research. Importantly, we identify the Preference Overestimation issue in RL-based RSs, exploring possible reasons. Further experiments on the in\ufb02uence of each component are available in Section 7.6 and Section 7.7. All tables and \ufb01gures can be reproduced by the code available at https://github.com/chongminggao/EasyRL4Rec/tree/main/vi 5A callback is a function provided as an argument to another function, executed after the latter completes its task. EasyRL4Rec: An Easy-to-use Library for Reinforcement Learning Based Recommender Systems SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Table 4: Performance comparison between model-free RL methods trained with a user model on three datasets. Policies using continuous action are indicated by the symbol (C). 
The meaning of underlining and bold should be included in caption. The best results are highlighted in bold, and the second-best results are underlined. Method Coat MovieLens KuaiRec Rcumu Ravg Length Rcumu Ravg Length Rcumu Ravg Length O\ufb00policy DQN 54.3476 2.4687 22.0196 21.9550 2.9400 7.4624 12.6543 0.7935 15.9480 C51 41.0788 2.5941 15.8280 18.0006 2.8290 6.3440 11.5855 0.8151 14.2304 DDPG(C) 16.3348 2.3277 7.0200 9.3706 3.0329 3.1152 9.2155 1.0192 9.0440 TD3(C) 16.3232 2.3542 6.9324 10.1620 2.9410 3.4568 7.8179 0.8610 9.0980 Onpolicy PG 79.3392 2.6586 29.8424 27.9514 3.2614 8.5708 18.8922 0.6326 29.8628 A2C 81.7952 2.7341 29.9164 32.4296 3.2526 9.9704 25.2442 0.8437 29.9196 PPO 73.0300 2.5306 28.8552 29.2253 3.6532 8.0000 19.0767 0.6359 30.0000 PG(C) 21.0912 2.5914 8.1424 17.8453 2.5247 7.0824 15.8942 0.6637 23.9260 A2C(C) 24.5980 2.5137 9.7932 26.5039 3.3884 7.8172 18.2968 0.6732 27.2416 PPO(C) 53.2212 2.5962 20.5100 29.4684 3.8737 7.6016 18.6928 0.6730 27.6780 DORL 76.9936 2.6025 29.5816 45.7708 2.6401 17.3440 22.5246 0.7533 29.9016 IntRD 77.4292 2.5926 29.8660 25.9168 2.2783 11.3676 20.9392 0.7748 27.0216 7.1 Experimental Settings 7.1.1 Datasets. We conduct experiments on Coat6, MovieLens7, and KuaiRec8, three representative datasets with various scales and domains. Statistics of three datasets have been shown in Table 2. Details of data Preprocessing and environment building are presented in Section 4.2. 7.1.2 Models implementation. Currently, EasyRL4Rec supports more than \ufb01fteen classic RL policies as base models and implements several recent work [16, 55] in RL-based RSs. Experiments are conducted on some representative algorithms, covering batch RL policies and model-free RL policies. Reproducing more models in previous work is considered future work. 7.1.3 Experimental Details. In the training stage, all policies are trained with 100 epochs, with the default learning rate 1e\u22123. We apply the \ufb01rst training paradigm, i.e. learning from o\ufb04ine logs, to train batch RL algorithms. When training model-free RL algorithms, we pre-train a user model (DeepFM [20]) to guide the online learning of RL policies. For the evaluation process, after each epoch, the policy undergoes evaluation using 100 episodes (i.e., interaction trajectories), and the maximum recommended sequence length is limited to 30. We set the random seed to 2023 for consistency and reproducibility. The code for replicating all experimental results is available in our library. 7.2 Performance of Model-free RL We evaluate the performance of representative model-free RL algorithms, which can be divided into two groups: \u2022 O\ufb00-policy: O\ufb00-policy methods learn from data produced by a policy di\ufb00erent from the one currently being optimized. For our experiments, we choose Q-learning based DQN [37], C51 [3], and continuous control based methods like DDPG [32] and TD3 [14]. 6https://www.cs.cornell.edu/~schnabts/mnar/ 7https://grouplens.org/datasets/movielens/1m/ 8https://kuairec.com/ \u2022 On-policy: Contrarily, on-policy methodsutilize data generated by their current policy. This category includes policy gradientbased PG [54], and actor-critic-based A2C [26] and PPO [43]. \u2022 Others: DORL [16] is a debiased model based on A2C policy to alleviate the Matthew e\ufb00ect. Intrinsic modi\ufb01es the reward model by incorporating an intrinsic reward, which fosters exploration to enhance item coverage and diversity. 
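The continuous-control baselines above (DDPG, TD3, and the (C) variants of PG, A2C, and PPO) emit continuous action vectors, so they require a conversion to a concrete item; one common realization, sketched here only as an assumption about how such a mapping can work, is a nearest-neighbour lookup over item embeddings.

import torch

def continuous_action_to_item(action_vec, item_embeddings):
    # action_vec: [dim] continuous action; item_embeddings: [num_items, dim].
    # Recommend the item whose embedding has the largest inner product with the action
    # (a top-k variant could be used when a slate of items is needed).
    scores = item_embeddings @ action_vec
    return int(torch.argmax(scores).item())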
While items in Recommender Systems (RSs) are inherently discrete and thus more compatible with discrete action-based policies, EasyRL4Rec incorporates a feature that enables the transformation of continuous actions into discrete items, supporting continuous action-based policies. The overall performance of model-free RL algorithms is presented in Table 4, which reports the mean values of all metrics during the last 25% of the training epochs. From the experimental results, we mainly have the following observations. Firstly, discrete action-based methods perform much better than continuous action-based methods (marked by (C)). Discrete methods such as DQN and C51 in the o\ufb00-policy category, and PG and PPO in the on-policy category, achieve higher Rcumu and longer \ud43f\ud452\ud45b\ud454\ud461\u210eacross all three datasets. Yet continuous methodslike DDPG(C), TD3(C), PG(C), and A2C(C) show fewer Lengths, which might be due to less e\ufb03cient exploration strategies or poor \ufb01t between the continuous action representation and the discrete item space in RSs. This trend suggests that discrete action spaces, which are more aligned with the discrete nature of items in recommender systems, are more e\ufb00ective for these model-free RL algorithms. Secondly, on-policy methods achieve better results than o\ufb00-policy methods within the same action type. For example, focusing on discrete action strategies on the KuaiRec dataset, the on-policy method PG demonstrates a better performance with an Rcumu of compared to the o\ufb00-policy method DQN. This trend is further supported by Figure 4, which shows the variation curves of 1) cumulative reward, 2) interaction length, and 3) single-round reward during training on the Coat and KuaiRec datasets. This could indicate that on-policy methods, which keeps target-policy align with SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Yuanqing Yu, et al. Figure 4: Variation curves of 1) cumulative reward, 2) interaction length, and 3) single-round reward during training on the Coat and KuaiRec datasets. behavior-policy, are more adept at handling the exploration-exploitation tradeo\ufb00in these contexts. Thirdly, methods designed for RSs (DORL, Intrinsic) achieve competing results to the best method. On the Coat and KuaiRec datasets, DORL and Intrinsic achieve slightly worse results than A2C, while on MovieLens, DORL achieves superior performance compared to other models. 7.3 Performance of Batch RL We conducted experiments on models based on Batch Reinforcement Learning (Batch RL) using the following representative algorithms: \u2022 BCQ [13], short for Batch-Constrained deep Q-learning, utilizes high-con\ufb01dence data to update the policy. \u2022 CQL [29], or Conservative Q-Learning, incorporates a Q-value regularizer over an actor-critic policy. \u2022 CRR [52], or Critic Regularized Regression, trains the policy to avoid out-of-distribution (OOD) actions. \u2022 SQN [55], or Self-Supervised Q-learning, employs self-supervised learning (SSL) to enhance Reinforcement Learning-based Recommender Systems (RL RSs). It comprises two outputlayers (heads): one for the cross-entropy loss and the other for RL. Table 5: Performance comparison between batch RL methods trained from o\ufb00line logs. The best results are highlighted in bold, and the second-best results are underlined. 
Table 5: Performance comparison between batch RL methods trained from offline logs. The best results are highlighted in bold, and the second-best results are underlined.

Method | Coat (Rcumu, Ravg, Len) | MovieLens (Rcumu, Ravg, Len) | KuaiRec (Rcumu, Ravg, Len)
BCQ | 9.56, 2.36, 4.06 | 9.54, 3.81, 2.51 | 4.34, 0.83, 5.24
CQL | 23.35, 2.27, 10.28 | 9.89, 3.85, 2.57 | 5.28, 1.03, 5.12
CRR | 28.96, 2.23, 13.00 | 10.13, 2.95, 3.44 | 8.57, 0.87, 9.83
SQN | 26.60, 2.31, 11.51 | 9.45, 2.78, 3.39 | 6.82, 0.77, 8.92

Table 6: Performance of Coverage, Diversity & Novelty on the KuaiRec dataset. The best results are highlighted in bold, and the second-best results are underlined.

Method | Coverage | Diversity | Novelty
Off-policy
DQN | 0.0505 | 0.8981 | 2.6370
C51 | 0.1027 | 0.8797 | 2.5135
DDPG(C) | 0.0477 | 0.8209 | 2.3907
TD3(C) | 0.0438 | 0.8286 | 2.2612
On-policy
PG | 0.0015 | 0.8273 | 2.9794
A2C | 0.0020 | 0.8291 | 2.3907
PPO | 0.0015 | 0.8276 | 2.5408
PG(C) | 0.0071 | 0.8386 | 3.0606
A2C(C) | 0.0098 | 0.8419 | 3.4543
PPO(C) | 0.0141 | 0.8677 | 3.5830
DORL | 0.0021 | 0.8307 | 3.1160
Intrinsic | 0.0274 | 0.8950 | 2.7575

Table 5 shows the performance of the batch RL algorithms trained from offline logs. As shown, CRR outperforms the other models across all three datasets, while BCQ yields the lowest performance. Regarding the issue of preference overestimation (detailed in Section 7.5), both CQL and CRR outperform BCQ due to their more conservative strategies. Furthermore, it is worth noting that, due to the limited availability of offline logs, batch RL methods achieve lower rewards than the model-free RL baselines that are trained online with a user model.
7.4 Coverage, Diversity & Novelty
We evaluate the effectiveness of these models in terms of coverage, diversity, and novelty, factors often overlooked in prior research. Table 6 reports the performance of on-policy and off-policy methods with respect to these three metrics. For coverage, which measures the proportion of the state-action space that the policy explores, off-policy methods exhibit higher coverage than on-policy methods. This observation indicates that on-policy methods are generally more conservative in exploration. In terms of diversity, which quantifies the variety of the actions taken by the policy, all methods perform similarly; among them, DQN stands out with a diversity score of 0.8981, suggesting it is capable of producing a wide range of actions. It is also worth noting that Intrinsic promotes better coverage and diversity than its on-policy counterparts due to its reward structure. Novelty measures the tendency of the policy to recommend less popular or less-known items. As we can see, on-policy methods perform better than off-policy methods, which indicates that on-policy methods pay more attention to users' niche interests.
7.5 Preference Overestimation Issue
In this section, we discuss our observations on the preference overestimation issue that occurs in RL-based RSs. Much like the challenge of value overestimation in offline RL, this issue can result in an exaggerated estimation of user preferences for items that rarely appear in the training logs. To explore this problem, we evaluated the performance of the A2C algorithm using various user models, each trained with a different number of negative samples.
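As an illustration of what "trained with a different number of negative samples" means here, the following is a small PyTorch-style sketch in which each logged positive interaction is paired with k randomly sampled items labelled with zero reward before fitting a reward (user) model. The model signature and field names are placeholders introduced for illustration; they are not EasyRL4Rec's DeepFM implementation.

import random
import torch
import torch.nn.functional as F

def make_training_pairs(logs, n_items, k_negatives):
    """logs: list of (user_id, item_id, reward) tuples from the offline data."""
    samples = []
    for user, item, reward in logs:
        samples.append((user, item, float(reward)))       # observed interaction
        for _ in range(k_negatives):                      # k pseudo-negatives per positive
            neg = random.randrange(n_items)
            samples.append((user, neg, 0.0))              # assumed zero reward for unseen items
    return samples

def train_user_model(model, logs, n_items, k_negatives, epochs=10, lr=1e-3):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    data = make_training_pairs(logs, n_items, k_negatives)
    for _ in range(epochs):
        random.shuffle(data)
        for user, item, reward in data:
            pred = model(torch.tensor([user]), torch.tensor([item]))  # assumed model signature
            loss = F.mse_loss(pred.squeeze(), torch.tensor(reward))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model

With k_negatives = 0 the model fits only observed (mostly liked) interactions, which is exactly the setting whose estimated rewards turn out to be inflated in Figure 5.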
Figure 5: Demonstration of the preference overestimation issue on MovieLens-1M, with red lines representing the negative MSE of the user model, while orange and blue bars represent the estimated reward and the true reward, respectively.
From Figure 5 we can observe that user models trained with fewer negative samples tend to predict more accurately, as indicated by the higher negative mean squared error (MSE). However, choosing a user model with seemingly superior predictive performance (for instance, one trained with 0 negative samples) leads to a paradox: although the model predicts higher rewards, as shown by the orange bars, the actual average rewards received, represented by the blue bars, are disappointingly low. This stark contrast brings to light the issue of preference overestimation. Although this issue can be alleviated by some conservative algorithms, our findings suggest that increasing the number of negative samples could also be a viable strategy; this approach remains to be explored in future research.
7.6 Impact of StateTrackers
The StateTracker is used to generate representations of the current state as input to the Policy module. The details of the different StateTrackers can be found in Section 4.4. We conduct experiments to investigate the impact of the choice of StateTracker, covering today's popular methods in sequential modeling. We keep the same experimental setting as in Section 7 and choose A2C [26] as the algorithm. From Table 7 we can observe that there is no significant difference among the StateTrackers, with GRU achieving the best Rcumu. This indicates that the choice of StateTracker has a minimal impact on model performance.

Table 7: Performance comparison between StateTrackers on KuaiRec. The best results are highlighted in bold, and the second-best results are underlined.

StateTracker | Rcumu | Ravg | Length
GRU [21] | 19.1353 | 0.6385 | 29.9688
Caser [48] | 18.6679 | 0.6226 | 29.9824
SASRec [25] | 18.5527 | 0.6184 | 30.0000
Average [33] | 18.8922 | 0.6326 | 29.8628
NextItNet [58] | 18.7167 | 0.6239 | 30.0000

7.7 Impact of Construction Methods
As mentioned in Section 5.1, EasyRL4Rec offers three different buffer construction methods: (1) Sequential: logs are split in chronological order; (2) Convolution: logs are augmented through convolution; (3) Counterfactual: logs are randomly shuffled over time. A simplified sketch of these options follows this subsection. We conduct experiments with the CRR [52] policy on the KuaiRec dataset, with results presented in Table 8. We observe no significant difference across the Sequential, Convolution, and Counterfactual constructions. The slight variations in the performance metrics suggest that all three constructions contribute comparably to the model's ability to predict or recommend.

Table 8: Performance comparison between different construction methods on KuaiRec. The best results are highlighted in bold, and the second-best results are underlined.

Construction | Rcumu | Ravg | Length
Sequential | 8.5656 | 0.8713 | 9.8332
Convolution | 8.5639 | 0.8704 | 9.8420
Counterfactual | 8.5740 | 0.8701 | 9.8516
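Below is a minimal Python sketch of how the three buffer construction options could be realised when turning per-user offline logs into training trajectories. The "convolution" branch is interpreted here as sliding-window augmentation, which is our own assumption; the function and argument names are illustrative and not EasyRL4Rec's actual interfaces.

import random

def construct_trajectories(user_logs, mode="sequential", horizon=10):
    """user_logs: dict mapping user_id -> chronologically ordered list of (item, reward)."""
    trajectories = []
    for user, interactions in user_logs.items():
        events = list(interactions)
        if mode == "counterfactual":
            random.shuffle(events)               # deliberately break the temporal order
        if mode == "convolution":
            # Assumed interpretation: augment with overlapping sliding windows.
            for start in range(max(len(events) - horizon + 1, 1)):
                trajectories.append((user, events[start:start + horizon]))
        else:
            # "sequential" and "counterfactual": split into consecutive chunks.
            for start in range(0, len(events), horizon):
                trajectories.append((user, events[start:start + horizon]))
    return trajectories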
8 CONCLUSION & FUTURE WORK
In this work, we introduce EasyRL4Rec, an easy-to-use code library specifically crafted for RL-based RSs, simplifying the development and evaluation of RL models. EasyRL4Rec constructs lightweight RL environments based on five public datasets. It offers a unified training and evaluation pipeline, evaluating models from the perspective of long-term benefits. Moreover, EasyRL4Rec facilitates customizable state modeling and action representation, addressing challenges in applying RL to recommender systems. Through comprehensive experiments, we compare the performance of existing models and share our findings from various perspectives. Currently, EasyRL4Rec supports many classic RL algorithms but only a few recent studies in RL-based RSs. In the future, we plan to extend EasyRL4Rec to include more existing models, more datasets, and more utilities, such as parameter tuning, for easy usage. EasyRL4Rec is expected to facilitate future research in the field of RL-based RSs."
We open-sourced Tianshou at\nhttps://github.com/thu-ml/tianshou/.", + "authors": "Jiayi Weng, Huayu Chen, Dong Yan, Kaichao You, Alexis Duburcq, Minghao Zhang, Yi Su, Hang Su, Jun Zhu", + "published": "2021-07-29", + "updated": "2022-08-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1812.02353v3", + "title": "Top-K Off-Policy Correction for a REINFORCE Recommender System", + "abstract": "Industrial recommender systems deal with extremely large action spaces --\nmany millions of items to recommend. Moreover, they need to serve billions of\nusers, who are unique at any point in time, making a complex user state space.\nLuckily, huge quantities of logged implicit feedback (e.g., user clicks, dwell\ntime) are available for learning. Learning from the logged feedback is however\nsubject to biases caused by only observing feedback on recommendations selected\nby the previous versions of the recommender. In this work, we present a general\nrecipe of addressing such biases in a production top-K recommender system at\nYoutube, built with a policy-gradient-based algorithm, i.e. REINFORCE. The\ncontributions of the paper are: (1) scaling REINFORCE to a production\nrecommender system with an action space on the orders of millions; (2) applying\noff-policy correction to address data biases in learning from logged feedback\ncollected from multiple behavior policies; (3) proposing a novel top-K\noff-policy correction to account for our policy recommending multiple items at\na time; (4) showcasing the value of exploration. We demonstrate the efficacy of\nour approaches through a series of simulations and multiple live experiments on\nYoutube.", + "authors": "Minmin Chen, Alex Beutel, Paul Covington, Sagar Jain, Francois Belletti, Ed Chi", + "published": "2018-12-06", + "updated": "2021-12-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.IR", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1808.00720v2", + "title": "RecoGym: A Reinforcement Learning Environment for the problem of Product Recommendation in Online Advertising", + "abstract": "Recommender Systems are becoming ubiquitous in many settings and take many\nforms, from product recommendation in e-commerce stores, to query suggestions\nin search engines, to friend recommendation in social networks. Current\nresearch directions which are largely based upon supervised learning from\nhistorical data appear to be showing diminishing returns with a lot of\npractitioners report a discrepancy between improvements in offline metrics for\nsupervised learning and the online performance of the newly proposed models.\nOne possible reason is that we are using the wrong paradigm: when looking at\nthe long-term cycle of collecting historical performance data, creating a new\nversion of the recommendation model, A/B testing it and then rolling it out. We\nsee that there a lot of commonalities with the reinforcement learning (RL)\nsetup, where the agent observes the environment and acts upon it in order to\nchange its state towards better states (states with higher rewards). To this\nend we introduce RecoGym, an RL environment for recommendation, which is\ndefined by a model of user traffic patterns on e-commerce and the users\nresponse to recommendations on the publisher websites. 
We believe that this is\nan important step forward for the field of recommendation systems research,\nthat could open up an avenue of collaboration between the recommender systems\nand reinforcement learning communities and lead to better alignment between\noffline and online performance metrics.", + "authors": "David Rohde, Stephen Bonner, Travis Dunlop, Flavian Vasile, Alexandros Karatzoglou", + "published": "2018-08-02", + "updated": "2018-09-14", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1805.10000v1", + "title": "Virtual-Taobao: Virtualizing Real-world Online Retail Environment for Reinforcement Learning", + "abstract": "Applying reinforcement learning in physical-world tasks is extremely\nchallenging. It is commonly infeasible to sample a large number of trials, as\nrequired by current reinforcement learning methods, in a physical environment.\nThis paper reports our project on using reinforcement learning for better\ncommodity search in Taobao, one of the largest online retail platforms and\nmeanwhile a physical environment with a high sampling cost. Instead of training\nreinforcement learning in Taobao directly, we present our approach: first we\nbuild Virtual Taobao, a simulator learned from historical customer behavior\ndata through the proposed GAN-SD (GAN for Simulating Distributions) and MAIL\n(multi-agent adversarial imitation learning), and then we train policies in\nVirtual Taobao with no physical costs in which ANC (Action Norm Constraint)\nstrategy is proposed to reduce over-fitting. In experiments, Virtual Taobao is\ntrained from hundreds of millions of customers' records, and its properties are\ncompared with the real environment. The results disclose that Virtual Taobao\nfaithfully recovers important properties of the real environment. We also show\nthat the policies trained in Virtual Taobao can have significantly superior\nonline performance to the traditional supervised approaches. We hope our work\ncould shed some light on reinforcement learning applications in complex\nphysical environments.", + "authors": "Jing-Cheng Shi, Yang Yu, Qing Da, Shi-Yong Chen, An-Xiang Zeng", + "published": "2018-05-25", + "updated": "2018-05-25", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2302.01724v3", + "title": "Reinforcing User Retention in a Billion Scale Short Video Recommender System", + "abstract": "Recently, short video platforms have achieved rapid user growth by\nrecommending interesting content to users. The objective of the recommendation\nis to optimize user retention, thereby driving the growth of DAU (Daily Active\nUsers). Retention is a long-term feedback after multiple interactions of users\nand the system, and it is hard to decompose retention reward to each item or a\nlist of items. Thus traditional point-wise and list-wise models are not able to\noptimize retention. In this paper, we choose reinforcement learning methods to\noptimize the retention as they are designed to maximize the long-term\nperformance. We formulate the problem as an infinite-horizon request-based\nMarkov Decision Process, and our objective is to minimize the accumulated time\ninterval of multiple sessions, which is equal to improving the app open\nfrequency and user retention. 
However, current reinforcement learning\nalgorithms can not be directly applied in this setting due to uncertainty,\nbias, and long delay time incurred by the properties of user retention. We\npropose a novel method, dubbed RLUR, to address the aforementioned challenges.\nBoth offline and live experiments show that RLUR can significantly improve user\nretention. RLUR has been fully launched in Kuaishou app for a long time, and\nachieves consistent performance improvement on user retention and DAU.", + "authors": "Qingpeng Cai, Shuchang Liu, Xueliang Wang, Tianyou Zuo, Wentao Xie, Bin Yang, Dong Zheng, Peng Jiang, Kun Gai", + "published": "2023-02-03", + "updated": "2023-02-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.IR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2204.01266v2", + "title": "CIRS: Bursting Filter Bubbles by Counterfactual Interactive Recommender System", + "abstract": "While personalization increases the utility of recommender systems, it also\nbrings the issue of filter bubbles. E.g., if the system keeps exposing and\nrecommending the items that the user is interested in, it may also make the\nuser feel bored and less satisfied. Existing work studies filter bubbles in\nstatic recommendation, where the effect of overexposure is hard to capture. In\ncontrast, we believe it is more meaningful to study the issue in interactive\nrecommendation and optimize long-term user satisfaction. Nevertheless, it is\nunrealistic to train the model online due to the high cost. As such, we have to\nleverage offline training data and disentangle the causal effect on user\nsatisfaction.\n To achieve this goal, we propose a counterfactual interactive recommender\nsystem (CIRS) that augments offline reinforcement learning (offline RL) with\ncausal inference. The basic idea is to first learn a causal user model on\nhistorical data to capture the overexposure effect of items on user\nsatisfaction. It then uses the learned causal user model to help the planning\nof the RL policy. To conduct evaluation offline, we innovatively create an\nauthentic RL environment (KuaiEnv) based on a real-world fully observed user\nrating dataset. The experiments show the effectiveness of CIRS in bursting\nfilter bubbles and achieving long-term success in interactive recommendation.\nThe implementation of CIRS is available via\nhttps://github.com/chongminggao/CIRS-codes.", + "authors": "Chongming Gao, Shiqi Wang, Shijun Li, Jiawei Chen, Xiangnan He, Wenqiang Lei, Biao Li, Yuan Zhang, Peng Jiang", + "published": "2022-04-04", + "updated": "2023-04-22", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2309.12645v2", + "title": "KuaiSim: A Comprehensive Simulator for Recommender Systems", + "abstract": "Reinforcement Learning (RL)-based recommender systems (RSs) have garnered\nconsiderable attention due to their ability to learn optimal recommendation\npolicies and maximize long-term user rewards. However, deploying RL models\ndirectly in online environments and generating authentic data through A/B tests\ncan pose challenges and require substantial resources. Simulators offer an\nalternative approach by providing training and evaluation environments for RS\nmodels, reducing reliance on real-world data. 
Existing simulators have shown\npromising results but also have limitations such as simplified user feedback,\nlacking consistency with real-world data, the challenge of simulator\nevaluation, and difficulties in migration and expansion across RSs. To address\nthese challenges, we propose KuaiSim, a comprehensive user environment that\nprovides user feedback with multi-behavior and cross-session responses. The\nresulting simulator can support three levels of recommendation problems: the\nrequest level list-wise recommendation task, the whole-session level sequential\nrecommendation task, and the cross-session level retention optimization task.\nFor each task, KuaiSim also provides evaluation protocols and baseline\nrecommendation algorithms that further serve as benchmarks for future research.\nWe also restructure existing competitive simulators on the KuaiRand Dataset and\ncompare them against KuaiSim to future assess their performance and behavioral\ndifferences. Furthermore, to showcase KuaiSim's flexibility in accommodating\ndifferent datasets, we demonstrate its versatility and robustness when\ndeploying it on the ML-1m dataset.", + "authors": "Kesen Zhao, Shuchang Liu, Qingpeng Cai, Xiangyu Zhao, Ziru Liu, Dong Zheng, Peng Jiang, Kun Gai", + "published": "2023-09-22", + "updated": "2023-10-19", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2212.02779v2", + "title": "PrefRec: Recommender Systems with Human Preferences for Reinforcing Long-term User Engagement", + "abstract": "Current advances in recommender systems have been remarkably successful in\noptimizing immediate engagement. However, long-term user engagement, a more\ndesirable performance metric, remains difficult to improve. Meanwhile, recent\nreinforcement learning (RL) algorithms have shown their effectiveness in a\nvariety of long-term goal optimization tasks. For this reason, RL is widely\nconsidered as a promising framework for optimizing long-term user engagement in\nrecommendation. Though promising, the application of RL heavily relies on\nwell-designed rewards, but designing rewards related to long-term user\nengagement is quite difficult. To mitigate the problem, we propose a novel\nparadigm, recommender systems with human preferences (or Preference-based\nRecommender systems), which allows RL recommender systems to learn from\npreferences about users historical behaviors rather than explicitly defined\nrewards. Such preferences are easily accessible through techniques such as\ncrowdsourcing, as they do not require any expert knowledge. With PrefRec, we\ncan fully exploit the advantages of RL in optimizing long-term goals, while\navoiding complex reward engineering. PrefRec uses the preferences to\nautomatically train a reward function in an end-to-end manner. The reward\nfunction is then used to generate learning signals to train the recommendation\npolicy. Furthermore, we design an effective optimization method for PrefRec,\nwhich uses an additional value function, expectile regression and reward model\npre-training to improve the performance. We conduct experiments on a variety of\nlong-term user engagement optimization tasks. 
The results show that PrefRec\nsignificantly outperforms previous state-of-the-art methods in all the tasks.", + "authors": "Wanqi Xue, Qingpeng Cai, Zhenghai Xue, Shuo Sun, Shuchang Liu, Dong Zheng, Peng Jiang, Kun Gai, Bo An", + "published": "2022-12-06", + "updated": "2023-06-02", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1909.04847v2", + "title": "RecSim: A Configurable Simulation Platform for Recommender Systems", + "abstract": "We propose RecSim, a configurable platform for authoring simulation\nenvironments for recommender systems (RSs) that naturally supports sequential\ninteraction with users. RecSim allows the creation of new environments that\nreflect particular aspects of user behavior and item structure at a level of\nabstraction well-suited to pushing the limits of current reinforcement learning\n(RL) and RS techniques in sequential interactive recommendation problems.\nEnvironments can be easily configured that vary assumptions about: user\npreferences and item familiarity; user latent state and its dynamics; and\nchoice models and other user response behavior. We outline how RecSim offers\nvalue to RL and RS researchers and practitioners, and how it can serve as a\nvehicle for academic-industrial collaboration.", + "authors": "Eugene Ie, Chih-wei Hsu, Martin Mladenov, Vihan Jain, Sanmit Narvekar, Jing Wang, Rui Wu, Craig Boutilier", + "published": "2019-09-11", + "updated": "2019-09-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.HC", + "cs.IR", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2302.03431v2", + "title": "Exploration and Regularization of the Latent Action Space in Recommendation", + "abstract": "In recommender systems, reinforcement learning solutions have effectively\nboosted recommendation performance because of their ability to capture\nlong-term user-system interaction. However, the action space of the\nrecommendation policy is a list of items, which could be extremely large with a\ndynamic candidate item pool. To overcome this challenge, we propose a\nhyper-actor and critic learning framework where the policy decomposes the item\nlist generation process into a hyper-action inference step and an effect-action\nselection step. The first step maps the given state space into a vectorized\nhyper-action space, and the second step selects the item list based on the\nhyper-action. In order to regulate the discrepancy between the two action\nspaces, we design an alignment module along with a kernel mapping function for\nitems to ensure inference accuracy and include a supervision module to\nstabilize the learning process. We build simulated environments on public\ndatasets and empirically show that our framework is superior in recommendation\ncompared to standard RL baselines.", + "authors": "Shuchang Liu, Qingpeng Cai, Bowen Sun, Yuhao Wang, Ji Jiang, Dong Zheng, Kun Gai, Peng Jiang, Xiangyu Zhao, Yongfeng Zhang", + "published": "2023-02-07", + "updated": "2023-02-08", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1712.09381v4", + "title": "RLlib: Abstractions for Distributed Reinforcement Learning", + "abstract": "Reinforcement learning (RL) algorithms involve the deep nesting of highly\nirregular computation patterns, each of which typically exhibits opportunities\nfor distributed computation. 
We argue for distributing RL components in a\ncomposable way by adapting algorithms for top-down hierarchical control,\nthereby encapsulating parallelism and resource requirements within\nshort-running compute tasks. We demonstrate the benefits of this principle\nthrough RLlib: a library that provides scalable software primitives for RL.\nThese primitives enable a broad range of algorithms to be implemented with high\nperformance, scalability, and substantial code reuse. RLlib is available at\nhttps://rllib.io/.", + "authors": "Eric Liang, Richard Liaw, Philipp Moritz, Robert Nishihara, Roy Fox, Ken Goldberg, Joseph E. Gonzalez, Michael I. Jordan, Ion Stoica", + "published": "2017-12-26", + "updated": "2018-06-29", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.DC", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.11081v1", + "title": "Contrastive State Augmentations for Reinforcement Learning-Based Recommender Systems", + "abstract": "Learning reinforcement learning (RL)-based recommenders from historical\nuser-item interaction sequences is vital to generate high-reward\nrecommendations and improve long-term cumulative benefits. However, existing RL\nrecommendation methods encounter difficulties (i) to estimate the value\nfunctions for states which are not contained in the offline training data, and\n(ii) to learn effective state representations from user implicit feedback due\nto the lack of contrastive signals. In this work, we propose contrastive state\naugmentations (CSA) for the training of RL-based recommender systems. To tackle\nthe first issue, we propose four state augmentation strategies to enlarge the\nstate space of the offline data. The proposed method improves the\ngeneralization capability of the recommender by making the RL agent visit the\nlocal state regions and ensuring the learned value functions are similar\nbetween the original and augmented states. For the second issue, we propose\nintroducing contrastive signals between augmented states and the state randomly\nsampled from other sessions to improve the state representation learning\nfurther. To verify the effectiveness of the proposed CSA, we conduct extensive\nexperiments on two publicly accessible datasets and one dataset collected from\na real-life e-commerce platform. We also conduct experiments on a simulated\nenvironment as the online evaluation setting. Experimental results demonstrate\nthat CSA can effectively improve recommendation performance.", + "authors": "Zhaochun Ren, Na Huang, Yidan Wang, Pengjie Ren, Jun Ma, Jiahuan Lei, Xinlei Shi, Hengliang Luo, Joemon M Jose, Xin Xin", + "published": "2023-05-18", + "updated": "2023-05-18", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2008.07146v5", + "title": "Open Bandit Dataset and Pipeline: Towards Realistic and Reproducible Off-Policy Evaluation", + "abstract": "Off-policy evaluation (OPE) aims to estimate the performance of hypothetical\npolicies using data generated by a different policy. Because of its huge\npotential impact in practice, there has been growing research interest in this\nfield. There is, however, no real-world public dataset that enables the\nevaluation of OPE, making its experimental studies unrealistic and\nirreproducible. With the goal of enabling realistic and reproducible OPE\nresearch, we present Open Bandit Dataset, a public logged bandit dataset\ncollected on a large-scale fashion e-commerce platform, ZOZOTOWN. 
Our dataset\nis unique in that it contains a set of multiple logged bandit datasets\ncollected by running different policies on the same platform. This enables\nexperimental comparisons of different OPE estimators for the first time. We\nalso develop Python software called Open Bandit Pipeline to streamline and\nstandardize the implementation of batch bandit algorithms and OPE. Our open\ndata and software will contribute to fair and transparent OPE research and help\nthe community identify fruitful research directions. We provide extensive\nbenchmark experiments of existing OPE estimators using our dataset and\nsoftware. The results open up essential challenges and new avenues for future\nOPE research.", + "authors": "Yuta Saito, Shunsuke Aihara, Megumi Matsutani, Yusuke Narita", + "published": "2020-08-17", + "updated": "2021-10-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2110.11073v5", + "title": "RL4RS: A Real-World Dataset for Reinforcement Learning based Recommender System", + "abstract": "Reinforcement learning based recommender systems (RL-based RS) aim at\nlearning a good policy from a batch of collected data, by casting\nrecommendations to multi-step decision-making tasks. However, current RL-based\nRS research commonly has a large reality gap. In this paper, we introduce the\nfirst open-source real-world dataset, RL4RS, hoping to replace the artificial\ndatasets and semi-simulated RS datasets previous studies used due to the\nresource limitation of the RL-based RS domain. Unlike academic RL research,\nRL-based RS suffers from the difficulties of being well-validated before\ndeployment. We attempt to propose a new systematic evaluation framework,\nincluding evaluation of environment simulation, evaluation on environments,\ncounterfactual policy evaluation, and evaluation on environments built from\ntest set. In summary, the RL4RS (Reinforcement Learning for Recommender\nSystems), a new resource with special concerns on the reality gaps, contains\ntwo real-world datasets, data understanding tools, tuned simulation\nenvironments, related advanced RL baselines, batch RL baselines, and\ncounterfactual policy evaluation algorithms. The RL4RS suite can be found at\nhttps://github.com/fuxiAIlab/RL4RS. In addition to the RL-based recommender\nsystems, we expect the resource to contribute to research in applied\nreinforcement learning.", + "authors": "Kai Wang, Zhene Zou, Minghao Zhao, Qilin Deng, Yue Shang, Yile Liang, Runze Wu, Xudong Shen, Tangjie Lyu, Changjie Fan", + "published": "2021-10-18", + "updated": "2023-04-17", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2307.04571v1", + "title": "Alleviating Matthew Effect of Offline Reinforcement Learning in Interactive Recommendation", + "abstract": "Offline reinforcement learning (RL), a technology that offline learns a\npolicy from logged data without the need to interact with online environments,\nhas become a favorable choice in decision-making processes like interactive\nrecommendation. Offline RL faces the value overestimation problem. To address\nit, existing methods employ conservatism, e.g., by constraining the learned\npolicy to be close to behavior policies or punishing the rarely visited\nstate-action pairs. 
However, when applying such offline RL to recommendation,\nit will cause a severe Matthew effect, i.e., the rich get richer and the poor\nget poorer, by promoting popular items or categories while suppressing the less\npopular ones. It is a notorious issue that needs to be addressed in practical\nrecommender systems.\n In this paper, we aim to alleviate the Matthew effect in offline RL-based\nrecommendation. Through theoretical analyses, we find that the conservatism of\nexisting methods fails in pursuing users' long-term satisfaction. It inspires\nus to add a penalty term to relax the pessimism on states with high entropy of\nthe logging policy and indirectly penalizes actions leading to less diverse\nstates. This leads to the main technical contribution of the work: Debiased\nmodel-based Offline RL (DORL) method. Experiments show that DORL not only\ncaptures user interests well but also alleviates the Matthew effect. The\nimplementation is available via https://github.com/chongminggao/DORL-codes.", + "authors": "Chongming Gao, Kexin Huang, Jiawei Chen, Yuan Zhang, Biao Li, Peng Jiang, Shiqi Wang, Zhong Zhang, Xiangnan He", + "published": "2023-07-10", + "updated": "2023-07-10", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1802.06501v3", + "title": "Recommendations with Negative Feedback via Pairwise Deep Reinforcement Learning", + "abstract": "Recommender systems play a crucial role in mitigating the problem of\ninformation overload by suggesting users' personalized items or services. The\nvast majority of traditional recommender systems consider the recommendation\nprocedure as a static process and make recommendations following a fixed\nstrategy. In this paper, we propose a novel recommender system with the\ncapability of continuously improving its strategies during the interactions\nwith users. We model the sequential interactions between users and a\nrecommender system as a Markov Decision Process (MDP) and leverage\nReinforcement Learning (RL) to automatically learn the optimal strategies via\nrecommending trial-and-error items and receiving reinforcements of these items\nfrom users' feedback. Users' feedback can be positive and negative and both\ntypes of feedback have great potentials to boost recommendations. However, the\nnumber of negative feedback is much larger than that of positive one; thus\nincorporating them simultaneously is challenging since positive feedback could\nbe buried by negative one. In this paper, we develop a novel approach to\nincorporate them into the proposed deep recommender system (DEERS) framework.\nThe experimental results based on real-world e-commerce data demonstrate the\neffectiveness of the proposed framework. Further experiments have been\nconducted to understand the importance of both positive and negative feedback\nin recommendations.", + "authors": "Xiangyu Zhao, Liang Zhang, Zhuoye Ding, Long Xia, Jiliang Tang, Dawei Yin", + "published": "2018-02-19", + "updated": "2018-08-10", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1301.0600v2", + "title": "An MDP-based Recommender System", + "abstract": "Typical Recommender systems adopt a static view of the recommendation process\nand treat it as a prediction problem. 
We argue that it is more appropriate to\nview the problem of generating recommendations as a sequential decision problem\nand, consequently, that Markov decision processes (MDP) provide a more\nappropriate model for Recommender systems. MDPs introduce two benefits: they\ntake into account the long-term effects of each recommendation, and they take\ninto account the expected value of each recommendation. To succeed in practice,\nan MDP-based Recommender system must employ a strong initial model; and the\nbulk of this paper is concerned with the generation of such a model. In\nparticular, we suggest the use of an n-gram predictive model for generating the\ninitial MDP. Our n-gram model induces a Markov-chain model of user behavior\nwhose predictive accuracy is greater than that of existing predictive models.\nWe describe our predictive model in detail and evaluate its performance on real\ndata. In addition, we show how the model can be used in an MDP-based\nRecommender system.", + "authors": "Guy Shani, Ronen I. Brafman, David Heckerman", + "published": "2012-12-12", + "updated": "2015-05-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.IR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1312.5602v1", + "title": "Playing Atari with Deep Reinforcement Learning", + "abstract": "We present the first deep learning model to successfully learn control\npolicies directly from high-dimensional sensory input using reinforcement\nlearning. The model is a convolutional neural network, trained with a variant\nof Q-learning, whose input is raw pixels and whose output is a value function\nestimating future rewards. We apply our method to seven Atari 2600 games from\nthe Arcade Learning Environment, with no adjustment of the architecture or\nlearning algorithm. We find that it outperforms all previous approaches on six\nof the games and surpasses a human expert on three of them.", + "authors": "Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, Martin Riedmiller", + "published": "2013-12-19", + "updated": "2013-12-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2212.08232v1", + "title": "Offline Robot Reinforcement Learning with Uncertainty-Guided Human Expert Sampling", + "abstract": "Recent advances in batch (offline) reinforcement learning have shown\npromising results in learning from available offline data and proved offline\nreinforcement learning to be an essential toolkit in learning control policies\nin a model-free setting. An offline reinforcement learning algorithm applied to\na dataset collected by a suboptimal non-learning-based algorithm can result in\na policy that outperforms the behavior agent used to collect the data. Such a\nscenario is frequent in robotics, where existing automation is collecting\noperational data. Although offline learning techniques can learn from data\ngenerated by a sub-optimal behavior agent, there is still an opportunity to\nimprove the sample complexity of existing offline reinforcement learning\nalgorithms by strategically introducing human demonstration data into the\ntraining process. To this end, we propose a novel approach that uses\nuncertainty estimation to trigger the injection of human demonstration data and\nguide policy training towards optimal behavior while reducing overall sample\ncomplexity. 
Our experiments show that this approach is more sample efficient\nwhen compared to a naive way of combining expert data with data collected from\na sub-optimal agent. We augmented an existing offline reinforcement learning\nalgorithm Conservative Q-Learning with our approach and performed experiments\non data collected from MuJoCo and OffWorld Gym learning environments.", + "authors": "Ashish Kumar, Ilya Kuzovkin", + "published": "2022-12-16", + "updated": "2022-12-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2007.12666v5", + "title": "Safe Model-Based Reinforcement Learning for Systems with Parametric Uncertainties", + "abstract": "Reinforcement learning has been established over the past decade as an\neffective tool to find optimal control policies for dynamical systems, with\nrecent focus on approaches that guarantee safety during the learning and/or\nexecution phases. In general, safety guarantees are critical in reinforcement\nlearning when the system is safety-critical and/or task restarts are not\npractically feasible. In optimal control theory, safety requirements are often\nexpressed in terms of state and/or control constraints. In recent years,\nreinforcement learning approaches that rely on persistent excitation have been\ncombined with a barrier transformation to learn the optimal control policies\nunder state constraints. To soften the excitation requirements, model-based\nreinforcement learning methods that rely on exact model knowledge have also\nbeen integrated with the barrier transformation framework. The objective of\nthis paper is to develop safe reinforcement learning method for deterministic\nnonlinear systems, with parametric uncertainties in the model, to learn\napproximate constrained optimal policies without relying on stringent\nexcitation conditions. To that end, a model-based reinforcement learning\ntechnique that utilizes a novel filtered concurrent learning method, along with\na barrier transformation, is developed in this paper to realize simultaneous\nlearning of unknown model parameters and approximate optimal state-constrained\ncontrol policies for safety-critical systems.", + "authors": "S M Nahid Mahmud, Scott A Nivison, Zachary I. Bell, Rushikesh Kamalapurkar", + "published": "2020-07-24", + "updated": "2021-10-05", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.SY", + "math.OC" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2211.10688v2", + "title": "ReInform: Selecting paths with reinforcement learning for contextualized link prediction", + "abstract": "We propose to use reinforcement learning to inform transformer-based\ncontextualized link prediction models by providing paths that are most useful\nfor predicting the correct answer. This is in contrast to previous approaches,\nthat either used reinforcement learning (RL) to directly search for the answer,\nor based their prediction on limited or randomly selected context. Our\nexperiments on WN18RR and FB15k-237 show that contextualized link prediction\nmodels consistently outperform RL-based answer search, and that additional\nimprovements (of up to 13.5% MRR) can be gained by combining RL with a link\nprediction model. 
The PyTorch implementation of the RL agent is available at\nhttps://github.com/marina-sp/reinform", + "authors": "Marina Speranskaya, Sameh Methias, Benjamin Roth", + "published": "2022-11-19", + "updated": "2023-01-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.03688v1", + "title": "A Computational Model of Representation Learning in the Brain Cortex, Integrating Unsupervised and Reinforcement Learning", + "abstract": "A common view on the brain learning processes proposes that the three classic\nlearning paradigms -- unsupervised, reinforcement, and supervised -- take place\nin respectively the cortex, the basal-ganglia, and the cerebellum. However,\ndopamine outbursts, usually assumed to encode reward, are not limited to the\nbasal ganglia but also reach prefrontal, motor, and higher sensory cortices. We\npropose that in the cortex the same reward-based trial-and-error processes\nmight support not only the acquisition of motor representations but also of\nsensory representations. In particular, reward signals might guide\ntrial-and-error processes that mix with associative learning processes to\nsupport the acquisition of representations better serving downstream action\nselection. We tested the soundness of this hypothesis with a computational\nmodel that integrates unsupervised learning (Contrastive Divergence) and\nreinforcement learning (REINFORCE). The model was tested with a task requiring\ndifferent responses to different visual images grouped in categories involving\neither colour, shape, or size. Results show that a balanced mix of unsupervised\nand reinforcement learning processes leads to the best performance. Indeed,\nexcessive unsupervised learning tends to under-represent task-relevant features\nwhile excessive reinforcement learning tends to initially learn slowly and then\nto incur in local minima. These results stimulate future empirical studies on\ncategory learning directed to investigate similar effects in the extrastriate\nvisual cortices. Moreover, they prompt further computational investigations\ndirected to study the possible advantages of integrating unsupervised and\nreinforcement learning processes.", + "authors": "Giovanni Granato, Emilio Cartoni, Federico Da Rold, Andrea Mattera, Gianluca Baldassarre", + "published": "2021-06-07", + "updated": "2021-06-07", + "primary_cat": "q-bio.NC", + "cats": [ + "q-bio.NC", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.00766v1", + "title": "Tracking the Race Between Deep Reinforcement Learning and Imitation Learning -- Extended Version", + "abstract": "Learning-based approaches for solving large sequential decision making\nproblems have become popular in recent years. The resulting agents perform\ndifferently and their characteristics depend on those of the underlying\nlearning approach. Here, we consider a benchmark planning problem from the\nreinforcement learning domain, the Racetrack, to investigate the properties of\nagents derived from different deep (reinforcement) learning approaches. We\ncompare the performance of deep supervised learning, in particular imitation\nlearning, to reinforcement learning for the Racetrack model. We find that\nimitation learning yields agents that follow more risky paths. 
In contrast, the\ndecisions of deep reinforcement learning are more foresighted, i.e., avoid\nstates in which fatal decisions are more likely. Our evaluations show that for\nthis sequential decision making problem, deep reinforcement learning performs\nbest in many aspects even though for imitation learning optimal decisions are\nconsidered.", + "authors": "Timo P. Gros, Daniel H\u00f6ller, J\u00f6rg Hoffmann, Verena Wolf", + "published": "2020-08-03", + "updated": "2020-08-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.14766v1", + "title": "Reinforcement Learning from Statistical Feedback: the Journey from AB Testing to ANT Testing", + "abstract": "Reinforcement Learning from Human Feedback (RLHF) has played a crucial role\nin the success of large models such as ChatGPT. RLHF is a reinforcement\nlearning framework which combines human feedback to improve learning\neffectiveness and performance. However, obtaining preferences feedback manually\nis quite expensive in commercial applications. Some statistical commercial\nindicators are usually more valuable and always ignored in RLHF. There exists a\ngap between commercial target and model training. In our research, we will\nattempt to fill this gap with statistical business feedback instead of human\nfeedback, using AB testing which is a well-established statistical method.\nReinforcement Learning from Statistical Feedback (RLSF) based on AB testing is\nproposed. Statistical inference methods are used to obtain preferences for\ntraining the reward network, which fine-tunes the pre-trained model in\nreinforcement learning framework, achieving greater business value.\nFurthermore, we extend AB testing with double selections at a single time-point\nto ANT testing with multiple selections at different feedback time points.\nMoreover, we design numerical experiences to validate the effectiveness of our\nalgorithm framework.", + "authors": "Feiyang Han, Yimin Wei, Zhaofeng Liu, Yanxing Qi", + "published": "2023-11-24", + "updated": "2023-11-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "math.ST", + "stat.ME", + "stat.TH" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2301.03933v1", + "title": "Hint assisted reinforcement learning: an application in radio astronomy", + "abstract": "Model based reinforcement learning has proven to be more sample efficient\nthan model free methods. On the other hand, the construction of a dynamics\nmodel in model based reinforcement learning has increased complexity. Data\nprocessing tasks in radio astronomy are such situations where the original\nproblem which is being solved by reinforcement learning itself is the creation\nof a model. Fortunately, many methods based on heuristics or signal processing\ndo exist to perform the same tasks and we can leverage them to propose the best\naction to take, or in other words, to provide a `hint'. We propose to use\n`hints' generated by the environment as an aid to the reinforcement learning\nprocess mitigating the complexity of model construction. We modify the soft\nactor critic algorithm to use hints and use the alternating direction method of\nmultipliers algorithm with inequality constraints to train the agent. 
Results\nin several environments show that we get the increased sample efficiency by\nusing hints as compared to model free methods.", + "authors": "Sarod Yatawatta", + "published": "2023-01-10", + "updated": "2023-01-10", + "primary_cat": "astro-ph.IM", + "cats": [ + "astro-ph.IM", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.03967v1", + "title": "A Deep Reinforcement Learning Approach for Composing Moving IoT Services", + "abstract": "We develop a novel framework for efficiently and effectively discovering\ncrowdsourced services that move in close proximity to a user over a period of\ntime. We introduce a moving crowdsourced service model which is modelled as a\nmoving region. We propose a deep reinforcement learning-based composition\napproach to select and compose moving IoT services considering quality\nparameters. Additionally, we develop a parallel flock-based service discovery\nalgorithm as a ground-truth to measure the accuracy of the proposed approach.\nThe experiments on two real-world datasets verify the effectiveness and\nefficiency of the deep reinforcement learning-based approach.", + "authors": "Azadeh Ghari Neiat, Athman Bouguettaya, Mohammed Bahutair", + "published": "2021-11-06", + "updated": "2021-11-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2011.10714v1", + "title": "Double Meta-Learning for Data Efficient Policy Optimization in Non-Stationary Environments", + "abstract": "We are interested in learning models of non-stationary environments, which\ncan be framed as a multi-task learning problem. Model-free reinforcement\nlearning algorithms can achieve good asymptotic performance in multi-task\nlearning at a cost of extensive sampling, due to their approach, which requires\nlearning from scratch. While model-based approaches are among the most data\nefficient learning algorithms, they still struggle with complex tasks and model\nuncertainties. Meta-reinforcement learning addresses the efficiency and\ngeneralization challenges on multi task learning by quickly leveraging the\nmeta-prior policy for a new task. In this paper, we propose a\nmeta-reinforcement learning approach to learn the dynamic model of a\nnon-stationary environment to be used for meta-policy optimization later. Due\nto the sample efficiency of model-based learning methods, we are able to\nsimultaneously train both the meta-model of the non-stationary environment and\nthe meta-policy until dynamic model convergence. Then, the meta-learned dynamic\nmodel of the environment will generate simulated data for meta-policy\noptimization. 
Our experiment demonstrates that our proposed method can\nmeta-learn the policy in a non-stationary environment with the data efficiency\nof model-based learning approaches while achieving the high asymptotic\nperformance of model-free meta-reinforcement learning.", + "authors": "Elahe Aghapour, Nora Ayanian", + "published": "2020-11-21", + "updated": "2020-11-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.01794v1", + "title": "Imitation Game: A Model-based and Imitation Learning Deep Reinforcement Learning Hybrid", + "abstract": "Autonomous and learning systems based on Deep Reinforcement Learning have\nfirmly established themselves as a foundation for approaches to creating\nresilient and efficient Cyber-Physical Energy Systems. However, most current\napproaches suffer from two distinct problems: Modern model-free algorithms such\nas Soft Actor Critic need a high number of samples to learn a meaningful\npolicy, as well as a fallback to ward against concept drifts (e. g.,\ncatastrophic forgetting). In this paper, we present the work in progress\ntowards a hybrid agent architecture that combines model-based Deep\nReinforcement Learning with imitation learning to overcome both problems.", + "authors": "Eric MSP Veith, Torben Logemann, Aleksandr Berezin, Arlena Well\u00dfow, Stephan Balduin", + "published": "2024-04-02", + "updated": "2024-04-02", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1910.03016v4", + "title": "Is a Good Representation Sufficient for Sample Efficient Reinforcement Learning?", + "abstract": "Modern deep learning methods provide effective means to learn good\nrepresentations. However, is a good representation itself sufficient for sample\nefficient reinforcement learning? This question has largely been studied only\nwith respect to (worst-case) approximation error, in the more classical\napproximate dynamic programming literature. With regards to the statistical\nviewpoint, this question is largely unexplored, and the extant body of\nliterature mainly focuses on conditions which permit sample efficient\nreinforcement learning with little understanding of what are necessary\nconditions for efficient reinforcement learning.\n This work shows that, from the statistical viewpoint, the situation is far\nsubtler than suggested by the more traditional approximation viewpoint, where\nthe requirements on the representation that suffice for sample efficient RL are\neven more stringent. Our main results provide sharp thresholds for\nreinforcement learning methods, showing that there are hard limitations on what\nconstitutes good function approximation (in terms of the dimensionality of the\nrepresentation), where we focus on natural representational conditions relevant\nto value-based, model-based, and policy-based learning. These lower bounds\nhighlight that having a good (value-based, model-based, or policy-based)\nrepresentation in and of itself is insufficient for efficient reinforcement\nlearning, unless the quality of this approximation passes certain hard\nthresholds. 
Furthermore, our lower bounds also imply exponential separations on\nthe sample complexity between 1) value-based learning with perfect\nrepresentation and value-based learning with a good-but-not-perfect\nrepresentation, 2) value-based learning and policy-based learning, 3)\npolicy-based learning and supervised learning and 4) reinforcement learning and\nimitation learning.", + "authors": "Simon S. Du, Sham M. Kakade, Ruosong Wang, Lin F. Yang", + "published": "2019-10-07", + "updated": "2020-02-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "math.OC", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.07525v1", + "title": "Using Collision Momentum in Deep Reinforcement Learning Based Adversarial Pedestrian Modeling", + "abstract": "Recent research in pedestrian simulation often aims to develop realistic\nbehaviors in various situations, but it is challenging for existing algorithms\nto generate behaviors that identify weaknesses in automated vehicles'\nperformance in extreme and unlikely scenarios and edge cases. To address this,\nspecialized pedestrian behavior algorithms are needed. Current research focuses\non realistic trajectories using social force models and reinforcement learning\nbased models. However, we propose a reinforcement learning algorithm that\nspecifically targets collisions and better uncovers unique failure modes of\nautomated vehicle controllers. Our algorithm is efficient and generates more\nsevere collisions, allowing for the identification and correction of weaknesses\nin autonomous driving algorithms in complex and varied scenarios.", + "authors": "Dianwei Chen, Ekim Yurtsever, Keith Redmill, Umit Ozguner", + "published": "2023-06-13", + "updated": "2023-06-13", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2009.14365v1", + "title": "Toolpath design for additive manufacturing using deep reinforcement learning", + "abstract": "Toolpath optimization of metal-based additive manufacturing processes is\ncurrently hampered by the high-dimensionality of its design space. In this\nwork, a reinforcement learning platform is proposed that dynamically learns\ntoolpath strategies to build an arbitrary part. To this end, three prominent\nmodel-free reinforcement learning formulations are investigated to design\nadditive manufacturing toolpaths and demonstrated for two cases of dense and\nsparse reward structures. The results indicate that this learning-based\ntoolpath design approach achieves high scores, especially when a dense reward\nstructure is present.", + "authors": "Mojtaba Mozaffar, Ablodghani Ebrahimi, Jian Cao", + "published": "2020-09-30", + "updated": "2020-09-30", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2003.13839v1", + "title": "Model-Reference Reinforcement Learning Control of Autonomous Surface Vehicles with Uncertainties", + "abstract": "This paper presents a novel model-reference reinforcement learning control\nmethod for uncertain autonomous surface vehicles. The proposed control combines\na conventional control method with deep reinforcement learning. 
With the\nconventional control, we can ensure the learning-based control law provides\nclosed-loop stability for the overall system, and potentially increase the\nsample efficiency of the deep reinforcement learning. With the reinforcement\nlearning, we can directly learn a control law to compensate for modeling\nuncertainties. In the proposed control, a nominal system is employed for the\ndesign of a baseline control law using a conventional control approach. The\nnominal system also defines the desired performance for uncertain autonomous\nvehicles to follow. In comparison with traditional deep reinforcement learning\nmethods, our proposed learning-based control can provide stability guarantees\nand better sample efficiency. We demonstrate the performance of the new\nalgorithm via extensive simulation results.", + "authors": "Qingrui Zhang, Wei Pan, Vasso Reppa", + "published": "2020-03-30", + "updated": "2020-03-30", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.AI", + "cs.LG", + "cs.RO", + "cs.SY", + "math.OC" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1507.06923v1", + "title": "A Reinforcement Learning Approach to Online Learning of Decision Trees", + "abstract": "Online decision tree learning algorithms typically examine all features of a\nnew data point to update model parameters. We propose a novel alternative,\nReinforcement Learning-based Decision Trees (RLDT), that uses Reinforcement\nLearning (RL) to actively examine a minimal number of features of a data point\nto classify it with high accuracy. Furthermore, RLDT optimizes a long term\nreturn, providing a better alternative to the traditional myopic greedy\napproach to growing decision trees. We demonstrate that this approach performs\nas well as batch learning algorithms and other online decision tree learning\nalgorithms, while making significantly fewer queries about the features of the\ndata points. We also show that RLDT can effectively handle concept drift.", + "authors": "Abhinav Garlapati, Aditi Raghunathan, Vaishnavh Nagarajan, Balaraman Ravindran", + "published": "2015-07-24", + "updated": "2015-07-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2012.00743v1", + "title": "Adaptive Neural Architectures for Recommender Systems", + "abstract": "Deep learning has proved an effective means to capture the non-linear\nassociations of user preferences. However, the main drawback of existing deep\nlearning architectures is that they follow a fixed recommendation strategy,\nignoring users' real time-feedback. Recent advances of deep reinforcement\nstrategies showed that recommendation policies can be continuously updated\nwhile users interact with the system. In doing so, we can learn the optimal\npolicy that fits to users' preferences over the recommendation sessions. The\nmain drawback of deep reinforcement strategies is that they are based on predefined\nand fixed neural architectures. To shed light on how to handle this issue, in\nthis study we first present deep reinforcement learning strategies for\nrecommendation and discuss the main limitations due to the fixed neural\narchitectures.
Then, we detail how recent advances on progressive neural\narchitectures are used for consecutive tasks in other research domains.\nFinally, we present the key challenges to fill the gap between deep\nreinforcement learning and adaptive neural architectures. We provide guidelines\nfor searching for the best neural architecture based on each user feedback via\nreinforcement learning, while considering the prediction performance on\nreal-time recommendations and the model complexity.", + "authors": "Dimitrios Rafailidis, Stefanos Antaris", + "published": "2020-11-11", + "updated": "2020-11-11", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2308.14897v1", + "title": "Statistically Efficient Variance Reduction with Double Policy Estimation for Off-Policy Evaluation in Sequence-Modeled Reinforcement Learning", + "abstract": "Offline reinforcement learning aims to utilize datasets of previously\ngathered environment-action interaction records to learn a policy without\naccess to the real environment. Recent work has shown that offline\nreinforcement learning can be formulated as a sequence modeling problem and\nsolved via supervised learning with approaches such as decision transformer.\nWhile these sequence-based methods achieve competitive results over\nreturn-to-go methods, especially on tasks that require longer episodes or with\nscarce rewards, importance sampling is not considered to correct the policy\nbias when dealing with off-policy data, mainly due to the absence of behavior\npolicy and the use of deterministic evaluation policies. To this end, we\npropose DPE: an RL algorithm that blends offline sequence modeling and offline\nreinforcement learning with Double Policy Estimation (DPE) in a unified\nframework with statistically proven properties on variance reduction. We\nvalidate our method in multiple tasks of OpenAI Gym with D4RL benchmarks. Our\nmethod brings a performance improvements on selected methods which outperforms\nSOTA baselines in several tasks, demonstrating the advantages of enabling\ndouble policy estimation for sequence-modeled reinforcement learning.", + "authors": "Hanhan Zhou, Tian Lan, Vaneet Aggarwal", + "published": "2023-08-28", + "updated": "2023-08-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.DC" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1705.03562v1", + "title": "Deep Episodic Value Iteration for Model-based Meta-Reinforcement Learning", + "abstract": "We present a new deep meta reinforcement learner, which we call Deep Episodic\nValue Iteration (DEVI). DEVI uses a deep neural network to learn a similarity\nmetric for a non-parametric model-based reinforcement learning algorithm. Our\nmodel is trained end-to-end via back-propagation. 
Despite being trained using\nthe model-free Q-learning objective, we show that DEVI's model-based internal\nstructure provides `one-shot' transfer to changes in reward and transition\nstructure, even for tasks with very high-dimensional state spaces.", + "authors": "Steven Stenberg Hansen", + "published": "2017-05-09", + "updated": "2017-05-09", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.AI", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1712.04170v2", + "title": "Interpretable Policies for Reinforcement Learning by Genetic Programming", + "abstract": "The search for interpretable reinforcement learning policies is of high\nacademic and industrial interest. Especially for industrial systems, domain\nexperts are more likely to deploy autonomously learned controllers if they are\nunderstandable and convenient to evaluate. Basic algebraic equations are\nsupposed to meet these requirements, as long as they are restricted to an\nadequate complexity. Here we introduce the genetic programming for\nreinforcement learning (GPRL) approach based on model-based batch reinforcement\nlearning and genetic programming, which autonomously learns policy equations\nfrom pre-existing default state-action trajectory samples. GPRL is compared to\na straight-forward method which utilizes genetic programming for symbolic\nregression, yielding policies imitating an existing well-performing, but\nnon-interpretable policy. Experiments on three reinforcement learning\nbenchmarks, i.e., mountain car, cart-pole balancing, and industrial benchmark,\ndemonstrate the superiority of our GPRL approach compared to the symbolic\nregression method. GPRL is capable of producing well-performing interpretable\nreinforcement learning policies from pre-existing default trajectory data.", + "authors": "Daniel Hein, Steffen Udluft, Thomas A. Runkler", + "published": "2017-12-12", + "updated": "2018-04-04", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.NE", + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.02025v1", + "title": "Between Rate-Distortion Theory & Value Equivalence in Model-Based Reinforcement Learning", + "abstract": "The quintessential model-based reinforcement-learning agent iteratively\nrefines its estimates or prior beliefs about the true underlying model of the\nenvironment. Recent empirical successes in model-based reinforcement learning\nwith function approximation, however, eschew the true model in favor of a\nsurrogate that, while ignoring various facets of the environment, still\nfacilitates effective planning over behaviors. Recently formalized as the value\nequivalence principle, this algorithmic technique is perhaps unavoidable as\nreal-world reinforcement learning demands consideration of a simple,\ncomputationally-bounded agent interacting with an overwhelmingly complex\nenvironment. In this work, we entertain an extreme scenario wherein some\ncombination of immense environment complexity and limited agent capacity\nentirely precludes identifying an exactly value-equivalent model. 
In light of\nthis, we embrace a notion of approximate value equivalence and introduce an\nalgorithm for incrementally synthesizing simple and useful approximations of\nthe environment from which an agent might still recover near-optimal behavior.\nCrucially, we recognize the information-theoretic nature of this lossy\nenvironment compression problem and use the appropriate tools of\nrate-distortion theory to make mathematically precise how value equivalence can\nlend tractability to otherwise intractable sequential decision-making problems.", + "authors": "Dilip Arumugam, Benjamin Van Roy", + "published": "2022-06-04", + "updated": "2022-06-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.IT", + "math.IT" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.02900v2", + "title": "Meta Federated Reinforcement Learning for Distributed Resource Allocation", + "abstract": "In cellular networks, resource allocation is usually performed in a\ncentralized way, which brings huge computation complexity to the base station\n(BS) and high transmission overhead. This paper explores a distributed resource\nallocation method that aims to maximize energy efficiency (EE) while ensuring\nthe quality of service (QoS) for users. Specifically, in order to address\nwireless channel conditions, we propose a robust meta federated reinforcement\nlearning (\\textit{MFRL}) framework that allows local users to optimize transmit\npower and assign channels using locally trained neural network models, so as to\noffload computational burden from the cloud server to the local users, reducing\ntransmission overhead associated with local channel state information. The BS\nperforms the meta learning procedure to initialize a general global model,\nenabling rapid adaptation to different environments with improved EE\nperformance. The federated learning technique, based on decentralized\nreinforcement learning, promotes collaboration and mutual benefits among users.\nAnalysis and numerical results demonstrate that the proposed \\textit{MFRL}\nframework accelerates the reinforcement learning process, decreases\ntransmission overhead, and offloads computation, while outperforming the\nconventional decentralized reinforcement learning algorithm in terms of\nconvergence speed and EE performance across various scenarios.", + "authors": "Zelin Ji, Zhijin Qin, Xiaoming Tao", + "published": "2023-07-06", + "updated": "2023-07-09", + "primary_cat": "eess.SP", + "cats": [ + "eess.SP", + "cs.SY", + "eess.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.15175v1", + "title": "Coordinated Reinforcement Learning for Optimizing Mobile Networks", + "abstract": "Mobile networks are composed of many base stations and for each of them many\nparameters must be optimized to provide good services. Automatically and\ndynamically optimizing all these entities is challenging as they are sensitive\nto variations in the environment and can affect each other through\ninterferences. Reinforcement learning (RL) algorithms are good candidates to\nautomatically learn base station configuration strategies from incoming data\nbut they are often hard to scale to many agents. In this work, we demonstrate\nhow to use coordination graphs and reinforcement learning in a complex\napplication involving hundreds of cooperating agents. 
We show how mobile\nnetworks can be modeled using coordination graphs and how network optimization\nproblems can be solved efficiently using multi-agent reinforcement learning.\nThe graph structure occurs naturally from expert knowledge about the network\nand allows to explicitly learn coordinating behaviors between the antennas\nthrough edge value functions represented by neural networks. We show\nempirically that coordinated reinforcement learning outperforms other methods.\nThe use of local RL updates and parameter sharing can handle a large number of\nagents without sacrificing coordination which makes it well suited to optimize\nthe ever denser networks brought by 5G and beyond.", + "authors": "Maxime Bouton, Hasan Farooq, Julien Forgeat, Shruti Bothe, Meral Shirazipour, Per Karlsson", + "published": "2021-09-30", + "updated": "2021-09-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2204.01409v1", + "title": "Safe Controller for Output Feedback Linear Systems using Model-Based Reinforcement Learning", + "abstract": "The objective of this research is to enable safety-critical systems to\nsimultaneously learn and execute optimal control policies in a safe manner to\nachieve complex autonomy. Learning optimal policies via trial and error, i.e.,\ntraditional reinforcement learning, is difficult to implement in\nsafety-critical systems, particularly when task restarts are unavailable. Safe\nmodel-based reinforcement learning techniques based on a barrier transformation\nhave recently been developed to address this problem. However, these methods\nrely on full state feedback, limiting their usability in a real-world\nenvironment. In this work, an output-feedback safe model-based reinforcement\nlearning technique based on a novel barrier-aware dynamic state estimator has\nbeen designed to address this issue. The developed approach facilitates\nsimultaneous learning and execution of safe control policies for\nsafety-critical linear systems. Simulation results indicate that barrier\ntransformation is an effective approach to achieve online reinforcement\nlearning in safety-critical systems using output feedback.", + "authors": "S M Nahid Mahmud, Moad Abudia, Scott A Nivison, Zachary I. Bell, Rushikesh Kamalapurkar", + "published": "2022-04-04", + "updated": "2022-04-04", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1903.08543v6", + "title": "Optimizing thermodynamic trajectories using evolutionary and gradient-based reinforcement learning", + "abstract": "Using a model heat engine, we show that neural network-based reinforcement\nlearning can identify thermodynamic trajectories of maximal efficiency. We\nconsider both gradient and gradient-free reinforcement learning. We use an\nevolutionary learning algorithm to evolve a population of neural networks,\nsubject to a directive to maximize the efficiency of a trajectory composed of a\nset of elementary thermodynamic processes; the resulting networks learn to\ncarry out the maximally-efficient Carnot, Stirling, or Otto cycles. When given\nan additional irreversible process, this evolutionary scheme learns a\npreviously unknown thermodynamic cycle. Gradient-based reinforcement learning\nis able to learn the Stirling cycle, whereas an evolutionary approach achieves\nthe optimal Carnot cycle.
Our results show how the reinforcement learning\nstrategies developed for game playing can be applied to solve physical problems\nconditioned upon path-extensive order parameters.", + "authors": "Chris Beeler, Uladzimir Yahorau, Rory Coles, Kyle Mills, Stephen Whitelam, Isaac Tamblyn", + "published": "2019-03-20", + "updated": "2021-11-22", + "primary_cat": "cs.NE", + "cats": [ + "cs.NE", + "cond-mat.stat-mech", + "cs.LG", + "physics.comp-ph" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.11142v1", + "title": "Two-stage Deep Reinforcement Learning for Inverter-based Volt-VAR Control in Active Distribution Networks", + "abstract": "Model-based Vol/VAR optimization method is widely used to eliminate voltage\nviolations and reduce network losses. However, the parameters of active\ndistribution networks (ADNs) are not onsite identified, so significant errors\nmay be involved in the model and make the model-based method infeasible. To\ncope with this critical issue, we propose a novel two-stage deep reinforcement\nlearning (DRL) method to improve the voltage profile by regulating\ninverter-based energy resources, which consists of offline stage and online\nstage. In the offline stage, a highly efficient adversarial reinforcement\nlearning algorithm is developed to train an offline agent robust to the model\nmismatch. In the sequential online stage, we transfer the offline agent safely\nas the online agent to perform continuous learning and controlling online with\nsignificantly improved safety and efficiency. Numerical simulations on IEEE\ntest cases not only demonstrate that the proposed adversarial reinforcement\nlearning algorithm outperforms the state-of-art algorithm, but also show that\nour proposed two-stage method achieves much better performance than the\nexisting DRL based methods in the online application.", + "authors": "Haotian Liu, Wenchuan Wu", + "published": "2020-05-20", + "updated": "2020-05-20", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.LG", + "cs.SY", + "J.7; C.3" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2011.01734v1", + "title": "Differentiable Physics Models for Real-world Offline Model-based Reinforcement Learning", + "abstract": "A limitation of model-based reinforcement learning (MBRL) is the exploitation\nof errors in the learned models. Black-box models can fit complex dynamics with\nhigh fidelity, but their behavior is undefined outside of the data\ndistribution. Physics-based models are better at extrapolating, due to the\ngeneral validity of their informed structure, but underfit in the real world\ndue to the presence of unmodeled phenomena. In this work, we demonstrate\nexperimentally that for the offline model-based reinforcement learning setting,\nphysics-based models can be beneficial compared to high-capacity function\napproximators if the mechanical structure is known. Physics-based models can\nlearn to perform the ball in a cup (BiC) task on a physical manipulator using\nonly 4 minutes of sampled data using offline MBRL. We find that black-box\nmodels consistently produce unviable policies for BiC as all predicted\ntrajectories diverge to physically impossible state, despite having access to\nmore data than the physics-based model.
In addition, we generalize the approach\nof physics parameter identification from modeling holonomic multi-body systems\nto systems with nonholonomic dynamics using end-to-end automatic\ndifferentiation.\n Videos: https://sites.google.com/view/ball-in-a-cup-in-4-minutes/", + "authors": "Michael Lutter, Johannes Silberbauer, Joe Watson, Jan Peters", + "published": "2020-11-03", + "updated": "2020-11-03", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1802.10592v2", + "title": "Model-Ensemble Trust-Region Policy Optimization", + "abstract": "Model-free reinforcement learning (RL) methods are succeeding in a growing\nnumber of tasks, aided by recent advances in deep learning. However, they tend\nto suffer from high sample complexity, which hinders their use in real-world\ndomains. Alternatively, model-based reinforcement learning promises to reduce\nsample complexity, but tends to require careful tuning and to date have\nsucceeded mainly in restrictive domains where simple models are sufficient for\nlearning. In this paper, we analyze the behavior of vanilla model-based\nreinforcement learning methods when deep neural networks are used to learn both\nthe model and the policy, and show that the learned policy tends to exploit\nregions where insufficient data is available for the model to be learned,\ncausing instability in training. To overcome this issue, we propose to use an\nensemble of models to maintain the model uncertainty and regularize the\nlearning process. We further show that the use of likelihood ratio derivatives\nyields much more stable learning than backpropagation through time. Altogether,\nour approach Model-Ensemble Trust-Region Policy Optimization (ME-TRPO)\nsignificantly reduces the sample complexity compared to model-free deep RL\nmethods on challenging continuous control benchmark tasks.", + "authors": "Thanard Kurutach, Ignasi Clavera, Yan Duan, Aviv Tamar, Pieter Abbeel", + "published": "2018-02-28", + "updated": "2018-10-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2105.00822v2", + "title": "Generative Adversarial Reward Learning for Generalized Behavior Tendency Inference", + "abstract": "Recent advances in reinforcement learning have inspired increasing interest\nin learning user modeling adaptively through dynamic interactions, e.g., in\nreinforcement learning based recommender systems. Reward function is crucial\nfor most of reinforcement learning applications as it can provide the guideline\nabout the optimization. However, current reinforcement-learning-based methods\nrely on manually-defined reward functions, which cannot adapt to dynamic and\nnoisy environments. Besides, they generally use task-specific reward functions\nthat sacrifice generalization ability. We propose a generative inverse\nreinforcement learning for user behavioral preference modelling, to address the\nabove issues. Instead of using predefined reward functions, our model can\nautomatically learn the rewards from user's actions based on discriminative\nactor-critic network and Wasserstein GAN. 
Our model provides a general way of\ncharacterizing and explaining underlying behavioral tendencies, and our\nexperiments show our method outperforms state-of-the-art methods in a variety\nof scenarios, namely traffic signal control, online recommender systems, and\nscanpath prediction.", + "authors": "Xiaocong Chen, Lina Yao, Xianzhi Wang, Aixin Sun, Wenjie Zhang, Quan Z. Sheng", + "published": "2021-05-03", + "updated": "2021-05-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.IR" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2301.10119v2", + "title": "Minimal Value-Equivalent Partial Models for Scalable and Robust Planning in Lifelong Reinforcement Learning", + "abstract": "Learning models of the environment from pure interaction is often considered\nan essential component of building lifelong reinforcement learning agents.\nHowever, the common practice in model-based reinforcement learning is to learn\nmodels that model every aspect of the agent's environment, regardless of\nwhether they are important in coming up with optimal decisions or not. In this\npaper, we argue that such models are not particularly well-suited for\nperforming scalable and robust planning in lifelong reinforcement learning\nscenarios and we propose new kinds of models that only model the relevant\naspects of the environment, which we call \"minimal value-equivalent partial\nmodels\". After providing a formal definition for these models, we provide\ntheoretical results demonstrating the scalability advantages of performing\nplanning with such models and then perform experiments to empirically\nillustrate our theoretical results. Then, we provide some useful heuristics on\nhow to learn these kinds of models with deep learning architectures and\nempirically demonstrate that models learned in such a way can allow for\nperforming planning that is robust to distribution shifts and compounding model\nerrors. Overall, both our theoretical and empirical results suggest that\nminimal value-equivalent partial models can provide significant benefits to\nperforming scalable and robust planning in lifelong reinforcement learning\nscenarios.", + "authors": "Safa Alver, Doina Precup", + "published": "2023-01-24", + "updated": "2023-06-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1906.08312v1", + "title": "Calibrated Model-Based Deep Reinforcement Learning", + "abstract": "Estimates of predictive uncertainty are important for accurate model-based\nplanning and reinforcement learning. However, predictive\nuncertainties---especially ones derived from modern deep learning systems---can\nbe inaccurate and impose a bottleneck on performance. This paper explores which\nuncertainties are needed for model-based reinforcement learning and argues that\ngood uncertainties must be calibrated, i.e. their probabilities should match\nempirical frequencies of predicted events. We describe a simple way to augment\nany model-based reinforcement learning agent with a calibrated model and show\nthat doing so consistently improves planning, sample complexity, and\nexploration. On the \\textsc{HalfCheetah} MuJoCo task, our system achieves\nstate-of-the-art performance using 50\\% fewer samples than the current leading\napproach. 
Our findings suggest that calibration can improve the performance of\nmodel-based reinforcement learning with minimal computational and\nimplementation overhead.", + "authors": "Ali Malik, Volodymyr Kuleshov, Jiaming Song, Danny Nemer, Harlan Seymour, Stefano Ermon", + "published": "2019-06-19", + "updated": "2019-06-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2301.11520v3", + "title": "SNeRL: Semantic-aware Neural Radiance Fields for Reinforcement Learning", + "abstract": "As previous representations for reinforcement learning cannot effectively\nincorporate a human-intuitive understanding of the 3D environment, they usually\nsuffer from sub-optimal performances. In this paper, we present Semantic-aware\nNeural Radiance Fields for Reinforcement Learning (SNeRL), which jointly\noptimizes semantic-aware neural radiance fields (NeRF) with a convolutional\nencoder to learn 3D-aware neural implicit representation from multi-view\nimages. We introduce 3D semantic and distilled feature fields in parallel to\nthe RGB radiance fields in NeRF to learn semantic and object-centric\nrepresentation for reinforcement learning. SNeRL outperforms not only previous\npixel-based representations but also recent 3D-aware representations both in\nmodel-free and model-based reinforcement learning.", + "authors": "Dongseok Shim, Seungjae Lee, H. Jin Kim", + "published": "2023-01-27", + "updated": "2023-05-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2304.00006v1", + "title": "Bi-directional personalization reinforcement learning-based architecture with active learning using a multi-model data service for the travel nursing industry", + "abstract": "The challenges of using inadequate online recruitment systems can be\naddressed with machine learning and software engineering techniques.\nBi-directional personalization reinforcement learning-based architecture with\nactive learning can get recruiters to recommend qualified applicants and also\nenable applicants to receive personalized job recommendations. This paper\nfocuses on how machine learning techniques can enhance the recruitment process\nin the travel nursing industry by helping speed up data acquisition using a\nmulti-model data service and then providing personalized recommendations using\nbi-directional reinforcement learning with active learning. This need was\nespecially evident when trying to respond to the overwhelming needs of\nhealthcare facilities during the COVID-19 pandemic. The need for traveling\nnurses and other healthcare professionals was more evident during the lockdown\nperiod. A data service was architected for job feed processing using an\norchestration of natural language processing (NLP) models that synthesize\njob-related data into a database efficiently and accurately. The multi-model\ndata service provided the data necessary to develop a bi-directional\npersonalization system using reinforcement learning with active learning that\ncould recommend travel nurses and healthcare professionals to recruiters and\nprovide job recommendations to applicants using an internally developed smart\nmatch score as a basis. 
The bi-directional personalization reinforcement\nlearning-based architecture with active learning combines two personalization\nsystems - one that runs forward to recommend qualified candidates for jobs and\nanother that runs backward and recommends jobs for applicants.", + "authors": "Ezana N. Beyenne", + "published": "2023-03-14", + "updated": "2023-03-14", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.LG", + "I.2" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1901.08162v1", + "title": "Causal Reasoning from Meta-reinforcement Learning", + "abstract": "Discovering and exploiting the causal structure in the environment is a\ncrucial challenge for intelligent agents. Here we explore whether causal\nreasoning can emerge via meta-reinforcement learning. We train a recurrent\nnetwork with model-free reinforcement learning to solve a range of problems\nthat each contain causal structure. We find that the trained agent can perform\ncausal reasoning in novel situations in order to obtain rewards. The agent can\nselect informative interventions, draw causal inferences from observational\ndata, and make counterfactual predictions. Although established formal causal\nreasoning algorithms also exist, in this paper we show that such reasoning can\narise from model-free reinforcement learning, and suggest that causal reasoning\nin complex settings may benefit from the more end-to-end learning-based\napproaches presented here. This work also offers new strategies for structured\nexploration in reinforcement learning, by providing agents with the ability to\nperform -- and interpret -- experiments.", + "authors": "Ishita Dasgupta, Jane Wang, Silvia Chiappa, Jovana Mitrovic, Pedro Ortega, David Raposo, Edward Hughes, Peter Battaglia, Matthew Botvinick, Zeb Kurth-Nelson", + "published": "2019-01-23", + "updated": "2019-01-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2006.07178v2", + "title": "Meta-Reinforcement Learning Robust to Distributional Shift via Model Identification and Experience Relabeling", + "abstract": "Reinforcement learning algorithms can acquire policies for complex tasks\nautonomously. However, the number of samples required to learn a diverse set of\nskills can be prohibitively large. While meta-reinforcement learning methods\nhave enabled agents to leverage prior experience to adapt quickly to new tasks,\ntheir performance depends crucially on how close the new task is to the\npreviously experienced tasks. Current approaches are either not able to\nextrapolate well, or can do so at the expense of requiring extremely large\namounts of data for on-policy meta-training. In this work, we present model\nidentification and experience relabeling (MIER), a meta-reinforcement learning\nalgorithm that is both efficient and extrapolates well when faced with\nout-of-distribution tasks at test time. Our method is based on a simple\ninsight: we recognize that dynamics models can be adapted efficiently and\nconsistently with off-policy data, more easily than policies and value\nfunctions. 
These dynamics models can then be used to continue training policies\nand value functions for out-of-distribution tasks without using\nmeta-reinforcement learning at all, by generating synthetic experience for the\nnew task.", + "authors": "Russell Mendonca, Xinyang Geng, Chelsea Finn, Sergey Levine", + "published": "2020-06-12", + "updated": "2020-06-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.12516v2", + "title": "Prioritized Experience-based Reinforcement Learning with Human Guidance for Autonomous Driving", + "abstract": "Reinforcement learning (RL) requires skillful definition and remarkable\ncomputational efforts to solve optimization and control problems, which could\nimpair its prospect. Introducing human guidance into reinforcement learning is\na promising way to improve learning performance. In this paper, a comprehensive\nhuman guidance-based reinforcement learning framework is established. A novel\nprioritized experience replay mechanism that adapts to human guidance in the\nreinforcement learning process is proposed to boost the efficiency and\nperformance of the reinforcement learning algorithm. To relieve the heavy\nworkload on human participants, a behavior model is established based on an\nincremental online learning method to mimic human actions. We design two\nchallenging autonomous driving tasks for evaluating the proposed algorithm.\nExperiments are conducted to access the training and testing performance and\nlearning mechanism of the proposed algorithm. Comparative results against the\nstate-of-the-art methods suggest the advantages of our algorithm in terms of\nlearning efficiency, performance, and robustness.", + "authors": "Jingda Wu, Zhiyu Huang, Wenhui Huang, Chen Lv", + "published": "2021-09-26", + "updated": "2022-11-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1709.09346v2", + "title": "Cold-Start Reinforcement Learning with Softmax Policy Gradient", + "abstract": "Policy-gradient approaches to reinforcement learning have two common and\nundesirable overhead procedures, namely warm-start training and sample variance\nreduction. In this paper, we describe a reinforcement learning method based on\na softmax value function that requires neither of these procedures. Our method\ncombines the advantages of policy-gradient methods with the efficiency and\nsimplicity of maximum-likelihood approaches. We apply this new cold-start\nreinforcement learning method in training sequence generation models for\nstructured output prediction problems. 
Empirical evidence validates this method\non automatic summarization and image captioning tasks.", + "authors": "Nan Ding, Radu Soricut", + "published": "2017-09-27", + "updated": "2017-10-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1206.3281v1", + "title": "Model-Based Bayesian Reinforcement Learning in Large Structured Domains", + "abstract": "Model-based Bayesian reinforcement learning has generated significant\ninterest in the AI community as it provides an elegant solution to the optimal\nexploration-exploitation tradeoff in classical reinforcement learning.\nUnfortunately, the applicability of this type of approach has been limited to\nsmall domains due to the high complexity of reasoning about the joint posterior\nover model parameters. In this paper, we consider the use of factored\nrepresentations combined with online planning techniques, to improve\nscalability of these methods. The main contribution of this paper is a Bayesian\nframework for learning the structure and parameters of a dynamical system,\nwhile also simultaneously planning a (near-)optimal sequence of actions.", + "authors": "Stephane Ross, Joelle Pineau", + "published": "2012-06-13", + "updated": "2012-06-13", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.02104v2", + "title": "Model-Based Episodic Memory Induces Dynamic Hybrid Controls", + "abstract": "Episodic control enables sample efficiency in reinforcement learning by\nrecalling past experiences from an episodic memory. We propose a new\nmodel-based episodic memory of trajectories addressing current limitations of\nepisodic control. Our memory estimates trajectory values, guiding the agent\ntowards good policies. Built upon the memory, we construct a complementary\nlearning model via a dynamic hybrid control unifying model-based, episodic and\nhabitual learning into a single architecture. Experiments demonstrate that our\nmodel allows significantly faster and better learning than other strong\nreinforcement learning agents across a variety of environments including\nstochastic and non-Markovian settings.", + "authors": "Hung Le, Thommen Karimpanal George, Majid Abdolshah, Truyen Tran, Svetha Venkatesh", + "published": "2021-11-03", + "updated": "2021-11-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2010.12142v1", + "title": "Bridging Imagination and Reality for Model-Based Deep Reinforcement Learning", + "abstract": "Sample efficiency has been one of the major challenges for deep reinforcement\nlearning. Recently, model-based reinforcement learning has been proposed to\naddress this challenge by performing planning on imaginary trajectories with a\nlearned world model. However, world model learning may suffer from overfitting\nto training trajectories, and thus model-based value estimation and policy\nsearch will be prone to be sucked in an inferior local policy. In this paper, we\npropose a novel model-based reinforcement learning algorithm, called BrIdging\nReality and Dream (BIRD). It maximizes the mutual information between imaginary\nand real trajectories so that the policy improvement learned from imaginary\ntrajectories can be easily generalized to real trajectories.
We demonstrate\nthat our approach improves sample efficiency of model-based planning, and\nachieves state-of-the-art performance on challenging visual control benchmarks.", + "authors": "Guangxiang Zhu, Minghao Zhang, Honglak Lee, Chongjie Zhang", + "published": "2020-10-23", + "updated": "2020-10-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.07315v1", + "title": "An introduction to reinforcement learning for neuroscience", + "abstract": "Reinforcement learning has a rich history in neuroscience, from early work on\ndopamine as a reward prediction error signal for temporal difference learning\n(Schultz et al., 1997) to recent work suggesting that dopamine could implement\na form of 'distributional reinforcement learning' popularized in deep learning\n(Dabney et al., 2020). Throughout this literature, there has been a tight link\nbetween theoretical advances in reinforcement learning and neuroscientific\nexperiments and findings. As a result, the theories describing our experimental\ndata have become increasingly complex and difficult to navigate. In this\nreview, we cover the basic theory underlying classical work in reinforcement\nlearning and build up to an introductory overview of methods used in modern\ndeep reinforcement learning that have found applications in systems\nneuroscience. We start with an overview of the reinforcement learning problem\nand classical temporal difference algorithms, followed by a discussion of\n'model-free' and 'model-based' reinforcement learning together with methods\nsuch as DYNA and successor representations that fall in between these two\ncategories. Throughout these sections, we highlight the close parallels between\nthe machine learning methods and related work in both experimental and\ntheoretical neuroscience. We then provide an introduction to deep reinforcement\nlearning with examples of how these methods have been used to model different\nlearning phenomena in the systems neuroscience literature, such as\nmeta-reinforcement learning (Wang et al., 2018) and distributional\nreinforcement learning (Dabney et al., 2020). Code that implements the methods\ndiscussed in this work and generates the figures is also provided.", + "authors": "Kristopher T. Jensen", + "published": "2023-11-13", + "updated": "2023-11-13", + "primary_cat": "q-bio.NC", + "cats": [ + "q-bio.NC", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1912.03918v1", + "title": "Transformer Based Reinforcement Learning For Games", + "abstract": "Recent times have witnessed sharp improvements in reinforcement learning\ntasks using deep reinforcement learning techniques like Deep Q Networks, Policy\nGradients, Actor Critic methods which are based on deep learning based models\nand back-propagation of gradients to train such models. An active area of\nresearch in reinforcement learning is about training agents to play complex\nvideo games, which so far has been something accomplished only by human\nintelligence. Some state of the art performances in video game playing using\ndeep reinforcement learning are obtained by processing the sequence of frames\nfrom video games, passing them through a convolutional network to obtain\nfeatures and then using recurrent neural networks to figure out the action\nleading to optimal rewards. 
The recurrent neural network will learn to extract\nthe meaningful signal out of the sequence of such features. In this work, we\npropose a method utilizing a transformer network which have recently replaced\nRNNs in Natural Language Processing (NLP), and perform experiments to compare\nwith existing methods.", + "authors": "Uddeshya Upadhyay, Nikunj Shah, Sucheta Ravikanti, Mayanka Medhe", + "published": "2019-12-09", + "updated": "2019-12-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NE" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1810.03198v1", + "title": "Reinforcement Evolutionary Learning Method for self-learning", + "abstract": "In statistical modelling the biggest threat is concept drift which makes the\nmodel gradually showing deteriorating performance over time. There are state of\nthe art methodologies to detect the impact of concept drift, however general\nstrategy considered to overcome the issue in performance is to rebuild or\nre-calibrate the model periodically as the variable patterns for the model\nchanges significantly due to market change or consumer behavior change etc.\nQuantitative research is the most widely spread application of data science in\nMarketing or financial domain where applicability of state of the art\nreinforcement learning for auto-learning is less explored paradigm.\nReinforcement learning is heavily dependent on having a simulated environment\nwhich is majorly available for gaming or online systems, to learn from the live\nfeedback. However, there are some research happened on the area of online\nadvertisement, pricing etc where due to the nature of the online learning\nenvironment scope of reinforcement learning is explored. Our proposed solution\nis a reinforcement learning based, true self-learning algorithm which can adapt\nto the data change or concept drift and auto learn and self-calibrate for the\nnew patterns of the data solving the problem of concept drift.\n Keywords - Reinforcement learning, Genetic Algorithm, Q-learning,\nClassification modelling, CMA-ES, NES, Multi objective optimization, Concept\ndrift, Population stability index, Incremental learning, F1-measure, Predictive\nModelling, Self-learning, MCTS, AlphaGo, AlphaZero", + "authors": "Kumarjit Pathak, Jitin Kapila", + "published": "2018-10-07", + "updated": "2018-10-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NE", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1506.00685v1", + "title": "Model-based reinforcement learning for infinite-horizon approximate optimal tracking", + "abstract": "This paper provides an approximate online adaptive solution to the\ninfinite-horizon optimal tracking problem for control-affine continuous-time\nnonlinear systems with unknown drift dynamics. Model-based reinforcement\nlearning is used to relax the persistence of excitation condition. Model-based\nreinforcement learning is implemented using a concurrent learning-based system\nidentifier to simulate experience by evaluating the Bellman error over\nunexplored areas of the state space. Tracking of the desired trajectory and\nconvergence of the developed policy to a neighborhood of the optimal policy are\nestablished via Lyapunov-based stability analysis. Simulation results\ndemonstrate the effectiveness of the developed technique.", + "authors": "Rushikesh Kamalapurkar, Lindsey Andrews, Patrick Walters, Warren E. 
Dixon", + "published": "2015-06-01", + "updated": "2015-06-01", + "primary_cat": "cs.SY", + "cats": [ + "cs.SY", + "math.OC" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.02380v2", + "title": "Adaptive Rollout Length for Model-Based RL Using Model-Free Deep RL", + "abstract": "Model-based reinforcement learning promises to learn an optimal policy from\nfewer interactions with the environment compared to model-free reinforcement\nlearning by learning an intermediate model of the environment in order to\npredict future interactions. When predicting a sequence of interactions, the\nrollout length, which limits the prediction horizon, is a critical\nhyperparameter as accuracy of the predictions diminishes in the regions that\nare further away from real experience. As a result, with a longer rollout\nlength, an overall worse policy is learned in the long run. Thus, the\nhyperparameter provides a trade-off between quality and efficiency. In this\nwork, we frame the problem of tuning the rollout length as a meta-level\nsequential decision-making problem that optimizes the final policy learned by\nmodel-based reinforcement learning given a fixed budget of environment\ninteractions by adapting the hyperparameter dynamically based on feedback from\nthe learning process, such as accuracy of the model and the remaining budget of\ninteractions. We use model-free deep reinforcement learning to solve the\nmeta-level decision problem and demonstrate that our approach outperforms\ncommon heuristic baselines on two well-known reinforcement learning\nenvironments.", + "authors": "Abhinav Bhatia, Philip S. Thomas, Shlomo Zilberstein", + "published": "2022-06-06", + "updated": "2022-06-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1910.11914v3", + "title": "On the convergence of projective-simulation-based reinforcement learning in Markov decision processes", + "abstract": "In recent years, the interest in leveraging quantum effects for enhancing\nmachine learning tasks has significantly increased. Many algorithms speeding up\nsupervised and unsupervised learning were established. The first framework in\nwhich ways to exploit quantum resources specifically for the broader context of\nreinforcement learning were found is projective simulation. Projective\nsimulation presents an agent-based reinforcement learning approach designed in\na manner which may support quantum walk-based speed-ups. Although classical\nvariants of projective simulation have been benchmarked against common\nreinforcement learning algorithms, very few formal theoretical analyses have\nbeen provided for its performance in standard learning scenarios. In this\npaper, we provide a detailed formal discussion of the properties of this model.\nSpecifically, we prove that one version of the projective simulation model,\nunderstood as a reinforcement learning approach, converges to optimal behavior\nin a large class of Markov decision processes. This proof shows that a\nphysically-inspired approach to reinforcement learning can guarantee to\nconverge.", + "authors": "Walter L. Boyajian, Jens Clausen, Lea M. Trenkwalder, Vedran Dunjko, Hans J. 
Briegel", + "published": "2019-10-25", + "updated": "2020-11-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "quant-ph", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.09064v2", + "title": "Can Interpretable Reinforcement Learning Manage Prosperity Your Way?", + "abstract": "Personalisation of products and services is fast becoming the driver of\nsuccess in banking and commerce. Machine learning holds the promise of gaining\na deeper understanding of and tailoring to customers' needs and preferences.\nWhereas traditional solutions to financial decision problems frequently rely on\nmodel assumptions, reinforcement learning is able to exploit large amounts of\ndata to improve customer modelling and decision-making in complex financial\nenvironments with fewer assumptions. Model explainability and interpretability\npresent challenges from a regulatory perspective which demands transparency for\nacceptance; they also offer the opportunity for improved insight into and\nunderstanding of customers. Post-hoc approaches are typically used for\nexplaining pretrained reinforcement learning models. Based on our previous\nmodeling of customer spending behaviour, we adapt our recent reinforcement\nlearning algorithm that intrinsically characterizes desirable behaviours and we\ntransition to the problem of asset management. We train inherently\ninterpretable reinforcement learning agents to give investment advice that is\naligned with prototype financial personality traits which are combined to make\na final recommendation. We observe that the trained agents' advice adheres to\ntheir intended characteristics, they learn the value of compound growth, and,\nwithout any explicit reference, the notion of risk as well as improved policy\nconvergence.", + "authors": "Charl Maree, Christian Omlin", + "published": "2022-02-18", + "updated": "2022-06-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.05546v2", + "title": "Multi-Agent Quantum Reinforcement Learning using Evolutionary Optimization", + "abstract": "Multi-Agent Reinforcement Learning is becoming increasingly more important in\ntimes of autonomous driving and other smart industrial applications.\nSimultaneously a promising new approach to Reinforcement Learning arises using\nthe inherent properties of quantum mechanics, reducing the trainable parameters\nof a model significantly. However, gradient-based Multi-Agent Quantum\nReinforcement Learning methods often have to struggle with barren plateaus,\nholding them back from matching the performance of classical approaches. We\nbuild upon an existing approach for gradient free Quantum Reinforcement\nLearning and propose three genetic variations with Variational Quantum Circuits\nfor Multi-Agent Reinforcement Learning using evolutionary optimization. We\nevaluate our genetic variations in the Coin Game environment and also compare\nthem to classical approaches. We showed that our Variational Quantum Circuit\napproaches perform significantly better compared to a neural network with a\nsimilar amount of trainable parameters. 
Compared to the larger neural network,\nour approaches achieve similar results using $97.88\\%$ fewer parameters.", + "authors": "Michael K\u00f6lle, Felix Topp, Thomy Phan, Philipp Altmann, Jonas N\u00fc\u00dflein, Claudia Linnhoff-Popien", + "published": "2023-11-09", + "updated": "2024-01-13", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph", + "cs.AI", + "cs.MA" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.05440v1", + "title": "Delay-Aware Model-Based Reinforcement Learning for Continuous Control", + "abstract": "Action delays degrade the performance of reinforcement learning in many\nreal-world systems. This paper proposes a formal definition of delay-aware\nMarkov Decision Process and proves it can be transformed into standard MDP with\naugmented states using the Markov reward process. We develop a delay-aware\nmodel-based reinforcement learning framework that can incorporate the\nmulti-step delay into the learned system models without learning effort.\nExperiments with the Gym and MuJoCo platforms show that the proposed\ndelay-aware model-based algorithm is more efficient in training and\ntransferable between systems with various durations of delay compared with\noff-policy model-free reinforcement learning methods. Codes available at:\nhttps://github.com/baimingc/dambrl.", + "authors": "Baiming Chen, Mengdi Xu, Liang Li, Ding Zhao", + "published": "2020-05-11", + "updated": "2020-05-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2312.15385v1", + "title": "Discrete-Time Mean-Variance Strategy Based on Reinforcement Learning", + "abstract": "This paper studies a discrete-time mean-variance model based on reinforcement\nlearning. Compared with its continuous-time counterpart in \\cite{zhou2020mv},\nthe discrete-time model makes more general assumptions about the asset's return\ndistribution. Using entropy to measure the cost of exploration, we derive the\noptimal investment strategy, whose density function is also Gaussian type.\nAdditionally, we design the corresponding reinforcement learning algorithm.\nBoth simulation experiments and empirical analysis indicate that our\ndiscrete-time model exhibits better applicability when analyzing real-world\ndata than the continuous-time model.", + "authors": "Xiangyu Cui, Xun Li, Yun Shi, Si Zhao", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "q-fin.MF", + "cats": [ + "q-fin.MF", + "cs.LG", + "q-fin.PM" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2009.09781v1", + "title": "Rethinking Supervised Learning and Reinforcement Learning in Task-Oriented Dialogue Systems", + "abstract": "Dialogue policy learning for task-oriented dialogue systems has enjoyed great\nprogress recently mostly through employing reinforcement learning methods.\nHowever, these approaches have become very sophisticated. It is time to\nre-evaluate it. Are we really making progress developing dialogue agents only\nbased on reinforcement learning? We demonstrate how (1)~traditional supervised\nlearning together with (2)~a simulator-free adversarial learning method can be\nused to achieve performance comparable to state-of-the-art RL-based methods.\nFirst, we introduce a simple dialogue action decoder to predict the appropriate\nactions.
Then, the traditional multi-label classification solution for dialogue\npolicy learning is extended by adding dense layers to improve the dialogue\nagent performance. Finally, we employ the Gumbel-Softmax estimator to\nalternatively train the dialogue agent and the dialogue reward model without\nusing reinforcement learning. Based on our extensive experimentation, we can\nconclude the proposed methods can achieve more stable and higher performance\nwith fewer efforts, such as the domain knowledge required to design a user\nsimulator and the intractable parameter tuning in reinforcement learning. Our\nmain goal is not to beat reinforcement learning with supervised learning, but\nto demonstrate the value of rethinking the role of reinforcement learning and\nsupervised learning in optimizing task-oriented dialogue systems.", + "authors": "Ziming Li, Julia Kiseleva, Maarten de Rijke", + "published": "2020-09-21", + "updated": "2020-09-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2010.11738v1", + "title": "Optimising Stochastic Routing for Taxi Fleets with Model Enhanced Reinforcement Learning", + "abstract": "The future of mobility-as-a-Service (Maas)should embrace an integrated system\nof ride-hailing, street-hailing and ride-sharing with optimised intelligent\nvehicle routing in response to a real-time, stochastic demand pattern. We aim\nto optimise routing policies for a large fleet of vehicles for street-hailing\nservices, given a stochastic demand pattern in small to medium-sized road\nnetworks. A model-based dispatch algorithm, a high performance model-free\nreinforcement learning based algorithm and a novel hybrid algorithm combining\nthe benefits of both the top-down approach and the model-free reinforcement\nlearning have been proposed to route the \\emph{vacant} vehicles. We design our\nreinforcement learning based routing algorithm using proximal policy\noptimisation and combined intrinsic and extrinsic rewards to strike a balance\nbetween exploration and exploitation. Using a large-scale agent-based\nmicroscopic simulation platform to evaluate our proposed algorithms, our\nmodel-free reinforcement learning and hybrid algorithm show excellent\nperformance on both artificial road network and community-based Singapore road\nnetwork with empirical demands, and our hybrid algorithm can significantly\naccelerate the model-free learner in the process of learning.", + "authors": "Shen Ren, Qianxiao Li, Liye Zhang, Zheng Qin, Bo Yang", + "published": "2020-10-22", + "updated": "2020-10-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "nlin.AO", + "physics.soc-ph" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1901.11437v3", + "title": "Successor Features Combine Elements of Model-Free and Model-based Reinforcement Learning", + "abstract": "A key question in reinforcement learning is how an intelligent agent can\ngeneralize knowledge across different inputs. By generalizing across different\ninputs, information learned for one input can be immediately reused for\nimproving predictions for another input. Reusing information allows an agent to\ncompute an optimal decision-making strategy using less data. State\nrepresentation is a key element of the generalization process, compressing a\nhigh-dimensional input space into a low-dimensional latent state space. 
This\narticle analyzes properties of different latent state spaces, leading to new\nconnections between model-based and model-free reinforcement learning.\nSuccessor features, which predict frequencies of future observations, form a\nlink between model-based and model-free learning: Learning to predict future\nexpected reward outcomes, a key characteristic of model-based agents, is\nequivalent to learning successor features. Learning successor features is a\nform of temporal difference learning and is equivalent to learning to predict a\nsingle policy's utility, which is a characteristic of model-free agents.\nDrawing on the connection between model-based reinforcement learning and\nsuccessor features, we demonstrate that representations that are predictive of\nfuture reward outcomes generalize across variations in both transitions and\nrewards. This result extends previous work on successor features, which is\nconstrained to fixed transitions and assumes re-learning of the transferred\nstate representation.", + "authors": "Lucas Lehnert, Michael L. Littman", + "published": "2019-01-31", + "updated": "2020-10-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1901.07905v2", + "title": "Reinforcement Learning Ship Autopilot: Sample efficient and Model Predictive Control-based Approach", + "abstract": "In this research we focus on developing a reinforcement learning system for a\nchallenging task: autonomous control of a real-sized boat, with difficulties\narising from large uncertainties in the challenging ocean environment and the\nextremely high cost of exploring and sampling with a real boat. To this end, we\nexplore a novel Gaussian processes (GP) based reinforcement learning approach\nthat combines sample-efficient model-based reinforcement learning and model\npredictive control (MPC). Our approach, sample-efficient probabilistic model\npredictive control (SPMPC), iteratively learns a Gaussian process dynamics\nmodel and uses it to efficiently update control signals within the MPC closed\ncontrol loop. A system using SPMPC is built to efficiently learn an autopilot\ntask. After investigating its performance in a simulation modeled upon real\nboat driving data, the proposed system successfully learns to drive a\nreal-sized boat equipped with a single engine and sensors measuring GPS, speed,\ndirection, and wind in an autopilot task without human demonstration.", + "authors": "Yunduan Cui, Shigeki Osaki, Takamitsu Matsubara", + "published": "2019-01-23", + "updated": "2019-07-23", + "primary_cat": "cs.SY", + "cats": [ + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.13044v1", + "title": "Reinforcement Learning with Feedback-modulated TD-STDP", + "abstract": "Spiking neuron networks have been used successfully to solve simple\nreinforcement learning tasks with continuous action set applying learning rules\nbased on spike-timing-dependent plasticity (STDP). However, most of these\nmodels cannot be applied to reinforcement learning tasks with discrete action\nset since they assume that the selected action is a deterministic function of\nfiring rate of neurons, which is continuous. In this paper, we propose a new\nSTDP-based learning rule for spiking neuron networks which contains feedback\nmodulation. 
We show that the STDP-based learning rule can be used to solve\nreinforcement learning tasks with discrete action set at a speed similar to\nstandard reinforcement learning algorithms when applied to the CartPole and\nLunarLander tasks. Moreover, we demonstrate that the agent is unable to solve\nthese tasks if feedback modulation is omitted from the learning rule. We\nconclude that feedback modulation allows better credit assignment when only the\nunits contributing to the executed action and TD error participate in learning.", + "authors": "Stephen Chung, Robert Kozma", + "published": "2020-08-29", + "updated": "2020-08-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML", + "I.2.8" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1806.01265v2", + "title": "Equivalence Between Wasserstein and Value-Aware Loss for Model-based Reinforcement Learning", + "abstract": "Learning a generative model is a key component of model-based reinforcement\nlearning. Though learning a good model in the tabular setting is a simple task,\nlearning a useful model in the approximate setting is challenging. In this\ncontext, an important question is the loss function used for model learning as\nvarying the loss function can have a remarkable impact on effectiveness of\nplanning. Recently Farahmand et al. (2017) proposed a value-aware model\nlearning (VAML) objective that captures the structure of value function during\nmodel learning. Using tools from Asadi et al. (2018), we show that minimizing\nthe VAML objective is in fact equivalent to minimizing the Wasserstein metric.\nThis equivalence improves our understanding of value-aware models, and also\ncreates a theoretical foundation for applications of Wasserstein in model-based\nreinforcement~learning.", + "authors": "Kavosh Asadi, Evan Cater, Dipendra Misra, Michael L. Littman", + "published": "2018-06-01", + "updated": "2018-07-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2102.03022v1", + "title": "Deceptive Reinforcement Learning for Privacy-Preserving Planning", + "abstract": "In this paper, we study the problem of deceptive reinforcement learning to\npreserve the privacy of a reward function. Reinforcement learning is the\nproblem of finding a behaviour policy based on rewards received from\nexploratory behaviour. A key ingredient in reinforcement learning is a reward\nfunction, which determines how much reward (negative or positive) is given and\nwhen. However, in some situations, we may want to keep a reward function\nprivate; that is, to make it difficult for an observer to determine the reward\nfunction used. We define the problem of privacy-preserving reinforcement\nlearning, and present two models for solving it. These models are based on\ndissimulation -- a form of deception that `hides the truth'. We evaluate our\nmodels both computationally and via human behavioural experiments. 
Results show\nthat the resulting policies are indeed deceptive, and that participants can\ndetermine the true reward function less reliably than that of an honest agent.", + "authors": "Zhengshang Liu, Yue Yang, Tim Miller, Peta Masters", + "published": "2021-02-05", + "updated": "2021-02-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.MA" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1611.00862v1", + "title": "Quantile Reinforcement Learning", + "abstract": "In reinforcement learning, the standard criterion to evaluate policies in a\nstate is the expectation of (discounted) sum of rewards. However, this\ncriterion may not always be suitable, we consider an alternative criterion\nbased on the notion of quantiles. In the case of episodic reinforcement\nlearning problems, we propose an algorithm based on stochastic approximation\nwith two timescales. We evaluate our proposition on a simple model of the TV\nshow, Who wants to be a millionaire.", + "authors": "Hugo Gilbert, Paul Weng", + "published": "2016-11-03", + "updated": "2016-11-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.16348v2", + "title": "Rating-based Reinforcement Learning", + "abstract": "This paper develops a novel rating-based reinforcement learning approach that\nuses human ratings to obtain human guidance in reinforcement learning.\nDifferent from the existing preference-based and ranking-based reinforcement\nlearning paradigms, based on human relative preferences over sample pairs, the\nproposed rating-based reinforcement learning approach is based on human\nevaluation of individual trajectories without relative comparisons between\nsample pairs. The rating-based reinforcement learning approach builds on a new\nprediction model for human ratings and a novel multi-class loss function. We\nconduct several experimental studies based on synthetic ratings and real human\nratings to evaluate the effectiveness and benefits of the new rating-based\nreinforcement learning approach.", + "authors": "Devin White, Mingkang Wu, Ellen Novoseller, Vernon J. Lawhern, Nicholas Waytowich, Yongcan Cao", + "published": "2023-07-30", + "updated": "2024-01-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1708.07738v1", + "title": "A Function Approximation Method for Model-based High-Dimensional Inverse Reinforcement Learning", + "abstract": "This works handles the inverse reinforcement learning problem in\nhigh-dimensional state spaces, which relies on an efficient solution of\nmodel-based high-dimensional reinforcement learning problems. To solve the\ncomputationally expensive reinforcement learning problems, we propose a\nfunction approximation method to ensure that the Bellman Optimality Equation\nalways holds, and then estimate a function based on the observed human actions\nfor inverse reinforcement learning problems. The time complexity of the\nproposed method is linearly proportional to the cardinality of the action set,\nthus it can handle high-dimensional even continuous state spaces efficiently.\nWe test the proposed method in a simulated environment to show its accuracy,\nand three clinical tasks to show how it can be used to evaluate a doctor's\nproficiency.", + "authors": "Kun Li, Joel W. 
Burdick", + "published": "2017-08-23", + "updated": "2017-08-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.03188v3", + "title": "Optimizing Quantum Variational Circuits with Deep Reinforcement Learning", + "abstract": "Quantum Machine Learning (QML) is considered to be one of the most promising\napplications of near term quantum devices. However, the optimization of quantum\nmachine learning models presents numerous challenges arising from the\nimperfections of hardware and the fundamental obstacles in navigating an\nexponentially scaling Hilbert space. In this work, we evaluate the potential of\ncontemporary methods in deep reinforcement learning to augment gradient based\noptimization routines in quantum variational circuits. We find that\nreinforcement learning augmented optimizers consistently outperform gradient\ndescent in noisy environments. All code and pretrained weights are available to\nreplicate the results or deploy the models at:\nhttps://github.com/lockwo/rl_qvc_opt.", + "authors": "Owen Lockwood", + "published": "2021-09-07", + "updated": "2022-05-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "quant-ph" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1703.04489v1", + "title": "Reinforcement Learning for Transition-Based Mention Detection", + "abstract": "This paper describes an application of reinforcement learning to the mention\ndetection task. We define a novel action-based formulation for the mention\ndetection task, in which a model can flexibly revise past labeling decisions by\ngrouping together tokens and assigning partial mention labels. We devise a\nmethod to create mention-level episodes and we train a model by rewarding\ncorrectly labeled complete mentions, irrespective of the inner structure\ncreated. The model yields results which are on par with a competitive\nsupervised counterpart while being more flexible in terms of achieving targeted\nbehavior through reward modeling and generating internal mention structure,\nespecially on longer mentions.", + "authors": "Georgiana Dinu, Wael Hamza, Radu Florian", + "published": "2017-03-13", + "updated": "2017-03-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.07260v1", + "title": "TIAGo RL: Simulated Reinforcement Learning Environments with Tactile Data for Mobile Robots", + "abstract": "Tactile information is important for robust performance in robotic tasks that\ninvolve physical interaction, such as object manipulation. However, with more\ndata included in the reasoning and control process, modeling behavior becomes\nincreasingly difficult. Deep Reinforcement Learning (DRL) produced promising\nresults for learning complex behavior in various domains, including\ntactile-based manipulation in robotics. In this work, we present our\nopen-source reinforcement learning environments for the TIAGo service robot.\nThey produce tactile sensor measurements that resemble those of a real\nsensorised gripper for TIAGo, encouraging research in transfer learning of DRL\npolicies. 
Lastly, we show preliminary training results of a learned force\ncontrol policy and compare it to a classical PI controller.", + "authors": "Luca Lach, Francesco Ferro, Robert Haschke", + "published": "2023-11-13", + "updated": "2023-11-13", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2305.03360v1", + "title": "A Survey on Offline Model-Based Reinforcement Learning", + "abstract": "Model-based approaches are becoming increasingly popular in the field of\noffline reinforcement learning, with high potential in real-world applications\ndue to the model's capability of thoroughly utilizing the large historical\ndatasets available with supervised learning techniques. This paper presents a\nliterature review of recent work in offline model-based reinforcement learning,\na field that utilizes model-based approaches in offline reinforcement learning.\nThe survey provides a brief overview of the concepts and recent developments in\nboth offline reinforcement learning and model-based reinforcement learning, and\ndiscuss the intersection of the two fields. We then presents key relevant\npapers in the field of offline model-based reinforcement learning and discuss\ntheir methods, particularly their approaches in solving the issue of\ndistributional shift, the main problem faced by all current offline model-based\nreinforcement learning methods. We further discuss key challenges faced by the\nfield, and suggest possible directions for future work.", + "authors": "Haoyang He", + "published": "2023-05-05", + "updated": "2023-05-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SY", + "eess.SY", + "I.2.6; I.2.8" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2112.01195v1", + "title": "Maximum Entropy Model-based Reinforcement Learning", + "abstract": "Recent advances in reinforcement learning have demonstrated its ability to\nsolve hard agent-environment interaction tasks on a super-human level. However,\nthe application of reinforcement learning methods to practical and real-world\ntasks is currently limited due to most RL state-of-art algorithms' sample\ninefficiency, i.e., the need for a vast number of training episodes. For\nexample, OpenAI Five algorithm that has beaten human players in Dota 2 has\ntrained for thousands of years of game time. Several approaches exist that\ntackle the issue of sample inefficiency, that either offers a more efficient\nusage of already gathered experience or aim to gain a more relevant and diverse\nexperience via a better exploration of an environment. However, to our\nknowledge, no such approach exists for model-based algorithms, that showed\ntheir high sample efficiency in solving hard control tasks with\nhigh-dimensional state space. This work connects exploration techniques and\nmodel-based reinforcement learning. We have designed a novel exploration method\nthat takes into account features of the model-based approach. 
We also\ndemonstrate through experiments that our method significantly improves the\nperformance of the model-based algorithm Dreamer.", + "authors": "Oleg Svidchenko, Aleksei Shpilman", + "published": "2021-12-02", + "updated": "2021-12-02", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.01474v1", + "title": "Offline Reinforcement Learning with Causal Structured World Models", + "abstract": "Model-based methods have recently shown promising for offline reinforcement\nlearning (RL), aiming to learn good policies from historical data without\ninteracting with the environment. Previous model-based offline RL methods learn\nfully connected nets as world-models that map the states and actions to the\nnext-step states. However, it is sensible that a world-model should adhere to\nthe underlying causal effect such that it will support learning an effective\npolicy generalizing well in unseen states. In this paper, We first provide\ntheoretical results that causal world-models can outperform plain world-models\nfor offline RL by incorporating the causal structure into the generalization\nerror bound. We then propose a practical algorithm, oFfline mOdel-based\nreinforcement learning with CaUsal Structure (FOCUS), to illustrate the\nfeasibility of learning and leveraging causal structure in offline RL.\nExperimental results on two benchmarks show that FOCUS reconstructs the\nunderlying causal structure accurately and robustly. Consequently, it performs\nbetter than the plain model-based offline RL algorithms and other causal\nmodel-based RL algorithms.", + "authors": "Zheng-Mao Zhu, Xiong-Hui Chen, Hong-Long Tian, Kun Zhang, Yang Yu", + "published": "2022-06-03", + "updated": "2022-06-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.07915v2", + "title": "Barrier Certified Safety Learning Control: When Sum-of-Square Programming Meets Reinforcement Learning", + "abstract": "Safety guarantee is essential in many engineering implementations.\nReinforcement learning provides a useful way to strengthen safety. However,\nreinforcement learning algorithms cannot completely guarantee safety over\nrealistic operations. To address this issue, this work adopts control barrier\nfunctions over reinforcement learning, and proposes a compensated algorithm to\ncompletely maintain safety. Specifically, a sum-of-squares programming has been\nexploited to search for the optimal controller, and tune the learning\nhyperparameters simultaneously. Thus, the control actions are pledged to be\nalways within the safe region. The effectiveness of proposed method is\ndemonstrated via an inverted pendulum model. 
Compared to quadratic programming\nbased reinforcement learning methods, our sum-of-squares programming based\nreinforcement learning has shown its superiority.", + "authors": "Hejun Huang, Zhenglong Li, Dongkun Han", + "published": "2022-06-16", + "updated": "2022-06-29", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.LG", + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2303.13489v2", + "title": "Boosting Reinforcement Learning and Planning with Demonstrations: A Survey", + "abstract": "Although reinforcement learning has seen tremendous success recently, this\nkind of trial-and-error learning can be impractical or inefficient in complex\nenvironments. The use of demonstrations, on the other hand, enables agents to\nbenefit from expert knowledge rather than having to discover the best action to\ntake through exploration. In this survey, we discuss the advantages of using\ndemonstrations in sequential decision making, various ways to apply\ndemonstrations in learning-based decision making paradigms (for example,\nreinforcement learning and planning in the learned models), and how to collect\nthe demonstrations in various scenarios. Additionally, we exemplify a practical\npipeline for generating and utilizing demonstrations in the recently proposed\nManiSkill robot learning benchmark.", + "authors": "Tongzhou Mu, Hao Su", + "published": "2023-03-23", + "updated": "2023-03-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.01659v1", + "title": "Reinforcement Learning for Battery Energy Storage Dispatch augmented with Model-based Optimizer", + "abstract": "Reinforcement learning has been found useful in solving optimal power flow\n(OPF) problems in electric power distribution systems. However, the use of\nlargely model-free reinforcement learning algorithms that completely ignore the\nphysics-based modeling of the power grid compromises the optimizer performance\nand poses scalability challenges. This paper proposes a novel approach to\nsynergistically combine the physics-based models with learning-based algorithms\nusing imitation learning to solve distribution-level OPF problems.\nSpecifically, we propose imitation learning based improvements in deep\nreinforcement learning (DRL) methods to solve the OPF problem for a specific\ncase of battery storage dispatch in the power distribution systems. The\nproposed imitation learning algorithm uses the approximate optimal solutions\nobtained from a linearized model-based OPF solver to provide a good initial\npolicy for the DRL algorithms while improving the training efficiency. The\neffectiveness of the proposed approach is demonstrated using IEEE 34-bus and\n123-bus distribution feeders with numerous distribution-level battery storage\nsystems.", + "authors": "Gayathri Krishnamoorthy, Anamika Dubey", + "published": "2021-09-02", + "updated": "2021-09-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SY", + "eess.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1406.1853v2", + "title": "Model-based Reinforcement Learning and the Eluder Dimension", + "abstract": "We consider the problem of learning to optimize an unknown Markov decision\nprocess (MDP). 
We show that, if the MDP can be parameterized within some known\nfunction class, we can obtain regret bounds that scale with the dimensionality,\nrather than cardinality, of the system. We characterize this dependence\nexplicitly as $\\tilde{O}(\\sqrt{d_K d_E T})$ where $T$ is time elapsed, $d_K$ is\nthe Kolmogorov dimension and $d_E$ is the \\emph{eluder dimension}. These\nrepresent the first unified regret bounds for model-based reinforcement\nlearning and provide state of the art guarantees in several important settings.\nMoreover, we present a simple and computationally efficient algorithm\n\\emph{posterior sampling for reinforcement learning} (PSRL) that satisfies\nthese bounds.", + "authors": "Ian Osband, Benjamin Van Roy", + "published": "2014-06-07", + "updated": "2014-10-31", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1305.1809v2", + "title": "Cover Tree Bayesian Reinforcement Learning", + "abstract": "This paper proposes an online tree-based Bayesian approach for reinforcement\nlearning. For inference, we employ a generalised context tree model. This\ndefines a distribution on multivariate Gaussian piecewise-linear models, which\ncan be updated in closed form. The tree structure itself is constructed using\nthe cover tree method, which remains efficient in high dimensional spaces. We\ncombine the model with Thompson sampling and approximate dynamic programming to\nobtain effective exploration policies in unknown environments. The flexibility\nand computational simplicity of the model render it suitable for many\nreinforcement learning problems in continuous state spaces. We demonstrate this\nin an experimental comparison with least squares policy iteration.", + "authors": "Nikolaos Tziortziotis, Christos Dimitrakakis, Konstantinos Blekas", + "published": "2013-05-08", + "updated": "2014-05-02", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2303.09013v1", + "title": "Self-Inspection Method of Unmanned Aerial Vehicles in Power Plants Using Deep Q-Network Reinforcement Learning", + "abstract": "For the purpose of inspecting power plants, autonomous robots can be built\nusing reinforcement learning techniques. The method replicates the environment\nand employs a simple reinforcement learning (RL) algorithm. This strategy might\nbe applied in several sectors, including the electricity generation sector. A\npre-trained model with perception, planning, and action is suggested by the\nresearch. To address optimization problems, such as the Unmanned Aerial Vehicle\n(UAV) navigation problem, Deep Q-network (DQN), a reinforcement learning-based\nframework that Deepmind launched in 2015, incorporates both deep learning and\nQ-learning. To overcome problems with current procedures, the research proposes\na power plant inspection system incorporating UAV autonomous navigation and DQN\nreinforcement learning. These training processes set reward functions with\nreference to states and consider both internal and external effect factors,\nwhich distinguishes them from other reinforcement learning training techniques\nnow in use. 
The key components of the reinforcement learning segment of the\ntechnique, for instance, introduce states such as the simulation of a wind\nfield, the battery charge level of an unmanned aerial vehicle, the height the\nUAV reached, etc. The trained model makes it more likely that the inspection\nstrategy will be applied in practice by enabling the UAV to move around on its\nown in difficult environments. The average score of the model converges to\n9,000. The trained model allowed the UAV to make the fewest number of rotations\nnecessary to go to the target point.", + "authors": "Haoran Guan", + "published": "2023-03-16", + "updated": "2023-03-16", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1810.01112v1", + "title": "The Dreaming Variational Autoencoder for Reinforcement Learning Environments", + "abstract": "Reinforcement learning has shown great potential in generalizing over raw\nsensory data using only a single neural network for value optimization. There\nare several challenges in the current state-of-the-art reinforcement learning\nalgorithms that prevent them from converging towards the global optima. It is\nlikely that the solution to these problems lies in short- and long-term\nplanning, exploration and memory management for reinforcement learning\nalgorithms. Games are often used to benchmark reinforcement learning algorithms\nas they provide a flexible, reproducible, and easy to control environment.\nRegardless, few games feature a state-space where results in exploration,\nmemory, and planning are easily perceived. This paper presents The Dreaming\nVariational Autoencoder (DVAE), a neural network based generative modeling\narchitecture for exploration in environments with sparse feedback. We further\npresent Deep Maze, a novel and flexible maze engine that challenges DVAE in\npartial and fully-observable state-spaces, long-horizon tasks, and\ndeterministic and stochastic problems. We show initial findings and encourage\nfurther work in reinforcement learning driven by generative exploration.", + "authors": "Per-Arne Andersen, Morten Goodwin, Ole-Christoffer Granmo", + "published": "2018-10-02", + "updated": "2018-10-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2012.09737v2", + "title": "Model-free and Bayesian Ensembling Model-based Deep Reinforcement Learning for Particle Accelerator Control Demonstrated on the FERMI FEL", + "abstract": "Reinforcement learning holds tremendous promise in accelerator controls. The\nprimary goal of this paper is to show how this approach can be utilised on an\noperational level on accelerator physics problems. Despite the success of\nmodel-free reinforcement learning in several domains, sample-efficiency still\nis a bottle-neck, which might be encompassed by model-based methods. We compare\nwell-suited purely model-based to model-free reinforcement learning applied to\nthe intensity optimisation on the FERMI FEL system. We find that the\nmodel-based approach demonstrates higher representational power and\nsample-efficiency, while the asymptotic performance of the model-free method is\nslightly superior. The model-based algorithm is implemented in a DYNA-style\nusing an uncertainty aware model, and the model-free algorithm is based on\ntailored deep Q-learning. 
In both cases, the algorithms were implemented in a\nway, which presents increased noise robustness as omnipresent in accelerator\ncontrol problems. Code is released in\nhttps://github.com/MathPhysSim/FERMI_RL_Paper.", + "authors": "Simon Hirlaender, Niky Bruchon", + "published": "2020-12-17", + "updated": "2022-01-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SY", + "eess.SY", + "physics.acc-ph", + "I.2; J.2" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.12095v1", + "title": "Document-editing Assistants and Model-based Reinforcement Learning as a Path to Conversational AI", + "abstract": "Intelligent assistants that follow commands or answer simple questions, such\nas Siri and Google search, are among the most economically important\napplications of AI. Future conversational AI assistants promise even greater\ncapabilities and a better user experience through a deeper understanding of the\ndomain, the user, or the user's purposes. But what domain and what methods are\nbest suited to researching and realizing this promise? In this article we argue\nfor the domain of voice document editing and for the methods of model-based\nreinforcement learning. The primary advantages of voice document editing are\nthat the domain is tightly scoped and that it provides something for the\nconversation to be about (the document) that is delimited and fully accessible\nto the intelligent assistant. The advantages of reinforcement learning in\ngeneral are that its methods are designed to learn from interaction without\nexplicit instruction and that it formalizes the purposes of the assistant.\nModel-based reinforcement learning is needed in order to genuinely understand\nthe domain of discourse and thereby work efficiently with the user to achieve\ntheir goals. Together, voice document editing and model-based reinforcement\nlearning comprise a promising research direction for achieving conversational\nAI.", + "authors": "Katya Kudashkina, Patrick M. Pilarski, Richard S. Sutton", + "published": "2020-08-27", + "updated": "2020-08-27", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.HC", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2006.09234v1", + "title": "Model Embedding Model-Based Reinforcement Learning", + "abstract": "Model-based reinforcement learning (MBRL) has shown its advantages in\nsample-efficiency over model-free reinforcement learning (MFRL). Despite the\nimpressive results it achieves, it still faces a trade-off between the ease of\ndata generation and model bias. In this paper, we propose a simple and elegant\nmodel-embedding model-based reinforcement learning (MEMB) algorithm in the\nframework of the probabilistic reinforcement learning. To balance the\nsample-efficiency and model bias, we exploit both real and imaginary data in\nthe training. In particular, we embed the model in the policy update and learn\n$Q$ and $V$ functions from the real data set. We provide the theoretical\nanalysis of MEMB with the Lipschitz continuity assumption on the model and\npolicy. 
At last, we evaluate MEMB on several benchmarks and demonstrate our\nalgorithm can achieve state-of-the-art performance.", + "authors": "Xiaoyu Tan, Chao Qu, Junwu Xiong, James Zhang", + "published": "2020-06-16", + "updated": "2020-06-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1609.03348v4", + "title": "A Threshold-based Scheme for Reinforcement Learning in Neural Networks", + "abstract": "A generic and scalable Reinforcement Learning scheme for Artificial Neural\nNetworks is presented, providing a general purpose learning machine. By\nreference to a node threshold three features are described 1) A mechanism for\nPrimary Reinforcement, capable of solving linearly inseparable problems 2) The\nlearning scheme is extended to include a mechanism for Conditioned\nReinforcement, capable of forming long term strategy 3) The learning scheme is\nmodified to use a threshold-based deep learning algorithm, providing a robust\nand biologically inspired alternative to backpropagation. The model may be used\nfor supervised as well as unsupervised training regimes.", + "authors": "Thomas H. Ward", + "published": "2016-09-12", + "updated": "2017-01-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NE" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2209.06604v1", + "title": "Learning state correspondence of reinforcement learning tasks for knowledge transfer", + "abstract": "Deep reinforcement learning has shown an ability to achieve super-human\nperformance in solving complex reinforcement learning (RL) tasks only from\nraw-pixels. However, it fails to reuse knowledge from previously learnt tasks\nto solve new, unseen ones. Generalizing and reusing knowledge are the\nfundamental requirements for creating a truly intelligent agent. This work\nproposes a general method for one-to-one transfer learning based on generative\nadversarial network model tailored to RL task.", + "authors": "Marko Ruman, Tatiana V. Guy", + "published": "2022-09-14", + "updated": "2022-09-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1811.00128v1", + "title": "Towards a Simple Approach to Multi-step Model-based Reinforcement Learning", + "abstract": "When environmental interaction is expensive, model-based reinforcement\nlearning offers a solution by planning ahead and avoiding costly mistakes.\nModel-based agents typically learn a single-step transition model. In this\npaper, we propose a multi-step model that predicts the outcome of an action\nsequence with variable length. We show that this model is easy to learn, and\nthat the model can make policy-conditional predictions. We report preliminary\nresults that show a clear advantage for the multi-step model compared to its\none-step counterpart.", + "authors": "Kavosh Asadi, Evan Cater, Dipendra Misra, Michael L. 
Littman", + "published": "2018-10-31", + "updated": "2018-10-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2305.00477v2", + "title": "Posterior Sampling for Deep Reinforcement Learning", + "abstract": "Despite remarkable successes, deep reinforcement learning algorithms remain\nsample inefficient: they require an enormous amount of trial and error to find\ngood policies. Model-based algorithms promise sample efficiency by building an\nenvironment model that can be used for planning. Posterior Sampling for\nReinforcement Learning is such a model-based algorithm that has attracted\nsignificant interest due to its performance in the tabular setting. This paper\nintroduces Posterior Sampling for Deep Reinforcement Learning (PSDRL), the\nfirst truly scalable approximation of Posterior Sampling for Reinforcement\nLearning that retains its model-based essence. PSDRL combines efficient\nuncertainty quantification over latent state space models with a specially\ntailored continual planning algorithm based on value-function approximation.\nExtensive experiments on the Atari benchmark show that PSDRL significantly\noutperforms previous state-of-the-art attempts at scaling up posterior sampling\nwhile being competitive with a state-of-the-art (model-based) reinforcement\nlearning method, both in sample efficiency and computational efficiency.", + "authors": "Remo Sasso, Michelangelo Conserva, Paulo Rauber", + "published": "2023-04-30", + "updated": "2023-05-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "68T07", + "I.2.m" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1906.12189v1", + "title": "Learning-based Model Predictive Control for Safe Exploration and Reinforcement Learning", + "abstract": "Reinforcement learning has been successfully used to solve difficult tasks in\ncomplex unknown environments. However, these methods typically do not provide\nany safety guarantees during the learning process. This is particularly\nproblematic, since reinforcement learning agent actively explore their\nenvironment. This prevents their use in safety-critical, real-world\napplications. In this paper, we present a learning-based model predictive\ncontrol scheme that provides high-probability safety guarantees throughout the\nlearning process. Based on a reliable statistical model, we construct provably\naccurate confidence intervals on predicted trajectories. Unlike previous\napproaches, we allow for input-dependent uncertainties. Based on these reliable\npredictions, we guarantee that trajectories satisfy safety constraints.\nMoreover, we use a terminal set constraint to recursively guarantee the\nexistence of safe control actions at every iteration. 
We evaluate the resulting\nalgorithm to safely explore the dynamics of an inverted pendulum and to solve a\nreinforcement learning task on a cart-pole system with safety constraints.", + "authors": "Torsten Koller, Felix Berkenkamp, Matteo Turchetta, Joschka Boedecker, Andreas Krause", + "published": "2019-06-27", + "updated": "2019-06-27", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.AI", + "cs.LG", + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1804.07193v3", + "title": "Lipschitz Continuity in Model-based Reinforcement Learning", + "abstract": "We examine the impact of learning Lipschitz continuous models in the context\nof model-based reinforcement learning. We provide a novel bound on multi-step\nprediction error of Lipschitz models where we quantify the error using the\nWasserstein metric. We go on to prove an error bound for the value-function\nestimate arising from Lipschitz models and show that the estimated value\nfunction is itself Lipschitz. We conclude with empirical results that show the\nbenefits of controlling the Lipschitz constant of neural-network models.", + "authors": "Kavosh Asadi, Dipendra Misra, Michael L. Littman", + "published": "2018-04-19", + "updated": "2018-07-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + } + ], + [ + { + "url": "http://arxiv.org/abs/2403.03558v1", + "title": "Benchmarking Hallucination in Large Language Models based on Unanswerable Math Word Problem", + "abstract": "Large language models (LLMs) are highly effective in various natural language\nprocessing (NLP) tasks. However, they are susceptible to producing unreliable\nconjectures in ambiguous contexts called hallucination. This paper presents a\nnew method for evaluating LLM hallucination in Question Answering (QA) based on\nthe unanswerable math word problem (MWP). To support this approach, we\ninnovatively develop a dataset called Unanswerable Math Word Problem (UMWP)\nwhich comprises 5200 questions across five categories. We developed an\nevaluation methodology combining text similarity and mathematical expression\ndetection to determine whether LLM considers the question unanswerable. The\nresults of extensive experiments conducted on 31 LLMs, including GPT-3,\nInstructGPT, LLaMA, and Claude, demonstrate that in-context learning and\nreinforcement learning with human feedback (RLHF) training significantly\nenhance the model's ability to avoid hallucination. We show that utilizing MWP\nis a reliable and effective approach to assess hallucination. Our code and data\nare available at https://github.com/Yuki-Asuuna/UMWP.", + "authors": "Yuhong Sun, Zhangyue Yin, Qipeng Guo, Jiawen Wu, Xipeng Qiu, Hui Zhao", + "published": "2024-03-06", + "updated": "2024-03-06", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Original Paper", + "paper_cat": "Model AND Based AND Reinforcement AND Learning", + "gt": "2.1. Math Word Problem Benchmark Many answerable MWP datasets have been proposed in previous research, primarily differing in terms of difficulty, dataset size, and content. Koncel-Kedziorski et al. (2016) provides an automatic construction framework and collects 3,320 problems for a dataset called MAWPS. Miao et al. (2020) presents ASDiv that covers more text patterns and most problem types taught in elementary school. Each MWP is annotated with its problem type and grade level. Patel et al. 
(2021) creates a challenge set called SVAMP for a more robust evaluation of methods developed to solve elementarylevel MWP. OpenAI introduces GSM8K (Cobbe et al., 2021), a dataset comprising 8.5K high-quality linguistically diverse grade school MWPs, designing to evaluate the multi-step mathematical reasoning capability of LLMs. Hendrycks et al. (2021) introduces MATH, a dataset of 12,500 challenging competition mathematics problems. For now, MATH and GSM8K are the two most difficult MWP datasets. 2.2. Mathematical Ability of LLM With the popularity of LLM, there is an increasing focus on applying LLM to solve math problems. Frieder et al. (2023) investigates the mathematical capabilities of two iterations of ChatGPT (released 9-January-2023 and 30-January-2023) and of GPT4 by testing them on 6 publicly available datasets. The result shows that though the quality of answers can be positively surprising, GPT is not yet ready to deliver high-quality proofs or calculations consistently. Wei et al. (2022) shows that applying a chain of thought prompting can greatly improve performance on a range of arithmetic, commonsense, and symbolic reasoning tasks. Yu et al. (2023) proposes MetaMath, a fine-tuned language model from Llama-v2 that specializes in mathematical reasoning. MetaMath-7B exceeds the state-of-the-art models of the same size by 11.5% and 8.7% on GSM8K and 19.4% on MATH (Hendrycks et al., 2021). MetaMath-70B achieves an accuracy of 82.3% on GSM8K, slightly better than GPT-3.5Turbo. It proves that well-fine-tuned open-source LLMs can compete with commercial LLMs even Source Total Percentage Avg. Length SVAMP 500 19.2% 30.38 MultiArith 300 11.5% 31.76 GSM8K 1700 65.4% 45.38 ASDiv 100 3.8% 28.37 Table 2: Statistics of answerable questions. having much fewer parameters. 2.3. Hallucination Benchmark Research is scarce on hallucination benchmark in the field of mathematical reasoning. However, here are some existing hallucination evaluation studies that focus on general questions. Lin et al. (2022) purposes TruthfulQA containing 817 questions that span 38 categories, including health, law, finance, and politics, to evaluate the truthfulness of LLM. These questions are crafted in a way that will lead humans to answer falsely due to a false belief or misconception. Yin et al. (2023) purposes the SelfAware dataset consisting of 1,032 openended unanswerable questions to evaluate LLMs\u2019 self-knowledge. Li et al. (2023) introduces the HaluEval benchmark, a large collection of generated and human-annotated hallucinated samples for evaluating the performance of LLMs in recognizing hallucination. HaluEval evaluates whether LLM hallucinates through a binary label approach. Min et al. (2023) proposes a unique benchmark called FACTSCORE to automatically evaluate the truthfulness of LLM from the perspective of biographies in Wikipedia.", + "pre_questions": [], + "main_content": "Introduction Large Language Models (LLMs) have taken the field by storm, making remarkable advancements in pushing the boundaries of Natural Language Processing (NLP) (Zhao et al., 2023). Notably, OpenAI\u2019s GPT-4 (OpenAI, 2023), Meta AI\u2019s LLaMA2 (Touvron et al., 2023a) and Google\u2019s PaLM 2 (Anil et al., 2023) have demonstrated exceptional performance across various few-shot and zero-shot NLP tasks, including text generation, text summarization and question answering. However, LLMs can produce unreliable conjectures in ambiguous contexts, which is known as hallucination (Rawte et al., 2023). 
Within the context of NLP, the most inclusive and standard definition of hallucination is the generated content that is nonsensical or unfaithful to the provided source content (Ji et al., 2023). The undesired phenomenon has the potential to seriously mislead humans (Talmor et al., 2019). Figure 1 illustrates an example of hallucination towards a Math Word Problem (MWP). Towards the QA task, this paper evaluates LLMs\u2019 degree of hallucination based on Math Word Problems (MWP). (i) Compared with general questions, MWP is challenging to mitigate hallucination through additional text retrieval. Answering MWP heavily relies on the LLM\u2019s intrinsic abilities, including comprehension, reasoning, and computation abilities. (ii) The answer to MWP is often unique \u0000Corresponding author. Q: Tom had a total of 50 salty cookies and sweet cookies combined. He ate 14 sweet cookies and 9 salty cookies. How many salty cookies did Tom have left? A: Tom had 41 salty cookies left. A: The problem doesn't provide information on how many salty cookies and sweet cookies Tom had at the beginning, so we can't definitively answer how many salty cookies Tom has left. GPT-4 GPT-3.5 Figure 1: An example of hallucination towards a Math Word Problem(MWP). and represented as a numerical value or variable expression. In determining whether a model is prone to hallucination, the MWP-based method only involves evaluating the correctness of a numerical or variable expression output. We regard the MWP with non-unique solutions or no solution that may lead to hallucination in LLMs as the \u201cunanswerable question\u201d. Unanswerable questions can serve as a means to evaluate the degree of hallucination in LLMs, just as teachers often use unanswerable questions to gauge students\u2019 understanding of certain concepts. Rajpurkar et al. (2018) observes extractive reading comprehension systems often tend to make unreliable guesses when the context is missing or ambiguous. This phenomenon also happens in LLMs. When hallucination occurs, LLM tends to give arbitrary or unreasonable answers, just as Figure 1 shows. Ideally, LLM should reply with \u201cInformation missing\u201d or \u201cUnable to answer\u201d. arXiv:2403.03558v1 [cs.CL] 6 Mar 2024 Type Example Percentage Key Information Missing Samanta has 8 more points than Mark, and Mark has 50% more points than Eric. How many points do Samanta, Mark, and Eric have in total? 32% Ambiguous Key Information Jack received some emails in the morning, 5 emails in the afternoon, and 8 emails in the evening. How many more emails did Jack receive in the afternoon and evening than in the morning? 49% Unrealistic Conditions How many triangles with a height of 0 inches and a width of 0 inches could fit inside a square with 2-inch sides? 11% Unrelated Object Joshua bought 25 oranges for $12.50. He sells each one for 60c, how much profit in cents will he make on each apple? 4% Question Missing Baker made 13 cakes. He sold 91 of them and bought 154 new cakes. How many? 5% Table 1: Unanswerable questions in the UMWP dataset that span across mutiple categories. It\u2019s worth noting that while all existing MWP datasets (Hendrycks et al., 2021; Cobbe et al., 2021; Patel et al., 2021) focus on answerable questions, there is a scarcity of datasets related to unanswerable questions. Therefore, to address this data gap, we build a new dataset called UMWP, upon several previous MWP datasets. 
UMWP comprises a total of 5,200 questions with half answerable questions and half unanswerable questions. We classify unanswerable questions into five categories based on their unanswerability reasons. The main contributions of this paper are summarized as follows: \u2022 We innovatively propose a new dataset UMWP consisting of answerable and unanswerable MWP to evaluate the degree of hallucination in LLMs. \u2022 We present a novel hallucination evaluation method for LLMs. Our method employs text similarity and mathematical expression detection to judge whether the LLMs\u2019 responses reflect unanswerability. \u2022 Extensive experiments on a variety of LLMs reveal variations in the degree of hallucination concerning model size, input form, and the utilization of RLHF. To the best of our knowledge, all popular MWP datasets do not have unanswerable questions. We build a novel dataset UMWP upon the existing four MWP datasets SVAMP (Patel et al., 2021), MultiArith (Koncel-Kedziorski et al., 2016), GSM8K (Cobbe et al., 2021), and ASDiv (Miao et al., 2020). The questions in these four datasets are from real-life scenarios and have unique answers. We task two data annotators with modifying the original questions to make them unanswerable. Specific strategies in Table 5 are applied during the modification process. Three volunteers validate the questions. The question with three unanswerable annotations is accepted. Finally, we build a dataset composed of 2,600 answerable questions and 2,600 unanswerable questions. 3.1. Unanswerable Question Unanswerable questions are classified into five categories based on the reasons for unanswerability. The classification criteria are referenced from negative examples in SQUAD 2.0 (Rajpurkar et al., 2018). Table 1 illustrates the five categories with the statistics. LLM\u2019s ideal response for unanswerable question should express uncertainty rather than providing a precise answer. (i) Key Information Missing: Questions where essential conditions are omitted. (ii) Ambiguous Key Information: Questions with ambiguous conditions, including ranges, vague terms, or negations. (iii) Unrealistic Conditions: Questions with conditions that conflict with real-world logic, such as using negative numbers for item quantities or decimals for indivisible items. (iv) Unrelated Object: Questions where the subject mentioned in the question is absent from the source input. (v) Question Missing: Questions without the actual question body. 3.2. Answerable Question Each answerable question has a definite answer. The statistics of answerable questions are shown in Table 2. The GSM8K dataset features longer question descriptions by token count, whereas the other three datasets have shorter ones. 4. Evaluation Method In this section, we introduce the method for quantitatively evaluating LLMs\u2019 degree of hallucination. In the context of instruction and In-Context Learning (ICL) input forms (Ouyang et al., 2022), we observe that LLMs tend to exhibit strong templatelike outputs when expressing uncertain meanings. However, in the Direct input form, LLM outputs may contain words indicating uncertainty, such as \u201cunknown\u201d or \u201cunsure\u201d. Algorithm 1 shows the details of the evaluation process. To judge whether the output of a question reflects unanswerability, we define a similarity function, fsim, to compute the similarity, S, between a given sentence, v, and set U = {u1, u2, . . . , ui}. Set U contains unanswerable template sentences. 
T is a pre-determined threshold. Si = fsim(v, ui) (1) If the condition is met: max(S) \u2265T . The output is regarded as \u201cunanswerable\u201d. If LLMs\u2019 responses appear as variable expressions, we assume the LLM may have identified potential variables in the unanswerable question. Algorithm 1 Answerability Evaluation 1: Input: Generated text v of a question by LLM 2: Output: Answerable or not 3: S \u2190fsim(v, ui) 4: if max(S) \u2265T then 5: return False 6: end if 7: T \u2190TokenizeText(v) 8: T \u2032 \u2190RemoveCommonVocabulary(T) 9: v\u2032 \u2190RemoveWhitespace(T \u2032) 10: if ContainsExpression(v\u2032) then 11: return False 12: end if 13: return True Figure 2: An example of extracting variable expression from raw LLM output. Otherwise, we assume LLM regards the question as \u201canswerable\u201d. The identification process is described as follows: (i) LLMs\u2019 output is tokenized by the open-source tool Spacy (Montani et al., 2023). (ii) Common vocabulary and space characters are removed from the text. (iii) Identification is done by checking for the presence of valid variable expressions by regex. If found, the output is labeled as \u201cunanswerable\u201d. An example is illustrated in Figure 2. We adopt the F1 score as the metric for evaluating LLMs\u2019 degree of hallucination. To identify unanswerable questions, we designate unanswerable questions as positive cases and answerable questions as negative cases. 5. Experiment We conduct experiments using a series of LLMs, including GPT-3 (Brown et al., 2020), InstructGPT (Ouyang et al., 2022), Claude, LLaMA (Touvron et al., 2023b) and LLaMA-2 (Touvron et al., 2023a). We employ three different input forms: Direct, Instruction, and ICL. 5.1. Setting We adopt SimCSE (Gao et al., 2021) as the similarity function. According to the threshold ablation (Yin et al., 2023), we set the similarity threshold T = 0.75. During the generation process, we set the temperature T = 0.7 for GPT, InstructGPT, LLaMA, and LLaMA-2. To eliminate potential similarity calculation errors caused by differences in the lengths of target and reference sentences, we employ a sliding window of length 6 to parse the output sentence into semantic chunks. 5.2. Human Benchmark To establish a benchmark for humans, We randomly select 200 samples from UMWP, ensuring the distribution of these samples across different categories remains consistent with the original dataset. Subsequently, we assign these samples to five volunteers. The benchmark for humans is calculated based on the average F1 score obtained from these five volunteers. 5.3. Set U Construction We aggregate answers from 31 LLMs that are labeled as \u201cunanswerable\u201d and extract common features to construct the set U. Subsequently, we conducted a manual filtering process to eliminate incorrect strings from set U. The detail of set U is shown in Section A.5. 5.4. Experiment Results Analysis We conduct a concise analysis of LLMs\u2019 hallucination performance on UMWP, mainly considering 4 dimensions: model size, input forms, RLHF, and comparison of evaluation methods. The experimental results for the following three dimensions (model size, input forms, RLHF) are depicted in Figure 3. Model Size. In the LLaMA series, across three input forms, there is a continuous improvement in the model\u2019s F1 Score as the model size increases. In the InstructGPT series, this trend is generally observed, except for the text-babbage-001. Input Forms. 
Input Forms. Compared to Direct input, the Instruction and ICL input forms provide richer contextual information, significantly improving the LLMs\u2019 ability to recognize hallucination. As the parameter size increases, the F1-score gap between the Instruction and ICL input forms gradually narrows. Reinforcement Learning with Human Feedback (RLHF). Comparing LLaMA-v2-7b-chat with LLaMA-v2-7b, LLaMA-v2-13b-chat with LLaMA-v2-13b, and LLaMA-v2-70b-chat with LLaMA-v2-70b, we find that RLHF (Ouyang et al., 2022) substantially improves the F1 score across all three input forms. Notably, LLaMA-v2-13b-chat can compete with LLaMA-65b despite having significantly fewer parameters. Figure 3: Experiment results for the InstructGPT, Claude, and LLaMA series (one panel per series; y-axis: F1 score) using three different input forms (Direct, Instruction, and ICL). Figure 4: F1 score (%) of LLMs in different series and of humans under the instruction input form: davinci 46.37, text-davinci-003 55.27, LLaMA-65b 59.28, LLaMA-v2-70b-chat 73.18, claude-instant-1.2 76.34, claude-2 78.13, gpt-3.5-turbo-0613 81.53, gpt-4-0613 85.24, human 93.16. Evaluation Methods Comparison. LLMs can recognize potential variables within unanswerable questions and may output a math expression in response. We set the sample size to 520 (10% of UMWP) and use random sampling, ensuring that the proportion of unanswerable questions across categories is consistent with Table 2. Five annotators participate in the evaluation process. Table 3 shows that combining the template-based approach with mathematical expression detection improves consistency with human judgment; the Cohen\u2019s kappa coefficients for the LLMs in Table 3 fall within the range of good agreement (>0.75). Table 3: Cohen\u2019s kappa comparison between the two evaluation methods in the direct input form (Model: Template / Template+Rule): text-davinci-003 0.732 / 0.804 (+0.072); claude-1 0.744 / 0.791 (+0.047); LLaMA-7b 0.702 / 0.757 (+0.055); gpt-3.5 0.753 / 0.802 (+0.049); gpt-4 0.864 / 0.891 (+0.027). Compare with Human. We also investigate human benchmarks on UMWP. Figure 4 compares LLMs from different series by their F1 scores under the instruction input form. GPT-4 performs best, achieving an F1 score of 85.24%, yet it still falls short of the human benchmark of 93.16%. 5.5. Noise Analysis According to Algorithm 1, each LLM response receives a binary label. Beyond this binary classification, experiments are needed to judge whether LLM output contains nonsensical or unfaithful information. We manually examine whether 5 LLMs generate unrelated content; these LLMs are chosen because they exhibit relatively lower capabilities within their respective series. The result is shown in Appendix Table 4. 
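For concreteness, the two quantities used in this section, the F1 score with unanswerable questions treated as the positive class and the Cohen's kappa agreement reported in Table 3, can be computed as in the sketch below. The label arrays are hypothetical placeholders rather than UMWP results, and the use of scikit-learn is an assumption; the paper does not state which implementation it relies on.

```python
from sklearn.metrics import cohen_kappa_score, f1_score

# Assumed label convention: 1 = "unanswerable" (positive class), 0 = "answerable".
# The arrays below are hypothetical placeholders, not values from UMWP.
gold = [1, 1, 0, 0, 1, 0]    # dataset answerability labels
auto = [1, 0, 0, 0, 1, 0]    # labels produced by the automatic evaluation (Algorithm 1)
human = [1, 1, 0, 0, 1, 0]   # labels assigned by a human annotator

f1 = f1_score(gold, auto, pos_label=1)      # unanswerable questions count as positives
kappa = cohen_kappa_score(auto, human)      # agreement of the kind reported in Table 3
print(f"F1 = {f1:.3f}, Cohen's kappa = {kappa:.3f}")
```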
Although there are cases where an LLM may output information unrelated to the question, such cases are rare and have a limited impact on the benchmark results. We provide further discussion and analysis in Section A.1. 6. Conclusion Hallucination in LLMs can seriously mislead humans. This study explores the evaluation of hallucination in LLMs through the perspective of Unanswerable Math Word Problems (UMWP). Based on existing MWP datasets, we create a new dataset and introduce an evaluation method combining text similarity and mathematical expression detection for assessing hallucination in various series of LLMs, including GPT-3, InstructGPT, Claude, and LLaMA. The results of extensive experiments highlight the impact of model size, In-Context Learning, and RLHF on hallucination mitigation. We believe that our work provides a feasible way of assessing hallucination in LLMs. Ethics Statement Adhering to the CC-BY-SA-4.0 protocol, the UMWP dataset has been exclusively curated for academic and research purposes. We explicitly prohibit any commercial use or any application of the data that might be considered unlawful, harmful, or unethical. The answerable questions in UMWP originate from the open-source datasets GSM8K, MultiArith, ASDiv, and SVAMP. The unanswerable questions have undergone careful manual modification by three different annotators. To establish a benchmark for humans, we invited five volunteers to complete the randomly sampled questions from the UMWP dataset. All annotators are compensated at the local average hourly wage for their work and are ensured to work during regular working hours. The UMWP dataset strictly adheres to relevant laws, regulations, and data collection principles. We have obtained all necessary authorizations and permissions to ensure the lawful acquisition and utilization of the data. We are committed to safeguarding the privacy rights of individuals within the UMWP dataset. We have implemented rigorous anonymization procedures, ensuring that all personal identity information and sensitive data are transformed to prevent any inadvertent disclosure of individual identities or sensitive information. We welcome feedback and concerns from users and researchers regarding the dataset and pledge to address and resolve any relevant issues as soon as possible. We encourage all users and researchers to adhere to ethical standards and maintain a high level of moral and legal consciousness when using the dataset. Limitations We focus on hallucination benchmarking in the context of question answering in English and do not explore other tasks, such as summarization or code generation. The UMWP dataset could also be extended to languages other than English. In addition, we only propose methods to mitigate hallucination from the perspective of prompt engineering in the experiment section, without delving into the fundamental causes of, and solutions to, hallucination in the context of UMWP. Acknowledgments This work is supported by the National Key Research and Development Program of China (2022YFC3302600). Bibliographical" + }, + { + "url": "http://arxiv.org/abs/2109.07958v2", + "title": "TruthfulQA: Measuring How Models Mimic Human Falsehoods", + "abstract": "We propose a benchmark to measure whether a language model is truthful in\ngenerating answers to questions. The benchmark comprises 817 questions that\nspan 38 categories, including health, law, finance and politics. 
We crafted\nquestions that some humans would answer falsely due to a false belief or\nmisconception. To perform well, models must avoid generating false answers\nlearned from imitating human texts. We tested GPT-3, GPT-Neo/J, GPT-2 and a\nT5-based model. The best model was truthful on 58% of questions, while human\nperformance was 94%. Models generated many false answers that mimic popular\nmisconceptions and have the potential to deceive humans. The largest models\nwere generally the least truthful. This contrasts with other NLP tasks, where\nperformance improves with model size. However, this result is expected if false\nanswers are learned from the training distribution. We suggest that scaling up\nmodels alone is less promising for improving truthfulness than fine-tuning\nusing training objectives other than imitation of text from the web.", + "authors": "Stephanie Lin, Jacob Hilton, Owain Evans", + "published": "2021-09-08", + "updated": "2022-05-08", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CY", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.14251v2", + "title": "FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation", + "abstract": "Evaluating the factuality of long-form text generated by large language\nmodels (LMs) is non-trivial because (1) generations often contain a mixture of\nsupported and unsupported pieces of information, making binary judgments of\nquality inadequate, and (2) human evaluation is time-consuming and costly. In\nthis paper, we introduce FACTSCORE, a new evaluation that breaks a generation\ninto a series of atomic facts and computes the percentage of atomic facts\nsupported by a reliable knowledge source. We conduct an extensive human\nevaluation to obtain FACTSCOREs of people biographies generated by several\nstate-of-the-art commercial LMs -- InstructGPT, ChatGPT, and the\nretrieval-augmented PerplexityAI -- and report new analysis demonstrating the\nneed for such a fine-grained score (e.g., ChatGPT only achieves 58%). Since\nhuman evaluation is costly, we also introduce an automated model that estimates\nFACTSCORE using retrieval and a strong language model, with less than a 2%\nerror rate. Finally, we use this automated metric to evaluate 6,500 generations\nfrom a new set of 13 recent LMs that would have cost $26K if evaluated by\nhumans, with various findings: GPT-4 and ChatGPT are more factual than public\nmodels, and Vicuna and Alpaca are some of the best public models. FACTSCORE is\navailable for public use via `pip install factscore`.", + "authors": "Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Wei Koh, Mohit Iyyer, Luke Zettlemoyer, Hannaneh Hajishirzi", + "published": "2023-05-23", + "updated": "2023-10-11", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2201.11903v6", + "title": "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models", + "abstract": "We explore how generating a chain of thought -- a series of intermediate\nreasoning steps -- significantly improves the ability of large language models\nto perform complex reasoning. In particular, we show how such reasoning\nabilities emerge naturally in sufficiently large language models via a simple\nmethod called chain of thought prompting, where a few chain of thought\ndemonstrations are provided as exemplars in prompting. 
Experiments on three\nlarge language models show that chain of thought prompting improves performance\non a range of arithmetic, commonsense, and symbolic reasoning tasks. The\nempirical gains can be striking. For instance, prompting a 540B-parameter\nlanguage model with just eight chain of thought exemplars achieves state of the\nart accuracy on the GSM8K benchmark of math word problems, surpassing even\nfinetuned GPT-3 with a verifier.", + "authors": "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, Denny Zhou", + "published": "2022-01-28", + "updated": "2023-01-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2309.12284v4", + "title": "MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models", + "abstract": "Large language models (LLMs) have pushed the limits of natural language\nunderstanding and exhibited excellent problem-solving ability. Despite the\ngreat success, most existing open-source LLMs (e.g., LLaMA-2) are still far\naway from satisfactory for solving mathematical problem due to the complex\nreasoning procedures. To bridge this gap, we propose MetaMath, a fine-tuned\nlanguage model that specializes in mathematical reasoning. Specifically, we\nstart by bootstrapping mathematical questions by rewriting the question from\nmultiple perspectives without extra knowledge, which results in a new dataset\ncalled MetaMathQA. Then we fine-tune the LLaMA-2 models on MetaMathQA.\nExperimental results on two popular benchmarks (i.e., GSM8K and MATH) for\nmathematical reasoning demonstrate that MetaMath outperforms a suite of\nopen-source LLMs by a significant margin. Our MetaMath-7B model achieves 66.4%\non GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same\nsize by 11.5% and 8.7%. Particularly, MetaMath-70B achieves an accuracy of\n82.3% on GSM8K, slightly better than GPT-3.5-Turbo. We release all the\nMetaMathQA dataset, the MetaMath models with different model sizes and the\ntraining code for public use.", + "authors": "Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu", + "published": "2023-09-21", + "updated": "2024-05-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2103.03874v2", + "title": "Measuring Mathematical Problem Solving With the MATH Dataset", + "abstract": "Many intellectual endeavors require mathematical problem solving, but this\nskill remains beyond the capabilities of computers. To measure this ability in\nmachine learning models, we introduce MATH, a new dataset of 12,500 challenging\ncompetition mathematics problems. Each problem in MATH has a full step-by-step\nsolution which can be used to teach models to generate answer derivations and\nexplanations. To facilitate future research and increase accuracy on MATH, we\nalso contribute a large auxiliary pretraining dataset which helps teach models\nthe fundamentals of mathematics. Even though we are able to increase accuracy\non MATH, our results show that accuracy remains relatively low, even with\nenormous Transformer models. Moreover, we find that simply increasing budgets\nand model parameter counts will be impractical for achieving strong\nmathematical reasoning if scaling trends continue. 
While scaling Transformers\nis automatically solving most other text-based tasks, scaling is not currently\nsolving MATH. To have more traction on mathematical problem solving we will\nlikely need new algorithmic advancements from the broader research community.", + "authors": "Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, Jacob Steinhardt", + "published": "2021-03-05", + "updated": "2021-11-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2110.14168v2", + "title": "Training Verifiers to Solve Math Word Problems", + "abstract": "State-of-the-art language models can match human performance on many tasks,\nbut they still struggle to robustly perform multi-step mathematical reasoning.\nTo diagnose the failures of current models and support research, we introduce\nGSM8K, a dataset of 8.5K high quality linguistically diverse grade school math\nword problems. We find that even the largest transformer models fail to achieve\nhigh test performance, despite the conceptual simplicity of this problem\ndistribution. To increase performance, we propose training verifiers to judge\nthe correctness of model completions. At test time, we generate many candidate\nsolutions and select the one ranked highest by the verifier. We demonstrate\nthat verification significantly improves performance on GSM8K, and we provide\nstrong empirical evidence that verification scales more effectively with\nincreased data than a finetuning baseline.", + "authors": "Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, John Schulman", + "published": "2021-10-27", + "updated": "2021-11-18", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2301.13867v2", + "title": "Mathematical Capabilities of ChatGPT", + "abstract": "We investigate the mathematical capabilities of two iterations of ChatGPT\n(released 9-January-2023 and 30-January-2023) and of GPT-4 by testing them on\npublicly available datasets, as well as hand-crafted ones, using a novel\nmethodology. In contrast to formal mathematics, where large databases of formal\nproofs are available (e.g., the Lean Mathematical Library), current datasets of\nnatural-language mathematics, used to benchmark language models, either cover\nonly elementary mathematics or are very small. We address this by publicly\nreleasing two new datasets: GHOSTS and miniGHOSTS. These are the first\nnatural-language datasets curated by working researchers in mathematics that\n(1) aim to cover graduate-level mathematics, (2) provide a holistic overview of\nthe mathematical capabilities of language models, and (3) distinguish multiple\ndimensions of mathematical reasoning. These datasets also test whether ChatGPT\nand GPT-4 can be helpful assistants to professional mathematicians by emulating\nuse cases that arise in the daily professional activities of mathematicians. We\nbenchmark the models on a range of fine-grained performance metrics. For\nadvanced mathematics, this is the most detailed evaluation effort to date. We\nfind that ChatGPT can be used most successfully as a mathematical assistant for\nquerying facts, acting as a mathematical search engine and knowledge base\ninterface. 
GPT-4 can additionally be used for undergraduate-level mathematics\nbut fails on graduate-level difficulty. Contrary to many positive reports in\nthe media about GPT-4 and ChatGPT's exam-solving abilities (a potential case of\nselection bias), their overall mathematical performance is well below the level\nof a graduate student. Hence, if your goal is to use ChatGPT to pass a\ngraduate-level math exam, you would be better off copying from your average\npeer!", + "authors": "Simon Frieder, Luca Pinchetti, Alexis Chevalier, Ryan-Rhys Griffiths, Tommaso Salvatori, Thomas Lukasiewicz, Philipp Christian Petersen, Julius Berner", + "published": "2023-01-31", + "updated": "2023-07-20", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2103.03874v2", + "title": "Measuring Mathematical Problem Solving With the MATH Dataset", + "abstract": "Many intellectual endeavors require mathematical problem solving, but this\nskill remains beyond the capabilities of computers. To measure this ability in\nmachine learning models, we introduce MATH, a new dataset of 12,500 challenging\ncompetition mathematics problems. Each problem in MATH has a full step-by-step\nsolution which can be used to teach models to generate answer derivations and\nexplanations. To facilitate future research and increase accuracy on MATH, we\nalso contribute a large auxiliary pretraining dataset which helps teach models\nthe fundamentals of mathematics. Even though we are able to increase accuracy\non MATH, our results show that accuracy remains relatively low, even with\nenormous Transformer models. Moreover, we find that simply increasing budgets\nand model parameter counts will be impractical for achieving strong\nmathematical reasoning if scaling trends continue. While scaling Transformers\nis automatically solving most other text-based tasks, scaling is not currently\nsolving MATH. To have more traction on mathematical problem solving we will\nlikely need new algorithmic advancements from the broader research community.", + "authors": "Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, Jacob Steinhardt", + "published": "2021-03-05", + "updated": "2021-11-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2306.07525v1", + "title": "Using Collision Momentum in Deep Reinforcement Learning Based Adversarial Pedestrian Modeling", + "abstract": "Recent research in pedestrian simulation often aims to develop realistic\nbehaviors in various situations, but it is challenging for existing algorithms\nto generate behaviors that identify weaknesses in automated vehicles'\nperformance in extreme and unlikely scenarios and edge cases. To address this,\nspecialized pedestrian behavior algorithms are needed. Current research focuses\non realistic trajectories using social force models and reinforcement learning\nbased models. However, we propose a reinforcement learning algorithm that\nspecifically targets collisions and better uncovers unique failure modes of\nautomated vehicle controllers. 
Our algorithm is efficient and generates more\nsevere collisions, allowing for the identification and correction of weaknesses\nin autonomous driving algorithms in complex and varied scenarios.", + "authors": "Dianwei Chen, Ekim Yurtsever, Keith Redmill, Umit Ozguner", + "published": "2023-06-13", + "updated": "2023-06-13", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1811.00128v1", + "title": "Towards a Simple Approach to Multi-step Model-based Reinforcement Learning", + "abstract": "When environmental interaction is expensive, model-based reinforcement\nlearning offers a solution by planning ahead and avoiding costly mistakes.\nModel-based agents typically learn a single-step transition model. In this\npaper, we propose a multi-step model that predicts the outcome of an action\nsequence with variable length. We show that this model is easy to learn, and\nthat the model can make policy-conditional predictions. We report preliminary\nresults that show a clear advantage for the multi-step model compared to its\none-step counterpart.", + "authors": "Kavosh Asadi, Evan Cater, Dipendra Misra, Michael L. Littman", + "published": "2018-10-31", + "updated": "2018-10-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2303.09013v1", + "title": "Self-Inspection Method of Unmanned Aerial Vehicles in Power Plants Using Deep Q-Network Reinforcement Learning", + "abstract": "For the purpose of inspecting power plants, autonomous robots can be built\nusing reinforcement learning techniques. The method replicates the environment\nand employs a simple reinforcement learning (RL) algorithm. This strategy might\nbe applied in several sectors, including the electricity generation sector. A\npre-trained model with perception, planning, and action is suggested by the\nresearch. To address optimization problems, such as the Unmanned Aerial Vehicle\n(UAV) navigation problem, Deep Q-network (DQN), a reinforcement learning-based\nframework that Deepmind launched in 2015, incorporates both deep learning and\nQ-learning. To overcome problems with current procedures, the research proposes\na power plant inspection system incorporating UAV autonomous navigation and DQN\nreinforcement learning. These training processes set reward functions with\nreference to states and consider both internal and external effect factors,\nwhich distinguishes them from other reinforcement learning training techniques\nnow in use. The key components of the reinforcement learning segment of the\ntechnique, for instance, introduce states such as the simulation of a wind\nfield, the battery charge level of an unmanned aerial vehicle, the height the\nUAV reached, etc. The trained model makes it more likely that the inspection\nstrategy will be applied in practice by enabling the UAV to move around on its\nown in difficult environments. The average score of the model converges to\n9,000. 
The trained model allowed the UAV to make the fewest number of rotations\nnecessary to go to the target point.", + "authors": "Haoran Guan", + "published": "2023-03-16", + "updated": "2023-03-16", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2110.14524v1", + "title": "Model based Multi-agent Reinforcement Learning with Tensor Decompositions", + "abstract": "A challenge in multi-agent reinforcement learning is to be able to generalize\nover intractable state-action spaces. Inspired from Tesseract [Mahajan et al.,\n2021], this position paper investigates generalisation in state-action space\nover unexplored state-action pairs by modelling the transition and reward\nfunctions as tensors of low CP-rank. Initial experiments on synthetic MDPs show\nthat using tensor decompositions in a model-based reinforcement learning\nalgorithm can lead to much faster convergence if the true transition and reward\nfunctions are indeed of low rank.", + "authors": "Pascal Van Der Vaart, Anuj Mahajan, Shimon Whiteson", + "published": "2021-10-27", + "updated": "2021-10-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.MA" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.01474v1", + "title": "Offline Reinforcement Learning with Causal Structured World Models", + "abstract": "Model-based methods have recently shown promising for offline reinforcement\nlearning (RL), aiming to learn good policies from historical data without\ninteracting with the environment. Previous model-based offline RL methods learn\nfully connected nets as world-models that map the states and actions to the\nnext-step states. However, it is sensible that a world-model should adhere to\nthe underlying causal effect such that it will support learning an effective\npolicy generalizing well in unseen states. In this paper, We first provide\ntheoretical results that causal world-models can outperform plain world-models\nfor offline RL by incorporating the causal structure into the generalization\nerror bound. We then propose a practical algorithm, oFfline mOdel-based\nreinforcement learning with CaUsal Structure (FOCUS), to illustrate the\nfeasibility of learning and leveraging causal structure in offline RL.\nExperimental results on two benchmarks show that FOCUS reconstructs the\nunderlying causal structure accurately and robustly. Consequently, it performs\nbetter than the plain model-based offline RL algorithms and other causal\nmodel-based RL algorithms.", + "authors": "Zheng-Mao Zhu, Xiong-Hui Chen, Hong-Long Tian, Kun Zhang, Yang Yu", + "published": "2022-06-03", + "updated": "2022-06-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1912.03918v1", + "title": "Transformer Based Reinforcement Learning For Games", + "abstract": "Recent times have witnessed sharp improvements in reinforcement learning\ntasks using deep reinforcement learning techniques like Deep Q Networks, Policy\nGradients, Actor Critic methods which are based on deep learning based models\nand back-propagation of gradients to train such models. An active area of\nresearch in reinforcement learning is about training agents to play complex\nvideo games, which so far has been something accomplished only by human\nintelligence. 
Some state of the art performances in video game playing using\ndeep reinforcement learning are obtained by processing the sequence of frames\nfrom video games, passing them through a convolutional network to obtain\nfeatures and then using recurrent neural networks to figure out the action\nleading to optimal rewards. The recurrent neural network will learn to extract\nthe meaningful signal out of the sequence of such features. In this work, we\npropose a method utilizing a transformer network which have recently replaced\nRNNs in Natural Language Processing (NLP), and perform experiments to compare\nwith existing methods.", + "authors": "Uddeshya Upadhyay, Nikunj Shah, Sucheta Ravikanti, Mayanka Medhe", + "published": "2019-12-09", + "updated": "2019-12-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NE" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2305.00477v2", + "title": "Posterior Sampling for Deep Reinforcement Learning", + "abstract": "Despite remarkable successes, deep reinforcement learning algorithms remain\nsample inefficient: they require an enormous amount of trial and error to find\ngood policies. Model-based algorithms promise sample efficiency by building an\nenvironment model that can be used for planning. Posterior Sampling for\nReinforcement Learning is such a model-based algorithm that has attracted\nsignificant interest due to its performance in the tabular setting. This paper\nintroduces Posterior Sampling for Deep Reinforcement Learning (PSDRL), the\nfirst truly scalable approximation of Posterior Sampling for Reinforcement\nLearning that retains its model-based essence. PSDRL combines efficient\nuncertainty quantification over latent state space models with a specially\ntailored continual planning algorithm based on value-function approximation.\nExtensive experiments on the Atari benchmark show that PSDRL significantly\noutperforms previous state-of-the-art attempts at scaling up posterior sampling\nwhile being competitive with a state-of-the-art (model-based) reinforcement\nlearning method, both in sample efficiency and computational efficiency.", + "authors": "Remo Sasso, Michelangelo Conserva, Paulo Rauber", + "published": "2023-04-30", + "updated": "2023-05-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "68T07", + "I.2.m" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1012.1552v1", + "title": "Bridging the Gap between Reinforcement Learning and Knowledge Representation: A Logical Off- and On-Policy Framework", + "abstract": "Knowledge Representation is important issue in reinforcement learning. In\nthis paper, we bridge the gap between reinforcement learning and knowledge\nrepresentation, by providing a rich knowledge representation framework, based\non normal logic programs with answer set semantics, that is capable of solving\nmodel-free reinforcement learning problems for more complex do-mains and\nexploits the domain-specific knowledge. We prove the correctness of our\napproach. We show that the complexity of finding an offline and online policy\nfor a model-free reinforcement learning problem in our approach is NP-complete.\nMoreover, we show that any model-free reinforcement learning problem in MDP\nenvironment can be encoded as a SAT problem. 
The importance of that is\nmodel-free reinforcement", + "authors": "Emad Saad", + "published": "2010-12-07", + "updated": "2010-12-07", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG", + "cs.LO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2211.10688v2", + "title": "ReInform: Selecting paths with reinforcement learning for contextualized link prediction", + "abstract": "We propose to use reinforcement learning to inform transformer-based\ncontextualized link prediction models by providing paths that are most useful\nfor predicting the correct answer. This is in contrast to previous approaches,\nthat either used reinforcement learning (RL) to directly search for the answer,\nor based their prediction on limited or randomly selected context. Our\nexperiments on WN18RR and FB15k-237 show that contextualized link prediction\nmodels consistently outperform RL-based answer search, and that additional\nimprovements (of up to 13.5% MRR) can be gained by combining RL with a link\nprediction model. The PyTorch implementation of the RL agent is available at\nhttps://github.com/marina-sp/reinform", + "authors": "Marina Speranskaya, Sameh Methias, Benjamin Roth", + "published": "2022-11-19", + "updated": "2023-01-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1709.05067v1", + "title": "Deep Reinforcement Learning for Conversational AI", + "abstract": "Deep reinforcement learning is revolutionizing the artificial intelligence\nfield. Currently, it serves as a good starting point for constructing\nintelligent autonomous systems which offer a better knowledge of the visual\nworld. It is possible to scale deep reinforcement learning with the use of deep\nlearning and do amazing tasks such as use of pixels in playing video games. In\nthis paper, key concepts of deep reinforcement learning including reward\nfunction, differences between reinforcement learning and supervised learning\nand models for implementation of reinforcement are discussed. Key challenges\nrelated to the implementation of reinforcement learning in conversational AI\ndomain are identified as well as discussed in detail. Various conversational\nmodels which are based on deep reinforcement learning (as well as deep\nlearning) are also discussed. In summary, this paper discusses key aspects of\ndeep reinforcement learning which are crucial for designing an efficient\nconversational AI.", + "authors": "Mahipal Jadeja, Neelanshi Varia, Agam Shah", + "published": "2017-09-15", + "updated": "2017-09-15", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1206.3281v1", + "title": "Model-Based Bayesian Reinforcement Learning in Large Structured Domains", + "abstract": "Model-based Bayesian reinforcement learning has generated significant\ninterest in the AI community as it provides an elegant solution to the optimal\nexploration-exploitation tradeoff in classical reinforcement learning.\nUnfortunately, the applicability of this type of approach has been limited to\nsmall domains due to the high complexity of reasoning about the joint posterior\nover model parameters. In this paper, we consider the use of factored\nrepresentations combined with online planning techniques, to improve\nscalability of these methods. 
The main contribution of this paper is a Bayesian\nframework for learning the structure and parameters of a dynamical system,\nwhile also simultaneously planning a (near-)optimal sequence of actions.", + "authors": "Stephane Ross, Joelle Pineau", + "published": "2012-06-13", + "updated": "2012-06-13", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.03967v1", + "title": "A Deep Reinforcement Learning Approach for Composing Moving IoT Services", + "abstract": "We develop a novel framework for efficiently and effectively discovering\ncrowdsourced services that move in close proximity to a user over a period of\ntime. We introduce a moving crowdsourced service model which is modelled as a\nmoving region. We propose a deep reinforcement learning-based composition\napproach to select and compose moving IoT services considering quality\nparameters. Additionally, we develop a parallel flock-based service discovery\nalgorithm as a ground-truth to measure the accuracy of the proposed approach.\nThe experiments on two real-world datasets verify the effectiveness and\nefficiency of the deep reinforcement learning-based approach.", + "authors": "Azadeh Ghari Neiat, Athman Bouguettaya, Mohammed Bahutair", + "published": "2021-11-06", + "updated": "2021-11-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.13044v1", + "title": "Reinforcement Learning with Feedback-modulated TD-STDP", + "abstract": "Spiking neuron networks have been used successfully to solve simple\nreinforcement learning tasks with continuous action set applying learning rules\nbased on spike-timing-dependent plasticity (STDP). However, most of these\nmodels cannot be applied to reinforcement learning tasks with discrete action\nset since they assume that the selected action is a deterministic function of\nfiring rate of neurons, which is continuous. In this paper, we propose a new\nSTDP-based learning rule for spiking neuron networks which contains feedback\nmodulation. We show that the STDP-based learning rule can be used to solve\nreinforcement learning tasks with discrete action set at a speed similar to\nstandard reinforcement learning algorithms when applied to the CartPole and\nLunarLander tasks. Moreover, we demonstrate that the agent is unable to solve\nthese tasks if feedback modulation is omitted from the learning rule. We\nconclude that feedback modulation allows better credit assignment when only the\nunits contributing to the executed action and TD error participate in learning.", + "authors": "Stephen Chung, Robert Kozma", + "published": "2020-08-29", + "updated": "2020-08-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML", + "I.2.8" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1901.01977v1", + "title": "Accelerating Goal-Directed Reinforcement Learning by Model Characterization", + "abstract": "We propose a hybrid approach aimed at improving the sample efficiency in\ngoal-directed reinforcement learning. We do this via a two-step mechanism where\nfirstly, we approximate a model from Model-Free reinforcement learning. Then,\nwe leverage this approximate model along with a notion of reachability using\nMean First Passage Times to perform Model-Based reinforcement learning. 
Built\non such a novel observation, we design two new algorithms - Mean First Passage\nTime based Q-Learning (MFPT-Q) and Mean First Passage Time based DYNA\n(MFPT-DYNA), that have been fundamentally modified from the state-of-the-art\nreinforcement learning techniques. Preliminary results have shown that our\nhybrid approaches converge with much fewer iterations than their corresponding\nstate-of-the-art counterparts and therefore requiring much fewer samples and\nmuch fewer training trials to converge.", + "authors": "Shoubhik Debnath, Gaurav Sukhatme, Lantao Liu", + "published": "2019-01-04", + "updated": "2019-01-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2102.05612v1", + "title": "Personalization for Web-based Services using Offline Reinforcement Learning", + "abstract": "Large-scale Web-based services present opportunities for improving UI\npolicies based on observed user interactions. We address challenges of learning\nsuch policies through model-free offline Reinforcement Learning (RL) with\noff-policy training. Deployed in a production system for user authentication in\na major social network, it significantly improves long-term objectives. We\narticulate practical challenges, compare several ML techniques, provide\ninsights on training and evaluation of RL models, and discuss generalizations.", + "authors": "Pavlos Athanasios Apostolopoulos, Zehui Wang, Hanson Wang, Chad Zhou, Kittipat Virochsiri, Norm Zhou, Igor L. Markov", + "published": "2021-02-10", + "updated": "2021-02-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.HC", + "cs.SE" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2209.05530v1", + "title": "Model-based Reinforcement Learning with Multi-step Plan Value Estimation", + "abstract": "A promising way to improve the sample efficiency of reinforcement learning is\nmodel-based methods, in which many explorations and evaluations can happen in\nthe learned models to save real-world samples. However, when the learned model\nhas a non-negligible model error, sequential steps in the model are hard to be\naccurately evaluated, limiting the model's utilization. This paper proposes to\nalleviate this issue by introducing multi-step plans to replace multi-step\nactions for model-based RL. We employ the multi-step plan value estimation,\nwhich evaluates the expected discounted return after executing a sequence of\naction plans at a given state, and updates the policy by directly computing the\nmulti-step policy gradient via plan value estimation. 
The new model-based\nreinforcement learning algorithm MPPVE (Model-based Planning Policy Learning\nwith Multi-step Plan Value Estimation) shows a better utilization of the\nlearned model and achieves a better sample efficiency than state-of-the-art\nmodel-based RL approaches.", + "authors": "Haoxin Lin, Yihao Sun, Jiaji Zhang, Yang Yu", + "published": "2022-09-12", + "updated": "2022-09-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.07915v2", + "title": "Barrier Certified Safety Learning Control: When Sum-of-Square Programming Meets Reinforcement Learning", + "abstract": "Safety guarantee is essential in many engineering implementations.\nReinforcement learning provides a useful way to strengthen safety. However,\nreinforcement learning algorithms cannot completely guarantee safety over\nrealistic operations. To address this issue, this work adopts control barrier\nfunctions over reinforcement learning, and proposes a compensated algorithm to\ncompletely maintain safety. Specifically, a sum-of-squares programming has been\nexploited to search for the optimal controller, and tune the learning\nhyperparameters simultaneously. Thus, the control actions are pledged to be\nalways within the safe region. The effectiveness of proposed method is\ndemonstrated via an inverted pendulum model. Compared to quadratic programming\nbased reinforcement learning methods, our sum-of-squares programming based\nreinforcement learning has shown its superiority.", + "authors": "Hejun Huang, Zhenglong Li, Dongkun Han", + "published": "2022-06-16", + "updated": "2022-06-29", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.LG", + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2312.15385v1", + "title": "Discrete-Time Mean-Variance Strategy Based on Reinforcement Learning", + "abstract": "This paper studies a discrete-time mean-variance model based on reinforcement\nlearning. Compared with its continuous-time counterpart in \\cite{zhou2020mv},\nthe discrete-time model makes more general assumptions about the asset's return\ndistribution. Using entropy to measure the cost of exploration, we derive the\noptimal investment strategy, whose density function is also Gaussian type.\nAdditionally, we design the corresponding reinforcement learning algorithm.\nBoth simulation experiments and empirical analysis indicate that our\ndiscrete-time model exhibits better applicability when analyzing real-world\ndata than the continuous-time model.", + "authors": "Xiangyu Cui, Xun Li, Yun Shi, Si Zhao", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "q-fin.MF", + "cats": [ + "q-fin.MF", + "cs.LG", + "q-fin.PM" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1609.03348v4", + "title": "A Threshold-based Scheme for Reinforcement Learning in Neural Networks", + "abstract": "A generic and scalable Reinforcement Learning scheme for Artificial Neural\nNetworks is presented, providing a general purpose learning machine. 
By\nreference to a node threshold three features are described 1) A mechanism for\nPrimary Reinforcement, capable of solving linearly inseparable problems 2) The\nlearning scheme is extended to include a mechanism for Conditioned\nReinforcement, capable of forming long term strategy 3) The learning scheme is\nmodified to use a threshold-based deep learning algorithm, providing a robust\nand biologically inspired alternative to backpropagation. The model may be used\nfor supervised as well as unsupervised training regimes.", + "authors": "Thomas H. Ward", + "published": "2016-09-12", + "updated": "2017-01-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NE" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1810.03198v1", + "title": "Reinforcement Evolutionary Learning Method for self-learning", + "abstract": "In statistical modelling the biggest threat is concept drift which makes the\nmodel gradually showing deteriorating performance over time. There are state of\nthe art methodologies to detect the impact of concept drift, however general\nstrategy considered to overcome the issue in performance is to rebuild or\nre-calibrate the model periodically as the variable patterns for the model\nchanges significantly due to market change or consumer behavior change etc.\nQuantitative research is the most widely spread application of data science in\nMarketing or financial domain where applicability of state of the art\nreinforcement learning for auto-learning is less explored paradigm.\nReinforcement learning is heavily dependent on having a simulated environment\nwhich is majorly available for gaming or online systems, to learn from the live\nfeedback. However, there are some research happened on the area of online\nadvertisement, pricing etc where due to the nature of the online learning\nenvironment scope of reinforcement learning is explored. Our proposed solution\nis a reinforcement learning based, true self-learning algorithm which can adapt\nto the data change or concept drift and auto learn and self-calibrate for the\nnew patterns of the data solving the problem of concept drift.\n Keywords - Reinforcement learning, Genetic Algorithm, Q-learning,\nClassification modelling, CMA-ES, NES, Multi objective optimization, Concept\ndrift, Population stability index, Incremental learning, F1-measure, Predictive\nModelling, Self-learning, MCTS, AlphaGo, AlphaZero", + "authors": "Kumarjit Pathak, Jitin Kapila", + "published": "2018-10-07", + "updated": "2018-10-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NE", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.01794v1", + "title": "Imitation Game: A Model-based and Imitation Learning Deep Reinforcement Learning Hybrid", + "abstract": "Autonomous and learning systems based on Deep Reinforcement Learning have\nfirmly established themselves as a foundation for approaches to creating\nresilient and efficient Cyber-Physical Energy Systems. However, most current\napproaches suffer from two distinct problems: Modern model-free algorithms such\nas Soft Actor Critic need a high number of samples to learn a meaningful\npolicy, as well as a fallback to ward against concept drifts (e. g.,\ncatastrophic forgetting). 
In this paper, we present the work in progress\ntowards a hybrid agent architecture that combines model-based Deep\nReinforcement Learning with imitation learning to overcome both problems.", + "authors": "Eric MSP Veith, Torben Logemann, Aleksandr Berezin, Arlena Well\u00dfow, Stephan Balduin", + "published": "2024-04-02", + "updated": "2024-04-02", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2102.03022v1", + "title": "Deceptive Reinforcement Learning for Privacy-Preserving Planning", + "abstract": "In this paper, we study the problem of deceptive reinforcement learning to\npreserve the privacy of a reward function. Reinforcement learning is the\nproblem of finding a behaviour policy based on rewards received from\nexploratory behaviour. A key ingredient in reinforcement learning is a reward\nfunction, which determines how much reward (negative or positive) is given and\nwhen. However, in some situations, we may want to keep a reward function\nprivate; that is, to make it difficult for an observer to determine the reward\nfunction used. We define the problem of privacy-preserving reinforcement\nlearning, and present two models for solving it. These models are based on\ndissimulation -- a form of deception that `hides the truth'. We evaluate our\nmodels both computationally and via human behavioural experiments. Results show\nthat the resulting policies are indeed deceptive, and that participants can\ndetermine the true reward function less reliably than that of an honest agent.", + "authors": "Zhengshang Liu, Yue Yang, Tim Miller, Peta Masters", + "published": "2021-02-05", + "updated": "2021-02-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.MA" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1910.03016v4", + "title": "Is a Good Representation Sufficient for Sample Efficient Reinforcement Learning?", + "abstract": "Modern deep learning methods provide effective means to learn good\nrepresentations. However, is a good representation itself sufficient for sample\nefficient reinforcement learning? This question has largely been studied only\nwith respect to (worst-case) approximation error, in the more classical\napproximate dynamic programming literature. With regards to the statistical\nviewpoint, this question is largely unexplored, and the extant body of\nliterature mainly focuses on conditions which permit sample efficient\nreinforcement learning with little understanding of what are necessary\nconditions for efficient reinforcement learning.\n This work shows that, from the statistical viewpoint, the situation is far\nsubtler than suggested by the more traditional approximation viewpoint, where\nthe requirements on the representation that suffice for sample efficient RL are\neven more stringent. Our main results provide sharp thresholds for\nreinforcement learning methods, showing that there are hard limitations on what\nconstitutes good function approximation (in terms of the dimensionality of the\nrepresentation), where we focus on natural representational conditions relevant\nto value-based, model-based, and policy-based learning. These lower bounds\nhighlight that having a good (value-based, model-based, or policy-based)\nrepresentation in and of itself is insufficient for efficient reinforcement\nlearning, unless the quality of this approximation passes certain hard\nthresholds. 
Furthermore, our lower bounds also imply exponential separations on\nthe sample complexity between 1) value-based learning with perfect\nrepresentation and value-based learning with a good-but-not-perfect\nrepresentation, 2) value-based learning and policy-based learning, 3)\npolicy-based learning and supervised learning and 4) reinforcement learning and\nimitation learning.", + "authors": "Simon S. Du, Sham M. Kakade, Ruosong Wang, Lin F. Yang", + "published": "2019-10-07", + "updated": "2020-02-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "math.OC", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1901.11437v3", + "title": "Successor Features Combine Elements of Model-Free and Model-based Reinforcement Learning", + "abstract": "A key question in reinforcement learning is how an intelligent agent can\ngeneralize knowledge across different inputs. By generalizing across different\ninputs, information learned for one input can be immediately reused for\nimproving predictions for another input. Reusing information allows an agent to\ncompute an optimal decision-making strategy using less data. State\nrepresentation is a key element of the generalization process, compressing a\nhigh-dimensional input space into a low-dimensional latent state space. This\narticle analyzes properties of different latent state spaces, leading to new\nconnections between model-based and model-free reinforcement learning.\nSuccessor features, which predict frequencies of future observations, form a\nlink between model-based and model-free learning: Learning to predict future\nexpected reward outcomes, a key characteristic of model-based agents, is\nequivalent to learning successor features. Learning successor features is a\nform of temporal difference learning and is equivalent to learning to predict a\nsingle policy's utility, which is a characteristic of model-free agents.\nDrawing on the connection between model-based reinforcement learning and\nsuccessor features, we demonstrate that representations that are predictive of\nfuture reward outcomes generalize across variations in both transitions and\nrewards. This result extends previous work on successor features, which is\nconstrained to fixed transitions and assumes re-learning of the transferred\nstate representation.", + "authors": "Lucas Lehnert, Michael L. Littman", + "published": "2019-01-31", + "updated": "2020-10-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2303.13489v2", + "title": "Boosting Reinforcement Learning and Planning with Demonstrations: A Survey", + "abstract": "Although reinforcement learning has seen tremendous success recently, this\nkind of trial-and-error learning can be impractical or inefficient in complex\nenvironments. The use of demonstrations, on the other hand, enables agents to\nbenefit from expert knowledge rather than having to discover the best action to\ntake through exploration. In this survey, we discuss the advantages of using\ndemonstrations in sequential decision making, various ways to apply\ndemonstrations in learning-based decision making paradigms (for example,\nreinforcement learning and planning in the learned models), and how to collect\nthe demonstrations in various scenarios. 
Additionally, we exemplify a practical\npipeline for generating and utilizing demonstrations in the recently proposed\nManiSkill robot learning benchmark.", + "authors": "Tongzhou Mu, Hao Su", + "published": "2023-03-23", + "updated": "2023-03-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1611.00862v1", + "title": "Quantile Reinforcement Learning", + "abstract": "In reinforcement learning, the standard criterion to evaluate policies in a\nstate is the expectation of (discounted) sum of rewards. However, this\ncriterion may not always be suitable, we consider an alternative criterion\nbased on the notion of quantiles. In the case of episodic reinforcement\nlearning problems, we propose an algorithm based on stochastic approximation\nwith two timescales. We evaluate our proposition on a simple model of the TV\nshow, Who wants to be a millionaire.", + "authors": "Hugo Gilbert, Paul Weng", + "published": "2016-11-03", + "updated": "2016-11-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.12516v2", + "title": "Prioritized Experience-based Reinforcement Learning with Human Guidance for Autonomous Driving", + "abstract": "Reinforcement learning (RL) requires skillful definition and remarkable\ncomputational efforts to solve optimization and control problems, which could\nimpair its prospect. Introducing human guidance into reinforcement learning is\na promising way to improve learning performance. In this paper, a comprehensive\nhuman guidance-based reinforcement learning framework is established. A novel\nprioritized experience replay mechanism that adapts to human guidance in the\nreinforcement learning process is proposed to boost the efficiency and\nperformance of the reinforcement learning algorithm. To relieve the heavy\nworkload on human participants, a behavior model is established based on an\nincremental online learning method to mimic human actions. We design two\nchallenging autonomous driving tasks for evaluating the proposed algorithm.\nExperiments are conducted to access the training and testing performance and\nlearning mechanism of the proposed algorithm. Comparative results against the\nstate-of-the-art methods suggest the advantages of our algorithm in terms of\nlearning efficiency, performance, and robustness.", + "authors": "Jingda Wu, Zhiyu Huang, Wenhui Huang, Chen Lv", + "published": "2021-09-26", + "updated": "2022-11-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1810.01112v1", + "title": "The Dreaming Variational Autoencoder for Reinforcement Learning Environments", + "abstract": "Reinforcement learning has shown great potential in generalizing over raw\nsensory data using only a single neural network for value optimization. There\nare several challenges in the current state-of-the-art reinforcement learning\nalgorithms that prevent them from converging towards the global optima. It is\nlikely that the solution to these problems lies in short- and long-term\nplanning, exploration and memory management for reinforcement learning\nalgorithms. 
Games are often used to benchmark reinforcement learning algorithms\nas they provide a flexible, reproducible, and easy to control environment.\nRegardless, few games feature a state-space where results in exploration,\nmemory, and planning are easily perceived. This paper presents The Dreaming\nVariational Autoencoder (DVAE), a neural network based generative modeling\narchitecture for exploration in environments with sparse feedback. We further\npresent Deep Maze, a novel and flexible maze engine that challenges DVAE in\npartial and fully-observable state-spaces, long-horizon tasks, and\ndeterministic and stochastic problems. We show initial findings and encourage\nfurther work in reinforcement learning driven by generative exploration.", + "authors": "Per-Arne Andersen, Morten Goodwin, Ole-Christoffer Granmo", + "published": "2018-10-02", + "updated": "2018-10-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1406.1853v2", + "title": "Model-based Reinforcement Learning and the Eluder Dimension", + "abstract": "We consider the problem of learning to optimize an unknown Markov decision\nprocess (MDP). We show that, if the MDP can be parameterized within some known\nfunction class, we can obtain regret bounds that scale with the dimensionality,\nrather than cardinality, of the system. We characterize this dependence\nexplicitly as $\\tilde{O}(\\sqrt{d_K d_E T})$ where $T$ is time elapsed, $d_K$ is\nthe Kolmogorov dimension and $d_E$ is the \\emph{eluder dimension}. These\nrepresent the first unified regret bounds for model-based reinforcement\nlearning and provide state of the art guarantees in several important settings.\nMoreover, we present a simple and computationally efficient algorithm\n\\emph{posterior sampling for reinforcement learning} (PSRL) that satisfies\nthese bounds.", + "authors": "Ian Osband, Benjamin Van Roy", + "published": "2014-06-07", + "updated": "2014-10-31", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2305.03360v1", + "title": "A Survey on Offline Model-Based Reinforcement Learning", + "abstract": "Model-based approaches are becoming increasingly popular in the field of\noffline reinforcement learning, with high potential in real-world applications\ndue to the model's capability of thoroughly utilizing the large historical\ndatasets available with supervised learning techniques. This paper presents a\nliterature review of recent work in offline model-based reinforcement learning,\na field that utilizes model-based approaches in offline reinforcement learning.\nThe survey provides a brief overview of the concepts and recent developments in\nboth offline reinforcement learning and model-based reinforcement learning, and\ndiscuss the intersection of the two fields. We then presents key relevant\npapers in the field of offline model-based reinforcement learning and discuss\ntheir methods, particularly their approaches in solving the issue of\ndistributional shift, the main problem faced by all current offline model-based\nreinforcement learning methods. 
We further discuss key challenges faced by the\nfield, and suggest possible directions for future work.", + "authors": "Haoyang He", + "published": "2023-05-05", + "updated": "2023-05-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SY", + "eess.SY", + "I.2.6; I.2.8" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.02104v2", + "title": "Model-Based Episodic Memory Induces Dynamic Hybrid Controls", + "abstract": "Episodic control enables sample efficiency in reinforcement learning by\nrecalling past experiences from an episodic memory. We propose a new\nmodel-based episodic memory of trajectories addressing current limitations of\nepisodic control. Our memory estimates trajectory values, guiding the agent\ntowards good policies. Built upon the memory, we construct a complementary\nlearning model via a dynamic hybrid control unifying model-based, episodic and\nhabitual learning into a single architecture. Experiments demonstrate that our\nmodel allows significantly faster and better learning than other strong\nreinforcement learning agents across a variety of environments including\nstochastic and non-Markovian settings.", + "authors": "Hung Le, Thommen Karimpanal George, Majid Abdolshah, Truyen Tran, Svetha Venkatesh", + "published": "2021-11-03", + "updated": "2021-11-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.02380v2", + "title": "Adaptive Rollout Length for Model-Based RL Using Model-Free Deep RL", + "abstract": "Model-based reinforcement learning promises to learn an optimal policy from\nfewer interactions with the environment compared to model-free reinforcement\nlearning by learning an intermediate model of the environment in order to\npredict future interactions. When predicting a sequence of interactions, the\nrollout length, which limits the prediction horizon, is a critical\nhyperparameter as accuracy of the predictions diminishes in the regions that\nare further away from real experience. As a result, with a longer rollout\nlength, an overall worse policy is learned in the long run. Thus, the\nhyperparameter provides a trade-off between quality and efficiency. In this\nwork, we frame the problem of tuning the rollout length as a meta-level\nsequential decision-making problem that optimizes the final policy learned by\nmodel-based reinforcement learning given a fixed budget of environment\ninteractions by adapting the hyperparameter dynamically based on feedback from\nthe learning process, such as accuracy of the model and the remaining budget of\ninteractions. We use model-free deep reinforcement learning to solve the\nmeta-level decision problem and demonstrate that our approach outperforms\ncommon heuristic baselines on two well-known reinforcement learning\nenvironments.", + "authors": "Abhinav Bhatia, Philip S. 
Thomas, Shlomo Zilberstein", + "published": "2022-06-06", + "updated": "2022-06-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2010.04816v1", + "title": "Characterizing Policy Divergence for Personalized Meta-Reinforcement Learning", + "abstract": "Despite ample motivation from costly exploration and limited trajectory data,\nrapidly adapting to new environments with few-shot reinforcement learning (RL)\ncan remain a challenging task, especially with respect to personalized\nsettings. Here, we consider the problem of recommending optimal policies to a\nset of multiple entities each with potentially different characteristics, such\nthat individual entities may parameterize distinct environments with unique\ntransition dynamics. Inspired by existing literature in meta-learning, we\nextend previous work by focusing on the notion that certain environments are\nmore similar to each other than others in personalized settings, and propose a\nmodel-free meta-learning algorithm that prioritizes past experiences by\nrelevance during gradient-based adaptation. Our algorithm involves\ncharacterizing past policy divergence through methods in inverse reinforcement\nlearning, and we illustrate how such metrics are able to effectively\ndistinguish past policy parameters by the environment they were deployed in,\nleading to more effective fast adaptation during test time. To study\npersonalization more effectively we introduce a navigation testbed to\nspecifically incorporate environment diversity across training episodes, and\ndemonstrate that our approach outperforms meta-learning alternatives with\nrespect to few-shot reinforcement learning in personalized settings.", + "authors": "Michael Zhang", + "published": "2020-10-09", + "updated": "2020-10-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1506.00685v1", + "title": "Model-based reinforcement learning for infinite-horizon approximate optimal tracking", + "abstract": "This paper provides an approximate online adaptive solution to the\ninfinite-horizon optimal tracking problem for control-affine continuous-time\nnonlinear systems with unknown drift dynamics. Model-based reinforcement\nlearning is used to relax the persistence of excitation condition. Model-based\nreinforcement learning is implemented using a concurrent learning-based system\nidentifier to simulate experience by evaluating the Bellman error over\nunexplored areas of the state space. Tracking of the desired trajectory and\nconvergence of the developed policy to a neighborhood of the optimal policy are\nestablished via Lyapunov-based stability analysis. Simulation results\ndemonstrate the effectiveness of the developed technique.", + "authors": "Rushikesh Kamalapurkar, Lindsey Andrews, Patrick Walters, Warren E. 
Dixon", + "published": "2015-06-01", + "updated": "2015-06-01", + "primary_cat": "cs.SY", + "cats": [ + "cs.SY", + "math.OC" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.07240v1", + "title": "Model-Reference Reinforcement Learning for Collision-Free Tracking Control of Autonomous Surface Vehicles", + "abstract": "This paper presents a novel model-reference reinforcement learning algorithm\nfor the intelligent tracking control of uncertain autonomous surface vehicles\nwith collision avoidance. The proposed control algorithm combines a\nconventional control method with reinforcement learning to enhance control\naccuracy and intelligence. In the proposed control design, a nominal system is\nconsidered for the design of a baseline tracking controller using a\nconventional control approach. The nominal system also defines the desired\nbehaviour of uncertain autonomous surface vehicles in an obstacle-free\nenvironment. Thanks to reinforcement learning, the overall tracking controller\nis capable of compensating for model uncertainties and achieving collision\navoidance at the same time in environments with obstacles. In comparison to\ntraditional deep reinforcement learning methods, our proposed learning-based\ncontrol can provide stability guarantees and better sample efficiency. We\ndemonstrate the performance of the new algorithm using an example of autonomous\nsurface vehicles.", + "authors": "Qingrui Zhang, Wei Pan, Vasso Reppa", + "published": "2020-08-17", + "updated": "2020-08-17", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.LG", + "cs.RO", + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.16348v2", + "title": "Rating-based Reinforcement Learning", + "abstract": "This paper develops a novel rating-based reinforcement learning approach that\nuses human ratings to obtain human guidance in reinforcement learning.\nDifferent from the existing preference-based and ranking-based reinforcement\nlearning paradigms, based on human relative preferences over sample pairs, the\nproposed rating-based reinforcement learning approach is based on human\nevaluation of individual trajectories without relative comparisons between\nsample pairs. The rating-based reinforcement learning approach builds on a new\nprediction model for human ratings and a novel multi-class loss function. We\nconduct several experimental studies based on synthetic ratings and real human\nratings to evaluate the effectiveness and benefits of the new rating-based\nreinforcement learning approach.", + "authors": "Devin White, Mingkang Wu, Ellen Novoseller, Vernon J. Lawhern, Nicholas Waytowich, Yongcan Cao", + "published": "2023-07-30", + "updated": "2024-01-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2010.13529v2", + "title": "Lyapunov-Based Reinforcement Learning State Estimator", + "abstract": "In this paper, we consider the state estimation problem for nonlinear\nstochastic discrete-time systems. We combine Lyapunov's method in control\ntheory and deep reinforcement learning to design the state estimator. We\ntheoretically prove the convergence of the bounded estimate error solely using\nthe data simulated from the model. 
An actor-critic reinforcement learning\nalgorithm is proposed to learn the state estimator approximated by a deep\nneural network. The convergence of the algorithm is analysed. The proposed\nLyapunov-based reinforcement learning state estimator is compared with a number\nof existing nonlinear filtering methods through Monte Carlo simulations,\nshowing its advantage in terms of estimate convergence even under some system\nuncertainties such as covariance shift in system noise and randomly missing\nmeasurements. To the best of our knowledge, this is the first reinforcement\nlearning based nonlinear state estimator with bounded estimate error\nperformance guarantee.", + "authors": "Liang Hu, Chengwei Wu, Wei Pan", + "published": "2020-10-26", + "updated": "2021-01-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO", + "cs.SY", + "eess.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2301.11520v3", + "title": "SNeRL: Semantic-aware Neural Radiance Fields for Reinforcement Learning", + "abstract": "As previous representations for reinforcement learning cannot effectively\nincorporate a human-intuitive understanding of the 3D environment, they usually\nsuffer from sub-optimal performances. In this paper, we present Semantic-aware\nNeural Radiance Fields for Reinforcement Learning (SNeRL), which jointly\noptimizes semantic-aware neural radiance fields (NeRF) with a convolutional\nencoder to learn 3D-aware neural implicit representation from multi-view\nimages. We introduce 3D semantic and distilled feature fields in parallel to\nthe RGB radiance fields in NeRF to learn semantic and object-centric\nrepresentation for reinforcement learning. SNeRL outperforms not only previous\npixel-based representations but also recent 3D-aware representations both in\nmodel-free and model-based reinforcement learning.", + "authors": "Dongseok Shim, Seungjae Lee, H. Jin Kim", + "published": "2023-01-27", + "updated": "2023-05-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2009.14365v1", + "title": "Toolpath design for additive manufacturing using deep reinforcement learning", + "abstract": "Toolpath optimization of metal-based additive manufacturing processes is\ncurrently hampered by the high-dimensionality of its design space. In this\nwork, a reinforcement learning platform is proposed that dynamically learns\ntoolpath strategies to build an arbitrary part. To this end, three prominent\nmodel-free reinforcement learning formulations are investigated to design\nadditive manufacturing toolpaths and demonstrated for two cases of dense and\nsparse reward structures. The results indicate that this learning-based\ntoolpath design approach achieves high scores, especially when a dense reward\nstructure is present.", + "authors": "Mojtaba Mozaffar, Ablodghani Ebrahimi, Jian Cao", + "published": "2020-09-30", + "updated": "2020-09-30", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1705.03562v1", + "title": "Deep Episodic Value Iteration for Model-based Meta-Reinforcement Learning", + "abstract": "We present a new deep meta reinforcement learner, which we call Deep Episodic\nValue Iteration (DEVI). 
DEVI uses a deep neural network to learn a similarity\nmetric for a non-parametric model-based reinforcement learning algorithm. Our\nmodel is trained end-to-end via back-propagation. Despite being trained using\nthe model-free Q-learning objective, we show that DEVI's model-based internal\nstructure provides `one-shot' transfer to changes in reward and transition\nstructure, even for tasks with very high-dimensional state spaces.", + "authors": "Steven Stenberg Hansen", + "published": "2017-05-09", + "updated": "2017-05-09", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.AI", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1906.08312v1", + "title": "Calibrated Model-Based Deep Reinforcement Learning", + "abstract": "Estimates of predictive uncertainty are important for accurate model-based\nplanning and reinforcement learning. However, predictive\nuncertainties---especially ones derived from modern deep learning systems---can\nbe inaccurate and impose a bottleneck on performance. This paper explores which\nuncertainties are needed for model-based reinforcement learning and argues that\ngood uncertainties must be calibrated, i.e. their probabilities should match\nempirical frequencies of predicted events. We describe a simple way to augment\nany model-based reinforcement learning agent with a calibrated model and show\nthat doing so consistently improves planning, sample complexity, and\nexploration. On the \\textsc{HalfCheetah} MuJoCo task, our system achieves\nstate-of-the-art performance using 50\\% fewer samples than the current leading\napproach. Our findings suggest that calibration can improve the performance of\nmodel-based reinforcement learning with minimal computational and\nimplementation overhead.", + "authors": "Ali Malik, Volodymyr Kuleshov, Jiaming Song, Danny Nemer, Harlan Seymour, Stefano Ermon", + "published": "2019-06-19", + "updated": "2019-06-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2012.07369v2", + "title": "Learning for MPC with Stability & Safety Guarantees", + "abstract": "The combination of learning methods with Model Predictive Control (MPC) has\nattracted a significant amount of attention in the recent literature. The hope\nof this combination is to reduce the reliance of MPC schemes on accurate\nmodels, and to tap into the fast developing machine learning and reinforcement\nlearning tools to exploit the growing amount of data available for many\nsystems. In particular, the combination of reinforcement learning and MPC has\nbeen proposed as a viable and theoretically justified approach to introduce\nexplainable, safe and stable policies in reinforcement learning. However, a\nformal theory detailing how the safety and stability of an MPC-based policy can\nbe maintained through the parameter updates delivered by the learning tools is\nstill lacking. This paper addresses this gap. The theory is developed for the\ngeneric Robust MPC case, and applied in simulation in the robust tube-based\nlinear MPC case, where the theory is fairly easy to deploy in practice. 
The\npaper focuses on Reinforcement Learning as a learning tool, but it applies to\nany learning method that updates the MPC parameters online.", + "authors": "S\u00e9bastien Gros, Mario Zanon", + "published": "2020-12-14", + "updated": "2022-07-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SY", + "eess.SY", + "math.OC" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.02429v1", + "title": "AD4RL: Autonomous Driving Benchmarks for Offline Reinforcement Learning with Value-based Dataset", + "abstract": "Offline reinforcement learning has emerged as a promising technology by\nenhancing its practicality through the use of pre-collected large datasets.\nDespite its practical benefits, most algorithm development research in offline\nreinforcement learning still relies on game tasks with synthetic datasets. To\naddress such limitations, this paper provides autonomous driving datasets and\nbenchmarks for offline reinforcement learning research. We provide 19 datasets,\nincluding real-world human driver's datasets, and seven popular offline\nreinforcement learning algorithms in three realistic driving scenarios. We also\nprovide a unified decision-making process model that can operate effectively\nacross different scenarios, serving as a reference framework in algorithm\ndesign. Our research lays the groundwork for further collaborations in the\ncommunity to explore practical aspects of existing reinforcement learning\nmethods. Dataset and codes can be found in https://sites.google.com/view/ad4rl.", + "authors": "Dongsu Lee, Chanin Eom, Minhae Kwon", + "published": "2024-04-03", + "updated": "2024-04-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2010.11738v1", + "title": "Optimising Stochastic Routing for Taxi Fleets with Model Enhanced Reinforcement Learning", + "abstract": "The future of mobility-as-a-Service (Maas)should embrace an integrated system\nof ride-hailing, street-hailing and ride-sharing with optimised intelligent\nvehicle routing in response to a real-time, stochastic demand pattern. We aim\nto optimise routing policies for a large fleet of vehicles for street-hailing\nservices, given a stochastic demand pattern in small to medium-sized road\nnetworks. A model-based dispatch algorithm, a high performance model-free\nreinforcement learning based algorithm and a novel hybrid algorithm combining\nthe benefits of both the top-down approach and the model-free reinforcement\nlearning have been proposed to route the \\emph{vacant} vehicles. We design our\nreinforcement learning based routing algorithm using proximal policy\noptimisation and combined intrinsic and extrinsic rewards to strike a balance\nbetween exploration and exploitation. 
Using a large-scale agent-based\nmicroscopic simulation platform to evaluate our proposed algorithms, our\nmodel-free reinforcement learning and hybrid algorithm show excellent\nperformance on both artificial road network and community-based Singapore road\nnetwork with empirical demands, and our hybrid algorithm can significantly\naccelerate the model-free learner in the process of learning.", + "authors": "Shen Ren, Qianxiao Li, Liye Zhang, Zheng Qin, Bo Yang", + "published": "2020-10-22", + "updated": "2020-10-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "nlin.AO", + "physics.soc-ph" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2204.01409v1", + "title": "Safe Controller for Output Feedback Linear Systems using Model-Based Reinforcement Learning", + "abstract": "The objective of this research is to enable safety-critical systems to\nsimultaneously learn and execute optimal control policies in a safe manner to\nachieve complex autonomy. Learning optimal policies via trial and error, i.e.,\ntraditional reinforcement learning, is difficult to implement in\nsafety-critical systems, particularly when task restarts are unavailable. Safe\nmodel-based reinforcement learning techniques based on a barrier transformation\nhave recently been developed to address this problem. However, these methods\nrely on full state feedback, limiting their usability in a real-world\nenvironment. In this work, an output-feedback safe model-based reinforcement\nlearning technique based on a novel barrier-aware dynamic state estimator has\nbeen designed to address this issue. The developed approach facilitates\nsimultaneous learning and execution of safe control policies for\nsafety-critical linear systems. Simulation results indicate that barrier\ntransformation is an effective approach to achieve online reinforcement\nlearning in safety-critical systems using output feedback.", + "authors": "S M Nahid Mahmud, Moad Abudia, Scott A Nivison, Zachary I. Bell, Rushikesh Kamalapurkar", + "published": "2022-04-04", + "updated": "2022-04-04", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1901.02219v1", + "title": "Uncertainty-Based Out-of-Distribution Detection in Deep Reinforcement Learning", + "abstract": "We consider the problem of detecting out-of-distribution (OOD) samples in\ndeep reinforcement learning. In a value based reinforcement learning setting,\nwe propose to use uncertainty estimation techniques directly on the agent's\nvalue estimating neural network to detect OOD samples. The focus of our work\nlies in analyzing the suitability of approximate Bayesian inference methods and\nrelated ensembling techniques that generate uncertainty estimates. Although\nprior work has shown that dropout-based variational inference techniques and\nbootstrap-based approaches can be used to model epistemic uncertainty, the\nsuitability for detecting OOD samples in deep reinforcement learning remains an\nopen question. Our results show that uncertainty estimation can be used to\ndifferentiate in- from out-of-distribution samples. 
Over the complete training\nprocess of the reinforcement learning agents, bootstrap-based approaches tend\nto produce more reliable epistemic uncertainty estimates, when compared to\ndropout-based approaches.", + "authors": "Andreas Sedlmeier, Thomas Gabor, Thomy Phan, Lenz Belzner, Claudia Linnhoff-Popien", + "published": "2019-01-08", + "updated": "2019-01-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.15175v1", + "title": "Coordinated Reinforcement Learning for Optimizing Mobile Networks", + "abstract": "Mobile networks are composed of many base stations and for each of them many\nparameters must be optimized to provide good services. Automatically and\ndynamically optimizing all these entities is challenging as they are sensitive\nto variations in the environment and can affect each other through\ninterferences. Reinforcement learning (RL) algorithms are good candidates to\nautomatically learn base station configuration strategies from incoming data\nbut they are often hard to scale to many agents. In this work, we demonstrate\nhow to use coordination graphs and reinforcement learning in a complex\napplication involving hundreds of cooperating agents. We show how mobile\nnetworks can be modeled using coordination graphs and how network optimization\nproblems can be solved efficiently using multi- agent reinforcement learning.\nThe graph structure occurs naturally from expert knowledge about the network\nand allows to explicitly learn coordinating behaviors between the antennas\nthrough edge value functions represented by neural networks. We show\nempirically that coordinated reinforcement learning outperforms other methods.\nThe use of local RL updates and parameter sharing can handle a large number of\nagents without sacrificing coordination which makes it well suited to optimize\nthe ever denser networks brought by 5G and beyond.", + "authors": "Maxime Bouton, Hasan Farooq, Julien Forgeat, Shruti Bothe, Meral Shirazipour, Per Karlsson", + "published": "2021-09-30", + "updated": "2021-09-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2010.12142v1", + "title": "Bridging Imagination and Reality for Model-Based Deep Reinforcement Learning", + "abstract": "Sample efficiency has been one of the major challenges for deep reinforcement\nlearning. Recently, model-based reinforcement learning has been proposed to\naddress this challenge by performing planning on imaginary trajectories with a\nlearned world model. However, world model learning may suffer from overfitting\nto training trajectories, and thus model-based value estimation and policy\nsearch will be pone to be sucked in an inferior local policy. In this paper, we\npropose a novel model-based reinforcement learning algorithm, called BrIdging\nReality and Dream (BIRD). It maximizes the mutual information between imaginary\nand real trajectories so that the policy improvement learned from imaginary\ntrajectories can be easily generalized to real trajectories. 
We demonstrate\nthat our approach improves sample efficiency of model-based planning, and\nachieves state-of-the-art performance on challenging visual control benchmarks.", + "authors": "Guangxiang Zhu, Minghao Zhang, Honglak Lee, Chongjie Zhang", + "published": "2020-10-23", + "updated": "2020-10-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2308.11336v1", + "title": "On the Opportunities and Challenges of Offline Reinforcement Learning for Recommender Systems", + "abstract": "Reinforcement learning serves as a potent tool for modeling dynamic user\ninterests within recommender systems, garnering increasing research attention\nof late. However, a significant drawback persists: its poor data efficiency,\nstemming from its interactive nature. The training of reinforcement\nlearning-based recommender systems demands expensive online interactions to\namass adequate trajectories, essential for agents to learn user preferences.\nThis inefficiency renders reinforcement learning-based recommender systems a\nformidable undertaking, necessitating the exploration of potential solutions.\nRecent strides in offline reinforcement learning present a new perspective.\nOffline reinforcement learning empowers agents to glean insights from offline\ndatasets and deploy learned policies in online settings. Given that recommender\nsystems possess extensive offline datasets, the framework of offline\nreinforcement learning aligns seamlessly. Despite being a burgeoning field,\nworks centered on recommender systems utilizing offline reinforcement learning\nremain limited. This survey aims to introduce and delve into offline\nreinforcement learning within recommender systems, offering an inclusive review\nof existing literature in this domain. Furthermore, we strive to underscore\nprevalent challenges, opportunities, and future pathways, poised to propel\nresearch in this evolving field.", + "authors": "Xiaocong Chen, Siyu Wang, Julian McAuley, Dietmar Jannach, Lina Yao", + "published": "2023-08-22", + "updated": "2023-08-22", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2304.00006v1", + "title": "Bi-directional personalization reinforcement learning-based architecture with active learning using a multi-model data service for the travel nursing industry", + "abstract": "The challenges of using inadequate online recruitment systems can be\naddressed with machine learning and software engineering techniques.\nBi-directional personalization reinforcement learning-based architecture with\nactive learning can get recruiters to recommend qualified applicants and also\nenable applicants to receive personalized job recommendations. This paper\nfocuses on how machine learning techniques can enhance the recruitment process\nin the travel nursing industry by helping speed up data acquisition using a\nmulti-model data service and then providing personalized recommendations using\nbi-directional reinforcement learning with active learning. This need was\nespecially evident when trying to respond to the overwhelming needs of\nhealthcare facilities during the COVID-19 pandemic. The need for traveling\nnurses and other healthcare professionals was more evident during the lockdown\nperiod. 
A data service was architected for job feed processing using an\norchestration of natural language processing (NLP) models that synthesize\njob-related data into a database efficiently and accurately. The multi-model\ndata service provided the data necessary to develop a bi-directional\npersonalization system using reinforcement learning with active learning that\ncould recommend travel nurses and healthcare professionals to recruiters and\nprovide job recommendations to applicants using an internally developed smart\nmatch score as a basis. The bi-directional personalization reinforcement\nlearning-based architecture with active learning combines two personalization\nsystems - one that runs forward to recommend qualified candidates for jobs and\nanother that runs backward and recommends jobs for applicants.", + "authors": "Ezana N. Beyenne", + "published": "2023-03-14", + "updated": "2023-03-14", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.LG", + "I.2" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.01659v1", + "title": "Reinforcement Learning for Battery Energy Storage Dispatch augmented with Model-based Optimizer", + "abstract": "Reinforcement learning has been found useful in solving optimal power flow\n(OPF) problems in electric power distribution systems. However, the use of\nlargely model-free reinforcement learning algorithms that completely ignore the\nphysics-based modeling of the power grid compromises the optimizer performance\nand poses scalability challenges. This paper proposes a novel approach to\nsynergistically combine the physics-based models with learning-based algorithms\nusing imitation learning to solve distribution-level OPF problems.\nSpecifically, we propose imitation learning based improvements in deep\nreinforcement learning (DRL) methods to solve the OPF problem for a specific\ncase of battery storage dispatch in the power distribution systems. The\nproposed imitation learning algorithm uses the approximate optimal solutions\nobtained from a linearized model-based OPF solver to provide a good initial\npolicy for the DRL algorithms while improving the training efficiency. The\neffectiveness of the proposed approach is demonstrated using IEEE 34-bus and\n123-bus distribution feeders with numerous distribution-level battery storage\nsystems.", + "authors": "Gayathri Krishnamoorthy, Anamika Dubey", + "published": "2021-09-02", + "updated": "2021-09-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SY", + "eess.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.07789v1", + "title": "Safe Reinforcement Learning by Imagining the Near Future", + "abstract": "Safe reinforcement learning is a promising path toward applying reinforcement\nlearning algorithms to real-world problems, where suboptimal behaviors may lead\nto actual negative consequences. In this work, we focus on the setting where\nunsafe states can be avoided by planning ahead a short time into the future. In\nthis setting, a model-based agent with a sufficiently accurate model can avoid\nunsafe states. We devise a model-based algorithm that heavily penalizes unsafe\ntrajectories, and derive guarantees that our algorithm can avoid unsafe states\nunder certain assumptions. 
Experiments demonstrate that our algorithm can\nachieve competitive rewards with fewer safety violations in several continuous\ncontrol tasks.", + "authors": "Garrett Thomas, Yuping Luo, Tengyu Ma", + "published": "2022-02-15", + "updated": "2022-02-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.09064v2", + "title": "Can Interpretable Reinforcement Learning Manage Prosperity Your Way?", + "abstract": "Personalisation of products and services is fast becoming the driver of\nsuccess in banking and commerce. Machine learning holds the promise of gaining\na deeper understanding of and tailoring to customers' needs and preferences.\nWhereas traditional solutions to financial decision problems frequently rely on\nmodel assumptions, reinforcement learning is able to exploit large amounts of\ndata to improve customer modelling and decision-making in complex financial\nenvironments with fewer assumptions. Model explainability and interpretability\npresent challenges from a regulatory perspective which demands transparency for\nacceptance; they also offer the opportunity for improved insight into and\nunderstanding of customers. Post-hoc approaches are typically used for\nexplaining pretrained reinforcement learning models. Based on our previous\nmodeling of customer spending behaviour, we adapt our recent reinforcement\nlearning algorithm that intrinsically characterizes desirable behaviours and we\ntransition to the problem of asset management. We train inherently\ninterpretable reinforcement learning agents to give investment advice that is\naligned with prototype financial personality traits which are combined to make\na final recommendation. We observe that the trained agents' advice adheres to\ntheir intended characteristics, they learn the value of compound growth, and,\nwithout any explicit reference, the notion of risk as well as improved policy\nconvergence.", + "authors": "Charl Maree, Christian Omlin", + "published": "2022-02-18", + "updated": "2022-06-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2209.06604v1", + "title": "Learning state correspondence of reinforcement learning tasks for knowledge transfer", + "abstract": "Deep reinforcement learning has shown an ability to achieve super-human\nperformance in solving complex reinforcement learning (RL) tasks only from\nraw-pixels. However, it fails to reuse knowledge from previously learnt tasks\nto solve new, unseen ones. Generalizing and reusing knowledge are the\nfundamental requirements for creating a truly intelligent agent. This work\nproposes a general method for one-to-one transfer learning based on generative\nadversarial network model tailored to RL task.", + "authors": "Marko Ruman, Tatiana V. Guy", + "published": "2022-09-14", + "updated": "2022-09-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1806.01265v2", + "title": "Equivalence Between Wasserstein and Value-Aware Loss for Model-based Reinforcement Learning", + "abstract": "Learning a generative model is a key component of model-based reinforcement\nlearning. Though learning a good model in the tabular setting is a simple task,\nlearning a useful model in the approximate setting is challenging. 
In this\ncontext, an important question is the loss function used for model learning as\nvarying the loss function can have a remarkable impact on effectiveness of\nplanning. Recently Farahmand et al. (2017) proposed a value-aware model\nlearning (VAML) objective that captures the structure of value function during\nmodel learning. Using tools from Asadi et al. (2018), we show that minimizing\nthe VAML objective is in fact equivalent to minimizing the Wasserstein metric.\nThis equivalence improves our understanding of value-aware models, and also\ncreates a theoretical foundation for applications of Wasserstein in model-based\nreinforcement~learning.", + "authors": "Kavosh Asadi, Evan Cater, Dipendra Misra, Michael L. Littman", + "published": "2018-06-01", + "updated": "2018-07-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.00766v1", + "title": "Tracking the Race Between Deep Reinforcement Learning and Imitation Learning -- Extended Version", + "abstract": "Learning-based approaches for solving large sequential decision making\nproblems have become popular in recent years. The resulting agents perform\ndifferently and their characteristics depend on those of the underlying\nlearning approach. Here, we consider a benchmark planning problem from the\nreinforcement learning domain, the Racetrack, to investigate the properties of\nagents derived from different deep (reinforcement) learning approaches. We\ncompare the performance of deep supervised learning, in particular imitation\nlearning, to reinforcement learning for the Racetrack model. We find that\nimitation learning yields agents that follow more risky paths. In contrast, the\ndecisions of deep reinforcement learning are more foresighted, i.e., avoid\nstates in which fatal decisions are more likely. Our evaluations show that for\nthis sequential decision making problem, deep reinforcement learning performs\nbest in many aspects even though for imitation learning optimal decisions are\nconsidered.", + "authors": "Timo P. Gros, Daniel H\u00f6ller, J\u00f6rg Hoffmann, Verena Wolf", + "published": "2020-08-03", + "updated": "2020-08-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.05546v2", + "title": "Multi-Agent Quantum Reinforcement Learning using Evolutionary Optimization", + "abstract": "Multi-Agent Reinforcement Learning is becoming increasingly more important in\ntimes of autonomous driving and other smart industrial applications.\nSimultaneously a promising new approach to Reinforcement Learning arises using\nthe inherent properties of quantum mechanics, reducing the trainable parameters\nof a model significantly. However, gradient-based Multi-Agent Quantum\nReinforcement Learning methods often have to struggle with barren plateaus,\nholding them back from matching the performance of classical approaches. We\nbuild upon an existing approach for gradient free Quantum Reinforcement\nLearning and propose three genetic variations with Variational Quantum Circuits\nfor Multi-Agent Reinforcement Learning using evolutionary optimization. We\nevaluate our genetic variations in the Coin Game environment and also compare\nthem to classical approaches. 
We showed that our Variational Quantum Circuit\napproaches perform significantly better compared to a neural network with a\nsimilar amount of trainable parameters. Compared to the larger neural network,\nour approaches archive similar results using $97.88\\%$ less parameters.", + "authors": "Michael K\u00f6lle, Felix Topp, Thomy Phan, Philipp Altmann, Jonas N\u00fc\u00dflein, Claudia Linnhoff-Popien", + "published": "2023-11-09", + "updated": "2024-01-13", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph", + "cs.AI", + "cs.MA" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2006.07178v2", + "title": "Meta-Reinforcement Learning Robust to Distributional Shift via Model Identification and Experience Relabeling", + "abstract": "Reinforcement learning algorithms can acquire policies for complex tasks\nautonomously. However, the number of samples required to learn a diverse set of\nskills can be prohibitively large. While meta-reinforcement learning methods\nhave enabled agents to leverage prior experience to adapt quickly to new tasks,\ntheir performance depends crucially on how close the new task is to the\npreviously experienced tasks. Current approaches are either not able to\nextrapolate well, or can do so at the expense of requiring extremely large\namounts of data for on-policy meta-training. In this work, we present model\nidentification and experience relabeling (MIER), a meta-reinforcement learning\nalgorithm that is both efficient and extrapolates well when faced with\nout-of-distribution tasks at test time. Our method is based on a simple\ninsight: we recognize that dynamics models can be adapted efficiently and\nconsistently with off-policy data, more easily than policies and value\nfunctions. These dynamics models can then be used to continue training policies\nand value functions for out-of-distribution tasks without using\nmeta-reinforcement learning at all, by generating synthetic experience for the\nnew task.", + "authors": "Russell Mendonca, Xinyang Geng, Chelsea Finn, Sergey Levine", + "published": "2020-06-12", + "updated": "2020-06-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1910.11914v3", + "title": "On the convergence of projective-simulation-based reinforcement learning in Markov decision processes", + "abstract": "In recent years, the interest in leveraging quantum effects for enhancing\nmachine learning tasks has significantly increased. Many algorithms speeding up\nsupervised and unsupervised learning were established. The first framework in\nwhich ways to exploit quantum resources specifically for the broader context of\nreinforcement learning were found is projective simulation. Projective\nsimulation presents an agent-based reinforcement learning approach designed in\na manner which may support quantum walk-based speed-ups. Although classical\nvariants of projective simulation have been benchmarked against common\nreinforcement learning algorithms, very few formal theoretical analyses have\nbeen provided for its performance in standard learning scenarios. In this\npaper, we provide a detailed formal discussion of the properties of this model.\nSpecifically, we prove that one version of the projective simulation model,\nunderstood as a reinforcement learning approach, converges to optimal behavior\nin a large class of Markov decision processes. 
This proof shows that a\nphysically-inspired approach to reinforcement learning can guarantee to\nconverge.", + "authors": "Walter L. Boyajian, Jens Clausen, Lea M. Trenkwalder, Vedran Dunjko, Hans J. Briegel", + "published": "2019-10-25", + "updated": "2020-11-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "quant-ph", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.02900v2", + "title": "Meta Federated Reinforcement Learning for Distributed Resource Allocation", + "abstract": "In cellular networks, resource allocation is usually performed in a\ncentralized way, which brings huge computation complexity to the base station\n(BS) and high transmission overhead. This paper explores a distributed resource\nallocation method that aims to maximize energy efficiency (EE) while ensuring\nthe quality of service (QoS) for users. Specifically, in order to address\nwireless channel conditions, we propose a robust meta federated reinforcement\nlearning (\\textit{MFRL}) framework that allows local users to optimize transmit\npower and assign channels using locally trained neural network models, so as to\noffload computational burden from the cloud server to the local users, reducing\ntransmission overhead associated with local channel state information. The BS\nperforms the meta learning procedure to initialize a general global model,\nenabling rapid adaptation to different environments with improved EE\nperformance. The federated learning technique, based on decentralized\nreinforcement learning, promotes collaboration and mutual benefits among users.\nAnalysis and numerical results demonstrate that the proposed \\textit{MFRL}\nframework accelerates the reinforcement learning process, decreases\ntransmission overhead, and offloads computation, while outperforming the\nconventional decentralized reinforcement learning algorithm in terms of\nconvergence speed and EE performance across various scenarios.", + "authors": "Zelin Ji, Zhijin Qin, Xiaoming Tao", + "published": "2023-07-06", + "updated": "2023-07-09", + "primary_cat": "eess.SP", + "cats": [ + "eess.SP", + "cs.SY", + "eess.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.14766v1", + "title": "Reinforcement Learning from Statistical Feedback: the Journey from AB Testing to ANT Testing", + "abstract": "Reinforcement Learning from Human Feedback (RLHF) has played a crucial role\nin the success of large models such as ChatGPT. RLHF is a reinforcement\nlearning framework which combines human feedback to improve learning\neffectiveness and performance. However, obtaining preferences feedback manually\nis quite expensive in commercial applications. Some statistical commercial\nindicators are usually more valuable and always ignored in RLHF. There exists a\ngap between commercial target and model training. In our research, we will\nattempt to fill this gap with statistical business feedback instead of human\nfeedback, using AB testing which is a well-established statistical method.\nReinforcement Learning from Statistical Feedback (RLSF) based on AB testing is\nproposed. 
Statistical inference methods are used to obtain preferences for\ntraining the reward network, which fine-tunes the pre-trained model in\nreinforcement learning framework, achieving greater business value.\nFurthermore, we extend AB testing with double selections at a single time-point\nto ANT testing with multiple selections at different feedback time points.\nMoreover, we design numerical experiences to validate the effectiveness of our\nalgorithm framework.", + "authors": "Feiyang Han, Yimin Wei, Zhaofeng Liu, Yanxing Qi", + "published": "2023-11-24", + "updated": "2023-11-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "math.ST", + "stat.ME", + "stat.TH" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.06914v1", + "title": "Model-assisted Reinforcement Learning of a Quadrotor", + "abstract": "In recent times, reinforcement learning has produced baffling results when it\ncomes to performing control tasks with highly non-linear systems. The\nimpressive results always outweigh the potential vulnerabilities or\nuncertainties associated with the agents when deployed in the real-world. While\nthe performance is remarkable compared to the classical control algorithms, the\nreinforcement learning-based methods suffer from two flaws, robustness and\ninterpretability, which are vital for contemporary real-world applications. The\npaper attempts to alleviate such problems with reinforcement learning and\nproposes the concept of model-assisted reinforcement learning to induce a\nnotion of conservativeness in the agents. The control task considered for the\nexperiment involves navigating a CrazyFlie quadrotor. The paper also describes\na way of reformulating the task to have the flexibility of tuning the level of\nconservativeness via multi-objective reinforcement learning. The results\ninclude a comparison of the vanilla reinforcement learning approaches and the\nproposed approach. The metrics are evaluated by systematically injecting\ndisturbances to classify the inherent robustness and conservativeness of the\nagents. More concrete arguments are made by computing and comparing the\nbackward reachability tubes of the RL policies by solving the\nHamilton-Jacobi-Bellman partial differential equation (HJ PDE).", + "authors": "Arshad Javeed", + "published": "2023-11-12", + "updated": "2023-11-12", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.07315v1", + "title": "An introduction to reinforcement learning for neuroscience", + "abstract": "Reinforcement learning has a rich history in neuroscience, from early work on\ndopamine as a reward prediction error signal for temporal difference learning\n(Schultz et al., 1997) to recent work suggesting that dopamine could implement\na form of 'distributional reinforcement learning' popularized in deep learning\n(Dabney et al., 2020). Throughout this literature, there has been a tight link\nbetween theoretical advances in reinforcement learning and neuroscientific\nexperiments and findings. As a result, the theories describing our experimental\ndata have become increasingly complex and difficult to navigate. In this\nreview, we cover the basic theory underlying classical work in reinforcement\nlearning and build up to an introductory overview of methods used in modern\ndeep reinforcement learning that have found applications in systems\nneuroscience. 
We start with an overview of the reinforcement learning problem\nand classical temporal difference algorithms, followed by a discussion of\n'model-free' and 'model-based' reinforcement learning together with methods\nsuch as DYNA and successor representations that fall in between these two\ncategories. Throughout these sections, we highlight the close parallels between\nthe machine learning methods and related work in both experimental and\ntheoretical neuroscience. We then provide an introduction to deep reinforcement\nlearning with examples of how these methods have been used to model different\nlearning phenomena in the systems neuroscience literature, such as\nmeta-reinforcement learning (Wang et al., 2018) and distributional\nreinforcement learning (Dabney et al., 2020). Code that implements the methods\ndiscussed in this work and generates the figures is also provided.", + "authors": "Kristopher T. Jensen", + "published": "2023-11-13", + "updated": "2023-11-13", + "primary_cat": "q-bio.NC", + "cats": [ + "q-bio.NC", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1906.12189v1", + "title": "Learning-based Model Predictive Control for Safe Exploration and Reinforcement Learning", + "abstract": "Reinforcement learning has been successfully used to solve difficult tasks in\ncomplex unknown environments. However, these methods typically do not provide\nany safety guarantees during the learning process. This is particularly\nproblematic, since reinforcement learning agent actively explore their\nenvironment. This prevents their use in safety-critical, real-world\napplications. In this paper, we present a learning-based model predictive\ncontrol scheme that provides high-probability safety guarantees throughout the\nlearning process. Based on a reliable statistical model, we construct provably\naccurate confidence intervals on predicted trajectories. Unlike previous\napproaches, we allow for input-dependent uncertainties. Based on these reliable\npredictions, we guarantee that trajectories satisfy safety constraints.\nMoreover, we use a terminal set constraint to recursively guarantee the\nexistence of safe control actions at every iteration. We evaluate the resulting\nalgorithm to safely explore the dynamics of an inverted pendulum and to solve a\nreinforcement learning task on a cart-pole system with safety constraints.", + "authors": "Torsten Koller, Felix Berkenkamp, Matteo Turchetta, Joschka Boedecker, Andreas Krause", + "published": "2019-06-27", + "updated": "2019-06-27", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.AI", + "cs.LG", + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.11142v1", + "title": "Two-stage Deep Reinforcement Learning for Inverter-based Volt-VAR Control in Active Distribution Networks", + "abstract": "Model-based Vol/VAR optimization method is widely used to eliminate voltage\nviolations and reduce network losses. However, the parameters of active\ndistribution networks(ADNs) are not onsite identified, so significant errors\nmay be involved in the model and make the model-based method infeasible. To\ncope with this critical issue, we propose a novel two-stage deep reinforcement\nlearning (DRL) method to improve the voltage profile by regulating\ninverter-based energy resources, which consists of offline stage and online\nstage. 
In the offline stage, a highly efficient adversarial reinforcement\nlearning algorithm is developed to train an offline agent robust to the model\nmismatch. In the sequential online stage, we transfer the offline agent safely\nas the online agent to perform continuous learning and controlling online with\nsignificantly improved safety and efficiency. Numerical simulations on IEEE\ntest cases not only demonstrate that the proposed adversarial reinforcement\nlearning algorithm outperforms the state-of-art algorithm, but also show that\nour proposed two-stage method achieves much better performance than the\nexisting DRL based methods in the online application.", + "authors": "Haotian Liu, Wenchuan Wu", + "published": "2020-05-20", + "updated": "2020-05-20", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.LG", + "cs.SY", + "J.7; C.3" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2308.14897v1", + "title": "Statistically Efficient Variance Reduction with Double Policy Estimation for Off-Policy Evaluation in Sequence-Modeled Reinforcement Learning", + "abstract": "Offline reinforcement learning aims to utilize datasets of previously\ngathered environment-action interaction records to learn a policy without\naccess to the real environment. Recent work has shown that offline\nreinforcement learning can be formulated as a sequence modeling problem and\nsolved via supervised learning with approaches such as decision transformer.\nWhile these sequence-based methods achieve competitive results over\nreturn-to-go methods, especially on tasks that require longer episodes or with\nscarce rewards, importance sampling is not considered to correct the policy\nbias when dealing with off-policy data, mainly due to the absence of behavior\npolicy and the use of deterministic evaluation policies. To this end, we\npropose DPE: an RL algorithm that blends offline sequence modeling and offline\nreinforcement learning with Double Policy Estimation (DPE) in a unified\nframework with statistically proven properties on variance reduction. We\nvalidate our method in multiple tasks of OpenAI Gym with D4RL benchmarks. Our\nmethod brings a performance improvements on selected methods which outperforms\nSOTA baselines in several tasks, demonstrating the advantages of enabling\ndouble policy estimation for sequence-modeled reinforcement learning.", + "authors": "Hanhan Zhou, Tian Lan, Vaneet Aggarwal", + "published": "2023-08-28", + "updated": "2023-08-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.DC" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1903.08543v6", + "title": "Optimizing thermodynamic trajectories using evolutionary and gradient-based reinforcement learning", + "abstract": "Using a model heat engine, we show that neural network-based reinforcement\nlearning can identify thermodynamic trajectories of maximal efficiency. We\nconsider both gradient and gradient-free reinforcement learning. We use an\nevolutionary learning algorithm to evolve a population of neural networks,\nsubject to a directive to maximize the efficiency of a trajectory composed of a\nset of elementary thermodynamic processes; the resulting networks learn to\ncarry out the maximally-efficient Carnot, Stirling, or Otto cycles. When given\nan additional irreversible process, this evolutionary scheme learns a\npreviously unknown thermodynamic cycle. 
Gradient-based reinforcement learning\nis able to learn the Stirling cycle, whereas an evolutionary approach achieves\nthe optimal Carnot cycle. Our results show how the reinforcement learning\nstrategies developed for game playing can be applied to solve physical problems\nconditioned upon path-extensive order parameters.", + "authors": "Chris Beeler, Uladzimir Yahorau, Rory Coles, Kyle Mills, Stephen Whitelam, Isaac Tamblyn", + "published": "2019-03-20", + "updated": "2021-11-22", + "primary_cat": "cs.NE", + "cats": [ + "cs.NE", + "cond-mat.stat-mech", + "cs.LG", + "physics.comp-ph" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2012.09737v2", + "title": "Model-free and Bayesian Ensembling Model-based Deep Reinforcement Learning for Particle Accelerator Control Demonstrated on the FERMI FEL", + "abstract": "Reinforcement learning holds tremendous promise in accelerator controls. The\nprimary goal of this paper is to show how this approach can be utilised on an\noperational level on accelerator physics problems. Despite the success of\nmodel-free reinforcement learning in several domains, sample-efficiency still\nis a bottle-neck, which might be encompassed by model-based methods. We compare\nwell-suited purely model-based to model-free reinforcement learning applied to\nthe intensity optimisation on the FERMI FEL system. We find that the\nmodel-based approach demonstrates higher representational power and\nsample-efficiency, while the asymptotic performance of the model-free method is\nslightly superior. The model-based algorithm is implemented in a DYNA-style\nusing an uncertainty aware model, and the model-free algorithm is based on\ntailored deep Q-learning. In both cases, the algorithms were implemented in a\nway, which presents increased noise robustness as omnipresent in accelerator\ncontrol problems. Code is released in\nhttps://github.com/MathPhysSim/FERMI_RL_Paper.", + "authors": "Simon Hirlaender, Niky Bruchon", + "published": "2020-12-17", + "updated": "2022-01-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SY", + "eess.SY", + "physics.acc-ph", + "I.2; J.2" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1804.07193v3", + "title": "Lipschitz Continuity in Model-based Reinforcement Learning", + "abstract": "We examine the impact of learning Lipschitz continuous models in the context\nof model-based reinforcement learning. We provide a novel bound on multi-step\nprediction error of Lipschitz models where we quantify the error using the\nWasserstein metric. We go on to prove an error bound for the value-function\nestimate arising from Lipschitz models and show that the estimated value\nfunction is itself Lipschitz. We conclude with empirical results that show the\nbenefits of controlling the Lipschitz constant of neural-network models.", + "authors": "Kavosh Asadi, Dipendra Misra, Michael L. Littman", + "published": "2018-04-19", + "updated": "2018-07-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1305.1809v2", + "title": "Cover Tree Bayesian Reinforcement Learning", + "abstract": "This paper proposes an online tree-based Bayesian approach for reinforcement\nlearning. For inference, we employ a generalised context tree model. 
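The FERMI FEL study above implements its model-based agent "in a DYNA-style", i.e., interleaving real updates with updates replayed from a learned model. Below is a minimal tabular Dyna-Q sketch on a toy corridor; the environment, memorized deterministic model, and hyperparameters are illustrative assumptions rather than the cited implementation.

```python
import numpy as np

# Minimal tabular Dyna-Q sketch: a 1-D corridor of n states with the goal at the
# right end (reward 1). Between real steps, transitions stored in a memorized
# model are replayed as extra planning updates.
def dyna_q(n=20, episodes=30, planning_steps=20, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((n, 2))          # actions: 0 = left, 1 = right
    model = {}                    # (s, a) -> (r, s_next), deterministic memory
    for _ in range(episodes):
        s = 0
        while s != n - 1:
            if rng.random() < eps or Q[s, 0] == Q[s, 1]:
                a = int(rng.integers(0, 2))        # explore / break ties randomly
            else:
                a = int(np.argmax(Q[s]))
            s_next = min(s + 1, n - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s_next == n - 1 else 0.0
            Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])  # real update
            model[(s, a)] = (r, s_next)
            for _ in range(planning_steps):        # simulated (planning) updates
                (ps, pa), (pr, ps_next) = list(model.items())[rng.integers(len(model))]
                Q[ps, pa] += alpha * (pr + gamma * np.max(Q[ps_next]) - Q[ps, pa])
            s = s_next
    return Q

print(np.argmax(dyna_q(), axis=1))  # greedy policy should mostly point right (1)
```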
This\ndefines a distribution on multivariate Gaussian piecewise-linear models, which\ncan be updated in closed form. The tree structure itself is constructed using\nthe cover tree method, which remains efficient in high dimensional spaces. We\ncombine the model with Thompson sampling and approximate dynamic programming to\nobtain effective exploration policies in unknown environments. The flexibility\nand computational simplicity of the model render it suitable for many\nreinforcement learning problems in continuous state spaces. We demonstrate this\nin an experimental comparison with least squares policy iteration.", + "authors": "Nikolaos Tziortziotis, Christos Dimitrakakis, Konstantinos Blekas", + "published": "2013-05-08", + "updated": "2014-05-02", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2301.10119v2", + "title": "Minimal Value-Equivalent Partial Models for Scalable and Robust Planning in Lifelong Reinforcement Learning", + "abstract": "Learning models of the environment from pure interaction is often considered\nan essential component of building lifelong reinforcement learning agents.\nHowever, the common practice in model-based reinforcement learning is to learn\nmodels that model every aspect of the agent's environment, regardless of\nwhether they are important in coming up with optimal decisions or not. In this\npaper, we argue that such models are not particularly well-suited for\nperforming scalable and robust planning in lifelong reinforcement learning\nscenarios and we propose new kinds of models that only model the relevant\naspects of the environment, which we call \"minimal value-equivalent partial\nmodels\". After providing a formal definition for these models, we provide\ntheoretical results demonstrating the scalability advantages of performing\nplanning with such models and then perform experiments to empirically\nillustrate our theoretical results. Then, we provide some useful heuristics on\nhow to learn these kinds of models with deep learning architectures and\nempirically demonstrate that models learned in such a way can allow for\nperforming planning that is robust to distribution shifts and compounding model\nerrors. Overall, both our theoretical and empirical results suggest that\nminimal value-equivalent partial models can provide significant benefits to\nperforming scalable and robust planning in lifelong reinforcement learning\nscenarios.", + "authors": "Safa Alver, Doina Precup", + "published": "2023-01-24", + "updated": "2023-06-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2007.12666v5", + "title": "Safe Model-Based Reinforcement Learning for Systems with Parametric Uncertainties", + "abstract": "Reinforcement learning has been established over the past decade as an\neffective tool to find optimal control policies for dynamical systems, with\nrecent focus on approaches that guarantee safety during the learning and/or\nexecution phases. In general, safety guarantees are critical in reinforcement\nlearning when the system is safety-critical and/or task restarts are not\npractically feasible. In optimal control theory, safety requirements are often\nexpressed in terms of state and/or control constraints. 
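The cover-tree Bayesian RL abstract above couples a Bayesian model with Thompson sampling for exploration. The sketch below shows Thompson sampling in its simplest setting, a Bernoulli bandit with Beta posteriors; it is only meant to illustrate the sampling-then-acting loop, not the paper's continuous-state method.

```python
import numpy as np

# Minimal Thompson sampling sketch for a Bernoulli bandit: sample one plausible
# success probability per arm from the Beta posterior, act greedily with respect
# to the sample, then update the posterior with the observed reward.
def thompson_bernoulli(true_probs, steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    k = len(true_probs)
    successes, failures = np.ones(k), np.ones(k)   # Beta(1, 1) priors
    pulls = np.zeros(k, dtype=int)
    for _ in range(steps):
        theta = rng.beta(successes, failures)      # posterior sample per arm
        arm = int(np.argmax(theta))
        reward = rng.random() < true_probs[arm]
        successes[arm] += reward
        failures[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

print(thompson_bernoulli([0.2, 0.5, 0.8]))  # most pulls should go to the 0.8 arm
```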
In recent years,\nreinforcement learning approaches that rely on persistent excitation have been\ncombined with a barrier transformation to learn the optimal control policies\nunder state constraints. To soften the excitation requirements, model-based\nreinforcement learning methods that rely on exact model knowledge have also\nbeen integrated with the barrier transformation framework. The objective of\nthis paper is to develop safe reinforcement learning method for deterministic\nnonlinear systems, with parametric uncertainties in the model, to learn\napproximate constrained optimal policies without relying on stringent\nexcitation conditions. To that end, a model-based reinforcement learning\ntechnique that utilizes a novel filtered concurrent learning method, along with\na barrier transformation, is developed in this paper to realize simultaneous\nlearning of unknown model parameters and approximate optimal state-constrained\ncontrol policies for safety-critical systems.", + "authors": "S M Nahid Mahmud, Scott A Nivison, Zachary I. Bell, Rushikesh Kamalapurkar", + "published": "2020-07-24", + "updated": "2021-10-05", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.SY", + "math.OC" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1812.09968v1", + "title": "VMAV-C: A Deep Attention-based Reinforcement Learning Algorithm for Model-based Control", + "abstract": "Recent breakthroughs in Go play and strategic games have witnessed the great\npotential of reinforcement learning in intelligently scheduling in uncertain\nenvironment, but some bottlenecks are also encountered when we generalize this\nparadigm to universal complex tasks. Among them, the low efficiency of data\nutilization in model-free reinforcement algorithms is of great concern. In\ncontrast, the model-based reinforcement learning algorithms can reveal\nunderlying dynamics in learning environments and seldom suffer the data\nutilization problem. To address the problem, a model-based reinforcement\nlearning algorithm with attention mechanism embedded is proposed as an\nextension of World Models in this paper. We learn the environment model through\nMixture Density Network Recurrent Network(MDN-RNN) for agents to interact, with\ncombinations of variational auto-encoder(VAE) and attention incorporated in\nstate value estimates during the process of learning policy. In this way, agent\ncan learn optimal policies through less interactions with actual environment,\nand final experiments demonstrate the effectiveness of our model in control\nproblem.", + "authors": "Xingxing Liang, Qi Wang, Yanghe Feng, Zhong Liu, Jincai Huang", + "published": "2018-12-24", + "updated": "2018-12-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.NE" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1802.10592v2", + "title": "Model-Ensemble Trust-Region Policy Optimization", + "abstract": "Model-free reinforcement learning (RL) methods are succeeding in a growing\nnumber of tasks, aided by recent advances in deep learning. However, they tend\nto suffer from high sample complexity, which hinders their use in real-world\ndomains. Alternatively, model-based reinforcement learning promises to reduce\nsample complexity, but tends to require careful tuning and to date have\nsucceeded mainly in restrictive domains where simple models are sufficient for\nlearning. 
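ME-TRPO above regularizes policy learning with an ensemble of dynamics models so the policy cannot exploit regions the data do not support. The sketch below captures only the ensemble-and-disagreement idea with bootstrapped linear models in place of neural networks; the toy dynamics and sizes are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of an ensemble of learned dynamics models. Each member is a
# linear model fit on a bootstrap resample; the standard deviation across
# members flags state-action regions where the data do not pin the model down.
def fit_ensemble(X, Y, n_models=5, seed=0):
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])          # add bias column
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))     # bootstrap resample
        W, *_ = np.linalg.lstsq(Xb[idx], Y[idx], rcond=None)
        models.append(W)
    return models

def predict_with_disagreement(models, x):
    xb = np.append(x, 1.0)
    preds = np.stack([xb @ W for W in models])
    return preds.mean(axis=0), preds.std(axis=0)       # mean prediction, disagreement

# Toy dynamics: next_state = 0.9 * state + 0.1 * action, observed with noise
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))                  # columns: state, action
Y = 0.9 * X[:, :1] + 0.1 * X[:, 1:] + 0.01 * rng.normal(size=(200, 1))
models = fit_ensemble(X, Y)
print(predict_with_disagreement(models, np.array([0.5, 0.2])))   # in-distribution
print(predict_with_disagreement(models, np.array([5.0, 5.0])))   # far from data
```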
In this paper, we analyze the behavior of vanilla model-based\nreinforcement learning methods when deep neural networks are used to learn both\nthe model and the policy, and show that the learned policy tends to exploit\nregions where insufficient data is available for the model to be learned,\ncausing instability in training. To overcome this issue, we propose to use an\nensemble of models to maintain the model uncertainty and regularize the\nlearning process. We further show that the use of likelihood ratio derivatives\nyields much more stable learning than backpropagation through time. Altogether,\nour approach Model-Ensemble Trust-Region Policy Optimization (ME-TRPO)\nsignificantly reduces the sample complexity compared to model-free deep RL\nmethods on challenging continuous control benchmark tasks.", + "authors": "Thanard Kurutach, Ignasi Clavera, Yan Duan, Aviv Tamar, Pieter Abbeel", + "published": "2018-02-28", + "updated": "2018-10-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2105.00822v2", + "title": "Generative Adversarial Reward Learning for Generalized Behavior Tendency Inference", + "abstract": "Recent advances in reinforcement learning have inspired increasing interest\nin learning user modeling adaptively through dynamic interactions, e.g., in\nreinforcement learning based recommender systems. Reward function is crucial\nfor most of reinforcement learning applications as it can provide the guideline\nabout the optimization. However, current reinforcement-learning-based methods\nrely on manually-defined reward functions, which cannot adapt to dynamic and\nnoisy environments. Besides, they generally use task-specific reward functions\nthat sacrifice generalization ability. We propose a generative inverse\nreinforcement learning for user behavioral preference modelling, to address the\nabove issues. Instead of using predefined reward functions, our model can\nautomatically learn the rewards from user's actions based on discriminative\nactor-critic network and Wasserstein GAN. Our model provides a general way of\ncharacterizing and explaining underlying behavioral tendencies, and our\nexperiments show our method outperforms state-of-the-art methods in a variety\nof scenarios, namely traffic signal control, online recommender systems, and\nscanpath prediction.", + "authors": "Xiaocong Chen, Lina Yao, Xianzhi Wang, Aixin Sun, Wenjie Zhang, Quan Z. Sheng", + "published": "2021-05-03", + "updated": "2021-05-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.IR" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2011.10714v1", + "title": "Double Meta-Learning for Data Efficient Policy Optimization in Non-Stationary Environments", + "abstract": "We are interested in learning models of non-stationary environments, which\ncan be framed as a multi-task learning problem. Model-free reinforcement\nlearning algorithms can achieve good asymptotic performance in multi-task\nlearning at a cost of extensive sampling, due to their approach, which requires\nlearning from scratch. While model-based approaches are among the most data\nefficient learning algorithms, they still struggle with complex tasks and model\nuncertainties. 
Meta-reinforcement learning addresses the efficiency and\ngeneralization challenges on multi task learning by quickly leveraging the\nmeta-prior policy for a new task. In this paper, we propose a\nmeta-reinforcement learning approach to learn the dynamic model of a\nnon-stationary environment to be used for meta-policy optimization later. Due\nto the sample efficiency of model-based learning methods, we are able to\nsimultaneously train both the meta-model of the non-stationary environment and\nthe meta-policy until dynamic model convergence. Then, the meta-learned dynamic\nmodel of the environment will generate simulated data for meta-policy\noptimization. Our experiment demonstrates that our proposed method can\nmeta-learn the policy in a non-stationary environment with the data efficiency\nof model-based learning approaches while achieving the high asymptotic\nperformance of model-free meta-reinforcement learning.", + "authors": "Elahe Aghapour, Nora Ayanian", + "published": "2020-11-21", + "updated": "2020-11-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.05440v1", + "title": "Delay-Aware Model-Based Reinforcement Learning for Continuous Control", + "abstract": "Action delays degrade the performance of reinforcement learning in many\nreal-world systems. This paper proposes a formal definition of delay-aware\nMarkov Decision Process and proves it can be transformed into standard MDP with\naugmented states using the Markov reward process. We develop a delay-aware\nmodel-based reinforcement learning framework that can incorporate the\nmulti-step delay into the learned system models without learning effort.\nExperiments with the Gym and MuJoCo platforms show that the proposed\ndelay-aware model-based algorithm is more efficient in training and\ntransferable between systems with various durations of delay compared with\noff-policy model-free reinforcement learning methods. Codes available at:\nhttps://github.com/baimingc/dambrl.", + "authors": "Baiming Chen, Mengdi Xu, Liang Li, Ding Zhao", + "published": "2020-05-11", + "updated": "2020-05-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1708.07738v1", + "title": "A Function Approximation Method for Model-based High-Dimensional Inverse Reinforcement Learning", + "abstract": "This works handles the inverse reinforcement learning problem in\nhigh-dimensional state spaces, which relies on an efficient solution of\nmodel-based high-dimensional reinforcement learning problems. To solve the\ncomputationally expensive reinforcement learning problems, we propose a\nfunction approximation method to ensure that the Bellman Optimality Equation\nalways holds, and then estimate a function based on the observed human actions\nfor inverse reinforcement learning problems. The time complexity of the\nproposed method is linearly proportional to the cardinality of the action set,\nthus it can handle high-dimensional even continuous state spaces efficiently.\nWe test the proposed method in a simulated environment to show its accuracy,\nand three clinical tasks to show how it can be used to evaluate a doctor's\nproficiency.", + "authors": "Kun Li, Joel W. 
Burdick", + "published": "2017-08-23", + "updated": "2017-08-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2003.13839v1", + "title": "Model-Reference Reinforcement Learning Control of Autonomous Surface Vehicles with Uncertainties", + "abstract": "This paper presents a novel model-reference reinforcement learning control\nmethod for uncertain autonomous surface vehicles. The proposed control combines\na conventional control method with deep reinforcement learning. With the\nconventional control, we can ensure the learning-based control law provides\nclosed-loop stability for the overall system, and potentially increase the\nsample efficiency of the deep reinforcement learning. With the reinforcement\nlearning, we can directly learn a control law to compensate for modeling\nuncertainties. In the proposed control, a nominal system is employed for the\ndesign of a baseline control law using a conventional control approach. The\nnominal system also defines the desired performance for uncertain autonomous\nvehicles to follow. In comparison with traditional deep reinforcement learning\nmethods, our proposed learning-based control can provide stability guarantees\nand better sample efficiency. We demonstrate the performance of the new\nalgorithm via extensive simulation results.", + "authors": "Qingrui Zhang, Wei Pan, Vasso Reppa", + "published": "2020-03-30", + "updated": "2020-03-30", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.AI", + "cs.LG", + "cs.RO", + "cs.SY", + "math.OC" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1901.08162v1", + "title": "Causal Reasoning from Meta-reinforcement Learning", + "abstract": "Discovering and exploiting the causal structure in the environment is a\ncrucial challenge for intelligent agents. Here we explore whether causal\nreasoning can emerge via meta-reinforcement learning. We train a recurrent\nnetwork with model-free reinforcement learning to solve a range of problems\nthat each contain causal structure. We find that the trained agent can perform\ncausal reasoning in novel situations in order to obtain rewards. The agent can\nselect informative interventions, draw causal inferences from observational\ndata, and make counterfactual predictions. Although established formal causal\nreasoning algorithms also exist, in this paper we show that such reasoning can\narise from model-free reinforcement learning, and suggest that causal reasoning\nin complex settings may benefit from the more end-to-end learning-based\napproaches presented here. This work also offers new strategies for structured\nexploration in reinforcement learning, by providing agents with the ability to\nperform -- and interpret -- experiments.", + "authors": "Ishita Dasgupta, Jane Wang, Silvia Chiappa, Jovana Mitrovic, Pedro Ortega, David Raposo, Edward Hughes, Peter Battaglia, Matthew Botvinick, Zeb Kurth-Nelson", + "published": "2019-01-23", + "updated": "2019-01-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2006.09234v1", + "title": "Model Embedding Model-Based Reinforcement Learning", + "abstract": "Model-based reinforcement learning (MBRL) has shown its advantages in\nsample-efficiency over model-free reinforcement learning (MFRL). 
Despite the\nimpressive results it achieves, it still faces a trade-off between the ease of\ndata generation and model bias. In this paper, we propose a simple and elegant\nmodel-embedding model-based reinforcement learning (MEMB) algorithm in the\nframework of the probabilistic reinforcement learning. To balance the\nsample-efficiency and model bias, we exploit both real and imaginary data in\nthe training. In particular, we embed the model in the policy update and learn\n$Q$ and $V$ functions from the real data set. We provide the theoretical\nanalysis of MEMB with the Lipschitz continuity assumption on the model and\npolicy. At last, we evaluate MEMB on several benchmarks and demonstrate our\nalgorithm can achieve state-of-the-art performance.", + "authors": "Xiaoyu Tan, Chao Qu, Junwu Xiong, James Zhang", + "published": "2020-06-16", + "updated": "2020-06-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2011.01734v1", + "title": "Differentiable Physics Models for Real-world Offline Model-based Reinforcement Learning", + "abstract": "A limitation of model-based reinforcement learning (MBRL) is the exploitation\nof errors in the learned models. Black-box models can fit complex dynamics with\nhigh fidelity, but their behavior is undefined outside of the data\ndistribution.Physics-based models are better at extrapolating, due to the\ngeneral validity of their informed structure, but underfit in the real world\ndue to the presence of unmodeled phenomena. In this work, we demonstrate\nexperimentally that for the offline model-based reinforcement learning setting,\nphysics-based models can be beneficial compared to high-capacity function\napproximators if the mechanical structure is known. Physics-based models can\nlearn to perform the ball in a cup (BiC) task on a physical manipulator using\nonly 4 minutes of sampled data using offline MBRL. We find that black-box\nmodels consistently produce unviable policies for BiC as all predicted\ntrajectories diverge to physically impossible state, despite having access to\nmore data than the physics-based model. In addition, we generalize the approach\nof physics parameter identification from modeling holonomic multi-body systems\nto systems with nonholonomic dynamics using end-to-end automatic\ndifferentiation.\n Videos: https://sites.google.com/view/ball-in-a-cup-in-4-minutes/", + "authors": "Michael Lutter, Johannes Silberbauer, Joe Watson, Jan Peters", + "published": "2020-11-03", + "updated": "2020-11-03", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1709.09346v2", + "title": "Cold-Start Reinforcement Learning with Softmax Policy Gradient", + "abstract": "Policy-gradient approaches to reinforcement learning have two common and\nundesirable overhead procedures, namely warm-start training and sample variance\nreduction. In this paper, we describe a reinforcement learning method based on\na softmax value function that requires neither of these procedures. Our method\ncombines the advantages of policy-gradient methods with the efficiency and\nsimplicity of maximum-likelihood approaches. We apply this new cold-start\nreinforcement learning method in training sequence generation models for\nstructured output prediction problems. 
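The cold-start RL abstract above builds on a softmax policy with policy-gradient updates. As a minimal illustration of that family, the sketch below runs plain REINFORCE with a softmax policy and a running-average baseline on a two-armed bandit; it is not the paper's estimator, and the reward model and learning rate are invented for the example.

```python
import numpy as np

# Minimal REINFORCE sketch with a softmax policy on a two-armed bandit.
def softmax(z):
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def reinforce_bandit(true_means=(0.2, 0.8), steps=3000, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.zeros(len(true_means))          # one logit per action
    baseline = 0.0
    for t in range(steps):
        probs = softmax(theta)
        a = int(rng.choice(len(theta), p=probs))
        r = rng.normal(true_means[a], 0.1)     # stochastic reward
        baseline += (r - baseline) / (t + 1)   # running average as a baseline
        grad_log = -probs
        grad_log[a] += 1.0                     # d log pi(a) / d theta for softmax
        theta += lr * (r - baseline) * grad_log
    return softmax(theta)

print(reinforce_bandit())  # probability mass should concentrate on the 0.8 arm
```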
Empirical evidence validates this method\non automatic summarization and image captioning tasks.", + "authors": "Nan Ding, Radu Soricut", + "published": "2017-09-27", + "updated": "2017-10-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.03688v1", + "title": "A Computational Model of Representation Learning in the Brain Cortex, Integrating Unsupervised and Reinforcement Learning", + "abstract": "A common view on the brain learning processes proposes that the three classic\nlearning paradigms -- unsupervised, reinforcement, and supervised -- take place\nin respectively the cortex, the basal-ganglia, and the cerebellum. However,\ndopamine outbursts, usually assumed to encode reward, are not limited to the\nbasal ganglia but also reach prefrontal, motor, and higher sensory cortices. We\npropose that in the cortex the same reward-based trial-and-error processes\nmight support not only the acquisition of motor representations but also of\nsensory representations. In particular, reward signals might guide\ntrial-and-error processes that mix with associative learning processes to\nsupport the acquisition of representations better serving downstream action\nselection. We tested the soundness of this hypothesis with a computational\nmodel that integrates unsupervised learning (Contrastive Divergence) and\nreinforcement learning (REINFORCE). The model was tested with a task requiring\ndifferent responses to different visual images grouped in categories involving\neither colour, shape, or size. Results show that a balanced mix of unsupervised\nand reinforcement learning processes leads to the best performance. Indeed,\nexcessive unsupervised learning tends to under-represent task-relevant features\nwhile excessive reinforcement learning tends to initially learn slowly and then\nto incur in local minima. These results stimulate future empirical studies on\ncategory learning directed to investigate similar effects in the extrastriate\nvisual cortices. Moreover, they prompt further computational investigations\ndirected to study the possible advantages of integrating unsupervised and\nreinforcement learning processes.", + "authors": "Giovanni Granato, Emilio Cartoni, Federico Da Rold, Andrea Mattera, Gianluca Baldassarre", + "published": "2021-06-07", + "updated": "2021-06-07", + "primary_cat": "q-bio.NC", + "cats": [ + "q-bio.NC", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.16543v2", + "title": "Model-based deep reinforcement learning for accelerated learning from flow simulations", + "abstract": "In recent years, deep reinforcement learning has emerged as a technique to\nsolve closed-loop flow control problems. Employing simulation-based\nenvironments in reinforcement learning enables a priori end-to-end optimization\nof the control system, provides a virtual testbed for safety-critical control\napplications, and allows to gain a deep understanding of the control\nmechanisms. While reinforcement learning has been applied successfully in a\nnumber of rather simple flow control benchmarks, a major bottleneck toward\nreal-world applications is the high computational cost and turnaround time of\nflow simulations. In this contribution, we demonstrate the benefits of\nmodel-based reinforcement learning for flow control applications. 
Specifically,\nwe optimize the policy by alternating between trajectories sampled from flow\nsimulations and trajectories sampled from an ensemble of environment models.\nThe model-based learning reduces the overall training time by up to $85\\%$ for\nthe fluidic pinball test case. Even larger savings are expected for more\ndemanding flow simulations.", + "authors": "Andre Weiner, Janis Geise", + "published": "2024-02-26", + "updated": "2024-04-10", + "primary_cat": "physics.flu-dyn", + "cats": [ + "physics.flu-dyn", + "cs.CE", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2009.09781v1", + "title": "Rethinking Supervised Learning and Reinforcement Learning in Task-Oriented Dialogue Systems", + "abstract": "Dialogue policy learning for task-oriented dialogue systems has enjoyed great\nprogress recently mostly through employing reinforcement learning methods.\nHowever, these approaches have become very sophisticated. It is time to\nre-evaluate it. Are we really making progress developing dialogue agents only\nbased on reinforcement learning? We demonstrate how (1)~traditional supervised\nlearning together with (2)~a simulator-free adversarial learning method can be\nused to achieve performance comparable to state-of-the-art RL-based methods.\nFirst, we introduce a simple dialogue action decoder to predict the appropriate\nactions. Then, the traditional multi-label classification solution for dialogue\npolicy learning is extended by adding dense layers to improve the dialogue\nagent performance. Finally, we employ the Gumbel-Softmax estimator to\nalternatively train the dialogue agent and the dialogue reward model without\nusing reinforcement learning. Based on our extensive experimentation, we can\nconclude the proposed methods can achieve more stable and higher performance\nwith fewer efforts, such as the domain knowledge required to design a user\nsimulator and the intractable parameter tuning in reinforcement learning. Our\nmain goal is not to beat reinforcement learning with supervised learning, but\nto demonstrate the value of rethinking the role of reinforcement learning and\nsupervised learning in optimizing task-oriented dialogue systems.", + "authors": "Ziming Li, Julia Kiseleva, Maarten de Rijke", + "published": "2020-09-21", + "updated": "2020-09-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2012.00743v1", + "title": "Adaptive Neural Architectures for Recommender Systems", + "abstract": "Deep learning has proved an effective means to capture the non-linear\nassociations of user preferences. However, the main drawback of existing deep\nlearning architectures is that they follow a fixed recommendation strategy,\nignoring users' real time-feedback. Recent advances of deep reinforcement\nstrategies showed that recommendation policies can be continuously updated\nwhile users interact with the system. In doing so, we can learn the optimal\npolicy that fits to users' preferences over the recommendation sessions. The\nmain drawback of deep reinforcement strategies is that are based on predefined\nand fixed neural architectures. To shed light on how to handle this issue, in\nthis study we first present deep reinforcement learning strategies for\nrecommendation and discuss the main limitations due to the fixed neural\narchitectures. 
Then, we detail how recent advances on progressive neural\narchitectures are used for consecutive tasks in other research domains.\nFinally, we present the key challenges to fill the gap between deep\nreinforcement learning and adaptive neural architectures. We provide guidelines\nfor searching for the best neural architecture based on each user feedback via\nreinforcement learning, while considering the prediction performance on\nreal-time recommendations and the model complexity.", + "authors": "Dimitrios Rafailidis, Stefanos Antaris", + "published": "2020-11-11", + "updated": "2020-11-11", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2301.03933v1", + "title": "Hint assisted reinforcement learning: an application in radio astronomy", + "abstract": "Model based reinforcement learning has proven to be more sample efficient\nthan model free methods. On the other hand, the construction of a dynamics\nmodel in model based reinforcement learning has increased complexity. Data\nprocessing tasks in radio astronomy are such situations where the original\nproblem which is being solved by reinforcement learning itself is the creation\nof a model. Fortunately, many methods based on heuristics or signal processing\ndo exist to perform the same tasks and we can leverage them to propose the best\naction to take, or in other words, to provide a `hint'. We propose to use\n`hints' generated by the environment as an aid to the reinforcement learning\nprocess mitigating the complexity of model construction. We modify the soft\nactor critic algorithm to use hints and use the alternating direction method of\nmultipliers algorithm with inequality constraints to train the agent. Results\nin several environments show that we get the increased sample efficiency by\nusing hints as compared to model free methods.", + "authors": "Sarod Yatawatta", + "published": "2023-01-10", + "updated": "2023-01-10", + "primary_cat": "astro-ph.IM", + "cats": [ + "astro-ph.IM", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1901.07905v2", + "title": "Reinforcement Learning Ship Autopilot: Sample efficient and Model Predictive Control-based Approach", + "abstract": "In this research we focus on developing a reinforcement learning system for a\nchallenging task: autonomous control of a real-sized boat, with difficulties\narising from large uncertainties in the challenging ocean environment and the\nextremely high cost of exploring and sampling with a real boat. To this end, we\nexplore a novel Gaussian processes (GP) based reinforcement learning approach\nthat combines sample-efficient model-based reinforcement learning and model\npredictive control (MPC). Our approach, sample-efficient probabilistic model\npredictive control (SPMPC), iteratively learns a Gaussian process dynamics\nmodel and uses it to efficiently update control signals within the MPC closed\ncontrol loop. A system using SPMPC is built to efficiently learn an autopilot\ntask. 
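The SPMPC abstract above combines a learned Gaussian-process dynamics model with model predictive control in a closed loop. The sketch below illustrates only the MPC half, using random shooting over a known toy double-integrator model instead of a learned GP; the cost weights, horizon, and sample count are illustrative assumptions.

```python
import numpy as np

# Minimal random-shooting MPC sketch on a toy double integrator: sample random
# action sequences, roll them through the model, execute only the first action
# of the cheapest sequence, then replan.
def dynamics(state, action):
    pos, vel = state
    vel = vel + 0.1 * action
    pos = pos + 0.1 * vel
    return np.array([pos, vel])

def mpc_random_shooting(state, horizon=15, n_samples=500, seed=0):
    rng = np.random.default_rng(seed)
    candidates = rng.uniform(-1, 1, size=(n_samples, horizon))  # action sequences
    best_cost, best_first_action = np.inf, 0.0
    for seq in candidates:
        s, cost = state.copy(), 0.0
        for a in seq:
            s = dynamics(s, a)
            cost += s[0] ** 2 + 0.1 * s[1] ** 2 + 0.01 * a ** 2  # drive state to 0
        if cost < best_cost:
            best_cost, best_first_action = cost, seq[0]
    return best_first_action

# Closed-loop rollout from an offset start
state = np.array([1.0, 0.0])
for _ in range(50):
    state = dynamics(state, mpc_random_shooting(state))
print(state)  # should have moved most of the way toward the origin
```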
After investigating its performance in a simulation modeled upon real\nboat driving data, the proposed system successfully learns to drive a\nreal-sized boat equipped with a single engine and sensors measuring GPS, speed,\ndirection, and wind in an autopilot task without human demonstration.", + "authors": "Yunduan Cui, Shigeki Osaki, Takamitsu Matsubara", + "published": "2019-01-23", + "updated": "2019-07-23", + "primary_cat": "cs.SY", + "cats": [ + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.03188v3", + "title": "Optimizing Quantum Variational Circuits with Deep Reinforcement Learning", + "abstract": "Quantum Machine Learning (QML) is considered to be one of the most promising\napplications of near term quantum devices. However, the optimization of quantum\nmachine learning models presents numerous challenges arising from the\nimperfections of hardware and the fundamental obstacles in navigating an\nexponentially scaling Hilbert space. In this work, we evaluate the potential of\ncontemporary methods in deep reinforcement learning to augment gradient based\noptimization routines in quantum variational circuits. We find that\nreinforcement learning augmented optimizers consistently outperform gradient\ndescent in noisy environments. All code and pretrained weights are available to\nreplicate the results or deploy the models at:\nhttps://github.com/lockwo/rl_qvc_opt.", + "authors": "Owen Lockwood", + "published": "2021-09-07", + "updated": "2022-05-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "quant-ph" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.07260v1", + "title": "TIAGo RL: Simulated Reinforcement Learning Environments with Tactile Data for Mobile Robots", + "abstract": "Tactile information is important for robust performance in robotic tasks that\ninvolve physical interaction, such as object manipulation. However, with more\ndata included in the reasoning and control process, modeling behavior becomes\nincreasingly difficult. Deep Reinforcement Learning (DRL) produced promising\nresults for learning complex behavior in various domains, including\ntactile-based manipulation in robotics. In this work, we present our\nopen-source reinforcement learning environments for the TIAGo service robot.\nThey produce tactile sensor measurements that resemble those of a real\nsensorised gripper for TIAGo, encouraging research in transfer learning of DRL\npolicies. Lastly, we show preliminary training results of a learned force\ncontrol policy and compare it to a classical PI controller.", + "authors": "Luca Lach, Francesco Ferro, Robert Haschke", + "published": "2023-11-13", + "updated": "2023-11-13", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1712.04170v2", + "title": "Interpretable Policies for Reinforcement Learning by Genetic Programming", + "abstract": "The search for interpretable reinforcement learning policies is of high\nacademic and industrial interest. Especially for industrial systems, domain\nexperts are more likely to deploy autonomously learned controllers if they are\nunderstandable and convenient to evaluate. Basic algebraic equations are\nsupposed to meet these requirements, as long as they are restricted to an\nadequate complexity. 
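GPRL above evolves interpretable policy equations with genetic programming. The sketch below is a deliberately reduced stand-in: a (1+lambda) evolution strategy over the two gains of a fixed linear feedback law on a toy double integrator, showing only the evolutionary search loop rather than genetic programming over expressions; all task and search settings are illustrative.

```python
import numpy as np

# Minimal (1+lambda) evolution-strategy sketch that evolves the two gains of a
# linear policy a = -k1*pos - k2*vel on a toy double integrator.
def rollout_cost(gains, steps=100):
    pos, vel, cost = 1.0, 0.0, 0.0
    for _ in range(steps):
        a = np.clip(-gains[0] * pos - gains[1] * vel, -1, 1)
        vel += 0.1 * a
        pos += 0.1 * vel
        cost += pos ** 2 + 0.1 * vel ** 2
    return cost

def evolve_policy(generations=100, offspring=20, sigma=0.3, seed=0):
    rng = np.random.default_rng(seed)
    parent = np.zeros(2)
    parent_cost = rollout_cost(parent)
    for _ in range(generations):
        children = parent + sigma * rng.normal(size=(offspring, 2))
        costs = np.array([rollout_cost(c) for c in children])
        if costs.min() < parent_cost:           # keep the best individual so far
            parent, parent_cost = children[np.argmin(costs)], costs.min()
    return parent, parent_cost

print(evolve_policy())  # prints the evolved gains and their rollout cost
```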
Here we introduce the genetic programming for\nreinforcement learning (GPRL) approach based on model-based batch reinforcement\nlearning and genetic programming, which autonomously learns policy equations\nfrom pre-existing default state-action trajectory samples. GPRL is compared to\na straight-forward method which utilizes genetic programming for symbolic\nregression, yielding policies imitating an existing well-performing, but\nnon-interpretable policy. Experiments on three reinforcement learning\nbenchmarks, i.e., mountain car, cart-pole balancing, and industrial benchmark,\ndemonstrate the superiority of our GPRL approach compared to the symbolic\nregression method. GPRL is capable of producing well-performing interpretable\nreinforcement learning policies from pre-existing default trajectory data.", + "authors": "Daniel Hein, Steffen Udluft, Thomas A. Runkler", + "published": "2017-12-12", + "updated": "2018-04-04", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.NE", + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + } + ], + [ + { + "url": "http://arxiv.org/abs/2402.10670v2", + "title": "OpenFMNav: Towards Open-Set Zero-Shot Object Navigation via Vision-Language Foundation Models", + "abstract": "Object navigation (ObjectNav) requires an agent to navigate through unseen\nenvironments to find queried objects. Many previous methods attempted to solve\nthis task by relying on supervised or reinforcement learning, where they are\ntrained on limited household datasets with close-set objects. However, two key\nchallenges are unsolved: understanding free-form natural language instructions\nthat demand open-set objects, and generalizing to new environments in a\nzero-shot manner. Aiming to solve the two challenges, in this paper, we propose\nOpenFMNav, an Open-set Foundation Model based framework for zero-shot object\nNavigation. We first unleash the reasoning abilities of large language models\n(LLMs) to extract proposed objects from natural language instructions that meet\nthe user's demand. We then leverage the generalizability of large vision\nlanguage models (VLMs) to actively discover and detect candidate objects from\nthe scene, building a Versatile Semantic Score Map (VSSM). Then, by conducting\ncommon sense reasoning on VSSM, our method can perform effective\nlanguage-guided exploration and exploitation of the scene and finally reach the\ngoal. By leveraging the reasoning and generalizing abilities of foundation\nmodels, our method can understand free-form human instructions and perform\neffective open-set zero-shot navigation in diverse environments. Extensive\nexperiments on the HM3D ObjectNav benchmark show that our method surpasses all\nthe strong baselines on all metrics, proving our method's effectiveness.\nFurthermore, we perform real robot demonstrations to validate our method's\nopen-set-ness and generalizability to real-world environments.", + "authors": "Yuxuan Kuang, Hai Lin, Meng Jiang", + "published": "2024-02-16", + "updated": "2024-03-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.RO" + ], + "label": "Original Paper", + "paper_cat": "Model AND Based AND Reinforcement AND Learning", + "gt": "2.1 Embodied Navigation Embodied navigation is a fundamental yet challenging task in robotics and embodied AI since it is the precursor to many downstream robotic tasks, such as object manipulation and teleoperation. 
In such scenarios, given a specific goal and egocentric observations, agents are required to move to a desired location within a maximum timestep. Due to the importance of embodied navigation, recent years have witnessed several branches of navigation tasks with different goal specifications. For instance, point goal navigation (PointNav) (Wijmans et al., 2019; Savva et al., 2019) uses point coordinates in the space as the goal; image goal navigation (ImageNav) (Chaplot et al., 2020b; Savinov et al., 2018) requires the agent to move where the given image is taken; and vision-language navEnvironment Action \ud835\udc82\ud835\udc95 Control Policy Frontier Goal \ud835\udc6e\ud835\udc95 PerceptVLM DiscoverVLM ProposeLLM ReasonLLM Instruction \ud835\udc70 Prior Objects \ud835\udc76\ud835\udc91\ud835\udc93\ud835\udc8a Object Prompt \ud835\udc91\ud835\udc95 Observation \ud835\udc90\ud835\udc95 (RGBD + Pose) Versatile Semantic Score Map \ud835\udc74\ud835\udc95 \ud835\udf48\ud835\udc87\ud835\udc93\ud835\udc86\ud835\udc92 Frontiers {\ud835\udc6d\ud835\udc8a} Frontier Scores {\ud835\udc7a\ud835\udc8a} Proposal Objects \ud835\udc76\ud835\udc91\ud835\udc93\ud835\udc90 Discovered Objects \ud835\udc76\ud835\udc85\ud835\udc8a\ud835\udc94 Instruction Thoughts \ud835\udc7b Figure 2: The framework of our proposed OpenFMNav. Based on the natural language instruction and observations, we utilize foundation models to interpret human instructions and construct a Versatile Semantic Score Map (VSSM), on which we perform common sense reasoning and scoring to conduct language-guided frontier-based exploration. igation (VLN) (Anderson et al., 2018; Ku et al., 2020) requires the agent to follow step-by-step instructions to reach the location; and in object navigation (ObjectNav) (Batra et al., 2020), the agent is required to find objects of specified categories. Compared to vision-language navigation (VLN), which offers detailed and step-by-step instructions and requires an agent to strictly follow the trajectories conditioned by step-by-step instructions, object navigation (ObjectNav) is particularly challenging since the agent needs to do semantic recognition to find the goal and needs more efficient exploration than VLN since there are no step-by-step instructions (Chen et al., 2023). It is also more common in real life that humans will give ambiguous demands (Wang et al., 2023) rather than detailed instructions in VLN. Additionally, many VLN datasets (Anderson et al., 2018; Ku et al., 2020) are typically discretized into checker-like waypoint graphs, which makes it difficult to deploy algorithms in the real world. Compared to VLN, ObjectNav is object-centric and continuous so that it can be easily deployed and extended to many downstream robotic tasks like object manipulation. To take a step further, in this paper, we propose a solution to the problem of open-set-ness in ObjectNav by introducing a framework that transforms the paradigm of ObjectNav from given close-set category names to free-form natural language instructions with open-set objects. This transformation will help bridge the interaction between humans and embodied agents, making it more useful in real-world applications. Compared to existing works (Majumdar et al., 2023; Wang et al., 2023), our method doesn\u2019t need prior occupancy maps and pre-exploration in the beginning and thus can navigate in unseen environments. 
Furthermore, our method addresses the overfitting issue in embodied navigation and easily generalizes to the real world in a zero-shot manner, enabling intelligent robot agents to navigate in more diverse environments. 2.2 Zero-Shot Object Navigation As Gu et al. (2022) elaborates, embodied navigation faces a severe challenge of data scarcity, limiting the amount and distribution of available data for training. Methods directly supervised on these limited data cannot generalize to diverse real-world environments. Therefore, recent years have witnessed great progress in Zero-Shot Object Navigation (ZSON). Methods proposed by Majumdar et al. (2022); Gadre et al. (2023); Yokoyama et al. (2023) leverage CLIP (Radford et al., 2021) or BLIP-2 (Li et al., 2023) embedded features to compute similarities between object goal and input image and construct an implicit map for certain goal objects to guide navigation. Other methods, such as those proposed by Zhou et al. (2023); Dorbala et al. (2023); Yu et al. (2023); Shah et al. (2023), leverage object detectors to construct metric maps and use large language models to conduct reasoning. Cai et al. (2023) leverages foundation models to perform basic image processing and trains a locomotion module to navigate to certain chosen pixel points. 2.3 Foundation Models Foundation models (Bommasani et al., 2022) are large-scale models that are pre-trained on vast amounts of data and can perform general tasks. The sheer volume of pretraining data endows them with exceptional generalizability, which allows them to perform zero-shot inference. Moreover, the extensive training data helps foundation models acquire common sense about our physical world, making them ideal for real-world applications. Foundation models, particularly the large language models (LLMs), also have an intriguing feature \u2014 In-Context Learning (ICL) (Dong et al., 2023). This feature enables these models to follow pre-defined instructions to ground their output into certain patterns. By combining ICL with common sense learned from the large-scale data, foundation models can effectively perform semantic common sense reasoning and guesswork to provide intuitions of possible exploration directions like human beings, as illustrated in Zhou et al. (2023); Yu et al. (2023); Shah et al. (2023). For example, if the goal is a \u201ctoilet\u201d, from common sense it is highly possible to find it around an area that contains a \u201cbathtub\u201d. According to different modalities, foundation models can be mainly divided into Visual Foundation Models (VFM), such as SAM (Kirillov et al., 2023), Large Language Models (LLM), such as GPT-3.5/GPT-4 (Ouyang et al., 2022; OpenAI, 2023) and LLaMA/LLaMA-2 (Touvron et al., 2023a,b), and Vision Language Models (VLM), such as GPT-4V (Yang et al., 2023b), CLIP (Radford et al., 2021), Grounded-SAM (Liu et al., 2023), etc. There are also foundation models covering other modalities, such as audio (Yang et al., 2023a) and video (Xu et al., 2021). In this paper, we use VLMs and LLMs since our setting only involves vision and language modalities.", + "pre_questions": [], + "main_content": "Introduction As a fundamental task in robotics and embodied AI, object navigation requires an agent to navi1We show further information and demo videos on https://yxkryptonite.github.io/OpenFMNav/. Find the bed. Find the bed with the blue mattress next to the window. I'm exhausted. I need to lie down and rest. Close-Set: Find the bed. 
Find the bed with the blue mattress next to the window. I'm exhausted. I need to lie down and rest. Open-Set: VLM LLM Foundation Models Language-guided Exploration & Exploitation Find the goal! Figure 1: Leveraging foundation models, our proposed OpenFMNav can follow free-form natural language instructions with open-set objects and achieve effective zero-shot object navigation. gate through unseen environments to find queried objects. Compared to other robotic tasks, it is particularly important because it is a prerequisite for robots to interact with objects. To address this issue, several household datasets and benchmarks, such as MP3D (Chang et al., 2017), Gibson (Xia et al., 2018) and HM3D (Ramakrishnan et al., 2021) are proposed. Many previous studies (Chaplot et al., 2020a; Ramrakhya et al., 2022; Zhang et al., 2023) have attempted to solve this problem through supervised or reinforcement learning, where they are trained on particular household datasets above with close-set objects and comparable environments. However, there are two significant challenges remaining unsolved. First, as shown in Fig 1, in many scenarios, instead of only mentioning an object category (e.g., \u201cFind the bed.\u201d), humans often provide free-form instructions, either specifying objects with specific characteristics (e.g., \u201cFind the bed with the blue mattress next to the window.\u201d), or expressing their demand without explicitly mentioning the object (e.g., \u201cI\u2019m exhausted. I need to lie down and rest.\u201d). These natural language instructions may demand open-set objects not included in the training vocabulary. In such cases, existing supervised or reinforcement learning-based arXiv:2402.10670v2 [cs.CL] 25 Mar 2024 methods fail to understand these natural language instructions since they require specific object categories and were trained to perform close-set object detection. Second, due to the data scarcity of embodied navigation (Gu et al., 2022), these methods are typically trained on limited datasets that only cover household environments, which causes severe overfitting issues and prevents them from generalizing to unseen and diverse environments, let alone performing zero-shot navigation. To address the first challenge, some initial progress has been made in understanding free-form natural language instructions with open-set objects. For instance, demand-driven navigation (DDN) was proposed by Wang et al. (2023) to map human instructions to a demand-conditioned attribute space. However, it is still limited to household settings and cannot be generalized to various environments. Another approach was suggested by Majumdar et al. (2023), which involves finding objects with specific attributes and eliminating distractors. However, it needs 2D occupancy maps and preexploration of the scene in the beginning, which are unavailable in unseen environments. On the second challenge, recent years have witnessed progress in Zero-Shot Object Navigation (ZSON) (Majumdar et al., 2022; Gadre et al., 2023; Yokoyama et al., 2023; Zhou et al., 2023; Dorbala et al., 2023; Yu et al., 2023; Shah et al., 2023; Cai et al., 2023; Liang et al., 2023). However, some of these works (Majumdar et al., 2022; Yu et al., 2023; Cai et al., 2023) require data to train specific modules such as locomotion planning, and hence are not real \u201cZero-Shot\u201d. 
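The zero-shot baselines discussed above (e.g., ZSON and CoW-style methods) score egocentric images against goal text with CLIP embeddings. The following is a hedged sketch of that scoring step using the public Hugging Face CLIP checkpoint; it assumes the `transformers`, `torch`, and `Pillow` packages are installed, and the image path and goal phrases are placeholders, not assets from the paper.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Sketch of CLIP-based goal scoring as used by CLIP-embedding ZSON baselines
# (OpenFMNav itself relies on LLM/VLM reasoning rather than a single CLIP score).
model_name = "openai/clip-vit-base-patch32"
model = CLIPModel.from_pretrained(model_name)
processor = CLIPProcessor.from_pretrained(model_name)

image = Image.open("egocentric_view.png")          # placeholder observation
texts = ["a bed with a blue mattress next to a window", "a kitchen counter"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)   # similarity over the text goals
print(dict(zip(texts, probs[0].tolist())))
```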
More importantly, these methods cannot conduct explicit and comprehensive reasoning on free-form natural language instructions, leading to their low performance and preventing them from being applied to many downstream robotic tasks. To better address the aforementioned two key challenges, in this paper, we propose OpenFMNav, a novel framework based on foundation models to achieve effective open-set zero-shot navigation. To this end, we utilize foundation models to leverage their reasoning abilities and generalizability to interpret human instructions and actively explore the environment. To be more specific, we first leverage large language models to extract initially proposed objects from natural language instructions and merge them with user-defined prior objects and objects discovered by vision language models. We then construct an object prompt to detect and segment objects from the observation image, leveraging large vision language models. By using depth images to project the segmentation masks to the space, we can build a 2D top-down Versatile Semantic Score Map (VSSM) of the whole scene, on which we sample frontiers with semantic information for a large language model to conduct common sense reasoning and wisely choose frontiers to guide navigation. This way, we can perform language-guided exploration and exploitation of the scene and achieve effective open-set zeroshot object navigation without prior training on any household datasets. Moreover, unlike previous map-based methods such as Zhou et al. (2023); Yu et al. (2023); Shah et al. (2023); Yokoyama et al. (2023), the VSSM produced by our method will keep updating during the navigation, which better adapts to changing environments and can be further used in downstream robotic tasks, such as multi-goal navigation and mobile manipulation. We conduct extensive experiments on the HM3D ObjectNav benchmark (Yadav et al., 2022a). Results show that our method outperforms the Stateof-the-Art open-set zero-shot object navigation method (Zhou et al., 2023) by over 15% on success rate and surpasses all the strong baselines on all metrics, validating the effectiveness and superiority of our framework. Additionally, our method has been proven to understand free-form natural language instructions with open-set objects and generalize well to real-world environments through real robot demonstrations. 3.1 Problem Statement and Method Overview Problem Statement. As shown in Fig. 1, in an unfamiliar environment, given a natural language instruction I, an embodied agent needs to explore the environment in search of a certain queried object. At timestep t, the agent is provided with egocentric RGBD observation ot and should output an action at such as move_forward, turn_left, stop, etc. A successful navigation is defined as finding the queried object within the maximum navigation timestep. Method Overview. As shown in Fig. 2, given a starting point and human instruction I, the agent first utilizes the ProposeLLM to propose possible objects to meet the instruction. At timestep t, the agent can leverage the DiscoverVLM to discover new objects from the scene and check whether they can meet the instruction. Along with prior defined objects and proposal objects, the full object list is then converted into an object prompt pt for foundation models to reason. 
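To make the ProposeLLM step described above concrete, here is a hedged sketch of how an LLM call could extract proposal objects (with attributes) and a chain-of-thought from a free-form instruction. It assumes the OpenAI Python client with an API key in the environment; the prompt wording and the JSON output schema are hypothetical placeholders, not the paper's actual prompts.

```python
import json
from openai import OpenAI

# Hedged sketch of a ProposeLLM-style call: instruction in, thought plus a list
# of proposal objects out. Schema and prompt text are illustrative assumptions.
client = OpenAI()

SYSTEM_PROMPT = (
    "You are helping a robot find objects. Given an instruction, list objects "
    "(with attributes such as color or location when mentioned) that would "
    "satisfy it. Think step by step, then answer with JSON: "
    '{"thought": "...", "proposal_objects": ["..."]}'
)

def propose_objects(instruction: str) -> tuple[str, list[str]]:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": instruction},
        ],
    )
    parsed = json.loads(response.choices[0].message.content)
    return parsed["thought"], parsed["proposal_objects"]

thought, proposal = propose_objects("I'm exhausted. I need to lie down and rest.")
print(proposal)  # e.g. ["bed", "couch"], depending on the model's reasoning
```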
Given current RGBD observation ot, the PerceptVLM will detect and segment object masks based on pt, constructing a Versatile Semantic Score Map (VSSM) Mt, on which possible exploration frontiers are sampled. Finally, the ReasonLLM will conduct common sense reasoning based on the semantic information of frontiers and give the next frontier goal Gt to explore, which will be executed by an underlying control policy to output low-level actions. The whole process is looped until the object is found or the agent fails. 3.2 Discovery and Perception Discovery. Given a free-form human instruction I that may contain open-set objects, we first leverage a ProposeLLM to get all possible proposal objects Opro that can satisfy the instruction. Each proposal object contains attributes such as color, location, etc., to satisfy fine-grained instructions. At timestep t, given egocentric RGBD and pose observations ot, we propose a DiscoverVLM using GPT-4V (Yang et al., 2023b) that actively discovers novel objects Odis from the RGB image. Meanwhile, the DiscoverVLM also conducts reasoning on the instruction, trying to discover objects that potentially meet the instruction and update Opro. Extracting novel objects from the environment is essential for open-set navigation since they may contain scene-specific information that helps to find the goal. To save time and cost, the DiscoverVLM is randomly activated by a frequency parameter \u03c3freq. Perception. After getting proposal objects Opro and discovered objects Odis, we merge them with prior objects Opri to construct an object prompt pt to feed into our PerceptVLM based on GroundedSAM (Liu et al., 2023) to detect and segment all the appearing objects in pt from the RGB image of ot. Note that due to the BERT encoder (Devlin et al., 2019) and powerful SAM backbone (Kirillov et al., 2023) in the PerceptVLM, it can achieve open-set object detection in high granularities. This process will output object masks with confidence scores for further mapping and reasoning. 3.3 Mapping and Reasoning Mapping. At timestep t, based on the confidence scores of object masks produced by PerceptVLM and the depth image and pose in ot, we project the masks to the top-down 2D space and construct a Versatile Semantic Score Map (VSSM) Mt \u2208RH\u00d7W\u00d7(C+2), which contains C channels of object semantics, and two channels of the occupied area and explored area, with a resolution of H \u00d7 W. Each element in the map is a score in [0, 1] instead of binary labels. Since we continuously discover novel objects from the environment, the C is versatile so that we can keep updating the map, enabling life-long learning and downstream robotic tasks. Also, instead of filling binary labels into semantic channels, we fill each semantic channel with confidence scores, with which we can easily update the map if there is a change in the environment. Reasoning. Based on Mt, we can sample frontiers {Fi} with semantic information in unexplored areas for further exploration. To choose the next frontier to explore, we leverage ReasonLLM by unleashing the power of LLM\u2019s common sense reasoning. Specifically, given the semantic information around each frontier, we construct a query template in the form of \u201cThis area contains A, B and C.\u201d. 
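The mapping step above projects confidence-scored object masks into a top-down map using depth and pose. Below is a minimal sketch of that splatting operation, keeping the maximum confidence per grid cell; the pinhole intrinsics, map size, cell resolution, and planar pose convention are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

# Minimal sketch of projecting one segmentation mask into a top-down score map,
# in the spirit of the VSSM construction described above.
def splat_mask_to_map(score_map, channel, mask, confidence, depth, pose,
                      fx=256.0, fy=256.0, cx=320.0, cy=240.0, cell_size=0.05):
    v, u = np.nonzero(mask)                        # pixels belonging to the object
    z = depth[v, u]
    valid = z > 0
    u, v, z = u[valid], v[valid], z[valid]
    x_cam = (u - cx) / fx * z                      # lateral offset in meters
    xy_r, yaw = pose                               # planar robot pose: (x, y), heading
    px = xy_r[0] + z * np.cos(yaw) - x_cam * np.sin(yaw)
    py = xy_r[1] + z * np.sin(yaw) + x_cam * np.cos(yaw)
    gx = np.clip((px / cell_size).astype(int), 0, score_map.shape[0] - 1)
    gy = np.clip((py / cell_size).astype(int), 0, score_map.shape[1] - 1)
    np.maximum.at(score_map[..., channel], (gx, gy), confidence)  # keep max score
    return score_map

# Toy usage: one fake detection in a 100x100-cell map with 8 semantic channels
score_map = np.zeros((100, 100, 8))
depth = np.full((480, 640), 2.0)
mask = np.zeros((480, 640), dtype=bool)
mask[200:280, 300:340] = True
score_map = splat_mask_to_map(score_map, channel=3, mask=mask, confidence=0.9,
                              depth=depth, pose=(np.array([2.5, 2.5]), 0.0))
print(score_map[..., 3].max())  # 0.9
```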
Combined with the thought $T$ produced by Chain-of-Thought (Wei et al., 2022) prompting in the ProposeLLM and the object prompt $p_t$, the ReasonLLM conducts common sense reasoning as in Section 2.3 and rates these frontiers to pick the frontier goal $G_t$ at which the object goal is most likely to be found. This frontier goal $G_t$ guides the agent's further exploration, and low-level actions are produced to drive the agent towards it. Instead of directly asking the LLM which frontier to explore, once or multiple times (Shah et al., 2023), we introduce an additional reasoning process that prompts the LLM to rate the frontiers $\{F_i\}$ with scores $\{S_i\}$, where $S_i \in [0, 1]$ indicates the likelihood of finding the goal. The frontier with the highest score is then picked for further exploration. By leveraging this rating process, the ReasonLLM maps its common sense to concrete numbers that reflect the actual ranking, leading to better reasoning. We verify its effectiveness in Section 4.5.

It is also worth mentioning that, to balance exploration and exploitation, the ReasonLLM is activated at regular timestep intervals $\delta$ to update $G_t$. At other timesteps, the frontier goal $G_t$ remains unchanged so that the previously chosen frontier $G_{t-\delta}$ is fully explored. After obtaining the frontier goal and the occupancy channel in $M_t$, we use a control policy based on the Fast Marching Method (FMM) (Sethian, 1999) to output a low-level action $a_t$ to control the agent. This closes the loop and moves to the next timestep $t + 1$. The whole process of OpenFMNav is presented in Algorithm 1.

Algorithm 1: Pseudo-code of the overall algorithm for OpenFMNav
Data: natural language instruction I, prior objects O_pri, discovery frequency sigma_freq, frontier goal update interval delta

  t ← 0; done ← False
  G_0, M_0, O_dis ← None
  O_pro, T ← ProposeLLM(I)
  while not done do
      o_t ← getObservation()
      if toDiscover(sigma_freq) then
          O_dis, O_pro ← DiscoverVLM(o_t, I)
      end
      p_t ← getPrompt(O_pro, O_dis, O_pri)
      Masks ← PerceptVLM(o_t, p_t)
      M_t ← semanticMapping(M_{t-1}, Masks, o_t)
      if O_pro in M_t then
          G_t ← getLocation(M_t, O_pro)
      else
          if t % delta == 0 then
              {F_i} ← sampleFrontiers(M_t)
              {S_i} ← ReasonLLM({F_i}, p_t, T)
              G_t ← getLocation(M_t, argmax({S_i}))
          else
              G_t ← G_{t-1}
          end
      end
      O_pri ← updateObj(O_pro, O_dis, O_pri)
      a_t ← FMMPlanner(M_t, G_t)
      done ← stepAction(a_t, t)
      t ← t + 1
  end

4 Experiments

In this section, we comprehensively evaluate our method in simulation to show its effectiveness compared to baseline methods. We also conduct ablation studies to validate the effectiveness of our framework design.

Method                            Open-Set   Zero-Shot   SR (%) ↑   SPL ↑
FBE (Gervet et al., 2023)            ×           ✓         23.7     0.123
SemExp (Chaplot et al., 2020a)       ×           ×         37.9     0.188
ZSON (Majumdar et al., 2022)         ✓           ×         25.5     0.126
GoW (Gadre et al., 2023)             ✓           ✓         32.0     0.181
ESC (Zhou et al., 2023)              ✓           ✓         38.5     0.220
L3MVN (Yu et al., 2023)              ×           ✓         50.4     0.231
L3MVN + GPT-4 (Yu et al., 2023)      ×           ✓         51.8     0.234
PixNav (Cai et al., 2023)            ✓           ×         37.9     0.205
OpenFMNav (Ours)                     ✓           ✓         54.9     0.244

Table 1: Comparison between different methods on the HM3D ObjectNav benchmark. Our method outperforms all the baseline methods on all metrics and achieves open-set zero-shot object navigation.
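For concreteness, the frontier-rating step of Section 3.3 can be sketched as follows. The paper's actual prompts are given in its Appendix C and are not reproduced here; the prompt wording, the JSON response format, and the llm callable (standing in for a GPT-4-style text interface) are assumptions of this sketch.

```python
import json
from typing import Callable, Dict

def score_frontiers(frontiers: Dict[str, str], goal: str,
                    llm: Callable[[str], str]) -> Dict[str, float]:
    """Rate each frontier with a score in [0, 1] using an LLM.

    frontiers : mapping from a frontier id to its semantic description,
                e.g. {"F1": "This area contains couch, tv and table."}
    goal      : the object (or demand) being searched for.
    llm       : any text-in/text-out interface (e.g. a GPT-4 wrapper).
    """
    prompt = (
        f"You are helping a robot find: {goal}.\n"
        "For each frontier below, output a JSON object mapping its id to a "
        "score between 0 and 1, where 1 means the goal is very likely to be "
        "found by exploring that frontier.\n"
        + "\n".join(f"{fid}: {desc}" for fid, desc in frontiers.items())
    )
    raw = llm(prompt)
    try:
        parsed = json.loads(raw)
        scores = {str(k): float(v) for k, v in parsed.items()}
    except (json.JSONDecodeError, AttributeError, TypeError, ValueError):
        scores = {fid: 0.5 for fid in frontiers}  # fall back to uniform scores
    # Clamp to [0, 1] and make sure every frontier receives a score.
    return {fid: min(max(scores.get(fid, 0.0), 0.0), 1.0) for fid in frontiers}

def pick_frontier(scores: Dict[str, float]) -> str:
    """Return the id of the highest-scoring frontier (the next goal G_t)."""
    return max(scores, key=scores.get)
```

The highest-scoring frontier becomes the next goal $G_t$, matching the argmax in Algorithm 1; the clamping and uniform fallback only guard against malformed LLM output.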
4.1 Experimental Setup

In simulation, we evaluate on the HM3D ObjectNav benchmark based on the Habitat-Matterport 3D Semantics Dataset (Yadav et al., 2022b), which contains 80 training scenes and 20 validation scenes. We use the validation scenes for evaluation. The dataset contains 2000 episodes in total and six goal classes (chair, couch, potted plant, bed, toilet, and tv). The action space of the robot agent is {stop, move_forward, turn_left, turn_right, look_up, look_down}. The forward distance is set to 0.25 m and the rotation angle to 30 degrees. Following previous works (Zhou et al., 2023; Cai et al., 2023), we use the Success Rate (SR) metric to measure whether an agent can find the desired objects. We also report Success weighted by Path Length (SPL) to measure navigation efficiency.

4.2 Implementation Details

In our method, the foundation models we use are GPT-4 (text-only) (OpenAI, 2023) for the ProposeLLM and ReasonLLM, and GPT-4V (Yang et al., 2023b) for the DiscoverVLM. For the PerceptVLM, we use Grounded-SAM, which first leverages Grounding DINO (Liu et al., 2023) to produce bounding boxes given the RGB image in $o_t$ and the object prompt $p_t$, and then applies the Segment Anything Model (SAM) (Kirillov et al., 2023) to each bounding box to produce high-granularity object masks for semantic mapping. Moreover, we use the Chain-of-Thought (CoT) (Wei et al., 2022) prompting technique to fully exploit the reasoning abilities of the ProposeLLM, ReasonLLM, and DiscoverVLM. The prompts we used can be found in Appendix C. In simulation, we set the update interval $\delta$ to 20 timesteps, the discovery frequency $\sigma_{freq}$ to 0.01, and the initial prior objects to a subset of the HM3D object categories, listed in Appendix B.

4.3 Baseline Methods

We compare our method with several recent works, focusing on open-set and zero-shot object navigation baselines to verify our framework's effectiveness. We classify each baseline by whether it is "Open-Set" and whether it is "Zero-Shot". Here, "Open-Set" means that the method can find whatever object category we want, and "Zero-Shot" means that the agent has not been trained or finetuned on any of the data beforehand, including images, episodes, and locomotion planning. The baseline methods are as follows:

• FBE (Gervet et al., 2023). This baseline employs a classical robotics pipeline for mapping and a frontier-based exploration algorithm.

• SemExp (Chaplot et al., 2020a). A method that explores and searches for the target using closed-set semantic maps and reinforcement learning.

• ZSON (Majumdar et al., 2022). An RGB-based zero-shot object navigation baseline that uses CLIP (Radford et al., 2021) to embed scene features. It is trained on ImageNav and directly transferred to ObjectNav.

• GoW (Gadre et al., 2023). A modification of CoW (Gadre et al., 2023) implemented by Zhou et al. (2023) that uses GLIP (Li* et al., 2022) for object detection and the vanilla frontier-based exploration method.

Method           SR (%) ↑   SPL ↑
w/o GPT-4          53.6     0.230
w/o CoT            51.8     0.208
w/o Discovery      50.0     0.222
w/o Scoring        50.0     0.208
Ours               55.4     0.239

Table 2: Ablation studies on different components of our method. Experiments are conducted under the same uniformly sampled episodes.

• ESC (Zhou et al., 2023).
A map-based zero-shot object navigation baseline that uses GLIP (Li* et al., 2022) to detect objects and rooms, and combines an LLM with soft commonsense constraints for planning.

• L3MVN (Yu et al., 2023). An LLM-based baseline that finetunes a closed-set object detector (Jiang et al., 2018) and an LLM to conduct frontier-based exploration. We also conduct experiments that replace its LLM with GPT-4 for a fairer comparison.

• PixNav (Cai et al., 2023). A recent work that relies solely on foundation models to pick navigation pixels and trains a locomotion module to navigate to the chosen pixels.

4.4 Results and Analysis

We report the main results in Table 1. Our method surpasses all the baselines on both Success Rate (SR) and Success weighted by Path Length (SPL), especially compared with open-set zero-shot methods. In particular, it surpasses the previous state-of-the-art method for open-set zero-shot object navigation (Zhou et al., 2023) by over 15% on the success rate metric, suggesting that our framework is indeed effective.

First, we compare our method with previous semantic map based methods, such as SemExp (Chaplot et al., 2020a), ESC (Zhou et al., 2023) and L3MVN (Yu et al., 2023). The results show that our method performs better because we use the DiscoverVLM to construct a VSSM with versatile out-of-vocabulary class labels, such as "marble statue" and "range hood", which alleviates the issue of limited categories and enriches the semantic information of the environment. Moreover, unlike these methods, ours achieves open-set navigation, which better adapts to complex situations and real-world applications.

[Figure 3: Types and percentages of failure cases in ablation methods, broken down into Collision, Exploration, and Detection for w/o GPT-4, w/o CoT, w/o Discovery, w/o Scoring, and Ours.]

Compared with other open-set baselines, such as PixNav (Cai et al., 2023), ZSON (Majumdar et al., 2022) and GoW (Gadre et al., 2023), our method constructs an explicit map in which all discovered objects are represented. We can therefore exploit LLMs' reasoning abilities to balance exploration and exploitation and steer the agent to where the goal is most likely to be. Also, the map constructed by our method is maintained and updated, which suits life-long learning and enables downstream robotic tasks with further natural language instructions, whereas methods like Gadre et al. (2023); Yokoyama et al. (2023) only construct implicit maps for a specific goal, which become useless once the navigation ends.

4.5 Ablation Studies

Probing deeper into our method design, we also performed ablation studies on various components of our pipeline. Note that, to save time and cost, we test all the ablation methods on a subset of the full dataset under the same uniformly sampled episodes, so the result of our full method may differ slightly from Table 1. Table 2 shows that removing any of the ablated components of our framework leads to significantly worse performance. We also categorize the failure cases into different types and report their percentages in Fig. 3, in which Collision refers to the situation where the agent cannot avoid colliding with the environment, Exploration means the agent times out while trying to find the goal, and Detection means the agent mistakenly identifies a wrong object as the goal.

Effectiveness of using larger models. First, we analyze the usage of GPT-4 for the LLMs.
Compared to only using GPT-3.5, using the larger GPT-4 achieves better performance (+1.8%), reducing the Collision and Detection failure cases. However, the percentage of Exploration failures is slightly higher, showing that larger models give more diverse answers that encourage more exploration, which can in turn cause more timeouts.

Effectiveness of our joint reasoning pipeline. Next, we analyze the different foundation model components. We find that CoT prompting (+3.6%) and scoring prompting (+5.4%) are essential to the strong performance of OpenFMNav, since they generate more reasoning chains that elicit the common sense of large language models. Also, compared to restricting the object set, leveraging the DiscoverVLM not only allows more free-form natural language instructions from users but also enriches the scene's semantics, which helps the reasoning for frontier-based exploration and improves performance (+5.4%). These components reduce failure cases across all categories.

[Figure 4: Qualitative studies in the real world. Panels: (a) robust to distractors ("Find a red chair."); (b) robust to open-set objects ("Can you get a robot arm?"); (c) robust to free-form demands ("I need to wash my hands!"). Text marked in red indicates objects that potentially satisfy the instruction. Results show that our method is robust to natural language instructions, including distractors, open-set objects and free-form demands.]

5 Navigation in the Real World

We further conduct real robot demonstrations to show our method's ability to understand free-form natural language instructions and perform open-set zero-shot navigation in the real world.

5.1 Real Robot Setup

For the robot, we use a TurtleBot4 with scalable structures to navigate on the ground. We limit its action space to {stop, move_forward, turn_left, turn_right}. As in the simulation, we set the forward distance to 0.25 m and the rotation angle to 30 degrees (a minimal sketch of this discrete interface is given below). For robotic perception, we use a Kinect RGBD camera to capture RGBD images. For real-world environments, we select multiple rooms (including offices, labs, and meeting rooms) with sufficient space and various objects for the robot to navigate.
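The discrete action interface mentioned above can be sketched as follows, turning each action into a relative pose target using the 0.25 m and 30-degree step sizes. The Pose2D and apply_action names are illustrative assumptions; the TurtleBot4 drivers and the low-level velocity controller are hardware-specific and not shown.

```python
import math
from dataclasses import dataclass

FORWARD_STEP_M = 0.25              # forward distance per move_forward
TURN_STEP_RAD = math.radians(30)   # rotation per turn_left / turn_right

@dataclass
class Pose2D:
    x: float    # metres
    y: float    # metres
    yaw: float  # radians, counter-clockwise

def apply_action(pose: Pose2D, action: str) -> Pose2D:
    """Return the target pose after one discrete navigation action.

    A hardware-specific controller (omitted here) is expected to drive the
    robot from `pose` to the returned target.
    """
    if action == "move_forward":
        return Pose2D(pose.x + FORWARD_STEP_M * math.cos(pose.yaw),
                      pose.y + FORWARD_STEP_M * math.sin(pose.yaw),
                      pose.yaw)
    if action == "turn_left":
        return Pose2D(pose.x, pose.y, pose.yaw + TURN_STEP_RAD)
    if action == "turn_right":
        return Pose2D(pose.x, pose.y, pose.yaw - TURN_STEP_RAD)
    if action == "stop":
        return pose
    raise ValueError(f"unknown action: {action}")
```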
The selected rooms contain not only common objects like "chair", "couch", "desk", "computer", "cabinet", etc., but also less common ones like "robot arm", "3D printer", "coffee machine", etc.

5.2 Qualitative Studies

We conduct qualitative studies of OpenFMNav in the real world, as shown in Fig. 4. The results show that our method performs effective zero-shot navigation in the real world given free-form natural language instructions. In particular, our method is robust to distractors, open-set objects and free-form demands.

For distractors, our ProposeLLM extracts not only the object category but also the attributes in the instruction ("red chair"), which can then be detected and segmented by the PerceptVLM. In Fig. 4(a), we can see that, among the three chairs in the observation, only the red chair is masked.

For open-set objects, thanks to the large-scale training data of foundation models, our method can also navigate to objects that are uncommon and out-of-vocabulary, such as the "robot arm" in Fig. 4(b).

Another intriguing feature of our method is that it can adaptively add goals during navigation. This happens when the instruction is a free-form demand involving ambiguous objects. For example, in Fig. 4(c), when the user needs to wash their hands, the ProposeLLM first proposes "sink", "soap" and "towel", but these are not necessarily present in the scene. While the agent explores the environment, the DiscoverVLM actively discovers what is new in the environment and reasons about whether the discovered objects can potentially fulfill the user's demand. In this case, a "tap" is discovered and identified as a goal, so the agent can navigate directly to it without further exploration. This is extremely helpful when humans themselves are unaware of the scene details.

6 Conclusions

In this paper, we presented a novel framework, OpenFMNav, for open-set zero-shot object navigation. By leveraging foundation models, our method can understand free-form natural language instructions, conduct reasoning, and perform effective zero-shot object navigation. Extensive experiments showed the superiority of our framework. Finally, we conducted real robot demonstrations to validate our method's open-set capability and generalizability to real-world environments.

Ethics Statement

In this paper, we present a method for open-set zero-shot object navigation. This method can be used for zero-shot robotic navigation in diverse scenarios, such as home robots, warehouse robots, and so on. Our work further addresses the issue of ambiguous or free-form natural language instructions, benefiting the interaction between humans and robots. However, foundation models can have safety issues and risks such as privacy leaks and jailbreaking (Deng et al., 2023; Chao et al., 2023), which need to be further addressed.

Limitations

While extensive experiments validate the effectiveness of our method design, our work has a number of limitations. First, our method requires relatively accurate depth sensors to build the 2D map, while the observed depths and camera poses may be noisy in practice, causing performance degradation. Moreover, we acknowledge that our method requires a stable Internet connection to get responses from the APIs of foundation models, limiting the potential for large-scale deployment in harsh environments.
Another limitation is that the use of LLMs may not always be real-time, which can cause latency issues. We hope future works on depth sensing, LLM quantization, and edge computing can mitigate such limitations. Acknowledgements This work was partially supported by NSF IIS2119531, IIS-2137396, IIS-2142827, IIS-2234058, CCF-1901059, and ONR N00014-22-1-2507." + }, + { + "url": "http://arxiv.org/abs/1803.00653v1", + "title": "Semi-parametric Topological Memory for Navigation", + "abstract": "We introduce a new memory architecture for navigation in previously unseen\nenvironments, inspired by landmark-based navigation in animals. The proposed\nsemi-parametric topological memory (SPTM) consists of a (non-parametric) graph\nwith nodes corresponding to locations in the environment and a (parametric)\ndeep network capable of retrieving nodes from the graph based on observations.\nThe graph stores no metric information, only connectivity of locations\ncorresponding to the nodes. We use SPTM as a planning module in a navigation\nsystem. Given only 5 minutes of footage of a previously unseen maze, an\nSPTM-based navigation agent can build a topological map of the environment and\nuse it to confidently navigate towards goals. The average success rate of the\nSPTM agent in goal-directed navigation across test environments is higher than\nthe best-performing baseline by a factor of three. A video of the agent is\navailable at https://youtu.be/vRF7f4lhswo", + "authors": "Nikolay Savinov, Alexey Dosovitskiy, Vladlen Koltun", + "published": "2018-03-01", + "updated": "2018-03-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2312.03275v1", + "title": "VLFM: Vision-Language Frontier Maps for Zero-Shot Semantic Navigation", + "abstract": "Understanding how humans leverage semantic knowledge to navigate unfamiliar\nenvironments and decide where to explore next is pivotal for developing robots\ncapable of human-like search behaviors. We introduce a zero-shot navigation\napproach, Vision-Language Frontier Maps (VLFM), which is inspired by human\nreasoning and designed to navigate towards unseen semantic objects in novel\nenvironments. VLFM builds occupancy maps from depth observations to identify\nfrontiers, and leverages RGB observations and a pre-trained vision-language\nmodel to generate a language-grounded value map. VLFM then uses this map to\nidentify the most promising frontier to explore for finding an instance of a\ngiven target object category. We evaluate VLFM in photo-realistic environments\nfrom the Gibson, Habitat-Matterport 3D (HM3D), and Matterport 3D (MP3D)\ndatasets within the Habitat simulator. Remarkably, VLFM achieves\nstate-of-the-art results on all three datasets as measured by success weighted\nby path length (SPL) for the Object Goal Navigation task. Furthermore, we show\nthat VLFM's zero-shot nature enables it to be readily deployed on real-world\nrobots such as the Boston Dynamics Spot mobile manipulation platform. We deploy\nVLFM on Spot and demonstrate its capability to efficiently navigate to target\nobjects within an office building in the real world, without any prior\nknowledge of the environment. The accomplishments of VLFM underscore the\npromising potential of vision-language models in advancing the field of\nsemantic navigation. 
Videos of real-world deployment can be viewed at\nnaoki.io/vlfm.", + "authors": "Naoki Yokoyama, Sehoon Ha, Dhruv Batra, Jiuguang Wang, Bernadette Bucher", + "published": "2023-12-06", + "updated": "2023-12-06", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2301.12597v3", + "title": "BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models", + "abstract": "The cost of vision-and-language pre-training has become increasingly\nprohibitive due to end-to-end training of large-scale models. This paper\nproposes BLIP-2, a generic and efficient pre-training strategy that bootstraps\nvision-language pre-training from off-the-shelf frozen pre-trained image\nencoders and frozen large language models. BLIP-2 bridges the modality gap with\na lightweight Querying Transformer, which is pre-trained in two stages. The\nfirst stage bootstraps vision-language representation learning from a frozen\nimage encoder. The second stage bootstraps vision-to-language generative\nlearning from a frozen language model. BLIP-2 achieves state-of-the-art\nperformance on various vision-language tasks, despite having significantly\nfewer trainable parameters than existing methods. For example, our model\noutperforms Flamingo80B by 8.7% on zero-shot VQAv2 with 54x fewer trainable\nparameters. We also demonstrate the model's emerging capabilities of zero-shot\nimage-to-text generation that can follow natural language instructions.", + "authors": "Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi", + "published": "2023-01-30", + "updated": "2023-06-15", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2303.05499v4", + "title": "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection", + "abstract": "In this paper, we present an open-set object detector, called Grounding DINO,\nby marrying Transformer-based detector DINO with grounded pre-training, which\ncan detect arbitrary objects with human inputs such as category names or\nreferring expressions. The key solution of open-set object detection is\nintroducing language to a closed-set detector for open-set concept\ngeneralization. To effectively fuse language and vision modalities, we\nconceptually divide a closed-set detector into three phases and propose a tight\nfusion solution, which includes a feature enhancer, a language-guided query\nselection, and a cross-modality decoder for cross-modality fusion. While\nprevious works mainly evaluate open-set object detection on novel categories,\nwe propose to also perform evaluations on referring expression comprehension\nfor objects specified with attributes. Grounding DINO performs remarkably well\non all three settings, including benchmarks on COCO, LVIS, ODinW, and\nRefCOCO/+/g. Grounding DINO achieves a $52.5$ AP on the COCO detection\nzero-shot transfer benchmark, i.e., without any training data from COCO. It\nsets a new record on the ODinW zero-shot benchmark with a mean $26.1$ AP. 
Code\nwill be available at \\url{https://github.com/IDEA-Research/GroundingDINO}.", + "authors": "Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang", + "published": "2023-03-09", + "updated": "2023-03-20", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2005.12256v2", + "title": "Neural Topological SLAM for Visual Navigation", + "abstract": "This paper studies the problem of image-goal navigation which involves\nnavigating to the location indicated by a goal image in a novel previously\nunseen environment. To tackle this problem, we design topological\nrepresentations for space that effectively leverage semantics and afford\napproximate geometric reasoning. At the heart of our representations are nodes\nwith associated semantic features, that are interconnected using coarse\ngeometric information. We describe supervised learning-based algorithms that\ncan build, maintain and use such representations under noisy actuation.\nExperimental study in visually and physically realistic simulation suggests\nthat our method builds effective representations that capture structural\nregularities and efficiently solve long-horizon navigation problems. We observe\na relative improvement of more than 50% over existing methods that study this\ntask.", + "authors": "Devendra Singh Chaplot, Ruslan Salakhutdinov, Abhinav Gupta, Saurabh Gupta", + "published": "2020-05-25", + "updated": "2020-05-28", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG", + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2108.07258v3", + "title": "On the Opportunities and Risks of Foundation Models", + "abstract": "AI is undergoing a paradigm shift with the rise of models (e.g., BERT,\nDALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a\nwide range of downstream tasks. We call these models foundation models to\nunderscore their critically central yet incomplete character. This report\nprovides a thorough account of the opportunities and risks of foundation\nmodels, ranging from their capabilities (e.g., language, vision, robotics,\nreasoning, human interaction) and technical principles(e.g., model\narchitectures, training procedures, data, systems, security, evaluation,\ntheory) to their applications (e.g., law, healthcare, education) and societal\nimpact (e.g., inequity, misuse, economic and environmental impact, legal and\nethical considerations). Though foundation models are based on standard deep\nlearning and transfer learning, their scale results in new emergent\ncapabilities,and their effectiveness across so many tasks incentivizes\nhomogenization. Homogenization provides powerful leverage but demands caution,\nas the defects of the foundation model are inherited by all the adapted models\ndownstream. Despite the impending widespread deployment of foundation models,\nwe currently lack a clear understanding of how they work, when they fail, and\nwhat they are even capable of due to their emergent properties. To tackle these\nquestions, we believe much of the critical research on foundation models will\nrequire deep interdisciplinary collaboration commensurate with their\nfundamentally sociotechnical nature.", + "authors": "Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. 
Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher R\u00e9, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tram\u00e8r, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, Percy Liang", + "published": "2021-08-16", + "updated": "2022-07-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CY" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2010.07954v1", + "title": "Room-Across-Room: Multilingual Vision-and-Language Navigation with Dense Spatiotemporal Grounding", + "abstract": "We introduce Room-Across-Room (RxR), a new Vision-and-Language Navigation\n(VLN) dataset. RxR is multilingual (English, Hindi, and Telugu) and larger\n(more paths and instructions) than other VLN datasets. It emphasizes the role\nof language in VLN by addressing known biases in paths and eliciting more\nreferences to visible entities. Furthermore, each word in an instruction is\ntime-aligned to the virtual poses of instruction creators and validators. We\nestablish baseline scores for monolingual and multilingual settings and\nmultitask learning when including Room-to-Room annotations. We also provide\nresults for a model that learns from synchronized pose traces by focusing only\non portions of the panorama attended to in human demonstrations. The size,\nscope and detail of RxR dramatically expands the frontier for research on\nembodied language agents in simulated, photo-realistic environments.", + "authors": "Alexander Ku, Peter Anderson, Roma Patel, Eugene Ie, Jason Baldridge", + "published": "2020-10-15", + "updated": "2020-10-15", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2302.13971v1", + "title": "LLaMA: Open and Efficient Foundation Language Models", + "abstract": "We introduce LLaMA, a collection of foundation language models ranging from\n7B to 65B parameters. 
We train our models on trillions of tokens, and show that\nit is possible to train state-of-the-art models using publicly available\ndatasets exclusively, without resorting to proprietary and inaccessible\ndatasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks,\nand LLaMA-65B is competitive with the best models, Chinchilla-70B and\nPaLM-540B. We release all our models to the research community.", + "authors": "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth\u00e9e Lacroix, Baptiste Rozi\u00e8re, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample", + "published": "2023-02-27", + "updated": "2023-02-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2109.14084v2", + "title": "VideoCLIP: Contrastive Pre-training for Zero-shot Video-Text Understanding", + "abstract": "We present VideoCLIP, a contrastive approach to pre-train a unified model for\nzero-shot video and text understanding, without using any labels on downstream\ntasks. VideoCLIP trains a transformer for video and text by contrasting\ntemporally overlapping positive video-text pairs with hard negatives from\nnearest neighbor retrieval. Our experiments on a diverse series of downstream\ntasks, including sequence-level text-video retrieval, VideoQA, token-level\naction localization, and action segmentation reveal state-of-the-art\nperformance, surpassing prior work, and in some cases even outperforming\nsupervised approaches. Code is made available at\nhttps://github.com/pytorch/fairseq/tree/main/examples/MMPT.", + "authors": "Hu Xu, Gargi Ghosh, Po-Yao Huang, Dmytro Okhonko, Armen Aghajanyan, Florian Metze, Luke Zettlemoyer, Christoph Feichtenhofer", + "published": "2021-09-28", + "updated": "2021-10-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2304.02643v1", + "title": "Segment Anything", + "abstract": "We introduce the Segment Anything (SA) project: a new task, model, and\ndataset for image segmentation. Using our efficient model in a data collection\nloop, we built the largest segmentation dataset to date (by far), with over 1\nbillion masks on 11M licensed and privacy respecting images. The model is\ndesigned and trained to be promptable, so it can transfer zero-shot to new\nimage distributions and tasks. We evaluate its capabilities on numerous tasks\nand find that its zero-shot performance is impressive -- often competitive with\nor even superior to prior fully supervised results. We are releasing the\nSegment Anything Model (SAM) and corresponding dataset (SA-1B) of 1B masks and\n11M images at https://segment-anything.com to foster research into foundation\nmodels for computer vision.", + "authors": "Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Doll\u00e1r, Ross Girshick", + "published": "2023-04-05", + "updated": "2023-04-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2006.13171v2", + "title": "ObjectNav Revisited: On Evaluation of Embodied Agents Navigating to Objects", + "abstract": "We revisit the problem of Object-Goal Navigation (ObjectNav). 
In its simplest\nform, ObjectNav is defined as the task of navigating to an object, specified by\nits label, in an unexplored environment. In particular, the agent is\ninitialized at a random location and pose in an environment and asked to find\nan instance of an object category, e.g., find a chair, by navigating to it.\n As the community begins to show increased interest in semantic goal\nspecification for navigation tasks, a number of different often-inconsistent\ninterpretations of this task are emerging. This document summarizes the\nconsensus recommendations of this working group on ObjectNav. In particular, we\nmake recommendations on subtle but important details of evaluation criteria\n(for measuring success when navigating towards a target object), the agent's\nembodiment parameters, and the characteristics of the environments within which\nthe task is carried out. Finally, we provide a detailed description of the\ninstantiation of these recommendations in challenges organized at the Embodied\nAI workshop at CVPR 2020 http://embodied-ai.org .", + "authors": "Dhruv Batra, Aaron Gokaslan, Aniruddha Kembhavi, Oleksandr Maksymets, Roozbeh Mottaghi, Manolis Savva, Alexander Toshev, Erik Wijmans", + "published": "2020-06-23", + "updated": "2020-08-30", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2301.13166v3", + "title": "ESC: Exploration with Soft Commonsense Constraints for Zero-shot Object Navigation", + "abstract": "The ability to accurately locate and navigate to a specific object is a\ncrucial capability for embodied agents that operate in the real world and\ninteract with objects to complete tasks. Such object navigation tasks usually\nrequire large-scale training in visual environments with labeled objects, which\ngeneralizes poorly to novel objects in unknown environments. In this work, we\npresent a novel zero-shot object navigation method, Exploration with Soft\nCommonsense constraints (ESC), that transfers commonsense knowledge in\npre-trained models to open-world object navigation without any navigation\nexperience nor any other training on the visual environments. First, ESC\nleverages a pre-trained vision and language model for open-world prompt-based\ngrounding and a pre-trained commonsense language model for room and object\nreasoning. Then ESC converts commonsense knowledge into navigation actions by\nmodeling it as soft logic predicates for efficient exploration. Extensive\nexperiments on MP3D, HM3D, and RoboTHOR benchmarks show that our ESC method\nimproves significantly over baselines, and achieves new state-of-the-art\nresults for zero-shot object navigation (e.g., 288% relative Success Rate\nimprovement than CoW on MP3D).", + "authors": "Kaiwen Zhou, Kaizhi Zheng, Connor Pryor, Yilin Shen, Hongxia Jin, Lise Getoor, Xin Eric Wang", + "published": "2023-01-30", + "updated": "2023-07-06", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CV", + "cs.LG", + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2309.10309v2", + "title": "Bridging Zero-shot Object Navigation and Foundation Models through Pixel-Guided Navigation Skill", + "abstract": "Zero-shot object navigation is a challenging task for home-assistance robots.\nThis task emphasizes visual grounding, commonsense inference and locomotion\nabilities, where the first two are inherent in foundation models. But for the\nlocomotion part, most works still depend on map-based planning approaches. 
The\ngap between RGB space and map space makes it difficult to directly transfer the\nknowledge from foundation models to navigation tasks. In this work, we propose\na Pixel-guided Navigation skill (PixNav), which bridges the gap between the\nfoundation models and the embodied navigation task. It is straightforward for\nrecent foundation models to indicate an object by pixels, and with pixels as\nthe goal specification, our method becomes a versatile navigation policy\ntowards all different kinds of objects. Besides, our PixNav is a pure RGB-based\npolicy that can reduce the cost of home-assistance robots. Experiments\ndemonstrate the robustness of the PixNav which achieves 80+% success rate in\nthe local path-planning task. To perform long-horizon object navigation, we\ndesign an LLM-based planner to utilize the commonsense knowledge between\nobjects and rooms to select the best waypoint. Evaluations across both\nphotorealistic indoor simulators and real-world environments validate the\neffectiveness of our proposed navigation strategy. Code and video demos are\navailable at https://github.com/wzcai99/Pixel-Navigator.", + "authors": "Wenzhe Cai, Siyuan Huang, Guangran Cheng, Yuxing Long, Peng Gao, Changyin Sun, Hao Dong", + "published": "2023-09-19", + "updated": "2023-09-21", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1711.07280v3", + "title": "Vision-and-Language Navigation: Interpreting visually-grounded navigation instructions in real environments", + "abstract": "A robot that can carry out a natural-language instruction has been a dream\nsince before the Jetsons cartoon series imagined a life of leisure mediated by\na fleet of attentive robot helpers. It is a dream that remains stubbornly\ndistant. However, recent advances in vision and language methods have made\nincredible progress in closely related areas. This is significant because a\nrobot interpreting a natural-language navigation instruction on the basis of\nwhat it sees is carrying out a vision and language process that is similar to\nVisual Question Answering. Both tasks can be interpreted as visually grounded\nsequence-to-sequence translation problems, and many of the same methods are\napplicable. To enable and encourage the application of vision and language\nmethods to the problem of interpreting visually-grounded navigation\ninstructions, we present the Matterport3D Simulator -- a large-scale\nreinforcement learning environment based on real imagery. Using this simulator,\nwhich can in future support a range of embodied vision and language tasks, we\nprovide the first benchmark dataset for visually-grounded natural language\nnavigation in real buildings -- the Room-to-Room (R2R) dataset.", + "authors": "Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko S\u00fcnderhauf, Ian Reid, Stephen Gould, Anton van den Hengel", + "published": "2017-11-20", + "updated": "2018-04-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.CL", + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2309.17421v2", + "title": "The Dawn of LMMs: Preliminary Explorations with GPT-4V(ision)", + "abstract": "Large multimodal models (LMMs) extend large language models (LLMs) with\nmulti-sensory skills, such as visual understanding, to achieve stronger generic\nintelligence. In this paper, we analyze the latest model, GPT-4V(ision), to\ndeepen the understanding of LMMs. 
The analysis focuses on the intriguing tasks\nthat GPT-4V can perform, containing test samples to probe the quality and\ngenericity of GPT-4V's capabilities, its supported inputs and working modes,\nand the effective ways to prompt the model. In our approach to exploring\nGPT-4V, we curate and organize a collection of carefully designed qualitative\nsamples spanning a variety of domains and tasks. Observations from these\nsamples demonstrate that GPT-4V's unprecedented ability in processing\narbitrarily interleaved multimodal inputs and the genericity of its\ncapabilities together make GPT-4V a powerful multimodal generalist system.\nFurthermore, GPT-4V's unique capability of understanding visual markers drawn\non input images can give rise to new human-computer interaction methods such as\nvisual referring prompting. We conclude the report with in-depth discussions on\nthe emerging application scenarios and the future research directions for\nGPT-4V-based systems. We hope that this preliminary exploration will inspire\nfuture research on the next-generation multimodal task formulation, new ways to\nexploit and enhance LMMs to solve real-world problems, and gaining better\nunderstanding of multimodal foundation models. Finally, we acknowledge that the\nmodel under our study is solely the product of OpenAI's innovative work, and\nthey should be fully credited for its development. Please see the GPT-4V\ncontributions paper for the authorship and credit attribution:\nhttps://cdn.openai.com/contributions/gpt-4v.pdf", + "authors": "Zhengyuan Yang, Linjie Li, Kevin Lin, Jianfeng Wang, Chung-Ching Lin, Zicheng Liu, Lijuan Wang", + "published": "2023-09-29", + "updated": "2023-10-11", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2203.12667v3", + "title": "Vision-and-Language Navigation: A Survey of Tasks, Methods, and Future Directions", + "abstract": "A long-term goal of AI research is to build intelligent agents that can\ncommunicate with humans in natural language, perceive the environment, and\nperform real-world tasks. Vision-and-Language Navigation (VLN) is a fundamental\nand interdisciplinary research topic towards this goal, and receives increasing\nattention from natural language processing, computer vision, robotics, and\nmachine learning communities. In this paper, we review contemporary studies in\nthe emerging field of VLN, covering tasks, evaluation metrics, methods, etc.\nThrough structured analysis of current progress and challenges, we highlight\nthe limitations of current VLN and opportunities for future work. This paper\nserves as a thorough reference for the VLN research community.", + "authors": "Jing Gu, Eliana Stefani, Qi Wu, Jesse Thomason, Xin Eric Wang", + "published": "2022-03-22", + "updated": "2022-06-03", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2303.05499v4", + "title": "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection", + "abstract": "In this paper, we present an open-set object detector, called Grounding DINO,\nby marrying Transformer-based detector DINO with grounded pre-training, which\ncan detect arbitrary objects with human inputs such as category names or\nreferring expressions. The key solution of open-set object detection is\nintroducing language to a closed-set detector for open-set concept\ngeneralization. 
To effectively fuse language and vision modalities, we\nconceptually divide a closed-set detector into three phases and propose a tight\nfusion solution, which includes a feature enhancer, a language-guided query\nselection, and a cross-modality decoder for cross-modality fusion. While\nprevious works mainly evaluate open-set object detection on novel categories,\nwe propose to also perform evaluations on referring expression comprehension\nfor objects specified with attributes. Grounding DINO performs remarkably well\non all three settings, including benchmarks on COCO, LVIS, ODinW, and\nRefCOCO/+/g. Grounding DINO achieves a $52.5$ AP on the COCO detection\nzero-shot transfer benchmark, i.e., without any training data from COCO. It\nsets a new record on the ODinW zero-shot benchmark with a mean $26.1$ AP. Code\nwill be available at \\url{https://github.com/IDEA-Research/GroundingDINO}.", + "authors": "Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang", + "published": "2023-03-09", + "updated": "2023-03-20", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2203.10421v2", + "title": "CoWs on Pasture: Baselines and Benchmarks for Language-Driven Zero-Shot Object Navigation", + "abstract": "For robots to be generally useful, they must be able to find arbitrary\nobjects described by people (i.e., be language-driven) even without expensive\nnavigation training on in-domain data (i.e., perform zero-shot inference). We\nexplore these capabilities in a unified setting: language-driven zero-shot\nobject navigation (L-ZSON). Inspired by the recent success of open-vocabulary\nmodels for image classification, we investigate a straightforward framework,\nCLIP on Wheels (CoW), to adapt open-vocabulary models to this task without\nfine-tuning. To better evaluate L-ZSON, we introduce the Pasture benchmark,\nwhich considers finding uncommon objects, objects described by spatial and\nappearance attributes, and hidden objects described relative to visible\nobjects. We conduct an in-depth empirical study by directly deploying 21 CoW\nbaselines across Habitat, RoboTHOR, and Pasture. In total, we evaluate over 90k\nnavigation episodes and find that (1) CoW baselines often struggle to leverage\nlanguage descriptions, but are proficient at finding uncommon objects. (2) A\nsimple CoW, with CLIP-based object localization and classical exploration --\nand no additional training -- matches the navigation efficiency of a\nstate-of-the-art ZSON method trained for 500M steps on Habitat MP3D data. This\nsame CoW provides a 15.6 percentage point improvement in success over a\nstate-of-the-art RoboTHOR ZSON model.", + "authors": "Samir Yitzhak Gadre, Mitchell Wortsman, Gabriel Ilharco, Ludwig Schmidt, Shuran Song", + "published": "2022-03-20", + "updated": "2022-12-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG", + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2309.08138v3", + "title": "Find What You Want: Learning Demand-conditioned Object Attribute Space for Demand-driven Navigation", + "abstract": "The task of Visual Object Navigation (VON) involves an agent's ability to\nlocate a particular object within a given scene. 
In order to successfully\naccomplish the VON task, two essential conditions must be fulfilled:1) the user\nmust know the name of the desired object; and 2) the user-specified object must\nactually be present within the scene. To meet these conditions, a simulator can\nincorporate pre-defined object names and positions into the metadata of the\nscene. However, in real-world scenarios, it is often challenging to ensure that\nthese conditions are always met. Human in an unfamiliar environment may not\nknow which objects are present in the scene, or they may mistakenly specify an\nobject that is not actually present. Nevertheless, despite these challenges,\nhuman may still have a demand for an object, which could potentially be\nfulfilled by other objects present within the scene in an equivalent manner.\nHence, we propose Demand-driven Navigation (DDN), which leverages the user's\ndemand as the task instruction and prompts the agent to find the object matches\nthe specified demand. DDN aims to relax the stringent conditions of VON by\nfocusing on fulfilling the user's demand rather than relying solely on\npredefined object categories or names. We propose a method first acquire\ntextual attribute features of objects by extracting common knowledge from a\nlarge language model. These textual attribute features are subsequently aligned\nwith visual attribute features using Contrastive Language-Image Pre-training\n(CLIP). By incorporating the visual attribute features as prior knowledge, we\nenhance the navigation process. Experiments on AI2Thor with the ProcThor\ndataset demonstrate the visual attribute features improve the agent's\nnavigation performance and outperform the baseline methods commonly used in\nVON.", + "authors": "Hongcheng Wang, Andy Guan Hong Chen, Xiaoqi Li, Mingdong Wu, Hao Dong", + "published": "2023-09-15", + "updated": "2023-11-06", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2304.05501v2", + "title": "L3MVN: Leveraging Large Language Models for Visual Target Navigation", + "abstract": "Visual target navigation in unknown environments is a crucial problem in\nrobotics. Despite extensive investigation of classical and learning-based\napproaches in the past, robots lack common-sense knowledge about household\nobjects and layouts. Prior state-of-the-art approaches to this task rely on\nlearning the priors during the training and typically require significant\nexpensive resources and time for learning. To address this, we propose a new\nframework for visual target navigation that leverages Large Language Models\n(LLM) to impart common sense for object searching. Specifically, we introduce\ntwo paradigms: (i) zero-shot and (ii) feed-forward approaches that use language\nto find the relevant frontier from the semantic map as a long-term goal and\nexplore the environment efficiently. Our analysis demonstrates the notable\nzero-shot generalization and transfer capabilities from the use of language.\nExperiments on Gibson and Habitat-Matterport 3D (HM3D) demonstrate that the\nproposed framework significantly outperforms existing map-based methods in\nterms of success rate and generalization. Ablation analysis also indicates that\nthe common-sense knowledge from the language model leads to more efficient\nsemantic exploration. Finally, we provide a real robot experiment to verify the\napplicability of our framework in real-world scenarios. 
The supplementary video\nand code can be accessed via the following link:\nhttps://sites.google.com/view/l3mvn.", + "authors": "Bangguo Yu, Hamidreza Kasaei, Ming Cao", + "published": "2023-04-11", + "updated": "2023-12-25", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.00704v5", + "title": "UniAudio: An Audio Foundation Model Toward Universal Audio Generation", + "abstract": "Large Language models (LLM) have demonstrated the capability to handle a\nvariety of generative tasks. This paper presents the UniAudio system, which,\nunlike prior task-specific approaches, leverages LLM techniques to generate\nmultiple types of audio (including speech, sounds, music, and singing) with\ngiven input conditions. UniAudio 1) first tokenizes all types of target audio\nalong with other condition modalities, 2) concatenates source-target pair as a\nsingle sequence, and 3) performs next-token prediction using LLM. Also, a\nmulti-scale Transformer model is proposed to handle the overly long sequences\ncaused by the residual vector quantization based neural codec in tokenization.\nTraining of UniAudio is scaled up to 165K hours of audio and 1B parameters,\nbased on all generative tasks, aiming to obtain sufficient prior knowledge not\nonly in the intrinsic properties of audio but also the inter-relationship\nbetween audio and other modalities. Therefore, the trained UniAudio model has\nthe potential to become a foundation model for universal audio generation: it\nshows strong capability in all trained tasks and can seamlessly support new\naudio generation tasks after simple fine-tuning. Experiments demonstrate that\nUniAudio achieves state-of-the-art or at least competitive results on most of\nthe 11 tasks. Demo and code are released at\nhttps://github.com/yangdongchao/UniAudio", + "authors": "Dongchao Yang, Jinchuan Tian, Xu Tan, Rongjie Huang, Songxiang Liu, Xuankai Chang, Jiatong Shi, Sheng Zhao, Jiang Bian, Xixin Wu, Zhou Zhao, Shinji Watanabe, Helen Meng", + "published": "2023-10-01", + "updated": "2023-12-11", + "primary_cat": "cs.SD", + "cats": [ + "cs.SD", + "eess.AS" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1911.00357v2", + "title": "DD-PPO: Learning Near-Perfect PointGoal Navigators from 2.5 Billion Frames", + "abstract": "We present Decentralized Distributed Proximal Policy Optimization (DD-PPO), a\nmethod for distributed reinforcement learning in resource-intensive simulated\nenvironments. DD-PPO is distributed (uses multiple machines), decentralized\n(lacks a centralized server), and synchronous (no computation is ever stale),\nmaking it conceptually simple and easy to implement. In our experiments on\ntraining virtual robots to navigate in Habitat-Sim, DD-PPO exhibits near-linear\nscaling -- achieving a speedup of 107x on 128 GPUs over a serial\nimplementation. We leverage this scaling to train an agent for 2.5 Billion\nsteps of experience (the equivalent of 80 years of human experience) -- over 6\nmonths of GPU-time training in under 3 days of wall-clock time with 64 GPUs.\n This massive-scale training not only sets the state of art on Habitat\nAutonomous Navigation Challenge 2019, but essentially solves the task\n--near-perfect autonomous navigation in an unseen environment without access to\na map, directly from an RGB-D camera and a GPS+Compass sensor. 
Fortuitously,\nerror vs computation exhibits a power-law-like distribution; thus, 90% of peak\nperformance is obtained relatively early (at 100 million steps) and relatively\ncheaply (under 1 day with 8 GPUs). Finally, we show that the scene\nunderstanding and navigation policies learned can be transferred to other\nnavigation tasks -- the analog of ImageNet pre-training + task-specific\nfine-tuning for embodied AI. Our model outperforms ImageNet pre-trained CNNs on\nthese transfer tasks and can serve as a universal resource (all models and code\nare publicly available).", + "authors": "Erik Wijmans, Abhishek Kadian, Ari Morcos, Stefan Lee, Irfan Essa, Devi Parikh, Manolis Savva, Dhruv Batra", + "published": "2019-11-01", + "updated": "2020-01-20", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2206.12403v2", + "title": "ZSON: Zero-Shot Object-Goal Navigation using Multimodal Goal Embeddings", + "abstract": "We present a scalable approach for learning open-world object-goal navigation\n(ObjectNav) -- the task of asking a virtual robot (agent) to find any instance\nof an object in an unexplored environment (e.g., \"find a sink\"). Our approach\nis entirely zero-shot -- i.e., it does not require ObjectNav rewards or\ndemonstrations of any kind. Instead, we train on the image-goal navigation\n(ImageNav) task, in which agents find the location where a picture (i.e., goal\nimage) was captured. Specifically, we encode goal images into a multimodal,\nsemantic embedding space to enable training semantic-goal navigation\n(SemanticNav) agents at scale in unannotated 3D environments (e.g., HM3D).\nAfter training, SemanticNav agents can be instructed to find objects described\nin free-form natural language (e.g., \"sink\", \"bathroom sink\", etc.) by\nprojecting language goals into the same multimodal, semantic embedding space.\nAs a result, our approach enables open-world ObjectNav. We extensively evaluate\nour agents on three ObjectNav datasets (Gibson, HM3D, and MP3D) and observe\nabsolute improvements in success of 4.2% - 20.0% over existing zero-shot\nmethods. For reference, these gains are similar or better than the 5%\nimprovement in success between the Habitat 2020 and 2021 ObjectNav challenge\nwinners. 
In an open-world setting, we discover that our agents can generalize\nto compound instructions with a room explicitly mentioned (e.g., \"Find a\nkitchen sink\") and when the target room can be inferred (e.g., \"Find a sink and\na stove\").", + "authors": "Arjun Majumdar, Gunjan Aggarwal, Bhavika Devnani, Judy Hoffman, Dhruv Batra", + "published": "2022-06-24", + "updated": "2023-10-13", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG", + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.10103v1", + "title": "Navigation with Large Language Models: Semantic Guesswork as a Heuristic for Planning", + "abstract": "Navigation in unfamiliar environments presents a major challenge for robots:\nwhile mapping and planning techniques can be used to build up a representation\nof the world, quickly discovering a path to a desired goal in unfamiliar\nsettings with such methods often requires lengthy mapping and exploration.\nHumans can rapidly navigate new environments, particularly indoor environments\nthat are laid out logically, by leveraging semantics -- e.g., a kitchen often\nadjoins a living room, an exit sign indicates the way out, and so forth.\nLanguage models can provide robots with such knowledge, but directly using\nlanguage models to instruct a robot how to reach some destination can also be\nimpractical: while language models might produce a narrative about how to reach\nsome goal, because they are not grounded in real-world observations, this\nnarrative might be arbitrarily wrong. Therefore, in this paper we study how the\n``semantic guesswork'' produced by language models can be utilized as a guiding\nheuristic for planning algorithms. Our method, Language Frontier Guide (LFG),\nuses the language model to bias exploration of novel real-world environments by\nincorporating the semantic knowledge stored in language models as a search\nheuristic for planning with either topological or metric maps. We evaluate LFG\nin challenging real-world environments and simulated benchmarks, outperforming\nuninformed exploration and other ways of using language models.", + "authors": "Dhruv Shah, Michael Equi, Blazej Osinski, Fei Xia, Brian Ichter, Sergey Levine", + "published": "2023-10-16", + "updated": "2023-10-16", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI", + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2308.05602v1", + "title": "Object Goal Navigation with Recursive Implicit Maps", + "abstract": "Object goal navigation aims to navigate an agent to locations of a given\nobject category in unseen environments. Classical methods explicitly build maps\nof environments and require extensive engineering while lacking semantic\ninformation for object-oriented exploration. On the other hand, end-to-end\nlearning methods alleviate manual map design and predict actions using implicit\nrepresentations. Such methods, however, lack an explicit notion of geometry and\nmay have limited ability to encode navigation history. In this work, we propose\nan implicit spatial map for object goal navigation. Our implicit map is\nrecursively updated with new observations at each step using a transformer. To\nencourage spatial reasoning, we introduce auxiliary tasks and train our model\nto reconstruct explicit maps as well as to predict visual features, semantic\nlabels and actions. Our method significantly outperforms the state of the art\non the challenging MP3D dataset and generalizes well to the HM3D dataset. 
We\nsuccessfully deploy our model on a real robot and achieve encouraging object\ngoal navigation results in real scenes using only a few real-world\ndemonstrations. Code, trained models and videos are available at\n\\url{https://www.di.ens.fr/willow/research/onav_rim/}.", + "authors": "Shizhe Chen, Thomas Chabal, Ivan Laptev, Cordelia Schmid", + "published": "2023-08-10", + "updated": "2023-08-10", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2309.08138v3", + "title": "Find What You Want: Learning Demand-conditioned Object Attribute Space for Demand-driven Navigation", + "abstract": "The task of Visual Object Navigation (VON) involves an agent's ability to\nlocate a particular object within a given scene. In order to successfully\naccomplish the VON task, two essential conditions must be fulfilled:1) the user\nmust know the name of the desired object; and 2) the user-specified object must\nactually be present within the scene. To meet these conditions, a simulator can\nincorporate pre-defined object names and positions into the metadata of the\nscene. However, in real-world scenarios, it is often challenging to ensure that\nthese conditions are always met. Human in an unfamiliar environment may not\nknow which objects are present in the scene, or they may mistakenly specify an\nobject that is not actually present. Nevertheless, despite these challenges,\nhuman may still have a demand for an object, which could potentially be\nfulfilled by other objects present within the scene in an equivalent manner.\nHence, we propose Demand-driven Navigation (DDN), which leverages the user's\ndemand as the task instruction and prompts the agent to find the object matches\nthe specified demand. DDN aims to relax the stringent conditions of VON by\nfocusing on fulfilling the user's demand rather than relying solely on\npredefined object categories or names. We propose a method first acquire\ntextual attribute features of objects by extracting common knowledge from a\nlarge language model. These textual attribute features are subsequently aligned\nwith visual attribute features using Contrastive Language-Image Pre-training\n(CLIP). By incorporating the visual attribute features as prior knowledge, we\nenhance the navigation process. Experiments on AI2Thor with the ProcThor\ndataset demonstrate the visual attribute features improve the agent's\nnavigation performance and outperform the baseline methods commonly used in\nVON.", + "authors": "Hongcheng Wang, Andy Guan Hong Chen, Xiaoqi Li, Mingdong Wu, Hao Dong", + "published": "2023-09-15", + "updated": "2023-11-06", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2103.00020v1", + "title": "Learning Transferable Visual Models From Natural Language Supervision", + "abstract": "State-of-the-art computer vision systems are trained to predict a fixed set\nof predetermined object categories. This restricted form of supervision limits\ntheir generality and usability since additional labeled data is needed to\nspecify any other visual concept. Learning directly from raw text about images\nis a promising alternative which leverages a much broader source of\nsupervision. 
We demonstrate that the simple pre-training task of predicting\nwhich caption goes with which image is an efficient and scalable way to learn\nSOTA image representations from scratch on a dataset of 400 million (image,\ntext) pairs collected from the internet. After pre-training, natural language\nis used to reference learned visual concepts (or describe new ones) enabling\nzero-shot transfer of the model to downstream tasks. We study the performance\nof this approach by benchmarking on over 30 different existing computer vision\ndatasets, spanning tasks such as OCR, action recognition in videos,\ngeo-localization, and many types of fine-grained object classification. The\nmodel transfers non-trivially to most tasks and is often competitive with a\nfully supervised baseline without the need for any dataset specific training.\nFor instance, we match the accuracy of the original ResNet-50 on ImageNet\nzero-shot without needing to use any of the 1.28 million training examples it\nwas trained on. We release our code and pre-trained model weights at\nhttps://github.com/OpenAI/CLIP.", + "authors": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever", + "published": "2021-02-26", + "updated": "2021-02-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1903.08543v6", + "title": "Optimizing thermodynamic trajectories using evolutionary and gradient-based reinforcement learning", + "abstract": "Using a model heat engine, we show that neural network-based reinforcement\nlearning can identify thermodynamic trajectories of maximal efficiency. We\nconsider both gradient and gradient-free reinforcement learning. We use an\nevolutionary learning algorithm to evolve a population of neural networks,\nsubject to a directive to maximize the efficiency of a trajectory composed of a\nset of elementary thermodynamic processes; the resulting networks learn to\ncarry out the maximally-efficient Carnot, Stirling, or Otto cycles. When given\nan additional irreversible process, this evolutionary scheme learns a\npreviously unknown thermodynamic cycle. Gradient-based reinforcement learning\nis able to learn the Stirling cycle, whereas an evolutionary approach achieves\nthe optimal Carnot cycle. Our results show how the reinforcement learning\nstrategies developed for game playing can be applied to solve physical problems\nconditioned upon path-extensive order parameters.", + "authors": "Chris Beeler, Uladzimir Yahorau, Rory Coles, Kyle Mills, Stephen Whitelam, Isaac Tamblyn", + "published": "2019-03-20", + "updated": "2021-11-22", + "primary_cat": "cs.NE", + "cats": [ + "cs.NE", + "cond-mat.stat-mech", + "cs.LG", + "physics.comp-ph" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2308.14897v1", + "title": "Statistically Efficient Variance Reduction with Double Policy Estimation for Off-Policy Evaluation in Sequence-Modeled Reinforcement Learning", + "abstract": "Offline reinforcement learning aims to utilize datasets of previously\ngathered environment-action interaction records to learn a policy without\naccess to the real environment. 
Recent work has shown that offline\nreinforcement learning can be formulated as a sequence modeling problem and\nsolved via supervised learning with approaches such as decision transformer.\nWhile these sequence-based methods achieve competitive results over\nreturn-to-go methods, especially on tasks that require longer episodes or with\nscarce rewards, importance sampling is not considered to correct the policy\nbias when dealing with off-policy data, mainly due to the absence of behavior\npolicy and the use of deterministic evaluation policies. To this end, we\npropose DPE: an RL algorithm that blends offline sequence modeling and offline\nreinforcement learning with Double Policy Estimation (DPE) in a unified\nframework with statistically proven properties on variance reduction. We\nvalidate our method in multiple tasks of OpenAI Gym with D4RL benchmarks. Our\nmethod brings a performance improvements on selected methods which outperforms\nSOTA baselines in several tasks, demonstrating the advantages of enabling\ndouble policy estimation for sequence-modeled reinforcement learning.", + "authors": "Hanhan Zhou, Tian Lan, Vaneet Aggarwal", + "published": "2023-08-28", + "updated": "2023-08-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.DC" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1709.09346v2", + "title": "Cold-Start Reinforcement Learning with Softmax Policy Gradient", + "abstract": "Policy-gradient approaches to reinforcement learning have two common and\nundesirable overhead procedures, namely warm-start training and sample variance\nreduction. In this paper, we describe a reinforcement learning method based on\na softmax value function that requires neither of these procedures. Our method\ncombines the advantages of policy-gradient methods with the efficiency and\nsimplicity of maximum-likelihood approaches. We apply this new cold-start\nreinforcement learning method in training sequence generation models for\nstructured output prediction problems. Empirical evidence validates this method\non automatic summarization and image captioning tasks.", + "authors": "Nan Ding, Radu Soricut", + "published": "2017-09-27", + "updated": "2017-10-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2102.05612v1", + "title": "Personalization for Web-based Services using Offline Reinforcement Learning", + "abstract": "Large-scale Web-based services present opportunities for improving UI\npolicies based on observed user interactions. We address challenges of learning\nsuch policies through model-free offline Reinforcement Learning (RL) with\noff-policy training. Deployed in a production system for user authentication in\na major social network, it significantly improves long-term objectives. We\narticulate practical challenges, compare several ML techniques, provide\ninsights on training and evaluation of RL models, and discuss generalizations.", + "authors": "Pavlos Athanasios Apostolopoulos, Zehui Wang, Hanson Wang, Chad Zhou, Kittipat Virochsiri, Norm Zhou, Igor L. 
Markov", + "published": "2021-02-10", + "updated": "2021-02-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.HC", + "cs.SE" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2006.07178v2", + "title": "Meta-Reinforcement Learning Robust to Distributional Shift via Model Identification and Experience Relabeling", + "abstract": "Reinforcement learning algorithms can acquire policies for complex tasks\nautonomously. However, the number of samples required to learn a diverse set of\nskills can be prohibitively large. While meta-reinforcement learning methods\nhave enabled agents to leverage prior experience to adapt quickly to new tasks,\ntheir performance depends crucially on how close the new task is to the\npreviously experienced tasks. Current approaches are either not able to\nextrapolate well, or can do so at the expense of requiring extremely large\namounts of data for on-policy meta-training. In this work, we present model\nidentification and experience relabeling (MIER), a meta-reinforcement learning\nalgorithm that is both efficient and extrapolates well when faced with\nout-of-distribution tasks at test time. Our method is based on a simple\ninsight: we recognize that dynamics models can be adapted efficiently and\nconsistently with off-policy data, more easily than policies and value\nfunctions. These dynamics models can then be used to continue training policies\nand value functions for out-of-distribution tasks without using\nmeta-reinforcement learning at all, by generating synthetic experience for the\nnew task.", + "authors": "Russell Mendonca, Xinyang Geng, Chelsea Finn, Sergey Levine", + "published": "2020-06-12", + "updated": "2020-06-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.07315v1", + "title": "An introduction to reinforcement learning for neuroscience", + "abstract": "Reinforcement learning has a rich history in neuroscience, from early work on\ndopamine as a reward prediction error signal for temporal difference learning\n(Schultz et al., 1997) to recent work suggesting that dopamine could implement\na form of 'distributional reinforcement learning' popularized in deep learning\n(Dabney et al., 2020). Throughout this literature, there has been a tight link\nbetween theoretical advances in reinforcement learning and neuroscientific\nexperiments and findings. As a result, the theories describing our experimental\ndata have become increasingly complex and difficult to navigate. In this\nreview, we cover the basic theory underlying classical work in reinforcement\nlearning and build up to an introductory overview of methods used in modern\ndeep reinforcement learning that have found applications in systems\nneuroscience. We start with an overview of the reinforcement learning problem\nand classical temporal difference algorithms, followed by a discussion of\n'model-free' and 'model-based' reinforcement learning together with methods\nsuch as DYNA and successor representations that fall in between these two\ncategories. Throughout these sections, we highlight the close parallels between\nthe machine learning methods and related work in both experimental and\ntheoretical neuroscience. 
We then provide an introduction to deep reinforcement\nlearning with examples of how these methods have been used to model different\nlearning phenomena in the systems neuroscience literature, such as\nmeta-reinforcement learning (Wang et al., 2018) and distributional\nreinforcement learning (Dabney et al., 2020). Code that implements the methods\ndiscussed in this work and generates the figures is also provided.", + "authors": "Kristopher T. Jensen", + "published": "2023-11-13", + "updated": "2023-11-13", + "primary_cat": "q-bio.NC", + "cats": [ + "q-bio.NC", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1804.07193v3", + "title": "Lipschitz Continuity in Model-based Reinforcement Learning", + "abstract": "We examine the impact of learning Lipschitz continuous models in the context\nof model-based reinforcement learning. We provide a novel bound on multi-step\nprediction error of Lipschitz models where we quantify the error using the\nWasserstein metric. We go on to prove an error bound for the value-function\nestimate arising from Lipschitz models and show that the estimated value\nfunction is itself Lipschitz. We conclude with empirical results that show the\nbenefits of controlling the Lipschitz constant of neural-network models.", + "authors": "Kavosh Asadi, Dipendra Misra, Michael L. Littman", + "published": "2018-04-19", + "updated": "2018-07-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2110.14524v1", + "title": "Model based Multi-agent Reinforcement Learning with Tensor Decompositions", + "abstract": "A challenge in multi-agent reinforcement learning is to be able to generalize\nover intractable state-action spaces. Inspired from Tesseract [Mahajan et al.,\n2021], this position paper investigates generalisation in state-action space\nover unexplored state-action pairs by modelling the transition and reward\nfunctions as tensors of low CP-rank. Initial experiments on synthetic MDPs show\nthat using tensor decompositions in a model-based reinforcement learning\nalgorithm can lead to much faster convergence if the true transition and reward\nfunctions are indeed of low rank.", + "authors": "Pascal Van Der Vaart, Anuj Mahajan, Shimon Whiteson", + "published": "2021-10-27", + "updated": "2021-10-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.MA" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.15175v1", + "title": "Coordinated Reinforcement Learning for Optimizing Mobile Networks", + "abstract": "Mobile networks are composed of many base stations and for each of them many\nparameters must be optimized to provide good services. Automatically and\ndynamically optimizing all these entities is challenging as they are sensitive\nto variations in the environment and can affect each other through\ninterferences. Reinforcement learning (RL) algorithms are good candidates to\nautomatically learn base station configuration strategies from incoming data\nbut they are often hard to scale to many agents. In this work, we demonstrate\nhow to use coordination graphs and reinforcement learning in a complex\napplication involving hundreds of cooperating agents. 
We show how mobile\nnetworks can be modeled using coordination graphs and how network optimization\nproblems can be solved efficiently using multi- agent reinforcement learning.\nThe graph structure occurs naturally from expert knowledge about the network\nand allows to explicitly learn coordinating behaviors between the antennas\nthrough edge value functions represented by neural networks. We show\nempirically that coordinated reinforcement learning outperforms other methods.\nThe use of local RL updates and parameter sharing can handle a large number of\nagents without sacrificing coordination which makes it well suited to optimize\nthe ever denser networks brought by 5G and beyond.", + "authors": "Maxime Bouton, Hasan Farooq, Julien Forgeat, Shruti Bothe, Meral Shirazipour, Per Karlsson", + "published": "2021-09-30", + "updated": "2021-09-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2305.03360v1", + "title": "A Survey on Offline Model-Based Reinforcement Learning", + "abstract": "Model-based approaches are becoming increasingly popular in the field of\noffline reinforcement learning, with high potential in real-world applications\ndue to the model's capability of thoroughly utilizing the large historical\ndatasets available with supervised learning techniques. This paper presents a\nliterature review of recent work in offline model-based reinforcement learning,\na field that utilizes model-based approaches in offline reinforcement learning.\nThe survey provides a brief overview of the concepts and recent developments in\nboth offline reinforcement learning and model-based reinforcement learning, and\ndiscuss the intersection of the two fields. We then presents key relevant\npapers in the field of offline model-based reinforcement learning and discuss\ntheir methods, particularly their approaches in solving the issue of\ndistributional shift, the main problem faced by all current offline model-based\nreinforcement learning methods. We further discuss key challenges faced by the\nfield, and suggest possible directions for future work.", + "authors": "Haoyang He", + "published": "2023-05-05", + "updated": "2023-05-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SY", + "eess.SY", + "I.2.6; I.2.8" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.01474v1", + "title": "Offline Reinforcement Learning with Causal Structured World Models", + "abstract": "Model-based methods have recently shown promising for offline reinforcement\nlearning (RL), aiming to learn good policies from historical data without\ninteracting with the environment. Previous model-based offline RL methods learn\nfully connected nets as world-models that map the states and actions to the\nnext-step states. However, it is sensible that a world-model should adhere to\nthe underlying causal effect such that it will support learning an effective\npolicy generalizing well in unseen states. In this paper, We first provide\ntheoretical results that causal world-models can outperform plain world-models\nfor offline RL by incorporating the causal structure into the generalization\nerror bound. 
We then propose a practical algorithm, oFfline mOdel-based\nreinforcement learning with CaUsal Structure (FOCUS), to illustrate the\nfeasibility of learning and leveraging causal structure in offline RL.\nExperimental results on two benchmarks show that FOCUS reconstructs the\nunderlying causal structure accurately and robustly. Consequently, it performs\nbetter than the plain model-based offline RL algorithms and other causal\nmodel-based RL algorithms.", + "authors": "Zheng-Mao Zhu, Xiong-Hui Chen, Hong-Long Tian, Kun Zhang, Yang Yu", + "published": "2022-06-03", + "updated": "2022-06-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1906.08312v1", + "title": "Calibrated Model-Based Deep Reinforcement Learning", + "abstract": "Estimates of predictive uncertainty are important for accurate model-based\nplanning and reinforcement learning. However, predictive\nuncertainties---especially ones derived from modern deep learning systems---can\nbe inaccurate and impose a bottleneck on performance. This paper explores which\nuncertainties are needed for model-based reinforcement learning and argues that\ngood uncertainties must be calibrated, i.e. their probabilities should match\nempirical frequencies of predicted events. We describe a simple way to augment\nany model-based reinforcement learning agent with a calibrated model and show\nthat doing so consistently improves planning, sample complexity, and\nexploration. On the \\textsc{HalfCheetah} MuJoCo task, our system achieves\nstate-of-the-art performance using 50\\% fewer samples than the current leading\napproach. Our findings suggest that calibration can improve the performance of\nmodel-based reinforcement learning with minimal computational and\nimplementation overhead.", + "authors": "Ali Malik, Volodymyr Kuleshov, Jiaming Song, Danny Nemer, Harlan Seymour, Stefano Ermon", + "published": "2019-06-19", + "updated": "2019-06-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1305.1809v2", + "title": "Cover Tree Bayesian Reinforcement Learning", + "abstract": "This paper proposes an online tree-based Bayesian approach for reinforcement\nlearning. For inference, we employ a generalised context tree model. This\ndefines a distribution on multivariate Gaussian piecewise-linear models, which\ncan be updated in closed form. The tree structure itself is constructed using\nthe cover tree method, which remains efficient in high dimensional spaces. We\ncombine the model with Thompson sampling and approximate dynamic programming to\nobtain effective exploration policies in unknown environments. The flexibility\nand computational simplicity of the model render it suitable for many\nreinforcement learning problems in continuous state spaces. 
We demonstrate this\nin an experimental comparison with least squares policy iteration.", + "authors": "Nikolaos Tziortziotis, Christos Dimitrakakis, Konstantinos Blekas", + "published": "2013-05-08", + "updated": "2014-05-02", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.07915v2", + "title": "Barrier Certified Safety Learning Control: When Sum-of-Square Programming Meets Reinforcement Learning", + "abstract": "Safety guarantee is essential in many engineering implementations.\nReinforcement learning provides a useful way to strengthen safety. However,\nreinforcement learning algorithms cannot completely guarantee safety over\nrealistic operations. To address this issue, this work adopts control barrier\nfunctions over reinforcement learning, and proposes a compensated algorithm to\ncompletely maintain safety. Specifically, a sum-of-squares programming has been\nexploited to search for the optimal controller, and tune the learning\nhyperparameters simultaneously. Thus, the control actions are pledged to be\nalways within the safe region. The effectiveness of proposed method is\ndemonstrated via an inverted pendulum model. Compared to quadratic programming\nbased reinforcement learning methods, our sum-of-squares programming based\nreinforcement learning has shown its superiority.", + "authors": "Hejun Huang, Zhenglong Li, Dongkun Han", + "published": "2022-06-16", + "updated": "2022-06-29", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.LG", + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2011.01734v1", + "title": "Differentiable Physics Models for Real-world Offline Model-based Reinforcement Learning", + "abstract": "A limitation of model-based reinforcement learning (MBRL) is the exploitation\nof errors in the learned models. Black-box models can fit complex dynamics with\nhigh fidelity, but their behavior is undefined outside of the data\ndistribution.Physics-based models are better at extrapolating, due to the\ngeneral validity of their informed structure, but underfit in the real world\ndue to the presence of unmodeled phenomena. In this work, we demonstrate\nexperimentally that for the offline model-based reinforcement learning setting,\nphysics-based models can be beneficial compared to high-capacity function\napproximators if the mechanical structure is known. Physics-based models can\nlearn to perform the ball in a cup (BiC) task on a physical manipulator using\nonly 4 minutes of sampled data using offline MBRL. We find that black-box\nmodels consistently produce unviable policies for BiC as all predicted\ntrajectories diverge to physically impossible state, despite having access to\nmore data than the physics-based model. 
In addition, we generalize the approach\nof physics parameter identification from modeling holonomic multi-body systems\nto systems with nonholonomic dynamics using end-to-end automatic\ndifferentiation.\n Videos: https://sites.google.com/view/ball-in-a-cup-in-4-minutes/", + "authors": "Michael Lutter, Johannes Silberbauer, Joe Watson, Jan Peters", + "published": "2020-11-03", + "updated": "2020-11-03", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2010.12142v1", + "title": "Bridging Imagination and Reality for Model-Based Deep Reinforcement Learning", + "abstract": "Sample efficiency has been one of the major challenges for deep reinforcement\nlearning. Recently, model-based reinforcement learning has been proposed to\naddress this challenge by performing planning on imaginary trajectories with a\nlearned world model. However, world model learning may suffer from overfitting\nto training trajectories, and thus model-based value estimation and policy\nsearch will be pone to be sucked in an inferior local policy. In this paper, we\npropose a novel model-based reinforcement learning algorithm, called BrIdging\nReality and Dream (BIRD). It maximizes the mutual information between imaginary\nand real trajectories so that the policy improvement learned from imaginary\ntrajectories can be easily generalized to real trajectories. We demonstrate\nthat our approach improves sample efficiency of model-based planning, and\nachieves state-of-the-art performance on challenging visual control benchmarks.", + "authors": "Guangxiang Zhu, Minghao Zhang, Honglak Lee, Chongjie Zhang", + "published": "2020-10-23", + "updated": "2020-10-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1705.07460v1", + "title": "Experience enrichment based task independent reward model", + "abstract": "For most reinforcement learning approaches, the learning is performed by\nmaximizing an accumulative reward that is expectedly and manually defined for\nspecific tasks. However, in real world, rewards are emergent phenomena from the\ncomplex interactions between agents and environments. In this paper, we propose\nan implicit generic reward model for reinforcement learning. Unlike those\nrewards that are manually defined for specific tasks, such implicit reward is\ntask independent. It only comes from the deviation from the agents' previous\nexperiences.", + "authors": "Min Xu", + "published": "2017-05-21", + "updated": "2017-05-21", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.07525v1", + "title": "Using Collision Momentum in Deep Reinforcement Learning Based Adversarial Pedestrian Modeling", + "abstract": "Recent research in pedestrian simulation often aims to develop realistic\nbehaviors in various situations, but it is challenging for existing algorithms\nto generate behaviors that identify weaknesses in automated vehicles'\nperformance in extreme and unlikely scenarios and edge cases. To address this,\nspecialized pedestrian behavior algorithms are needed. Current research focuses\non realistic trajectories using social force models and reinforcement learning\nbased models. 
However, we propose a reinforcement learning algorithm that\nspecifically targets collisions and better uncovers unique failure modes of\nautomated vehicle controllers. Our algorithm is efficient and generates more\nsevere collisions, allowing for the identification and correction of weaknesses\nin autonomous driving algorithms in complex and varied scenarios.", + "authors": "Dianwei Chen, Ekim Yurtsever, Keith Redmill, Umit Ozguner", + "published": "2023-06-13", + "updated": "2023-06-13", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1901.01977v1", + "title": "Accelerating Goal-Directed Reinforcement Learning by Model Characterization", + "abstract": "We propose a hybrid approach aimed at improving the sample efficiency in\ngoal-directed reinforcement learning. We do this via a two-step mechanism where\nfirstly, we approximate a model from Model-Free reinforcement learning. Then,\nwe leverage this approximate model along with a notion of reachability using\nMean First Passage Times to perform Model-Based reinforcement learning. Built\non such a novel observation, we design two new algorithms - Mean First Passage\nTime based Q-Learning (MFPT-Q) and Mean First Passage Time based DYNA\n(MFPT-DYNA), that have been fundamentally modified from the state-of-the-art\nreinforcement learning techniques. Preliminary results have shown that our\nhybrid approaches converge with much fewer iterations than their corresponding\nstate-of-the-art counterparts and therefore requiring much fewer samples and\nmuch fewer training trials to converge.", + "authors": "Shoubhik Debnath, Gaurav Sukhatme, Lantao Liu", + "published": "2019-01-04", + "updated": "2019-01-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2209.06604v1", + "title": "Learning state correspondence of reinforcement learning tasks for knowledge transfer", + "abstract": "Deep reinforcement learning has shown an ability to achieve super-human\nperformance in solving complex reinforcement learning (RL) tasks only from\nraw-pixels. However, it fails to reuse knowledge from previously learnt tasks\nto solve new, unseen ones. Generalizing and reusing knowledge are the\nfundamental requirements for creating a truly intelligent agent. This work\nproposes a general method for one-to-one transfer learning based on generative\nadversarial network model tailored to RL task.", + "authors": "Marko Ruman, Tatiana V. Guy", + "published": "2022-09-14", + "updated": "2022-09-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2003.13839v1", + "title": "Model-Reference Reinforcement Learning Control of Autonomous Surface Vehicles with Uncertainties", + "abstract": "This paper presents a novel model-reference reinforcement learning control\nmethod for uncertain autonomous surface vehicles. The proposed control combines\na conventional control method with deep reinforcement learning. With the\nconventional control, we can ensure the learning-based control law provides\nclosed-loop stability for the overall system, and potentially increase the\nsample efficiency of the deep reinforcement learning. 
With the reinforcement\nlearning, we can directly learn a control law to compensate for modeling\nuncertainties. In the proposed control, a nominal system is employed for the\ndesign of a baseline control law using a conventional control approach. The\nnominal system also defines the desired performance for uncertain autonomous\nvehicles to follow. In comparison with traditional deep reinforcement learning\nmethods, our proposed learning-based control can provide stability guarantees\nand better sample efficiency. We demonstrate the performance of the new\nalgorithm via extensive simulation results.", + "authors": "Qingrui Zhang, Wei Pan, Vasso Reppa", + "published": "2020-03-30", + "updated": "2020-03-30", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.AI", + "cs.LG", + "cs.RO", + "cs.SY", + "math.OC" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1810.01112v1", + "title": "The Dreaming Variational Autoencoder for Reinforcement Learning Environments", + "abstract": "Reinforcement learning has shown great potential in generalizing over raw\nsensory data using only a single neural network for value optimization. There\nare several challenges in the current state-of-the-art reinforcement learning\nalgorithms that prevent them from converging towards the global optima. It is\nlikely that the solution to these problems lies in short- and long-term\nplanning, exploration and memory management for reinforcement learning\nalgorithms. Games are often used to benchmark reinforcement learning algorithms\nas they provide a flexible, reproducible, and easy to control environment.\nRegardless, few games feature a state-space where results in exploration,\nmemory, and planning are easily perceived. This paper presents The Dreaming\nVariational Autoencoder (DVAE), a neural network based generative modeling\narchitecture for exploration in environments with sparse feedback. We further\npresent Deep Maze, a novel and flexible maze engine that challenges DVAE in\npartial and fully-observable state-spaces, long-horizon tasks, and\ndeterministic and stochastic problems. We show initial findings and encourage\nfurther work in reinforcement learning driven by generative exploration.", + "authors": "Per-Arne Andersen, Morten Goodwin, Ole-Christoffer Granmo", + "published": "2018-10-02", + "updated": "2018-10-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2211.10688v2", + "title": "ReInform: Selecting paths with reinforcement learning for contextualized link prediction", + "abstract": "We propose to use reinforcement learning to inform transformer-based\ncontextualized link prediction models by providing paths that are most useful\nfor predicting the correct answer. This is in contrast to previous approaches,\nthat either used reinforcement learning (RL) to directly search for the answer,\nor based their prediction on limited or randomly selected context. Our\nexperiments on WN18RR and FB15k-237 show that contextualized link prediction\nmodels consistently outperform RL-based answer search, and that additional\nimprovements (of up to 13.5% MRR) can be gained by combining RL with a link\nprediction model. 
The PyTorch implementation of the RL agent is available at\nhttps://github.com/marina-sp/reinform", + "authors": "Marina Speranskaya, Sameh Methias, Benjamin Roth", + "published": "2022-11-19", + "updated": "2023-01-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1901.08162v1", + "title": "Causal Reasoning from Meta-reinforcement Learning", + "abstract": "Discovering and exploiting the causal structure in the environment is a\ncrucial challenge for intelligent agents. Here we explore whether causal\nreasoning can emerge via meta-reinforcement learning. We train a recurrent\nnetwork with model-free reinforcement learning to solve a range of problems\nthat each contain causal structure. We find that the trained agent can perform\ncausal reasoning in novel situations in order to obtain rewards. The agent can\nselect informative interventions, draw causal inferences from observational\ndata, and make counterfactual predictions. Although established formal causal\nreasoning algorithms also exist, in this paper we show that such reasoning can\narise from model-free reinforcement learning, and suggest that causal reasoning\nin complex settings may benefit from the more end-to-end learning-based\napproaches presented here. This work also offers new strategies for structured\nexploration in reinforcement learning, by providing agents with the ability to\nperform -- and interpret -- experiments.", + "authors": "Ishita Dasgupta, Jane Wang, Silvia Chiappa, Jovana Mitrovic, Pedro Ortega, David Raposo, Edward Hughes, Peter Battaglia, Matthew Botvinick, Zeb Kurth-Nelson", + "published": "2019-01-23", + "updated": "2019-01-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.01794v1", + "title": "Imitation Game: A Model-based and Imitation Learning Deep Reinforcement Learning Hybrid", + "abstract": "Autonomous and learning systems based on Deep Reinforcement Learning have\nfirmly established themselves as a foundation for approaches to creating\nresilient and efficient Cyber-Physical Energy Systems. However, most current\napproaches suffer from two distinct problems: Modern model-free algorithms such\nas Soft Actor Critic need a high number of samples to learn a meaningful\npolicy, as well as a fallback to ward against concept drifts (e. g.,\ncatastrophic forgetting). In this paper, we present the work in progress\ntowards a hybrid agent architecture that combines model-based Deep\nReinforcement Learning with imitation learning to overcome both problems.", + "authors": "Eric MSP Veith, Torben Logemann, Aleksandr Berezin, Arlena Well\u00dfow, Stephan Balduin", + "published": "2024-04-02", + "updated": "2024-04-02", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2301.11520v3", + "title": "SNeRL: Semantic-aware Neural Radiance Fields for Reinforcement Learning", + "abstract": "As previous representations for reinforcement learning cannot effectively\nincorporate a human-intuitive understanding of the 3D environment, they usually\nsuffer from sub-optimal performances. 
In this paper, we present Semantic-aware\nNeural Radiance Fields for Reinforcement Learning (SNeRL), which jointly\noptimizes semantic-aware neural radiance fields (NeRF) with a convolutional\nencoder to learn 3D-aware neural implicit representation from multi-view\nimages. We introduce 3D semantic and distilled feature fields in parallel to\nthe RGB radiance fields in NeRF to learn semantic and object-centric\nrepresentation for reinforcement learning. SNeRL outperforms not only previous\npixel-based representations but also recent 3D-aware representations both in\nmodel-free and model-based reinforcement learning.", + "authors": "Dongseok Shim, Seungjae Lee, H. Jin Kim", + "published": "2023-01-27", + "updated": "2023-05-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1609.03348v4", + "title": "A Threshold-based Scheme for Reinforcement Learning in Neural Networks", + "abstract": "A generic and scalable Reinforcement Learning scheme for Artificial Neural\nNetworks is presented, providing a general purpose learning machine. By\nreference to a node threshold three features are described 1) A mechanism for\nPrimary Reinforcement, capable of solving linearly inseparable problems 2) The\nlearning scheme is extended to include a mechanism for Conditioned\nReinforcement, capable of forming long term strategy 3) The learning scheme is\nmodified to use a threshold-based deep learning algorithm, providing a robust\nand biologically inspired alternative to backpropagation. The model may be used\nfor supervised as well as unsupervised training regimes.", + "authors": "Thomas H. Ward", + "published": "2016-09-12", + "updated": "2017-01-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NE" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1712.04170v2", + "title": "Interpretable Policies for Reinforcement Learning by Genetic Programming", + "abstract": "The search for interpretable reinforcement learning policies is of high\nacademic and industrial interest. Especially for industrial systems, domain\nexperts are more likely to deploy autonomously learned controllers if they are\nunderstandable and convenient to evaluate. Basic algebraic equations are\nsupposed to meet these requirements, as long as they are restricted to an\nadequate complexity. Here we introduce the genetic programming for\nreinforcement learning (GPRL) approach based on model-based batch reinforcement\nlearning and genetic programming, which autonomously learns policy equations\nfrom pre-existing default state-action trajectory samples. GPRL is compared to\na straight-forward method which utilizes genetic programming for symbolic\nregression, yielding policies imitating an existing well-performing, but\nnon-interpretable policy. Experiments on three reinforcement learning\nbenchmarks, i.e., mountain car, cart-pole balancing, and industrial benchmark,\ndemonstrate the superiority of our GPRL approach compared to the symbolic\nregression method. GPRL is capable of producing well-performing interpretable\nreinforcement learning policies from pre-existing default trajectory data.", + "authors": "Daniel Hein, Steffen Udluft, Thomas A. 
Runkler", + "published": "2017-12-12", + "updated": "2018-04-04", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.NE", + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.02025v1", + "title": "Between Rate-Distortion Theory & Value Equivalence in Model-Based Reinforcement Learning", + "abstract": "The quintessential model-based reinforcement-learning agent iteratively\nrefines its estimates or prior beliefs about the true underlying model of the\nenvironment. Recent empirical successes in model-based reinforcement learning\nwith function approximation, however, eschew the true model in favor of a\nsurrogate that, while ignoring various facets of the environment, still\nfacilitates effective planning over behaviors. Recently formalized as the value\nequivalence principle, this algorithmic technique is perhaps unavoidable as\nreal-world reinforcement learning demands consideration of a simple,\ncomputationally-bounded agent interacting with an overwhelmingly complex\nenvironment. In this work, we entertain an extreme scenario wherein some\ncombination of immense environment complexity and limited agent capacity\nentirely precludes identifying an exactly value-equivalent model. In light of\nthis, we embrace a notion of approximate value equivalence and introduce an\nalgorithm for incrementally synthesizing simple and useful approximations of\nthe environment from which an agent might still recover near-optimal behavior.\nCrucially, we recognize the information-theoretic nature of this lossy\nenvironment compression problem and use the appropriate tools of\nrate-distortion theory to make mathematically precise how value equivalence can\nlend tractability to otherwise intractable sequential decision-making problems.", + "authors": "Dilip Arumugam, Benjamin Van Roy", + "published": "2022-06-04", + "updated": "2022-06-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.IT", + "math.IT" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2009.09781v1", + "title": "Rethinking Supervised Learning and Reinforcement Learning in Task-Oriented Dialogue Systems", + "abstract": "Dialogue policy learning for task-oriented dialogue systems has enjoyed great\nprogress recently mostly through employing reinforcement learning methods.\nHowever, these approaches have become very sophisticated. It is time to\nre-evaluate it. Are we really making progress developing dialogue agents only\nbased on reinforcement learning? We demonstrate how (1)~traditional supervised\nlearning together with (2)~a simulator-free adversarial learning method can be\nused to achieve performance comparable to state-of-the-art RL-based methods.\nFirst, we introduce a simple dialogue action decoder to predict the appropriate\nactions. Then, the traditional multi-label classification solution for dialogue\npolicy learning is extended by adding dense layers to improve the dialogue\nagent performance. Finally, we employ the Gumbel-Softmax estimator to\nalternatively train the dialogue agent and the dialogue reward model without\nusing reinforcement learning. Based on our extensive experimentation, we can\nconclude the proposed methods can achieve more stable and higher performance\nwith fewer efforts, such as the domain knowledge required to design a user\nsimulator and the intractable parameter tuning in reinforcement learning. 
Our\nmain goal is not to beat reinforcement learning with supervised learning, but\nto demonstrate the value of rethinking the role of reinforcement learning and\nsupervised learning in optimizing task-oriented dialogue systems.", + "authors": "Ziming Li, Julia Kiseleva, Maarten de Rijke", + "published": "2020-09-21", + "updated": "2020-09-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.03188v3", + "title": "Optimizing Quantum Variational Circuits with Deep Reinforcement Learning", + "abstract": "Quantum Machine Learning (QML) is considered to be one of the most promising\napplications of near term quantum devices. However, the optimization of quantum\nmachine learning models presents numerous challenges arising from the\nimperfections of hardware and the fundamental obstacles in navigating an\nexponentially scaling Hilbert space. In this work, we evaluate the potential of\ncontemporary methods in deep reinforcement learning to augment gradient based\noptimization routines in quantum variational circuits. We find that\nreinforcement learning augmented optimizers consistently outperform gradient\ndescent in noisy environments. All code and pretrained weights are available to\nreplicate the results or deploy the models at:\nhttps://github.com/lockwo/rl_qvc_opt.", + "authors": "Owen Lockwood", + "published": "2021-09-07", + "updated": "2022-05-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "quant-ph" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.02900v2", + "title": "Meta Federated Reinforcement Learning for Distributed Resource Allocation", + "abstract": "In cellular networks, resource allocation is usually performed in a\ncentralized way, which brings huge computation complexity to the base station\n(BS) and high transmission overhead. This paper explores a distributed resource\nallocation method that aims to maximize energy efficiency (EE) while ensuring\nthe quality of service (QoS) for users. Specifically, in order to address\nwireless channel conditions, we propose a robust meta federated reinforcement\nlearning (\\textit{MFRL}) framework that allows local users to optimize transmit\npower and assign channels using locally trained neural network models, so as to\noffload computational burden from the cloud server to the local users, reducing\ntransmission overhead associated with local channel state information. The BS\nperforms the meta learning procedure to initialize a general global model,\nenabling rapid adaptation to different environments with improved EE\nperformance. 
The federated learning technique, based on decentralized\nreinforcement learning, promotes collaboration and mutual benefits among users.\nAnalysis and numerical results demonstrate that the proposed \\textit{MFRL}\nframework accelerates the reinforcement learning process, decreases\ntransmission overhead, and offloads computation, while outperforming the\nconventional decentralized reinforcement learning algorithm in terms of\nconvergence speed and EE performance across various scenarios.", + "authors": "Zelin Ji, Zhijin Qin, Xiaoming Tao", + "published": "2023-07-06", + "updated": "2023-07-09", + "primary_cat": "eess.SP", + "cats": [ + "eess.SP", + "cs.SY", + "eess.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1709.05067v1", + "title": "Deep Reinforcement Learning for Conversational AI", + "abstract": "Deep reinforcement learning is revolutionizing the artificial intelligence\nfield. Currently, it serves as a good starting point for constructing\nintelligent autonomous systems which offer a better knowledge of the visual\nworld. It is possible to scale deep reinforcement learning with the use of deep\nlearning and do amazing tasks such as use of pixels in playing video games. In\nthis paper, key concepts of deep reinforcement learning including reward\nfunction, differences between reinforcement learning and supervised learning\nand models for implementation of reinforcement are discussed. Key challenges\nrelated to the implementation of reinforcement learning in conversational AI\ndomain are identified as well as discussed in detail. Various conversational\nmodels which are based on deep reinforcement learning (as well as deep\nlearning) are also discussed. In summary, this paper discusses key aspects of\ndeep reinforcement learning which are crucial for designing an efficient\nconversational AI.", + "authors": "Mahipal Jadeja, Neelanshi Varia, Agam Shah", + "published": "2017-09-15", + "updated": "2017-09-15", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1708.07738v1", + "title": "A Function Approximation Method for Model-based High-Dimensional Inverse Reinforcement Learning", + "abstract": "This works handles the inverse reinforcement learning problem in\nhigh-dimensional state spaces, which relies on an efficient solution of\nmodel-based high-dimensional reinforcement learning problems. To solve the\ncomputationally expensive reinforcement learning problems, we propose a\nfunction approximation method to ensure that the Bellman Optimality Equation\nalways holds, and then estimate a function based on the observed human actions\nfor inverse reinforcement learning problems. The time complexity of the\nproposed method is linearly proportional to the cardinality of the action set,\nthus it can handle high-dimensional even continuous state spaces efficiently.\nWe test the proposed method in a simulated environment to show its accuracy,\nand three clinical tasks to show how it can be used to evaluate a doctor's\nproficiency.", + "authors": "Kun Li, Joel W. 
Burdick", + "published": "2017-08-23", + "updated": "2017-08-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2012.00743v1", + "title": "Adaptive Neural Architectures for Recommender Systems", + "abstract": "Deep learning has proved an effective means to capture the non-linear\nassociations of user preferences. However, the main drawback of existing deep\nlearning architectures is that they follow a fixed recommendation strategy,\nignoring users' real time-feedback. Recent advances of deep reinforcement\nstrategies showed that recommendation policies can be continuously updated\nwhile users interact with the system. In doing so, we can learn the optimal\npolicy that fits to users' preferences over the recommendation sessions. The\nmain drawback of deep reinforcement strategies is that are based on predefined\nand fixed neural architectures. To shed light on how to handle this issue, in\nthis study we first present deep reinforcement learning strategies for\nrecommendation and discuss the main limitations due to the fixed neural\narchitectures. Then, we detail how recent advances on progressive neural\narchitectures are used for consecutive tasks in other research domains.\nFinally, we present the key challenges to fill the gap between deep\nreinforcement learning and adaptive neural architectures. We provide guidelines\nfor searching for the best neural architecture based on each user feedback via\nreinforcement learning, while considering the prediction performance on\nreal-time recommendations and the model complexity.", + "authors": "Dimitrios Rafailidis, Stefanos Antaris", + "published": "2020-11-11", + "updated": "2020-11-11", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2312.15385v1", + "title": "Discrete-Time Mean-Variance Strategy Based on Reinforcement Learning", + "abstract": "This paper studies a discrete-time mean-variance model based on reinforcement\nlearning. Compared with its continuous-time counterpart in \\cite{zhou2020mv},\nthe discrete-time model makes more general assumptions about the asset's return\ndistribution. Using entropy to measure the cost of exploration, we derive the\noptimal investment strategy, whose density function is also Gaussian type.\nAdditionally, we design the corresponding reinforcement learning algorithm.\nBoth simulation experiments and empirical analysis indicate that our\ndiscrete-time model exhibits better applicability when analyzing real-world\ndata than the continuous-time model.", + "authors": "Xiangyu Cui, Xun Li, Yun Shi, Si Zhao", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "q-fin.MF", + "cats": [ + "q-fin.MF", + "cs.LG", + "q-fin.PM" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.11142v1", + "title": "Two-stage Deep Reinforcement Learning for Inverter-based Volt-VAR Control in Active Distribution Networks", + "abstract": "Model-based Vol/VAR optimization method is widely used to eliminate voltage\nviolations and reduce network losses. However, the parameters of active\ndistribution networks(ADNs) are not onsite identified, so significant errors\nmay be involved in the model and make the model-based method infeasible. 
To\ncope with this critical issue, we propose a novel two-stage deep reinforcement\nlearning (DRL) method to improve the voltage profile by regulating\ninverter-based energy resources, which consists of offline stage and online\nstage. In the offline stage, a highly efficient adversarial reinforcement\nlearning algorithm is developed to train an offline agent robust to the model\nmismatch. In the sequential online stage, we transfer the offline agent safely\nas the online agent to perform continuous learning and controlling online with\nsignificantly improved safety and efficiency. Numerical simulations on IEEE\ntest cases not only demonstrate that the proposed adversarial reinforcement\nlearning algorithm outperforms the state-of-art algorithm, but also show that\nour proposed two-stage method achieves much better performance than the\nexisting DRL based methods in the online application.", + "authors": "Haotian Liu, Wenchuan Wu", + "published": "2020-05-20", + "updated": "2020-05-20", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.LG", + "cs.SY", + "J.7; C.3" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.09450v1", + "title": "Adversarial Imitation Learning via Random Search", + "abstract": "Developing agents that can perform challenging complex tasks is the goal of\nreinforcement learning. The model-free reinforcement learning has been\nconsidered as a feasible solution. However, the state of the art research has\nbeen to develop increasingly complicated techniques. This increasing complexity\nmakes the reconstruction difficult. Furthermore, the problem of reward\ndependency is still exists. As a result, research on imitation learning, which\nlearns policy from a demonstration of experts, has begun to attract attention.\nImitation learning directly learns policy based on data on the behavior of the\nexperts without the explicit reward signal provided by the environment.\nHowever, imitation learning tries to optimize policies based on deep\nreinforcement learning such as trust region policy optimization. As a result,\ndeep reinforcement learning based imitation learning also poses a crisis of\nreproducibility. The issue of complex model-free model has received\nconsiderable critical attention. A derivative-free optimization based\nreinforcement learning and the simplification on policies obtain competitive\nperformance on the dynamic complex tasks. The simplified policies and\nderivative free methods make algorithm be simple. The reconfiguration of\nresearch demo becomes easy. In this paper, we propose an imitation learning\nmethod that takes advantage of the derivative-free optimization with simple\nlinear policies. The proposed method performs simple random search in the\nparameter space of policies and shows computational efficiency. Experiments in\nthis paper show that the proposed model, without a direct reward signal from\nthe environment, obtains competitive performance on the MuJoCo locomotion\ntasks.", + "authors": "MyungJae Shin, Joongheon Kim", + "published": "2020-08-21", + "updated": "2020-08-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1901.02219v1", + "title": "Uncertainty-Based Out-of-Distribution Detection in Deep Reinforcement Learning", + "abstract": "We consider the problem of detecting out-of-distribution (OOD) samples in\ndeep reinforcement learning. 
In a value based reinforcement learning setting,\nwe propose to use uncertainty estimation techniques directly on the agent's\nvalue estimating neural network to detect OOD samples. The focus of our work\nlies in analyzing the suitability of approximate Bayesian inference methods and\nrelated ensembling techniques that generate uncertainty estimates. Although\nprior work has shown that dropout-based variational inference techniques and\nbootstrap-based approaches can be used to model epistemic uncertainty, the\nsuitability for detecting OOD samples in deep reinforcement learning remains an\nopen question. Our results show that uncertainty estimation can be used to\ndifferentiate in- from out-of-distribution samples. Over the complete training\nprocess of the reinforcement learning agents, bootstrap-based approaches tend\nto produce more reliable epistemic uncertainty estimates, when compared to\ndropout-based approaches.", + "authors": "Andreas Sedlmeier, Thomas Gabor, Thomy Phan, Lenz Belzner, Claudia Linnhoff-Popien", + "published": "2019-01-08", + "updated": "2019-01-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1705.03562v1", + "title": "Deep Episodic Value Iteration for Model-based Meta-Reinforcement Learning", + "abstract": "We present a new deep meta reinforcement learner, which we call Deep Episodic\nValue Iteration (DEVI). DEVI uses a deep neural network to learn a similarity\nmetric for a non-parametric model-based reinforcement learning algorithm. Our\nmodel is trained end-to-end via back-propagation. Despite being trained using\nthe model-free Q-learning objective, we show that DEVI's model-based internal\nstructure provides `one-shot' transfer to changes in reward and transition\nstructure, even for tasks with very high-dimensional state spaces.", + "authors": "Steven Stenberg Hansen", + "published": "2017-05-09", + "updated": "2017-05-09", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.AI", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2009.14365v1", + "title": "Toolpath design for additive manufacturing using deep reinforcement learning", + "abstract": "Toolpath optimization of metal-based additive manufacturing processes is\ncurrently hampered by the high-dimensionality of its design space. In this\nwork, a reinforcement learning platform is proposed that dynamically learns\ntoolpath strategies to build an arbitrary part. To this end, three prominent\nmodel-free reinforcement learning formulations are investigated to design\nadditive manufacturing toolpaths and demonstrated for two cases of dense and\nsparse reward structures. The results indicate that this learning-based\ntoolpath design approach achieves high scores, especially when a dense reward\nstructure is present.", + "authors": "Mojtaba Mozaffar, Ablodghani Ebrahimi, Jian Cao", + "published": "2020-09-30", + "updated": "2020-09-30", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2303.13489v2", + "title": "Boosting Reinforcement Learning and Planning with Demonstrations: A Survey", + "abstract": "Although reinforcement learning has seen tremendous success recently, this\nkind of trial-and-error learning can be impractical or inefficient in complex\nenvironments. 
The use of demonstrations, on the other hand, enables agents to\nbenefit from expert knowledge rather than having to discover the best action to\ntake through exploration. In this survey, we discuss the advantages of using\ndemonstrations in sequential decision making, various ways to apply\ndemonstrations in learning-based decision making paradigms (for example,\nreinforcement learning and planning in the learned models), and how to collect\nthe demonstrations in various scenarios. Additionally, we exemplify a practical\npipeline for generating and utilizing demonstrations in the recently proposed\nManiSkill robot learning benchmark.", + "authors": "Tongzhou Mu, Hao Su", + "published": "2023-03-23", + "updated": "2023-03-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2010.04816v1", + "title": "Characterizing Policy Divergence for Personalized Meta-Reinforcement Learning", + "abstract": "Despite ample motivation from costly exploration and limited trajectory data,\nrapidly adapting to new environments with few-shot reinforcement learning (RL)\ncan remain a challenging task, especially with respect to personalized\nsettings. Here, we consider the problem of recommending optimal policies to a\nset of multiple entities each with potentially different characteristics, such\nthat individual entities may parameterize distinct environments with unique\ntransition dynamics. Inspired by existing literature in meta-learning, we\nextend previous work by focusing on the notion that certain environments are\nmore similar to each other than others in personalized settings, and propose a\nmodel-free meta-learning algorithm that prioritizes past experiences by\nrelevance during gradient-based adaptation. Our algorithm involves\ncharacterizing past policy divergence through methods in inverse reinforcement\nlearning, and we illustrate how such metrics are able to effectively\ndistinguish past policy parameters by the environment they were deployed in,\nleading to more effective fast adaptation during test time. To study\npersonalization more effectively we introduce a navigation testbed to\nspecifically incorporate environment diversity across training episodes, and\ndemonstrate that our approach outperforms meta-learning alternatives with\nrespect to few-shot reinforcement learning in personalized settings.", + "authors": "Michael Zhang", + "published": "2020-10-09", + "updated": "2020-10-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.07240v1", + "title": "Model-Reference Reinforcement Learning for Collision-Free Tracking Control of Autonomous Surface Vehicles", + "abstract": "This paper presents a novel model-reference reinforcement learning algorithm\nfor the intelligent tracking control of uncertain autonomous surface vehicles\nwith collision avoidance. The proposed control algorithm combines a\nconventional control method with reinforcement learning to enhance control\naccuracy and intelligence. In the proposed control design, a nominal system is\nconsidered for the design of a baseline tracking controller using a\nconventional control approach. The nominal system also defines the desired\nbehaviour of uncertain autonomous surface vehicles in an obstacle-free\nenvironment. 
Thanks to reinforcement learning, the overall tracking controller\nis capable of compensating for model uncertainties and achieving collision\navoidance at the same time in environments with obstacles. In comparison to\ntraditional deep reinforcement learning methods, our proposed learning-based\ncontrol can provide stability guarantees and better sample efficiency. We\ndemonstrate the performance of the new algorithm using an example of autonomous\nsurface vehicles.", + "authors": "Qingrui Zhang, Wei Pan, Vasso Reppa", + "published": "2020-08-17", + "updated": "2020-08-17", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.LG", + "cs.RO", + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2301.10119v2", + "title": "Minimal Value-Equivalent Partial Models for Scalable and Robust Planning in Lifelong Reinforcement Learning", + "abstract": "Learning models of the environment from pure interaction is often considered\nan essential component of building lifelong reinforcement learning agents.\nHowever, the common practice in model-based reinforcement learning is to learn\nmodels that model every aspect of the agent's environment, regardless of\nwhether they are important in coming up with optimal decisions or not. In this\npaper, we argue that such models are not particularly well-suited for\nperforming scalable and robust planning in lifelong reinforcement learning\nscenarios and we propose new kinds of models that only model the relevant\naspects of the environment, which we call \"minimal value-equivalent partial\nmodels\". After providing a formal definition for these models, we provide\ntheoretical results demonstrating the scalability advantages of performing\nplanning with such models and then perform experiments to empirically\nillustrate our theoretical results. Then, we provide some useful heuristics on\nhow to learn these kinds of models with deep learning architectures and\nempirically demonstrate that models learned in such a way can allow for\nperforming planning that is robust to distribution shifts and compounding model\nerrors. Overall, both our theoretical and empirical results suggest that\nminimal value-equivalent partial models can provide significant benefits to\nperforming scalable and robust planning in lifelong reinforcement learning\nscenarios.", + "authors": "Safa Alver, Doina Precup", + "published": "2023-01-24", + "updated": "2023-06-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2007.12666v5", + "title": "Safe Model-Based Reinforcement Learning for Systems with Parametric Uncertainties", + "abstract": "Reinforcement learning has been established over the past decade as an\neffective tool to find optimal control policies for dynamical systems, with\nrecent focus on approaches that guarantee safety during the learning and/or\nexecution phases. In general, safety guarantees are critical in reinforcement\nlearning when the system is safety-critical and/or task restarts are not\npractically feasible. In optimal control theory, safety requirements are often\nexpressed in terms of state and/or control constraints. In recent years,\nreinforcement learning approaches that rely on persistent excitation have been\ncombined with a barrier transformation to learn the optimal control policies\nunder state constraints. 
To soften the excitation requirements, model-based\nreinforcement learning methods that rely on exact model knowledge have also\nbeen integrated with the barrier transformation framework. The objective of\nthis paper is to develop safe reinforcement learning method for deterministic\nnonlinear systems, with parametric uncertainties in the model, to learn\napproximate constrained optimal policies without relying on stringent\nexcitation conditions. To that end, a model-based reinforcement learning\ntechnique that utilizes a novel filtered concurrent learning method, along with\na barrier transformation, is developed in this paper to realize simultaneous\nlearning of unknown model parameters and approximate optimal state-constrained\ncontrol policies for safety-critical systems.", + "authors": "S M Nahid Mahmud, Scott A Nivison, Zachary I. Bell, Rushikesh Kamalapurkar", + "published": "2020-07-24", + "updated": "2021-10-05", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.SY", + "math.OC" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1901.07905v2", + "title": "Reinforcement Learning Ship Autopilot: Sample efficient and Model Predictive Control-based Approach", + "abstract": "In this research we focus on developing a reinforcement learning system for a\nchallenging task: autonomous control of a real-sized boat, with difficulties\narising from large uncertainties in the challenging ocean environment and the\nextremely high cost of exploring and sampling with a real boat. To this end, we\nexplore a novel Gaussian processes (GP) based reinforcement learning approach\nthat combines sample-efficient model-based reinforcement learning and model\npredictive control (MPC). Our approach, sample-efficient probabilistic model\npredictive control (SPMPC), iteratively learns a Gaussian process dynamics\nmodel and uses it to efficiently update control signals within the MPC closed\ncontrol loop. A system using SPMPC is built to efficiently learn an autopilot\ntask. After investigating its performance in a simulation modeled upon real\nboat driving data, the proposed system successfully learns to drive a\nreal-sized boat equipped with a single engine and sensors measuring GPS, speed,\ndirection, and wind in an autopilot task without human demonstration.", + "authors": "Yunduan Cui, Shigeki Osaki, Takamitsu Matsubara", + "published": "2019-01-23", + "updated": "2019-07-23", + "primary_cat": "cs.SY", + "cats": [ + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.02429v1", + "title": "AD4RL: Autonomous Driving Benchmarks for Offline Reinforcement Learning with Value-based Dataset", + "abstract": "Offline reinforcement learning has emerged as a promising technology by\nenhancing its practicality through the use of pre-collected large datasets.\nDespite its practical benefits, most algorithm development research in offline\nreinforcement learning still relies on game tasks with synthetic datasets. To\naddress such limitations, this paper provides autonomous driving datasets and\nbenchmarks for offline reinforcement learning research. We provide 19 datasets,\nincluding real-world human driver's datasets, and seven popular offline\nreinforcement learning algorithms in three realistic driving scenarios. We also\nprovide a unified decision-making process model that can operate effectively\nacross different scenarios, serving as a reference framework in algorithm\ndesign. 
Our research lays the groundwork for further collaborations in the\ncommunity to explore practical aspects of existing reinforcement learning\nmethods. Dataset and codes can be found in https://sites.google.com/view/ad4rl.", + "authors": "Dongsu Lee, Chanin Eom, Minhae Kwon", + "published": "2024-04-03", + "updated": "2024-04-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.05546v2", + "title": "Multi-Agent Quantum Reinforcement Learning using Evolutionary Optimization", + "abstract": "Multi-Agent Reinforcement Learning is becoming increasingly more important in\ntimes of autonomous driving and other smart industrial applications.\nSimultaneously a promising new approach to Reinforcement Learning arises using\nthe inherent properties of quantum mechanics, reducing the trainable parameters\nof a model significantly. However, gradient-based Multi-Agent Quantum\nReinforcement Learning methods often have to struggle with barren plateaus,\nholding them back from matching the performance of classical approaches. We\nbuild upon an existing approach for gradient free Quantum Reinforcement\nLearning and propose three genetic variations with Variational Quantum Circuits\nfor Multi-Agent Reinforcement Learning using evolutionary optimization. We\nevaluate our genetic variations in the Coin Game environment and also compare\nthem to classical approaches. We showed that our Variational Quantum Circuit\napproaches perform significantly better compared to a neural network with a\nsimilar amount of trainable parameters. Compared to the larger neural network,\nour approaches archive similar results using $97.88\\%$ less parameters.", + "authors": "Michael K\u00f6lle, Felix Topp, Thomy Phan, Philipp Altmann, Jonas N\u00fc\u00dflein, Claudia Linnhoff-Popien", + "published": "2023-11-09", + "updated": "2024-01-13", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph", + "cs.AI", + "cs.MA" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.07260v1", + "title": "TIAGo RL: Simulated Reinforcement Learning Environments with Tactile Data for Mobile Robots", + "abstract": "Tactile information is important for robust performance in robotic tasks that\ninvolve physical interaction, such as object manipulation. However, with more\ndata included in the reasoning and control process, modeling behavior becomes\nincreasingly difficult. Deep Reinforcement Learning (DRL) produced promising\nresults for learning complex behavior in various domains, including\ntactile-based manipulation in robotics. In this work, we present our\nopen-source reinforcement learning environments for the TIAGo service robot.\nThey produce tactile sensor measurements that resemble those of a real\nsensorised gripper for TIAGo, encouraging research in transfer learning of DRL\npolicies. 
Lastly, we show preliminary training results of a learned force\ncontrol policy and compare it to a classical PI controller.", + "authors": "Luca Lach, Francesco Ferro, Robert Haschke", + "published": "2023-11-13", + "updated": "2023-11-13", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1703.04489v1", + "title": "Reinforcement Learning for Transition-Based Mention Detection", + "abstract": "This paper describes an application of reinforcement learning to the mention\ndetection task. We define a novel action-based formulation for the mention\ndetection task, in which a model can flexibly revise past labeling decisions by\ngrouping together tokens and assigning partial mention labels. We devise a\nmethod to create mention-level episodes and we train a model by rewarding\ncorrectly labeled complete mentions, irrespective of the inner structure\ncreated. The model yields results which are on par with a competitive\nsupervised counterpart while being more flexible in terms of achieving targeted\nbehavior through reward modeling and generating internal mention structure,\nespecially on longer mentions.", + "authors": "Georgiana Dinu, Wael Hamza, Radu Florian", + "published": "2017-03-13", + "updated": "2017-03-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.13044v1", + "title": "Reinforcement Learning with Feedback-modulated TD-STDP", + "abstract": "Spiking neuron networks have been used successfully to solve simple\nreinforcement learning tasks with continuous action set applying learning rules\nbased on spike-timing-dependent plasticity (STDP). However, most of these\nmodels cannot be applied to reinforcement learning tasks with discrete action\nset since they assume that the selected action is a deterministic function of\nfiring rate of neurons, which is continuous. In this paper, we propose a new\nSTDP-based learning rule for spiking neuron networks which contains feedback\nmodulation. We show that the STDP-based learning rule can be used to solve\nreinforcement learning tasks with discrete action set at a speed similar to\nstandard reinforcement learning algorithms when applied to the CartPole and\nLunarLander tasks. Moreover, we demonstrate that the agent is unable to solve\nthese tasks if feedback modulation is omitted from the learning rule. We\nconclude that feedback modulation allows better credit assignment when only the\nunits contributing to the executed action and TD error participate in learning.", + "authors": "Stephen Chung, Robert Kozma", + "published": "2020-08-29", + "updated": "2020-08-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML", + "I.2.8" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2305.00477v2", + "title": "Posterior Sampling for Deep Reinforcement Learning", + "abstract": "Despite remarkable successes, deep reinforcement learning algorithms remain\nsample inefficient: they require an enormous amount of trial and error to find\ngood policies. Model-based algorithms promise sample efficiency by building an\nenvironment model that can be used for planning. Posterior Sampling for\nReinforcement Learning is such a model-based algorithm that has attracted\nsignificant interest due to its performance in the tabular setting. 
This paper\nintroduces Posterior Sampling for Deep Reinforcement Learning (PSDRL), the\nfirst truly scalable approximation of Posterior Sampling for Reinforcement\nLearning that retains its model-based essence. PSDRL combines efficient\nuncertainty quantification over latent state space models with a specially\ntailored continual planning algorithm based on value-function approximation.\nExtensive experiments on the Atari benchmark show that PSDRL significantly\noutperforms previous state-of-the-art attempts at scaling up posterior sampling\nwhile being competitive with a state-of-the-art (model-based) reinforcement\nlearning method, both in sample efficiency and computational efficiency.", + "authors": "Remo Sasso, Michelangelo Conserva, Paulo Rauber", + "published": "2023-04-30", + "updated": "2023-05-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "68T07", + "I.2.m" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.12516v2", + "title": "Prioritized Experience-based Reinforcement Learning with Human Guidance for Autonomous Driving", + "abstract": "Reinforcement learning (RL) requires skillful definition and remarkable\ncomputational efforts to solve optimization and control problems, which could\nimpair its prospect. Introducing human guidance into reinforcement learning is\na promising way to improve learning performance. In this paper, a comprehensive\nhuman guidance-based reinforcement learning framework is established. A novel\nprioritized experience replay mechanism that adapts to human guidance in the\nreinforcement learning process is proposed to boost the efficiency and\nperformance of the reinforcement learning algorithm. To relieve the heavy\nworkload on human participants, a behavior model is established based on an\nincremental online learning method to mimic human actions. We design two\nchallenging autonomous driving tasks for evaluating the proposed algorithm.\nExperiments are conducted to access the training and testing performance and\nlearning mechanism of the proposed algorithm. Comparative results against the\nstate-of-the-art methods suggest the advantages of our algorithm in terms of\nlearning efficiency, performance, and robustness.", + "authors": "Jingda Wu, Zhiyu Huang, Wenhui Huang, Chen Lv", + "published": "2021-09-26", + "updated": "2022-11-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1012.1552v1", + "title": "Bridging the Gap between Reinforcement Learning and Knowledge Representation: A Logical Off- and On-Policy Framework", + "abstract": "Knowledge Representation is important issue in reinforcement learning. In\nthis paper, we bridge the gap between reinforcement learning and knowledge\nrepresentation, by providing a rich knowledge representation framework, based\non normal logic programs with answer set semantics, that is capable of solving\nmodel-free reinforcement learning problems for more complex do-mains and\nexploits the domain-specific knowledge. We prove the correctness of our\napproach. We show that the complexity of finding an offline and online policy\nfor a model-free reinforcement learning problem in our approach is NP-complete.\nMoreover, we show that any model-free reinforcement learning problem in MDP\nenvironment can be encoded as a SAT problem. 
The importance of that is\nmodel-free reinforcement", + "authors": "Emad Saad", + "published": "2010-12-07", + "updated": "2010-12-07", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG", + "cs.LO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1910.03016v4", + "title": "Is a Good Representation Sufficient for Sample Efficient Reinforcement Learning?", + "abstract": "Modern deep learning methods provide effective means to learn good\nrepresentations. However, is a good representation itself sufficient for sample\nefficient reinforcement learning? This question has largely been studied only\nwith respect to (worst-case) approximation error, in the more classical\napproximate dynamic programming literature. With regards to the statistical\nviewpoint, this question is largely unexplored, and the extant body of\nliterature mainly focuses on conditions which permit sample efficient\nreinforcement learning with little understanding of what are necessary\nconditions for efficient reinforcement learning.\n This work shows that, from the statistical viewpoint, the situation is far\nsubtler than suggested by the more traditional approximation viewpoint, where\nthe requirements on the representation that suffice for sample efficient RL are\neven more stringent. Our main results provide sharp thresholds for\nreinforcement learning methods, showing that there are hard limitations on what\nconstitutes good function approximation (in terms of the dimensionality of the\nrepresentation), where we focus on natural representational conditions relevant\nto value-based, model-based, and policy-based learning. These lower bounds\nhighlight that having a good (value-based, model-based, or policy-based)\nrepresentation in and of itself is insufficient for efficient reinforcement\nlearning, unless the quality of this approximation passes certain hard\nthresholds. Furthermore, our lower bounds also imply exponential separations on\nthe sample complexity between 1) value-based learning with perfect\nrepresentation and value-based learning with a good-but-not-perfect\nrepresentation, 2) value-based learning and policy-based learning, 3)\npolicy-based learning and supervised learning and 4) reinforcement learning and\nimitation learning.", + "authors": "Simon S. Du, Sham M. Kakade, Ruosong Wang, Lin F. Yang", + "published": "2019-10-07", + "updated": "2020-02-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "math.OC", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2304.00006v1", + "title": "Bi-directional personalization reinforcement learning-based architecture with active learning using a multi-model data service for the travel nursing industry", + "abstract": "The challenges of using inadequate online recruitment systems can be\naddressed with machine learning and software engineering techniques.\nBi-directional personalization reinforcement learning-based architecture with\nactive learning can get recruiters to recommend qualified applicants and also\nenable applicants to receive personalized job recommendations. This paper\nfocuses on how machine learning techniques can enhance the recruitment process\nin the travel nursing industry by helping speed up data acquisition using a\nmulti-model data service and then providing personalized recommendations using\nbi-directional reinforcement learning with active learning. 
This need was\nespecially evident when trying to respond to the overwhelming needs of\nhealthcare facilities during the COVID-19 pandemic. The need for traveling\nnurses and other healthcare professionals was more evident during the lockdown\nperiod. A data service was architected for job feed processing using an\norchestration of natural language processing (NLP) models that synthesize\njob-related data into a database efficiently and accurately. The multi-model\ndata service provided the data necessary to develop a bi-directional\npersonalization system using reinforcement learning with active learning that\ncould recommend travel nurses and healthcare professionals to recruiters and\nprovide job recommendations to applicants using an internally developed smart\nmatch score as a basis. The bi-directional personalization reinforcement\nlearning-based architecture with active learning combines two personalization\nsystems - one that runs forward to recommend qualified candidates for jobs and\nanother that runs backward and recommends jobs for applicants.", + "authors": "Ezana N. Beyenne", + "published": "2023-03-14", + "updated": "2023-03-14", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.LG", + "I.2" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2006.09234v1", + "title": "Model Embedding Model-Based Reinforcement Learning", + "abstract": "Model-based reinforcement learning (MBRL) has shown its advantages in\nsample-efficiency over model-free reinforcement learning (MFRL). Despite the\nimpressive results it achieves, it still faces a trade-off between the ease of\ndata generation and model bias. In this paper, we propose a simple and elegant\nmodel-embedding model-based reinforcement learning (MEMB) algorithm in the\nframework of the probabilistic reinforcement learning. To balance the\nsample-efficiency and model bias, we exploit both real and imaginary data in\nthe training. In particular, we embed the model in the policy update and learn\n$Q$ and $V$ functions from the real data set. We provide the theoretical\nanalysis of MEMB with the Lipschitz continuity assumption on the model and\npolicy. At last, we evaluate MEMB on several benchmarks and demonstrate our\nalgorithm can achieve state-of-the-art performance.", + "authors": "Xiaoyu Tan, Chao Qu, Junwu Xiong, James Zhang", + "published": "2020-06-16", + "updated": "2020-06-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.14766v1", + "title": "Reinforcement Learning from Statistical Feedback: the Journey from AB Testing to ANT Testing", + "abstract": "Reinforcement Learning from Human Feedback (RLHF) has played a crucial role\nin the success of large models such as ChatGPT. RLHF is a reinforcement\nlearning framework which combines human feedback to improve learning\neffectiveness and performance. However, obtaining preferences feedback manually\nis quite expensive in commercial applications. Some statistical commercial\nindicators are usually more valuable and always ignored in RLHF. There exists a\ngap between commercial target and model training. In our research, we will\nattempt to fill this gap with statistical business feedback instead of human\nfeedback, using AB testing which is a well-established statistical method.\nReinforcement Learning from Statistical Feedback (RLSF) based on AB testing is\nproposed. 
Statistical inference methods are used to obtain preferences for\ntraining the reward network, which fine-tunes the pre-trained model in\nreinforcement learning framework, achieving greater business value.\nFurthermore, we extend AB testing with double selections at a single time-point\nto ANT testing with multiple selections at different feedback time points.\nMoreover, we design numerical experiences to validate the effectiveness of our\nalgorithm framework.", + "authors": "Feiyang Han, Yimin Wei, Zhaofeng Liu, Yanxing Qi", + "published": "2023-11-24", + "updated": "2023-11-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "math.ST", + "stat.ME", + "stat.TH" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2212.08232v1", + "title": "Offline Robot Reinforcement Learning with Uncertainty-Guided Human Expert Sampling", + "abstract": "Recent advances in batch (offline) reinforcement learning have shown\npromising results in learning from available offline data and proved offline\nreinforcement learning to be an essential toolkit in learning control policies\nin a model-free setting. An offline reinforcement learning algorithm applied to\na dataset collected by a suboptimal non-learning-based algorithm can result in\na policy that outperforms the behavior agent used to collect the data. Such a\nscenario is frequent in robotics, where existing automation is collecting\noperational data. Although offline learning techniques can learn from data\ngenerated by a sub-optimal behavior agent, there is still an opportunity to\nimprove the sample complexity of existing offline reinforcement learning\nalgorithms by strategically introducing human demonstration data into the\ntraining process. To this end, we propose a novel approach that uses\nuncertainty estimation to trigger the injection of human demonstration data and\nguide policy training towards optimal behavior while reducing overall sample\ncomplexity. Our experiments show that this approach is more sample efficient\nwhen compared to a naive way of combining expert data with data collected from\na sub-optimal agent. We augmented an existing offline reinforcement learning\nalgorithm Conservative Q-Learning with our approach and performed experiments\non data collected from MuJoCo and OffWorld Gym learning environments.", + "authors": "Ashish Kumar, Ilya Kuzovkin", + "published": "2022-12-16", + "updated": "2022-12-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1406.1853v2", + "title": "Model-based Reinforcement Learning and the Eluder Dimension", + "abstract": "We consider the problem of learning to optimize an unknown Markov decision\nprocess (MDP). We show that, if the MDP can be parameterized within some known\nfunction class, we can obtain regret bounds that scale with the dimensionality,\nrather than cardinality, of the system. We characterize this dependence\nexplicitly as $\\tilde{O}(\\sqrt{d_K d_E T})$ where $T$ is time elapsed, $d_K$ is\nthe Kolmogorov dimension and $d_E$ is the \\emph{eluder dimension}. 
These\nrepresent the first unified regret bounds for model-based reinforcement\nlearning and provide state of the art guarantees in several important settings.\nMoreover, we present a simple and computationally efficient algorithm\n\\emph{posterior sampling for reinforcement learning} (PSRL) that satisfies\nthese bounds.", + "authors": "Ian Osband, Benjamin Van Roy", + "published": "2014-06-07", + "updated": "2014-10-31", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2010.11738v1", + "title": "Optimising Stochastic Routing for Taxi Fleets with Model Enhanced Reinforcement Learning", + "abstract": "The future of mobility-as-a-Service (Maas)should embrace an integrated system\nof ride-hailing, street-hailing and ride-sharing with optimised intelligent\nvehicle routing in response to a real-time, stochastic demand pattern. We aim\nto optimise routing policies for a large fleet of vehicles for street-hailing\nservices, given a stochastic demand pattern in small to medium-sized road\nnetworks. A model-based dispatch algorithm, a high performance model-free\nreinforcement learning based algorithm and a novel hybrid algorithm combining\nthe benefits of both the top-down approach and the model-free reinforcement\nlearning have been proposed to route the \\emph{vacant} vehicles. We design our\nreinforcement learning based routing algorithm using proximal policy\noptimisation and combined intrinsic and extrinsic rewards to strike a balance\nbetween exploration and exploitation. Using a large-scale agent-based\nmicroscopic simulation platform to evaluate our proposed algorithms, our\nmodel-free reinforcement learning and hybrid algorithm show excellent\nperformance on both artificial road network and community-based Singapore road\nnetwork with empirical demands, and our hybrid algorithm can significantly\naccelerate the model-free learner in the process of learning.", + "authors": "Shen Ren, Qianxiao Li, Liye Zhang, Zheng Qin, Bo Yang", + "published": "2020-10-22", + "updated": "2020-10-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "nlin.AO", + "physics.soc-ph" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1811.00128v1", + "title": "Towards a Simple Approach to Multi-step Model-based Reinforcement Learning", + "abstract": "When environmental interaction is expensive, model-based reinforcement\nlearning offers a solution by planning ahead and avoiding costly mistakes.\nModel-based agents typically learn a single-step transition model. In this\npaper, we propose a multi-step model that predicts the outcome of an action\nsequence with variable length. We show that this model is easy to learn, and\nthat the model can make policy-conditional predictions. We report preliminary\nresults that show a clear advantage for the multi-step model compared to its\none-step counterpart.", + "authors": "Kavosh Asadi, Evan Cater, Dipendra Misra, Michael L. 
Littman", + "published": "2018-10-31", + "updated": "2018-10-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2105.00822v2", + "title": "Generative Adversarial Reward Learning for Generalized Behavior Tendency Inference", + "abstract": "Recent advances in reinforcement learning have inspired increasing interest\nin learning user modeling adaptively through dynamic interactions, e.g., in\nreinforcement learning based recommender systems. Reward function is crucial\nfor most of reinforcement learning applications as it can provide the guideline\nabout the optimization. However, current reinforcement-learning-based methods\nrely on manually-defined reward functions, which cannot adapt to dynamic and\nnoisy environments. Besides, they generally use task-specific reward functions\nthat sacrifice generalization ability. We propose a generative inverse\nreinforcement learning for user behavioral preference modelling, to address the\nabove issues. Instead of using predefined reward functions, our model can\nautomatically learn the rewards from user's actions based on discriminative\nactor-critic network and Wasserstein GAN. Our model provides a general way of\ncharacterizing and explaining underlying behavioral tendencies, and our\nexperiments show our method outperforms state-of-the-art methods in a variety\nof scenarios, namely traffic signal control, online recommender systems, and\nscanpath prediction.", + "authors": "Xiaocong Chen, Lina Yao, Xianzhi Wang, Aixin Sun, Wenjie Zhang, Quan Z. Sheng", + "published": "2021-05-03", + "updated": "2021-05-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.IR" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.16348v2", + "title": "Rating-based Reinforcement Learning", + "abstract": "This paper develops a novel rating-based reinforcement learning approach that\nuses human ratings to obtain human guidance in reinforcement learning.\nDifferent from the existing preference-based and ranking-based reinforcement\nlearning paradigms, based on human relative preferences over sample pairs, the\nproposed rating-based reinforcement learning approach is based on human\nevaluation of individual trajectories without relative comparisons between\nsample pairs. The rating-based reinforcement learning approach builds on a new\nprediction model for human ratings and a novel multi-class loss function. We\nconduct several experimental studies based on synthetic ratings and real human\nratings to evaluate the effectiveness and benefits of the new rating-based\nreinforcement learning approach.", + "authors": "Devin White, Mingkang Wu, Ellen Novoseller, Vernon J. Lawhern, Nicholas Waytowich, Yongcan Cao", + "published": "2023-07-30", + "updated": "2024-01-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.16543v2", + "title": "Model-based deep reinforcement learning for accelerated learning from flow simulations", + "abstract": "In recent years, deep reinforcement learning has emerged as a technique to\nsolve closed-loop flow control problems. 
Employing simulation-based\nenvironments in reinforcement learning enables a priori end-to-end optimization\nof the control system, provides a virtual testbed for safety-critical control\napplications, and allows to gain a deep understanding of the control\nmechanisms. While reinforcement learning has been applied successfully in a\nnumber of rather simple flow control benchmarks, a major bottleneck toward\nreal-world applications is the high computational cost and turnaround time of\nflow simulations. In this contribution, we demonstrate the benefits of\nmodel-based reinforcement learning for flow control applications. Specifically,\nwe optimize the policy by alternating between trajectories sampled from flow\nsimulations and trajectories sampled from an ensemble of environment models.\nThe model-based learning reduces the overall training time by up to $85\\%$ for\nthe fluidic pinball test case. Even larger savings are expected for more\ndemanding flow simulations.", + "authors": "Andre Weiner, Janis Geise", + "published": "2024-02-26", + "updated": "2024-04-10", + "primary_cat": "physics.flu-dyn", + "cats": [ + "physics.flu-dyn", + "cs.CE", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.02104v2", + "title": "Model-Based Episodic Memory Induces Dynamic Hybrid Controls", + "abstract": "Episodic control enables sample efficiency in reinforcement learning by\nrecalling past experiences from an episodic memory. We propose a new\nmodel-based episodic memory of trajectories addressing current limitations of\nepisodic control. Our memory estimates trajectory values, guiding the agent\ntowards good policies. Built upon the memory, we construct a complementary\nlearning model via a dynamic hybrid control unifying model-based, episodic and\nhabitual learning into a single architecture. Experiments demonstrate that our\nmodel allows significantly faster and better learning than other strong\nreinforcement learning agents across a variety of environments including\nstochastic and non-Markovian settings.", + "authors": "Hung Le, Thommen Karimpanal George, Majid Abdolshah, Truyen Tran, Svetha Venkatesh", + "published": "2021-11-03", + "updated": "2021-11-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2012.07369v2", + "title": "Learning for MPC with Stability & Safety Guarantees", + "abstract": "The combination of learning methods with Model Predictive Control (MPC) has\nattracted a significant amount of attention in the recent literature. The hope\nof this combination is to reduce the reliance of MPC schemes on accurate\nmodels, and to tap into the fast developing machine learning and reinforcement\nlearning tools to exploit the growing amount of data available for many\nsystems. In particular, the combination of reinforcement learning and MPC has\nbeen proposed as a viable and theoretically justified approach to introduce\nexplainable, safe and stable policies in reinforcement learning. However, a\nformal theory detailing how the safety and stability of an MPC-based policy can\nbe maintained through the parameter updates delivered by the learning tools is\nstill lacking. This paper addresses this gap. The theory is developed for the\ngeneric Robust MPC case, and applied in simulation in the robust tube-based\nlinear MPC case, where the theory is fairly easy to deploy in practice. 
The\npaper focuses on Reinforcement Learning as a learning tool, but it applies to\nany learning method that updates the MPC parameters online.", + "authors": "S\u00e9bastien Gros, Mario Zanon", + "published": "2020-12-14", + "updated": "2022-07-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SY", + "eess.SY", + "math.OC" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2012.09737v2", + "title": "Model-free and Bayesian Ensembling Model-based Deep Reinforcement Learning for Particle Accelerator Control Demonstrated on the FERMI FEL", + "abstract": "Reinforcement learning holds tremendous promise in accelerator controls. The\nprimary goal of this paper is to show how this approach can be utilised on an\noperational level on accelerator physics problems. Despite the success of\nmodel-free reinforcement learning in several domains, sample-efficiency still\nis a bottle-neck, which might be encompassed by model-based methods. We compare\nwell-suited purely model-based to model-free reinforcement learning applied to\nthe intensity optimisation on the FERMI FEL system. We find that the\nmodel-based approach demonstrates higher representational power and\nsample-efficiency, while the asymptotic performance of the model-free method is\nslightly superior. The model-based algorithm is implemented in a DYNA-style\nusing an uncertainty aware model, and the model-free algorithm is based on\ntailored deep Q-learning. In both cases, the algorithms were implemented in a\nway, which presents increased noise robustness as omnipresent in accelerator\ncontrol problems. Code is released in\nhttps://github.com/MathPhysSim/FERMI_RL_Paper.", + "authors": "Simon Hirlaender, Niky Bruchon", + "published": "2020-12-17", + "updated": "2022-01-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SY", + "eess.SY", + "physics.acc-ph", + "I.2; J.2" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.07789v1", + "title": "Safe Reinforcement Learning by Imagining the Near Future", + "abstract": "Safe reinforcement learning is a promising path toward applying reinforcement\nlearning algorithms to real-world problems, where suboptimal behaviors may lead\nto actual negative consequences. In this work, we focus on the setting where\nunsafe states can be avoided by planning ahead a short time into the future. In\nthis setting, a model-based agent with a sufficiently accurate model can avoid\nunsafe states. We devise a model-based algorithm that heavily penalizes unsafe\ntrajectories, and derive guarantees that our algorithm can avoid unsafe states\nunder certain assumptions. Experiments demonstrate that our algorithm can\nachieve competitive rewards with fewer safety violations in several continuous\ncontrol tasks.", + "authors": "Garrett Thomas, Yuping Luo, Tengyu Ma", + "published": "2022-02-15", + "updated": "2022-02-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.12095v1", + "title": "Document-editing Assistants and Model-based Reinforcement Learning as a Path to Conversational AI", + "abstract": "Intelligent assistants that follow commands or answer simple questions, such\nas Siri and Google search, are among the most economically important\napplications of AI. 
Future conversational AI assistants promise even greater\ncapabilities and a better user experience through a deeper understanding of the\ndomain, the user, or the user's purposes. But what domain and what methods are\nbest suited to researching and realizing this promise? In this article we argue\nfor the domain of voice document editing and for the methods of model-based\nreinforcement learning. The primary advantages of voice document editing are\nthat the domain is tightly scoped and that it provides something for the\nconversation to be about (the document) that is delimited and fully accessible\nto the intelligent assistant. The advantages of reinforcement learning in\ngeneral are that its methods are designed to learn from interaction without\nexplicit instruction and that it formalizes the purposes of the assistant.\nModel-based reinforcement learning is needed in order to genuinely understand\nthe domain of discourse and thereby work efficiently with the user to achieve\ntheir goals. Together, voice document editing and model-based reinforcement\nlearning comprise a promising research direction for achieving conversational\nAI.", + "authors": "Katya Kudashkina, Patrick M. Pilarski, Richard S. Sutton", + "published": "2020-08-27", + "updated": "2020-08-27", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.HC", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2303.09013v1", + "title": "Self-Inspection Method of Unmanned Aerial Vehicles in Power Plants Using Deep Q-Network Reinforcement Learning", + "abstract": "For the purpose of inspecting power plants, autonomous robots can be built\nusing reinforcement learning techniques. The method replicates the environment\nand employs a simple reinforcement learning (RL) algorithm. This strategy might\nbe applied in several sectors, including the electricity generation sector. A\npre-trained model with perception, planning, and action is suggested by the\nresearch. To address optimization problems, such as the Unmanned Aerial Vehicle\n(UAV) navigation problem, Deep Q-network (DQN), a reinforcement learning-based\nframework that Deepmind launched in 2015, incorporates both deep learning and\nQ-learning. To overcome problems with current procedures, the research proposes\na power plant inspection system incorporating UAV autonomous navigation and DQN\nreinforcement learning. These training processes set reward functions with\nreference to states and consider both internal and external effect factors,\nwhich distinguishes them from other reinforcement learning training techniques\nnow in use. The key components of the reinforcement learning segment of the\ntechnique, for instance, introduce states such as the simulation of a wind\nfield, the battery charge level of an unmanned aerial vehicle, the height the\nUAV reached, etc. The trained model makes it more likely that the inspection\nstrategy will be applied in practice by enabling the UAV to move around on its\nown in difficult environments. The average score of the model converges to\n9,000. 
The trained model allowed the UAV to make the fewest number of rotations\nnecessary to go to the target point.", + "authors": "Haoran Guan", + "published": "2023-03-16", + "updated": "2023-03-16", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2204.01409v1", + "title": "Safe Controller for Output Feedback Linear Systems using Model-Based Reinforcement Learning", + "abstract": "The objective of this research is to enable safety-critical systems to\nsimultaneously learn and execute optimal control policies in a safe manner to\nachieve complex autonomy. Learning optimal policies via trial and error, i.e.,\ntraditional reinforcement learning, is difficult to implement in\nsafety-critical systems, particularly when task restarts are unavailable. Safe\nmodel-based reinforcement learning techniques based on a barrier transformation\nhave recently been developed to address this problem. However, these methods\nrely on full state feedback, limiting their usability in a real-world\nenvironment. In this work, an output-feedback safe model-based reinforcement\nlearning technique based on a novel barrier-aware dynamic state estimator has\nbeen designed to address this issue. The developed approach facilitates\nsimultaneous learning and execution of safe control policies for\nsafety-critical linear systems. Simulation results indicate that barrier\ntransformation is an effective approach to achieve online reinforcement\nlearning in safety-critical systems using output feedback.", + "authors": "S M Nahid Mahmud, Moad Abudia, Scott A Nivison, Zachary I. Bell, Rushikesh Kamalapurkar", + "published": "2022-04-04", + "updated": "2022-04-04", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + } + ], + [ + { + "url": "http://arxiv.org/abs/2402.04867v2", + "title": "Multimodal Query Suggestion with Multi-Agent Reinforcement Learning from Human Feedback", + "abstract": "In the rapidly evolving landscape of information retrieval, search engines\nstrive to provide more personalized and relevant results to users. Query\nsuggestion systems play a crucial role in achieving this goal by assisting\nusers in formulating effective queries. However, existing query suggestion\nsystems mainly rely on textual inputs, potentially limiting user search\nexperiences for querying images. In this paper, we introduce a novel Multimodal\nQuery Suggestion (MMQS) task, which aims to generate query suggestions based on\nuser query images to improve the intentionality and diversity of search\nresults. We present the RL4Sugg framework, leveraging the power of Large\nLanguage Models (LLMs) with Multi-Agent Reinforcement Learning from Human\nFeedback to optimize the generation process. Through comprehensive experiments,\nwe validate the effectiveness of RL4Sugg, demonstrating a 18% improvement\ncompared to the best existing approach. 
Moreover, the MMQS has been transferred\ninto real-world search engine products, which yield enhanced user engagement.\nOur research advances query suggestion systems and provides a new perspective\non multimodal information retrieval.", + "authors": "Zheng Wang, Bingzheng Gan, Wei Shi", + "published": "2024-02-07", + "updated": "2024-02-09", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR" + ], + "label": "Original Paper", + "paper_cat": "Model AND Based AND Reinforcement AND Learning", + "gt": "Query Suggestion. Query suggestion is a feature of search engines that provides users with a list of possible queries based on their current query inputs. We review the literature in terms of Textual Query Suggestion (TQS) and Visual Query Suggestion (VQS). For TQS, it relies on the text of the user\u2019s query to generate a list of possible textual queries. There are a number of different methods for generating the query suggestions, including (i) query auto completion [2, 39], (ii) query spelling correction [17], (iii) query expansion [5], and (iv) query rewriting [14, 15]. Overall, TQS does not use any visual information, such as images, to generate suggestions. For VQS, it is introduced by Zha et al. [50, 51], which offers users both textual and visual suggestions based on their query text. This enables users to conveniently specify their search intentions. When a user selects a text-image pair from the suggestion list, the VQS system performs an image search using the provided text and employs the selected image to filter initial search results by leveraging its visual content. Subsequently, many techniques are proposed for the VQS. For example, Zeng et al. [49] develop a new client-side photo search system, which uses VQS and joint textimage hashing to improve the search accuracy and efficiency. Li et al. [28] study video search, and a multimodal method is developed to process the joint text and images suggestions produced by VQS. Overall, our MMQS problem differs from VQS mainly in that the user\u2019s query input is different. In MMQS, the input is images, while Multimodal Query Suggestion with Multi-Agent Reinforcement Learning from Human Feedback WWW\u201924, May 13\u201317, 2024, Singapore in VQS, it is text. Additionally, Bian et al. [3] study a new setting of VQS called Visual Query Attributes Suggestion (VQAS), where an image is inputted and VQAS suggests informative attributes (e.g., color, texture, shape) extracted from the query image via some SVM classifiers. These attributes allow users to select and express more precise search intents. Our work differs from VQAS in two aspects. First, MMQS outputs query suggestions instead of those image attributes, where the suggestions need satisfying the intentionality and diversity properties. Second, we propose a multi-agent reinforcement learning based framework to generate the suggestions from large language models instead of choosing those pre-defined attributes using the classifiers. Vision-Language Pre-training. Our work is related to VisionLanguage Pre-training (VLP) in techniques. VLP aims to train a multimodal foundation model to align the relationships between images and text, and then the model is used to support various downstream vision-and-language tasks (e.g., image captioning or visual question answering). 
The literature on VLP training strategies can be categorized into three main approaches: end-to-end pretraining [7, 18, 21, 25\u201327, 35, 41, 42], modular pre-training [1, 6, 12, 16, 25, 30, 52, 53], and zero-shot [38, 44, 47]. Our work falls into the modular pre-training, where it makes use of off-the-shelf pre-trained models, keeping them frozen during the pre-training. Existing studies can be categorized according to different frozen components, including the approaches that freeze image encoders [52, 53], language models [6, 12, 16], and both [1, 25, 30]. Specifically, Zhai et al. [52] study Locked-image Tuning (LiT), where it fine-tunes language models via contrastive learning to extract useful representations from locked pre-trained image models for new vision tasks. Driess et al. [12] propose embodied language models, which integrate visual information through a projector into language models. It freezes the language model, and just trains the image encoder with the projector for robotics tasks. Flamingo [1] freezes both image encoders and language models, and introduces cross-attention layers into the language model to incorporate visual features during the fine-tuning. Similarly, BLIP2 [25] introduces an adapter called Q-Former, which injects visual features into the language model. Our RL4Sugg freezes both image encoders and language models, where we introduce two lightweight agents for fine-tuning, which align the input image to generate query suggestions with RLHF. Reinforcement Learning from Human Feedback. Reinforcement Learning from Human Feedback (RLHF) is an active research area that focuses on training RL agents using human-generated feedback, which is originally developed for training simple robots to interact with real-world environments for complex tasks such as Atari games [9]. Recently, RLHF has been applied to fine-tune various language tasks including text summarization [45], dialogue systems [19, 48], machine translation [22], semantic parsing [24], and review generation [8]. For example, InstructGPT [33] collects a dataset of model desired outputs written by human labelers, and it then adopts RLHF to fine-tune GPT-3 [4]. In this paper, we propose a novel multi-agent reinforcement learning framework, which incorporates RLHF to generate human intentional query suggestions. To our best knowledge, this is the first of its kind.", + "pre_questions": [], + "main_content": "Multimodal Query Suggestion with Multi-Agent Reinforcement Learning from Human Feedback Zheng Wang1, Bingzheng Gan1, Wei Shi1 1Huawei Singapore Research Center, Singapore {wangzheng155,gan.bingzheng,w.shi}@huawei.com ABSTRACT In the rapidly evolving landscape of information retrieval, search engines strive to provide more personalized and relevant results to users. Query suggestion systems play a crucial role in achieving this goal by assisting users in formulating effective queries. However, existing query suggestion systems mainly rely on textual inputs, potentially limiting user search experiences for querying images. In this paper, we introduce a novel Multimodal Query Suggestion (MMQS) task, which aims to generate query suggestions based on user query images to improve the intentionality and diversity of search results. We present the RL4Sugg framework, leveraging the power of Large Language Models (LLMs) with Multi-Agent Reinforcement Learning from Human Feedback to optimize the generation process. 
Through comprehensive experiments, we validate the effectiveness of RL4Sugg, demonstrating a 18% improvement compared to the best existing approach. Moreover, the MMQS has been transferred into real-world search engine products, which yield enhanced user engagement. Our research advances query suggestion systems and provides a new perspective on multimodal information retrieval. CCS CONCEPTS \u2022 Information systems \u2192Query suggestion. KEYWORDS multimodal query suggestion, multi-agent reinforcement learning from human feedback, vision-language pre-training ACM Reference Format: Zheng Wang1, Bingzheng Gan1, Wei Shi1. 2024. Multimodal Query Suggestion with Multi-Agent Reinforcement Learning from Human Feedback. In Proceedings of the ACM Web Conference 2024 (WWW\u201924), May 13\u201317, 2024, Singapore. ACM, New York, NY, USA, 12 pages. https://doi.org/10.1145/ 3543507.3583304 1 INTRODUCTION Search engines have become an indispensable tool for information retrieval, aiding users in finding relevant content in vast online repositories. Traditional keyword-based search methods [23, 46], Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. WWW\u201924, May 13\u201317, 2024, Singapore \u00a9 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 978-1-4503-9416-1/23/04...$15.00 https://doi.org/10.1145/3543507.3583304 bicycle bicycle poker bicycle shop bicycle pump bicycle bicycle poker bicycle shop bicycle pump Guess what you want to search How to fix a broken bicycle chain Nearby bicycle repair stalls Why does the bicycle chain frequently break (a) Textual Query Suggestion (b) Visual Query Suggestion (c) Multimodal Query Suggestion Figure 1: Illustration of MMQS problem. while effective, often require users to precisely articulate their information needs, leading to potential challenges in formulating accurate queries. To enhance the search experience and provide more user-friendly alternatives, query suggestion systems have gained prominence. These systems aim to generate relevant and contextually appropriate suggestions based on users\u2019 current query input, reducing the cognitive burden on users and increasing the efficiency of information discovery. There are two well-established query suggestion systems that have been extensively studied: Textual Query Suggestion (TQS) [2, 5, 14, 15, 17, 39] and Visual Query Suggestion (VQS) [28, 49\u201351]. In TQS, it is capable to automatically suggest a list of keywords based on users\u2019 current queries, a feature that many existing search engines have already implemented. Its primary purpose is to assist users in formulating their search intents clearly and conveniently (as illustrated in Figure 1(a)). In VQS, the suggestions generated by TQS might be inadequate for users who lack familiarity with the suggested terms. 
To address this issue, incorporating visual examples along with the suggestions can greatly improve the user experience and help users better understand the context (as illustrated in Figure 1(b)). The limitation of these systems is that they mainly rely on users\u2019 text inputs to generate potential suggestions. However, images contain rich information that can be quickly perceived. There are some situations where users can imagine what they desire but find it challenging to express it concisely in words. For example, imagine a scenario where a user\u2019s bicycle breaks down while riding on the street. In such a case, the intuitive search for the user would be to quickly take a photo of the bicycle to query for a solution rather than relying on TQS or VQS to describe the current issue in text. If the user types \u201cbicycle\u201d in a search box, the suggestions provided may be \u201cbicycle poker\u201d, \u201cbicycle shop\u201d, and \u201cbicycle pump\u201d, which are all irrelevant in expressing the user\u2019s intent. In addition, to further enhance the query suggestions, it is desirable for the system to not only provide guidance on fixing arXiv:2402.04867v2 [cs.IR] 9 Feb 2024 WWW\u201924, May 13\u201317, 2024, Singapore Zheng Wang1, Bingzheng Gan1, Wei Shi1 a broken bicycle but also offer other useful information, such as nearby bicycle repair stalls and possible reasons why his/her bicycle frequently breaks. These diverse choices allow users to explore the information they may need effectively (as illustrated in Figure 1(c)). Motivated by practical scenarios, we introduce a novel query formulation, called Multimodal Query Suggestion (MMQS). It takes a user query image as input and generates query suggestions to response to the user\u2019s search intent. Given that the query suggestions are intended to assist users in activating search engines, the design of MMQS focuses on two essential properties: \u2022 Intentionality: The primary goal of MMQS is to capture the user\u2019s search intent effectively. Visual data presents an opportunity to infer implicit information needs that might be challenging to articulate in words. By incorporating visual cues from user query images, MMQS aims to provide query suggestions that accurately reflect the user\u2019s underlying intent and support more focused and relevant searches. Diversity: MMQS generates query suggestions that encompass and relevant searches. \u2022 Diversity: MMQS generates query suggestions that encompass different aspects of the query image, thereby expanding the search space. This empowers users to explore multiple aspects of information discovery, enhancing the overall search experience. Challenges and a New Solution. The formulation of the MMQS problem introduces several challenges that need innovative solutions. Data Collection (C1): Integrating multimodal data comprising both textual and visual information poses unique data preparation challenges. Specifically, it involves generating image-suggestion pairs, a property not presents in many publicly available imagetext datasets (e.g., COCO Captions [29] or Flickr30k Entities [34]). Moreover, annotating user intent can be time-consuming and lacks clear guidelines. Therefore, developing efficient and effective strategies for data collection, automated pairing, and reliable annotation becomes crucial for the success of MMQS. Capturing Intentionality and Diversity (C2): Inferring user intent from a query image and generating diverse suggestions is a complex task. 
It requires understanding the visual context and associations between images and textual suggestions. Achieving both intentionality and diversity meanwhile in the generated suggestions necessitates carefully designed techniques to align with user intent and avoid redundancy. To address the aforementioned challenges, we propose a novel RL4Sugg framework, leveraging the capabilities of Large Language Models (LLMs) with Multi-Agent Reinforcement Learning to generate query suggestions based on input images. To tackle C1, we leverage the current GPT language generation capabilities to automate the collection of image-suggestion pairs and user intent annotations based on potential clicks. We employ a threshold-based mechanism that selectively involves manual effort for suggestions with lower confidence scores, ensuring high-quality annotations while striking a balance between automation and human input in the data labeling process. To tackle C2, we study a novel solution based on multi-agent reinforcement learning, where we employ two distinct agents within the framework: Agent-I, responsible for intentionality, and Agent-D, responsible for diversity. Specifically, the Agent-I first generates a set of intentional candidate suggestions, which incorporates a RewardNet and a PolicyNet tailored for this task. The RewardNet utilizes multi-task learning to align image-suggestion pairs and assigns rewards to these pairs. Following this, the PolicyNet is trained through Reinforcement Learning from Human Feedback (RLHF) to enhance the intentionality of the suggestions. Further, the Agent-D selects diverse suggestions from the candidate pool, which is designed to cooperate with the AgentI to ensure that both intentionality and diversity are optimized explicitly in an end-to-end training. Our contributions can be summarized as follows: \u2022 The MMQS Task: We introduce a novel query formulation, called Multimodal Query Suggestion (MMQS), which addresses the gap between multimodal data and query suggestions in search engines. Our objective is to improve the user search experience by providing intentional and diverse query suggestions generated from user query images. To the best of our knowledge, this work presents the first attempt in its kind. The Framework: We present a novel framework called presents the first attempt in its kind. \u2022 The RL4Sugg Framework: We present a novel framework called RL4Sugg, which is designed to generate query suggestions using user input images. By leveraging the capabilities of LLMs and multi-agent reinforcement learning, RL4Sugg optimizes the intentionality and diversity of the generated suggestions through an end-to-end training. Comprehensive Experiments: We conduct extensive experian end-to-end training. \u2022 Comprehensive Experiments: We conduct extensive experiments on two real-world datasets and achieve promising results than various baselines. Our experiments demonstrate the effectiveness of our proposed framework in generating intentional and diverse query suggestions (e.g., it demonstrates 18% improvement compared to the best baseline method). In addition, the proposed MMQS has been transferred into products, and the results show that the deployed system effectively enhances user engagement of search engines. We study the problem of Multimodal Query Suggestion (MMQS), which is formulated below. Problem 1 (MMQS). 
Given a user query image, denoted as \ud835\udc3c, MMQS aims to recommend textual suggestions, denoted as S =< \ud835\udc461,\ud835\udc462, ...,\ud835\udc46\ud835\udc3e>. The suggestions are used to help users activate search engines, and thus they need to meet the following two properties: Intentionality: the suggested queries should align with the content depicted in the query image, and effectively capture the user\u2019s intent to offer meaningful options for initiating the search. Diversity: the suggested queries should reflect different aspects of the query image, offering users a diverse set of choices and avoiding redundancy among them. By fulfilling these properties, MMQS aims to enrich the user experience by offering intentional and diverse query suggestions derived from the input query image. MMQS provides a foundational feature for supporting two types of search engines: generationbased and retrieval-based (to be introduced in Section 4.5). 4 METHODOLOGY 4.1 Overview of RL4Sugg The proposed solution RL4Sugg addresses the problem of Multimodal Query Suggestion (MMQS) by generating intentional and diverse query suggestions based on user query images. It consists of several key components, including data collection (Section 4.2), Agent-I training (Section 4.3), and Agent-D training (Section 4.4). The overall framework is shown in Figure 2. In data collection, the language generation capabilities of LLMs are utilized to automate the collection of image-suggestion pairs and the annotation of user intents. This approach combines the efficiency of LLM automation and the reliability of human annotation together to ensure data quality for training. In Agent-I, it generates candidate suggestions by combining a RewardNet and a PolicyNet to capture intentionality. The RewardNet is trained using annotated image-suggestion pairs to assign scores (rewards) indicating the user interest in clicking suggestions. This involves a multi-task learning approach optimizing three pre-training tasks to generate informative rewards. The PolicyNet adopts a two-tower structure to capture visual and textual features and incorporates a Language Model (LLM) to enhance understanding and generation capabilities. It formulates the Markov Decision Process (MDP) for generation, refined through Reinforcement Learning from Human Feedback (RLHF) to ensure alignment with user intents. In Agent-D, it leverages lightweight neural networks to select diverse suggestions from the candidate pool provided by Agent-I, whose MDP is designed so that the two agents cooperatively optimize the both intentionality and diversity of the output suggestions in an end-to-end manner. We explain some insights behind the RL4Sugg design as follows. (1) RL4Sugg is built based on the combination of LLM automation and human annotation for preparing the training data. It simplifies the data collection process, and reduces the reliance on human annotators for RLHF. (2) The multi-task learning in the RewardNet and RLHF in the PolicyNet enable the Agent-I to learn from various tasks and user feedback, leading to improved performance in generating user intentional suggestions. (3) The Agent-D is trained to WWW\u201924, May 13\u201317, 2024, Singapore Zheng Wang1, Bingzheng Gan1, Wei Shi1 Table 1: An example of data collection. Step 1: GPT-4 generates candidate suggestions for an image. Step 2: The model assigns a label (1 or 0) to each suggestion, indicating user click intent, along with a confidence (0 to 1). 
Step 3: Suggestions with low confidence are filtered out by a confidence threshold (e.g., 0.5) and then undergo human annotation to produce the final labels. Step 1 Step 2 Step 3 Query Image Suggestions (generated by GPT) GPT Labels Conf Thres (0.5) Human Labels Final Labels How to fix a broken bicycle chain 1 0.7 \u221a 1 Bicycle chain cleaning 1 0.3 \u00d7 0 0 Bicycle brand rankings 0 0.6 \u221a 0 Nearby bicycle repair stalls 1 0.8 \u221a 1 Mountain bike prices 1 0.4 \u00d7 0 0 minimize the similarity between output suggestions, which ensures that the output suggestions are informative and provide various search aspects for users. Further, Agent-D and Agent-I are trained cooperatively to ensure that the output maintains both intentionality and diversity. This is achieved by optimizing both intentionality and diversity explicitly with multi-agent reinforcement learning. 4.2 Data Collection This process involves collecting image-suggestion pairs and annotating user intents regarding their likelihood to click on the suggestions or not. However, relying solely on human crowd-sourcing for data collection can be time-consuming and lack clear guidelines. To address this, inspired by language generation capabilities from recent GPT models [13, 30, 32], we propose a novel approach using GPT-4 to automate image-suggestion pair collection and user intent annotation based on potential clicks. This approach provides a balance between automation (by GPT-4) and manual effort (by human annotators) through a threshold-based mechanism. To better illustrate the labeling process, we present a running example in Table 1, which involves three key steps, and the detailed descriptions are included in Appendix Section A.1. We note that the proposed labeling approach offers several novel aspects in the field of text annotation tasks [13, 30, 32]. First, by utilizing GPT-4\u2019s language generation capabilities, we can generate a wide range of candidate suggestions based on image content, providing a comprehensive set of options for users. Second, the labeling and confidence estimation step enhance the reliability of the generated suggestions by quantifying the model\u2019s confidence. Third, the threshold-based mechanism introduces a customizable parameter, which facilitates the workload adjustment between automation and human effort according to specific requirements. 4.3 Agent-I: Generating Intentional Candidate Suggestions 4.3.1 RewardNet. In this section, we introduce the training process of the RewardNet, utilizing the annotated image-suggestion pair data. The RewardNet provides rewards (e.g., a value ranging between 0 and 1) for each image-suggestion pair, indicating the likelihood of user interest in clicking the suggestion for a given query image. Below, we present the model architecture and training details for the RewardNet. Model Architecture. As shown in Figure 2, our RewardNet employs a Q-Former structure [25], which incorporates an ImageTower and a Text-Tower, both utilizing transformer-based modules with shared self-attention layers to capture visual and textual features. In the Image-Tower, it incorporates a pre-trained frozen image encoder to extract visual features. To achieve this, we introduce learnable query embeddings as inputs, enabling interactions between queries via self-attention layers and with frozen image features through cross-attention layers. In the Text-Tower, textual suggestions interact with learnable query embeddings through shared self-attention layers. Training Paradigm. 
We adopt multi-task learning for the RewardNet, optimizing three pre-training tasks: Image-Suggestion Alignment (ISA), Image-Suggestion Generation (ISG), and ImageSuggestion Matching (ISM). The rationale behind the approach is to enhance the RewardNet\u2019s training process, facilitating the generation of informative rewards guided by these typical tasks. In ISA, the goal is to align image and suggestion representations to bring similar pairs closer and push dissimilar ones apart. This is achieved through a contrastive approach. We sample a batch of image-suggestion pairs, each with a label of 1. (2) For each pair < \ud835\udc3c\ud835\udc56,\ud835\udc46\ud835\udc56>, we represent them as vectors v\ud835\udc3c \ud835\udc56and v\ud835\udc46 \ud835\udc56via two towers. We treat v\ud835\udc46 \ud835\udc56as the positive of v\ud835\udc3c \ud835\udc56(the anchor), because \ud835\udc3c\ud835\udc56and \ud835\udc46\ud835\udc56have a label of 1, and other suggestions in the batch are considered as the negatives. Then, let L\ud835\udc3c,\ud835\udc46denote a contrast, which encourages the suggestions to align with the anchor image by comparing their positive and negative pairs, that is, L\ud835\udc3c,\ud835\udc46= \u2211\ufe01 <\ud835\udc3c\ud835\udc56,\ud835\udc46\ud835\udc56>\u2208V \u2212log exp \u0010 max v\ud835\udc3c \ud835\udc56\u2208V\ud835\udc3c \ud835\udc56 v\ud835\udc3c \ud835\udc56\u00b7 v\ud835\udc46 \ud835\udc56/\ud835\udf0f \u0011 \u00cd <\ud835\udc3c\ud835\udc57,\ud835\udc46\ud835\udc57>\u2208V,\ud835\udc57\u2260\ud835\udc56 exp \u0010 max v\ud835\udc3c \ud835\udc56\u2208V\ud835\udc3c \ud835\udc56 v\ud835\udc3c \ud835\udc56\u00b7 v\ud835\udc46 \ud835\udc57/\ud835\udf0f \u0011 , (1) where \ud835\udf0frepresents a temperature parameter. To determine the image-text similarity, we compute the pairwise similarity between each query embedding v\ud835\udc3c \ud835\udc56\u2208V\ud835\udc3c \ud835\udc56and v\ud835\udc46 \ud835\udc56, and select the highest similarity value. Symmetrically, we can define L\ud835\udc46,\ud835\udc3cby anchoring at v\ud835\udc46 \ud835\udc56, then the loss LISA is defined as LISA = (L\ud835\udc3c,\ud835\udc46+ L\ud835\udc46,\ud835\udc3c)/2. (2) In ISG, the goal is to generate suggestions based on the underlying image content, thereby enhancing the RewardNet\u2019s ability to accurately assign scores to image-suggestion pairs. This is achieved by ensuring that the generated suggestions are semantically consistent with the visual context of the grounded image. Specifically, given an image-suggestion < \ud835\udc3c,\ud835\udc46> pair, where the suggestion \ud835\udc46 corresponds to a sequence of word tokens \ud835\udc46=< w1, ..., w\ud835\udc5a>, we employ a language generation loss to maximize the conditional probability \ud835\udc43as Multimodal Query Suggestion with Multi-Agent Reinforcement Learning from Human Feedback WWW\u201924, May 13\u201317, 2024, Singapore Action Suggestion FC Self Attention Cross Attention Feed Forward LLM Decoder Image Encoder Query emb Query emb Image Encoder RewardNet PolicyNet Image-Tower RLHF Policy Gradient Reward FC Agent-D Agent-I Self Attention Feed Forward Image-Tower Text-Tower Image Suggestion Generation Image-Tower Text-Tower Image Suggestion Matching Image Suggestion Alignment Q-Former State Action Reward Q-Former State Input Image Bicycle brand rankings Nearby bicycle repair Bicycle chain cleaning How to fix a broken bicycle chain Text-Tower Ouput Suggestions Input Image Figure 2: Training overview of Agent-I and Agent-D. 
Agent-I trains the RewardNet on three tasks (ISA, ISG, ISM) using learnable query embeddings, while the PolicyNet is trained with RLHF to generate candidate suggestions \ud835\udc46\u2032 1,\ud835\udc46\u2032 2, ...,\ud835\udc46\u2032 \ud835\udc41for intentionality. Agent-D learns to select diverse suggestions from the candidates via policy gradient and outputs the final \ud835\udc3esuggestions. LISG = \u2211\ufe01 \ud835\udc56 \u2212log \ud835\udc43(w\ud835\udc56|w1:\ud835\udc56\u22121, \ud835\udc3c). (3) In ISM, the goal is to establish a precise alignment between image and suggestion representations through fine-grained learning. This involves a binary classification task in which the model is to predict whether an image-suggestion pair is positive (matched) or negative (unmatched). To achieve this, we use a hard negative mining strategy, where hard negative samples are image-related suggestions labeled as 0. The rationale is that while some suggestions are related to the query image, they fail to capture the user\u2019s search intent. By optimizing with these hard samples, the RewardNet is encouraged to assign high scores to the pairs exhibiting a strong intention. Then, the objective is trained using a binary cross-entropy loss, formulated as LISM = \u2212\ud835\udc66\u2217log(\ud835\udc43) + (\ud835\udc66\u22121) \u2217log(1 \u2212\ud835\udc43), (4) where \ud835\udc66denotes the true label (either 0 or 1), and \ud835\udc43is the predicted probability of the positive class. Finally, the RewardNet is trained using a multi-task learning approach, where the loss function LRN is defined as LRN = LISA + LISG + LISM. (5) Note that the reward is then obtained as the predicted probability \ud835\udc43 in the ISM task, where it scores a normalized value ranging from 0 to 1, which avoids potential data scale issues that may arise during the training process, that is \ud835\udc5f\ud835\udf03(\ud835\udc3c,\ud835\udc46) = \ud835\udc43, (6) where \ud835\udc5f\ud835\udf03(\ud835\udc3c,\ud835\udc46) denotes the reward for a given query image \ud835\udc3cand its associated suggestion \ud835\udc46, and \ud835\udf03denotes the RewardNet parameters. 4.3.2 PolicyNet. The objective of MMQS is to generate query suggestions that align with users\u2019 intended search queries, specifically those that are more likely to be clicked. This motivates us to explore the application of Reinforcement Learning from Human Feedback (RLHF) technique in training the PolicyNet. Below, we present the model architecture, and MDPs in the PolicyNet. Model Architecture. In PolicyNet, we adopt a similar two-tower structure as presented in the RewardNet, to capture both visual and textual features. Additionally, we aim to leverage the language generation capability of a LLM by establishing a connection between the Image-Tower and the LLM. As shown in Figure 2, the connection is implemented using a fully-connected (FC) layer. The FC layer projects the output query embeddings to align with the dimensionality of the LLM\u2019s text embedding, and then these projected query embeddings are concatenated at the beginning of the input text embeddings of the LLM. This integration serves the visual information as soft prompts, conditioning the LLM on the visual representations to generate language. Notably, the LLM is kept frozen during training to facilitate the process. MDP for Generating Suggestions. To enhance the intentionality of the generated suggestions, we model the process with RLHF, involving states, actions, and rewards. 
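Before walking through these three components, a compact PyTorch-style sketch of the RewardNet objective above (Equations 1-6) may help make the reward concrete. The module and function names here are illustrative placeholders rather than the authors' implementation: the contrastive term is simplified to a single embedding per image (instead of the max over learnable query embeddings) over a batch of positively labelled pairs, the frozen image encoder and Q-Former towers are stood in for by linear layers, and the generation loss L_ISG is assumed to be computed by a separate decoding head and passed in as a scalar.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardNetSketch(nn.Module):
    # Toy stand-in for the Q-Former-based RewardNet: two towers embed the image
    # and the suggestion, and an ISM head scores whether the pair is matched.
    def __init__(self, feat_dim=512, dim=256):
        super().__init__()
        self.image_tower = nn.Linear(feat_dim, dim)
        self.text_tower = nn.Linear(feat_dim, dim)
        self.ism_head = nn.Linear(2 * dim, 1)  # binary matched / unmatched classifier

    def ism_prob(self, img_feat, txt_feat):
        v_i, v_s = self.image_tower(img_feat), self.text_tower(txt_feat)
        return torch.sigmoid(self.ism_head(torch.cat([v_i, v_s], dim=-1))).squeeze(-1)

    def reward(self, img_feat, txt_feat):
        # Equation 6: r_theta(I, S) is read off as the ISM matching probability P.
        return self.ism_prob(img_feat, txt_feat)

def isa_loss(v_i, v_s, tau=0.07):
    # Simplified Equations 1-2: symmetric in-batch contrastive loss, one embedding
    # per image instead of the max over learnable query embeddings.
    v_i, v_s = F.normalize(v_i, dim=-1), F.normalize(v_s, dim=-1)
    logits = v_i @ v_s.t() / tau
    targets = torch.arange(v_i.size(0), device=v_i.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

def reward_net_loss(model, img_feat, txt_feat, match_label, l_isg):
    # match_label: float tensor of 0./1. labels (hard negatives included).
    # Equation 4: binary cross-entropy on the ISM probability.
    l_ism = F.binary_cross_entropy(model.ism_prob(img_feat, txt_feat), match_label)
    l_isa = isa_loss(model.image_tower(img_feat), model.text_tower(txt_feat))
    # Equation 5: L_RN = L_ISA + L_ISG + L_ISM.
    return l_isa + l_isg + l_ism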
States: The state s\ud835\udc3cis defined by the learned query embeddings of an input query image, which undergoes a process to extract the representation. Specifically, the image is first encoded using a frozen Vision Transformer (ViT) [35], which produces a fixedlength representation of the image that captures its visual features. Then, some learnable query embeddings are generated as the design in RewardNet, these embeddings represent the different aspects of the query image that the model should attend to, and the query embeddings are then passed through cross-attention layers, which allow them to interact with the frozen visual features. By leveraging this approach, we can effectively incorporate the contextual relationships between the queries and the image features, and forming a comprehensive representation of the state. Actions: The action \ud835\udc4e\ud835\udc3cis defined by the generated suggestions via a LLM, which conditions on the state representation to generate language. Here, We employ a decoder-only language model (e.g., OPT [54]) for its simplicity and efficiency, as it does not require encoding input information, and only to generate suggestions that WWW\u201924, May 13\u201317, 2024, Singapore Zheng Wang1, Bingzheng Gan1, Wei Shi1 are relevant to the image. This enables our training more efficiently and reduces GPU requirements. Rewards: The reward \ud835\udc5f\ud835\udc3cis obtained from the RewardNet according to Equation 6. The purpose of training the reward model is to accurately predict the quality of a generated suggestion, as assessed by human judgment. It is important to note that Agent-I\u2019s action involves exploring candidate suggestions, and the reward cannot be immediately observed because the final suggestions have not yet been generated. When the action is to provide the candidates for Agent-D to choose final suggestions within this candidate pool, some reward signal can be acquired (e.g., measuring the intentionality of suggestions). Subsequently, the PolicyNet would be updated accordingly through RLHF (more training details are presented in Section 4.4). This approach facilitates the cooperation between Agent-I and Agent-D, guiding them towards the joint goal of producing intentional and diverse suggestions in the final output. 4.4 Agent-D: Choosing Diverse Suggestions from the Candidates MDP for Choosing Suggestions. We further introduce an AgentD to enhance the overall diversity of suggestions and provide users with a more comprehensive selection. We discuss the rationale behind the introduction of this agent. One straightforward method to increase diversity is to employ post-processing techniques like clustering. This technique groups similar candidate suggestions into clusters and selects the cluster centers as output to reduce redundancy. However, such post-processing faces two challenges: (1) the model cannot directly generate both intentional and diverse suggestions, which makes further optimization difficult; (2) the clustered suggestions prioritize diversity but may sacrifice intentionality in the output. To tackle the challenges, we consider the diversity as one of the training objectives managed by Agent-D, where it calculates semantic similarity between suggestions, and cooperative training with Agent-I during the policy training process. This end-to-end optimization empowers the language model to generate suggestions that exhibit both intentionality and diversity. 
To accomplish this task, we use a sliding window algorithm with a window size denoted as \ud835\udc3e. The candidate suggestions provided by Agent-I are represented as < \ud835\udc46\u2032 1,\ud835\udc46\u2032 2, ...,\ud835\udc46\u2032 \ud835\udc41>, and AgentD\u2019s objective is to select the \ud835\udc3ediverse suggestions from this set (where \ud835\udc3e< \ud835\udc41). Here is how the sliding window algorithm operates: (1) The algorithm begins by scanning the first \ud835\udc3esuggestions and deciding which one within the window should be omitted. (2) It then inserts the next suggestion into the window and repeats the decision-making process. (3) This scanning and decision-making continue until all suggestions have been processed. (4) Finally, the algorithm maintains and outputs the best \ud835\udc3esuggestions during the scanning, which correspond to the highest diversity. Diversity is measured by computing pairwise semantic similarities among the \ud835\udc3esuggestions < \ud835\udc461,\ud835\udc462, ...,\ud835\udc46\ud835\udc3e>, typically involving a subtraction operation (where a larger diversity implies smaller similarity), i.e., \ud835\udc37\ud835\udc3c\ud835\udc49= 1 2 \u2212 \u00cd 1\u2264\ud835\udc56<\ud835\udc57\u2264\ud835\udc3e\ud835\udf0e(\ud835\udc46\ud835\udc56,\ud835\udc46\ud835\udc57) \ud835\udc3e\u2217(\ud835\udc3e\u22121) , (7) where \ud835\udf0e(\u00b7, \u00b7) represents a similarity measurement between two suggestions, typically calculated using methods like cosine similarity with S-BERT [36]. This similarity score is then normalized to a value between 0 and 1 for clarity. Below, we introduce the MDP of Agent-D, which decides the process of selecting which suggestions to drop from the window. This decision-making process is guided by lightweight fully-connected (FC) neural networks trained through the policy gradient method [40, 43]. States: In the context where we have \ud835\udc41candidate suggestions denoted as < \ud835\udc46\u2032 1,\ud835\udc46\u2032 2, ...,\ud835\udc46\u2032 \ud835\udc41>, we utilize S-BERT embeddings [36] to capture their semantic features, which are represented as b\ud835\udc46 \ud835\udc56for each suggestion (1 \u2264\ud835\udc56\u2264\ud835\udc41). The state s\ud835\udc37is defined by concatenating these \ud835\udc41embeddings, i.e., s\ud835\udc37= {b\ud835\udc46 1, b\ud835\udc46 2, ..., b\ud835\udc46 \ud835\udc41}. Actions: We denote an action of Agent-D as \ud835\udc4e\ud835\udc37, and the design of these actions is based on the previous discussion, which involves dropping one of the \ud835\udc3esuggestions in the sliding window and inserting the next suggestion into the window. Formally, the actions are defined as \ud835\udc4e\ud835\udc37= \ud835\udc58where 1 \u2264\ud835\udc58\u2264\ud835\udc3e. In this notation, when action \ud835\udc4e\ud835\udc37= \ud835\udc58, it means that the \ud835\udc58-th suggestion should be dropped, and the \ud835\udc3e+ 1-th suggestion should be inserted into the window. Consider the consequence of dropping the \ud835\udc58-th suggestion, this action transitions the environment to the next state as s\u2032\ud835\udc37= {b\ud835\udc46 1, ..., b\ud835\udc46 \ud835\udc58\u22121, b\ud835\udc46 \ud835\udc58+1, ..., b\ud835\udc46 \ud835\udc3e, b\ud835\udc46 \ud835\udc3e+1, ..., b\ud835\udc46 \ud835\udc41, O}, where O represents a zero vector, which is used to pad the state s\u2032\ud835\udc37into a fixed-length vector. This fixed-length vector is then fed into the fully-connected (FC) policy network. 
Rewards: We denote the reward as \ud835\udc5f\ud835\udc37. The reward associated with the transition from state s\ud835\udc37to state s\u2032\ud835\udc37after taking action \ud835\udc4e\ud835\udc37 is defined as: \ud835\udc5f\ud835\udc37= s\u2032\ud835\udc37.\ud835\udc37\ud835\udc3c\ud835\udc49\ud835\udc4f\ud835\udc52\ud835\udc60\ud835\udc61\u2212s\ud835\udc37.\ud835\udc37\ud835\udc3c\ud835\udc49\ud835\udc4f\ud835\udc52\ud835\udc60\ud835\udc61, where s\ud835\udc37.\ud835\udc37\ud835\udc3c\ud835\udc49\ud835\udc4f\ud835\udc52\ud835\udc60\ud835\udc61 represents the maintained best diversity value found at state s\ud835\udc37 during the scanning according to Equation 7. With this reward definition, the objective of the MDP, which is to maximize the cumulative rewards, aligns with the goal of discovering the greatest diversity among the suggestions. To illustrate this alignment, consider the process as it moves through a sequence of states: s\ud835\udc37 1 , s\ud835\udc37 2 , ..., s\ud835\udc37 \ud835\udc41, ending at s\ud835\udc37 \ud835\udc41. We can denote the rewards received at these states, except for the termination state s\ud835\udc37 \ud835\udc41, as \ud835\udc5f\ud835\udc37 1 ,\ud835\udc5f\ud835\udc37 2 , ...,\ud835\udc5f\ud835\udc37 \ud835\udc41\u22121. When future rewards are not discounted, we have: \ud835\udc41 \u2211\ufe01 \ud835\udc61=2 \ud835\udc5f\ud835\udc37 \ud835\udc61\u22121 = \ud835\udc41 \u2211\ufe01 \ud835\udc61=2 (s\ud835\udc37 \ud835\udc61.\ud835\udc37\ud835\udc3c\ud835\udc49\ud835\udc4f\ud835\udc52\ud835\udc60\ud835\udc61\u2212s\ud835\udc37 \ud835\udc61\u22121.\ud835\udc37\ud835\udc3c\ud835\udc49\ud835\udc4f\ud835\udc52\ud835\udc60\ud835\udc61) = s\ud835\udc37 \ud835\udc41.\ud835\udc37\ud835\udc3c\ud835\udc49\ud835\udc4f\ud835\udc52\ud835\udc60\ud835\udc61\u2212s\ud835\udc37 1 .\ud835\udc37\ud835\udc3c\ud835\udc49\ud835\udc4f\ud835\udc52\ud835\udc60\ud835\udc61, (8) where s\ud835\udc37 \ud835\udc41.\ud835\udc37\ud835\udc3c\ud835\udc49\ud835\udc4f\ud835\udc52\ud835\udc60\ud835\udc61corresponds to the highest diversity value found during the scanning process, and s\ud835\udc37 1 .\ud835\udc37\ud835\udc3c\ud835\udc49\ud835\udc4f\ud835\udc52\ud835\udc60\ud835\udc61represents the initial diversity value, which remains constant. Consequently, maximizing the cumulative rewards is equivalent to maximizing the diversity that can be discovered. Learning Policies of Agent-I and Agent-D. We discuss the learning process of the two agents. For Agent-I, to train the PolicyNet, which involves two stages: (1) warm-start stage and (2) training stage. In (1), we study the Supervised Fine-Tuning (SFT), which equips the LLM with the basic abilities to generate suggestions, where the two towers of the PolicyNet are trained using a multi-task learning approach (ISA, ISG, and ISM) according to Equation 4, which allows them to learn from different related tasks simultaneously. In (2), we utilize the PPO algorithm [37] to fine-tune the SFT model for achieving Multimodal Query Suggestion with Multi-Agent Reinforcement Learning from Human Feedback WWW\u201924, May 13\u201317, 2024, Singapore the intentionality, where the environment is modeled as a bandit setting, i.e., when a random query image is presented, the model generates a suggestion in response to the image, and ends the episode. 
The loss contains the following components: i) Following the output suggestions (denoted by < \ud835\udc461,\ud835\udc462, ...,\ud835\udc46\ud835\udc3e>) by Agent-D, the environment calculates a reward\ud835\udc5f\ud835\udc3cvia the RewardNet according to Equation 6. ii) In addition, we fine-tune the connection (i.e., the FC layer) between the LLM and the two-tower using a language generation loss on the output suggestions according to Equation 3. By conditioning the LLM on the output from the two-tower to generate language, it can capture the visual cues presented in the input image. iii) To prevent over-optimization of the RewardNet, we further incorporate a penalty for the KL divergence [19] between the learned RL policy, denoted as \ud835\udf0bRL \ud835\udf19 with parameters \ud835\udf19, and its original SFT policy, denoted as \ud835\udf0bSFT. Formally, the loss of Agent-I is presented as LI = \u2212\ud835\udc5f\ud835\udc3c+ \ud835\udefdlog(\ud835\udf0bRL \ud835\udf19(\ud835\udc4e\ud835\udc3c|s\ud835\udc3c)/\ud835\udf0bSFT(\ud835\udc4e\ud835\udc3c|s\ud835\udc3c)) \u2212\ud835\udefe \u2211\ufe01 \ud835\udc56 log \ud835\udc43(w\ud835\udc56|w1:\ud835\udc56\u22121, \ud835\udc3c), (9) where \ud835\udefdand \ud835\udefeare two coefficients to control the strength of the KL penalty and language loss. For each output suggestion \ud835\udc46\ud835\udc56(1 \u2264\ud835\udc56\u2264 \ud835\udc3e), it corresponds to a sequence of word tokens \ud835\udc46\ud835\udc56=< w1, ..., w\ud835\udc5a> for the language generation. For Agent-D, the core problem of its MDP is to acquire a policy that guides the agent in selecting actions denoted as \ud835\udc4e\ud835\udc37. These actions are determined based on the constructed states s\ud835\udc37with the objective of maximizing the cumulative reward, denoted as \ud835\udc45\ud835\udc41. We employ a policy gradient method for learning this policy, called the REINFORCE algorithm [40, 43]. To elaborate, we introduce a stochastic policy denoted as \ud835\udf0b\ud835\udf03(\ud835\udc4e\ud835\udc37|s\ud835\udc37). This policy is responsible for probabilistically sampling an action\ud835\udc4e\ud835\udc37for a given state s\ud835\udc37using a neural network, where the network\u2019s parameters are represented as \ud835\udf03. The loss function for Agent-D is then formulated as follows: LD = \u2212\ud835\udc45\ud835\udc41ln \ud835\udf0b\ud835\udf03(\ud835\udc4e\ud835\udc37|s\ud835\udc37). (10) 4.5 Discussion on Applications and Cold-start Supporting Generation-based and Retrieval-based Applications. We explore RL4Sugg capabilities in two search engine scenarios: (1) generation-based and (2) retrieval-based. In (1), RL4Sugg can naturally generate query suggestions using its language generation capability from LLMs in response to users\u2019 image queries across diverse domains. In (2), RL4Sugg specializes in providing query suggestions for specific domains with narrower focuses, like E-commerce shopping websites, where the query suggestions are limited to their commodities, and can be prepared in advance. It leverages its ability to represent images and language in PolicyNet\u2019s two-tower. Query suggestions are stored as vector representations in a database, and vector-based retrieval, such as HNSW, enhances search efficiency. During inference, RL4Sugg extracts the user\u2019s image representation and retrieves Top-\ud835\udc3equery suggestions with high similarity. 
Notably, this approach offers several advantages, including efficient query response, and by precomputing and storing the query suggestions in a database, the quality of these suggestions can be guaranteed in advance. Handling the Cold-start Problem. Since RL4Sugg relies on annotator feedback to understand search intentionality, RL4Sugg faces a potential cold-start issue for recommending suggestions when the learned knowledge is insufficient for online user queries. To tackle this issue, we employ online learning to continuously fine-tunes both agents by Equation 9 and 10, using newly recorded query images and user-clicked suggestions (i.e., labeling as 1), ensuring the model\u2019s policy remains up-to-date for online use. In Section 5.2, we validate this approach, and the results show significant improvements by 8.3% in user experience, which indicates the positive impact of this strategy in practice. 5 EXPERIMENTS 5.1 Experimental Setup Datasets and Ground Truth. We conduct experiments on two real-world datasets: Business and ImageNet [11]. The Business dataset contains around 50,000 user query images collected from a real image search engine between January 2022 and January 2023. We randomly sample 80% of these images for training, and the remaining for testing. For each image, we collect 5 suggestions following the data collection process described in Section 4.2, where 46.9% suggestions are labeled by the GPT model, and the remaining suggestions are labeled by 20 human labelers. Among them, 75.8% image-suggestion pairs are with the label 1. Similarly, we collect 1,000 image-suggestion pairs with labels from the ImageNet, which are used to test the transferability of the model fine-tuned on the Business and to perform zero-shot evaluations on the ImageNet. By following [33], we then discuss the ground truth for evaluation, considering both the retrieval and generation tasks. For the retrieval task, we establish the ground truth of each query image by considering its suggestions with a label of 1. For quality control, we randomly pick 10% labeled image-suggestion pairs, ask 5 other checkers to label these suggestions independently. We employ majority voting to aggregate the labels, and compute the accuracy (denoted by \ud835\udeff) of the labels by the labelers against the aggregated ones by the checkers. The \ud835\udeffis 76.7% for the Business and 78.3% for the ImageNet, which demonstrate the high accuracy of our evaluations. For the generation task, we let the human labelers to assess the suggestions generated by various baseline methods and RL4Sugg. These labeled suggestions are then reviewed by 5 other checkers. Similarly, we report the \ud835\udeffvalues as a measure of quality verification. Baseline. We carefully examine existing vision-language models, and identify the following baseline methods: Flamingo, BLIP-2, LLaVA for the generation task, and CLIP, BLIP-2 for the retrieval task. These methods have comparable parameter sizes of LLM backbones as our OPT2.7B for addressing the MMQS problem. Notably, these models are open-sourced, and we fine-tune them using our collected image-suggestion pairs for fair comparisons. The details are introduced in Appendix Section A.2 due to the page limit. Implementation Details. The implementation details of RL4Sugg and training process can be found in Appendix Section A.3 due to the page limit. Evaluation Metrics. We evaluate the RL4Sugg in terms of the generation task and the retrieval task. 
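As an illustration of this retrieval path, the sketch below (plain NumPy, with brute-force cosine search standing in for an approximate index such as HNSW) precomputes normalised suggestion vectors offline and answers a query image online; the embeddings are assumed to come from PolicyNet's image and text towers, and the function names are illustrative rather than part of the deployed system.

import numpy as np

def build_suggestion_index(suggestion_embeddings):
    # Offline: L2-normalise and store the suggestion vectors (a stand-in for an
    # ANN index such as HNSW; brute-force cosine search is used here for clarity).
    embs = np.asarray(suggestion_embeddings, dtype=np.float32)
    return embs / np.linalg.norm(embs, axis=1, keepdims=True)

def retrieve_top_k(index, image_embedding, k=3):
    # Online: given the query image's embedding (computed upstream by the image
    # tower), return the indices of the k most similar stored suggestions.
    q = np.asarray(image_embedding, dtype=np.float32)
    q = q / np.linalg.norm(q)
    scores = index @ q
    return np.argsort(-scores)[:k]

Because the suggestion vectors are fixed ahead of time, swapping the brute-force scan for an approximate index changes only the index internals, not the inference flow.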
For the generation task, We WWW\u201924, May 13\u201317, 2024, Singapore Zheng Wang1, Bingzheng Gan1, Wei Shi1 Table 2: Effectiveness of generation-based applications, finetuned on Business and zero-shot transferred to ImageNet, where \ud835\udeffindicates the accuracy of labeling the generated suggestions as introduced in Section 5.1. Models #Train/#Total Params Business Fine-tuned ImageNet 0-Shot DCG DIV \ud835\udeff DCG DIV \ud835\udeff Flamingo 1.4B/3.4B 0.73 0.25 81.7% 0.67 0.23 80.6% BLIP-2 104M/3.1B 0.59 0.17 68.3% 0.47 0.18 69.2% LLaVA 14M/13B 0.60 0.25 73.3% 0.47 0.24 76.5% RL4Sugg 208M/3.1B 0.89 0.25 83.3% 0.87 0.24 86.9% Table 3: Effectiveness of retrieval-based applications, finetuned on Business and zero-shot transferred to ImageNet. Models #Train/#Total Params Business Fine-tuned ImageNet 0-shot PNR R@1 R@3 PNR R@1 R@3 CLIP 300M/300M 1.30 0.23 0.33 0.90 0.21 0.32 BLIP-2 104M/3.1B 1.05 0.27 0.60 0.73 0.26 0.58 RL4Sugg 208M/3.1B 2.80 0.63 0.83 2.17 0.58 0.74 Table 4: Ablation study (Business). Components DCG DIV RL4Sugg 0.89 0.25 w/o RLHF (SFT) 0.78 0.24 w/o Agent-D (Agent-I only) 0.89 0.19 w/o Agent-D (greedy) 0.82 0.23 report Discounted Cumulative Gain (DCG) and Good vs. Same vs. Bad (GSB) by following [10, 31]. For the retrieval task, we report positive-negative ratio (PNR) and Recall@K by following [20, 31]. In addition, We report the DIV according to Equation 7 for measuring the diversity within a set of output query suggestions. Overall, superior results are indicated by higher values of DCG, GSB, PNR, Recall@\ud835\udc3e, and DIV. The detailed description is included in Appendix Section A.4 due to the page limit. 5.2 Experimental Results (1) Effectiveness evaluation (comparison with baseline methods). We conduct experiments to evaluate the effectiveness of both the generation and retrieval tasks. The model is fine-tuned on Business (with reported trainable parameters) and directly tested on ImageNet for transferability. For the generation task, we query 300 images on both Business and ImageNet datasets, where RL4Sugg outperforms all baseline models in terms of DCG, demonstrating strong transferability. The best baseline model, Flamingo, achieves a DCG of 0.73 (18% lower than RL4Sugg). All models exhibit similar diversity, except BLIP-2, which occasionally generates synonymous query suggestions, and LLaVA, which tends to produce longer suggestions. Since query suggestions are based on query images containing necessary entities and common grammar structures, overall diversity values for all models are not very high. For the retrieval task, as shown in Table 3, RL4Sugg shows better PNR and Recall compared with the other two baseline models on both datasets. (2) Ablation study. To evaluate the effectiveness of the two agents in RL4Sugg, we conduct an ablation study. We replace the RLHF in Agent-I and use the SFT model only; we remove the Agent-D, or replace it with a pre-defined rule of dropping the most similar suggestion within the window. We present the results in Table 4, which Table 5: Online A/B Test (Business). Metric Cold-start A (old RL4Sugg) B (new RL4Sugg) Impr # Click behaviors 0.46% (vs. old RL4Sugg) DCG 0.83 0.89 6.7% GSB 8.3% (vs. old RL4Sugg) PNR 2.61 2.80 6.8% R@1 0.57 0.63 9.5% R@3 0.75 0.83 9.6% demonstrate that both agents contribute to improving the performance. Specifically, removing RLHF training from Agent-I results in a dramatic drop in DCG from 0.89 to 0.78, highlighting RLHF\u2019s ability to capture human intentionality. 
Removing Agent-D leads to a substantial decrease in diversity from 0.25 to 0.19. Alternatively, using a rule to greedily drop suggestions also reduces diversity and DCG from 0.89 to 0.82, as it lacks consideration for intentionality. The inclusion of Agent-D, which interacts with Agent-I during training, enhances the generation of diversified query suggestions while preserving intentionality. (3) Parameter study (varying confidence threshold in data collection). We investigate the effect of varying confidence threshold in data collection on the generation task and the retrieval task. The results and detailed analysis are included in Appendix Section A.6 due to the page limit. Overall, we observe that a moderate threshold can produce good results and save human efforts. (4) Online A/B Test. We conduct an online A/B test to compare the new system (after online learning for the cold-start problem) with the old system for one month. The results as shown in Table 5 demonstrate that the fine-tuned model via online learning can largely improve the overall user experience, e.g., it increases the number of click behaviors by 0.46%. In addition, we collect online cases, and compare the two systems with the real user-generated queries via manual evaluation. We observe that the new system can largely outperform the base system. (5) Qualitative results. We qualitatively evaluate the generated query suggestions. The detailed visualization results and analysis are in Appendix Section A.7 due to the page limit. Overall, we observe that the suggestions align well with user search intents. 6 CONCLUSION In this paper, we introduce a novel Multimodal Query Suggestion (MMQS) framework that addresses the limitations of existing query suggestion systems by incorporating user query images. Through the MMQS approach, we significantly enhance the intentionality and diversity of query suggestions, resulting in a more user-centric and effective search experience. Extensive experiments conducted on two real-world datasets demonstrate a remarkable 18% improvement over the best existing approach. Moreover, our successful deployment of MMQS into real-world products showcases its practicality and potential for providing valuable insights in search engines. As a future direction, we plan to extend MMQS to accommodate other modalities, such as audio or video, to enhance its applicability in diverse real-world scenarios. Multimodal Query Suggestion with Multi-Agent Reinforcement Learning from Human Feedback WWW\u201924, May 13\u201317, 2024, Singapore" + }, + { + "url": "http://arxiv.org/abs/1405.2848v1", + "title": "Query Rewriting and Optimization for Ontological Databases", + "abstract": "Ontological queries are evaluated against a knowledge base consisting of an\nextensional database and an ontology (i.e., a set of logical assertions and\nconstraints which derive new intensional knowledge from the extensional\ndatabase), rather than directly on the extensional database. The evaluation and\noptimization of such queries is an intriguing new problem for database\nresearch. In this paper, we discuss two important aspects of this problem:\nquery rewriting and query optimization. Query rewriting consists of the\ncompilation of an ontological query into an equivalent first-order query\nagainst the underlying extensional database. We present a novel query rewriting\nalgorithm for rather general types of ontological constraints which is\nwell-suited for practical implementations. 
In particular, we show how a\nconjunctive query against a knowledge base, expressed using linear and sticky\nexistential rules, that is, members of the recently introduced Datalog+/-\nfamily of ontology languages, can be compiled into a union of conjunctive\nqueries (UCQ) against the underlying database. Ontological query optimization,\nin this context, attempts to improve this rewriting process so to produce\npossibly small and cost-effective UCQ rewritings for an input query.", + "authors": "Georg Gottlob, Giorgio Orsi, Andreas Pieris", + "published": "2014-05-12", + "updated": "2014-05-12", + "primary_cat": "cs.DB", + "cats": [ + "cs.DB", + "68P15", + "H.2.4; I.2.3" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2301.12597v3", + "title": "BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models", + "abstract": "The cost of vision-and-language pre-training has become increasingly\nprohibitive due to end-to-end training of large-scale models. This paper\nproposes BLIP-2, a generic and efficient pre-training strategy that bootstraps\nvision-language pre-training from off-the-shelf frozen pre-trained image\nencoders and frozen large language models. BLIP-2 bridges the modality gap with\na lightweight Querying Transformer, which is pre-trained in two stages. The\nfirst stage bootstraps vision-language representation learning from a frozen\nimage encoder. The second stage bootstraps vision-to-language generative\nlearning from a frozen language model. BLIP-2 achieves state-of-the-art\nperformance on various vision-language tasks, despite having significantly\nfewer trainable parameters than existing methods. For example, our model\noutperforms Flamingo80B by 8.7% on zero-shot VQAv2 with 54x fewer trainable\nparameters. We also demonstrate the model's emerging capabilities of zero-shot\nimage-to-text generation that can follow natural language instructions.", + "authors": "Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi", + "published": "2023-01-30", + "updated": "2023-06-15", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2304.08485v2", + "title": "Visual Instruction Tuning", + "abstract": "Instruction tuning large language models (LLMs) using machine-generated\ninstruction-following data has improved zero-shot capabilities on new tasks,\nbut the idea is less explored in the multimodal field. In this paper, we\npresent the first attempt to use language-only GPT-4 to generate multimodal\nlanguage-image instruction-following data. By instruction tuning on such\ngenerated data, we introduce LLaVA: Large Language and Vision Assistant, an\nend-to-end trained large multimodal model that connects a vision encoder and\nLLM for general-purpose visual and language understanding.Our early experiments\nshow that LLaVA demonstrates impressive multimodel chat abilities, sometimes\nexhibiting the behaviors of multimodal GPT-4 on unseen images/instructions, and\nyields a 85.1% relative score compared with GPT-4 on a synthetic multimodal\ninstruction-following dataset. When fine-tuned on Science QA, the synergy of\nLLaVA and GPT-4 achieves a new state-of-the-art accuracy of 92.53%. 
We make\nGPT-4 generated visual instruction tuning data, our model and code base\npublicly available.", + "authors": "Haotian Liu, Chunyuan Li, Qingyang Wu, Yong Jae Lee", + "published": "2023-04-17", + "updated": "2023-12-11", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2203.02155v1", + "title": "Training language models to follow instructions with human feedback", + "abstract": "Making language models bigger does not inherently make them better at\nfollowing a user's intent. For example, large language models can generate\noutputs that are untruthful, toxic, or simply not helpful to the user. In other\nwords, these models are not aligned with their users. In this paper, we show an\navenue for aligning language models with user intent on a wide range of tasks\nby fine-tuning with human feedback. Starting with a set of labeler-written\nprompts and prompts submitted through the OpenAI API, we collect a dataset of\nlabeler demonstrations of the desired model behavior, which we use to fine-tune\nGPT-3 using supervised learning. We then collect a dataset of rankings of model\noutputs, which we use to further fine-tune this supervised model using\nreinforcement learning from human feedback. We call the resulting models\nInstructGPT. In human evaluations on our prompt distribution, outputs from the\n1.3B parameter InstructGPT model are preferred to outputs from the 175B GPT-3,\ndespite having 100x fewer parameters. Moreover, InstructGPT models show\nimprovements in truthfulness and reductions in toxic output generation while\nhaving minimal performance regressions on public NLP datasets. Even though\nInstructGPT still makes simple mistakes, our results show that fine-tuning with\nhuman feedback is a promising direction for aligning language models with human\nintent.", + "authors": "Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan Lowe", + "published": "2022-03-04", + "updated": "2022-03-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1706.03741v4", + "title": "Deep reinforcement learning from human preferences", + "abstract": "For sophisticated reinforcement learning (RL) systems to interact usefully\nwith real-world environments, we need to communicate complex goals to these\nsystems. In this work, we explore goals defined in terms of (non-expert) human\npreferences between pairs of trajectory segments. We show that this approach\ncan effectively solve complex RL tasks without access to the reward function,\nincluding Atari games and simulated robot locomotion, while providing feedback\non less than one percent of our agent's interactions with the environment. This\nreduces the cost of human oversight far enough that it can be practically\napplied to state-of-the-art RL systems. To demonstrate the flexibility of our\napproach, we show that we can successfully train complex novel behaviors with\nabout an hour of human time. These behaviors and environments are considerably\nmore complex than any that have been previously learned from human feedback.", + "authors": "Paul Christiano, Jan Leike, Tom B. 
Brown, Miljan Martic, Shane Legg, Dario Amodei", + "published": "2017-06-12", + "updated": "2023-02-17", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.AI", + "cs.HC", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2111.07991v3", + "title": "LiT: Zero-Shot Transfer with Locked-image text Tuning", + "abstract": "This paper presents contrastive-tuning, a simple method employing contrastive\ntraining to align image and text models while still taking advantage of their\npre-training. In our empirical study we find that locked pre-trained image\nmodels with unlocked text models work best. We call this instance of\ncontrastive-tuning \"Locked-image Tuning\" (LiT), which just teaches a text model\nto read out good representations from a pre-trained image model for new tasks.\nA LiT model gains the capability of zero-shot transfer to new vision tasks,\nsuch as image classification or retrieval. The proposed LiT is widely\napplicable; it works reliably with multiple pre-training methods (supervised\nand unsupervised) and across diverse architectures (ResNet, Vision Transformers\nand MLP-Mixer) using three different image-text datasets. With the\ntransformer-based pre-trained ViT-g/14 model, the LiT model achieves 85.2%\nzero-shot transfer accuracy on the ImageNet test set, and 82.5% on the\nchallenging out-of-distribution ObjectNet test set.", + "authors": "Xiaohua Zhai, Xiao Wang, Basil Mustafa, Andreas Steiner, Daniel Keysers, Alexander Kolesnikov, Lucas Beyer", + "published": "2021-11-15", + "updated": "2022-06-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2102.10407v5", + "title": "VisualGPT: Data-efficient Adaptation of Pretrained Language Models for Image Captioning", + "abstract": "The ability to quickly learn from a small quantity of training data widens the\nrange of machine learning applications. In this paper, we propose a\ndata-efficient image captioning model, VisualGPT, which leverages the\nlinguistic knowledge from a large pretrained language model (LM). A crucial\nchallenge is to balance between the use of visual information in the image and\nprior linguistic knowledge acquired from pretraining. We designed a novel\nself-resurrecting encoder-decoder attention mechanism to quickly adapt the\npretrained LM as the language decoder on a small amount of in-domain training\ndata. The proposed self-resurrecting activation unit produces sparse\nactivations but has reduced susceptibility to zero gradients. We train the\nproposed model, VisualGPT, on 0.1%, 0.5% and 1% of MSCOCO and Conceptual\nCaptions training data. Under these conditions, we outperform the best baseline\nmodel by up to 10.8% CIDEr on MS COCO and up to 5.4% CIDEr on Conceptual\nCaptions. Further, Visual-GPT achieves the state-of-the-art result on IU X-ray,\na medical report generation dataset. To the best of our knowledge, this is the\nfirst work that improves data efficiency of image captioning by utilizing LM\npretrained on unimodal data. 
Our code is available at:\nhttps://github.com/Vision-CAIR/VisualGPT.", + "authors": "Jun Chen, Han Guo, Kai Yi, Boyang Li, Mohamed Elhoseiny", + "published": "2021-02-20", + "updated": "2022-03-30", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.CL", + "cs.MM" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2109.10862v2", + "title": "Recursively Summarizing Books with Human Feedback", + "abstract": "A major challenge for scaling machine learning is training models to perform\ntasks that are very difficult or time-consuming for humans to evaluate. We\npresent progress on this problem on the task of abstractive summarization of\nentire fiction novels. Our method combines learning from human feedback with\nrecursive task decomposition: we use models trained on smaller parts of the\ntask to assist humans in giving feedback on the broader task. We collect a\nlarge volume of demonstrations and comparisons from human labelers, and\nfine-tune GPT-3 using behavioral cloning and reward modeling to do\nsummarization recursively. At inference time, the model first summarizes small\nsections of the book and then recursively summarizes these summaries to produce\na summary of the entire book. Our human labelers are able to supervise and\nevaluate the models quickly, despite not having read the entire books\nthemselves. Our resulting model generates sensible summaries of entire books,\neven matching the quality of human-written summaries in a few cases ($\\sim5\\%$\nof books). We achieve state-of-the-art results on the recent BookSum dataset\nfor book-length summarization. A zero-shot question-answering model using these\nsummaries achieves state-of-the-art results on the challenging NarrativeQA\nbenchmark for answering questions about books and movie scripts. We release\ndatasets of samples from our model.", + "authors": "Jeff Wu, Long Ouyang, Daniel M. Ziegler, Nisan Stiennon, Ryan Lowe, Jan Leike, Paul Christiano", + "published": "2021-09-22", + "updated": "2021-09-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1904.13015v4", + "title": "Towards Coherent and Engaging Spoken Dialog Response Generation Using Automatic Conversation Evaluators", + "abstract": "Encoder-decoder based neural architectures serve as the basis of\nstate-of-the-art approaches in end-to-end open domain dialog systems. Since\nmost of such systems are trained with a maximum likelihood~(MLE) objective they\nsuffer from issues such as lack of generalizability and the generic response\nproblem, i.e., a system response that can be an answer to a large number of\nuser utterances, e.g., \"Maybe, I don't know.\" Having explicit feedback on the\nrelevance and interestingness of a system response at each turn can be a useful\nsignal for mitigating such issues and improving system quality by selecting\nresponses from different approaches. Towards this goal, we present a system\nthat evaluates chatbot responses at each dialog turn for coherence and\nengagement. Our system provides explicit turn-level dialog quality feedback,\nwhich we show to be highly correlated with human evaluation. To show that\nincorporating this feedback in the neural response generation models improves\ndialog quality, we present two different and complementary mechanisms to\nincorporate explicit feedback into a neural response generation model:\nreranking and direct modification of the loss function during training. 
Our\nstudies show that a response generation model that incorporates these combined\nfeedback mechanisms produce more engaging and coherent responses in an\nopen-domain spoken dialog setting, significantly improving the response quality\nusing both automatic and human evaluation.", + "authors": "Sanghyun Yi, Rahul Goel, Chandra Khatri, Alessandra Cervone, Tagyoung Chung, Behnam Hedayatnia, Anu Venkatesh, Raefer Gabriel, Dilek Hakkani-Tur", + "published": "2019-04-30", + "updated": "2019-11-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1805.01252v2", + "title": "Improving a Neural Semantic Parser by Counterfactual Learning from Human Bandit Feedback", + "abstract": "Counterfactual learning from human bandit feedback describes a scenario where\nuser feedback on the quality of outputs of a historic system is logged and used\nto improve a target system. We show how to apply this learning framework to\nneural semantic parsing. From a machine learning perspective, the key challenge\nlies in a proper reweighting of the estimator so as to avoid known degeneracies\nin counterfactual learning, while still being applicable to stochastic gradient\noptimization. To conduct experiments with human users, we devise an easy-to-use\ninterface to collect human feedback on semantic parses. Our work is the first\nto show that semantic parsers can be improved significantly by counterfactual\nlearning from logged human feedback data.", + "authors": "Carolin Lawrence, Stefan Riezler", + "published": "2018-05-03", + "updated": "2018-11-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1907.00456v2", + "title": "Way Off-Policy Batch Deep Reinforcement Learning of Implicit Human Preferences in Dialog", + "abstract": "Most deep reinforcement learning (RL) systems are not able to learn\neffectively from off-policy data, especially if they cannot explore online in\nthe environment. These are critical shortcomings for applying RL to real-world\nproblems where collecting data is expensive, and models must be tested offline\nbefore being deployed to interact with the environment -- e.g. systems that\nlearn from human interaction. Thus, we develop a novel class of off-policy\nbatch RL algorithms, which are able to effectively learn offline, without\nexploring, from a fixed batch of human interaction data. We leverage models\npre-trained on data as a strong prior, and use KL-control to penalize\ndivergence from this prior during RL training. We also use dropout-based\nuncertainty estimates to lower bound the target Q-values as a more efficient\nalternative to Double Q-Learning. The algorithms are tested on the problem of\nopen-domain dialog generation -- a challenging reinforcement learning problem\nwith a 20,000-dimensional action space. Using our Way Off-Policy algorithm, we\ncan extract multiple different reward functions post-hoc from collected human\ninteraction data, and learn effectively from all of these. 
We test the\nreal-world generalization of these systems by deploying them live to converse\nwith humans in an open-domain setting, and demonstrate that our algorithm\nachieves significant improvements over prior methods in off-policy batch RL.", + "authors": "Natasha Jaques, Asma Ghandeharioun, Judy Hanwen Shen, Craig Ferguson, Agata Lapedriza, Noah Jones, Shixiang Gu, Rosalind Picard", + "published": "2019-06-30", + "updated": "2019-07-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2204.14198v2", + "title": "Flamingo: a Visual Language Model for Few-Shot Learning", + "abstract": "Building models that can be rapidly adapted to novel tasks using only a\nhandful of annotated examples is an open challenge for multimodal machine\nlearning research. We introduce Flamingo, a family of Visual Language Models\n(VLM) with this ability. We propose key architectural innovations to: (i)\nbridge powerful pretrained vision-only and language-only models, (ii) handle\nsequences of arbitrarily interleaved visual and textual data, and (iii)\nseamlessly ingest images or videos as inputs. Thanks to their flexibility,\nFlamingo models can be trained on large-scale multimodal web corpora containing\narbitrarily interleaved text and images, which is key to endow them with\nin-context few-shot learning capabilities. We perform a thorough evaluation of\nour models, exploring and measuring their ability to rapidly adapt to a variety\nof image and video tasks. These include open-ended tasks such as visual\nquestion-answering, where the model is prompted with a question which it has to\nanswer; captioning tasks, which evaluate the ability to describe a scene or an\nevent; and close-ended tasks such as multiple-choice visual question-answering.\nFor tasks lying anywhere on this spectrum, a single Flamingo model can achieve\na new state of the art with few-shot learning, simply by prompting the model\nwith task-specific examples. On numerous benchmarks, Flamingo outperforms\nmodels fine-tuned on thousands of times more task-specific data.", + "authors": "Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, Roman Ring, Eliza Rutherford, Serkan Cabi, Tengda Han, Zhitao Gong, Sina Samangooei, Marianne Monteiro, Jacob Menick, Sebastian Borgeaud, Andrew Brock, Aida Nematzadeh, Sahand Sharifzadeh, Mikolaj Binkowski, Ricardo Barreira, Oriol Vinyals, Andrew Zisserman, Karen Simonyan", + "published": "2022-04-29", + "updated": "2022-11-15", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2303.17580v4", + "title": "HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face", + "abstract": "Solving complicated AI tasks with different domains and modalities is a key\nstep toward artificial general intelligence. While there are numerous AI models\navailable for various domains and modalities, they cannot handle complicated AI\ntasks autonomously. Considering large language models (LLMs) have exhibited\nexceptional abilities in language understanding, generation, interaction, and\nreasoning, we advocate that LLMs could act as a controller to manage existing\nAI models to solve complicated AI tasks, with language serving as a generic\ninterface to empower this. 
Based on this philosophy, we present HuggingGPT, an\nLLM-powered agent that leverages LLMs (e.g., ChatGPT) to connect various AI\nmodels in machine learning communities (e.g., Hugging Face) to solve AI tasks.\nSpecifically, we use ChatGPT to conduct task planning when receiving a user\nrequest, select models according to their function descriptions available in\nHugging Face, execute each subtask with the selected AI model, and summarize\nthe response according to the execution results. By leveraging the strong\nlanguage capability of ChatGPT and abundant AI models in Hugging Face,\nHuggingGPT can tackle a wide range of sophisticated AI tasks spanning different\nmodalities and domains and achieve impressive results in language, vision,\nspeech, and other challenging tasks, which paves a new way towards the\nrealization of artificial general intelligence.", + "authors": "Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, Yueting Zhuang", + "published": "2023-03-30", + "updated": "2023-12-03", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1811.00511v2", + "title": "Towards Coherent and Cohesive Long-form Text Generation", + "abstract": "Generating coherent and cohesive long-form texts is a challenging task.\nPrevious works relied on large amounts of human-generated texts to train neural\nlanguage models. However, few attempted to explicitly improve neural language\nmodels from the perspectives of coherence and cohesion. In this work, we\npropose a new neural language model that is equipped with two neural\ndiscriminators which provide feedback signals at the levels of sentence\n(cohesion) and paragraph (coherence). Our model is trained using a simple yet\nefficient variant of policy gradient, called negative-critical sequence\ntraining, which is proposed to eliminate the need of training a separate critic\nfor estimating baseline. Results demonstrate the effectiveness of our approach,\nshowing improvements over the strong baseline -- recurrent attention-based\nbidirectional MLE-trained neural language model.", + "authors": "Woon Sang Cho, Pengchuan Zhang, Yizhe Zhang, Xiujun Li, Michel Galley, Chris Brockett, Mengdi Wang, Jianfeng Gao", + "published": "2018-11-01", + "updated": "2019-05-29", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2303.03378v1", + "title": "PaLM-E: An Embodied Multimodal Language Model", + "abstract": "Large language models excel at a wide range of complex tasks. However,\nenabling general inference in the real world, e.g., for robotics problems,\nraises the challenge of grounding. We propose embodied language models to\ndirectly incorporate real-world continuous sensor modalities into language\nmodels and thereby establish the link between words and percepts. Input to our\nembodied language model are multi-modal sentences that interleave visual,\ncontinuous state estimation, and textual input encodings. We train these\nencodings end-to-end, in conjunction with a pre-trained large language model,\nfor multiple embodied tasks including sequential robotic manipulation planning,\nvisual question answering, and captioning. 
Our evaluations show that PaLM-E, a\nsingle large embodied multimodal model, can address a variety of embodied\nreasoning tasks, from a variety of observation modalities, on multiple\nembodiments, and further, exhibits positive transfer: the model benefits from\ndiverse joint training across internet-scale language, vision, and\nvisual-language domains. Our largest model, PaLM-E-562B with 562B parameters,\nin addition to being trained on robotics tasks, is a visual-language generalist\nwith state-of-the-art performance on OK-VQA, and retains generalist language\ncapabilities with increasing scale.", + "authors": "Danny Driess, Fei Xia, Mehdi S. M. Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, Wenlong Huang, Yevgen Chebotar, Pierre Sermanet, Daniel Duckworth, Sergey Levine, Vincent Vanhoucke, Karol Hausman, Marc Toussaint, Klaus Greff, Andy Zeng, Igor Mordatch, Pete Florence", + "published": "2023-03-06", + "updated": "2023-03-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2303.11381v1", + "title": "MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action", + "abstract": "We propose MM-REACT, a system paradigm that integrates ChatGPT with a pool of\nvision experts to achieve multimodal reasoning and action. In this paper, we\ndefine and explore a comprehensive list of advanced vision tasks that are\nintriguing to solve, but may exceed the capabilities of existing vision and\nvision-language models. To achieve such advanced visual intelligence, MM-REACT\nintroduces a textual prompt design that can represent text descriptions,\ntextualized spatial coordinates, and aligned file names for dense visual\nsignals such as images and videos. MM-REACT's prompt design allows language\nmodels to accept, associate, and process multimodal information, thereby\nfacilitating the synergetic combination of ChatGPT and various vision experts.\nZero-shot experiments demonstrate MM-REACT's effectiveness in addressing the\nspecified capabilities of interests and its wide application in different\nscenarios that require advanced visual understanding. Furthermore, we discuss\nand compare MM-REACT's system paradigm with an alternative approach that\nextends language models for multimodal scenarios through joint finetuning.\nCode, demo, video, and visualization are available at\nhttps://multimodal-react.github.io/", + "authors": "Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Ehsan Azarnasab, Faisal Ahmed, Zicheng Liu, Ce Liu, Michael Zeng, Lijuan Wang", + "published": "2023-03-20", + "updated": "2023-03-20", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2212.10846v3", + "title": "From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models", + "abstract": "Large language models (LLMs) have demonstrated excellent zero-shot\ngeneralization to new language tasks. However, effective utilization of LLMs\nfor zero-shot visual question-answering (VQA) remains challenging, primarily\ndue to the modality disconnection and task disconnection between LLM and VQA\ntask. End-to-end training on vision and language data may bridge the\ndisconnections, but is inflexible and computationally expensive. 
To address\nthis issue, we propose \\emph{Img2Prompt}, a plug-and-play module that provides\nthe prompts that can bridge the aforementioned modality and task\ndisconnections, so that LLMs can perform zero-shot VQA tasks without end-to-end\ntraining. In order to provide such prompts, we further employ LLM-agnostic\nmodels to provide prompts that can describe image content and self-constructed\nquestion-answer pairs, which can effectively guide LLM to perform zero-shot VQA\ntasks. Img2Prompt offers the following benefits: 1) It can flexibly work with\nvarious LLMs to perform VQA. 2)~Without the needing of end-to-end training, it\nsignificantly reduces the cost of deploying LLM for zero-shot VQA tasks. 3) It\nachieves comparable or better performance than methods relying on end-to-end\ntraining. For example, we outperform Flamingo \\cite{Deepmind:Flamingo2022} by\n5.6\\% on VQAv2. On the challenging A-OKVQA dataset, our method even outperforms\nfew-shot methods by as much as 20\\%.", + "authors": "Jiaxian Guo, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Boyang Li, Dacheng Tao, Steven C. H. Hoi", + "published": "2022-12-21", + "updated": "2023-05-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.MM" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2303.04671v1", + "title": "Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models", + "abstract": "ChatGPT is attracting a cross-field interest as it provides a language\ninterface with remarkable conversational competency and reasoning capabilities\nacross many domains. However, since ChatGPT is trained with languages, it is\ncurrently not capable of processing or generating images from the visual world.\nAt the same time, Visual Foundation Models, such as Visual Transformers or\nStable Diffusion, although showing great visual understanding and generation\ncapabilities, they are only experts on specific tasks with one-round fixed\ninputs and outputs. To this end, We build a system called \\textbf{Visual\nChatGPT}, incorporating different Visual Foundation Models, to enable the user\nto interact with ChatGPT by 1) sending and receiving not only languages but\nalso images 2) providing complex visual questions or visual editing\ninstructions that require the collaboration of multiple AI models with\nmulti-steps. 3) providing feedback and asking for corrected results. We design\na series of prompts to inject the visual model information into ChatGPT,\nconsidering models of multiple inputs/outputs and models that require visual\nfeedback. Experiments show that Visual ChatGPT opens the door to investigating\nthe visual roles of ChatGPT with the help of Visual Foundation Models. Our\nsystem is publicly available at\n\\url{https://github.com/microsoft/visual-chatgpt}.", + "authors": "Chenfei Wu, Shengming Yin, Weizhen Qi, Xiaodong Wang, Zecheng Tang, Nan Duan", + "published": "2023-03-08", + "updated": "2023-03-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2209.05530v1", + "title": "Model-based Reinforcement Learning with Multi-step Plan Value Estimation", + "abstract": "A promising way to improve the sample efficiency of reinforcement learning is\nmodel-based methods, in which many explorations and evaluations can happen in\nthe learned models to save real-world samples. 
However, when the learned model\nhas a non-negligible model error, sequential steps in the model are hard to be\naccurately evaluated, limiting the model's utilization. This paper proposes to\nalleviate this issue by introducing multi-step plans to replace multi-step\nactions for model-based RL. We employ the multi-step plan value estimation,\nwhich evaluates the expected discounted return after executing a sequence of\naction plans at a given state, and updates the policy by directly computing the\nmulti-step policy gradient via plan value estimation. The new model-based\nreinforcement learning algorithm MPPVE (Model-based Planning Policy Learning\nwith Multi-step Plan Value Estimation) shows a better utilization of the\nlearned model and achieves a better sample efficiency than state-of-the-art\nmodel-based RL approaches.", + "authors": "Haoxin Lin, Yihao Sun, Jiaji Zhang, Yang Yu", + "published": "2022-09-12", + "updated": "2022-09-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.07240v1", + "title": "Model-Reference Reinforcement Learning for Collision-Free Tracking Control of Autonomous Surface Vehicles", + "abstract": "This paper presents a novel model-reference reinforcement learning algorithm\nfor the intelligent tracking control of uncertain autonomous surface vehicles\nwith collision avoidance. The proposed control algorithm combines a\nconventional control method with reinforcement learning to enhance control\naccuracy and intelligence. In the proposed control design, a nominal system is\nconsidered for the design of a baseline tracking controller using a\nconventional control approach. The nominal system also defines the desired\nbehaviour of uncertain autonomous surface vehicles in an obstacle-free\nenvironment. Thanks to reinforcement learning, the overall tracking controller\nis capable of compensating for model uncertainties and achieving collision\navoidance at the same time in environments with obstacles. In comparison to\ntraditional deep reinforcement learning methods, our proposed learning-based\ncontrol can provide stability guarantees and better sample efficiency. We\ndemonstrate the performance of the new algorithm using an example of autonomous\nsurface vehicles.", + "authors": "Qingrui Zhang, Wei Pan, Vasso Reppa", + "published": "2020-08-17", + "updated": "2020-08-17", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.LG", + "cs.RO", + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2012.00743v1", + "title": "Adaptive Neural Architectures for Recommender Systems", + "abstract": "Deep learning has proved an effective means to capture the non-linear\nassociations of user preferences. However, the main drawback of existing deep\nlearning architectures is that they follow a fixed recommendation strategy,\nignoring users' real time-feedback. Recent advances of deep reinforcement\nstrategies showed that recommendation policies can be continuously updated\nwhile users interact with the system. In doing so, we can learn the optimal\npolicy that fits to users' preferences over the recommendation sessions. The\nmain drawback of deep reinforcement strategies is that are based on predefined\nand fixed neural architectures. 
To shed light on how to handle this issue, in\nthis study we first present deep reinforcement learning strategies for\nrecommendation and discuss the main limitations due to the fixed neural\narchitectures. Then, we detail how recent advances on progressive neural\narchitectures are used for consecutive tasks in other research domains.\nFinally, we present the key challenges to fill the gap between deep\nreinforcement learning and adaptive neural architectures. We provide guidelines\nfor searching for the best neural architecture based on each user feedback via\nreinforcement learning, while considering the prediction performance on\nreal-time recommendations and the model complexity.", + "authors": "Dimitrios Rafailidis, Stefanos Antaris", + "published": "2020-11-11", + "updated": "2020-11-11", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1812.09968v1", + "title": "VMAV-C: A Deep Attention-based Reinforcement Learning Algorithm for Model-based Control", + "abstract": "Recent breakthroughs in Go play and strategic games have witnessed the great\npotential of reinforcement learning in intelligently scheduling in uncertain\nenvironment, but some bottlenecks are also encountered when we generalize this\nparadigm to universal complex tasks. Among them, the low efficiency of data\nutilization in model-free reinforcement algorithms is of great concern. In\ncontrast, the model-based reinforcement learning algorithms can reveal\nunderlying dynamics in learning environments and seldom suffer the data\nutilization problem. To address the problem, a model-based reinforcement\nlearning algorithm with attention mechanism embedded is proposed as an\nextension of World Models in this paper. We learn the environment model through\nMixture Density Network Recurrent Network(MDN-RNN) for agents to interact, with\ncombinations of variational auto-encoder(VAE) and attention incorporated in\nstate value estimates during the process of learning policy. In this way, agent\ncan learn optimal policies through less interactions with actual environment,\nand final experiments demonstrate the effectiveness of our model in control\nproblem.", + "authors": "Xingxing Liang, Qi Wang, Yanghe Feng, Zhong Liu, Jincai Huang", + "published": "2018-12-24", + "updated": "2018-12-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.NE" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2012.09737v2", + "title": "Model-free and Bayesian Ensembling Model-based Deep Reinforcement Learning for Particle Accelerator Control Demonstrated on the FERMI FEL", + "abstract": "Reinforcement learning holds tremendous promise in accelerator controls. The\nprimary goal of this paper is to show how this approach can be utilised on an\noperational level on accelerator physics problems. Despite the success of\nmodel-free reinforcement learning in several domains, sample-efficiency still\nis a bottle-neck, which might be encompassed by model-based methods. We compare\nwell-suited purely model-based to model-free reinforcement learning applied to\nthe intensity optimisation on the FERMI FEL system. We find that the\nmodel-based approach demonstrates higher representational power and\nsample-efficiency, while the asymptotic performance of the model-free method is\nslightly superior. 
The model-based algorithm is implemented in a DYNA-style\nusing an uncertainty aware model, and the model-free algorithm is based on\ntailored deep Q-learning. In both cases, the algorithms were implemented in a\nway, which presents increased noise robustness as omnipresent in accelerator\ncontrol problems. Code is released in\nhttps://github.com/MathPhysSim/FERMI_RL_Paper.", + "authors": "Simon Hirlaender, Niky Bruchon", + "published": "2020-12-17", + "updated": "2022-01-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SY", + "eess.SY", + "physics.acc-ph", + "I.2; J.2" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.09064v2", + "title": "Can Interpretable Reinforcement Learning Manage Prosperity Your Way?", + "abstract": "Personalisation of products and services is fast becoming the driver of\nsuccess in banking and commerce. Machine learning holds the promise of gaining\na deeper understanding of and tailoring to customers' needs and preferences.\nWhereas traditional solutions to financial decision problems frequently rely on\nmodel assumptions, reinforcement learning is able to exploit large amounts of\ndata to improve customer modelling and decision-making in complex financial\nenvironments with fewer assumptions. Model explainability and interpretability\npresent challenges from a regulatory perspective which demands transparency for\nacceptance; they also offer the opportunity for improved insight into and\nunderstanding of customers. Post-hoc approaches are typically used for\nexplaining pretrained reinforcement learning models. Based on our previous\nmodeling of customer spending behaviour, we adapt our recent reinforcement\nlearning algorithm that intrinsically characterizes desirable behaviours and we\ntransition to the problem of asset management. We train inherently\ninterpretable reinforcement learning agents to give investment advice that is\naligned with prototype financial personality traits which are combined to make\na final recommendation. We observe that the trained agents' advice adheres to\ntheir intended characteristics, they learn the value of compound growth, and,\nwithout any explicit reference, the notion of risk as well as improved policy\nconvergence.", + "authors": "Charl Maree, Christian Omlin", + "published": "2022-02-18", + "updated": "2022-06-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1712.04170v2", + "title": "Interpretable Policies for Reinforcement Learning by Genetic Programming", + "abstract": "The search for interpretable reinforcement learning policies is of high\nacademic and industrial interest. Especially for industrial systems, domain\nexperts are more likely to deploy autonomously learned controllers if they are\nunderstandable and convenient to evaluate. Basic algebraic equations are\nsupposed to meet these requirements, as long as they are restricted to an\nadequate complexity. Here we introduce the genetic programming for\nreinforcement learning (GPRL) approach based on model-based batch reinforcement\nlearning and genetic programming, which autonomously learns policy equations\nfrom pre-existing default state-action trajectory samples. GPRL is compared to\na straight-forward method which utilizes genetic programming for symbolic\nregression, yielding policies imitating an existing well-performing, but\nnon-interpretable policy. 
Experiments on three reinforcement learning\nbenchmarks, i.e., mountain car, cart-pole balancing, and industrial benchmark,\ndemonstrate the superiority of our GPRL approach compared to the symbolic\nregression method. GPRL is capable of producing well-performing interpretable\nreinforcement learning policies from pre-existing default trajectory data.", + "authors": "Daniel Hein, Steffen Udluft, Thomas A. Runkler", + "published": "2017-12-12", + "updated": "2018-04-04", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.NE", + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.13044v1", + "title": "Reinforcement Learning with Feedback-modulated TD-STDP", + "abstract": "Spiking neuron networks have been used successfully to solve simple\nreinforcement learning tasks with continuous action set applying learning rules\nbased on spike-timing-dependent plasticity (STDP). However, most of these\nmodels cannot be applied to reinforcement learning tasks with discrete action\nset since they assume that the selected action is a deterministic function of\nfiring rate of neurons, which is continuous. In this paper, we propose a new\nSTDP-based learning rule for spiking neuron networks which contains feedback\nmodulation. We show that the STDP-based learning rule can be used to solve\nreinforcement learning tasks with discrete action set at a speed similar to\nstandard reinforcement learning algorithms when applied to the CartPole and\nLunarLander tasks. Moreover, we demonstrate that the agent is unable to solve\nthese tasks if feedback modulation is omitted from the learning rule. We\nconclude that feedback modulation allows better credit assignment when only the\nunits contributing to the executed action and TD error participate in learning.", + "authors": "Stephen Chung, Robert Kozma", + "published": "2020-08-29", + "updated": "2020-08-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML", + "I.2.8" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1806.01265v2", + "title": "Equivalence Between Wasserstein and Value-Aware Loss for Model-based Reinforcement Learning", + "abstract": "Learning a generative model is a key component of model-based reinforcement\nlearning. Though learning a good model in the tabular setting is a simple task,\nlearning a useful model in the approximate setting is challenging. In this\ncontext, an important question is the loss function used for model learning as\nvarying the loss function can have a remarkable impact on effectiveness of\nplanning. Recently Farahmand et al. (2017) proposed a value-aware model\nlearning (VAML) objective that captures the structure of value function during\nmodel learning. Using tools from Asadi et al. (2018), we show that minimizing\nthe VAML objective is in fact equivalent to minimizing the Wasserstein metric.\nThis equivalence improves our understanding of value-aware models, and also\ncreates a theoretical foundation for applications of Wasserstein in model-based\nreinforcement~learning.", + "authors": "Kavosh Asadi, Evan Cater, Dipendra Misra, Michael L. 
Littman", + "published": "2018-06-01", + "updated": "2018-07-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1206.3281v1", + "title": "Model-Based Bayesian Reinforcement Learning in Large Structured Domains", + "abstract": "Model-based Bayesian reinforcement learning has generated significant\ninterest in the AI community as it provides an elegant solution to the optimal\nexploration-exploitation tradeoff in classical reinforcement learning.\nUnfortunately, the applicability of this type of approach has been limited to\nsmall domains due to the high complexity of reasoning about the joint posterior\nover model parameters. In this paper, we consider the use of factored\nrepresentations combined with online planning techniques, to improve\nscalability of these methods. The main contribution of this paper is a Bayesian\nframework for learning the structure and parameters of a dynamical system,\nwhile also simultaneously planning a (near-)optimal sequence of actions.", + "authors": "Stephane Ross, Joelle Pineau", + "published": "2012-06-13", + "updated": "2012-06-13", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.05546v2", + "title": "Multi-Agent Quantum Reinforcement Learning using Evolutionary Optimization", + "abstract": "Multi-Agent Reinforcement Learning is becoming increasingly more important in\ntimes of autonomous driving and other smart industrial applications.\nSimultaneously a promising new approach to Reinforcement Learning arises using\nthe inherent properties of quantum mechanics, reducing the trainable parameters\nof a model significantly. However, gradient-based Multi-Agent Quantum\nReinforcement Learning methods often have to struggle with barren plateaus,\nholding them back from matching the performance of classical approaches. We\nbuild upon an existing approach for gradient free Quantum Reinforcement\nLearning and propose three genetic variations with Variational Quantum Circuits\nfor Multi-Agent Reinforcement Learning using evolutionary optimization. We\nevaluate our genetic variations in the Coin Game environment and also compare\nthem to classical approaches. We showed that our Variational Quantum Circuit\napproaches perform significantly better compared to a neural network with a\nsimilar amount of trainable parameters. Compared to the larger neural network,\nour approaches archive similar results using $97.88\\%$ less parameters.", + "authors": "Michael K\u00f6lle, Felix Topp, Thomy Phan, Philipp Altmann, Jonas N\u00fc\u00dflein, Claudia Linnhoff-Popien", + "published": "2023-11-09", + "updated": "2024-01-13", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph", + "cs.AI", + "cs.MA" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.11142v1", + "title": "Two-stage Deep Reinforcement Learning for Inverter-based Volt-VAR Control in Active Distribution Networks", + "abstract": "Model-based Vol/VAR optimization method is widely used to eliminate voltage\nviolations and reduce network losses. However, the parameters of active\ndistribution networks(ADNs) are not onsite identified, so significant errors\nmay be involved in the model and make the model-based method infeasible. 
To\ncope with this critical issue, we propose a novel two-stage deep reinforcement\nlearning (DRL) method to improve the voltage profile by regulating\ninverter-based energy resources, which consists of offline stage and online\nstage. In the offline stage, a highly efficient adversarial reinforcement\nlearning algorithm is developed to train an offline agent robust to the model\nmismatch. In the sequential online stage, we transfer the offline agent safely\nas the online agent to perform continuous learning and controlling online with\nsignificantly improved safety and efficiency. Numerical simulations on IEEE\ntest cases not only demonstrate that the proposed adversarial reinforcement\nlearning algorithm outperforms the state-of-art algorithm, but also show that\nour proposed two-stage method achieves much better performance than the\nexisting DRL based methods in the online application.", + "authors": "Haotian Liu, Wenchuan Wu", + "published": "2020-05-20", + "updated": "2020-05-20", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.LG", + "cs.SY", + "J.7; C.3" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2105.00822v2", + "title": "Generative Adversarial Reward Learning for Generalized Behavior Tendency Inference", + "abstract": "Recent advances in reinforcement learning have inspired increasing interest\nin learning user modeling adaptively through dynamic interactions, e.g., in\nreinforcement learning based recommender systems. Reward function is crucial\nfor most of reinforcement learning applications as it can provide the guideline\nabout the optimization. However, current reinforcement-learning-based methods\nrely on manually-defined reward functions, which cannot adapt to dynamic and\nnoisy environments. Besides, they generally use task-specific reward functions\nthat sacrifice generalization ability. We propose a generative inverse\nreinforcement learning for user behavioral preference modelling, to address the\nabove issues. Instead of using predefined reward functions, our model can\nautomatically learn the rewards from user's actions based on discriminative\nactor-critic network and Wasserstein GAN. Our model provides a general way of\ncharacterizing and explaining underlying behavioral tendencies, and our\nexperiments show our method outperforms state-of-the-art methods in a variety\nof scenarios, namely traffic signal control, online recommender systems, and\nscanpath prediction.", + "authors": "Xiaocong Chen, Lina Yao, Xianzhi Wang, Aixin Sun, Wenjie Zhang, Quan Z. Sheng", + "published": "2021-05-03", + "updated": "2021-05-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.IR" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1802.10592v2", + "title": "Model-Ensemble Trust-Region Policy Optimization", + "abstract": "Model-free reinforcement learning (RL) methods are succeeding in a growing\nnumber of tasks, aided by recent advances in deep learning. However, they tend\nto suffer from high sample complexity, which hinders their use in real-world\ndomains. Alternatively, model-based reinforcement learning promises to reduce\nsample complexity, but tends to require careful tuning and to date have\nsucceeded mainly in restrictive domains where simple models are sufficient for\nlearning. 
In this paper, we analyze the behavior of vanilla model-based\nreinforcement learning methods when deep neural networks are used to learn both\nthe model and the policy, and show that the learned policy tends to exploit\nregions where insufficient data is available for the model to be learned,\ncausing instability in training. To overcome this issue, we propose to use an\nensemble of models to maintain the model uncertainty and regularize the\nlearning process. We further show that the use of likelihood ratio derivatives\nyields much more stable learning than backpropagation through time. Altogether,\nour approach Model-Ensemble Trust-Region Policy Optimization (ME-TRPO)\nsignificantly reduces the sample complexity compared to model-free deep RL\nmethods on challenging continuous control benchmark tasks.", + "authors": "Thanard Kurutach, Ignasi Clavera, Yan Duan, Aviv Tamar, Pieter Abbeel", + "published": "2018-02-28", + "updated": "2018-10-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.02380v2", + "title": "Adaptive Rollout Length for Model-Based RL Using Model-Free Deep RL", + "abstract": "Model-based reinforcement learning promises to learn an optimal policy from\nfewer interactions with the environment compared to model-free reinforcement\nlearning by learning an intermediate model of the environment in order to\npredict future interactions. When predicting a sequence of interactions, the\nrollout length, which limits the prediction horizon, is a critical\nhyperparameter as accuracy of the predictions diminishes in the regions that\nare further away from real experience. As a result, with a longer rollout\nlength, an overall worse policy is learned in the long run. Thus, the\nhyperparameter provides a trade-off between quality and efficiency. In this\nwork, we frame the problem of tuning the rollout length as a meta-level\nsequential decision-making problem that optimizes the final policy learned by\nmodel-based reinforcement learning given a fixed budget of environment\ninteractions by adapting the hyperparameter dynamically based on feedback from\nthe learning process, such as accuracy of the model and the remaining budget of\ninteractions. We use model-free deep reinforcement learning to solve the\nmeta-level decision problem and demonstrate that our approach outperforms\ncommon heuristic baselines on two well-known reinforcement learning\nenvironments.", + "authors": "Abhinav Bhatia, Philip S. Thomas, Shlomo Zilberstein", + "published": "2022-06-06", + "updated": "2022-06-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1506.00685v1", + "title": "Model-based reinforcement learning for infinite-horizon approximate optimal tracking", + "abstract": "This paper provides an approximate online adaptive solution to the\ninfinite-horizon optimal tracking problem for control-affine continuous-time\nnonlinear systems with unknown drift dynamics. Model-based reinforcement\nlearning is used to relax the persistence of excitation condition. Model-based\nreinforcement learning is implemented using a concurrent learning-based system\nidentifier to simulate experience by evaluating the Bellman error over\nunexplored areas of the state space. 
Tracking of the desired trajectory and\nconvergence of the developed policy to a neighborhood of the optimal policy are\nestablished via Lyapunov-based stability analysis. Simulation results\ndemonstrate the effectiveness of the developed technique.", + "authors": "Rushikesh Kamalapurkar, Lindsey Andrews, Patrick Walters, Warren E. Dixon", + "published": "2015-06-01", + "updated": "2015-06-01", + "primary_cat": "cs.SY", + "cats": [ + "cs.SY", + "math.OC" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2007.12666v5", + "title": "Safe Model-Based Reinforcement Learning for Systems with Parametric Uncertainties", + "abstract": "Reinforcement learning has been established over the past decade as an\neffective tool to find optimal control policies for dynamical systems, with\nrecent focus on approaches that guarantee safety during the learning and/or\nexecution phases. In general, safety guarantees are critical in reinforcement\nlearning when the system is safety-critical and/or task restarts are not\npractically feasible. In optimal control theory, safety requirements are often\nexpressed in terms of state and/or control constraints. In recent years,\nreinforcement learning approaches that rely on persistent excitation have been\ncombined with a barrier transformation to learn the optimal control policies\nunder state constraints. To soften the excitation requirements, model-based\nreinforcement learning methods that rely on exact model knowledge have also\nbeen integrated with the barrier transformation framework. The objective of\nthis paper is to develop safe reinforcement learning method for deterministic\nnonlinear systems, with parametric uncertainties in the model, to learn\napproximate constrained optimal policies without relying on stringent\nexcitation conditions. To that end, a model-based reinforcement learning\ntechnique that utilizes a novel filtered concurrent learning method, along with\na barrier transformation, is developed in this paper to realize simultaneous\nlearning of unknown model parameters and approximate optimal state-constrained\ncontrol policies for safety-critical systems.", + "authors": "S M Nahid Mahmud, Scott A Nivison, Zachary I. Bell, Rushikesh Kamalapurkar", + "published": "2020-07-24", + "updated": "2021-10-05", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.SY", + "math.OC" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2009.09781v1", + "title": "Rethinking Supervised Learning and Reinforcement Learning in Task-Oriented Dialogue Systems", + "abstract": "Dialogue policy learning for task-oriented dialogue systems has enjoyed great\nprogress recently mostly through employing reinforcement learning methods.\nHowever, these approaches have become very sophisticated. It is time to\nre-evaluate it. Are we really making progress developing dialogue agents only\nbased on reinforcement learning? We demonstrate how (1)~traditional supervised\nlearning together with (2)~a simulator-free adversarial learning method can be\nused to achieve performance comparable to state-of-the-art RL-based methods.\nFirst, we introduce a simple dialogue action decoder to predict the appropriate\nactions. Then, the traditional multi-label classification solution for dialogue\npolicy learning is extended by adding dense layers to improve the dialogue\nagent performance. 
Finally, we employ the Gumbel-Softmax estimator to\nalternatively train the dialogue agent and the dialogue reward model without\nusing reinforcement learning. Based on our extensive experimentation, we can\nconclude the proposed methods can achieve more stable and higher performance\nwith fewer efforts, such as the domain knowledge required to design a user\nsimulator and the intractable parameter tuning in reinforcement learning. Our\nmain goal is not to beat reinforcement learning with supervised learning, but\nto demonstrate the value of rethinking the role of reinforcement learning and\nsupervised learning in optimizing task-oriented dialogue systems.", + "authors": "Ziming Li, Julia Kiseleva, Maarten de Rijke", + "published": "2020-09-21", + "updated": "2020-09-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1705.03562v1", + "title": "Deep Episodic Value Iteration for Model-based Meta-Reinforcement Learning", + "abstract": "We present a new deep meta reinforcement learner, which we call Deep Episodic\nValue Iteration (DEVI). DEVI uses a deep neural network to learn a similarity\nmetric for a non-parametric model-based reinforcement learning algorithm. Our\nmodel is trained end-to-end via back-propagation. Despite being trained using\nthe model-free Q-learning objective, we show that DEVI's model-based internal\nstructure provides `one-shot' transfer to changes in reward and transition\nstructure, even for tasks with very high-dimensional state spaces.", + "authors": "Steven Stenberg Hansen", + "published": "2017-05-09", + "updated": "2017-05-09", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.AI", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1705.07460v1", + "title": "Experience enrichment based task independent reward model", + "abstract": "For most reinforcement learning approaches, the learning is performed by\nmaximizing an accumulative reward that is expectedly and manually defined for\nspecific tasks. However, in real world, rewards are emergent phenomena from the\ncomplex interactions between agents and environments. In this paper, we propose\nan implicit generic reward model for reinforcement learning. Unlike those\nrewards that are manually defined for specific tasks, such implicit reward is\ntask independent. It only comes from the deviation from the agents' previous\nexperiences.", + "authors": "Min Xu", + "published": "2017-05-21", + "updated": "2017-05-21", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2011.01734v1", + "title": "Differentiable Physics Models for Real-world Offline Model-based Reinforcement Learning", + "abstract": "A limitation of model-based reinforcement learning (MBRL) is the exploitation\nof errors in the learned models. Black-box models can fit complex dynamics with\nhigh fidelity, but their behavior is undefined outside of the data\ndistribution. Physics-based models are better at extrapolating, due to the\ngeneral validity of their informed structure, but underfit in the real world\ndue to the presence of unmodeled phenomena. 
In this work, we demonstrate\nexperimentally that for the offline model-based reinforcement learning setting,\nphysics-based models can be beneficial compared to high-capacity function\napproximators if the mechanical structure is known. Physics-based models can\nlearn to perform the ball in a cup (BiC) task on a physical manipulator using\nonly 4 minutes of sampled data using offline MBRL. We find that black-box\nmodels consistently produce unviable policies for BiC as all predicted\ntrajectories diverge to physically impossible states, despite having access to\nmore data than the physics-based model. In addition, we generalize the approach\nof physics parameter identification from modeling holonomic multi-body systems\nto systems with nonholonomic dynamics using end-to-end automatic\ndifferentiation.\n Videos: https://sites.google.com/view/ball-in-a-cup-in-4-minutes/", + "authors": "Michael Lutter, Johannes Silberbauer, Joe Watson, Jan Peters", + "published": "2020-11-03", + "updated": "2020-11-03", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2010.12142v1", + "title": "Bridging Imagination and Reality for Model-Based Deep Reinforcement Learning", + "abstract": "Sample efficiency has been one of the major challenges for deep reinforcement\nlearning. Recently, model-based reinforcement learning has been proposed to\naddress this challenge by performing planning on imaginary trajectories with a\nlearned world model. However, world model learning may suffer from overfitting\nto training trajectories, and thus model-based value estimation and policy\nsearch will be prone to getting stuck in an inferior local policy. In this paper, we\npropose a novel model-based reinforcement learning algorithm, called BrIdging\nReality and Dream (BIRD). It maximizes the mutual information between imaginary\nand real trajectories so that the policy improvement learned from imaginary\ntrajectories can be easily generalized to real trajectories. We demonstrate\nthat our approach improves sample efficiency of model-based planning, and\nachieves state-of-the-art performance on challenging visual control benchmarks.", + "authors": "Guangxiang Zhu, Minghao Zhang, Honglak Lee, Chongjie Zhang", + "published": "2020-10-23", + "updated": "2020-10-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.02025v1", + "title": "Between Rate-Distortion Theory & Value Equivalence in Model-Based Reinforcement Learning", + "abstract": "The quintessential model-based reinforcement-learning agent iteratively\nrefines its estimates or prior beliefs about the true underlying model of the\nenvironment. Recent empirical successes in model-based reinforcement learning\nwith function approximation, however, eschew the true model in favor of a\nsurrogate that, while ignoring various facets of the environment, still\nfacilitates effective planning over behaviors. Recently formalized as the value\nequivalence principle, this algorithmic technique is perhaps unavoidable as\nreal-world reinforcement learning demands consideration of a simple,\ncomputationally-bounded agent interacting with an overwhelmingly complex\nenvironment.
In this work, we entertain an extreme scenario wherein some\ncombination of immense environment complexity and limited agent capacity\nentirely precludes identifying an exactly value-equivalent model. In light of\nthis, we embrace a notion of approximate value equivalence and introduce an\nalgorithm for incrementally synthesizing simple and useful approximations of\nthe environment from which an agent might still recover near-optimal behavior.\nCrucially, we recognize the information-theoretic nature of this lossy\nenvironment compression problem and use the appropriate tools of\nrate-distortion theory to make mathematically precise how value equivalence can\nlend tractability to otherwise intractable sequential decision-making problems.", + "authors": "Dilip Arumugam, Benjamin Van Roy", + "published": "2022-06-04", + "updated": "2022-06-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.IT", + "math.IT" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.01794v1", + "title": "Imitation Game: A Model-based and Imitation Learning Deep Reinforcement Learning Hybrid", + "abstract": "Autonomous and learning systems based on Deep Reinforcement Learning have\nfirmly established themselves as a foundation for approaches to creating\nresilient and efficient Cyber-Physical Energy Systems. However, most current\napproaches suffer from two distinct problems: Modern model-free algorithms such\nas Soft Actor Critic need a high number of samples to learn a meaningful\npolicy, as well as a fallback to ward against concept drifts (e. g.,\ncatastrophic forgetting). In this paper, we present the work in progress\ntowards a hybrid agent architecture that combines model-based Deep\nReinforcement Learning with imitation learning to overcome both problems.", + "authors": "Eric MSP Veith, Torben Logemann, Aleksandr Berezin, Arlena Well\u00dfow, Stephan Balduin", + "published": "2024-04-02", + "updated": "2024-04-02", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2301.10119v2", + "title": "Minimal Value-Equivalent Partial Models for Scalable and Robust Planning in Lifelong Reinforcement Learning", + "abstract": "Learning models of the environment from pure interaction is often considered\nan essential component of building lifelong reinforcement learning agents.\nHowever, the common practice in model-based reinforcement learning is to learn\nmodels that model every aspect of the agent's environment, regardless of\nwhether they are important in coming up with optimal decisions or not. In this\npaper, we argue that such models are not particularly well-suited for\nperforming scalable and robust planning in lifelong reinforcement learning\nscenarios and we propose new kinds of models that only model the relevant\naspects of the environment, which we call \"minimal value-equivalent partial\nmodels\". After providing a formal definition for these models, we provide\ntheoretical results demonstrating the scalability advantages of performing\nplanning with such models and then perform experiments to empirically\nillustrate our theoretical results. Then, we provide some useful heuristics on\nhow to learn these kinds of models with deep learning architectures and\nempirically demonstrate that models learned in such a way can allow for\nperforming planning that is robust to distribution shifts and compounding model\nerrors. 
Overall, both our theoretical and empirical results suggest that\nminimal value-equivalent partial models can provide significant benefits to\nperforming scalable and robust planning in lifelong reinforcement learning\nscenarios.", + "authors": "Safa Alver, Doina Precup", + "published": "2023-01-24", + "updated": "2023-06-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1910.03016v4", + "title": "Is a Good Representation Sufficient for Sample Efficient Reinforcement Learning?", + "abstract": "Modern deep learning methods provide effective means to learn good\nrepresentations. However, is a good representation itself sufficient for sample\nefficient reinforcement learning? This question has largely been studied only\nwith respect to (worst-case) approximation error, in the more classical\napproximate dynamic programming literature. With regards to the statistical\nviewpoint, this question is largely unexplored, and the extant body of\nliterature mainly focuses on conditions which permit sample efficient\nreinforcement learning with little understanding of what are necessary\nconditions for efficient reinforcement learning.\n This work shows that, from the statistical viewpoint, the situation is far\nsubtler than suggested by the more traditional approximation viewpoint, where\nthe requirements on the representation that suffice for sample efficient RL are\neven more stringent. Our main results provide sharp thresholds for\nreinforcement learning methods, showing that there are hard limitations on what\nconstitutes good function approximation (in terms of the dimensionality of the\nrepresentation), where we focus on natural representational conditions relevant\nto value-based, model-based, and policy-based learning. These lower bounds\nhighlight that having a good (value-based, model-based, or policy-based)\nrepresentation in and of itself is insufficient for efficient reinforcement\nlearning, unless the quality of this approximation passes certain hard\nthresholds. Furthermore, our lower bounds also imply exponential separations on\nthe sample complexity between 1) value-based learning with perfect\nrepresentation and value-based learning with a good-but-not-perfect\nrepresentation, 2) value-based learning and policy-based learning, 3)\npolicy-based learning and supervised learning and 4) reinforcement learning and\nimitation learning.", + "authors": "Simon S. Du, Sham M. Kakade, Ruosong Wang, Lin F. Yang", + "published": "2019-10-07", + "updated": "2020-02-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "math.OC", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2211.10688v2", + "title": "ReInform: Selecting paths with reinforcement learning for contextualized link prediction", + "abstract": "We propose to use reinforcement learning to inform transformer-based\ncontextualized link prediction models by providing paths that are most useful\nfor predicting the correct answer. This is in contrast to previous approaches,\nthat either used reinforcement learning (RL) to directly search for the answer,\nor based their prediction on limited or randomly selected context. 
Our\nexperiments on WN18RR and FB15k-237 show that contextualized link prediction\nmodels consistently outperform RL-based answer search, and that additional\nimprovements (of up to 13.5% MRR) can be gained by combining RL with a link\nprediction model. The PyTorch implementation of the RL agent is available at\nhttps://github.com/marina-sp/reinform", + "authors": "Marina Speranskaya, Sameh Methias, Benjamin Roth", + "published": "2022-11-19", + "updated": "2023-01-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1703.04489v1", + "title": "Reinforcement Learning for Transition-Based Mention Detection", + "abstract": "This paper describes an application of reinforcement learning to the mention\ndetection task. We define a novel action-based formulation for the mention\ndetection task, in which a model can flexibly revise past labeling decisions by\ngrouping together tokens and assigning partial mention labels. We devise a\nmethod to create mention-level episodes and we train a model by rewarding\ncorrectly labeled complete mentions, irrespective of the inner structure\ncreated. The model yields results which are on par with a competitive\nsupervised counterpart while being more flexible in terms of achieving targeted\nbehavior through reward modeling and generating internal mention structure,\nespecially on longer mentions.", + "authors": "Georgiana Dinu, Wael Hamza, Radu Florian", + "published": "2017-03-13", + "updated": "2017-03-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2012.07369v2", + "title": "Learning for MPC with Stability & Safety Guarantees", + "abstract": "The combination of learning methods with Model Predictive Control (MPC) has\nattracted a significant amount of attention in the recent literature. The hope\nof this combination is to reduce the reliance of MPC schemes on accurate\nmodels, and to tap into the fast developing machine learning and reinforcement\nlearning tools to exploit the growing amount of data available for many\nsystems. In particular, the combination of reinforcement learning and MPC has\nbeen proposed as a viable and theoretically justified approach to introduce\nexplainable, safe and stable policies in reinforcement learning. However, a\nformal theory detailing how the safety and stability of an MPC-based policy can\nbe maintained through the parameter updates delivered by the learning tools is\nstill lacking. This paper addresses this gap. The theory is developed for the\ngeneric Robust MPC case, and applied in simulation in the robust tube-based\nlinear MPC case, where the theory is fairly easy to deploy in practice. 
The\npaper focuses on Reinforcement Learning as a learning tool, but it applies to\nany learning method that updates the MPC parameters online.", + "authors": "S\u00e9bastien Gros, Mario Zanon", + "published": "2020-12-14", + "updated": "2022-07-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SY", + "eess.SY", + "math.OC" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.07315v1", + "title": "An introduction to reinforcement learning for neuroscience", + "abstract": "Reinforcement learning has a rich history in neuroscience, from early work on\ndopamine as a reward prediction error signal for temporal difference learning\n(Schultz et al., 1997) to recent work suggesting that dopamine could implement\na form of 'distributional reinforcement learning' popularized in deep learning\n(Dabney et al., 2020). Throughout this literature, there has been a tight link\nbetween theoretical advances in reinforcement learning and neuroscientific\nexperiments and findings. As a result, the theories describing our experimental\ndata have become increasingly complex and difficult to navigate. In this\nreview, we cover the basic theory underlying classical work in reinforcement\nlearning and build up to an introductory overview of methods used in modern\ndeep reinforcement learning that have found applications in systems\nneuroscience. We start with an overview of the reinforcement learning problem\nand classical temporal difference algorithms, followed by a discussion of\n'model-free' and 'model-based' reinforcement learning together with methods\nsuch as DYNA and successor representations that fall in between these two\ncategories. Throughout these sections, we highlight the close parallels between\nthe machine learning methods and related work in both experimental and\ntheoretical neuroscience. We then provide an introduction to deep reinforcement\nlearning with examples of how these methods have been used to model different\nlearning phenomena in the systems neuroscience literature, such as\nmeta-reinforcement learning (Wang et al., 2018) and distributional\nreinforcement learning (Dabney et al., 2020). Code that implements the methods\ndiscussed in this work and generates the figures is also provided.", + "authors": "Kristopher T. Jensen", + "published": "2023-11-13", + "updated": "2023-11-13", + "primary_cat": "q-bio.NC", + "cats": [ + "q-bio.NC", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.05440v1", + "title": "Delay-Aware Model-Based Reinforcement Learning for Continuous Control", + "abstract": "Action delays degrade the performance of reinforcement learning in many\nreal-world systems. This paper proposes a formal definition of delay-aware\nMarkov Decision Process and proves it can be transformed into standard MDP with\naugmented states using the Markov reward process. We develop a delay-aware\nmodel-based reinforcement learning framework that can incorporate the\nmulti-step delay into the learned system models without learning effort.\nExperiments with the Gym and MuJoCo platforms show that the proposed\ndelay-aware model-based algorithm is more efficient in training and\ntransferable between systems with various durations of delay compared with\noff-policy model-free reinforcement learning methods. 
Codes available at:\nhttps://github.com/baimingc/dambrl.", + "authors": "Baiming Chen, Mengdi Xu, Liang Li, Ding Zhao", + "published": "2020-05-11", + "updated": "2020-05-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1804.07193v3", + "title": "Lipschitz Continuity in Model-based Reinforcement Learning", + "abstract": "We examine the impact of learning Lipschitz continuous models in the context\nof model-based reinforcement learning. We provide a novel bound on multi-step\nprediction error of Lipschitz models where we quantify the error using the\nWasserstein metric. We go on to prove an error bound for the value-function\nestimate arising from Lipschitz models and show that the estimated value\nfunction is itself Lipschitz. We conclude with empirical results that show the\nbenefits of controlling the Lipschitz constant of neural-network models.", + "authors": "Kavosh Asadi, Dipendra Misra, Michael L. Littman", + "published": "2018-04-19", + "updated": "2018-07-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1709.05067v1", + "title": "Deep Reinforcement Learning for Conversational AI", + "abstract": "Deep reinforcement learning is revolutionizing the artificial intelligence\nfield. Currently, it serves as a good starting point for constructing\nintelligent autonomous systems which offer a better knowledge of the visual\nworld. It is possible to scale deep reinforcement learning with the use of deep\nlearning and do amazing tasks such as use of pixels in playing video games. In\nthis paper, key concepts of deep reinforcement learning including reward\nfunction, differences between reinforcement learning and supervised learning\nand models for implementation of reinforcement are discussed. Key challenges\nrelated to the implementation of reinforcement learning in conversational AI\ndomain are identified as well as discussed in detail. Various conversational\nmodels which are based on deep reinforcement learning (as well as deep\nlearning) are also discussed. In summary, this paper discusses key aspects of\ndeep reinforcement learning which are crucial for designing an efficient\nconversational AI.", + "authors": "Mahipal Jadeja, Neelanshi Varia, Agam Shah", + "published": "2017-09-15", + "updated": "2017-09-15", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2305.03360v1", + "title": "A Survey on Offline Model-Based Reinforcement Learning", + "abstract": "Model-based approaches are becoming increasingly popular in the field of\noffline reinforcement learning, with high potential in real-world applications\ndue to the model's capability of thoroughly utilizing the large historical\ndatasets available with supervised learning techniques. This paper presents a\nliterature review of recent work in offline model-based reinforcement learning,\na field that utilizes model-based approaches in offline reinforcement learning.\nThe survey provides a brief overview of the concepts and recent developments in\nboth offline reinforcement learning and model-based reinforcement learning, and\ndiscuss the intersection of the two fields. 
We then present key relevant\npapers in the field of offline model-based reinforcement learning and discuss\ntheir methods, particularly their approaches in solving the issue of\ndistributional shift, the main problem faced by all current offline model-based\nreinforcement learning methods. We further discuss key challenges faced by the\nfield, and suggest possible directions for future work.", + "authors": "Haoyang He", + "published": "2023-05-05", + "updated": "2023-05-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SY", + "eess.SY", + "I.2.6; I.2.8" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.06914v1", + "title": "Model-assisted Reinforcement Learning of a Quadrotor", + "abstract": "In recent times, reinforcement learning has produced baffling results when it\ncomes to performing control tasks with highly non-linear systems. The\nimpressive results always outweigh the potential vulnerabilities or\nuncertainties associated with the agents when deployed in the real world. While\nthe performance is remarkable compared to the classical control algorithms, the\nreinforcement learning-based methods suffer from two flaws, robustness and\ninterpretability, which are vital for contemporary real-world applications. The\npaper attempts to alleviate such problems with reinforcement learning and\nproposes the concept of model-assisted reinforcement learning to induce a\nnotion of conservativeness in the agents. The control task considered for the\nexperiment involves navigating a CrazyFlie quadrotor. The paper also describes\na way of reformulating the task to have the flexibility of tuning the level of\nconservativeness via multi-objective reinforcement learning. The results\ninclude a comparison of the vanilla reinforcement learning approaches and the\nproposed approach. The metrics are evaluated by systematically injecting\ndisturbances to classify the inherent robustness and conservativeness of the\nagents. More concrete arguments are made by computing and comparing the\nbackward reachability tubes of the RL policies by solving the\nHamilton-Jacobi-Bellman partial differential equation (HJ PDE).", + "authors": "Arshad Javeed", + "published": "2023-11-12", + "updated": "2023-11-12", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2006.07178v2", + "title": "Meta-Reinforcement Learning Robust to Distributional Shift via Model Identification and Experience Relabeling", + "abstract": "Reinforcement learning algorithms can acquire policies for complex tasks\nautonomously. However, the number of samples required to learn a diverse set of\nskills can be prohibitively large. While meta-reinforcement learning methods\nhave enabled agents to leverage prior experience to adapt quickly to new tasks,\ntheir performance depends crucially on how close the new task is to the\npreviously experienced tasks. Current approaches are either not able to\nextrapolate well, or can do so at the expense of requiring extremely large\namounts of data for on-policy meta-training. In this work, we present model\nidentification and experience relabeling (MIER), a meta-reinforcement learning\nalgorithm that is both efficient and extrapolates well when faced with\nout-of-distribution tasks at test time.
Our method is based on a simple\ninsight: we recognize that dynamics models can be adapted efficiently and\nconsistently with off-policy data, more easily than policies and value\nfunctions. These dynamics models can then be used to continue training policies\nand value functions for out-of-distribution tasks without using\nmeta-reinforcement learning at all, by generating synthetic experience for the\nnew task.", + "authors": "Russell Mendonca, Xinyang Geng, Chelsea Finn, Sergey Levine", + "published": "2020-06-12", + "updated": "2020-06-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2301.03933v1", + "title": "Hint assisted reinforcement learning: an application in radio astronomy", + "abstract": "Model based reinforcement learning has proven to be more sample efficient\nthan model free methods. On the other hand, the construction of a dynamics\nmodel in model based reinforcement learning has increased complexity. Data\nprocessing tasks in radio astronomy are such situations where the original\nproblem which is being solved by reinforcement learning itself is the creation\nof a model. Fortunately, many methods based on heuristics or signal processing\ndo exist to perform the same tasks and we can leverage them to propose the best\naction to take, or in other words, to provide a `hint'. We propose to use\n`hints' generated by the environment as an aid to the reinforcement learning\nprocess mitigating the complexity of model construction. We modify the soft\nactor critic algorithm to use hints and use the alternating direction method of\nmultipliers algorithm with inequality constraints to train the agent. Results\nin several environments show that we get the increased sample efficiency by\nusing hints as compared to model free methods.", + "authors": "Sarod Yatawatta", + "published": "2023-01-10", + "updated": "2023-01-10", + "primary_cat": "astro-ph.IM", + "cats": [ + "astro-ph.IM", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2102.03022v1", + "title": "Deceptive Reinforcement Learning for Privacy-Preserving Planning", + "abstract": "In this paper, we study the problem of deceptive reinforcement learning to\npreserve the privacy of a reward function. Reinforcement learning is the\nproblem of finding a behaviour policy based on rewards received from\nexploratory behaviour. A key ingredient in reinforcement learning is a reward\nfunction, which determines how much reward (negative or positive) is given and\nwhen. However, in some situations, we may want to keep a reward function\nprivate; that is, to make it difficult for an observer to determine the reward\nfunction used. We define the problem of privacy-preserving reinforcement\nlearning, and present two models for solving it. These models are based on\ndissimulation -- a form of deception that `hides the truth'. We evaluate our\nmodels both computationally and via human behavioural experiments. 
Results show\nthat the resulting policies are indeed deceptive, and that participants can\ndetermine the true reward function less reliably than that of an honest agent.", + "authors": "Zhengshang Liu, Yue Yang, Tim Miller, Peta Masters", + "published": "2021-02-05", + "updated": "2021-02-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.MA" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2303.13489v2", + "title": "Boosting Reinforcement Learning and Planning with Demonstrations: A Survey", + "abstract": "Although reinforcement learning has seen tremendous success recently, this\nkind of trial-and-error learning can be impractical or inefficient in complex\nenvironments. The use of demonstrations, on the other hand, enables agents to\nbenefit from expert knowledge rather than having to discover the best action to\ntake through exploration. In this survey, we discuss the advantages of using\ndemonstrations in sequential decision making, various ways to apply\ndemonstrations in learning-based decision making paradigms (for example,\nreinforcement learning and planning in the learned models), and how to collect\nthe demonstrations in various scenarios. Additionally, we exemplify a practical\npipeline for generating and utilizing demonstrations in the recently proposed\nManiSkill robot learning benchmark.", + "authors": "Tongzhou Mu, Hao Su", + "published": "2023-03-23", + "updated": "2023-03-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1810.03198v1", + "title": "Reinforcement Evolutionary Learning Method for self-learning", + "abstract": "In statistical modelling the biggest threat is concept drift which makes the\nmodel gradually showing deteriorating performance over time. There are state of\nthe art methodologies to detect the impact of concept drift, however general\nstrategy considered to overcome the issue in performance is to rebuild or\nre-calibrate the model periodically as the variable patterns for the model\nchanges significantly due to market change or consumer behavior change etc.\nQuantitative research is the most widely spread application of data science in\nMarketing or financial domain where applicability of state of the art\nreinforcement learning for auto-learning is less explored paradigm.\nReinforcement learning is heavily dependent on having a simulated environment\nwhich is majorly available for gaming or online systems, to learn from the live\nfeedback. However, there are some research happened on the area of online\nadvertisement, pricing etc where due to the nature of the online learning\nenvironment scope of reinforcement learning is explored. 
Our proposed solution\nis a reinforcement learning based, true self-learning algorithm which can adapt\nto the data change or concept drift and auto learn and self-calibrate for the\nnew patterns of the data solving the problem of concept drift.\n Keywords - Reinforcement learning, Genetic Algorithm, Q-learning,\nClassification modelling, CMA-ES, NES, Multi objective optimization, Concept\ndrift, Population stability index, Incremental learning, F1-measure, Predictive\nModelling, Self-learning, MCTS, AlphaGo, AlphaZero", + "authors": "Kumarjit Pathak, Jitin Kapila", + "published": "2018-10-07", + "updated": "2018-10-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NE", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2303.09013v1", + "title": "Self-Inspection Method of Unmanned Aerial Vehicles in Power Plants Using Deep Q-Network Reinforcement Learning", + "abstract": "For the purpose of inspecting power plants, autonomous robots can be built\nusing reinforcement learning techniques. The method replicates the environment\nand employs a simple reinforcement learning (RL) algorithm. This strategy might\nbe applied in several sectors, including the electricity generation sector. A\npre-trained model with perception, planning, and action is suggested by the\nresearch. To address optimization problems, such as the Unmanned Aerial Vehicle\n(UAV) navigation problem, Deep Q-network (DQN), a reinforcement learning-based\nframework that Deepmind launched in 2015, incorporates both deep learning and\nQ-learning. To overcome problems with current procedures, the research proposes\na power plant inspection system incorporating UAV autonomous navigation and DQN\nreinforcement learning. These training processes set reward functions with\nreference to states and consider both internal and external effect factors,\nwhich distinguishes them from other reinforcement learning training techniques\nnow in use. The key components of the reinforcement learning segment of the\ntechnique, for instance, introduce states such as the simulation of a wind\nfield, the battery charge level of an unmanned aerial vehicle, the height the\nUAV reached, etc. The trained model makes it more likely that the inspection\nstrategy will be applied in practice by enabling the UAV to move around on its\nown in difficult environments. The average score of the model converges to\n9,000. The trained model allowed the UAV to make the fewest number of rotations\nnecessary to go to the target point.", + "authors": "Haoran Guan", + "published": "2023-03-16", + "updated": "2023-03-16", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1611.00862v1", + "title": "Quantile Reinforcement Learning", + "abstract": "In reinforcement learning, the standard criterion to evaluate policies in a\nstate is the expectation of (discounted) sum of rewards. However, this\ncriterion may not always be suitable, we consider an alternative criterion\nbased on the notion of quantiles. In the case of episodic reinforcement\nlearning problems, we propose an algorithm based on stochastic approximation\nwith two timescales. 
We evaluate our proposition on a simple model of the TV\nshow, Who wants to be a millionaire.", + "authors": "Hugo Gilbert, Paul Weng", + "published": "2016-11-03", + "updated": "2016-11-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1901.11437v3", + "title": "Successor Features Combine Elements of Model-Free and Model-based Reinforcement Learning", + "abstract": "A key question in reinforcement learning is how an intelligent agent can\ngeneralize knowledge across different inputs. By generalizing across different\ninputs, information learned for one input can be immediately reused for\nimproving predictions for another input. Reusing information allows an agent to\ncompute an optimal decision-making strategy using less data. State\nrepresentation is a key element of the generalization process, compressing a\nhigh-dimensional input space into a low-dimensional latent state space. This\narticle analyzes properties of different latent state spaces, leading to new\nconnections between model-based and model-free reinforcement learning.\nSuccessor features, which predict frequencies of future observations, form a\nlink between model-based and model-free learning: Learning to predict future\nexpected reward outcomes, a key characteristic of model-based agents, is\nequivalent to learning successor features. Learning successor features is a\nform of temporal difference learning and is equivalent to learning to predict a\nsingle policy's utility, which is a characteristic of model-free agents.\nDrawing on the connection between model-based reinforcement learning and\nsuccessor features, we demonstrate that representations that are predictive of\nfuture reward outcomes generalize across variations in both transitions and\nrewards. This result extends previous work on successor features, which is\nconstrained to fixed transitions and assumes re-learning of the transferred\nstate representation.", + "authors": "Lucas Lehnert, Michael L. Littman", + "published": "2019-01-31", + "updated": "2020-10-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2112.01195v1", + "title": "Maximum Entropy Model-based Reinforcement Learning", + "abstract": "Recent advances in reinforcement learning have demonstrated its ability to\nsolve hard agent-environment interaction tasks on a super-human level. However,\nthe application of reinforcement learning methods to practical and real-world\ntasks is currently limited due to most RL state-of-art algorithms' sample\ninefficiency, i.e., the need for a vast number of training episodes. For\nexample, OpenAI Five algorithm that has beaten human players in Dota 2 has\ntrained for thousands of years of game time. Several approaches exist that\ntackle the issue of sample inefficiency, that either offers a more efficient\nusage of already gathered experience or aim to gain a more relevant and diverse\nexperience via a better exploration of an environment. However, to our\nknowledge, no such approach exists for model-based algorithms, that showed\ntheir high sample efficiency in solving hard control tasks with\nhigh-dimensional state space. This work connects exploration techniques and\nmodel-based reinforcement learning. We have designed a novel exploration method\nthat takes into account features of the model-based approach. 
We also\ndemonstrate through experiments that our method significantly improves the\nperformance of the model-based algorithm Dreamer.", + "authors": "Oleg Svidchenko, Aleksei Shpilman", + "published": "2021-12-02", + "updated": "2021-12-02", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1709.09346v2", + "title": "Cold-Start Reinforcement Learning with Softmax Policy Gradient", + "abstract": "Policy-gradient approaches to reinforcement learning have two common and\nundesirable overhead procedures, namely warm-start training and sample variance\nreduction. In this paper, we describe a reinforcement learning method based on\na softmax value function that requires neither of these procedures. Our method\ncombines the advantages of policy-gradient methods with the efficiency and\nsimplicity of maximum-likelihood approaches. We apply this new cold-start\nreinforcement learning method in training sequence generation models for\nstructured output prediction problems. Empirical evidence validates this method\non automatic summarization and image captioning tasks.", + "authors": "Nan Ding, Radu Soricut", + "published": "2017-09-27", + "updated": "2017-10-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.07525v1", + "title": "Using Collision Momentum in Deep Reinforcement Learning Based Adversarial Pedestrian Modeling", + "abstract": "Recent research in pedestrian simulation often aims to develop realistic\nbehaviors in various situations, but it is challenging for existing algorithms\nto generate behaviors that identify weaknesses in automated vehicles'\nperformance in extreme and unlikely scenarios and edge cases. To address this,\nspecialized pedestrian behavior algorithms are needed. Current research focuses\non realistic trajectories using social force models and reinforcement learning\nbased models. However, we propose a reinforcement learning algorithm that\nspecifically targets collisions and better uncovers unique failure modes of\nautomated vehicle controllers. Our algorithm is efficient and generates more\nsevere collisions, allowing for the identification and correction of weaknesses\nin autonomous driving algorithms in complex and varied scenarios.", + "authors": "Dianwei Chen, Ekim Yurtsever, Keith Redmill, Umit Ozguner", + "published": "2023-06-13", + "updated": "2023-06-13", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2312.15385v1", + "title": "Discrete-Time Mean-Variance Strategy Based on Reinforcement Learning", + "abstract": "This paper studies a discrete-time mean-variance model based on reinforcement\nlearning. Compared with its continuous-time counterpart in \\cite{zhou2020mv},\nthe discrete-time model makes more general assumptions about the asset's return\ndistribution. 
Using entropy to measure the cost of exploration, we derive the\noptimal investment strategy, whose density function is also Gaussian type.\nAdditionally, we design the corresponding reinforcement learning algorithm.\nBoth simulation experiments and empirical analysis indicate that our\ndiscrete-time model exhibits better applicability when analyzing real-world\ndata than the continuous-time model.", + "authors": "Xiangyu Cui, Xun Li, Yun Shi, Si Zhao", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "q-fin.MF", + "cats": [ + "q-fin.MF", + "cs.LG", + "q-fin.PM" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1810.01112v1", + "title": "The Dreaming Variational Autoencoder for Reinforcement Learning Environments", + "abstract": "Reinforcement learning has shown great potential in generalizing over raw\nsensory data using only a single neural network for value optimization. There\nare several challenges in the current state-of-the-art reinforcement learning\nalgorithms that prevent them from converging towards the global optima. It is\nlikely that the solution to these problems lies in short- and long-term\nplanning, exploration and memory management for reinforcement learning\nalgorithms. Games are often used to benchmark reinforcement learning algorithms\nas they provide a flexible, reproducible, and easy to control environment.\nRegardless, few games feature a state-space where results in exploration,\nmemory, and planning are easily perceived. This paper presents The Dreaming\nVariational Autoencoder (DVAE), a neural network based generative modeling\narchitecture for exploration in environments with sparse feedback. We further\npresent Deep Maze, a novel and flexible maze engine that challenges DVAE in\npartial and fully-observable state-spaces, long-horizon tasks, and\ndeterministic and stochastic problems. We show initial findings and encourage\nfurther work in reinforcement learning driven by generative exploration.", + "authors": "Per-Arne Andersen, Morten Goodwin, Ole-Christoffer Granmo", + "published": "2018-10-02", + "updated": "2018-10-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1901.07905v2", + "title": "Reinforcement Learning Ship Autopilot: Sample efficient and Model Predictive Control-based Approach", + "abstract": "In this research we focus on developing a reinforcement learning system for a\nchallenging task: autonomous control of a real-sized boat, with difficulties\narising from large uncertainties in the challenging ocean environment and the\nextremely high cost of exploring and sampling with a real boat. To this end, we\nexplore a novel Gaussian processes (GP) based reinforcement learning approach\nthat combines sample-efficient model-based reinforcement learning and model\npredictive control (MPC). Our approach, sample-efficient probabilistic model\npredictive control (SPMPC), iteratively learns a Gaussian process dynamics\nmodel and uses it to efficiently update control signals within the MPC closed\ncontrol loop. A system using SPMPC is built to efficiently learn an autopilot\ntask. 
After investigating its performance in a simulation modeled upon real\nboat driving data, the proposed system successfully learns to drive a\nreal-sized boat equipped with a single engine and sensors measuring GPS, speed,\ndirection, and wind in an autopilot task without human demonstration.", + "authors": "Yunduan Cui, Shigeki Osaki, Takamitsu Matsubara", + "published": "2019-01-23", + "updated": "2019-07-23", + "primary_cat": "cs.SY", + "cats": [ + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.02900v2", + "title": "Meta Federated Reinforcement Learning for Distributed Resource Allocation", + "abstract": "In cellular networks, resource allocation is usually performed in a\ncentralized way, which brings huge computation complexity to the base station\n(BS) and high transmission overhead. This paper explores a distributed resource\nallocation method that aims to maximize energy efficiency (EE) while ensuring\nthe quality of service (QoS) for users. Specifically, in order to address\nwireless channel conditions, we propose a robust meta federated reinforcement\nlearning (\\textit{MFRL}) framework that allows local users to optimize transmit\npower and assign channels using locally trained neural network models, so as to\noffload computational burden from the cloud server to the local users, reducing\ntransmission overhead associated with local channel state information. The BS\nperforms the meta learning procedure to initialize a general global model,\nenabling rapid adaptation to different environments with improved EE\nperformance. The federated learning technique, based on decentralized\nreinforcement learning, promotes collaboration and mutual benefits among users.\nAnalysis and numerical results demonstrate that the proposed \\textit{MFRL}\nframework accelerates the reinforcement learning process, decreases\ntransmission overhead, and offloads computation, while outperforming the\nconventional decentralized reinforcement learning algorithm in terms of\nconvergence speed and EE performance across various scenarios.", + "authors": "Zelin Ji, Zhijin Qin, Xiaoming Tao", + "published": "2023-07-06", + "updated": "2023-07-09", + "primary_cat": "eess.SP", + "cats": [ + "eess.SP", + "cs.SY", + "eess.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1912.03918v1", + "title": "Transformer Based Reinforcement Learning For Games", + "abstract": "Recent times have witnessed sharp improvements in reinforcement learning\ntasks using deep reinforcement learning techniques like Deep Q Networks, Policy\nGradients, Actor Critic methods which are based on deep learning based models\nand back-propagation of gradients to train such models. An active area of\nresearch in reinforcement learning is about training agents to play complex\nvideo games, which so far has been something accomplished only by human\nintelligence. Some state of the art performances in video game playing using\ndeep reinforcement learning are obtained by processing the sequence of frames\nfrom video games, passing them through a convolutional network to obtain\nfeatures and then using recurrent neural networks to figure out the action\nleading to optimal rewards. The recurrent neural network will learn to extract\nthe meaningful signal out of the sequence of such features. 
In this work, we\npropose a method utilizing a transformer network, which has recently replaced\nRNNs in Natural Language Processing (NLP), and perform experiments to compare\nwith existing methods.", + "authors": "Uddeshya Upadhyay, Nikunj Shah, Sucheta Ravikanti, Mayanka Medhe", + "published": "2019-12-09", + "updated": "2019-12-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NE" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1910.11914v3", + "title": "On the convergence of projective-simulation-based reinforcement learning in Markov decision processes", + "abstract": "In recent years, the interest in leveraging quantum effects for enhancing\nmachine learning tasks has significantly increased. Many algorithms speeding up\nsupervised and unsupervised learning were established. The first framework in\nwhich ways to exploit quantum resources specifically for the broader context of\nreinforcement learning were found is projective simulation. Projective\nsimulation presents an agent-based reinforcement learning approach designed in\na manner which may support quantum walk-based speed-ups. Although classical\nvariants of projective simulation have been benchmarked against common\nreinforcement learning algorithms, very few formal theoretical analyses have\nbeen provided for its performance in standard learning scenarios. In this\npaper, we provide a detailed formal discussion of the properties of this model.\nSpecifically, we prove that one version of the projective simulation model,\nunderstood as a reinforcement learning approach, converges to optimal behavior\nin a large class of Markov decision processes. This proof shows that a\nphysically-inspired approach to reinforcement learning can be guaranteed to\nconverge.", + "authors": "Walter L. Boyajian, Jens Clausen, Lea M. Trenkwalder, Vedran Dunjko, Hans J. Briegel", + "published": "2019-10-25", + "updated": "2020-11-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "quant-ph", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.09450v1", + "title": "Adversarial Imitation Learning via Random Search", + "abstract": "Developing agents that can perform challenging complex tasks is the goal of\nreinforcement learning. Model-free reinforcement learning has been\nconsidered a feasible solution. However, the state of the art research has\nbeen to develop increasingly complicated techniques. This increasing complexity\nmakes the reconstruction difficult. Furthermore, the problem of reward\ndependency still exists. As a result, research on imitation learning, which\nlearns policy from a demonstration of experts, has begun to attract attention.\nImitation learning directly learns policy based on data on the behavior of the\nexperts without the explicit reward signal provided by the environment.\nHowever, imitation learning tries to optimize policies based on deep\nreinforcement learning such as trust region policy optimization. As a result,\ndeep reinforcement learning based imitation learning also poses a crisis of\nreproducibility. The issue of complex model-free models has received\nconsiderable critical attention. A derivative-free optimization based\nreinforcement learning and the simplification on policies obtain competitive\nperformance on the dynamic complex tasks. The simplified policies and\nderivative free methods make the algorithm simple.
The reconfiguration of\nresearch demo becomes easy. In this paper, we propose an imitation learning\nmethod that takes advantage of the derivative-free optimization with simple\nlinear policies. The proposed method performs simple random search in the\nparameter space of policies and shows computational efficiency. Experiments in\nthis paper show that the proposed model, without a direct reward signal from\nthe environment, obtains competitive performance on the MuJoCo locomotion\ntasks.", + "authors": "MyungJae Shin, Joongheon Kim", + "published": "2020-08-21", + "updated": "2020-08-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1901.08162v1", + "title": "Causal Reasoning from Meta-reinforcement Learning", + "abstract": "Discovering and exploiting the causal structure in the environment is a\ncrucial challenge for intelligent agents. Here we explore whether causal\nreasoning can emerge via meta-reinforcement learning. We train a recurrent\nnetwork with model-free reinforcement learning to solve a range of problems\nthat each contain causal structure. We find that the trained agent can perform\ncausal reasoning in novel situations in order to obtain rewards. The agent can\nselect informative interventions, draw causal inferences from observational\ndata, and make counterfactual predictions. Although established formal causal\nreasoning algorithms also exist, in this paper we show that such reasoning can\narise from model-free reinforcement learning, and suggest that causal reasoning\nin complex settings may benefit from the more end-to-end learning-based\napproaches presented here. This work also offers new strategies for structured\nexploration in reinforcement learning, by providing agents with the ability to\nperform -- and interpret -- experiments.", + "authors": "Ishita Dasgupta, Jane Wang, Silvia Chiappa, Jovana Mitrovic, Pedro Ortega, David Raposo, Edward Hughes, Peter Battaglia, Matthew Botvinick, Zeb Kurth-Nelson", + "published": "2019-01-23", + "updated": "2019-01-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1901.02219v1", + "title": "Uncertainty-Based Out-of-Distribution Detection in Deep Reinforcement Learning", + "abstract": "We consider the problem of detecting out-of-distribution (OOD) samples in\ndeep reinforcement learning. In a value based reinforcement learning setting,\nwe propose to use uncertainty estimation techniques directly on the agent's\nvalue estimating neural network to detect OOD samples. The focus of our work\nlies in analyzing the suitability of approximate Bayesian inference methods and\nrelated ensembling techniques that generate uncertainty estimates. Although\nprior work has shown that dropout-based variational inference techniques and\nbootstrap-based approaches can be used to model epistemic uncertainty, the\nsuitability for detecting OOD samples in deep reinforcement learning remains an\nopen question. Our results show that uncertainty estimation can be used to\ndifferentiate in- from out-of-distribution samples. 
Over the complete training\nprocess of the reinforcement learning agents, bootstrap-based approaches tend\nto produce more reliable epistemic uncertainty estimates, when compared to\ndropout-based approaches.", + "authors": "Andreas Sedlmeier, Thomas Gabor, Thomy Phan, Lenz Belzner, Claudia Linnhoff-Popien", + "published": "2019-01-08", + "updated": "2019-01-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1507.06923v1", + "title": "A Reinforcement Learning Approach to Online Learning of Decision Trees", + "abstract": "Online decision tree learning algorithms typically examine all features of a\nnew data point to update model parameters. We propose a novel alternative,\nReinforcement Learning- based Decision Trees (RLDT), that uses Reinforcement\nLearning (RL) to actively examine a minimal number of features of a data point\nto classify it with high accuracy. Furthermore, RLDT optimizes a long term\nreturn, providing a better alternative to the traditional myopic greedy\napproach to growing decision trees. We demonstrate that this approach performs\nas well as batch learning algorithms and other online decision tree learning\nalgorithms, while making significantly fewer queries about the features of the\ndata points. We also show that RLDT can effectively handle concept drift.", + "authors": "Abhinav Garlapati, Aditi Raghunathan, Vaishnavh Nagarajan, Balaraman Ravindran", + "published": "2015-07-24", + "updated": "2015-07-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2010.04816v1", + "title": "Characterizing Policy Divergence for Personalized Meta-Reinforcement Learning", + "abstract": "Despite ample motivation from costly exploration and limited trajectory data,\nrapidly adapting to new environments with few-shot reinforcement learning (RL)\ncan remain a challenging task, especially with respect to personalized\nsettings. Here, we consider the problem of recommending optimal policies to a\nset of multiple entities each with potentially different characteristics, such\nthat individual entities may parameterize distinct environments with unique\ntransition dynamics. Inspired by existing literature in meta-learning, we\nextend previous work by focusing on the notion that certain environments are\nmore similar to each other than others in personalized settings, and propose a\nmodel-free meta-learning algorithm that prioritizes past experiences by\nrelevance during gradient-based adaptation. Our algorithm involves\ncharacterizing past policy divergence through methods in inverse reinforcement\nlearning, and we illustrate how such metrics are able to effectively\ndistinguish past policy parameters by the environment they were deployed in,\nleading to more effective fast adaptation during test time. 
To study\npersonalization more effectively we introduce a navigation testbed to\nspecifically incorporate environment diversity across training episodes, and\ndemonstrate that our approach outperforms meta-learning alternatives with\nrespect to few-shot reinforcement learning in personalized settings.", + "authors": "Michael Zhang", + "published": "2020-10-09", + "updated": "2020-10-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.12516v2", + "title": "Prioritized Experience-based Reinforcement Learning with Human Guidance for Autonomous Driving", + "abstract": "Reinforcement learning (RL) requires skillful definition and remarkable\ncomputational efforts to solve optimization and control problems, which could\nimpair its prospect. Introducing human guidance into reinforcement learning is\na promising way to improve learning performance. In this paper, a comprehensive\nhuman guidance-based reinforcement learning framework is established. A novel\nprioritized experience replay mechanism that adapts to human guidance in the\nreinforcement learning process is proposed to boost the efficiency and\nperformance of the reinforcement learning algorithm. To relieve the heavy\nworkload on human participants, a behavior model is established based on an\nincremental online learning method to mimic human actions. We design two\nchallenging autonomous driving tasks for evaluating the proposed algorithm.\nExperiments are conducted to access the training and testing performance and\nlearning mechanism of the proposed algorithm. Comparative results against the\nstate-of-the-art methods suggest the advantages of our algorithm in terms of\nlearning efficiency, performance, and robustness.", + "authors": "Jingda Wu, Zhiyu Huang, Wenhui Huang, Chen Lv", + "published": "2021-09-26", + "updated": "2022-11-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1708.07738v1", + "title": "A Function Approximation Method for Model-based High-Dimensional Inverse Reinforcement Learning", + "abstract": "This works handles the inverse reinforcement learning problem in\nhigh-dimensional state spaces, which relies on an efficient solution of\nmodel-based high-dimensional reinforcement learning problems. To solve the\ncomputationally expensive reinforcement learning problems, we propose a\nfunction approximation method to ensure that the Bellman Optimality Equation\nalways holds, and then estimate a function based on the observed human actions\nfor inverse reinforcement learning problems. The time complexity of the\nproposed method is linearly proportional to the cardinality of the action set,\nthus it can handle high-dimensional even continuous state spaces efficiently.\nWe test the proposed method in a simulated environment to show its accuracy,\nand three clinical tasks to show how it can be used to evaluate a doctor's\nproficiency.", + "authors": "Kun Li, Joel W. 
Burdick", + "published": "2017-08-23", + "updated": "2017-08-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2212.08232v1", + "title": "Offline Robot Reinforcement Learning with Uncertainty-Guided Human Expert Sampling", + "abstract": "Recent advances in batch (offline) reinforcement learning have shown\npromising results in learning from available offline data and proved offline\nreinforcement learning to be an essential toolkit in learning control policies\nin a model-free setting. An offline reinforcement learning algorithm applied to\na dataset collected by a suboptimal non-learning-based algorithm can result in\na policy that outperforms the behavior agent used to collect the data. Such a\nscenario is frequent in robotics, where existing automation is collecting\noperational data. Although offline learning techniques can learn from data\ngenerated by a sub-optimal behavior agent, there is still an opportunity to\nimprove the sample complexity of existing offline reinforcement learning\nalgorithms by strategically introducing human demonstration data into the\ntraining process. To this end, we propose a novel approach that uses\nuncertainty estimation to trigger the injection of human demonstration data and\nguide policy training towards optimal behavior while reducing overall sample\ncomplexity. Our experiments show that this approach is more sample efficient\nwhen compared to a naive way of combining expert data with data collected from\na sub-optimal agent. We augmented an existing offline reinforcement learning\nalgorithm Conservative Q-Learning with our approach and performed experiments\non data collected from MuJoCo and OffWorld Gym learning environments.", + "authors": "Ashish Kumar, Ilya Kuzovkin", + "published": "2022-12-16", + "updated": "2022-12-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2204.01409v1", + "title": "Safe Controller for Output Feedback Linear Systems using Model-Based Reinforcement Learning", + "abstract": "The objective of this research is to enable safety-critical systems to\nsimultaneously learn and execute optimal control policies in a safe manner to\nachieve complex autonomy. Learning optimal policies via trial and error, i.e.,\ntraditional reinforcement learning, is difficult to implement in\nsafety-critical systems, particularly when task restarts are unavailable. Safe\nmodel-based reinforcement learning techniques based on a barrier transformation\nhave recently been developed to address this problem. However, these methods\nrely on full state feedback, limiting their usability in a real-world\nenvironment. In this work, an output-feedback safe model-based reinforcement\nlearning technique based on a novel barrier-aware dynamic state estimator has\nbeen designed to address this issue. The developed approach facilitates\nsimultaneous learning and execution of safe control policies for\nsafety-critical linear systems. Simulation results indicate that barrier\ntransformation is an effective approach to achieve online reinforcement\nlearning in safety-critical systems using output feedback.", + "authors": "S M Nahid Mahmud, Moad Abudia, Scott A Nivison, Zachary I. 
Bell, Rushikesh Kamalapurkar", + "published": "2022-04-04", + "updated": "2022-04-04", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.02104v2", + "title": "Model-Based Episodic Memory Induces Dynamic Hybrid Controls", + "abstract": "Episodic control enables sample efficiency in reinforcement learning by\nrecalling past experiences from an episodic memory. We propose a new\nmodel-based episodic memory of trajectories addressing current limitations of\nepisodic control. Our memory estimates trajectory values, guiding the agent\ntowards good policies. Built upon the memory, we construct a complementary\nlearning model via a dynamic hybrid control unifying model-based, episodic and\nhabitual learning into a single architecture. Experiments demonstrate that our\nmodel allows significantly faster and better learning than other strong\nreinforcement learning agents across a variety of environments including\nstochastic and non-Markovian settings.", + "authors": "Hung Le, Thommen Karimpanal George, Majid Abdolshah, Truyen Tran, Svetha Venkatesh", + "published": "2021-11-03", + "updated": "2021-11-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2011.10714v1", + "title": "Double Meta-Learning for Data Efficient Policy Optimization in Non-Stationary Environments", + "abstract": "We are interested in learning models of non-stationary environments, which\ncan be framed as a multi-task learning problem. Model-free reinforcement\nlearning algorithms can achieve good asymptotic performance in multi-task\nlearning at a cost of extensive sampling, due to their approach, which requires\nlearning from scratch. While model-based approaches are among the most data\nefficient learning algorithms, they still struggle with complex tasks and model\nuncertainties. Meta-reinforcement learning addresses the efficiency and\ngeneralization challenges on multi task learning by quickly leveraging the\nmeta-prior policy for a new task. In this paper, we propose a\nmeta-reinforcement learning approach to learn the dynamic model of a\nnon-stationary environment to be used for meta-policy optimization later. Due\nto the sample efficiency of model-based learning methods, we are able to\nsimultaneously train both the meta-model of the non-stationary environment and\nthe meta-policy until dynamic model convergence. Then, the meta-learned dynamic\nmodel of the environment will generate simulated data for meta-policy\noptimization. 
Our experiment demonstrates that our proposed method can\nmeta-learn the policy in a non-stationary environment with the data efficiency\nof model-based learning approaches while achieving the high asymptotic\nperformance of model-free meta-reinforcement learning.", + "authors": "Elahe Aghapour, Nora Ayanian", + "published": "2020-11-21", + "updated": "2020-11-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2010.11738v1", + "title": "Optimising Stochastic Routing for Taxi Fleets with Model Enhanced Reinforcement Learning", + "abstract": "The future of mobility-as-a-Service (Maas)should embrace an integrated system\nof ride-hailing, street-hailing and ride-sharing with optimised intelligent\nvehicle routing in response to a real-time, stochastic demand pattern. We aim\nto optimise routing policies for a large fleet of vehicles for street-hailing\nservices, given a stochastic demand pattern in small to medium-sized road\nnetworks. A model-based dispatch algorithm, a high performance model-free\nreinforcement learning based algorithm and a novel hybrid algorithm combining\nthe benefits of both the top-down approach and the model-free reinforcement\nlearning have been proposed to route the \\emph{vacant} vehicles. We design our\nreinforcement learning based routing algorithm using proximal policy\noptimisation and combined intrinsic and extrinsic rewards to strike a balance\nbetween exploration and exploitation. Using a large-scale agent-based\nmicroscopic simulation platform to evaluate our proposed algorithms, our\nmodel-free reinforcement learning and hybrid algorithm show excellent\nperformance on both artificial road network and community-based Singapore road\nnetwork with empirical demands, and our hybrid algorithm can significantly\naccelerate the model-free learner in the process of learning.", + "authors": "Shen Ren, Qianxiao Li, Liye Zhang, Zheng Qin, Bo Yang", + "published": "2020-10-22", + "updated": "2020-10-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "nlin.AO", + "physics.soc-ph" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.02429v1", + "title": "AD4RL: Autonomous Driving Benchmarks for Offline Reinforcement Learning with Value-based Dataset", + "abstract": "Offline reinforcement learning has emerged as a promising technology by\nenhancing its practicality through the use of pre-collected large datasets.\nDespite its practical benefits, most algorithm development research in offline\nreinforcement learning still relies on game tasks with synthetic datasets. To\naddress such limitations, this paper provides autonomous driving datasets and\nbenchmarks for offline reinforcement learning research. We provide 19 datasets,\nincluding real-world human driver's datasets, and seven popular offline\nreinforcement learning algorithms in three realistic driving scenarios. We also\nprovide a unified decision-making process model that can operate effectively\nacross different scenarios, serving as a reference framework in algorithm\ndesign. Our research lays the groundwork for further collaborations in the\ncommunity to explore practical aspects of existing reinforcement learning\nmethods. 
Dataset and codes can be found in https://sites.google.com/view/ad4rl.", + "authors": "Dongsu Lee, Chanin Eom, Minhae Kwon", + "published": "2024-04-03", + "updated": "2024-04-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2304.00006v1", + "title": "Bi-directional personalization reinforcement learning-based architecture with active learning using a multi-model data service for the travel nursing industry", + "abstract": "The challenges of using inadequate online recruitment systems can be\naddressed with machine learning and software engineering techniques.\nBi-directional personalization reinforcement learning-based architecture with\nactive learning can get recruiters to recommend qualified applicants and also\nenable applicants to receive personalized job recommendations. This paper\nfocuses on how machine learning techniques can enhance the recruitment process\nin the travel nursing industry by helping speed up data acquisition using a\nmulti-model data service and then providing personalized recommendations using\nbi-directional reinforcement learning with active learning. This need was\nespecially evident when trying to respond to the overwhelming needs of\nhealthcare facilities during the COVID-19 pandemic. The need for traveling\nnurses and other healthcare professionals was more evident during the lockdown\nperiod. A data service was architected for job feed processing using an\norchestration of natural language processing (NLP) models that synthesize\njob-related data into a database efficiently and accurately. The multi-model\ndata service provided the data necessary to develop a bi-directional\npersonalization system using reinforcement learning with active learning that\ncould recommend travel nurses and healthcare professionals to recruiters and\nprovide job recommendations to applicants using an internally developed smart\nmatch score as a basis. The bi-directional personalization reinforcement\nlearning-based architecture with active learning combines two personalization\nsystems - one that runs forward to recommend qualified candidates for jobs and\nanother that runs backward and recommends jobs for applicants.", + "authors": "Ezana N. Beyenne", + "published": "2023-03-14", + "updated": "2023-03-14", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.LG", + "I.2" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.07915v2", + "title": "Barrier Certified Safety Learning Control: When Sum-of-Square Programming Meets Reinforcement Learning", + "abstract": "Safety guarantee is essential in many engineering implementations.\nReinforcement learning provides a useful way to strengthen safety. However,\nreinforcement learning algorithms cannot completely guarantee safety over\nrealistic operations. To address this issue, this work adopts control barrier\nfunctions over reinforcement learning, and proposes a compensated algorithm to\ncompletely maintain safety. Specifically, a sum-of-squares programming has been\nexploited to search for the optimal controller, and tune the learning\nhyperparameters simultaneously. Thus, the control actions are pledged to be\nalways within the safe region. The effectiveness of proposed method is\ndemonstrated via an inverted pendulum model. 
Compared to quadratic programming\nbased reinforcement learning methods, our sum-of-squares programming based\nreinforcement learning has shown its superiority.", + "authors": "Hejun Huang, Zhenglong Li, Dongkun Han", + "published": "2022-06-16", + "updated": "2022-06-29", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.LG", + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2010.13529v2", + "title": "Lyapunov-Based Reinforcement Learning State Estimator", + "abstract": "In this paper, we consider the state estimation problem for nonlinear\nstochastic discrete-time systems. We combine Lyapunov's method in control\ntheory and deep reinforcement learning to design the state estimator. We\ntheoretically prove the convergence of the bounded estimate error solely using\nthe data simulated from the model. An actor-critic reinforcement learning\nalgorithm is proposed to learn the state estimator approximated by a deep\nneural network. The convergence of the algorithm is analysed. The proposed\nLyapunov-based reinforcement learning state estimator is compared with a number\nof existing nonlinear filtering methods through Monte Carlo simulations,\nshowing its advantage in terms of estimate convergence even under some system\nuncertainties such as covariance shift in system noise and randomly missing\nmeasurements. To the best of our knowledge, this is the first reinforcement\nlearning based nonlinear state estimator with bounded estimate error\nperformance guarantee.", + "authors": "Liang Hu, Chengwei Wu, Wei Pan", + "published": "2020-10-26", + "updated": "2021-01-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO", + "cs.SY", + "eess.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.01474v1", + "title": "Offline Reinforcement Learning with Causal Structured World Models", + "abstract": "Model-based methods have recently shown promising for offline reinforcement\nlearning (RL), aiming to learn good policies from historical data without\ninteracting with the environment. Previous model-based offline RL methods learn\nfully connected nets as world-models that map the states and actions to the\nnext-step states. However, it is sensible that a world-model should adhere to\nthe underlying causal effect such that it will support learning an effective\npolicy generalizing well in unseen states. In this paper, We first provide\ntheoretical results that causal world-models can outperform plain world-models\nfor offline RL by incorporating the causal structure into the generalization\nerror bound. We then propose a practical algorithm, oFfline mOdel-based\nreinforcement learning with CaUsal Structure (FOCUS), to illustrate the\nfeasibility of learning and leveraging causal structure in offline RL.\nExperimental results on two benchmarks show that FOCUS reconstructs the\nunderlying causal structure accurately and robustly. 
Consequently, it performs\nbetter than the plain model-based offline RL algorithms and other causal\nmodel-based RL algorithms.", + "authors": "Zheng-Mao Zhu, Xiong-Hui Chen, Hong-Long Tian, Kun Zhang, Yang Yu", + "published": "2022-06-03", + "updated": "2022-06-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1906.08312v1", + "title": "Calibrated Model-Based Deep Reinforcement Learning", + "abstract": "Estimates of predictive uncertainty are important for accurate model-based\nplanning and reinforcement learning. However, predictive\nuncertainties---especially ones derived from modern deep learning systems---can\nbe inaccurate and impose a bottleneck on performance. This paper explores which\nuncertainties are needed for model-based reinforcement learning and argues that\ngood uncertainties must be calibrated, i.e. their probabilities should match\nempirical frequencies of predicted events. We describe a simple way to augment\nany model-based reinforcement learning agent with a calibrated model and show\nthat doing so consistently improves planning, sample complexity, and\nexploration. On the \\textsc{HalfCheetah} MuJoCo task, our system achieves\nstate-of-the-art performance using 50\\% fewer samples than the current leading\napproach. Our findings suggest that calibration can improve the performance of\nmodel-based reinforcement learning with minimal computational and\nimplementation overhead.", + "authors": "Ali Malik, Volodymyr Kuleshov, Jiaming Song, Danny Nemer, Harlan Seymour, Stefano Ermon", + "published": "2019-06-19", + "updated": "2019-06-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.12095v1", + "title": "Document-editing Assistants and Model-based Reinforcement Learning as a Path to Conversational AI", + "abstract": "Intelligent assistants that follow commands or answer simple questions, such\nas Siri and Google search, are among the most economically important\napplications of AI. Future conversational AI assistants promise even greater\ncapabilities and a better user experience through a deeper understanding of the\ndomain, the user, or the user's purposes. But what domain and what methods are\nbest suited to researching and realizing this promise? In this article we argue\nfor the domain of voice document editing and for the methods of model-based\nreinforcement learning. The primary advantages of voice document editing are\nthat the domain is tightly scoped and that it provides something for the\nconversation to be about (the document) that is delimited and fully accessible\nto the intelligent assistant. The advantages of reinforcement learning in\ngeneral are that its methods are designed to learn from interaction without\nexplicit instruction and that it formalizes the purposes of the assistant.\nModel-based reinforcement learning is needed in order to genuinely understand\nthe domain of discourse and thereby work efficiently with the user to achieve\ntheir goals. Together, voice document editing and model-based reinforcement\nlearning comprise a promising research direction for achieving conversational\nAI.", + "authors": "Katya Kudashkina, Patrick M. Pilarski, Richard S. 
Sutton", + "published": "2020-08-27", + "updated": "2020-08-27", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.HC", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.16348v2", + "title": "Rating-based Reinforcement Learning", + "abstract": "This paper develops a novel rating-based reinforcement learning approach that\nuses human ratings to obtain human guidance in reinforcement learning.\nDifferent from the existing preference-based and ranking-based reinforcement\nlearning paradigms, based on human relative preferences over sample pairs, the\nproposed rating-based reinforcement learning approach is based on human\nevaluation of individual trajectories without relative comparisons between\nsample pairs. The rating-based reinforcement learning approach builds on a new\nprediction model for human ratings and a novel multi-class loss function. We\nconduct several experimental studies based on synthetic ratings and real human\nratings to evaluate the effectiveness and benefits of the new rating-based\nreinforcement learning approach.", + "authors": "Devin White, Mingkang Wu, Ellen Novoseller, Vernon J. Lawhern, Nicholas Waytowich, Yongcan Cao", + "published": "2023-07-30", + "updated": "2024-01-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2308.11336v1", + "title": "On the Opportunities and Challenges of Offline Reinforcement Learning for Recommender Systems", + "abstract": "Reinforcement learning serves as a potent tool for modeling dynamic user\ninterests within recommender systems, garnering increasing research attention\nof late. However, a significant drawback persists: its poor data efficiency,\nstemming from its interactive nature. The training of reinforcement\nlearning-based recommender systems demands expensive online interactions to\namass adequate trajectories, essential for agents to learn user preferences.\nThis inefficiency renders reinforcement learning-based recommender systems a\nformidable undertaking, necessitating the exploration of potential solutions.\nRecent strides in offline reinforcement learning present a new perspective.\nOffline reinforcement learning empowers agents to glean insights from offline\ndatasets and deploy learned policies in online settings. Given that recommender\nsystems possess extensive offline datasets, the framework of offline\nreinforcement learning aligns seamlessly. Despite being a burgeoning field,\nworks centered on recommender systems utilizing offline reinforcement learning\nremain limited. This survey aims to introduce and delve into offline\nreinforcement learning within recommender systems, offering an inclusive review\nof existing literature in this domain. Furthermore, we strive to underscore\nprevalent challenges, opportunities, and future pathways, poised to propel\nresearch in this evolving field.", + "authors": "Xiaocong Chen, Siyu Wang, Julian McAuley, Dietmar Jannach, Lina Yao", + "published": "2023-08-22", + "updated": "2023-08-22", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1406.1853v2", + "title": "Model-based Reinforcement Learning and the Eluder Dimension", + "abstract": "We consider the problem of learning to optimize an unknown Markov decision\nprocess (MDP). 
We show that, if the MDP can be parameterized within some known\nfunction class, we can obtain regret bounds that scale with the dimensionality,\nrather than cardinality, of the system. We characterize this dependence\nexplicitly as $\\tilde{O}(\\sqrt{d_K d_E T})$ where $T$ is time elapsed, $d_K$ is\nthe Kolmogorov dimension and $d_E$ is the \\emph{eluder dimension}. These\nrepresent the first unified regret bounds for model-based reinforcement\nlearning and provide state of the art guarantees in several important settings.\nMoreover, we present a simple and computationally efficient algorithm\n\\emph{posterior sampling for reinforcement learning} (PSRL) that satisfies\nthese bounds.", + "authors": "Ian Osband, Benjamin Van Roy", + "published": "2014-06-07", + "updated": "2014-10-31", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.03188v3", + "title": "Optimizing Quantum Variational Circuits with Deep Reinforcement Learning", + "abstract": "Quantum Machine Learning (QML) is considered to be one of the most promising\napplications of near term quantum devices. However, the optimization of quantum\nmachine learning models presents numerous challenges arising from the\nimperfections of hardware and the fundamental obstacles in navigating an\nexponentially scaling Hilbert space. In this work, we evaluate the potential of\ncontemporary methods in deep reinforcement learning to augment gradient based\noptimization routines in quantum variational circuits. We find that\nreinforcement learning augmented optimizers consistently outperform gradient\ndescent in noisy environments. All code and pretrained weights are available to\nreplicate the results or deploy the models at:\nhttps://github.com/lockwo/rl_qvc_opt.", + "authors": "Owen Lockwood", + "published": "2021-09-07", + "updated": "2022-05-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "quant-ph" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2102.05612v1", + "title": "Personalization for Web-based Services using Offline Reinforcement Learning", + "abstract": "Large-scale Web-based services present opportunities for improving UI\npolicies based on observed user interactions. We address challenges of learning\nsuch policies through model-free offline Reinforcement Learning (RL) with\noff-policy training. Deployed in a production system for user authentication in\na major social network, it significantly improves long-term objectives. We\narticulate practical challenges, compare several ML techniques, provide\ninsights on training and evaluation of RL models, and discuss generalizations.", + "authors": "Pavlos Athanasios Apostolopoulos, Zehui Wang, Hanson Wang, Chad Zhou, Kittipat Virochsiri, Norm Zhou, Igor L. Markov", + "published": "2021-02-10", + "updated": "2021-02-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.HC", + "cs.SE" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.14766v1", + "title": "Reinforcement Learning from Statistical Feedback: the Journey from AB Testing to ANT Testing", + "abstract": "Reinforcement Learning from Human Feedback (RLHF) has played a crucial role\nin the success of large models such as ChatGPT. RLHF is a reinforcement\nlearning framework which combines human feedback to improve learning\neffectiveness and performance. 
However, obtaining preferences feedback manually\nis quite expensive in commercial applications. Some statistical commercial\nindicators are usually more valuable and always ignored in RLHF. There exists a\ngap between commercial target and model training. In our research, we will\nattempt to fill this gap with statistical business feedback instead of human\nfeedback, using AB testing which is a well-established statistical method.\nReinforcement Learning from Statistical Feedback (RLSF) based on AB testing is\nproposed. Statistical inference methods are used to obtain preferences for\ntraining the reward network, which fine-tunes the pre-trained model in\nreinforcement learning framework, achieving greater business value.\nFurthermore, we extend AB testing with double selections at a single time-point\nto ANT testing with multiple selections at different feedback time points.\nMoreover, we design numerical experiences to validate the effectiveness of our\nalgorithm framework.", + "authors": "Feiyang Han, Yimin Wei, Zhaofeng Liu, Yanxing Qi", + "published": "2023-11-24", + "updated": "2023-11-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "math.ST", + "stat.ME", + "stat.TH" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.07260v1", + "title": "TIAGo RL: Simulated Reinforcement Learning Environments with Tactile Data for Mobile Robots", + "abstract": "Tactile information is important for robust performance in robotic tasks that\ninvolve physical interaction, such as object manipulation. However, with more\ndata included in the reasoning and control process, modeling behavior becomes\nincreasingly difficult. Deep Reinforcement Learning (DRL) produced promising\nresults for learning complex behavior in various domains, including\ntactile-based manipulation in robotics. In this work, we present our\nopen-source reinforcement learning environments for the TIAGo service robot.\nThey produce tactile sensor measurements that resemble those of a real\nsensorised gripper for TIAGo, encouraging research in transfer learning of DRL\npolicies. Lastly, we show preliminary training results of a learned force\ncontrol policy and compare it to a classical PI controller.", + "authors": "Luca Lach, Francesco Ferro, Robert Haschke", + "published": "2023-11-13", + "updated": "2023-11-13", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1811.00128v1", + "title": "Towards a Simple Approach to Multi-step Model-based Reinforcement Learning", + "abstract": "When environmental interaction is expensive, model-based reinforcement\nlearning offers a solution by planning ahead and avoiding costly mistakes.\nModel-based agents typically learn a single-step transition model. In this\npaper, we propose a multi-step model that predicts the outcome of an action\nsequence with variable length. We show that this model is easy to learn, and\nthat the model can make policy-conditional predictions. We report preliminary\nresults that show a clear advantage for the multi-step model compared to its\none-step counterpart.", + "authors": "Kavosh Asadi, Evan Cater, Dipendra Misra, Michael L. 
Littman", + "published": "2018-10-31", + "updated": "2018-10-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2308.14897v1", + "title": "Statistically Efficient Variance Reduction with Double Policy Estimation for Off-Policy Evaluation in Sequence-Modeled Reinforcement Learning", + "abstract": "Offline reinforcement learning aims to utilize datasets of previously\ngathered environment-action interaction records to learn a policy without\naccess to the real environment. Recent work has shown that offline\nreinforcement learning can be formulated as a sequence modeling problem and\nsolved via supervised learning with approaches such as decision transformer.\nWhile these sequence-based methods achieve competitive results over\nreturn-to-go methods, especially on tasks that require longer episodes or with\nscarce rewards, importance sampling is not considered to correct the policy\nbias when dealing with off-policy data, mainly due to the absence of behavior\npolicy and the use of deterministic evaluation policies. To this end, we\npropose DPE: an RL algorithm that blends offline sequence modeling and offline\nreinforcement learning with Double Policy Estimation (DPE) in a unified\nframework with statistically proven properties on variance reduction. We\nvalidate our method in multiple tasks of OpenAI Gym with D4RL benchmarks. Our\nmethod brings a performance improvements on selected methods which outperforms\nSOTA baselines in several tasks, demonstrating the advantages of enabling\ndouble policy estimation for sequence-modeled reinforcement learning.", + "authors": "Hanhan Zhou, Tian Lan, Vaneet Aggarwal", + "published": "2023-08-28", + "updated": "2023-08-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.DC" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1901.01977v1", + "title": "Accelerating Goal-Directed Reinforcement Learning by Model Characterization", + "abstract": "We propose a hybrid approach aimed at improving the sample efficiency in\ngoal-directed reinforcement learning. We do this via a two-step mechanism where\nfirstly, we approximate a model from Model-Free reinforcement learning. Then,\nwe leverage this approximate model along with a notion of reachability using\nMean First Passage Times to perform Model-Based reinforcement learning. Built\non such a novel observation, we design two new algorithms - Mean First Passage\nTime based Q-Learning (MFPT-Q) and Mean First Passage Time based DYNA\n(MFPT-DYNA), that have been fundamentally modified from the state-of-the-art\nreinforcement learning techniques. 
Preliminary results have shown that our\nhybrid approaches converge with much fewer iterations than their corresponding\nstate-of-the-art counterparts and therefore requiring much fewer samples and\nmuch fewer training trials to converge.", + "authors": "Shoubhik Debnath, Gaurav Sukhatme, Lantao Liu", + "published": "2019-01-04", + "updated": "2019-01-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2301.11520v3", + "title": "SNeRL: Semantic-aware Neural Radiance Fields for Reinforcement Learning", + "abstract": "As previous representations for reinforcement learning cannot effectively\nincorporate a human-intuitive understanding of the 3D environment, they usually\nsuffer from sub-optimal performances. In this paper, we present Semantic-aware\nNeural Radiance Fields for Reinforcement Learning (SNeRL), which jointly\noptimizes semantic-aware neural radiance fields (NeRF) with a convolutional\nencoder to learn 3D-aware neural implicit representation from multi-view\nimages. We introduce 3D semantic and distilled feature fields in parallel to\nthe RGB radiance fields in NeRF to learn semantic and object-centric\nrepresentation for reinforcement learning. SNeRL outperforms not only previous\npixel-based representations but also recent 3D-aware representations both in\nmodel-free and model-based reinforcement learning.", + "authors": "Dongseok Shim, Seungjae Lee, H. Jin Kim", + "published": "2023-01-27", + "updated": "2023-05-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + } + ], + [ + { + "url": "http://arxiv.org/abs/2403.14238v1", + "title": "Reinforcement Learning from Reflective Feedback (RLRF): Aligning and Improving LLMs via Fine-Grained Self-Reflection", + "abstract": "Despite the promise of RLHF in aligning LLMs with human preferences, it often\nleads to superficial alignment, prioritizing stylistic changes over improving\ndownstream performance of LLMs. Underspecified preferences could obscure\ndirections to align the models. Lacking exploration restricts identification of\ndesirable outputs to improve the models. To overcome these challenges, we\npropose a novel framework: Reinforcement Learning from Reflective Feedback\n(RLRF), which leverages fine-grained feedback based on detailed criteria to\nimprove the core capabilities of LLMs. RLRF employs a self-reflection mechanism\nto systematically explore and refine LLM responses, then fine-tuning the models\nvia a RL algorithm along with promising responses. Our experiments across\nJust-Eval, Factuality, and Mathematical Reasoning demonstrate the efficacy and\ntransformative potential of RLRF beyond superficial surface-level adjustment.", + "authors": "Kyungjae Lee, Dasol Hwang, Sunghyun Park, Youngsoo Jang, Moontae Lee", + "published": "2024-03-21", + "updated": "2024-03-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "Model AND Based AND Reinforcement AND Learning", + "gt": "5.1 RL from Preference-based Feedback Preference-based RLHF methods (Ouyang et al., 2022b; Glaese et al., 2022; Bai et al., 2022; Nakano et al., 2022), which learn reward models from preference-based human feedback and then finetune LLMs through reinforcement learning, have successfully achieved to better align human preferences. 
One of the notable approaches in RLHF is Direct Preference Optimization (DPO) (Rafailov et al., 2023), which directly optimizes the LLMs from the pairwise preference dataset without explicit training of reward models. Iterative training methods (Yuan et al., 2024; Gulcehre et al., 2023; Adolphs et al., 2023) have been proposed to further improve the performance of LLMs by iteratively leveraging offline RL algorithms including DPO. Recent work (Yuan et al., 2024; Guo et al., 2024) utilizes policy LM to reward its response (i.e., SelfRewarding). The self-rewarding approach is similar to our feedback model in that it predicts absolute ratings, but it is trained based on a human\u2019s overall preference rather than fine-grained aspects. 5.2 RL from Fine-Grained Feedback To further improve the capabilities of LLMs beyond preference alignment, Wu et al. (2023) and Chen et al. (2024) have leveraged fine-grained reward models in RL fine-tuning. However, they require a separate pairwise dataset and training for the fine-grained reward model, which are additionally required for each improvement in capabilities of LLMs. Recently, fine-grained evaluation methods (Ye et al., 2023b; Kim et al., 2023; Min et al., 2023a) have been developed to evaluate the capabilities of LLMs using LLM as an evaluator, and show high correlation with human evaluation. From the success of this LLM evaluation, we leverage fine-grained feedback to improve the capabilities of LLM in RL fine-tuning. 5.3 Improving via Self-Reflection Several works have demonstrated the self-reflection capabilities of LLMs to transform a candidate response into an improved one in many real-world applications, without requiring additional finetuning (Han et al., 2024; Huang et al., 2022; Shinn et al., 2023; Madaan et al., 2023; Yang et al., 2022). Recent works (Ye et al., 2023a; Li et al., 2023a) have utilized self-refined examples as training data, in finetuning LLMs. However, these previous methods predominantly provide coarse-grained feedback on output responses and do not explore diverse candidates for potentially improved responses. In contrast, our work focuses on detailed, multi-aspect feedback and introduces self-reflective search to explore superior response candidates.", + "pre_questions": [], + "main_content": "Introduction Reinforcement Learning from Human Feedback (RLHF) has emerged as a crucial framework for aligning large language models (LLMs) with human preferences. To facilitate preference alignment, existing approaches such as InstructGPT (Ouyang et al., 2022a), Sparrow (Glaese et al., 2022), Llama2 (Touvron et al., 2023) commonly train a reward model with preferential human feedback. This reward model assesses the overall quality of model outputs as a scalar value. Then training LLMs with the reward signals encourages the models to generate more favorable responses better aligned with human preferences. Despite recent successes in preference alignment, training LLMs through RLHF does not guarantee a significant improvement of LLM\u2019s capabilities, in terms of downstream performance in * Equally contributed to this work. NLP tasks. Previous works (Zhou et al., 2023; Lin et al., 2023) have raised skepticism regarding the efficacy of current alignment techniques in improving LLM\u2019s capabilities. Zhou et al. (2023) claim that such alignment tuning might be superficial learning, where the model primarily learns favorable styles or formats for interacting with users. Lin et al. 
(2023) also observe that most distribution shifts between base and post-alignment LLMs tend to be predominantly in stylistic tokens. However, enhancing the capabilities of LLMs is more critical than adjusting their interaction styles or formats to better match human preferences. To address the superficial nature of preference alignment, we first investigate why the current RLHF often leads surface-level alignment. We tackle factuality and mathematical reasoning because the stylistic adjustment rarely contributes to downstream performance. Observing preferencebased reward models is notably deficient in evaluating mathematical reasoning, we hypothesize that preference-based reward models may cause superficial alignment. As a solution, we leverage finegrained LLM feedback that incorporates both verbal response and numeric score adhering to detailed criteria. However, even if adopting RL fine-tuning with fine-grained feedback as a reward, improving LLM capabilities remains a significant challenge due to the combinatorial action space, the vast array of potential responses in NLP tasks (Ramamurthy et al., 2023; Yehudai et al., 2022; Zhuang et al., 2023). To this end, we introduce a novel framework: Reinforcement Learning from Reflective Feedback (RLRF), designed to effectively explore promising responses and improve LLM capabilities through fine-grained feedback. Self-reflection, which empowers LLMs to evaluate and refine their responses based on feedback against previous outputs (Madaan et al., 2023; Ganguli et al., 2023; Welleck et al., 2022; Pan et al., 2023; Chen et al., arXiv:2403.14238v1 [cs.CL] 21 Mar 2024 Figure 1: An overview of our proposed Reinforcement Learning from Reflective Feedback (RLRF). 2023), is the key idea that enables targeted exploration on promising responses. High-quality outputs that have been improved through selfreflection lead to advance LLM capabilities with RL fine-tuning. Our framework consists of the following two stages as illustrated in Figure 1. Initially, the FineGrained Self-Reflection stage exploits the selfreflection ability of LLMs along with a fine-grained feedback model to search refined responses with high-quality. Then the RL Fine-tuning stage applies a RL algorithm to fine-tune the LLM utilizing these refined responses and their associated scores. In the experiments, we assess our approach on LLM-based evaluation benchmarks including Just-Eval (Lin et al., 2023), Factscore (Min et al., 2023a), and GSM8k (Cobbe et al., 2021). We employ the Llama-2 13B model (Touvron et al., 2023) after fine-tuning on the customized opensource instruction data (See Table 1). Note that the RLRF framework is flexible and scalable. Users can iterate the Fine-Grained Self-Reflection stage multiple times to attain higher-quality responses. RL Fine-tuning stage is not limited to applying only preference-based approaches, while our experiments are based on Direct Preference Optimization (DPO) (Rafailov et al., 2023). 2 Preliminaries 2.1 Preference-based RLHF Preference-based RLHF aims to optimize the policy (i.e., LLM) that aligns with human preferences using the pre-collected pairwise preference dataset D = {(xi,yi +,yi \u2212)}N i=1, where xi is instruction, and yi + and yi \u2212indicate the chosen and rejected responses, respectively. Conventional RLHF methods (Ouyang et al., 2022a; Glaese et al., 2022) train a preference-based reward model (RM) on the pairwise preference dataset, then optimize the policy using the trained reward model. 
The reward model is trained by a binary ranking loss with respect to the pairwise dataset D as follows: LRM(\u03b8) = \u2212ED \u0002 log(\u03c3(r\u03b8(x,y+) \u2212r\u03b8(x,y\u2212))) \u0003 , (1) where \u03c3 is the logistic function, r\u03b8(x, y) is the scalar output of reward model for instruction x and response y with parameters \u03b8, and y+ and y\u2212 indicate the chosen and rejected responses, respectively. Then, the policy is optimized by the following KL-penalized objective that combines the learned reward and KL-divergence between the current policy and reference policy: max \u03d5 Ex\u223cD,y\u223c\u03c0\u03d5(y|x) \u0014 r\u03b8(x,y) \u2212\u03b2 log \u03c0\u03d5(y|x) \u03c0ref(y|x) \u0015 , (2) where \u03c0\u03d5 is the policy (i.e., LLM) with parameters \u03d5, \u03c0ref is the reference policy (e.g., initial policy), and \u03b2 is a coefficient that balances the trade-off between the learned reward and the KL penalty. This KL-penalized objective can mitigate overoptimization (Gao et al., 2022) of the learned reward model, and is commonly optimized by proximal policy optimization (PPO) (Schulman et al., 2017). One of the recent notable preference-based RLHF algorithms, direct preference optimization (DPO) (Rafailov et al., 2023), directly optimizes the policy from the pre-collected pairwise preference dataset D without explicit training of reward model. Rafailov et al. (2023) show that training of reward model (Eq. (1)) and policy optimization (Eq. (2)) processes can be replaced by optimizing the following simple binary classification objective on the pairwise preference dataset D: LDPO(\u03d5) = \u2212ED \u0014 log \u03c3 \u0012 \u03b2 log \u03c0\u03d5(y+|x) \u03c0ref(y+|x) \u2212\u03b2 log \u03c0\u03d5(y\u2212|x) \u03c0ref(y\u2212|x) \u0013\u0015 . (3) This single-stage policy learning of DPO enables more stable and efficient training, compared to PPO. 2.2 Challenges in Improving the Capabilities of LLM via preference-based RLHF Despite recent successes of RLHF, fine-tuning LLMs with RLHF still has many challenges such as instability of training (Zheng et al., 2023b), sensitivity to hyperparameters (Ramamurthy et al., 2023), and overoptimization (Gao et al., 2022) of the learned reward model. Unlike the prior works that address the conventional challenges in RLHF, we focus on the following challenges of preferencebased RLHF that are relevant to improving the capabilities of LLMs: \u2022 Underspecified Preference Criteria: Several works (Bansal et al., 2023; Wu et al., 2023; Krishna et al., 2023; Ye et al., 2023b) show that it is challenging for human annotators to consistently evaluate the overall quality of responses due to their different criteria for multiple aspects. Thus, to achieve the improvement of specific capabilities of LLMs, the fine-grained evaluation ability of specific aspects is essentially required. \u2022 Restricted Exploration: One of the major challenges of RL finetuning on LLMs is combinatorial action space in NLP tasks. Due to this complexity, it is infeasible to find an optimal policy through the exploration based on a naive exhaustive search. Previous RLHF approaches commonly used temperature-based sampling for exploration, to sample diverse outputs by increasing token-level randomness. To reduce the search space in language generation, top-k sampling (Fan et al., 2018) and nucleus sampling (Holtzman et al., 2020) could be alternatives, but these methods still have difficulty in exploring high-quality responses. 
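For readers who prefer code to notation, the two training objectives above can be written compactly as follows. This is a minimal PyTorch-style sketch, not the authors' released implementation; the per-sequence log-probability tensors and the default beta = 0.1 (the value reported later in Section 4.1) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def reward_ranking_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """Binary ranking loss of Eq. (1): -log sigmoid(r(x, y+) - r(x, y-))."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO objective of Eq. (3): a logistic loss on the gap between the
    policy-vs-reference log-ratios of the chosen and rejected responses."""
    chosen_ratio = policy_chosen_logps - ref_chosen_logps        # log pi(y+|x) - log pi_ref(y+|x)
    rejected_ratio = policy_rejected_logps - ref_rejected_logps  # log pi(y-|x) - log pi_ref(y-|x)
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

# toy usage with random per-example sequence log-probabilities
if __name__ == "__main__":
    lp = lambda: torch.randn(4)
    print(reward_ranking_loss(lp(), lp()).item())
    print(dpo_loss(lp(), lp(), lp(), lp()).item())
```

Note that the DPO form only needs sequence-level log-probabilities from the current and reference policies, which is why Eq. (3) requires no separately trained reward network.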
3 RL from Reflective Feedback (RLRF) In this section, we introduce Reinforcement Learning from Reflective Feedback (RLRF), a framework designed to produce promising responses through self-reflection, then improve the capabilities of LLMs with RL fine-tuning. Specifically, we present the Fine-Grained Feedback Model (Sec 3.1), which can criticize the responses and evaluate the fine-grained capabilities of LLMs in multiple aspects (e.g., logical correctness, factuality, insightfulness). Then we will describe Reinforcement Learning from Reflective Feedback (RLRF), which consists of the following two components that leverage the fine-grained feedback model: (1) Fine-Grained Self-Reflection, which exploits LLM\u2019s self-reflection capability with finegrained feedback model to search high-quality refined responses (Sec 3.2), (2) RL Fine-tuning, which fine-tunes the LLM on the refined dataset with the RL algorithm (Sec 3.3). 3.1 Fine-Grained Feedback Model To address the first challenge of underspecified criteria, we present a fine-grained feedback model, which can evaluate the responses from LLMs on fine-grained criteria for multiple aspects. Prior studies have shown the limitations of evaluating LLMs\u2019 responses with a single metric of preference (Bansal et al., 2023; Wu et al., 2023; Krishna et al., 2023; Ye et al., 2023b). Recently, Ye et al. (2023b) have developed a fine-grained language model evaluation method for the capabilities of LLMs using LLM as an evaluator. Inspired by this, we define the following eight evaluation aspects with three-level rating rubrics in each aspect: Factuality, Logical Correctness, Metacognition, Insightfulness, Completeness, Comprehension, Readability, Harmlessness (See Table 7). In defining the rating rubrics, we focus on recognizing whether the response y meets specific standards (categorized as success, moderate, or failure), whereas previous works (Liu et al., 2023; Zheng et al., 2023a) employed a wide range of rating scales, such as 5 or 10 points. To achieve focused evaluation on aspects that are essential to follow each instruction, our feedback model selects the top-3 relevant aspects from the whole aspect set and then evaluates the selected aspects, similar to the approach proposed by (Ye et al., 2023b). Finally, our fine-grained feedback model fp with rubrics for all aspects as prompt p, generates the feedback fp(x,y) on three relevant aspects (See Table 16). We parse per-aspect ratings in the last sentence of fp(x,y), and use the ratings to complement the underspecified reward r(x,y) (i.e., preference-based reward). For brevity, we will refer to the fine-grained feedback model and preference-based reward model as the feedback model and reward model in the remaining sections. Optionally, if the task of a given instruction is known, we can evaluate on a single task-specific aspect (See Table 9). For task-specific instructions, we align them with a single fixed aspect. For example, a mathematical reasoning task can be aligned with \u201clogical correctness\u201d, while aligning a biography generation task with \u201cfactuality\u201d. In such NLP tasks, if reference knowledge or answers are available, we can boost the critique capabilities of feedback models prompting with the reference, which enables the feedback to be grounded in the reference (See Sec 4.4). We used Wikipedia articles for a biography generation task, and human answers for a mathematical reasoning task. 
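A small sketch of the rating-parsing step described above is given below. The aspect list follows Table 7, the numeric values (success = 1, moderate = 0, failure = -1) follow the usage in Section 3.3, and the textual pattern "Aspect: rating" is an assumed feedback format for illustration, not the exact prompt/output template of Table 16.

```python
import re
from typing import Dict

# The eight evaluation aspects with three-level rubrics (Table 7).
ASPECTS = [
    "Factuality", "Logical Correctness", "Metacognition", "Insightfulness",
    "Completeness", "Comprehension", "Readability", "Harmlessness",
]
# Numeric values used when the ratings are consumed downstream (Sec 3.3).
RATING_VALUE = {"success": 1, "moderate": 0, "failure": -1}

def parse_aspect_ratings(feedback_text: str) -> Dict[str, int]:
    """Recover per-aspect ratings from the last sentence of a feedback string,
    assuming each evaluated aspect appears as 'Aspect: success|moderate|failure'."""
    last_sentence = feedback_text.strip().splitlines()[-1]
    ratings = {}
    for aspect in ASPECTS:
        match = re.search(
            rf"{re.escape(aspect)}\s*:\s*(success|moderate|failure)",
            last_sentence, flags=re.IGNORECASE,
        )
        if match:
            ratings[aspect] = RATING_VALUE[match.group(1).lower()]
    return ratings

example_feedback = (
    "The derivation reaches the right result but skips one intermediate step.\n"
    "Logical Correctness: success, Completeness: moderate, Readability: success"
)
print(parse_aspect_ratings(example_feedback))
# {'Logical Correctness': 1, 'Completeness': 0, 'Readability': 1}
```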
3.2 Fine-Grained Self-Reflection To tackle the second challenge of restricted exploration, we present a fine-grained self-reflection, which can effectively explore high-quality responses among the massive set of available responses. Unlike other RLHF approaches that explore diverse outputs through temperature-based sampling, we encourage effective exploration by leveraging the LLM\u2019s self-reflection ability that provides feedback and uses it to refine itself. To boost self-reflection, we employ the feedback fp(x,yk) as such prompt which provides detailed reasons behind the model\u2019s mistakes, facilitating more effective reflection and improvement. Fine-grained self-reflection starts by selecting a promising response \u02dc y to be refined. To select a promising response, we generate a set of n candidate responses for given instruction x and their evaluations as follows: Dy = {(x,yi,fp(x,yi),r(x,yi))|yi \u223c\u03c0\u03d5(x)}n i=1, where \u03c0\u03d5 is the policy (i.e., LLM) and yi is generated response by temperature-based sampling. Then, \u02dc y is selected as the promising response with the highest preference-based reward among the candidate responses as follows: \u02dc y = arg max y\u2208{yi}n i=1 r(x,y), where {yi}n i=1 is a set of n response candidates. To effectively explore high-quality responses, we generate m refinement by performing self-reflection that reads the feedback fp(x,\u02dc y) and corrects the errors in \u02dc y as follows: Dz = {(x,zj,fp(x,zj),r(x,zj))|zj \u223c\u03c0\u03d5(\u02dc x)}m j=1, where \u02dc x = {x,\u02dc y, fp(x,\u02dc y)} and zj is refined response by self-reflection. We use both generated datasets Dy and Dz in RL fine-tuning. 3.3 RL Fine-tuning In the last stage, we fine-tune the language model (i.e., policy \u03c0\u03d5) via DPO (Rafailov et al., 2023) which is one of the representative RL algorithms for fine-tuning LLMs. Since DPO directly optimizes the policy from the pairwise preference dataset, it requires positive-negative pairs in the form of comparable preference. We construct positive-negative pairs with whole datasets D = Dy \u222aDz, which are generated from the fine-grained self-reflection Type Data Size Data Format Data Name SFT Seed for Initial M0 100K x 7\u2192y \u2022 UltraChat, Airoboros, Open-Orca, Open-Platypus 23K (x,y,f) 7\u2192\u02dc y \u2022 Reflection Custom Preference-based Reward Model 550K (x,ya) \u227b \u227a(x,yb) \u2022 Anthropic HH, OpenAI Summarize, WebGPT, StackExchange, Stanford SHP, UltraFeedback Task-augmented Reward Model 550K + 23K (x,ya) \u227b \u227a(x,yb) \u2022 Preference Data (550K) + Math (16K) + Factuality (7K) Feedback Model 30K (x,y) 7\u2192f \u2022 Instruction-following Custom (sampled from SFT Seed) 9K (x,y, GT) 7\u2192f \u2022 Math Custom on GSM8K and MATH 8K (x,y, REF) 7\u2192f \u2022 Factuality Custom (Biography generation) RL fine-tuning 60K (x, y+, y\u2212) \u2022 ShareGPT 10K (x, y+, y\u2212) \u2022 Math Custom on GSM8K and MATH 10K (x, y+, y\u2212) \u2022 Factuality Custom (Biography generation) Table 1: Training data for our reward, feedback, and policy models. We list both the open-source dataset and custom data collected by GPT-4\u2019s API. The more details of this training data can be found in Appendix C. stage. First, we classify the dataset into positive dataset D+ and negative dataset D\u2212through the feedback score. 
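Putting Sections 3.2 and 3.3 together, the exploration-and-pairing loop can be sketched as follows. The generate, refine, feedback, and reward arguments are placeholders for the policy, its self-reflection call, the fine-grained feedback model, and the preference-based reward model; the defaults n = 10 and m = 20 mirror the values reported in Section 4.1, k is illustrative, and the all-aspects-"success" rule anticipates the classification spelled out in the paragraph that continues below.

```python
import random
from typing import Callable, Dict, List, Tuple

def explore_and_pair(
    x: str,
    generate: Callable[[str], str],                  # y ~ pi_phi(x), temperature sampling
    refine: Callable[[str, str, str], str],          # z ~ pi_phi(x, y_tilde, feedback text)
    feedback: Callable[[str, str], Dict[str, int]],  # per-aspect ratings in {-1, 0, 1}
    reward: Callable[[str, str], float],             # preference-based scalar reward r(x, y)
    n: int = 10, m: int = 20, k: int = 1,
) -> List[Tuple[str, str, str]]:
    """Sketch of fine-grained self-reflection followed by DPO pair construction."""
    # D_y: n candidate responses sampled directly from the policy.
    d_y = [generate(x) for _ in range(n)]
    # Promising response: highest preference-based reward among the candidates.
    y_tilde = max(d_y, key=lambda y: reward(x, y))
    fb_text = str(feedback(x, y_tilde))  # stand-in for the verbal feedback f_p(x, y_tilde)
    # D_z: m refinements produced by self-reflection on (x, y_tilde, feedback).
    d_z = [refine(x, y_tilde, fb_text) for _ in range(m)]

    pool = d_y + d_z
    ratings = {y: feedback(x, y) for y in pool}
    positives = [y for y in pool if ratings[y] and all(v == 1 for v in ratings[y].values())]
    negatives = [y for y in pool if y not in positives]
    if not positives or not negatives:
        return []  # instructions with an empty positive set are discarded
    top_positives = sorted(positives, key=lambda y: reward(x, y), reverse=True)[:k]
    # 1-to-1 pairing: each top positive is matched with a randomly sampled negative.
    return [(x, y_pos, random.choice(negatives)) for y_pos in top_positives]
```

With dummy callables (e.g., lambdas over a fixed response list), the function returns (instruction, chosen, rejected) triples in the format consumed by the DPO objective of Eq. (3).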
The responses with ratings of all aspects in feedback fp(x,y) being 1 (i.e., \u201csuccess\u201d) are selected as the positive set, and the remaining responses that include the rating of 0 (\u201cmoderate\u201d) or -1 (\u201cfailure\u201d) are selected as the negative set. Among the examples in D+, we select top-k responses with the highest reward as the positive examples y+. For 1-to-1 pair matching, we randomly sample negative examples y\u2212from D\u2212according to the number of the positive set, and discard examples with no positive set. Finally, we fine-tune the language model by optimizing the DPO objective (Eq. (3)) by leveraging the y+ and y\u2212as pairwise datasets. We use DPO (Rafailov et al., 2023) for the RL fine-tuning stage, but our framework is not limited to applying only preference-based approaches such as DPO. 3.4 Iterative Training Figure 1 summarizes the overall process and details of our proposed framework. Our framework serves the iterative training that alternates between fine-grained self-reflection and RL fine-tuning. Since the updated policy can generate better responses and refinements during the fine-grained self-reflection process than the outputs from the previous policy, policy improvement can be continuously performed by repeating this process until the policy performance converges. 4 Experiment 4.1 Experimental Setup Training Dataset Table 1 summarizes the training data. Starting with the base model, we fine-tune three independent models (the feedback, reward, and initial policy models) on the open-sourced dataset and our additional custom dataset extracted from OpenAI API (gpt-4-1106-preview). During RL fine-tuning, we use the following three types of datasets: (1) general instruction: ShareGPT, (2) Math reasoning: GSM8K (Cobbe et al., 2021), MATH (Hendrycks et al., 2021), (3) Factuality (Min et al., 2023b). We provide more details regarding these datasets in Appendix C. Base Model and Hyper-parameters In our experiment, we used the Llama-2-13b-chat as our base model. All experiments were conducted on 16 A100 GPUs, each with 40 GB of memory. We set the learning rate to 2e-5 (constant) for fine-tuning both the feedback and the initial policy models, and 2e-6 (cosine decay) for DPO. In DPO finetuning, we set \u03b2 = 0.1, in Eq (3). The results of additional values (\u03b2 = 0.01, 0.1, 0.5) can be found in Appendix B. In the self-reflection stage, we restricted the maximum samples of exploration to n + m = 30, where n is the size of Dy and m is that of Dz. In our experiments, we select the best values (n = 10, m = 20) among (10, 20), (15, 15), and (20, 10) on a subset of training set. Baselines for Comparison In our experiment, we take state-of-the-art LLMs: GPT-4-0613, GPTMethod Just-Eval (by GPT-4) FactScore Math Accuracy Total Helpful Depth Factual Math SOTA LLMs GPT-4-0613 4.80 4.86 4.49 4.49 5.00 83.20 94.60\u2020 GPT-3.5-turbo-0301 4.75 4.81 4.33 4.33 5.00 79.00 80.80\u2020 Llama-2-70b-chat 4.72 4.58 4.38 4.38 3.12 67.70 56.80\u2021 Llama-2-13b-chat 4.45 4.41 4.02 4.24 2.38 65.30 43.14 Our RLRF Initial M0 4.60 4.58 4.17 4.51 4.00 70.79 41.77 M1 (RS) 4.65 4.63 4.24 4.54 3.44 72.20 47.84 M1 (DPO) 4.66 4.66 4.27 4.55 3.88 78.50 47.92 M2 (RS \u2192DPO) 4.64 4.62 4.23 4.55 3.75 76.30 51.02 M2 (DPO \u2192DPO) 4.63 4.63 4.24 4.52 4.06 79.30 49.66 RLHF Baseline M1 (RS, Reward-only) 4.63 4.59 4.23 4.49 3.19 69.10 39.27 M1 (DPO, Reward-only) 4.62 4.60 4.19 4.53 3.44 70.79 41.09 Table 2: The main results of RLRF compared to various open and closed models. 
3.4 Iterative Training

Figure 1 summarizes the overall process and details of our proposed framework. Our framework supports iterative training that alternates between fine-grained self-reflection and RL fine-tuning. Since the updated policy can generate better responses and refinements during the fine-grained self-reflection process than the previous policy could, policy improvement can be performed continuously by repeating this process until the policy's performance converges.

4 Experiment

4.1 Experimental Setup

Training Dataset. Table 1 summarizes the training data. Starting with the base model, we fine-tune three independent models (the feedback, reward, and initial policy models) on the open-source datasets and our additional custom data collected via the OpenAI API (gpt-4-1106-preview). During RL fine-tuning, we use three types of datasets: (1) general instruction following: ShareGPT, (2) math reasoning: GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021), and (3) factuality (Min et al., 2023b). We provide more details regarding these datasets in Appendix C.

Base Model and Hyper-parameters. In our experiments, we used Llama-2-13b-chat as our base model. All experiments were conducted on 16 A100 GPUs, each with 40 GB of memory. We set the learning rate to 2e-5 (constant) for fine-tuning both the feedback and the initial policy models, and 2e-6 (cosine decay) for DPO. In DPO fine-tuning, we set β = 0.1 in Eq. (3). Results for additional values (β = 0.01, 0.1, 0.5) can be found in Appendix B. In the self-reflection stage, we restricted the maximum number of exploration samples to n + m = 30, where n is the size of D_y and m is the size of D_z. We selected the best values (n = 10, m = 20) among (10, 20), (15, 15), and (20, 10) on a subset of the training set.

Baselines for Comparison. We compare with state-of-the-art LLMs: GPT-4-0613, GPT-3.5-turbo-0301, and Llama-2-70b-chat. As RLHF baselines, we used only the reward model without our feedback model, learning on pairs of the highest-reward positive examples and random negative examples in D_y. As an alternative to DPO, which uses both positive and negative examples, we can also supervised fine-tune the model on only the positive set, which we call Rejection Sampling (RS).

Method | Just-Eval (by GPT-4): Total | Helpful | Depth | Factual | Math | FactScore | Math Accuracy
SOTA LLMs:
GPT-4-0613 | 4.80 | 4.86 | 4.49 | 4.49 | 5.00 | 83.20 | 94.60†
GPT-3.5-turbo-0301 | 4.75 | 4.81 | 4.33 | 4.33 | 5.00 | 79.00 | 80.80†
Llama-2-70b-chat | 4.72 | 4.58 | 4.38 | 4.38 | 3.12 | 67.70 | 56.80‡
Llama-2-13b-chat | 4.45 | 4.41 | 4.02 | 4.24 | 2.38 | 65.30 | 43.14
Our RLRF:
Initial M0 | 4.60 | 4.58 | 4.17 | 4.51 | 4.00 | 70.79 | 41.77
M1 (RS) | 4.65 | 4.63 | 4.24 | 4.54 | 3.44 | 72.20 | 47.84
M1 (DPO) | 4.66 | 4.66 | 4.27 | 4.55 | 3.88 | 78.50 | 47.92
M2 (RS → DPO) | 4.64 | 4.62 | 4.23 | 4.55 | 3.75 | 76.30 | 51.02
M2 (DPO → DPO) | 4.63 | 4.63 | 4.24 | 4.52 | 4.06 | 79.30 | 49.66
RLHF Baseline:
M1 (RS, Reward-only) | 4.63 | 4.59 | 4.23 | 4.49 | 3.19 | 69.10 | 39.27
M1 (DPO, Reward-only) | 4.62 | 4.60 | 4.19 | 4.53 | 3.44 | 70.79 | 41.09

Table 2: The main results of RLRF compared to various open and closed models. The best results among 13B-based models are bold-faced. The dagger (†) indicates results in the CoT setting reported in (Zhao et al., 2023), while the double dagger (‡) is the result in the 8-shot setting reported in (Touvron et al., 2023).

4.2 Evaluation Benchmarks

To measure the effectiveness of our RLRF across multiple aspects, we conduct experiments on Just-Eval (Lin et al., 2023) for fine-grained evaluation by GPT-4. This benchmark consists of 1,000 instructions from diverse datasets including AlpacaEval (Li et al., 2023b), MT-bench (Zheng et al., 2023a), LIMA (Zhou et al., 2023), HH-RLHF-redteam (Ganguli et al., 2022), and MaliciousInstruct (Huang et al., 2023). The benchmark provides task-type and topic categories for each example, which enables comprehensive analysis over diverse categories. We report the following Just-Eval metrics: Total (average over six aspects), Helpfulness, Depth, Factuality, and Mathematics (helpfulness on math problems). Our complete results, including those specific to aspects, tasks, and datasets in Just-Eval, can be found in Appendix A.

To evaluate the task-specific capabilities of LLMs, we test models on two tasks: factuality (biography generation) and mathematical reasoning (Cobbe et al., 2021). Following previous work (Min et al., 2023b), we compute the FactScore of the model's responses given the instruction "Tell me about a bio of [person]"; FactScore measures the proportion of correct versus incorrect facts in the response. We extracted 10.2k person names from Wikipedia (10k for the train set and 200 entities for the test set). For mathematical reasoning, we measure test-set accuracy on GSM8K (Cobbe et al., 2021). While other approaches designed for mathematical reasoning (Zhao et al., 2023; Imani et al., 2023) use few-shot or chain-of-thought (CoT) prompts to boost performance, we evaluate in a zero-shot setting, without such additional prompts.
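As a rough illustration of the zero-shot GSM8K evaluation described above, generations could be scored as in the sketch below; the final-number extraction heuristic and the `generate` callable are assumptions, since the paper does not specify its answer parser.

```python
import re
from typing import Callable, List, Tuple

def extract_final_number(text: str) -> str:
    # Heuristic: take the last number in the generation as the predicted answer.
    numbers = re.findall(r"-?\d[\d,]*\.?\d*", text)
    return numbers[-1].replace(",", "") if numbers else ""

def same_number(a: str, b: str) -> bool:
    # Compare numerically when possible, otherwise fall back to string equality.
    try:
        return abs(float(a) - float(b)) < 1e-6
    except ValueError:
        return a.strip() == b.strip()

def gsm8k_zero_shot_accuracy(
    problems: List[Tuple[str, str]],   # (question, gold numeric answer) pairs
    generate: Callable[[str], str],    # hypothetical: zero-shot generation, no CoT or few-shot prompt
) -> float:
    correct = 0
    for question, gold in problems:
        prediction = extract_final_number(generate(question))
        correct += int(same_number(prediction, gold.replace(",", "")))
    return correct / max(len(problems), 1)
```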
4.3 Does our RLRF effectively enhance LLM's capabilities?

Table 2 shows our main results on Just-Eval, FactScore, and GSM8K. The results show that our framework RLRF, with either DPO or Rejection Sampling (RS), improves performance across tasks from M0 to M2. Especially on FactScore and math accuracy, our method gradually improves performance without reaching saturation. On the other hand, Just-Eval performance as judged by GPT-4 saturated at M1, showing the model's tendency to overfit during DPO fine-tuning. When comparing DPO and RS, DPO effectively improved performance on the factuality task, but it is sensitive to hyper-parameters (see Appendix B). The RLHF baselines slightly improve Just-Eval scores, while their performance on FactScore and math accuracy did not improve or decreased slightly, showing that RL through a preference-based reward alone was not able to improve these capabilities of LLMs.

Figure 2: On math reasoning (GSM8K). We observed that reward or feedback scores indicate the correctness of responses. (Left) Reward Scores, (Center) Feedback Scores with Reference, (Right) Feedback Scores without Reference.

Figure 3: On the factuality task. (Left) Reward Scores, (Center) Feedback Scores with Reference, (Right) Feedback Scores without Reference.

4.4 Does our fine-grained feedback recognize the correctness of the model's responses on the NLP tasks?

We investigate how well the feedback and reward models detect the success or failure of generated responses on the two tasks. We randomly sample responses from the test sets of the factuality task and GSM8K, then split the responses based on whether they are correct or incorrect. Since FactScore ranges from 0 to 100%, we separate responses into the top 30% and bottom 30% by score. Figures 2 and 3 show the distributions of reward and feedback scores for correct and incorrect examples. On GSM8K, the reward model failed to distinguish correct from incorrect samples, while our feedback model (with reference) captures their correctness well. This finding implies that RLHF based on only a preference-based reward model can lead to superficial alignment in reasoning tasks such as mathematics. On the other hand, contrary to our expectations, the reward model performed well on the factuality task, discriminating between more factual and less factual responses. However, when the reference (Wikipedia) was not provided, our feedback model did not detect factuality well, especially for the bottom 30% of responses by factuality. We observe that the preference-based reward model can be the better proxy when there is no reference knowledge to utilize.

Figure 4: Results for different numbers of samples in each stage: generating responses, feedback, or refined responses. The y-axis is the total score on Just-Eval.

4.5 Is exploring more samples effective in acquiring high-quality refined responses?

Since sampling diverse outputs requires extensive computation, it is crucial to investigate the resource efficiency of each step in the sampling process and to allocate resources accordingly. We investigated the impact of varying the number of samples for responses (y), feedback (f_p), and refined responses (z) on Just-Eval. When we change the number of samples for a particular element, we fix the number of samples for the other elements to one. Figure 4 shows the average ratings on Just-Eval (By Aspect setting). We observed that sampling more responses y (i.e., increasing the size of D_y) had the largest impact on performance, while sampling diverse feedback f_p made only a slight difference. Based on this result, we opted to generate a single feedback sample for each (x, y) pair.

Figure 5: Average output token lengths of several models.

4.6 How does the model's response length change during training?

Figure 5 shows the average output token lengths of the models on the Just-Eval dataset (averaged over the 1,000 examples). During DPO fine-tuning (M0 → M1 → M2), the average length gradually increases, whereas fine-tuning with rejection sampling slightly reduces the token length.
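The response-length analysis of Section 4.6 amounts to a simple measurement; a possible sketch, assuming a hypothetical `tokenize` function (e.g., a Llama-2 tokenizer) and per-model generation callables:

```python
from typing import Callable, Dict, List

def average_output_length(
    models: Dict[str, Callable[[str], str]],   # model name -> generation function (hypothetical)
    instructions: List[str],                   # e.g., the 1,000 Just-Eval prompts
    tokenize: Callable[[str], List[int]],      # hypothetical tokenizer returning token ids
) -> Dict[str, float]:
    # Average output token length per model over the instruction set.
    lengths = {}
    for name, generate in models.items():
        totals = [len(tokenize(generate(x))) for x in instructions]
        lengths[name] = sum(totals) / max(len(totals), 1)
    return lengths
```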
Aligning LLMs with human preferences should improve the downstream performance of the models as well as teach them more favorable styles. In this paper, we propose a novel framework, RLRF, which exploits a fine-grained feedback model to critically assess LLM outputs beyond superficial preference, exploring high-quality responses through self-reflection. Subsequently, RLRF improves the models via an RL algorithm based on these promising responses. Our experimental findings reveal that RLRF significantly improves LLM performance, ranging from fine-grained alignment evaluations to mathematical tasks. Given the flexibility and scalability of the framework, we posit that RLRF has transformative potential to bridge the disparity between proprietary and open-source LLMs.

7 Limitations

Our study acknowledges several limitations and suggests future directions for further improvement. First, the assessment of aspects such as insightfulness and readability in our work may be subjective, leading to low agreement across human evaluators, as reported in (Ye et al., 2023b). This subjectivity could cause generic feedback that lacks specific details on certain aspects. Future work could investigate more objective criteria or refine evaluation rubrics to identify weak capabilities of LLMs more precisely. Second, our RLRF framework, grounded in RL, incurs substantial computational costs during the exploration stage. As a result, we restrict sampling to only 30 candidates and confine DPO/Rejection Sampling fine-tuning to just 2 iterations. These resource constraints prevent further optimization. Third, while our RLRF framework is compatible with various RL algorithms, we opt for DPO for its proven stability and efficiency. Future work could exploit cutting-edge RL methods, such as Online DPO (Guo et al., 2024) and Inverse Preference Learning (Hejna and Sadigh, 2023), to further improve downstream performance within our framework.

Ethics Statement

Considering the application of our research outcomes, we acknowledge the potential risks and ethical concerns associated with LLM-powered digital assistants. These risks and concerns include providing inaccurate information in response to user inquiries or, worse, deliberately generating fake information at the behest of malicious users. However, our framework is designed specifically to minimize such risks. For instance, we have focused on improving the factuality of LLMs, aiming to mitigate hallucination problems. We use open-source datasets for research purposes only, adhering to their terms of use and licenses. Additionally, we collect an extra dataset through the OpenAI API while fully respecting its terms of use. We conduct our research in an ethically responsible and legally compliant manner.
In this\npaper we introduce a new parameterization of the reward model in RLHF that\nenables extraction of the corresponding optimal policy in closed form, allowing\nus to solve the standard RLHF problem with only a simple classification loss.\nThe resulting algorithm, which we call Direct Preference Optimization (DPO), is\nstable, performant, and computationally lightweight, eliminating the need for\nsampling from the LM during fine-tuning or performing significant\nhyperparameter tuning. Our experiments show that DPO can fine-tune LMs to align\nwith human preferences as well as or better than existing methods. Notably,\nfine-tuning with DPO exceeds PPO-based RLHF in ability to control sentiment of\ngenerations, and matches or improves response quality in summarization and\nsingle-turn dialogue while being substantially simpler to implement and train.", + "authors": "Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, Chelsea Finn", + "published": "2023-05-29", + "updated": "2023-12-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.14251v2", + "title": "FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation", + "abstract": "Evaluating the factuality of long-form text generated by large language\nmodels (LMs) is non-trivial because (1) generations often contain a mixture of\nsupported and unsupported pieces of information, making binary judgments of\nquality inadequate, and (2) human evaluation is time-consuming and costly. In\nthis paper, we introduce FACTSCORE, a new evaluation that breaks a generation\ninto a series of atomic facts and computes the percentage of atomic facts\nsupported by a reliable knowledge source. We conduct an extensive human\nevaluation to obtain FACTSCOREs of people biographies generated by several\nstate-of-the-art commercial LMs -- InstructGPT, ChatGPT, and the\nretrieval-augmented PerplexityAI -- and report new analysis demonstrating the\nneed for such a fine-grained score (e.g., ChatGPT only achieves 58%). Since\nhuman evaluation is costly, we also introduce an automated model that estimates\nFACTSCORE using retrieval and a strong language model, with less than a 2%\nerror rate. Finally, we use this automated metric to evaluate 6,500 generations\nfrom a new set of 13 recent LMs that would have cost $26K if evaluated by\nhumans, with various findings: GPT-4 and ChatGPT are more factual than public\nmodels, and Vicuna and Alpaca are some of the best public models. FACTSCORE is\navailable for public use via `pip install factscore`.", + "authors": "Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Wei Koh, Mohit Iyyer, Luke Zettlemoyer, Hannaneh Hajishirzi", + "published": "2023-05-23", + "updated": "2023-10-11", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.08491v2", + "title": "Prometheus: Inducing Fine-grained Evaluation Capability in Language Models", + "abstract": "Recently, using a powerful proprietary Large Language Model (LLM) (e.g.,\nGPT-4) as an evaluator for long-form responses has become the de facto\nstandard. 
However, for practitioners with large-scale evaluation tasks and\ncustom criteria in consideration (e.g., child-readability), using proprietary\nLLMs as an evaluator is unreliable due to the closed-source nature,\nuncontrolled versioning, and prohibitive costs. In this work, we propose\nPrometheus, a fully open-source LLM that is on par with GPT-4's evaluation\ncapabilities when the appropriate reference materials (reference answer, score\nrubric) are accompanied. We first construct the Feedback Collection, a new\ndataset that consists of 1K fine-grained score rubrics, 20K instructions, and\n100K responses and language feedback generated by GPT-4. Using the Feedback\nCollection, we train Prometheus, a 13B evaluator LLM that can assess any given\nlong-form text based on customized score rubric provided by the user.\nExperimental results show that Prometheus scores a Pearson correlation of 0.897\nwith human evaluators when evaluating with 45 customized score rubrics, which\nis on par with GPT-4 (0.882), and greatly outperforms ChatGPT (0.392).\nFurthermore, measuring correlation with GPT-4 with 1222 customized score\nrubrics across four benchmarks (MT Bench, Vicuna Bench, Feedback Bench, Flask\nEval) shows similar trends, bolstering Prometheus's capability as an evaluator\nLLM. Lastly, Prometheus achieves the highest accuracy on two human preference\nbenchmarks (HHH Alignment & MT Bench Human Judgment) compared to open-sourced\nreward models explicitly trained on human preference datasets, highlighting its\npotential as an universal reward model. We open-source our code, dataset, and\nmodel at https://kaistai.github.io/prometheus/.", + "authors": "Seungone Kim, Jamin Shin, Yejin Cho, Joel Jang, Shayne Longpre, Hwaran Lee, Sangdoo Yun, Seongjin Shin, Sungdong Kim, James Thorne, Minjoon Seo", + "published": "2023-10-12", + "updated": "2024-03-09", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2210.11610v2", + "title": "Large Language Models Can Self-Improve", + "abstract": "Large Language Models (LLMs) have achieved excellent performances in various\ntasks. However, fine-tuning an LLM requires extensive supervision. Human, on\nthe other hand, may improve their reasoning abilities by self-thinking without\nexternal inputs. In this work, we demonstrate that an LLM is also capable of\nself-improving with only unlabeled datasets. We use a pre-trained LLM to\ngenerate \"high-confidence\" rationale-augmented answers for unlabeled questions\nusing Chain-of-Thought prompting and self-consistency, and fine-tune the LLM\nusing those self-generated solutions as target outputs. We show that our\napproach improves the general reasoning ability of a 540B-parameter LLM\n(74.4%->82.1% on GSM8K, 78.2%->83.0% on DROP, 90.0%->94.4% on OpenBookQA, and\n63.4%->67.9% on ANLI-A3) and achieves state-of-the-art-level performance,\nwithout any ground truth label. 
We conduct ablation studies and show that\nfine-tuning on reasoning is critical for self-improvement.", + "authors": "Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, Jiawei Han", + "published": "2022-10-20", + "updated": "2022-10-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2308.08998v2", + "title": "Reinforced Self-Training (ReST) for Language Modeling", + "abstract": "Reinforcement learning from human feedback (RLHF) can improve the quality of\nlarge language model's (LLM) outputs by aligning them with human preferences.\nWe propose a simple algorithm for aligning LLMs with human preferences inspired\nby growing batch reinforcement learning (RL), which we call Reinforced\nSelf-Training (ReST). Given an initial LLM policy, ReST produces a dataset by\ngenerating samples from the policy, which are then used to improve the LLM\npolicy using offline RL algorithms. ReST is more efficient than typical online\nRLHF methods because the training dataset is produced offline, which allows\ndata reuse. While ReST is a general approach applicable to all generative\nlearning settings, we focus on its application to machine translation. Our\nresults show that ReST can substantially improve translation quality, as\nmeasured by automated metrics and human evaluation on machine translation\nbenchmarks in a compute and sample-efficient manner.", + "authors": "Caglar Gulcehre, Tom Le Paine, Srivatsan Srinivasan, Ksenia Konyushkova, Lotte Weerts, Abhishek Sharma, Aditya Siddhant, Alex Ahern, Miaosen Wang, Chenjie Gu, Wolfgang Macherey, Arnaud Doucet, Orhan Firat, Nando de Freitas", + "published": "2023-08-17", + "updated": "2023-08-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2203.02155v1", + "title": "Training language models to follow instructions with human feedback", + "abstract": "Making language models bigger does not inherently make them better at\nfollowing a user's intent. For example, large language models can generate\noutputs that are untruthful, toxic, or simply not helpful to the user. In other\nwords, these models are not aligned with their users. In this paper, we show an\navenue for aligning language models with user intent on a wide range of tasks\nby fine-tuning with human feedback. Starting with a set of labeler-written\nprompts and prompts submitted through the OpenAI API, we collect a dataset of\nlabeler demonstrations of the desired model behavior, which we use to fine-tune\nGPT-3 using supervised learning. We then collect a dataset of rankings of model\noutputs, which we use to further fine-tune this supervised model using\nreinforcement learning from human feedback. We call the resulting models\nInstructGPT. In human evaluations on our prompt distribution, outputs from the\n1.3B parameter InstructGPT model are preferred to outputs from the 175B GPT-3,\ndespite having 100x fewer parameters. Moreover, InstructGPT models show\nimprovements in truthfulness and reductions in toxic output generation while\nhaving minimal performance regressions on public NLP datasets. Even though\nInstructGPT still makes simple mistakes, our results show that fine-tuning with\nhuman feedback is a promising direction for aligning language models with human\nintent.", + "authors": "Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. 
Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan Lowe", + "published": "2022-03-04", + "updated": "2022-03-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2306.01693v2", + "title": "Fine-Grained Human Feedback Gives Better Rewards for Language Model Training", + "abstract": "Language models (LMs) often exhibit undesirable text generation behaviors,\nincluding generating false, toxic, or irrelevant outputs. Reinforcement\nlearning from human feedback (RLHF) - where human preference judgments on LM\noutputs are transformed into a learning signal - has recently shown promise in\naddressing these issues. However, such holistic feedback conveys limited\ninformation on long text outputs; it does not indicate which aspects of the\noutputs influenced user preference; e.g., which parts contain what type(s) of\nerrors. In this paper, we use fine-grained human feedback (e.g., which sentence\nis false, which sub-sentence is irrelevant) as an explicit training signal. We\nintroduce Fine-Grained RLHF, a framework that enables training and learning\nfrom reward functions that are fine-grained in two respects: (1) density,\nproviding a reward after every segment (e.g., a sentence) is generated; and (2)\nincorporating multiple reward models associated with different feedback types\n(e.g., factual incorrectness, irrelevance, and information incompleteness). We\nconduct experiments on detoxification and long-form question answering to\nillustrate how learning with such reward functions leads to improved\nperformance, supported by both automatic and human evaluation. Additionally, we\nshow that LM behaviors can be customized using different combinations of\nfine-grained reward models. We release all data, collected human feedback, and\ncodes at https://FineGrainedRLHF.github.io.", + "authors": "Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane Suhr, Prithviraj Ammanabrolu, Noah A. Smith, Mari Ostendorf, Hannaneh Hajishirzi", + "published": "2023-06-02", + "updated": "2023-10-30", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2204.05862v1", + "title": "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback", + "abstract": "We apply preference modeling and reinforcement learning from human feedback\n(RLHF) to finetune language models to act as helpful and harmless assistants.\nWe find this alignment training improves performance on almost all NLP\nevaluations, and is fully compatible with training for specialized skills such\nas python coding and summarization. We explore an iterated online mode of\ntraining, where preference models and RL policies are updated on a weekly\ncadence with fresh human feedback data, efficiently improving our datasets and\nmodels. Finally, we investigate the robustness of RLHF training, and identify a\nroughly linear relation between the RL reward and the square root of the KL\ndivergence between the policy and its initialization. 
Alongside our main\nresults, we perform peripheral analyses on calibration, competing objectives,\nand the use of OOD detection, compare our models with human writers, and\nprovide samples from our models using prompts appearing in recent related work.", + "authors": "Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson Kernion, Tom Conerly, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, Ben Mann, Jared Kaplan", + "published": "2022-04-12", + "updated": "2022-04-12", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2209.14375v1", + "title": "Improving alignment of dialogue agents via targeted human judgements", + "abstract": "We present Sparrow, an information-seeking dialogue agent trained to be more\nhelpful, correct, and harmless compared to prompted language model baselines.\nWe use reinforcement learning from human feedback to train our models with two\nnew additions to help human raters judge agent behaviour. First, to make our\nagent more helpful and harmless, we break down the requirements for good\ndialogue into natural language rules the agent should follow, and ask raters\nabout each rule separately. We demonstrate that this breakdown enables us to\ncollect more targeted human judgements of agent behaviour and allows for more\nefficient rule-conditional reward models. Second, our agent provides evidence\nfrom sources supporting factual claims when collecting preference judgements\nover model statements. For factual questions, evidence provided by Sparrow\nsupports the sampled response 78% of the time. Sparrow is preferred more often\nthan baselines while being more resilient to adversarial probing by humans,\nviolating our rules only 8% of the time when probed. Finally, we conduct\nextensive analyses showing that though our model learns to follow our rules it\ncan exhibit distributional biases.", + "authors": "Amelia Glaese, Nat McAleese, Maja Tr\u0119bacz, John Aslanides, Vlad Firoiu, Timo Ewalds, Maribeth Rauh, Laura Weidinger, Martin Chadwick, Phoebe Thacker, Lucy Campbell-Gillingham, Jonathan Uesato, Po-Sen Huang, Ramona Comanescu, Fan Yang, Abigail See, Sumanth Dathathri, Rory Greig, Charlie Chen, Doug Fritz, Jaume Sanchez Elias, Richard Green, So\u0148a Mokr\u00e1, Nicholas Fernando, Boxi Wu, Rachel Foley, Susannah Young, Iason Gabriel, William Isaac, John Mellor, Demis Hassabis, Koray Kavukcuoglu, Lisa Anne Hendricks, Geoffrey Irving", + "published": "2022-09-28", + "updated": "2022-09-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2112.09332v3", + "title": "WebGPT: Browser-assisted question-answering with human feedback", + "abstract": "We fine-tune GPT-3 to answer long-form questions using a text-based\nweb-browsing environment, which allows the model to search and navigate the\nweb. By setting up the task so that it can be performed by humans, we are able\nto train models on the task using imitation learning, and then optimize answer\nquality with human feedback. To make human evaluation of factual accuracy\neasier, models must collect references while browsing in support of their\nanswers. 
We train and evaluate our models on ELI5, a dataset of questions asked\nby Reddit users. Our best model is obtained by fine-tuning GPT-3 using behavior\ncloning, and then performing rejection sampling against a reward model trained\nto predict human preferences. This model's answers are preferred by humans 56%\nof the time to those of our human demonstrators, and 69% of the time to the\nhighest-voted answer from Reddit.", + "authors": "Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, Xu Jiang, Karl Cobbe, Tyna Eloundou, Gretchen Krueger, Kevin Button, Matthew Knight, Benjamin Chess, John Schulman", + "published": "2021-12-17", + "updated": "2022-06-01", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2401.10020v2", + "title": "Self-Rewarding Language Models", + "abstract": "We posit that to achieve superhuman agents, future models require superhuman\nfeedback in order to provide an adequate training signal. Current approaches\ncommonly train reward models from human preferences, which may then be\nbottlenecked by human performance level, and secondly these separate frozen\nreward models cannot then learn to improve during LLM training. In this work,\nwe study Self-Rewarding Language Models, where the language model itself is\nused via LLM-as-a-Judge prompting to provide its own rewards during training.\nWe show that during Iterative DPO training that not only does instruction\nfollowing ability improve, but also the ability to provide high-quality rewards\nto itself. Fine-tuning Llama 2 70B on three iterations of our approach yields a\nmodel that outperforms many existing systems on the AlpacaEval 2.0 leaderboard,\nincluding Claude 2, Gemini Pro, and GPT-4 0613. While there is much left still\nto explore, this work opens the door to the possibility of models that can\ncontinually improve in both axes.", + "authors": "Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho, Xian Li, Sainbayar Sukhbaatar, Jing Xu, Jason Weston", + "published": "2024-01-18", + "updated": "2024-02-08", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.05826v1", + "title": "The CRINGE Loss: Learning what language not to model", + "abstract": "Standard language model training employs gold human documents or human-human\ninteraction data, and treats all training data as positive examples. Growing\nevidence shows that even with very large amounts of positive training data,\nissues remain that can be alleviated with relatively small amounts of negative\ndata -- examples of what the model should not do. In this work, we propose a\nnovel procedure to train with such data called the CRINGE loss (ContRastive\nIterative Negative GEneration). We show the effectiveness of this approach\nacross three different experiments on the tasks of safe generation,\ncontradiction avoidance, and open-domain dialogue. 
Our models outperform\nmultiple strong baselines and are conceptually simple, easy to train and\nimplement.", + "authors": "Leonard Adolphs, Tianyu Gao, Jing Xu, Kurt Shuster, Sainbayar Sukhbaatar, Jason Weston", + "published": "2022-11-10", + "updated": "2022-11-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2303.17651v2", + "title": "Self-Refine: Iterative Refinement with Self-Feedback", + "abstract": "Like humans, large language models (LLMs) do not always generate the best\noutput on their first try. Motivated by how humans refine their written text,\nwe introduce Self-Refine, an approach for improving initial outputs from LLMs\nthrough iterative feedback and refinement. The main idea is to generate an\ninitial output using an LLMs; then, the same LLMs provides feedback for its\noutput and uses it to refine itself, iteratively. Self-Refine does not require\nany supervised training data, additional training, or reinforcement learning,\nand instead uses a single LLM as the generator, refiner, and feedback provider.\nWe evaluate Self-Refine across 7 diverse tasks, ranging from dialog response\ngeneration to mathematical reasoning, using state-of-the-art (GPT-3.5, ChatGPT,\nand GPT-4) LLMs. Across all evaluated tasks, outputs generated with Self-Refine\nare preferred by humans and automatic metrics over those generated with the\nsame LLM using conventional one-step generation, improving by ~20% absolute on\naverage in task performance. Our work demonstrates that even state-of-the-art\nLLMs like GPT-4 can be further improved at test time using our simple,\nstandalone approach.", + "authors": "Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, Peter Clark", + "published": "2023-03-30", + "updated": "2023-05-25", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2402.04792v2", + "title": "Direct Language Model Alignment from Online AI Feedback", + "abstract": "Direct alignment from preferences (DAP) methods, such as DPO, have recently\nemerged as efficient alternatives to reinforcement learning from human feedback\n(RLHF), that do not require a separate reward model. However, the preference\ndatasets used in DAP methods are usually collected ahead of training and never\nupdated, thus the feedback is purely offline. Moreover, responses in these\ndatasets are often sampled from a language model distinct from the one being\naligned, and since the model evolves over training, the alignment phase is\ninevitably off-policy. In this study, we posit that online feedback is key and\nimproves DAP methods. Our method, online AI feedback (OAIF), uses an LLM as\nannotator: on each training iteration, we sample two responses from the current\nmodel and prompt the LLM annotator to choose which one is preferred, thus\nproviding online feedback. Despite its simplicity, we demonstrate via human\nevaluation in several tasks that OAIF outperforms both offline DAP and RLHF\nmethods. 
We further show that the feedback leveraged in OAIF is easily\ncontrollable, via instruction prompts to the LLM annotator.", + "authors": "Shangmin Guo, Biao Zhang, Tianlin Liu, Tianqi Liu, Misha Khalman, Felipe Llinares, Alexandre Rame, Thomas Mesnard, Yao Zhao, Bilal Piot, Johan Ferret, Mathieu Blondel", + "published": "2024-02-07", + "updated": "2024-02-29", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL", + "cs.HC" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2210.06774v3", + "title": "Re3: Generating Longer Stories With Recursive Reprompting and Revision", + "abstract": "We consider the problem of automatically generating longer stories of over\ntwo thousand words. Compared to prior work on shorter stories, long-range plot\ncoherence and relevance are more central challenges here. We propose the\nRecursive Reprompting and Revision framework (Re3) to address these challenges\nby (a) prompting a general-purpose language model to construct a structured\noverarching plan, and (b) generating story passages by repeatedly injecting\ncontextual information from both the plan and current story state into a\nlanguage model prompt. We then revise by (c) reranking different continuations\nfor plot coherence and premise relevance, and finally (d) editing the best\ncontinuation for factual consistency. Compared to similar-length stories\ngenerated directly from the same base model, human evaluators judged\nsubstantially more of Re3's stories as having a coherent overarching plot (by\n14% absolute increase), and relevant to the given initial premise (by 20%).", + "authors": "Kevin Yang, Yuandong Tian, Nanyun Peng, Dan Klein", + "published": "2022-10-13", + "updated": "2022-10-22", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2307.10928v4", + "title": "FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets", + "abstract": "Evaluation of Large Language Models (LLMs) is challenging because\ninstruction-following necessitates alignment with human values and the required\nset of skills varies depending on the instruction. However, previous studies\nhave mainly focused on coarse-grained evaluation (i.e. overall preference-based\nevaluation), which limits interpretability since it does not consider the\nnature of user instructions that require instance-wise skill composition. In\nthis paper, we introduce FLASK (Fine-grained Language Model Evaluation based on\nAlignment Skill Sets), a fine-grained evaluation protocol for both human-based\nand model-based evaluation which decomposes coarse-level scoring to a skill\nset-level scoring for each instruction. We experimentally observe that the\nfine-graininess of evaluation is crucial for attaining a holistic view of model\nperformance and increasing the reliability of the evaluation. Using FLASK, we\ncompare multiple open-source and proprietary LLMs and observe a high\ncorrelation between model-based and human-based evaluations. 
We publicly\nrelease the evaluation data and code implementation at\nhttps://github.com/kaistAI/FLASK.", + "authors": "Seonghyeon Ye, Doyoung Kim, Sungdong Kim, Hyeonbin Hwang, Seungone Kim, Yongrae Jo, James Thorne, Juho Kim, Minjoon Seo", + "published": "2023-07-20", + "updated": "2024-04-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2402.04792v2", + "title": "Direct Language Model Alignment from Online AI Feedback", + "abstract": "Direct alignment from preferences (DAP) methods, such as DPO, have recently\nemerged as efficient alternatives to reinforcement learning from human feedback\n(RLHF), that do not require a separate reward model. However, the preference\ndatasets used in DAP methods are usually collected ahead of training and never\nupdated, thus the feedback is purely offline. Moreover, responses in these\ndatasets are often sampled from a language model distinct from the one being\naligned, and since the model evolves over training, the alignment phase is\ninevitably off-policy. In this study, we posit that online feedback is key and\nimproves DAP methods. Our method, online AI feedback (OAIF), uses an LLM as\nannotator: on each training iteration, we sample two responses from the current\nmodel and prompt the LLM annotator to choose which one is preferred, thus\nproviding online feedback. Despite its simplicity, we demonstrate via human\nevaluation in several tasks that OAIF outperforms both offline DAP and RLHF\nmethods. We further show that the feedback leveraged in OAIF is easily\ncontrollable, via instruction prompts to the LLM annotator.", + "authors": "Shangmin Guo, Biao Zhang, Tianlin Liu, Tianqi Liu, Misha Khalman, Felipe Llinares, Alexandre Rame, Thomas Mesnard, Yao Zhao, Bilal Piot, Johan Ferret, Mathieu Blondel", + "published": "2024-02-07", + "updated": "2024-02-29", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.CL", + "cs.HC" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2401.07301v1", + "title": "Small Language Model Can Self-correct", + "abstract": "Generative Language Models (LMs) such as ChatGPT have exhibited remarkable\nperformance across various downstream tasks. Nevertheless, one of their most\nprominent drawbacks is generating inaccurate or false information with a\nconfident tone. Previous studies have devised sophisticated pipelines and\nprompts to induce large LMs to exhibit the capability for self-correction.\nHowever, large LMs are explicitly prompted to verify and modify its answers\nseparately rather than completing all steps spontaneously like humans.\nMoreover, these complex prompts are extremely challenging for small LMs to\nfollow. In this paper, we introduce the \\underline{I}ntrinsic\n\\underline{S}elf-\\underline{C}orrection (ISC) in generative language models,\naiming to correct the initial output of LMs in a self-triggered manner, even\nfor those small LMs with 6 billion parameters. Specifically, we devise a\npipeline for constructing self-correction data and propose Partial Answer\nMasking (PAM), aiming to endow the model with the capability for intrinsic\nself-correction through fine-tuning. We conduct experiments using LMs with\nparameters sizes ranging from 6 billion to 13 billion in two tasks, including\ncommonsense reasoning and factual knowledge reasoning. Our experiments\ndemonstrate that the outputs generated using ISC outperform those generated\nwithout self-correction. 
We believe that the output quality of even small LMs\ncan be further improved by empowering them with the ability to intrinsic\nself-correct.", + "authors": "Haixia Han, Jiaqing Liang, Jie Shi, Qianyu He, Yanghua Xiao", + "published": "2024-01-14", + "updated": "2024-01-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2401.06081v1", + "title": "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint", + "abstract": "Reinforcement learning (RL) has been widely used in training large language\nmodels~(LLMs) for preventing unexpected outputs, \\eg reducing harmfulness and\nerrors. However, existing RL methods mostly adopt the instance-level reward,\nwhich is unable to provide fine-grained supervision for complex reasoning\ntasks, and can not focus on the few key tokens that lead to the incorrectness.\nTo address it, we propose a new RL method named \\textbf{RLMEC} that\nincorporates a generative model as the reward model, which is trained by the\nerroneous solution rewriting task under the minimum editing constraint, and can\nproduce token-level rewards for RL training. Based on the generative reward\nmodel, we design the token-level RL objective for training and an\nimitation-based regularization for stabilizing RL process. And the both\nobjectives focus on the learning of the key tokens for the erroneous solution,\nreducing the effect of other unimportant tokens. The experiment results on\nmathematical tasks and question-answering tasks have demonstrated the\neffectiveness of our approach. Our code and data are available at\n\\url{https://github.com/RUCAIBox/RLMEC}.", + "authors": "Zhipeng Chen, Kun Zhou, Wayne Xin Zhao, Junchen Wan, Fuzheng Zhang, Di Zhang, Ji-Rong Wen", + "published": "2024-01-11", + "updated": "2024-01-11", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2301.10119v2", + "title": "Minimal Value-Equivalent Partial Models for Scalable and Robust Planning in Lifelong Reinforcement Learning", + "abstract": "Learning models of the environment from pure interaction is often considered\nan essential component of building lifelong reinforcement learning agents.\nHowever, the common practice in model-based reinforcement learning is to learn\nmodels that model every aspect of the agent's environment, regardless of\nwhether they are important in coming up with optimal decisions or not. In this\npaper, we argue that such models are not particularly well-suited for\nperforming scalable and robust planning in lifelong reinforcement learning\nscenarios and we propose new kinds of models that only model the relevant\naspects of the environment, which we call \"minimal value-equivalent partial\nmodels\". After providing a formal definition for these models, we provide\ntheoretical results demonstrating the scalability advantages of performing\nplanning with such models and then perform experiments to empirically\nillustrate our theoretical results. Then, we provide some useful heuristics on\nhow to learn these kinds of models with deep learning architectures and\nempirically demonstrate that models learned in such a way can allow for\nperforming planning that is robust to distribution shifts and compounding model\nerrors. 
Overall, both our theoretical and empirical results suggest that\nminimal value-equivalent partial models can provide significant benefits to\nperforming scalable and robust planning in lifelong reinforcement learning\nscenarios.", + "authors": "Safa Alver, Doina Precup", + "published": "2023-01-24", + "updated": "2023-06-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.05440v1", + "title": "Delay-Aware Model-Based Reinforcement Learning for Continuous Control", + "abstract": "Action delays degrade the performance of reinforcement learning in many\nreal-world systems. This paper proposes a formal definition of delay-aware\nMarkov Decision Process and proves it can be transformed into standard MDP with\naugmented states using the Markov reward process. We develop a delay-aware\nmodel-based reinforcement learning framework that can incorporate the\nmulti-step delay into the learned system models without learning effort.\nExperiments with the Gym and MuJoCo platforms show that the proposed\ndelay-aware model-based algorithm is more efficient in training and\ntransferable between systems with various durations of delay compared with\noff-policy model-free reinforcement learning methods. Codes available at:\nhttps://github.com/baimingc/dambrl.", + "authors": "Baiming Chen, Mengdi Xu, Liang Li, Ding Zhao", + "published": "2020-05-11", + "updated": "2020-05-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1901.07905v2", + "title": "Reinforcement Learning Ship Autopilot: Sample efficient and Model Predictive Control-based Approach", + "abstract": "In this research we focus on developing a reinforcement learning system for a\nchallenging task: autonomous control of a real-sized boat, with difficulties\narising from large uncertainties in the challenging ocean environment and the\nextremely high cost of exploring and sampling with a real boat. To this end, we\nexplore a novel Gaussian processes (GP) based reinforcement learning approach\nthat combines sample-efficient model-based reinforcement learning and model\npredictive control (MPC). Our approach, sample-efficient probabilistic model\npredictive control (SPMPC), iteratively learns a Gaussian process dynamics\nmodel and uses it to efficiently update control signals within the MPC closed\ncontrol loop. A system using SPMPC is built to efficiently learn an autopilot\ntask. After investigating its performance in a simulation modeled upon real\nboat driving data, the proposed system successfully learns to drive a\nreal-sized boat equipped with a single engine and sensors measuring GPS, speed,\ndirection, and wind in an autopilot task without human demonstration.", + "authors": "Yunduan Cui, Shigeki Osaki, Takamitsu Matsubara", + "published": "2019-01-23", + "updated": "2019-07-23", + "primary_cat": "cs.SY", + "cats": [ + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2009.09781v1", + "title": "Rethinking Supervised Learning and Reinforcement Learning in Task-Oriented Dialogue Systems", + "abstract": "Dialogue policy learning for task-oriented dialogue systems has enjoyed great\nprogress recently mostly through employing reinforcement learning methods.\nHowever, these approaches have become very sophisticated. 
It is time to\nre-evaluate it. Are we really making progress developing dialogue agents only\nbased on reinforcement learning? We demonstrate how (1)~traditional supervised\nlearning together with (2)~a simulator-free adversarial learning method can be\nused to achieve performance comparable to state-of-the-art RL-based methods.\nFirst, we introduce a simple dialogue action decoder to predict the appropriate\nactions. Then, the traditional multi-label classification solution for dialogue\npolicy learning is extended by adding dense layers to improve the dialogue\nagent performance. Finally, we employ the Gumbel-Softmax estimator to\nalternatively train the dialogue agent and the dialogue reward model without\nusing reinforcement learning. Based on our extensive experimentation, we can\nconclude the proposed methods can achieve more stable and higher performance\nwith fewer efforts, such as the domain knowledge required to design a user\nsimulator and the intractable parameter tuning in reinforcement learning. Our\nmain goal is not to beat reinforcement learning with supervised learning, but\nto demonstrate the value of rethinking the role of reinforcement learning and\nsupervised learning in optimizing task-oriented dialogue systems.", + "authors": "Ziming Li, Julia Kiseleva, Maarten de Rijke", + "published": "2020-09-21", + "updated": "2020-09-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.12516v2", + "title": "Prioritized Experience-based Reinforcement Learning with Human Guidance for Autonomous Driving", + "abstract": "Reinforcement learning (RL) requires skillful definition and remarkable\ncomputational efforts to solve optimization and control problems, which could\nimpair its prospect. Introducing human guidance into reinforcement learning is\na promising way to improve learning performance. In this paper, a comprehensive\nhuman guidance-based reinforcement learning framework is established. A novel\nprioritized experience replay mechanism that adapts to human guidance in the\nreinforcement learning process is proposed to boost the efficiency and\nperformance of the reinforcement learning algorithm. To relieve the heavy\nworkload on human participants, a behavior model is established based on an\nincremental online learning method to mimic human actions. We design two\nchallenging autonomous driving tasks for evaluating the proposed algorithm.\nExperiments are conducted to access the training and testing performance and\nlearning mechanism of the proposed algorithm. Comparative results against the\nstate-of-the-art methods suggest the advantages of our algorithm in terms of\nlearning efficiency, performance, and robustness.", + "authors": "Jingda Wu, Zhiyu Huang, Wenhui Huang, Chen Lv", + "published": "2021-09-26", + "updated": "2022-11-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2007.12666v5", + "title": "Safe Model-Based Reinforcement Learning for Systems with Parametric Uncertainties", + "abstract": "Reinforcement learning has been established over the past decade as an\neffective tool to find optimal control policies for dynamical systems, with\nrecent focus on approaches that guarantee safety during the learning and/or\nexecution phases. 
In general, safety guarantees are critical in reinforcement\nlearning when the system is safety-critical and/or task restarts are not\npractically feasible. In optimal control theory, safety requirements are often\nexpressed in terms of state and/or control constraints. In recent years,\nreinforcement learning approaches that rely on persistent excitation have been\ncombined with a barrier transformation to learn the optimal control policies\nunder state constraints. To soften the excitation requirements, model-based\nreinforcement learning methods that rely on exact model knowledge have also\nbeen integrated with the barrier transformation framework. The objective of\nthis paper is to develop safe reinforcement learning method for deterministic\nnonlinear systems, with parametric uncertainties in the model, to learn\napproximate constrained optimal policies without relying on stringent\nexcitation conditions. To that end, a model-based reinforcement learning\ntechnique that utilizes a novel filtered concurrent learning method, along with\na barrier transformation, is developed in this paper to realize simultaneous\nlearning of unknown model parameters and approximate optimal state-constrained\ncontrol policies for safety-critical systems.", + "authors": "S M Nahid Mahmud, Scott A Nivison, Zachary I. Bell, Rushikesh Kamalapurkar", + "published": "2020-07-24", + "updated": "2021-10-05", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.SY", + "math.OC" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2105.00822v2", + "title": "Generative Adversarial Reward Learning for Generalized Behavior Tendency Inference", + "abstract": "Recent advances in reinforcement learning have inspired increasing interest\nin learning user modeling adaptively through dynamic interactions, e.g., in\nreinforcement learning based recommender systems. Reward function is crucial\nfor most of reinforcement learning applications as it can provide the guideline\nabout the optimization. However, current reinforcement-learning-based methods\nrely on manually-defined reward functions, which cannot adapt to dynamic and\nnoisy environments. Besides, they generally use task-specific reward functions\nthat sacrifice generalization ability. We propose a generative inverse\nreinforcement learning for user behavioral preference modelling, to address the\nabove issues. Instead of using predefined reward functions, our model can\nautomatically learn the rewards from user's actions based on discriminative\nactor-critic network and Wasserstein GAN. Our model provides a general way of\ncharacterizing and explaining underlying behavioral tendencies, and our\nexperiments show our method outperforms state-of-the-art methods in a variety\nof scenarios, namely traffic signal control, online recommender systems, and\nscanpath prediction.", + "authors": "Xiaocong Chen, Lina Yao, Xianzhi Wang, Aixin Sun, Wenjie Zhang, Quan Z. Sheng", + "published": "2021-05-03", + "updated": "2021-05-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.IR" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1507.06923v1", + "title": "A Reinforcement Learning Approach to Online Learning of Decision Trees", + "abstract": "Online decision tree learning algorithms typically examine all features of a\nnew data point to update model parameters. 
We propose a novel alternative,\nReinforcement Learning- based Decision Trees (RLDT), that uses Reinforcement\nLearning (RL) to actively examine a minimal number of features of a data point\nto classify it with high accuracy. Furthermore, RLDT optimizes a long term\nreturn, providing a better alternative to the traditional myopic greedy\napproach to growing decision trees. We demonstrate that this approach performs\nas well as batch learning algorithms and other online decision tree learning\nalgorithms, while making significantly fewer queries about the features of the\ndata points. We also show that RLDT can effectively handle concept drift.", + "authors": "Abhinav Garlapati, Aditi Raghunathan, Vaishnavh Nagarajan, Balaraman Ravindran", + "published": "2015-07-24", + "updated": "2015-07-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.15175v1", + "title": "Coordinated Reinforcement Learning for Optimizing Mobile Networks", + "abstract": "Mobile networks are composed of many base stations and for each of them many\nparameters must be optimized to provide good services. Automatically and\ndynamically optimizing all these entities is challenging as they are sensitive\nto variations in the environment and can affect each other through\ninterferences. Reinforcement learning (RL) algorithms are good candidates to\nautomatically learn base station configuration strategies from incoming data\nbut they are often hard to scale to many agents. In this work, we demonstrate\nhow to use coordination graphs and reinforcement learning in a complex\napplication involving hundreds of cooperating agents. We show how mobile\nnetworks can be modeled using coordination graphs and how network optimization\nproblems can be solved efficiently using multi- agent reinforcement learning.\nThe graph structure occurs naturally from expert knowledge about the network\nand allows to explicitly learn coordinating behaviors between the antennas\nthrough edge value functions represented by neural networks. We show\nempirically that coordinated reinforcement learning outperforms other methods.\nThe use of local RL updates and parameter sharing can handle a large number of\nagents without sacrificing coordination which makes it well suited to optimize\nthe ever denser networks brought by 5G and beyond.", + "authors": "Maxime Bouton, Hasan Farooq, Julien Forgeat, Shruti Bothe, Meral Shirazipour, Per Karlsson", + "published": "2021-09-30", + "updated": "2021-09-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1901.08162v1", + "title": "Causal Reasoning from Meta-reinforcement Learning", + "abstract": "Discovering and exploiting the causal structure in the environment is a\ncrucial challenge for intelligent agents. Here we explore whether causal\nreasoning can emerge via meta-reinforcement learning. We train a recurrent\nnetwork with model-free reinforcement learning to solve a range of problems\nthat each contain causal structure. We find that the trained agent can perform\ncausal reasoning in novel situations in order to obtain rewards. The agent can\nselect informative interventions, draw causal inferences from observational\ndata, and make counterfactual predictions. 
Although established formal causal\nreasoning algorithms also exist, in this paper we show that such reasoning can\narise from model-free reinforcement learning, and suggest that causal reasoning\nin complex settings may benefit from the more end-to-end learning-based\napproaches presented here. This work also offers new strategies for structured\nexploration in reinforcement learning, by providing agents with the ability to\nperform -- and interpret -- experiments.", + "authors": "Ishita Dasgupta, Jane Wang, Silvia Chiappa, Jovana Mitrovic, Pedro Ortega, David Raposo, Edward Hughes, Peter Battaglia, Matthew Botvinick, Zeb Kurth-Nelson", + "published": "2019-01-23", + "updated": "2019-01-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.05546v2", + "title": "Multi-Agent Quantum Reinforcement Learning using Evolutionary Optimization", + "abstract": "Multi-Agent Reinforcement Learning is becoming increasingly more important in\ntimes of autonomous driving and other smart industrial applications.\nSimultaneously a promising new approach to Reinforcement Learning arises using\nthe inherent properties of quantum mechanics, reducing the trainable parameters\nof a model significantly. However, gradient-based Multi-Agent Quantum\nReinforcement Learning methods often have to struggle with barren plateaus,\nholding them back from matching the performance of classical approaches. We\nbuild upon an existing approach for gradient free Quantum Reinforcement\nLearning and propose three genetic variations with Variational Quantum Circuits\nfor Multi-Agent Reinforcement Learning using evolutionary optimization. We\nevaluate our genetic variations in the Coin Game environment and also compare\nthem to classical approaches. We showed that our Variational Quantum Circuit\napproaches perform significantly better compared to a neural network with a\nsimilar amount of trainable parameters. Compared to the larger neural network,\nour approaches archive similar results using $97.88\\%$ less parameters.", + "authors": "Michael K\u00f6lle, Felix Topp, Thomy Phan, Philipp Altmann, Jonas N\u00fc\u00dflein, Claudia Linnhoff-Popien", + "published": "2023-11-09", + "updated": "2024-01-13", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph", + "cs.AI", + "cs.MA" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1206.3281v1", + "title": "Model-Based Bayesian Reinforcement Learning in Large Structured Domains", + "abstract": "Model-based Bayesian reinforcement learning has generated significant\ninterest in the AI community as it provides an elegant solution to the optimal\nexploration-exploitation tradeoff in classical reinforcement learning.\nUnfortunately, the applicability of this type of approach has been limited to\nsmall domains due to the high complexity of reasoning about the joint posterior\nover model parameters. In this paper, we consider the use of factored\nrepresentations combined with online planning techniques, to improve\nscalability of these methods. 
The main contribution of this paper is a Bayesian\nframework for learning the structure and parameters of a dynamical system,\nwhile also simultaneously planning a (near-)optimal sequence of actions.", + "authors": "Stephane Ross, Joelle Pineau", + "published": "2012-06-13", + "updated": "2012-06-13", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2010.13529v2", + "title": "Lyapunov-Based Reinforcement Learning State Estimator", + "abstract": "In this paper, we consider the state estimation problem for nonlinear\nstochastic discrete-time systems. We combine Lyapunov's method in control\ntheory and deep reinforcement learning to design the state estimator. We\ntheoretically prove the convergence of the bounded estimate error solely using\nthe data simulated from the model. An actor-critic reinforcement learning\nalgorithm is proposed to learn the state estimator approximated by a deep\nneural network. The convergence of the algorithm is analysed. The proposed\nLyapunov-based reinforcement learning state estimator is compared with a number\nof existing nonlinear filtering methods through Monte Carlo simulations,\nshowing its advantage in terms of estimate convergence even under some system\nuncertainties such as covariance shift in system noise and randomly missing\nmeasurements. To the best of our knowledge, this is the first reinforcement\nlearning based nonlinear state estimator with bounded estimate error\nperformance guarantee.", + "authors": "Liang Hu, Chengwei Wu, Wei Pan", + "published": "2020-10-26", + "updated": "2021-01-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO", + "cs.SY", + "eess.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1910.11914v3", + "title": "On the convergence of projective-simulation-based reinforcement learning in Markov decision processes", + "abstract": "In recent years, the interest in leveraging quantum effects for enhancing\nmachine learning tasks has significantly increased. Many algorithms speeding up\nsupervised and unsupervised learning were established. The first framework in\nwhich ways to exploit quantum resources specifically for the broader context of\nreinforcement learning were found is projective simulation. Projective\nsimulation presents an agent-based reinforcement learning approach designed in\na manner which may support quantum walk-based speed-ups. Although classical\nvariants of projective simulation have been benchmarked against common\nreinforcement learning algorithms, very few formal theoretical analyses have\nbeen provided for its performance in standard learning scenarios. In this\npaper, we provide a detailed formal discussion of the properties of this model.\nSpecifically, we prove that one version of the projective simulation model,\nunderstood as a reinforcement learning approach, converges to optimal behavior\nin a large class of Markov decision processes. This proof shows that a\nphysically-inspired approach to reinforcement learning can guarantee to\nconverge.", + "authors": "Walter L. Boyajian, Jens Clausen, Lea M. Trenkwalder, Vedran Dunjko, Hans J. 
Briegel", + "published": "2019-10-25", + "updated": "2020-11-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "quant-ph", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.02380v2", + "title": "Adaptive Rollout Length for Model-Based RL Using Model-Free Deep RL", + "abstract": "Model-based reinforcement learning promises to learn an optimal policy from\nfewer interactions with the environment compared to model-free reinforcement\nlearning by learning an intermediate model of the environment in order to\npredict future interactions. When predicting a sequence of interactions, the\nrollout length, which limits the prediction horizon, is a critical\nhyperparameter as accuracy of the predictions diminishes in the regions that\nare further away from real experience. As a result, with a longer rollout\nlength, an overall worse policy is learned in the long run. Thus, the\nhyperparameter provides a trade-off between quality and efficiency. In this\nwork, we frame the problem of tuning the rollout length as a meta-level\nsequential decision-making problem that optimizes the final policy learned by\nmodel-based reinforcement learning given a fixed budget of environment\ninteractions by adapting the hyperparameter dynamically based on feedback from\nthe learning process, such as accuracy of the model and the remaining budget of\ninteractions. We use model-free deep reinforcement learning to solve the\nmeta-level decision problem and demonstrate that our approach outperforms\ncommon heuristic baselines on two well-known reinforcement learning\nenvironments.", + "authors": "Abhinav Bhatia, Philip S. Thomas, Shlomo Zilberstein", + "published": "2022-06-06", + "updated": "2022-06-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2006.07178v2", + "title": "Meta-Reinforcement Learning Robust to Distributional Shift via Model Identification and Experience Relabeling", + "abstract": "Reinforcement learning algorithms can acquire policies for complex tasks\nautonomously. However, the number of samples required to learn a diverse set of\nskills can be prohibitively large. While meta-reinforcement learning methods\nhave enabled agents to leverage prior experience to adapt quickly to new tasks,\ntheir performance depends crucially on how close the new task is to the\npreviously experienced tasks. Current approaches are either not able to\nextrapolate well, or can do so at the expense of requiring extremely large\namounts of data for on-policy meta-training. In this work, we present model\nidentification and experience relabeling (MIER), a meta-reinforcement learning\nalgorithm that is both efficient and extrapolates well when faced with\nout-of-distribution tasks at test time. Our method is based on a simple\ninsight: we recognize that dynamics models can be adapted efficiently and\nconsistently with off-policy data, more easily than policies and value\nfunctions. 
These dynamics models can then be used to continue training policies\nand value functions for out-of-distribution tasks without using\nmeta-reinforcement learning at all, by generating synthetic experience for the\nnew task.", + "authors": "Russell Mendonca, Xinyang Geng, Chelsea Finn, Sergey Levine", + "published": "2020-06-12", + "updated": "2020-06-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2204.01409v1", + "title": "Safe Controller for Output Feedback Linear Systems using Model-Based Reinforcement Learning", + "abstract": "The objective of this research is to enable safety-critical systems to\nsimultaneously learn and execute optimal control policies in a safe manner to\nachieve complex autonomy. Learning optimal policies via trial and error, i.e.,\ntraditional reinforcement learning, is difficult to implement in\nsafety-critical systems, particularly when task restarts are unavailable. Safe\nmodel-based reinforcement learning techniques based on a barrier transformation\nhave recently been developed to address this problem. However, these methods\nrely on full state feedback, limiting their usability in a real-world\nenvironment. In this work, an output-feedback safe model-based reinforcement\nlearning technique based on a novel barrier-aware dynamic state estimator has\nbeen designed to address this issue. The developed approach facilitates\nsimultaneous learning and execution of safe control policies for\nsafety-critical linear systems. Simulation results indicate that barrier\ntransformation is an effective approach to achieve online reinforcement\nlearning in safety-critical systems using output feedback.", + "authors": "S M Nahid Mahmud, Moad Abudia, Scott A Nivison, Zachary I. Bell, Rushikesh Kamalapurkar", + "published": "2022-04-04", + "updated": "2022-04-04", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1912.03918v1", + "title": "Transformer Based Reinforcement Learning For Games", + "abstract": "Recent times have witnessed sharp improvements in reinforcement learning\ntasks using deep reinforcement learning techniques like Deep Q Networks, Policy\nGradients, Actor Critic methods which are based on deep learning based models\nand back-propagation of gradients to train such models. An active area of\nresearch in reinforcement learning is about training agents to play complex\nvideo games, which so far has been something accomplished only by human\nintelligence. Some state of the art performances in video game playing using\ndeep reinforcement learning are obtained by processing the sequence of frames\nfrom video games, passing them through a convolutional network to obtain\nfeatures and then using recurrent neural networks to figure out the action\nleading to optimal rewards. The recurrent neural network will learn to extract\nthe meaningful signal out of the sequence of such features. 
In this work, we\npropose a method utilizing a transformer network which have recently replaced\nRNNs in Natural Language Processing (NLP), and perform experiments to compare\nwith existing methods.", + "authors": "Uddeshya Upadhyay, Nikunj Shah, Sucheta Ravikanti, Mayanka Medhe", + "published": "2019-12-09", + "updated": "2019-12-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NE" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2003.13839v1", + "title": "Model-Reference Reinforcement Learning Control of Autonomous Surface Vehicles with Uncertainties", + "abstract": "This paper presents a novel model-reference reinforcement learning control\nmethod for uncertain autonomous surface vehicles. The proposed control combines\na conventional control method with deep reinforcement learning. With the\nconventional control, we can ensure the learning-based control law provides\nclosed-loop stability for the overall system, and potentially increase the\nsample efficiency of the deep reinforcement learning. With the reinforcement\nlearning, we can directly learn a control law to compensate for modeling\nuncertainties. In the proposed control, a nominal system is employed for the\ndesign of a baseline control law using a conventional control approach. The\nnominal system also defines the desired performance for uncertain autonomous\nvehicles to follow. In comparison with traditional deep reinforcement learning\nmethods, our proposed learning-based control can provide stability guarantees\nand better sample efficiency. We demonstrate the performance of the new\nalgorithm via extensive simulation results.", + "authors": "Qingrui Zhang, Wei Pan, Vasso Reppa", + "published": "2020-03-30", + "updated": "2020-03-30", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.AI", + "cs.LG", + "cs.RO", + "cs.SY", + "math.OC" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1712.04170v2", + "title": "Interpretable Policies for Reinforcement Learning by Genetic Programming", + "abstract": "The search for interpretable reinforcement learning policies is of high\nacademic and industrial interest. Especially for industrial systems, domain\nexperts are more likely to deploy autonomously learned controllers if they are\nunderstandable and convenient to evaluate. Basic algebraic equations are\nsupposed to meet these requirements, as long as they are restricted to an\nadequate complexity. Here we introduce the genetic programming for\nreinforcement learning (GPRL) approach based on model-based batch reinforcement\nlearning and genetic programming, which autonomously learns policy equations\nfrom pre-existing default state-action trajectory samples. GPRL is compared to\na straight-forward method which utilizes genetic programming for symbolic\nregression, yielding policies imitating an existing well-performing, but\nnon-interpretable policy. Experiments on three reinforcement learning\nbenchmarks, i.e., mountain car, cart-pole balancing, and industrial benchmark,\ndemonstrate the superiority of our GPRL approach compared to the symbolic\nregression method. GPRL is capable of producing well-performing interpretable\nreinforcement learning policies from pre-existing default trajectory data.", + "authors": "Daniel Hein, Steffen Udluft, Thomas A. 
Runkler", + "published": "2017-12-12", + "updated": "2018-04-04", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.NE", + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2312.15385v1", + "title": "Discrete-Time Mean-Variance Strategy Based on Reinforcement Learning", + "abstract": "This paper studies a discrete-time mean-variance model based on reinforcement\nlearning. Compared with its continuous-time counterpart in \\cite{zhou2020mv},\nthe discrete-time model makes more general assumptions about the asset's return\ndistribution. Using entropy to measure the cost of exploration, we derive the\noptimal investment strategy, whose density function is also Gaussian type.\nAdditionally, we design the corresponding reinforcement learning algorithm.\nBoth simulation experiments and empirical analysis indicate that our\ndiscrete-time model exhibits better applicability when analyzing real-world\ndata than the continuous-time model.", + "authors": "Xiangyu Cui, Xun Li, Yun Shi, Si Zhao", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "q-fin.MF", + "cats": [ + "q-fin.MF", + "cs.LG", + "q-fin.PM" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1611.00862v1", + "title": "Quantile Reinforcement Learning", + "abstract": "In reinforcement learning, the standard criterion to evaluate policies in a\nstate is the expectation of (discounted) sum of rewards. However, this\ncriterion may not always be suitable, we consider an alternative criterion\nbased on the notion of quantiles. In the case of episodic reinforcement\nlearning problems, we propose an algorithm based on stochastic approximation\nwith two timescales. We evaluate our proposition on a simple model of the TV\nshow, Who wants to be a millionaire.", + "authors": "Hugo Gilbert, Paul Weng", + "published": "2016-11-03", + "updated": "2016-11-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.03688v1", + "title": "A Computational Model of Representation Learning in the Brain Cortex, Integrating Unsupervised and Reinforcement Learning", + "abstract": "A common view on the brain learning processes proposes that the three classic\nlearning paradigms -- unsupervised, reinforcement, and supervised -- take place\nin respectively the cortex, the basal-ganglia, and the cerebellum. However,\ndopamine outbursts, usually assumed to encode reward, are not limited to the\nbasal ganglia but also reach prefrontal, motor, and higher sensory cortices. We\npropose that in the cortex the same reward-based trial-and-error processes\nmight support not only the acquisition of motor representations but also of\nsensory representations. In particular, reward signals might guide\ntrial-and-error processes that mix with associative learning processes to\nsupport the acquisition of representations better serving downstream action\nselection. We tested the soundness of this hypothesis with a computational\nmodel that integrates unsupervised learning (Contrastive Divergence) and\nreinforcement learning (REINFORCE). The model was tested with a task requiring\ndifferent responses to different visual images grouped in categories involving\neither colour, shape, or size. Results show that a balanced mix of unsupervised\nand reinforcement learning processes leads to the best performance. 
Indeed,\nexcessive unsupervised learning tends to under-represent task-relevant features\nwhile excessive reinforcement learning tends to initially learn slowly and then\nto incur in local minima. These results stimulate future empirical studies on\ncategory learning directed to investigate similar effects in the extrastriate\nvisual cortices. Moreover, they prompt further computational investigations\ndirected to study the possible advantages of integrating unsupervised and\nreinforcement learning processes.", + "authors": "Giovanni Granato, Emilio Cartoni, Federico Da Rold, Andrea Mattera, Gianluca Baldassarre", + "published": "2021-06-07", + "updated": "2021-06-07", + "primary_cat": "q-bio.NC", + "cats": [ + "q-bio.NC", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.07789v1", + "title": "Safe Reinforcement Learning by Imagining the Near Future", + "abstract": "Safe reinforcement learning is a promising path toward applying reinforcement\nlearning algorithms to real-world problems, where suboptimal behaviors may lead\nto actual negative consequences. In this work, we focus on the setting where\nunsafe states can be avoided by planning ahead a short time into the future. In\nthis setting, a model-based agent with a sufficiently accurate model can avoid\nunsafe states. We devise a model-based algorithm that heavily penalizes unsafe\ntrajectories, and derive guarantees that our algorithm can avoid unsafe states\nunder certain assumptions. Experiments demonstrate that our algorithm can\nachieve competitive rewards with fewer safety violations in several continuous\ncontrol tasks.", + "authors": "Garrett Thomas, Yuping Luo, Tengyu Ma", + "published": "2022-02-15", + "updated": "2022-02-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1703.04489v1", + "title": "Reinforcement Learning for Transition-Based Mention Detection", + "abstract": "This paper describes an application of reinforcement learning to the mention\ndetection task. We define a novel action-based formulation for the mention\ndetection task, in which a model can flexibly revise past labeling decisions by\ngrouping together tokens and assigning partial mention labels. We devise a\nmethod to create mention-level episodes and we train a model by rewarding\ncorrectly labeled complete mentions, irrespective of the inner structure\ncreated. The model yields results which are on par with a competitive\nsupervised counterpart while being more flexible in terms of achieving targeted\nbehavior through reward modeling and generating internal mention structure,\nespecially on longer mentions.", + "authors": "Georgiana Dinu, Wael Hamza, Radu Florian", + "published": "2017-03-13", + "updated": "2017-03-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.01794v1", + "title": "Imitation Game: A Model-based and Imitation Learning Deep Reinforcement Learning Hybrid", + "abstract": "Autonomous and learning systems based on Deep Reinforcement Learning have\nfirmly established themselves as a foundation for approaches to creating\nresilient and efficient Cyber-Physical Energy Systems. 
However, most current\napproaches suffer from two distinct problems: Modern model-free algorithms such\nas Soft Actor Critic need a high number of samples to learn a meaningful\npolicy, as well as a fallback to ward against concept drifts (e. g.,\ncatastrophic forgetting). In this paper, we present the work in progress\ntowards a hybrid agent architecture that combines model-based Deep\nReinforcement Learning with imitation learning to overcome both problems.", + "authors": "Eric MSP Veith, Torben Logemann, Aleksandr Berezin, Arlena Well\u00dfow, Stephan Balduin", + "published": "2024-04-02", + "updated": "2024-04-02", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1806.01265v2", + "title": "Equivalence Between Wasserstein and Value-Aware Loss for Model-based Reinforcement Learning", + "abstract": "Learning a generative model is a key component of model-based reinforcement\nlearning. Though learning a good model in the tabular setting is a simple task,\nlearning a useful model in the approximate setting is challenging. In this\ncontext, an important question is the loss function used for model learning as\nvarying the loss function can have a remarkable impact on effectiveness of\nplanning. Recently Farahmand et al. (2017) proposed a value-aware model\nlearning (VAML) objective that captures the structure of value function during\nmodel learning. Using tools from Asadi et al. (2018), we show that minimizing\nthe VAML objective is in fact equivalent to minimizing the Wasserstein metric.\nThis equivalence improves our understanding of value-aware models, and also\ncreates a theoretical foundation for applications of Wasserstein in model-based\nreinforcement~learning.", + "authors": "Kavosh Asadi, Evan Cater, Dipendra Misra, Michael L. Littman", + "published": "2018-06-01", + "updated": "2018-07-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2012.00743v1", + "title": "Adaptive Neural Architectures for Recommender Systems", + "abstract": "Deep learning has proved an effective means to capture the non-linear\nassociations of user preferences. However, the main drawback of existing deep\nlearning architectures is that they follow a fixed recommendation strategy,\nignoring users' real time-feedback. Recent advances of deep reinforcement\nstrategies showed that recommendation policies can be continuously updated\nwhile users interact with the system. In doing so, we can learn the optimal\npolicy that fits to users' preferences over the recommendation sessions. The\nmain drawback of deep reinforcement strategies is that are based on predefined\nand fixed neural architectures. To shed light on how to handle this issue, in\nthis study we first present deep reinforcement learning strategies for\nrecommendation and discuss the main limitations due to the fixed neural\narchitectures. Then, we detail how recent advances on progressive neural\narchitectures are used for consecutive tasks in other research domains.\nFinally, we present the key challenges to fill the gap between deep\nreinforcement learning and adaptive neural architectures. 
We provide guidelines\nfor searching for the best neural architecture based on each user feedback via\nreinforcement learning, while considering the prediction performance on\nreal-time recommendations and the model complexity.", + "authors": "Dimitrios Rafailidis, Stefanos Antaris", + "published": "2020-11-11", + "updated": "2020-11-11", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.03188v3", + "title": "Optimizing Quantum Variational Circuits with Deep Reinforcement Learning", + "abstract": "Quantum Machine Learning (QML) is considered to be one of the most promising\napplications of near term quantum devices. However, the optimization of quantum\nmachine learning models presents numerous challenges arising from the\nimperfections of hardware and the fundamental obstacles in navigating an\nexponentially scaling Hilbert space. In this work, we evaluate the potential of\ncontemporary methods in deep reinforcement learning to augment gradient based\noptimization routines in quantum variational circuits. We find that\nreinforcement learning augmented optimizers consistently outperform gradient\ndescent in noisy environments. All code and pretrained weights are available to\nreplicate the results or deploy the models at:\nhttps://github.com/lockwo/rl_qvc_opt.", + "authors": "Owen Lockwood", + "published": "2021-09-07", + "updated": "2022-05-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "quant-ph" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1810.01112v1", + "title": "The Dreaming Variational Autoencoder for Reinforcement Learning Environments", + "abstract": "Reinforcement learning has shown great potential in generalizing over raw\nsensory data using only a single neural network for value optimization. There\nare several challenges in the current state-of-the-art reinforcement learning\nalgorithms that prevent them from converging towards the global optima. It is\nlikely that the solution to these problems lies in short- and long-term\nplanning, exploration and memory management for reinforcement learning\nalgorithms. Games are often used to benchmark reinforcement learning algorithms\nas they provide a flexible, reproducible, and easy to control environment.\nRegardless, few games feature a state-space where results in exploration,\nmemory, and planning are easily perceived. This paper presents The Dreaming\nVariational Autoencoder (DVAE), a neural network based generative modeling\narchitecture for exploration in environments with sparse feedback. We further\npresent Deep Maze, a novel and flexible maze engine that challenges DVAE in\npartial and fully-observable state-spaces, long-horizon tasks, and\ndeterministic and stochastic problems. 
We show initial findings and encourage\nfurther work in reinforcement learning driven by generative exploration.", + "authors": "Per-Arne Andersen, Morten Goodwin, Ole-Christoffer Granmo", + "published": "2018-10-02", + "updated": "2018-10-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1903.08543v6", + "title": "Optimizing thermodynamic trajectories using evolutionary and gradient-based reinforcement learning", + "abstract": "Using a model heat engine, we show that neural network-based reinforcement\nlearning can identify thermodynamic trajectories of maximal efficiency. We\nconsider both gradient and gradient-free reinforcement learning. We use an\nevolutionary learning algorithm to evolve a population of neural networks,\nsubject to a directive to maximize the efficiency of a trajectory composed of a\nset of elementary thermodynamic processes; the resulting networks learn to\ncarry out the maximally-efficient Carnot, Stirling, or Otto cycles. When given\nan additional irreversible process, this evolutionary scheme learns a\npreviously unknown thermodynamic cycle. Gradient-based reinforcement learning\nis able to learn the Stirling cycle, whereas an evolutionary approach achieves\nthe optimal Carnot cycle. Our results show how the reinforcement learning\nstrategies developed for game playing can be applied to solve physical problems\nconditioned upon path-extensive order parameters.", + "authors": "Chris Beeler, Uladzimir Yahorau, Rory Coles, Kyle Mills, Stephen Whitelam, Isaac Tamblyn", + "published": "2019-03-20", + "updated": "2021-11-22", + "primary_cat": "cs.NE", + "cats": [ + "cs.NE", + "cond-mat.stat-mech", + "cs.LG", + "physics.comp-ph" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.02429v1", + "title": "AD4RL: Autonomous Driving Benchmarks for Offline Reinforcement Learning with Value-based Dataset", + "abstract": "Offline reinforcement learning has emerged as a promising technology by\nenhancing its practicality through the use of pre-collected large datasets.\nDespite its practical benefits, most algorithm development research in offline\nreinforcement learning still relies on game tasks with synthetic datasets. To\naddress such limitations, this paper provides autonomous driving datasets and\nbenchmarks for offline reinforcement learning research. We provide 19 datasets,\nincluding real-world human driver's datasets, and seven popular offline\nreinforcement learning algorithms in three realistic driving scenarios. We also\nprovide a unified decision-making process model that can operate effectively\nacross different scenarios, serving as a reference framework in algorithm\ndesign. Our research lays the groundwork for further collaborations in the\ncommunity to explore practical aspects of existing reinforcement learning\nmethods. 
Dataset and codes can be found in https://sites.google.com/view/ad4rl.", + "authors": "Dongsu Lee, Chanin Eom, Minhae Kwon", + "published": "2024-04-03", + "updated": "2024-04-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2303.13489v2", + "title": "Boosting Reinforcement Learning and Planning with Demonstrations: A Survey", + "abstract": "Although reinforcement learning has seen tremendous success recently, this\nkind of trial-and-error learning can be impractical or inefficient in complex\nenvironments. The use of demonstrations, on the other hand, enables agents to\nbenefit from expert knowledge rather than having to discover the best action to\ntake through exploration. In this survey, we discuss the advantages of using\ndemonstrations in sequential decision making, various ways to apply\ndemonstrations in learning-based decision making paradigms (for example,\nreinforcement learning and planning in the learned models), and how to collect\nthe demonstrations in various scenarios. Additionally, we exemplify a practical\npipeline for generating and utilizing demonstrations in the recently proposed\nManiSkill robot learning benchmark.", + "authors": "Tongzhou Mu, Hao Su", + "published": "2023-03-23", + "updated": "2023-03-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2305.00477v2", + "title": "Posterior Sampling for Deep Reinforcement Learning", + "abstract": "Despite remarkable successes, deep reinforcement learning algorithms remain\nsample inefficient: they require an enormous amount of trial and error to find\ngood policies. Model-based algorithms promise sample efficiency by building an\nenvironment model that can be used for planning. Posterior Sampling for\nReinforcement Learning is such a model-based algorithm that has attracted\nsignificant interest due to its performance in the tabular setting. This paper\nintroduces Posterior Sampling for Deep Reinforcement Learning (PSDRL), the\nfirst truly scalable approximation of Posterior Sampling for Reinforcement\nLearning that retains its model-based essence. PSDRL combines efficient\nuncertainty quantification over latent state space models with a specially\ntailored continual planning algorithm based on value-function approximation.\nExtensive experiments on the Atari benchmark show that PSDRL significantly\noutperforms previous state-of-the-art attempts at scaling up posterior sampling\nwhile being competitive with a state-of-the-art (model-based) reinforcement\nlearning method, both in sample efficiency and computational efficiency.", + "authors": "Remo Sasso, Michelangelo Conserva, Paulo Rauber", + "published": "2023-04-30", + "updated": "2023-05-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "68T07", + "I.2.m" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.07240v1", + "title": "Model-Reference Reinforcement Learning for Collision-Free Tracking Control of Autonomous Surface Vehicles", + "abstract": "This paper presents a novel model-reference reinforcement learning algorithm\nfor the intelligent tracking control of uncertain autonomous surface vehicles\nwith collision avoidance. 
The proposed control algorithm combines a\nconventional control method with reinforcement learning to enhance control\naccuracy and intelligence. In the proposed control design, a nominal system is\nconsidered for the design of a baseline tracking controller using a\nconventional control approach. The nominal system also defines the desired\nbehaviour of uncertain autonomous surface vehicles in an obstacle-free\nenvironment. Thanks to reinforcement learning, the overall tracking controller\nis capable of compensating for model uncertainties and achieving collision\navoidance at the same time in environments with obstacles. In comparison to\ntraditional deep reinforcement learning methods, our proposed learning-based\ncontrol can provide stability guarantees and better sample efficiency. We\ndemonstrate the performance of the new algorithm using an example of autonomous\nsurface vehicles.", + "authors": "Qingrui Zhang, Wei Pan, Vasso Reppa", + "published": "2020-08-17", + "updated": "2020-08-17", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.LG", + "cs.RO", + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1802.10592v2", + "title": "Model-Ensemble Trust-Region Policy Optimization", + "abstract": "Model-free reinforcement learning (RL) methods are succeeding in a growing\nnumber of tasks, aided by recent advances in deep learning. However, they tend\nto suffer from high sample complexity, which hinders their use in real-world\ndomains. Alternatively, model-based reinforcement learning promises to reduce\nsample complexity, but tends to require careful tuning and to date have\nsucceeded mainly in restrictive domains where simple models are sufficient for\nlearning. In this paper, we analyze the behavior of vanilla model-based\nreinforcement learning methods when deep neural networks are used to learn both\nthe model and the policy, and show that the learned policy tends to exploit\nregions where insufficient data is available for the model to be learned,\ncausing instability in training. To overcome this issue, we propose to use an\nensemble of models to maintain the model uncertainty and regularize the\nlearning process. We further show that the use of likelihood ratio derivatives\nyields much more stable learning than backpropagation through time. Altogether,\nour approach Model-Ensemble Trust-Region Policy Optimization (ME-TRPO)\nsignificantly reduces the sample complexity compared to model-free deep RL\nmethods on challenging continuous control benchmark tasks.", + "authors": "Thanard Kurutach, Ignasi Clavera, Yan Duan, Aviv Tamar, Pieter Abbeel", + "published": "2018-02-28", + "updated": "2018-10-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2301.03933v1", + "title": "Hint assisted reinforcement learning: an application in radio astronomy", + "abstract": "Model based reinforcement learning has proven to be more sample efficient\nthan model free methods. On the other hand, the construction of a dynamics\nmodel in model based reinforcement learning has increased complexity. Data\nprocessing tasks in radio astronomy are such situations where the original\nproblem which is being solved by reinforcement learning itself is the creation\nof a model. 
Fortunately, many methods based on heuristics or signal processing\ndo exist to perform the same tasks and we can leverage them to propose the best\naction to take, or in other words, to provide a `hint'. We propose to use\n`hints' generated by the environment as an aid to the reinforcement learning\nprocess mitigating the complexity of model construction. We modify the soft\nactor critic algorithm to use hints and use the alternating direction method of\nmultipliers algorithm with inequality constraints to train the agent. Results\nin several environments show that we get the increased sample efficiency by\nusing hints as compared to model free methods.", + "authors": "Sarod Yatawatta", + "published": "2023-01-10", + "updated": "2023-01-10", + "primary_cat": "astro-ph.IM", + "cats": [ + "astro-ph.IM", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.03967v1", + "title": "A Deep Reinforcement Learning Approach for Composing Moving IoT Services", + "abstract": "We develop a novel framework for efficiently and effectively discovering\ncrowdsourced services that move in close proximity to a user over a period of\ntime. We introduce a moving crowdsourced service model which is modelled as a\nmoving region. We propose a deep reinforcement learning-based composition\napproach to select and compose moving IoT services considering quality\nparameters. Additionally, we develop a parallel flock-based service discovery\nalgorithm as a ground-truth to measure the accuracy of the proposed approach.\nThe experiments on two real-world datasets verify the effectiveness and\nefficiency of the deep reinforcement learning-based approach.", + "authors": "Azadeh Ghari Neiat, Athman Bouguettaya, Mohammed Bahutair", + "published": "2021-11-06", + "updated": "2021-11-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1705.07460v1", + "title": "Experience enrichment based task independent reward model", + "abstract": "For most reinforcement learning approaches, the learning is performed by\nmaximizing an accumulative reward that is expectedly and manually defined for\nspecific tasks. However, in real world, rewards are emergent phenomena from the\ncomplex interactions between agents and environments. In this paper, we propose\nan implicit generic reward model for reinforcement learning. Unlike those\nrewards that are manually defined for specific tasks, such implicit reward is\ntask independent. It only comes from the deviation from the agents' previous\nexperiences.", + "authors": "Min Xu", + "published": "2017-05-21", + "updated": "2017-05-21", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1811.00128v1", + "title": "Towards a Simple Approach to Multi-step Model-based Reinforcement Learning", + "abstract": "When environmental interaction is expensive, model-based reinforcement\nlearning offers a solution by planning ahead and avoiding costly mistakes.\nModel-based agents typically learn a single-step transition model. In this\npaper, we propose a multi-step model that predicts the outcome of an action\nsequence with variable length. We show that this model is easy to learn, and\nthat the model can make policy-conditional predictions. 
We report preliminary\nresults that show a clear advantage for the multi-step model compared to its\none-step counterpart.", + "authors": "Kavosh Asadi, Evan Cater, Dipendra Misra, Michael L. Littman", + "published": "2018-10-31", + "updated": "2018-10-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2012.07369v2", + "title": "Learning for MPC with Stability & Safety Guarantees", + "abstract": "The combination of learning methods with Model Predictive Control (MPC) has\nattracted a significant amount of attention in the recent literature. The hope\nof this combination is to reduce the reliance of MPC schemes on accurate\nmodels, and to tap into the fast developing machine learning and reinforcement\nlearning tools to exploit the growing amount of data available for many\nsystems. In particular, the combination of reinforcement learning and MPC has\nbeen proposed as a viable and theoretically justified approach to introduce\nexplainable, safe and stable policies in reinforcement learning. However, a\nformal theory detailing how the safety and stability of an MPC-based policy can\nbe maintained through the parameter updates delivered by the learning tools is\nstill lacking. This paper addresses this gap. The theory is developed for the\ngeneric Robust MPC case, and applied in simulation in the robust tube-based\nlinear MPC case, where the theory is fairly easy to deploy in practice. The\npaper focuses on Reinforcement Learning as a learning tool, but it applies to\nany learning method that updates the MPC parameters online.", + "authors": "S\u00e9bastien Gros, Mario Zanon", + "published": "2020-12-14", + "updated": "2022-07-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.SY", + "eess.SY", + "math.OC" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2209.06604v1", + "title": "Learning state correspondence of reinforcement learning tasks for knowledge transfer", + "abstract": "Deep reinforcement learning has shown an ability to achieve super-human\nperformance in solving complex reinforcement learning (RL) tasks only from\nraw-pixels. However, it fails to reuse knowledge from previously learnt tasks\nto solve new, unseen ones. Generalizing and reusing knowledge are the\nfundamental requirements for creating a truly intelligent agent. This work\nproposes a general method for one-to-one transfer learning based on generative\nadversarial network model tailored to RL task.", + "authors": "Marko Ruman, Tatiana V. Guy", + "published": "2022-09-14", + "updated": "2022-09-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1305.1809v2", + "title": "Cover Tree Bayesian Reinforcement Learning", + "abstract": "This paper proposes an online tree-based Bayesian approach for reinforcement\nlearning. For inference, we employ a generalised context tree model. This\ndefines a distribution on multivariate Gaussian piecewise-linear models, which\ncan be updated in closed form. The tree structure itself is constructed using\nthe cover tree method, which remains efficient in high dimensional spaces. We\ncombine the model with Thompson sampling and approximate dynamic programming to\nobtain effective exploration policies in unknown environments. 
The flexibility\nand computational simplicity of the model render it suitable for many\nreinforcement learning problems in continuous state spaces. We demonstrate this\nin an experimental comparison with least squares policy iteration.", + "authors": "Nikolaos Tziortziotis, Christos Dimitrakakis, Konstantinos Blekas", + "published": "2013-05-08", + "updated": "2014-05-02", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.00766v1", + "title": "Tracking the Race Between Deep Reinforcement Learning and Imitation Learning -- Extended Version", + "abstract": "Learning-based approaches for solving large sequential decision making\nproblems have become popular in recent years. The resulting agents perform\ndifferently and their characteristics depend on those of the underlying\nlearning approach. Here, we consider a benchmark planning problem from the\nreinforcement learning domain, the Racetrack, to investigate the properties of\nagents derived from different deep (reinforcement) learning approaches. We\ncompare the performance of deep supervised learning, in particular imitation\nlearning, to reinforcement learning for the Racetrack model. We find that\nimitation learning yields agents that follow more risky paths. In contrast, the\ndecisions of deep reinforcement learning are more foresighted, i.e., avoid\nstates in which fatal decisions are more likely. Our evaluations show that for\nthis sequential decision making problem, deep reinforcement learning performs\nbest in many aspects even though for imitation learning optimal decisions are\nconsidered.", + "authors": "Timo P. Gros, Daniel H\u00f6ller, J\u00f6rg Hoffmann, Verena Wolf", + "published": "2020-08-03", + "updated": "2020-08-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2010.04816v1", + "title": "Characterizing Policy Divergence for Personalized Meta-Reinforcement Learning", + "abstract": "Despite ample motivation from costly exploration and limited trajectory data,\nrapidly adapting to new environments with few-shot reinforcement learning (RL)\ncan remain a challenging task, especially with respect to personalized\nsettings. Here, we consider the problem of recommending optimal policies to a\nset of multiple entities each with potentially different characteristics, such\nthat individual entities may parameterize distinct environments with unique\ntransition dynamics. Inspired by existing literature in meta-learning, we\nextend previous work by focusing on the notion that certain environments are\nmore similar to each other than others in personalized settings, and propose a\nmodel-free meta-learning algorithm that prioritizes past experiences by\nrelevance during gradient-based adaptation. Our algorithm involves\ncharacterizing past policy divergence through methods in inverse reinforcement\nlearning, and we illustrate how such metrics are able to effectively\ndistinguish past policy parameters by the environment they were deployed in,\nleading to more effective fast adaptation during test time. 
To study\npersonalization more effectively we introduce a navigation testbed to\nspecifically incorporate environment diversity across training episodes, and\ndemonstrate that our approach outperforms meta-learning alternatives with\nrespect to few-shot reinforcement learning in personalized settings.", + "authors": "Michael Zhang", + "published": "2020-10-09", + "updated": "2020-10-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2110.14524v1", + "title": "Model based Multi-agent Reinforcement Learning with Tensor Decompositions", + "abstract": "A challenge in multi-agent reinforcement learning is to be able to generalize\nover intractable state-action spaces. Inspired from Tesseract [Mahajan et al.,\n2021], this position paper investigates generalisation in state-action space\nover unexplored state-action pairs by modelling the transition and reward\nfunctions as tensors of low CP-rank. Initial experiments on synthetic MDPs show\nthat using tensor decompositions in a model-based reinforcement learning\nalgorithm can lead to much faster convergence if the true transition and reward\nfunctions are indeed of low rank.", + "authors": "Pascal Van Der Vaart, Anuj Mahajan, Shimon Whiteson", + "published": "2021-10-27", + "updated": "2021-10-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.MA" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.11142v1", + "title": "Two-stage Deep Reinforcement Learning for Inverter-based Volt-VAR Control in Active Distribution Networks", + "abstract": "Model-based Vol/VAR optimization method is widely used to eliminate voltage\nviolations and reduce network losses. However, the parameters of active\ndistribution networks(ADNs) are not onsite identified, so significant errors\nmay be involved in the model and make the model-based method infeasible. To\ncope with this critical issue, we propose a novel two-stage deep reinforcement\nlearning (DRL) method to improve the voltage profile by regulating\ninverter-based energy resources, which consists of offline stage and online\nstage. In the offline stage, a highly efficient adversarial reinforcement\nlearning algorithm is developed to train an offline agent robust to the model\nmismatch. In the sequential online stage, we transfer the offline agent safely\nas the online agent to perform continuous learning and controlling online with\nsignificantly improved safety and efficiency. Numerical simulations on IEEE\ntest cases not only demonstrate that the proposed adversarial reinforcement\nlearning algorithm outperforms the state-of-art algorithm, but also show that\nour proposed two-stage method achieves much better performance than the\nexisting DRL based methods in the online application.", + "authors": "Haotian Liu, Wenchuan Wu", + "published": "2020-05-20", + "updated": "2020-05-20", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.LG", + "cs.SY", + "J.7; C.3" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1901.02219v1", + "title": "Uncertainty-Based Out-of-Distribution Detection in Deep Reinforcement Learning", + "abstract": "We consider the problem of detecting out-of-distribution (OOD) samples in\ndeep reinforcement learning. 
In a value based reinforcement learning setting,\nwe propose to use uncertainty estimation techniques directly on the agent's\nvalue estimating neural network to detect OOD samples. The focus of our work\nlies in analyzing the suitability of approximate Bayesian inference methods and\nrelated ensembling techniques that generate uncertainty estimates. Although\nprior work has shown that dropout-based variational inference techniques and\nbootstrap-based approaches can be used to model epistemic uncertainty, the\nsuitability for detecting OOD samples in deep reinforcement learning remains an\nopen question. Our results show that uncertainty estimation can be used to\ndifferentiate in- from out-of-distribution samples. Over the complete training\nprocess of the reinforcement learning agents, bootstrap-based approaches tend\nto produce more reliable epistemic uncertainty estimates, when compared to\ndropout-based approaches.", + "authors": "Andreas Sedlmeier, Thomas Gabor, Thomy Phan, Lenz Belzner, Claudia Linnhoff-Popien", + "published": "2019-01-08", + "updated": "2019-01-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.02104v2", + "title": "Model-Based Episodic Memory Induces Dynamic Hybrid Controls", + "abstract": "Episodic control enables sample efficiency in reinforcement learning by\nrecalling past experiences from an episodic memory. We propose a new\nmodel-based episodic memory of trajectories addressing current limitations of\nepisodic control. Our memory estimates trajectory values, guiding the agent\ntowards good policies. Built upon the memory, we construct a complementary\nlearning model via a dynamic hybrid control unifying model-based, episodic and\nhabitual learning into a single architecture. Experiments demonstrate that our\nmodel allows significantly faster and better learning than other strong\nreinforcement learning agents across a variety of environments including\nstochastic and non-Markovian settings.", + "authors": "Hung Le, Thommen Karimpanal George, Majid Abdolshah, Truyen Tran, Svetha Venkatesh", + "published": "2021-11-03", + "updated": "2021-11-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1901.01977v1", + "title": "Accelerating Goal-Directed Reinforcement Learning by Model Characterization", + "abstract": "We propose a hybrid approach aimed at improving the sample efficiency in\ngoal-directed reinforcement learning. We do this via a two-step mechanism where\nfirstly, we approximate a model from Model-Free reinforcement learning. Then,\nwe leverage this approximate model along with a notion of reachability using\nMean First Passage Times to perform Model-Based reinforcement learning. Built\non such a novel observation, we design two new algorithms - Mean First Passage\nTime based Q-Learning (MFPT-Q) and Mean First Passage Time based DYNA\n(MFPT-DYNA), that have been fundamentally modified from the state-of-the-art\nreinforcement learning techniques. 
Preliminary results have shown that our\nhybrid approaches converge with much fewer iterations than their corresponding\nstate-of-the-art counterparts and therefore requiring much fewer samples and\nmuch fewer training trials to converge.", + "authors": "Shoubhik Debnath, Gaurav Sukhatme, Lantao Liu", + "published": "2019-01-04", + "updated": "2019-01-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.16543v2", + "title": "Model-based deep reinforcement learning for accelerated learning from flow simulations", + "abstract": "In recent years, deep reinforcement learning has emerged as a technique to\nsolve closed-loop flow control problems. Employing simulation-based\nenvironments in reinforcement learning enables a priori end-to-end optimization\nof the control system, provides a virtual testbed for safety-critical control\napplications, and allows to gain a deep understanding of the control\nmechanisms. While reinforcement learning has been applied successfully in a\nnumber of rather simple flow control benchmarks, a major bottleneck toward\nreal-world applications is the high computational cost and turnaround time of\nflow simulations. In this contribution, we demonstrate the benefits of\nmodel-based reinforcement learning for flow control applications. Specifically,\nwe optimize the policy by alternating between trajectories sampled from flow\nsimulations and trajectories sampled from an ensemble of environment models.\nThe model-based learning reduces the overall training time by up to $85\\%$ for\nthe fluidic pinball test case. Even larger savings are expected for more\ndemanding flow simulations.", + "authors": "Andre Weiner, Janis Geise", + "published": "2024-02-26", + "updated": "2024-04-10", + "primary_cat": "physics.flu-dyn", + "cats": [ + "physics.flu-dyn", + "cs.CE", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.12095v1", + "title": "Document-editing Assistants and Model-based Reinforcement Learning as a Path to Conversational AI", + "abstract": "Intelligent assistants that follow commands or answer simple questions, such\nas Siri and Google search, are among the most economically important\napplications of AI. Future conversational AI assistants promise even greater\ncapabilities and a better user experience through a deeper understanding of the\ndomain, the user, or the user's purposes. But what domain and what methods are\nbest suited to researching and realizing this promise? In this article we argue\nfor the domain of voice document editing and for the methods of model-based\nreinforcement learning. The primary advantages of voice document editing are\nthat the domain is tightly scoped and that it provides something for the\nconversation to be about (the document) that is delimited and fully accessible\nto the intelligent assistant. The advantages of reinforcement learning in\ngeneral are that its methods are designed to learn from interaction without\nexplicit instruction and that it formalizes the purposes of the assistant.\nModel-based reinforcement learning is needed in order to genuinely understand\nthe domain of discourse and thereby work efficiently with the user to achieve\ntheir goals. 
Together, voice document editing and model-based reinforcement\nlearning comprise a promising research direction for achieving conversational\nAI.", + "authors": "Katya Kudashkina, Patrick M. Pilarski, Richard S. Sutton", + "published": "2020-08-27", + "updated": "2020-08-27", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.HC", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2304.00006v1", + "title": "Bi-directional personalization reinforcement learning-based architecture with active learning using a multi-model data service for the travel nursing industry", + "abstract": "The challenges of using inadequate online recruitment systems can be\naddressed with machine learning and software engineering techniques.\nBi-directional personalization reinforcement learning-based architecture with\nactive learning can get recruiters to recommend qualified applicants and also\nenable applicants to receive personalized job recommendations. This paper\nfocuses on how machine learning techniques can enhance the recruitment process\nin the travel nursing industry by helping speed up data acquisition using a\nmulti-model data service and then providing personalized recommendations using\nbi-directional reinforcement learning with active learning. This need was\nespecially evident when trying to respond to the overwhelming needs of\nhealthcare facilities during the COVID-19 pandemic. The need for traveling\nnurses and other healthcare professionals was more evident during the lockdown\nperiod. A data service was architected for job feed processing using an\norchestration of natural language processing (NLP) models that synthesize\njob-related data into a database efficiently and accurately. The multi-model\ndata service provided the data necessary to develop a bi-directional\npersonalization system using reinforcement learning with active learning that\ncould recommend travel nurses and healthcare professionals to recruiters and\nprovide job recommendations to applicants using an internally developed smart\nmatch score as a basis. The bi-directional personalization reinforcement\nlearning-based architecture with active learning combines two personalization\nsystems - one that runs forward to recommend qualified candidates for jobs and\nanother that runs backward and recommends jobs for applicants.", + "authors": "Ezana N. Beyenne", + "published": "2023-03-14", + "updated": "2023-03-14", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.LG", + "I.2" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2010.12142v1", + "title": "Bridging Imagination and Reality for Model-Based Deep Reinforcement Learning", + "abstract": "Sample efficiency has been one of the major challenges for deep reinforcement\nlearning. Recently, model-based reinforcement learning has been proposed to\naddress this challenge by performing planning on imaginary trajectories with a\nlearned world model. However, world model learning may suffer from overfitting\nto training trajectories, and thus model-based value estimation and policy\nsearch will be pone to be sucked in an inferior local policy. In this paper, we\npropose a novel model-based reinforcement learning algorithm, called BrIdging\nReality and Dream (BIRD). 
It maximizes the mutual information between imaginary\nand real trajectories so that the policy improvement learned from imaginary\ntrajectories can be easily generalized to real trajectories. We demonstrate\nthat our approach improves sample efficiency of model-based planning, and\nachieves state-of-the-art performance on challenging visual control benchmarks.", + "authors": "Guangxiang Zhu, Minghao Zhang, Honglak Lee, Chongjie Zhang", + "published": "2020-10-23", + "updated": "2020-10-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2006.09234v1", + "title": "Model Embedding Model-Based Reinforcement Learning", + "abstract": "Model-based reinforcement learning (MBRL) has shown its advantages in\nsample-efficiency over model-free reinforcement learning (MFRL). Despite the\nimpressive results it achieves, it still faces a trade-off between the ease of\ndata generation and model bias. In this paper, we propose a simple and elegant\nmodel-embedding model-based reinforcement learning (MEMB) algorithm in the\nframework of the probabilistic reinforcement learning. To balance the\nsample-efficiency and model bias, we exploit both real and imaginary data in\nthe training. In particular, we embed the model in the policy update and learn\n$Q$ and $V$ functions from the real data set. We provide the theoretical\nanalysis of MEMB with the Lipschitz continuity assumption on the model and\npolicy. At last, we evaluate MEMB on several benchmarks and demonstrate our\nalgorithm can achieve state-of-the-art performance.", + "authors": "Xiaoyu Tan, Chao Qu, Junwu Xiong, James Zhang", + "published": "2020-06-16", + "updated": "2020-06-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2209.05530v1", + "title": "Model-based Reinforcement Learning with Multi-step Plan Value Estimation", + "abstract": "A promising way to improve the sample efficiency of reinforcement learning is\nmodel-based methods, in which many explorations and evaluations can happen in\nthe learned models to save real-world samples. However, when the learned model\nhas a non-negligible model error, sequential steps in the model are hard to be\naccurately evaluated, limiting the model's utilization. This paper proposes to\nalleviate this issue by introducing multi-step plans to replace multi-step\nactions for model-based RL. We employ the multi-step plan value estimation,\nwhich evaluates the expected discounted return after executing a sequence of\naction plans at a given state, and updates the policy by directly computing the\nmulti-step policy gradient via plan value estimation. 
The new model-based\nreinforcement learning algorithm MPPVE (Model-based Planning Policy Learning\nwith Multi-step Plan Value Estimation) shows a better utilization of the\nlearned model and achieves a better sample efficiency than state-of-the-art\nmodel-based RL approaches.", + "authors": "Haoxin Lin, Yihao Sun, Jiaji Zhang, Yang Yu", + "published": "2022-09-12", + "updated": "2022-09-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.07315v1", + "title": "An introduction to reinforcement learning for neuroscience", + "abstract": "Reinforcement learning has a rich history in neuroscience, from early work on\ndopamine as a reward prediction error signal for temporal difference learning\n(Schultz et al., 1997) to recent work suggesting that dopamine could implement\na form of 'distributional reinforcement learning' popularized in deep learning\n(Dabney et al., 2020). Throughout this literature, there has been a tight link\nbetween theoretical advances in reinforcement learning and neuroscientific\nexperiments and findings. As a result, the theories describing our experimental\ndata have become increasingly complex and difficult to navigate. In this\nreview, we cover the basic theory underlying classical work in reinforcement\nlearning and build up to an introductory overview of methods used in modern\ndeep reinforcement learning that have found applications in systems\nneuroscience. We start with an overview of the reinforcement learning problem\nand classical temporal difference algorithms, followed by a discussion of\n'model-free' and 'model-based' reinforcement learning together with methods\nsuch as DYNA and successor representations that fall in between these two\ncategories. Throughout these sections, we highlight the close parallels between\nthe machine learning methods and related work in both experimental and\ntheoretical neuroscience. We then provide an introduction to deep reinforcement\nlearning with examples of how these methods have been used to model different\nlearning phenomena in the systems neuroscience literature, such as\nmeta-reinforcement learning (Wang et al., 2018) and distributional\nreinforcement learning (Dabney et al., 2020). Code that implements the methods\ndiscussed in this work and generates the figures is also provided.", + "authors": "Kristopher T. Jensen", + "published": "2023-11-13", + "updated": "2023-11-13", + "primary_cat": "q-bio.NC", + "cats": [ + "q-bio.NC", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1906.12189v1", + "title": "Learning-based Model Predictive Control for Safe Exploration and Reinforcement Learning", + "abstract": "Reinforcement learning has been successfully used to solve difficult tasks in\ncomplex unknown environments. However, these methods typically do not provide\nany safety guarantees during the learning process. This is particularly\nproblematic, since reinforcement learning agent actively explore their\nenvironment. This prevents their use in safety-critical, real-world\napplications. In this paper, we present a learning-based model predictive\ncontrol scheme that provides high-probability safety guarantees throughout the\nlearning process. Based on a reliable statistical model, we construct provably\naccurate confidence intervals on predicted trajectories. 
Unlike previous\napproaches, we allow for input-dependent uncertainties. Based on these reliable\npredictions, we guarantee that trajectories satisfy safety constraints.\nMoreover, we use a terminal set constraint to recursively guarantee the\nexistence of safe control actions at every iteration. We evaluate the resulting\nalgorithm to safely explore the dynamics of an inverted pendulum and to solve a\nreinforcement learning task on a cart-pole system with safety constraints.", + "authors": "Torsten Koller, Felix Berkenkamp, Matteo Turchetta, Joschka Boedecker, Andreas Krause", + "published": "2019-06-27", + "updated": "2019-06-27", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.AI", + "cs.LG", + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.07915v2", + "title": "Barrier Certified Safety Learning Control: When Sum-of-Square Programming Meets Reinforcement Learning", + "abstract": "Safety guarantee is essential in many engineering implementations.\nReinforcement learning provides a useful way to strengthen safety. However,\nreinforcement learning algorithms cannot completely guarantee safety over\nrealistic operations. To address this issue, this work adopts control barrier\nfunctions over reinforcement learning, and proposes a compensated algorithm to\ncompletely maintain safety. Specifically, a sum-of-squares programming has been\nexploited to search for the optimal controller, and tune the learning\nhyperparameters simultaneously. Thus, the control actions are pledged to be\nalways within the safe region. The effectiveness of proposed method is\ndemonstrated via an inverted pendulum model. Compared to quadratic programming\nbased reinforcement learning methods, our sum-of-squares programming based\nreinforcement learning has shown its superiority.", + "authors": "Hejun Huang, Zhenglong Li, Dongkun Han", + "published": "2022-06-16", + "updated": "2022-06-29", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.LG", + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.01659v1", + "title": "Reinforcement Learning for Battery Energy Storage Dispatch augmented with Model-based Optimizer", + "abstract": "Reinforcement learning has been found useful in solving optimal power flow\n(OPF) problems in electric power distribution systems. However, the use of\nlargely model-free reinforcement learning algorithms that completely ignore the\nphysics-based modeling of the power grid compromises the optimizer performance\nand poses scalability challenges. This paper proposes a novel approach to\nsynergistically combine the physics-based models with learning-based algorithms\nusing imitation learning to solve distribution-level OPF problems.\nSpecifically, we propose imitation learning based improvements in deep\nreinforcement learning (DRL) methods to solve the OPF problem for a specific\ncase of battery storage dispatch in the power distribution systems. The\nproposed imitation learning algorithm uses the approximate optimal solutions\nobtained from a linearized model-based OPF solver to provide a good initial\npolicy for the DRL algorithms while improving the training efficiency. 
The\neffectiveness of the proposed approach is demonstrated using IEEE 34-bus and\n123-bus distribution feeders with numerous distribution-level battery storage\nsystems.", + "authors": "Gayathri Krishnamoorthy, Anamika Dubey", + "published": "2021-09-02", + "updated": "2021-09-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SY", + "eess.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2303.09013v1", + "title": "Self-Inspection Method of Unmanned Aerial Vehicles in Power Plants Using Deep Q-Network Reinforcement Learning", + "abstract": "For the purpose of inspecting power plants, autonomous robots can be built\nusing reinforcement learning techniques. The method replicates the environment\nand employs a simple reinforcement learning (RL) algorithm. This strategy might\nbe applied in several sectors, including the electricity generation sector. A\npre-trained model with perception, planning, and action is suggested by the\nresearch. To address optimization problems, such as the Unmanned Aerial Vehicle\n(UAV) navigation problem, Deep Q-network (DQN), a reinforcement learning-based\nframework that Deepmind launched in 2015, incorporates both deep learning and\nQ-learning. To overcome problems with current procedures, the research proposes\na power plant inspection system incorporating UAV autonomous navigation and DQN\nreinforcement learning. These training processes set reward functions with\nreference to states and consider both internal and external effect factors,\nwhich distinguishes them from other reinforcement learning training techniques\nnow in use. The key components of the reinforcement learning segment of the\ntechnique, for instance, introduce states such as the simulation of a wind\nfield, the battery charge level of an unmanned aerial vehicle, the height the\nUAV reached, etc. The trained model makes it more likely that the inspection\nstrategy will be applied in practice by enabling the UAV to move around on its\nown in difficult environments. The average score of the model converges to\n9,000. The trained model allowed the UAV to make the fewest number of rotations\nnecessary to go to the target point.", + "authors": "Haoran Guan", + "published": "2023-03-16", + "updated": "2023-03-16", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2011.10714v1", + "title": "Double Meta-Learning for Data Efficient Policy Optimization in Non-Stationary Environments", + "abstract": "We are interested in learning models of non-stationary environments, which\ncan be framed as a multi-task learning problem. Model-free reinforcement\nlearning algorithms can achieve good asymptotic performance in multi-task\nlearning at a cost of extensive sampling, due to their approach, which requires\nlearning from scratch. While model-based approaches are among the most data\nefficient learning algorithms, they still struggle with complex tasks and model\nuncertainties. Meta-reinforcement learning addresses the efficiency and\ngeneralization challenges on multi task learning by quickly leveraging the\nmeta-prior policy for a new task. In this paper, we propose a\nmeta-reinforcement learning approach to learn the dynamic model of a\nnon-stationary environment to be used for meta-policy optimization later. 
Due\nto the sample efficiency of model-based learning methods, we are able to\nsimultaneously train both the meta-model of the non-stationary environment and\nthe meta-policy until dynamic model convergence. Then, the meta-learned dynamic\nmodel of the environment will generate simulated data for meta-policy\noptimization. Our experiment demonstrates that our proposed method can\nmeta-learn the policy in a non-stationary environment with the data efficiency\nof model-based learning approaches while achieving the high asymptotic\nperformance of model-free meta-reinforcement learning.", + "authors": "Elahe Aghapour, Nora Ayanian", + "published": "2020-11-21", + "updated": "2020-11-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1609.03348v4", + "title": "A Threshold-based Scheme for Reinforcement Learning in Neural Networks", + "abstract": "A generic and scalable Reinforcement Learning scheme for Artificial Neural\nNetworks is presented, providing a general purpose learning machine. By\nreference to a node threshold three features are described 1) A mechanism for\nPrimary Reinforcement, capable of solving linearly inseparable problems 2) The\nlearning scheme is extended to include a mechanism for Conditioned\nReinforcement, capable of forming long term strategy 3) The learning scheme is\nmodified to use a threshold-based deep learning algorithm, providing a robust\nand biologically inspired alternative to backpropagation. The model may be used\nfor supervised as well as unsupervised training regimes.", + "authors": "Thomas H. Ward", + "published": "2016-09-12", + "updated": "2017-01-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NE" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2102.05612v1", + "title": "Personalization for Web-based Services using Offline Reinforcement Learning", + "abstract": "Large-scale Web-based services present opportunities for improving UI\npolicies based on observed user interactions. We address challenges of learning\nsuch policies through model-free offline Reinforcement Learning (RL) with\noff-policy training. Deployed in a production system for user authentication in\na major social network, it significantly improves long-term objectives. We\narticulate practical challenges, compare several ML techniques, provide\ninsights on training and evaluation of RL models, and discuss generalizations.", + "authors": "Pavlos Athanasios Apostolopoulos, Zehui Wang, Hanson Wang, Chad Zhou, Kittipat Virochsiri, Norm Zhou, Igor L. Markov", + "published": "2021-02-10", + "updated": "2021-02-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.HC", + "cs.SE" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1804.07193v3", + "title": "Lipschitz Continuity in Model-based Reinforcement Learning", + "abstract": "We examine the impact of learning Lipschitz continuous models in the context\nof model-based reinforcement learning. We provide a novel bound on multi-step\nprediction error of Lipschitz models where we quantify the error using the\nWasserstein metric. We go on to prove an error bound for the value-function\nestimate arising from Lipschitz models and show that the estimated value\nfunction is itself Lipschitz. 
We conclude with empirical results that show the\nbenefits of controlling the Lipschitz constant of neural-network models.", + "authors": "Kavosh Asadi, Dipendra Misra, Michael L. Littman", + "published": "2018-04-19", + "updated": "2018-07-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2102.03022v1", + "title": "Deceptive Reinforcement Learning for Privacy-Preserving Planning", + "abstract": "In this paper, we study the problem of deceptive reinforcement learning to\npreserve the privacy of a reward function. Reinforcement learning is the\nproblem of finding a behaviour policy based on rewards received from\nexploratory behaviour. A key ingredient in reinforcement learning is a reward\nfunction, which determines how much reward (negative or positive) is given and\nwhen. However, in some situations, we may want to keep a reward function\nprivate; that is, to make it difficult for an observer to determine the reward\nfunction used. We define the problem of privacy-preserving reinforcement\nlearning, and present two models for solving it. These models are based on\ndissimulation -- a form of deception that `hides the truth'. We evaluate our\nmodels both computationally and via human behavioural experiments. Results show\nthat the resulting policies are indeed deceptive, and that participants can\ndetermine the true reward function less reliably than that of an honest agent.", + "authors": "Zhengshang Liu, Yue Yang, Tim Miller, Peta Masters", + "published": "2021-02-05", + "updated": "2021-02-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.MA" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1708.07738v1", + "title": "A Function Approximation Method for Model-based High-Dimensional Inverse Reinforcement Learning", + "abstract": "This works handles the inverse reinforcement learning problem in\nhigh-dimensional state spaces, which relies on an efficient solution of\nmodel-based high-dimensional reinforcement learning problems. To solve the\ncomputationally expensive reinforcement learning problems, we propose a\nfunction approximation method to ensure that the Bellman Optimality Equation\nalways holds, and then estimate a function based on the observed human actions\nfor inverse reinforcement learning problems. The time complexity of the\nproposed method is linearly proportional to the cardinality of the action set,\nthus it can handle high-dimensional even continuous state spaces efficiently.\nWe test the proposed method in a simulated environment to show its accuracy,\nand three clinical tasks to show how it can be used to evaluate a doctor's\nproficiency.", + "authors": "Kun Li, Joel W. Burdick", + "published": "2017-08-23", + "updated": "2017-08-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.06914v1", + "title": "Model-assisted Reinforcement Learning of a Quadrotor", + "abstract": "In recent times, reinforcement learning has produced baffling results when it\ncomes to performing control tasks with highly non-linear systems. The\nimpressive results always outweigh the potential vulnerabilities or\nuncertainties associated with the agents when deployed in the real-world. 
While\nthe performance is remarkable compared to the classical control algorithms, the\nreinforcement learning-based methods suffer from two flaws, robustness and\ninterpretability, which are vital for contemporary real-world applications. The\npaper attempts to alleviate such problems with reinforcement learning and\nproposes the concept of model-assisted reinforcement learning to induce a\nnotion of conservativeness in the agents. The control task considered for the\nexperiment involves navigating a CrazyFlie quadrotor. The paper also describes\na way of reformulating the task to have the flexibility of tuning the level of\nconservativeness via multi-objective reinforcement learning. The results\ninclude a comparison of the vanilla reinforcement learning approaches and the\nproposed approach. The metrics are evaluated by systematically injecting\ndisturbances to classify the inherent robustness and conservativeness of the\nagents. More concrete arguments are made by computing and comparing the\nbackward reachability tubes of the RL policies by solving the\nHamilton-Jacobi-Bellman partial differential equation (HJ PDE).", + "authors": "Arshad Javeed", + "published": "2023-11-12", + "updated": "2023-11-12", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.09450v1", + "title": "Adversarial Imitation Learning via Random Search", + "abstract": "Developing agents that can perform challenging complex tasks is the goal of\nreinforcement learning. The model-free reinforcement learning has been\nconsidered as a feasible solution. However, the state of the art research has\nbeen to develop increasingly complicated techniques. This increasing complexity\nmakes the reconstruction difficult. Furthermore, the problem of reward\ndependency still exists. As a result, research on imitation learning, which\nlearns policy from a demonstration of experts, has begun to attract attention.\nImitation learning directly learns policy based on data on the behavior of the\nexperts without the explicit reward signal provided by the environment.\nHowever, imitation learning tries to optimize policies based on deep\nreinforcement learning such as trust region policy optimization. As a result,\ndeep reinforcement learning based imitation learning also poses a crisis of\nreproducibility. The issue of complex model-free models has received\nconsiderable critical attention. A derivative-free optimization based\nreinforcement learning and the simplification on policies obtain competitive\nperformance on the dynamic complex tasks. The simplified policies and\nderivative-free methods make the algorithm simple. The reconfiguration of\nresearch demo becomes easy. In this paper, we propose an imitation learning\nmethod that takes advantage of the derivative-free optimization with simple\nlinear policies. The proposed method performs simple random search in the\nparameter space of policies and shows computational efficiency. 
Experiments in\nthis paper show that the proposed model, without a direct reward signal from\nthe environment, obtains competitive performance on the MuJoCo locomotion\ntasks.", + "authors": "MyungJae Shin, Joongheon Kim", + "published": "2020-08-21", + "updated": "2020-08-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.07525v1", + "title": "Using Collision Momentum in Deep Reinforcement Learning Based Adversarial Pedestrian Modeling", + "abstract": "Recent research in pedestrian simulation often aims to develop realistic\nbehaviors in various situations, but it is challenging for existing algorithms\nto generate behaviors that identify weaknesses in automated vehicles'\nperformance in extreme and unlikely scenarios and edge cases. To address this,\nspecialized pedestrian behavior algorithms are needed. Current research focuses\non realistic trajectories using social force models and reinforcement learning\nbased models. However, we propose a reinforcement learning algorithm that\nspecifically targets collisions and better uncovers unique failure modes of\nautomated vehicle controllers. Our algorithm is efficient and generates more\nsevere collisions, allowing for the identification and correction of weaknesses\nin autonomous driving algorithms in complex and varied scenarios.", + "authors": "Dianwei Chen, Ekim Yurtsever, Keith Redmill, Umit Ozguner", + "published": "2023-06-13", + "updated": "2023-06-13", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.09064v2", + "title": "Can Interpretable Reinforcement Learning Manage Prosperity Your Way?", + "abstract": "Personalisation of products and services is fast becoming the driver of\nsuccess in banking and commerce. Machine learning holds the promise of gaining\na deeper understanding of and tailoring to customers' needs and preferences.\nWhereas traditional solutions to financial decision problems frequently rely on\nmodel assumptions, reinforcement learning is able to exploit large amounts of\ndata to improve customer modelling and decision-making in complex financial\nenvironments with fewer assumptions. Model explainability and interpretability\npresent challenges from a regulatory perspective which demands transparency for\nacceptance; they also offer the opportunity for improved insight into and\nunderstanding of customers. Post-hoc approaches are typically used for\nexplaining pretrained reinforcement learning models. Based on our previous\nmodeling of customer spending behaviour, we adapt our recent reinforcement\nlearning algorithm that intrinsically characterizes desirable behaviours and we\ntransition to the problem of asset management. We train inherently\ninterpretable reinforcement learning agents to give investment advice that is\naligned with prototype financial personality traits which are combined to make\na final recommendation. 
We observe that the trained agents' advice adheres to\ntheir intended characteristics, they learn the value of compound growth, and,\nwithout any explicit reference, the notion of risk as well as improved policy\nconvergence.", + "authors": "Charl Maree, Christian Omlin", + "published": "2022-02-18", + "updated": "2022-06-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2011.01734v1", + "title": "Differentiable Physics Models for Real-world Offline Model-based Reinforcement Learning", + "abstract": "A limitation of model-based reinforcement learning (MBRL) is the exploitation\nof errors in the learned models. Black-box models can fit complex dynamics with\nhigh fidelity, but their behavior is undefined outside of the data\ndistribution.Physics-based models are better at extrapolating, due to the\ngeneral validity of their informed structure, but underfit in the real world\ndue to the presence of unmodeled phenomena. In this work, we demonstrate\nexperimentally that for the offline model-based reinforcement learning setting,\nphysics-based models can be beneficial compared to high-capacity function\napproximators if the mechanical structure is known. Physics-based models can\nlearn to perform the ball in a cup (BiC) task on a physical manipulator using\nonly 4 minutes of sampled data using offline MBRL. We find that black-box\nmodels consistently produce unviable policies for BiC as all predicted\ntrajectories diverge to physically impossible state, despite having access to\nmore data than the physics-based model. In addition, we generalize the approach\nof physics parameter identification from modeling holonomic multi-body systems\nto systems with nonholonomic dynamics using end-to-end automatic\ndifferentiation.\n Videos: https://sites.google.com/view/ball-in-a-cup-in-4-minutes/", + "authors": "Michael Lutter, Johannes Silberbauer, Joe Watson, Jan Peters", + "published": "2020-11-03", + "updated": "2020-11-03", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.02900v2", + "title": "Meta Federated Reinforcement Learning for Distributed Resource Allocation", + "abstract": "In cellular networks, resource allocation is usually performed in a\ncentralized way, which brings huge computation complexity to the base station\n(BS) and high transmission overhead. This paper explores a distributed resource\nallocation method that aims to maximize energy efficiency (EE) while ensuring\nthe quality of service (QoS) for users. Specifically, in order to address\nwireless channel conditions, we propose a robust meta federated reinforcement\nlearning (\\textit{MFRL}) framework that allows local users to optimize transmit\npower and assign channels using locally trained neural network models, so as to\noffload computational burden from the cloud server to the local users, reducing\ntransmission overhead associated with local channel state information. The BS\nperforms the meta learning procedure to initialize a general global model,\nenabling rapid adaptation to different environments with improved EE\nperformance. 
The federated learning technique, based on decentralized\nreinforcement learning, promotes collaboration and mutual benefits among users.\nAnalysis and numerical results demonstrate that the proposed \\textit{MFRL}\nframework accelerates the reinforcement learning process, decreases\ntransmission overhead, and offloads computation, while outperforming the\nconventional decentralized reinforcement learning algorithm in terms of\nconvergence speed and EE performance across various scenarios.", + "authors": "Zelin Ji, Zhijin Qin, Xiaoming Tao", + "published": "2023-07-06", + "updated": "2023-07-09", + "primary_cat": "eess.SP", + "cats": [ + "eess.SP", + "cs.SY", + "eess.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2301.11520v3", + "title": "SNeRL: Semantic-aware Neural Radiance Fields for Reinforcement Learning", + "abstract": "As previous representations for reinforcement learning cannot effectively\nincorporate a human-intuitive understanding of the 3D environment, they usually\nsuffer from sub-optimal performances. In this paper, we present Semantic-aware\nNeural Radiance Fields for Reinforcement Learning (SNeRL), which jointly\noptimizes semantic-aware neural radiance fields (NeRF) with a convolutional\nencoder to learn 3D-aware neural implicit representation from multi-view\nimages. We introduce 3D semantic and distilled feature fields in parallel to\nthe RGB radiance fields in NeRF to learn semantic and object-centric\nrepresentation for reinforcement learning. SNeRL outperforms not only previous\npixel-based representations but also recent 3D-aware representations both in\nmodel-free and model-based reinforcement learning.", + "authors": "Dongseok Shim, Seungjae Lee, H. Jin Kim", + "published": "2023-01-27", + "updated": "2023-05-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2112.01195v1", + "title": "Maximum Entropy Model-based Reinforcement Learning", + "abstract": "Recent advances in reinforcement learning have demonstrated its ability to\nsolve hard agent-environment interaction tasks on a super-human level. However,\nthe application of reinforcement learning methods to practical and real-world\ntasks is currently limited due to most RL state-of-art algorithms' sample\ninefficiency, i.e., the need for a vast number of training episodes. For\nexample, OpenAI Five algorithm that has beaten human players in Dota 2 has\ntrained for thousands of years of game time. Several approaches exist that\ntackle the issue of sample inefficiency, that either offers a more efficient\nusage of already gathered experience or aim to gain a more relevant and diverse\nexperience via a better exploration of an environment. However, to our\nknowledge, no such approach exists for model-based algorithms, that showed\ntheir high sample efficiency in solving hard control tasks with\nhigh-dimensional state space. This work connects exploration techniques and\nmodel-based reinforcement learning. We have designed a novel exploration method\nthat takes into account features of the model-based approach. 
We also\ndemonstrate through experiments that our method significantly improves the\nperformance of the model-based algorithm Dreamer.", + "authors": "Oleg Svidchenko, Aleksei Shpilman", + "published": "2021-12-02", + "updated": "2021-12-02", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2308.11336v1", + "title": "On the Opportunities and Challenges of Offline Reinforcement Learning for Recommender Systems", + "abstract": "Reinforcement learning serves as a potent tool for modeling dynamic user\ninterests within recommender systems, garnering increasing research attention\nof late. However, a significant drawback persists: its poor data efficiency,\nstemming from its interactive nature. The training of reinforcement\nlearning-based recommender systems demands expensive online interactions to\namass adequate trajectories, essential for agents to learn user preferences.\nThis inefficiency renders reinforcement learning-based recommender systems a\nformidable undertaking, necessitating the exploration of potential solutions.\nRecent strides in offline reinforcement learning present a new perspective.\nOffline reinforcement learning empowers agents to glean insights from offline\ndatasets and deploy learned policies in online settings. Given that recommender\nsystems possess extensive offline datasets, the framework of offline\nreinforcement learning aligns seamlessly. Despite being a burgeoning field,\nworks centered on recommender systems utilizing offline reinforcement learning\nremain limited. This survey aims to introduce and delve into offline\nreinforcement learning within recommender systems, offering an inclusive review\nof existing literature in this domain. Furthermore, we strive to underscore\nprevalent challenges, opportunities, and future pathways, poised to propel\nresearch in this evolving field.", + "authors": "Xiaocong Chen, Siyu Wang, Julian McAuley, Dietmar Jannach, Lina Yao", + "published": "2023-08-22", + "updated": "2023-08-22", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2308.14897v1", + "title": "Statistically Efficient Variance Reduction with Double Policy Estimation for Off-Policy Evaluation in Sequence-Modeled Reinforcement Learning", + "abstract": "Offline reinforcement learning aims to utilize datasets of previously\ngathered environment-action interaction records to learn a policy without\naccess to the real environment. Recent work has shown that offline\nreinforcement learning can be formulated as a sequence modeling problem and\nsolved via supervised learning with approaches such as decision transformer.\nWhile these sequence-based methods achieve competitive results over\nreturn-to-go methods, especially on tasks that require longer episodes or with\nscarce rewards, importance sampling is not considered to correct the policy\nbias when dealing with off-policy data, mainly due to the absence of behavior\npolicy and the use of deterministic evaluation policies. To this end, we\npropose DPE: an RL algorithm that blends offline sequence modeling and offline\nreinforcement learning with Double Policy Estimation (DPE) in a unified\nframework with statistically proven properties on variance reduction. We\nvalidate our method in multiple tasks of OpenAI Gym with D4RL benchmarks. 
Our\nmethod brings a performance improvements on selected methods which outperforms\nSOTA baselines in several tasks, demonstrating the advantages of enabling\ndouble policy estimation for sequence-modeled reinforcement learning.", + "authors": "Hanhan Zhou, Tian Lan, Vaneet Aggarwal", + "published": "2023-08-28", + "updated": "2023-08-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.DC" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.07260v1", + "title": "TIAGo RL: Simulated Reinforcement Learning Environments with Tactile Data for Mobile Robots", + "abstract": "Tactile information is important for robust performance in robotic tasks that\ninvolve physical interaction, such as object manipulation. However, with more\ndata included in the reasoning and control process, modeling behavior becomes\nincreasingly difficult. Deep Reinforcement Learning (DRL) produced promising\nresults for learning complex behavior in various domains, including\ntactile-based manipulation in robotics. In this work, we present our\nopen-source reinforcement learning environments for the TIAGo service robot.\nThey produce tactile sensor measurements that resemble those of a real\nsensorised gripper for TIAGo, encouraging research in transfer learning of DRL\npolicies. Lastly, we show preliminary training results of a learned force\ncontrol policy and compare it to a classical PI controller.", + "authors": "Luca Lach, Francesco Ferro, Robert Haschke", + "published": "2023-11-13", + "updated": "2023-11-13", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.13044v1", + "title": "Reinforcement Learning with Feedback-modulated TD-STDP", + "abstract": "Spiking neuron networks have been used successfully to solve simple\nreinforcement learning tasks with continuous action set applying learning rules\nbased on spike-timing-dependent plasticity (STDP). However, most of these\nmodels cannot be applied to reinforcement learning tasks with discrete action\nset since they assume that the selected action is a deterministic function of\nfiring rate of neurons, which is continuous. In this paper, we propose a new\nSTDP-based learning rule for spiking neuron networks which contains feedback\nmodulation. We show that the STDP-based learning rule can be used to solve\nreinforcement learning tasks with discrete action set at a speed similar to\nstandard reinforcement learning algorithms when applied to the CartPole and\nLunarLander tasks. Moreover, we demonstrate that the agent is unable to solve\nthese tasks if feedback modulation is omitted from the learning rule. We\nconclude that feedback modulation allows better credit assignment when only the\nunits contributing to the executed action and TD error participate in learning.", + "authors": "Stephen Chung, Robert Kozma", + "published": "2020-08-29", + "updated": "2020-08-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML", + "I.2.8" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1709.05067v1", + "title": "Deep Reinforcement Learning for Conversational AI", + "abstract": "Deep reinforcement learning is revolutionizing the artificial intelligence\nfield. 
Currently, it serves as a good starting point for constructing\nintelligent autonomous systems which offer a better knowledge of the visual\nworld. It is possible to scale deep reinforcement learning with the use of deep\nlearning and do amazing tasks such as use of pixels in playing video games. In\nthis paper, key concepts of deep reinforcement learning including reward\nfunction, differences between reinforcement learning and supervised learning\nand models for implementation of reinforcement are discussed. Key challenges\nrelated to the implementation of reinforcement learning in conversational AI\ndomain are identified as well as discussed in detail. Various conversational\nmodels which are based on deep reinforcement learning (as well as deep\nlearning) are also discussed. In summary, this paper discusses key aspects of\ndeep reinforcement learning which are crucial for designing an efficient\nconversational AI.", + "authors": "Mahipal Jadeja, Neelanshi Varia, Agam Shah", + "published": "2017-09-15", + "updated": "2017-09-15", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1012.1552v1", + "title": "Bridging the Gap between Reinforcement Learning and Knowledge Representation: A Logical Off- and On-Policy Framework", + "abstract": "Knowledge Representation is important issue in reinforcement learning. In\nthis paper, we bridge the gap between reinforcement learning and knowledge\nrepresentation, by providing a rich knowledge representation framework, based\non normal logic programs with answer set semantics, that is capable of solving\nmodel-free reinforcement learning problems for more complex do-mains and\nexploits the domain-specific knowledge. We prove the correctness of our\napproach. We show that the complexity of finding an offline and online policy\nfor a model-free reinforcement learning problem in our approach is NP-complete.\nMoreover, we show that any model-free reinforcement learning problem in MDP\nenvironment can be encoded as a SAT problem. The importance of that is\nmodel-free reinforcement", + "authors": "Emad Saad", + "published": "2010-12-07", + "updated": "2010-12-07", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG", + "cs.LO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + } + ] + ] + }, + { + "url": "http://arxiv.org/abs/2007.01298v3", + "title": "Image Classification by Reinforcement Learning with Two-State Q-Learning", + "abstract": "In this paper, a simple and efficient Hybrid Classifier is presented which is\nbased on deep learning and reinforcement learning. Here, Q-Learning has been\nused with two states and 'two or three' actions. Other techniques found in the\nliterature use feature map extracted from Convolutional Neural Networks and use\nthese in the Q-states along with past history. This leads to technical\ndifficulties in these approaches because the number of states is high due to\nlarge dimensions of the feature map. Because the proposed technique uses only\ntwo Q-states it is straightforward and consequently has much lesser number of\noptimization parameters, and thus also has a simple reward function. Also, the\nproposed technique uses novel actions for processing images as compared to\nother techniques found in literature. The performance of the proposed technique\nis compared with other recent algorithms like ResNet50, InceptionV3, etc. 
on\npopular databases including ImageNet, Cats and Dogs Dataset, and Caltech-101\nDataset. The proposed approach outperforms other techniques on all the\ndatasets used.", + "authors": "Abdul Mueed Hafiz", + "published": "2020-06-28", + "updated": "2020-10-31", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG", + "eess.IV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1707.06347v2", + "title": "Proximal Policy Optimization Algorithms", + "abstract": "We propose a new family of policy gradient methods for reinforcement\nlearning, which alternate between sampling data through interaction with the\nenvironment, and optimizing a \"surrogate\" objective function using stochastic\ngradient ascent. Whereas standard policy gradient methods perform one gradient\nupdate per data sample, we propose a novel objective function that enables\nmultiple epochs of minibatch updates. The new methods, which we call proximal\npolicy optimization (PPO), have some of the benefits of trust region policy\noptimization (TRPO), but they are much simpler to implement, more general, and\nhave better sample complexity (empirically). Our experiments test PPO on a\ncollection of benchmark tasks, including simulated robotic locomotion and Atari\ngame playing, and we show that PPO outperforms other online policy gradient\nmethods, and overall strikes a favorable balance between sample complexity,\nsimplicity, and wall-time.", + "authors": "John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov", + "published": "2017-07-20", + "updated": "2017-08-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2307.14710v2", + "title": "Pre-training Vision Transformers with Very Limited Synthesized Images", + "abstract": "Formula-driven supervised learning (FDSL) is a pre-training method that\nrelies on synthetic images generated from mathematical formulae such as\nfractals. Prior work on FDSL has shown that pre-training vision transformers on\nsuch synthetic datasets can yield competitive accuracy on a wide range of\ndownstream tasks. These synthetic images are categorized according to the\nparameters in the mathematical formula that generate them. In the present work,\nwe hypothesize that the process for generating different instances for the same\ncategory in FDSL, can be viewed as a form of data augmentation. We validate\nthis hypothesis by replacing the instances with data augmentation, which means\nwe only need a single image per category. Our experiments show that this\none-instance fractal database (OFDB) performs better than the original dataset\nwhere instances were explicitly generated. We further scale up OFDB to 21,000\ncategories and show that it matches, or even surpasses, the model pre-trained\non ImageNet-21k in ImageNet-1k fine-tuning. The number of images in OFDB is\n21k, whereas ImageNet-21k has 14M. 
This opens new possibilities for\npre-training vision transformers with much smaller datasets.", + "authors": "Ryo Nakamura, Hirokatsu Kataoka, Sora Takashima, Edgar Josafat Martinez Noriega, Rio Yokota, Nakamasa Inoue", + "published": "2023-07-27", + "updated": "2023-07-31", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.14735v2", + "title": "Unleashing the potential of prompt engineering in Large Language Models: a comprehensive review", + "abstract": "This paper delves into the pivotal role of prompt engineering in unleashing\nthe capabilities of Large Language Models (LLMs). Prompt engineering is the\nprocess of structuring input text for LLMs and is a technique integral to\noptimizing the efficacy of LLMs. This survey elucidates foundational principles\nof prompt engineering, such as role-prompting, one-shot, and few-shot\nprompting, as well as more advanced methodologies such as the chain-of-thought\nand tree-of-thoughts prompting. The paper sheds light on how external\nassistance in the form of plugins can assist in this task, and reduce machine\nhallucination by retrieving external knowledge. We subsequently delineate\nprospective directions in prompt engineering research, emphasizing the need for\na deeper understanding of structures and the role of agents in Artificial\nIntelligence-Generated Content (AIGC) tools. We discuss how to assess the\nefficacy of prompt methods from different perspectives and using different\nmethods. Finally, we gather information about the application of prompt\nengineering in such fields as education and programming, showing its\ntransformative potential. This comprehensive survey aims to serve as a friendly\nguide for anyone venturing through the big world of LLMs and prompt\nengineering.", + "authors": "Banghao Chen, Zhaofeng Zhang, Nicolas Langren\u00e9, Shengxin Zhu", + "published": "2023-10-23", + "updated": "2023-10-27", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "I.2.7" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2302.07944v2", + "title": "Effective Data Augmentation With Diffusion Models", + "abstract": "Data augmentation is one of the most prevalent tools in deep learning,\nunderpinning many recent advances, including those from classification,\ngenerative models, and representation learning. The standard approach to data\naugmentation combines simple transformations like rotations and flips to\ngenerate new images from existing ones. However, these new images lack\ndiversity along key semantic axes present in the data. Current augmentations\ncannot alter the high-level semantic attributes, such as animal species present\nin a scene, to enhance the diversity of data. We address the lack of diversity\nin data augmentation with image-to-image transformations parameterized by\npre-trained text-to-image diffusion models. Our method edits images to change\ntheir semantics using an off-the-shelf diffusion model, and generalizes to\nnovel visual concepts from a few labelled examples. 
We evaluate our approach on\nfew-shot image classification tasks, and on a real-world weed recognition task,\nand observe an improvement in accuracy in tested domains.", + "authors": "Brandon Trabucco, Kyle Doherty, Max Gurinas, Ruslan Salakhutdinov", + "published": "2023-02-07", + "updated": "2023-05-25", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.15316v1", + "title": "Training on Thin Air: Improve Image Classification with Generated Data", + "abstract": "Acquiring high-quality data for training discriminative models is a crucial\nyet challenging aspect of building effective predictive systems. In this paper,\nwe present Diffusion Inversion, a simple yet effective method that leverages\nthe pre-trained generative model, Stable Diffusion, to generate diverse,\nhigh-quality training data for image classification. Our approach captures the\noriginal data distribution and ensures data coverage by inverting images to the\nlatent space of Stable Diffusion, and generates diverse novel training images\nby conditioning the generative model on noisy versions of these vectors. We\nidentify three key components that allow our generated images to successfully\nsupplant the original dataset, leading to a 2-3x enhancement in sample\ncomplexity and a 6.5x decrease in sampling time. Moreover, our approach\nconsistently outperforms generic prompt-based steering methods and KNN\nretrieval baseline across a wide range of datasets. Additionally, we\ndemonstrate the compatibility of our approach with widely-used data\naugmentation techniques, as well as the reliability of the generated data in\nsupporting various neural architectures and enhancing few-shot learning.", + "authors": "Yongchao Zhou, Hshmat Sahak, Jimmy Ba", + "published": "2023-05-24", + "updated": "2023-05-24", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + },
+ { + "url": "http://arxiv.org/abs/2112.10752v2", + "title": "High-Resolution Image Synthesis with Latent Diffusion Models", + "abstract": "By decomposing the image formation process into a sequential application of\ndenoising autoencoders, diffusion models (DMs) achieve state-of-the-art\nsynthesis results on image data and beyond. Additionally, their formulation\nallows for a guiding mechanism to control the image generation process without\nretraining. However, since these models typically operate directly in pixel\nspace, optimization of powerful DMs often consumes hundreds of GPU days and\ninference is expensive due to sequential evaluations. To enable DM training on\nlimited computational resources while retaining their quality and flexibility,\nwe apply them in the latent space of powerful pretrained autoencoders. In\ncontrast to previous work, training diffusion models on such a representation\nallows for the first time to reach a near-optimal point between complexity\nreduction and detail preservation, greatly boosting visual fidelity. By\nintroducing cross-attention layers into the model architecture, we turn\ndiffusion models into powerful and flexible generators for general conditioning\ninputs such as text or bounding boxes and high-resolution synthesis becomes\npossible in a convolutional manner. Our latent diffusion models (LDMs) achieve\na new state of the art for image inpainting and highly competitive performance\non various tasks, including unconditional image generation, semantic scene\nsynthesis, and super-resolution, while significantly reducing computational\nrequirements compared to pixel-based DMs. Code is available at\nhttps://github.com/CompVis/latent-diffusion .", + "authors": "Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Bj\u00f6rn Ommer", + "published": "2021-12-20", + "updated": "2022-04-13", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2404.02429v1", + "title": "AD4RL: Autonomous Driving Benchmarks for Offline Reinforcement Learning with Value-based Dataset", + "abstract": "Offline reinforcement learning has emerged as a promising technology by\nenhancing its practicality through the use of pre-collected large datasets.\nDespite its practical benefits, most algorithm development research in offline\nreinforcement learning still relies on game tasks with synthetic datasets. To\naddress such limitations, this paper provides autonomous driving datasets and\nbenchmarks for offline reinforcement learning research. We provide 19 datasets,\nincluding real-world human driver's datasets, and seven popular offline\nreinforcement learning algorithms in three realistic driving scenarios. We also\nprovide a unified decision-making process model that can operate effectively\nacross different scenarios, serving as a reference framework in algorithm\ndesign. Our research lays the groundwork for further collaborations in the\ncommunity to explore practical aspects of existing reinforcement learning\nmethods. 
Dataset and codes can be found in https://sites.google.com/view/ad4rl.", + "authors": "Dongsu Lee, Chanin Eom, Minhae Kwon", + "published": "2024-04-03", + "updated": "2024-04-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.02900v2", + "title": "Meta Federated Reinforcement Learning for Distributed Resource Allocation", + "abstract": "In cellular networks, resource allocation is usually performed in a\ncentralized way, which brings huge computation complexity to the base station\n(BS) and high transmission overhead. This paper explores a distributed resource\nallocation method that aims to maximize energy efficiency (EE) while ensuring\nthe quality of service (QoS) for users. Specifically, in order to address\nwireless channel conditions, we propose a robust meta federated reinforcement\nlearning (\\textit{MFRL}) framework that allows local users to optimize transmit\npower and assign channels using locally trained neural network models, so as to\noffload computational burden from the cloud server to the local users, reducing\ntransmission overhead associated with local channel state information. The BS\nperforms the meta learning procedure to initialize a general global model,\nenabling rapid adaptation to different environments with improved EE\nperformance. The federated learning technique, based on decentralized\nreinforcement learning, promotes collaboration and mutual benefits among users.\nAnalysis and numerical results demonstrate that the proposed \\textit{MFRL}\nframework accelerates the reinforcement learning process, decreases\ntransmission overhead, and offloads computation, while outperforming the\nconventional decentralized reinforcement learning algorithm in terms of\nconvergence speed and EE performance across various scenarios.", + "authors": "Zelin Ji, Zhijin Qin, Xiaoming Tao", + "published": "2023-07-06", + "updated": "2023-07-09", + "primary_cat": "eess.SP", + "cats": [ + "eess.SP", + "cs.SY", + "eess.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2012.09737v2", + "title": "Model-free and Bayesian Ensembling Model-based Deep Reinforcement Learning for Particle Accelerator Control Demonstrated on the FERMI FEL", + "abstract": "Reinforcement learning holds tremendous promise in accelerator controls. The\nprimary goal of this paper is to show how this approach can be utilised on an\noperational level on accelerator physics problems. Despite the success of\nmodel-free reinforcement learning in several domains, sample-efficiency still\nis a bottle-neck, which might be encompassed by model-based methods. We compare\nwell-suited purely model-based to model-free reinforcement learning applied to\nthe intensity optimisation on the FERMI FEL system. We find that the\nmodel-based approach demonstrates higher representational power and\nsample-efficiency, while the asymptotic performance of the model-free method is\nslightly superior. The model-based algorithm is implemented in a DYNA-style\nusing an uncertainty aware model, and the model-free algorithm is based on\ntailored deep Q-learning. In both cases, the algorithms were implemented in a\nway, which presents increased noise robustness as omnipresent in accelerator\ncontrol problems. 
Code is released in\nhttps://github.com/MathPhysSim/FERMI_RL_Paper.", + "authors": "Simon Hirlaender, Niky Bruchon", + "published": "2020-12-17", + "updated": "2022-01-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SY", + "eess.SY", + "physics.acc-ph", + "I.2; J.2" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.02380v2", + "title": "Adaptive Rollout Length for Model-Based RL Using Model-Free Deep RL", + "abstract": "Model-based reinforcement learning promises to learn an optimal policy from\nfewer interactions with the environment compared to model-free reinforcement\nlearning by learning an intermediate model of the environment in order to\npredict future interactions. When predicting a sequence of interactions, the\nrollout length, which limits the prediction horizon, is a critical\nhyperparameter as accuracy of the predictions diminishes in the regions that\nare further away from real experience. As a result, with a longer rollout\nlength, an overall worse policy is learned in the long run. Thus, the\nhyperparameter provides a trade-off between quality and efficiency. In this\nwork, we frame the problem of tuning the rollout length as a meta-level\nsequential decision-making problem that optimizes the final policy learned by\nmodel-based reinforcement learning given a fixed budget of environment\ninteractions by adapting the hyperparameter dynamically based on feedback from\nthe learning process, such as accuracy of the model and the remaining budget of\ninteractions. We use model-free deep reinforcement learning to solve the\nmeta-level decision problem and demonstrate that our approach outperforms\ncommon heuristic baselines on two well-known reinforcement learning\nenvironments.", + "authors": "Abhinav Bhatia, Philip S. Thomas, Shlomo Zilberstein", + "published": "2022-06-06", + "updated": "2022-06-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1609.03348v4", + "title": "A Threshold-based Scheme for Reinforcement Learning in Neural Networks", + "abstract": "A generic and scalable Reinforcement Learning scheme for Artificial Neural\nNetworks is presented, providing a general purpose learning machine. By\nreference to a node threshold three features are described 1) A mechanism for\nPrimary Reinforcement, capable of solving linearly inseparable problems 2) The\nlearning scheme is extended to include a mechanism for Conditioned\nReinforcement, capable of forming long term strategy 3) The learning scheme is\nmodified to use a threshold-based deep learning algorithm, providing a robust\nand biologically inspired alternative to backpropagation. The model may be used\nfor supervised as well as unsupervised training regimes.", + "authors": "Thomas H. Ward", + "published": "2016-09-12", + "updated": "2017-01-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NE" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.07915v2", + "title": "Barrier Certified Safety Learning Control: When Sum-of-Square Programming Meets Reinforcement Learning", + "abstract": "Safety guarantee is essential in many engineering implementations.\nReinforcement learning provides a useful way to strengthen safety. However,\nreinforcement learning algorithms cannot completely guarantee safety over\nrealistic operations. 
To address this issue, this work adopts control barrier\nfunctions over reinforcement learning, and proposes a compensated algorithm to\ncompletely maintain safety. Specifically, a sum-of-squares programming has been\nexploited to search for the optimal controller, and tune the learning\nhyperparameters simultaneously. Thus, the control actions are pledged to be\nalways within the safe region. The effectiveness of proposed method is\ndemonstrated via an inverted pendulum model. Compared to quadratic programming\nbased reinforcement learning methods, our sum-of-squares programming based\nreinforcement learning has shown its superiority.", + "authors": "Hejun Huang, Zhenglong Li, Dongkun Han", + "published": "2022-06-16", + "updated": "2022-06-29", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.LG", + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1901.01977v1", + "title": "Accelerating Goal-Directed Reinforcement Learning by Model Characterization", + "abstract": "We propose a hybrid approach aimed at improving the sample efficiency in\ngoal-directed reinforcement learning. We do this via a two-step mechanism where\nfirstly, we approximate a model from Model-Free reinforcement learning. Then,\nwe leverage this approximate model along with a notion of reachability using\nMean First Passage Times to perform Model-Based reinforcement learning. Built\non such a novel observation, we design two new algorithms - Mean First Passage\nTime based Q-Learning (MFPT-Q) and Mean First Passage Time based DYNA\n(MFPT-DYNA), that have been fundamentally modified from the state-of-the-art\nreinforcement learning techniques. Preliminary results have shown that our\nhybrid approaches converge with much fewer iterations than their corresponding\nstate-of-the-art counterparts and therefore requiring much fewer samples and\nmuch fewer training trials to converge.", + "authors": "Shoubhik Debnath, Gaurav Sukhatme, Lantao Liu", + "published": "2019-01-04", + "updated": "2019-01-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2011.10714v1", + "title": "Double Meta-Learning for Data Efficient Policy Optimization in Non-Stationary Environments", + "abstract": "We are interested in learning models of non-stationary environments, which\ncan be framed as a multi-task learning problem. Model-free reinforcement\nlearning algorithms can achieve good asymptotic performance in multi-task\nlearning at a cost of extensive sampling, due to their approach, which requires\nlearning from scratch. While model-based approaches are among the most data\nefficient learning algorithms, they still struggle with complex tasks and model\nuncertainties. Meta-reinforcement learning addresses the efficiency and\ngeneralization challenges on multi task learning by quickly leveraging the\nmeta-prior policy for a new task. In this paper, we propose a\nmeta-reinforcement learning approach to learn the dynamic model of a\nnon-stationary environment to be used for meta-policy optimization later. Due\nto the sample efficiency of model-based learning methods, we are able to\nsimultaneously train both the meta-model of the non-stationary environment and\nthe meta-policy until dynamic model convergence. Then, the meta-learned dynamic\nmodel of the environment will generate simulated data for meta-policy\noptimization. 
Our experiment demonstrates that our proposed method can\nmeta-learn the policy in a non-stationary environment with the data efficiency\nof model-based learning approaches while achieving the high asymptotic\nperformance of model-free meta-reinforcement learning.", + "authors": "Elahe Aghapour, Nora Ayanian", + "published": "2020-11-21", + "updated": "2020-11-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2006.07178v2", + "title": "Meta-Reinforcement Learning Robust to Distributional Shift via Model Identification and Experience Relabeling", + "abstract": "Reinforcement learning algorithms can acquire policies for complex tasks\nautonomously. However, the number of samples required to learn a diverse set of\nskills can be prohibitively large. While meta-reinforcement learning methods\nhave enabled agents to leverage prior experience to adapt quickly to new tasks,\ntheir performance depends crucially on how close the new task is to the\npreviously experienced tasks. Current approaches are either not able to\nextrapolate well, or can do so at the expense of requiring extremely large\namounts of data for on-policy meta-training. In this work, we present model\nidentification and experience relabeling (MIER), a meta-reinforcement learning\nalgorithm that is both efficient and extrapolates well when faced with\nout-of-distribution tasks at test time. Our method is based on a simple\ninsight: we recognize that dynamics models can be adapted efficiently and\nconsistently with off-policy data, more easily than policies and value\nfunctions. These dynamics models can then be used to continue training policies\nand value functions for out-of-distribution tasks without using\nmeta-reinforcement learning at all, by generating synthetic experience for the\nnew task.", + "authors": "Russell Mendonca, Xinyang Geng, Chelsea Finn, Sergey Levine", + "published": "2020-06-12", + "updated": "2020-06-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.03188v3", + "title": "Optimizing Quantum Variational Circuits with Deep Reinforcement Learning", + "abstract": "Quantum Machine Learning (QML) is considered to be one of the most promising\napplications of near term quantum devices. However, the optimization of quantum\nmachine learning models presents numerous challenges arising from the\nimperfections of hardware and the fundamental obstacles in navigating an\nexponentially scaling Hilbert space. In this work, we evaluate the potential of\ncontemporary methods in deep reinforcement learning to augment gradient based\noptimization routines in quantum variational circuits. We find that\nreinforcement learning augmented optimizers consistently outperform gradient\ndescent in noisy environments. 
All code and pretrained weights are available to\nreplicate the results or deploy the models at:\nhttps://github.com/lockwo/rl_qvc_opt.", + "authors": "Owen Lockwood", + "published": "2021-09-07", + "updated": "2022-05-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "quant-ph" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2312.15385v1", + "title": "Discrete-Time Mean-Variance Strategy Based on Reinforcement Learning", + "abstract": "This paper studies a discrete-time mean-variance model based on reinforcement\nlearning. Compared with its continuous-time counterpart in \\cite{zhou2020mv},\nthe discrete-time model makes more general assumptions about the asset's return\ndistribution. Using entropy to measure the cost of exploration, we derive the\noptimal investment strategy, whose density function is also Gaussian type.\nAdditionally, we design the corresponding reinforcement learning algorithm.\nBoth simulation experiments and empirical analysis indicate that our\ndiscrete-time model exhibits better applicability when analyzing real-world\ndata than the continuous-time model.", + "authors": "Xiangyu Cui, Xun Li, Yun Shi, Si Zhao", + "published": "2023-12-24", + "updated": "2023-12-24", + "primary_cat": "q-fin.MF", + "cats": [ + "q-fin.MF", + "cs.LG", + "q-fin.PM" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2012.00743v1", + "title": "Adaptive Neural Architectures for Recommender Systems", + "abstract": "Deep learning has proved an effective means to capture the non-linear\nassociations of user preferences. However, the main drawback of existing deep\nlearning architectures is that they follow a fixed recommendation strategy,\nignoring users' real time-feedback. Recent advances of deep reinforcement\nstrategies showed that recommendation policies can be continuously updated\nwhile users interact with the system. In doing so, we can learn the optimal\npolicy that fits to users' preferences over the recommendation sessions. The\nmain drawback of deep reinforcement strategies is that are based on predefined\nand fixed neural architectures. To shed light on how to handle this issue, in\nthis study we first present deep reinforcement learning strategies for\nrecommendation and discuss the main limitations due to the fixed neural\narchitectures. Then, we detail how recent advances on progressive neural\narchitectures are used for consecutive tasks in other research domains.\nFinally, we present the key challenges to fill the gap between deep\nreinforcement learning and adaptive neural architectures. We provide guidelines\nfor searching for the best neural architecture based on each user feedback via\nreinforcement learning, while considering the prediction performance on\nreal-time recommendations and the model complexity.", + "authors": "Dimitrios Rafailidis, Stefanos Antaris", + "published": "2020-11-11", + "updated": "2020-11-11", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2009.09781v1", + "title": "Rethinking Supervised Learning and Reinforcement Learning in Task-Oriented Dialogue Systems", + "abstract": "Dialogue policy learning for task-oriented dialogue systems has enjoyed great\nprogress recently mostly through employing reinforcement learning methods.\nHowever, these approaches have become very sophisticated. 
It is time to\nre-evaluate it. Are we really making progress developing dialogue agents only\nbased on reinforcement learning? We demonstrate how (1)~traditional supervised\nlearning together with (2)~a simulator-free adversarial learning method can be\nused to achieve performance comparable to state-of-the-art RL-based methods.\nFirst, we introduce a simple dialogue action decoder to predict the appropriate\nactions. Then, the traditional multi-label classification solution for dialogue\npolicy learning is extended by adding dense layers to improve the dialogue\nagent performance. Finally, we employ the Gumbel-Softmax estimator to\nalternatively train the dialogue agent and the dialogue reward model without\nusing reinforcement learning. Based on our extensive experimentation, we can\nconclude the proposed methods can achieve more stable and higher performance\nwith fewer efforts, such as the domain knowledge required to design a user\nsimulator and the intractable parameter tuning in reinforcement learning. Our\nmain goal is not to beat reinforcement learning with supervised learning, but\nto demonstrate the value of rethinking the role of reinforcement learning and\nsupervised learning in optimizing task-oriented dialogue systems.", + "authors": "Ziming Li, Julia Kiseleva, Maarten de Rijke", + "published": "2020-09-21", + "updated": "2020-09-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2102.05612v1", + "title": "Personalization for Web-based Services using Offline Reinforcement Learning", + "abstract": "Large-scale Web-based services present opportunities for improving UI\npolicies based on observed user interactions. We address challenges of learning\nsuch policies through model-free offline Reinforcement Learning (RL) with\noff-policy training. Deployed in a production system for user authentication in\na major social network, it significantly improves long-term objectives. We\narticulate practical challenges, compare several ML techniques, provide\ninsights on training and evaluation of RL models, and discuss generalizations.", + "authors": "Pavlos Athanasios Apostolopoulos, Zehui Wang, Hanson Wang, Chad Zhou, Kittipat Virochsiri, Norm Zhou, Igor L. Markov", + "published": "2021-02-10", + "updated": "2021-02-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.HC", + "cs.SE" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.03967v1", + "title": "A Deep Reinforcement Learning Approach for Composing Moving IoT Services", + "abstract": "We develop a novel framework for efficiently and effectively discovering\ncrowdsourced services that move in close proximity to a user over a period of\ntime. We introduce a moving crowdsourced service model which is modelled as a\nmoving region. We propose a deep reinforcement learning-based composition\napproach to select and compose moving IoT services considering quality\nparameters. 
Additionally, we develop a parallel flock-based service discovery\nalgorithm as a ground-truth to measure the accuracy of the proposed approach.\nThe experiments on two real-world datasets verify the effectiveness and\nefficiency of the deep reinforcement learning-based approach.", + "authors": "Azadeh Ghari Neiat, Athman Bouguettaya, Mohammed Bahutair", + "published": "2021-11-06", + "updated": "2021-11-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1811.00128v1", + "title": "Towards a Simple Approach to Multi-step Model-based Reinforcement Learning", + "abstract": "When environmental interaction is expensive, model-based reinforcement\nlearning offers a solution by planning ahead and avoiding costly mistakes.\nModel-based agents typically learn a single-step transition model. In this\npaper, we propose a multi-step model that predicts the outcome of an action\nsequence with variable length. We show that this model is easy to learn, and\nthat the model can make policy-conditional predictions. We report preliminary\nresults that show a clear advantage for the multi-step model compared to its\none-step counterpart.", + "authors": "Kavosh Asadi, Evan Cater, Dipendra Misra, Michael L. Littman", + "published": "2018-10-31", + "updated": "2018-10-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1812.09968v1", + "title": "VMAV-C: A Deep Attention-based Reinforcement Learning Algorithm for Model-based Control", + "abstract": "Recent breakthroughs in Go play and strategic games have witnessed the great\npotential of reinforcement learning in intelligently scheduling in uncertain\nenvironment, but some bottlenecks are also encountered when we generalize this\nparadigm to universal complex tasks. Among them, the low efficiency of data\nutilization in model-free reinforcement algorithms is of great concern. In\ncontrast, the model-based reinforcement learning algorithms can reveal\nunderlying dynamics in learning environments and seldom suffer the data\nutilization problem. To address the problem, a model-based reinforcement\nlearning algorithm with attention mechanism embedded is proposed as an\nextension of World Models in this paper. We learn the environment model through\nMixture Density Network Recurrent Network(MDN-RNN) for agents to interact, with\ncombinations of variational auto-encoder(VAE) and attention incorporated in\nstate value estimates during the process of learning policy. In this way, agent\ncan learn optimal policies through less interactions with actual environment,\nand final experiments demonstrate the effectiveness of our model in control\nproblem.", + "authors": "Xingxing Liang, Qi Wang, Yanghe Feng, Zhong Liu, Jincai Huang", + "published": "2018-12-24", + "updated": "2018-12-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.NE" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1910.03016v4", + "title": "Is a Good Representation Sufficient for Sample Efficient Reinforcement Learning?", + "abstract": "Modern deep learning methods provide effective means to learn good\nrepresentations. However, is a good representation itself sufficient for sample\nefficient reinforcement learning? 
This question has largely been studied only\nwith respect to (worst-case) approximation error, in the more classical\napproximate dynamic programming literature. With regards to the statistical\nviewpoint, this question is largely unexplored, and the extant body of\nliterature mainly focuses on conditions which permit sample efficient\nreinforcement learning with little understanding of what are necessary\nconditions for efficient reinforcement learning.\n This work shows that, from the statistical viewpoint, the situation is far\nsubtler than suggested by the more traditional approximation viewpoint, where\nthe requirements on the representation that suffice for sample efficient RL are\neven more stringent. Our main results provide sharp thresholds for\nreinforcement learning methods, showing that there are hard limitations on what\nconstitutes good function approximation (in terms of the dimensionality of the\nrepresentation), where we focus on natural representational conditions relevant\nto value-based, model-based, and policy-based learning. These lower bounds\nhighlight that having a good (value-based, model-based, or policy-based)\nrepresentation in and of itself is insufficient for efficient reinforcement\nlearning, unless the quality of this approximation passes certain hard\nthresholds. Furthermore, our lower bounds also imply exponential separations on\nthe sample complexity between 1) value-based learning with perfect\nrepresentation and value-based learning with a good-but-not-perfect\nrepresentation, 2) value-based learning and policy-based learning, 3)\npolicy-based learning and supervised learning and 4) reinforcement learning and\nimitation learning.", + "authors": "Simon S. Du, Sham M. Kakade, Ruosong Wang, Lin F. Yang", + "published": "2019-10-07", + "updated": "2020-02-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "math.OC", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.14766v1", + "title": "Reinforcement Learning from Statistical Feedback: the Journey from AB Testing to ANT Testing", + "abstract": "Reinforcement Learning from Human Feedback (RLHF) has played a crucial role\nin the success of large models such as ChatGPT. RLHF is a reinforcement\nlearning framework which combines human feedback to improve learning\neffectiveness and performance. However, obtaining preferences feedback manually\nis quite expensive in commercial applications. Some statistical commercial\nindicators are usually more valuable and always ignored in RLHF. There exists a\ngap between commercial target and model training. In our research, we will\nattempt to fill this gap with statistical business feedback instead of human\nfeedback, using AB testing which is a well-established statistical method.\nReinforcement Learning from Statistical Feedback (RLSF) based on AB testing is\nproposed. 
Statistical inference methods are used to obtain preferences for\ntraining the reward network, which fine-tunes the pre-trained model in\nreinforcement learning framework, achieving greater business value.\nFurthermore, we extend AB testing with double selections at a single time-point\nto ANT testing with multiple selections at different feedback time points.\nMoreover, we design numerical experiences to validate the effectiveness of our\nalgorithm framework.", + "authors": "Feiyang Han, Yimin Wei, Zhaofeng Liu, Yanxing Qi", + "published": "2023-11-24", + "updated": "2023-11-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "math.ST", + "stat.ME", + "stat.TH" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2204.01409v1", + "title": "Safe Controller for Output Feedback Linear Systems using Model-Based Reinforcement Learning", + "abstract": "The objective of this research is to enable safety-critical systems to\nsimultaneously learn and execute optimal control policies in a safe manner to\nachieve complex autonomy. Learning optimal policies via trial and error, i.e.,\ntraditional reinforcement learning, is difficult to implement in\nsafety-critical systems, particularly when task restarts are unavailable. Safe\nmodel-based reinforcement learning techniques based on a barrier transformation\nhave recently been developed to address this problem. However, these methods\nrely on full state feedback, limiting their usability in a real-world\nenvironment. In this work, an output-feedback safe model-based reinforcement\nlearning technique based on a novel barrier-aware dynamic state estimator has\nbeen designed to address this issue. The developed approach facilitates\nsimultaneous learning and execution of safe control policies for\nsafety-critical linear systems. Simulation results indicate that barrier\ntransformation is an effective approach to achieve online reinforcement\nlearning in safety-critical systems using output feedback.", + "authors": "S M Nahid Mahmud, Moad Abudia, Scott A Nivison, Zachary I. Bell, Rushikesh Kamalapurkar", + "published": "2022-04-04", + "updated": "2022-04-04", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.07789v1", + "title": "Safe Reinforcement Learning by Imagining the Near Future", + "abstract": "Safe reinforcement learning is a promising path toward applying reinforcement\nlearning algorithms to real-world problems, where suboptimal behaviors may lead\nto actual negative consequences. In this work, we focus on the setting where\nunsafe states can be avoided by planning ahead a short time into the future. In\nthis setting, a model-based agent with a sufficiently accurate model can avoid\nunsafe states. We devise a model-based algorithm that heavily penalizes unsafe\ntrajectories, and derive guarantees that our algorithm can avoid unsafe states\nunder certain assumptions. 
Experiments demonstrate that our algorithm can\nachieve competitive rewards with fewer safety violations in several continuous\ncontrol tasks.", + "authors": "Garrett Thomas, Yuping Luo, Tengyu Ma", + "published": "2022-02-15", + "updated": "2022-02-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.11142v1", + "title": "Two-stage Deep Reinforcement Learning for Inverter-based Volt-VAR Control in Active Distribution Networks", + "abstract": "Model-based Vol/VAR optimization method is widely used to eliminate voltage\nviolations and reduce network losses. However, the parameters of active\ndistribution networks(ADNs) are not onsite identified, so significant errors\nmay be involved in the model and make the model-based method infeasible. To\ncope with this critical issue, we propose a novel two-stage deep reinforcement\nlearning (DRL) method to improve the voltage profile by regulating\ninverter-based energy resources, which consists of offline stage and online\nstage. In the offline stage, a highly efficient adversarial reinforcement\nlearning algorithm is developed to train an offline agent robust to the model\nmismatch. In the sequential online stage, we transfer the offline agent safely\nas the online agent to perform continuous learning and controlling online with\nsignificantly improved safety and efficiency. Numerical simulations on IEEE\ntest cases not only demonstrate that the proposed adversarial reinforcement\nlearning algorithm outperforms the state-of-art algorithm, but also show that\nour proposed two-stage method achieves much better performance than the\nexisting DRL based methods in the online application.", + "authors": "Haotian Liu, Wenchuan Wu", + "published": "2020-05-20", + "updated": "2020-05-20", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.LG", + "cs.SY", + "J.7; C.3" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2006.09234v1", + "title": "Model Embedding Model-Based Reinforcement Learning", + "abstract": "Model-based reinforcement learning (MBRL) has shown its advantages in\nsample-efficiency over model-free reinforcement learning (MFRL). Despite the\nimpressive results it achieves, it still faces a trade-off between the ease of\ndata generation and model bias. In this paper, we propose a simple and elegant\nmodel-embedding model-based reinforcement learning (MEMB) algorithm in the\nframework of the probabilistic reinforcement learning. To balance the\nsample-efficiency and model bias, we exploit both real and imaginary data in\nthe training. In particular, we embed the model in the policy update and learn\n$Q$ and $V$ functions from the real data set. We provide the theoretical\nanalysis of MEMB with the Lipschitz continuity assumption on the model and\npolicy. At last, we evaluate MEMB on several benchmarks and demonstrate our\nalgorithm can achieve state-of-the-art performance.", + "authors": "Xiaoyu Tan, Chao Qu, Junwu Xiong, James Zhang", + "published": "2020-06-16", + "updated": "2020-06-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1305.1809v2", + "title": "Cover Tree Bayesian Reinforcement Learning", + "abstract": "This paper proposes an online tree-based Bayesian approach for reinforcement\nlearning. 
For inference, we employ a generalised context tree model. This\ndefines a distribution on multivariate Gaussian piecewise-linear models, which\ncan be updated in closed form. The tree structure itself is constructed using\nthe cover tree method, which remains efficient in high dimensional spaces. We\ncombine the model with Thompson sampling and approximate dynamic programming to\nobtain effective exploration policies in unknown environments. The flexibility\nand computational simplicity of the model render it suitable for many\nreinforcement learning problems in continuous state spaces. We demonstrate this\nin an experimental comparison with least squares policy iteration.", + "authors": "Nikolaos Tziortziotis, Christos Dimitrakakis, Konstantinos Blekas", + "published": "2013-05-08", + "updated": "2014-05-02", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1703.04489v1", + "title": "Reinforcement Learning for Transition-Based Mention Detection", + "abstract": "This paper describes an application of reinforcement learning to the mention\ndetection task. We define a novel action-based formulation for the mention\ndetection task, in which a model can flexibly revise past labeling decisions by\ngrouping together tokens and assigning partial mention labels. We devise a\nmethod to create mention-level episodes and we train a model by rewarding\ncorrectly labeled complete mentions, irrespective of the inner structure\ncreated. The model yields results which are on par with a competitive\nsupervised counterpart while being more flexible in terms of achieving targeted\nbehavior through reward modeling and generating internal mention structure,\nespecially on longer mentions.", + "authors": "Georgiana Dinu, Wael Hamza, Radu Florian", + "published": "2017-03-13", + "updated": "2017-03-13", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1712.04170v2", + "title": "Interpretable Policies for Reinforcement Learning by Genetic Programming", + "abstract": "The search for interpretable reinforcement learning policies is of high\nacademic and industrial interest. Especially for industrial systems, domain\nexperts are more likely to deploy autonomously learned controllers if they are\nunderstandable and convenient to evaluate. Basic algebraic equations are\nsupposed to meet these requirements, as long as they are restricted to an\nadequate complexity. Here we introduce the genetic programming for\nreinforcement learning (GPRL) approach based on model-based batch reinforcement\nlearning and genetic programming, which autonomously learns policy equations\nfrom pre-existing default state-action trajectory samples. GPRL is compared to\na straight-forward method which utilizes genetic programming for symbolic\nregression, yielding policies imitating an existing well-performing, but\nnon-interpretable policy. Experiments on three reinforcement learning\nbenchmarks, i.e., mountain car, cart-pole balancing, and industrial benchmark,\ndemonstrate the superiority of our GPRL approach compared to the symbolic\nregression method. GPRL is capable of producing well-performing interpretable\nreinforcement learning policies from pre-existing default trajectory data.", + "authors": "Daniel Hein, Steffen Udluft, Thomas A. 
Runkler", + "published": "2017-12-12", + "updated": "2018-04-04", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.NE", + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1910.11914v3", + "title": "On the convergence of projective-simulation-based reinforcement learning in Markov decision processes", + "abstract": "In recent years, the interest in leveraging quantum effects for enhancing\nmachine learning tasks has significantly increased. Many algorithms speeding up\nsupervised and unsupervised learning were established. The first framework in\nwhich ways to exploit quantum resources specifically for the broader context of\nreinforcement learning were found is projective simulation. Projective\nsimulation presents an agent-based reinforcement learning approach designed in\na manner which may support quantum walk-based speed-ups. Although classical\nvariants of projective simulation have been benchmarked against common\nreinforcement learning algorithms, very few formal theoretical analyses have\nbeen provided for its performance in standard learning scenarios. In this\npaper, we provide a detailed formal discussion of the properties of this model.\nSpecifically, we prove that one version of the projective simulation model,\nunderstood as a reinforcement learning approach, converges to optimal behavior\nin a large class of Markov decision processes. This proof shows that a\nphysically-inspired approach to reinforcement learning can guarantee to\nconverge.", + "authors": "Walter L. Boyajian, Jens Clausen, Lea M. Trenkwalder, Vedran Dunjko, Hans J. Briegel", + "published": "2019-10-25", + "updated": "2020-11-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "quant-ph", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1708.07738v1", + "title": "A Function Approximation Method for Model-based High-Dimensional Inverse Reinforcement Learning", + "abstract": "This works handles the inverse reinforcement learning problem in\nhigh-dimensional state spaces, which relies on an efficient solution of\nmodel-based high-dimensional reinforcement learning problems. To solve the\ncomputationally expensive reinforcement learning problems, we propose a\nfunction approximation method to ensure that the Bellman Optimality Equation\nalways holds, and then estimate a function based on the observed human actions\nfor inverse reinforcement learning problems. The time complexity of the\nproposed method is linearly proportional to the cardinality of the action set,\nthus it can handle high-dimensional even continuous state spaces efficiently.\nWe test the proposed method in a simulated environment to show its accuracy,\nand three clinical tasks to show how it can be used to evaluate a doctor's\nproficiency.", + "authors": "Kun Li, Joel W. Burdick", + "published": "2017-08-23", + "updated": "2017-08-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.12095v1", + "title": "Document-editing Assistants and Model-based Reinforcement Learning as a Path to Conversational AI", + "abstract": "Intelligent assistants that follow commands or answer simple questions, such\nas Siri and Google search, are among the most economically important\napplications of AI. 
Future conversational AI assistants promise even greater\ncapabilities and a better user experience through a deeper understanding of the\ndomain, the user, or the user's purposes. But what domain and what methods are\nbest suited to researching and realizing this promise? In this article we argue\nfor the domain of voice document editing and for the methods of model-based\nreinforcement learning. The primary advantages of voice document editing are\nthat the domain is tightly scoped and that it provides something for the\nconversation to be about (the document) that is delimited and fully accessible\nto the intelligent assistant. The advantages of reinforcement learning in\ngeneral are that its methods are designed to learn from interaction without\nexplicit instruction and that it formalizes the purposes of the assistant.\nModel-based reinforcement learning is needed in order to genuinely understand\nthe domain of discourse and thereby work efficiently with the user to achieve\ntheir goals. Together, voice document editing and model-based reinforcement\nlearning comprise a promising research direction for achieving conversational\nAI.", + "authors": "Katya Kudashkina, Patrick M. Pilarski, Richard S. Sutton", + "published": "2020-08-27", + "updated": "2020-08-27", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.HC", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2305.03360v1", + "title": "A Survey on Offline Model-Based Reinforcement Learning", + "abstract": "Model-based approaches are becoming increasingly popular in the field of\noffline reinforcement learning, with high potential in real-world applications\ndue to the model's capability of thoroughly utilizing the large historical\ndatasets available with supervised learning techniques. This paper presents a\nliterature review of recent work in offline model-based reinforcement learning,\na field that utilizes model-based approaches in offline reinforcement learning.\nThe survey provides a brief overview of the concepts and recent developments in\nboth offline reinforcement learning and model-based reinforcement learning, and\ndiscuss the intersection of the two fields. We then presents key relevant\npapers in the field of offline model-based reinforcement learning and discuss\ntheir methods, particularly their approaches in solving the issue of\ndistributional shift, the main problem faced by all current offline model-based\nreinforcement learning methods. We further discuss key challenges faced by the\nfield, and suggest possible directions for future work.", + "authors": "Haoyang He", + "published": "2023-05-05", + "updated": "2023-05-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SY", + "eess.SY", + "I.2.6; I.2.8" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2105.00822v2", + "title": "Generative Adversarial Reward Learning for Generalized Behavior Tendency Inference", + "abstract": "Recent advances in reinforcement learning have inspired increasing interest\nin learning user modeling adaptively through dynamic interactions, e.g., in\nreinforcement learning based recommender systems. Reward function is crucial\nfor most of reinforcement learning applications as it can provide the guideline\nabout the optimization. However, current reinforcement-learning-based methods\nrely on manually-defined reward functions, which cannot adapt to dynamic and\nnoisy environments. 
Besides, they generally use task-specific reward functions\nthat sacrifice generalization ability. We propose a generative inverse\nreinforcement learning for user behavioral preference modelling, to address the\nabove issues. Instead of using predefined reward functions, our model can\nautomatically learn the rewards from user's actions based on discriminative\nactor-critic network and Wasserstein GAN. Our model provides a general way of\ncharacterizing and explaining underlying behavioral tendencies, and our\nexperiments show our method outperforms state-of-the-art methods in a variety\nof scenarios, namely traffic signal control, online recommender systems, and\nscanpath prediction.", + "authors": "Xiaocong Chen, Lina Yao, Xianzhi Wang, Aixin Sun, Wenjie Zhang, Quan Z. Sheng", + "published": "2021-05-03", + "updated": "2021-05-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.IR" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2404.01794v1", + "title": "Imitation Game: A Model-based and Imitation Learning Deep Reinforcement Learning Hybrid", + "abstract": "Autonomous and learning systems based on Deep Reinforcement Learning have\nfirmly established themselves as a foundation for approaches to creating\nresilient and efficient Cyber-Physical Energy Systems. However, most current\napproaches suffer from two distinct problems: Modern model-free algorithms such\nas Soft Actor Critic need a high number of samples to learn a meaningful\npolicy, as well as a fallback to ward against concept drifts (e. g.,\ncatastrophic forgetting). In this paper, we present the work in progress\ntowards a hybrid agent architecture that combines model-based Deep\nReinforcement Learning with imitation learning to overcome both problems.", + "authors": "Eric MSP Veith, Torben Logemann, Aleksandr Berezin, Arlena Well\u00dfow, Stephan Balduin", + "published": "2024-04-02", + "updated": "2024-04-02", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.07260v1", + "title": "TIAGo RL: Simulated Reinforcement Learning Environments with Tactile Data for Mobile Robots", + "abstract": "Tactile information is important for robust performance in robotic tasks that\ninvolve physical interaction, such as object manipulation. However, with more\ndata included in the reasoning and control process, modeling behavior becomes\nincreasingly difficult. Deep Reinforcement Learning (DRL) produced promising\nresults for learning complex behavior in various domains, including\ntactile-based manipulation in robotics. In this work, we present our\nopen-source reinforcement learning environments for the TIAGo service robot.\nThey produce tactile sensor measurements that resemble those of a real\nsensorised gripper for TIAGo, encouraging research in transfer learning of DRL\npolicies. 
Lastly, we show preliminary training results of a learned force\ncontrol policy and compare it to a classical PI controller.", + "authors": "Luca Lach, Francesco Ferro, Robert Haschke", + "published": "2023-11-13", + "updated": "2023-11-13", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.07240v1", + "title": "Model-Reference Reinforcement Learning for Collision-Free Tracking Control of Autonomous Surface Vehicles", + "abstract": "This paper presents a novel model-reference reinforcement learning algorithm\nfor the intelligent tracking control of uncertain autonomous surface vehicles\nwith collision avoidance. The proposed control algorithm combines a\nconventional control method with reinforcement learning to enhance control\naccuracy and intelligence. In the proposed control design, a nominal system is\nconsidered for the design of a baseline tracking controller using a\nconventional control approach. The nominal system also defines the desired\nbehaviour of uncertain autonomous surface vehicles in an obstacle-free\nenvironment. Thanks to reinforcement learning, the overall tracking controller\nis capable of compensating for model uncertainties and achieving collision\navoidance at the same time in environments with obstacles. In comparison to\ntraditional deep reinforcement learning methods, our proposed learning-based\ncontrol can provide stability guarantees and better sample efficiency. We\ndemonstrate the performance of the new algorithm using an example of autonomous\nsurface vehicles.", + "authors": "Qingrui Zhang, Wei Pan, Vasso Reppa", + "published": "2020-08-17", + "updated": "2020-08-17", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.LG", + "cs.RO", + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2011.01734v1", + "title": "Differentiable Physics Models for Real-world Offline Model-based Reinforcement Learning", + "abstract": "A limitation of model-based reinforcement learning (MBRL) is the exploitation\nof errors in the learned models. Black-box models can fit complex dynamics with\nhigh fidelity, but their behavior is undefined outside of the data\ndistribution.Physics-based models are better at extrapolating, due to the\ngeneral validity of their informed structure, but underfit in the real world\ndue to the presence of unmodeled phenomena. In this work, we demonstrate\nexperimentally that for the offline model-based reinforcement learning setting,\nphysics-based models can be beneficial compared to high-capacity function\napproximators if the mechanical structure is known. Physics-based models can\nlearn to perform the ball in a cup (BiC) task on a physical manipulator using\nonly 4 minutes of sampled data using offline MBRL. We find that black-box\nmodels consistently produce unviable policies for BiC as all predicted\ntrajectories diverge to physically impossible state, despite having access to\nmore data than the physics-based model. 
In addition, we generalize the approach\nof physics parameter identification from modeling holonomic multi-body systems\nto systems with nonholonomic dynamics using end-to-end automatic\ndifferentiation.\n Videos: https://sites.google.com/view/ball-in-a-cup-in-4-minutes/", + "authors": "Michael Lutter, Johannes Silberbauer, Joe Watson, Jan Peters", + "published": "2020-11-03", + "updated": "2020-11-03", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2402.16543v2", + "title": "Model-based deep reinforcement learning for accelerated learning from flow simulations", + "abstract": "In recent years, deep reinforcement learning has emerged as a technique to\nsolve closed-loop flow control problems. Employing simulation-based\nenvironments in reinforcement learning enables a priori end-to-end optimization\nof the control system, provides a virtual testbed for safety-critical control\napplications, and allows to gain a deep understanding of the control\nmechanisms. While reinforcement learning has been applied successfully in a\nnumber of rather simple flow control benchmarks, a major bottleneck toward\nreal-world applications is the high computational cost and turnaround time of\nflow simulations. In this contribution, we demonstrate the benefits of\nmodel-based reinforcement learning for flow control applications. Specifically,\nwe optimize the policy by alternating between trajectories sampled from flow\nsimulations and trajectories sampled from an ensemble of environment models.\nThe model-based learning reduces the overall training time by up to $85\\%$ for\nthe fluidic pinball test case. Even larger savings are expected for more\ndemanding flow simulations.", + "authors": "Andre Weiner, Janis Geise", + "published": "2024-02-26", + "updated": "2024-04-10", + "primary_cat": "physics.flu-dyn", + "cats": [ + "physics.flu-dyn", + "cs.CE", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2304.00006v1", + "title": "Bi-directional personalization reinforcement learning-based architecture with active learning using a multi-model data service for the travel nursing industry", + "abstract": "The challenges of using inadequate online recruitment systems can be\naddressed with machine learning and software engineering techniques.\nBi-directional personalization reinforcement learning-based architecture with\nactive learning can get recruiters to recommend qualified applicants and also\nenable applicants to receive personalized job recommendations. This paper\nfocuses on how machine learning techniques can enhance the recruitment process\nin the travel nursing industry by helping speed up data acquisition using a\nmulti-model data service and then providing personalized recommendations using\nbi-directional reinforcement learning with active learning. This need was\nespecially evident when trying to respond to the overwhelming needs of\nhealthcare facilities during the COVID-19 pandemic. The need for traveling\nnurses and other healthcare professionals was more evident during the lockdown\nperiod. A data service was architected for job feed processing using an\norchestration of natural language processing (NLP) models that synthesize\njob-related data into a database efficiently and accurately. 
The multi-model\ndata service provided the data necessary to develop a bi-directional\npersonalization system using reinforcement learning with active learning that\ncould recommend travel nurses and healthcare professionals to recruiters and\nprovide job recommendations to applicants using an internally developed smart\nmatch score as a basis. The bi-directional personalization reinforcement\nlearning-based architecture with active learning combines two personalization\nsystems - one that runs forward to recommend qualified candidates for jobs and\nanother that runs backward and recommends jobs for applicants.", + "authors": "Ezana N. Beyenne", + "published": "2023-03-14", + "updated": "2023-03-14", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.LG", + "I.2" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1709.05067v1", + "title": "Deep Reinforcement Learning for Conversational AI", + "abstract": "Deep reinforcement learning is revolutionizing the artificial intelligence\nfield. Currently, it serves as a good starting point for constructing\nintelligent autonomous systems which offer a better knowledge of the visual\nworld. It is possible to scale deep reinforcement learning with the use of deep\nlearning and do amazing tasks such as use of pixels in playing video games. In\nthis paper, key concepts of deep reinforcement learning including reward\nfunction, differences between reinforcement learning and supervised learning\nand models for implementation of reinforcement are discussed. Key challenges\nrelated to the implementation of reinforcement learning in conversational AI\ndomain are identified as well as discussed in detail. Various conversational\nmodels which are based on deep reinforcement learning (as well as deep\nlearning) are also discussed. In summary, this paper discusses key aspects of\ndeep reinforcement learning which are crucial for designing an efficient\nconversational AI.", + "authors": "Mahipal Jadeja, Neelanshi Varia, Agam Shah", + "published": "2017-09-15", + "updated": "2017-09-15", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2202.09064v2", + "title": "Can Interpretable Reinforcement Learning Manage Prosperity Your Way?", + "abstract": "Personalisation of products and services is fast becoming the driver of\nsuccess in banking and commerce. Machine learning holds the promise of gaining\na deeper understanding of and tailoring to customers' needs and preferences.\nWhereas traditional solutions to financial decision problems frequently rely on\nmodel assumptions, reinforcement learning is able to exploit large amounts of\ndata to improve customer modelling and decision-making in complex financial\nenvironments with fewer assumptions. Model explainability and interpretability\npresent challenges from a regulatory perspective which demands transparency for\nacceptance; they also offer the opportunity for improved insight into and\nunderstanding of customers. Post-hoc approaches are typically used for\nexplaining pretrained reinforcement learning models. Based on our previous\nmodeling of customer spending behaviour, we adapt our recent reinforcement\nlearning algorithm that intrinsically characterizes desirable behaviours and we\ntransition to the problem of asset management. 
We train inherently\ninterpretable reinforcement learning agents to give investment advice that is\naligned with prototype financial personality traits which are combined to make\na final recommendation. We observe that the trained agents' advice adheres to\ntheir intended characteristics, they learn the value of compound growth, and,\nwithout any explicit reference, the notion of risk as well as improved policy\nconvergence.", + "authors": "Charl Maree, Christian Omlin", + "published": "2022-02-18", + "updated": "2022-06-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1506.00685v1", + "title": "Model-based reinforcement learning for infinite-horizon approximate optimal tracking", + "abstract": "This paper provides an approximate online adaptive solution to the\ninfinite-horizon optimal tracking problem for control-affine continuous-time\nnonlinear systems with unknown drift dynamics. Model-based reinforcement\nlearning is used to relax the persistence of excitation condition. Model-based\nreinforcement learning is implemented using a concurrent learning-based system\nidentifier to simulate experience by evaluating the Bellman error over\nunexplored areas of the state space. Tracking of the desired trajectory and\nconvergence of the developed policy to a neighborhood of the optimal policy are\nestablished via Lyapunov-based stability analysis. Simulation results\ndemonstrate the effectiveness of the developed technique.", + "authors": "Rushikesh Kamalapurkar, Lindsey Andrews, Patrick Walters, Warren E. Dixon", + "published": "2015-06-01", + "updated": "2015-06-01", + "primary_cat": "cs.SY", + "cats": [ + "cs.SY", + "math.OC" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1912.03918v1", + "title": "Transformer Based Reinforcement Learning For Games", + "abstract": "Recent times have witnessed sharp improvements in reinforcement learning\ntasks using deep reinforcement learning techniques like Deep Q Networks, Policy\nGradients, Actor Critic methods which are based on deep learning based models\nand back-propagation of gradients to train such models. An active area of\nresearch in reinforcement learning is about training agents to play complex\nvideo games, which so far has been something accomplished only by human\nintelligence. Some state of the art performances in video game playing using\ndeep reinforcement learning are obtained by processing the sequence of frames\nfrom video games, passing them through a convolutional network to obtain\nfeatures and then using recurrent neural networks to figure out the action\nleading to optimal rewards. The recurrent neural network will learn to extract\nthe meaningful signal out of the sequence of such features. 
In this work, we\npropose a method utilizing a transformer network which have recently replaced\nRNNs in Natural Language Processing (NLP), and perform experiments to compare\nwith existing methods.", + "authors": "Uddeshya Upadhyay, Nikunj Shah, Sucheta Ravikanti, Mayanka Medhe", + "published": "2019-12-09", + "updated": "2019-12-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NE" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1206.3281v1", + "title": "Model-Based Bayesian Reinforcement Learning in Large Structured Domains", + "abstract": "Model-based Bayesian reinforcement learning has generated significant\ninterest in the AI community as it provides an elegant solution to the optimal\nexploration-exploitation tradeoff in classical reinforcement learning.\nUnfortunately, the applicability of this type of approach has been limited to\nsmall domains due to the high complexity of reasoning about the joint posterior\nover model parameters. In this paper, we consider the use of factored\nrepresentations combined with online planning techniques, to improve\nscalability of these methods. The main contribution of this paper is a Bayesian\nframework for learning the structure and parameters of a dynamical system,\nwhile also simultaneously planning a (near-)optimal sequence of actions.", + "authors": "Stephane Ross, Joelle Pineau", + "published": "2012-06-13", + "updated": "2012-06-13", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2005.05440v1", + "title": "Delay-Aware Model-Based Reinforcement Learning for Continuous Control", + "abstract": "Action delays degrade the performance of reinforcement learning in many\nreal-world systems. This paper proposes a formal definition of delay-aware\nMarkov Decision Process and proves it can be transformed into standard MDP with\naugmented states using the Markov reward process. We develop a delay-aware\nmodel-based reinforcement learning framework that can incorporate the\nmulti-step delay into the learned system models without learning effort.\nExperiments with the Gym and MuJoCo platforms show that the proposed\ndelay-aware model-based algorithm is more efficient in training and\ntransferable between systems with various durations of delay compared with\noff-policy model-free reinforcement learning methods. Codes available at:\nhttps://github.com/baimingc/dambrl.", + "authors": "Baiming Chen, Mengdi Xu, Liang Li, Ding Zhao", + "published": "2020-05-11", + "updated": "2020-05-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2307.16348v2", + "title": "Rating-based Reinforcement Learning", + "abstract": "This paper develops a novel rating-based reinforcement learning approach that\nuses human ratings to obtain human guidance in reinforcement learning.\nDifferent from the existing preference-based and ranking-based reinforcement\nlearning paradigms, based on human relative preferences over sample pairs, the\nproposed rating-based reinforcement learning approach is based on human\nevaluation of individual trajectories without relative comparisons between\nsample pairs. The rating-based reinforcement learning approach builds on a new\nprediction model for human ratings and a novel multi-class loss function. 
We\nconduct several experimental studies based on synthetic ratings and real human\nratings to evaluate the effectiveness and benefits of the new rating-based\nreinforcement learning approach.", + "authors": "Devin White, Mingkang Wu, Ellen Novoseller, Vernon J. Lawhern, Nicholas Waytowich, Yongcan Cao", + "published": "2023-07-30", + "updated": "2024-01-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2009.14365v1", + "title": "Toolpath design for additive manufacturing using deep reinforcement learning", + "abstract": "Toolpath optimization of metal-based additive manufacturing processes is\ncurrently hampered by the high-dimensionality of its design space. In this\nwork, a reinforcement learning platform is proposed that dynamically learns\ntoolpath strategies to build an arbitrary part. To this end, three prominent\nmodel-free reinforcement learning formulations are investigated to design\nadditive manufacturing toolpaths and demonstrated for two cases of dense and\nsparse reward structures. The results indicate that this learning-based\ntoolpath design approach achieves high scores, especially when a dense reward\nstructure is present.", + "authors": "Mojtaba Mozaffar, Ablodghani Ebrahimi, Jian Cao", + "published": "2020-09-30", + "updated": "2020-09-30", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.09450v1", + "title": "Adversarial Imitation Learning via Random Search", + "abstract": "Developing agents that can perform challenging complex tasks is the goal of\nreinforcement learning. The model-free reinforcement learning has been\nconsidered as a feasible solution. However, the state of the art research has\nbeen to develop increasingly complicated techniques. This increasing complexity\nmakes the reconstruction difficult. Furthermore, the problem of reward\ndependency is still exists. As a result, research on imitation learning, which\nlearns policy from a demonstration of experts, has begun to attract attention.\nImitation learning directly learns policy based on data on the behavior of the\nexperts without the explicit reward signal provided by the environment.\nHowever, imitation learning tries to optimize policies based on deep\nreinforcement learning such as trust region policy optimization. As a result,\ndeep reinforcement learning based imitation learning also poses a crisis of\nreproducibility. The issue of complex model-free model has received\nconsiderable critical attention. A derivative-free optimization based\nreinforcement learning and the simplification on policies obtain competitive\nperformance on the dynamic complex tasks. The simplified policies and\nderivative free methods make algorithm be simple. The reconfiguration of\nresearch demo becomes easy. In this paper, we propose an imitation learning\nmethod that takes advantage of the derivative-free optimization with simple\nlinear policies. The proposed method performs simple random search in the\nparameter space of policies and shows computational efficiency. 
Experiments in\nthis paper show that the proposed model, without a direct reward signal from\nthe environment, obtains competitive performance on the MuJoCo locomotion\ntasks.", + "authors": "MyungJae Shin, Joongheon Kim", + "published": "2020-08-21", + "updated": "2020-08-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1507.06923v1", + "title": "A Reinforcement Learning Approach to Online Learning of Decision Trees", + "abstract": "Online decision tree learning algorithms typically examine all features of a\nnew data point to update model parameters. We propose a novel alternative,\nReinforcement Learning- based Decision Trees (RLDT), that uses Reinforcement\nLearning (RL) to actively examine a minimal number of features of a data point\nto classify it with high accuracy. Furthermore, RLDT optimizes a long term\nreturn, providing a better alternative to the traditional myopic greedy\napproach to growing decision trees. We demonstrate that this approach performs\nas well as batch learning algorithms and other online decision tree learning\nalgorithms, while making significantly fewer queries about the features of the\ndata points. We also show that RLDT can effectively handle concept drift.", + "authors": "Abhinav Garlapati, Aditi Raghunathan, Vaishnavh Nagarajan, Balaraman Ravindran", + "published": "2015-07-24", + "updated": "2015-07-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.12516v2", + "title": "Prioritized Experience-based Reinforcement Learning with Human Guidance for Autonomous Driving", + "abstract": "Reinforcement learning (RL) requires skillful definition and remarkable\ncomputational efforts to solve optimization and control problems, which could\nimpair its prospect. Introducing human guidance into reinforcement learning is\na promising way to improve learning performance. In this paper, a comprehensive\nhuman guidance-based reinforcement learning framework is established. A novel\nprioritized experience replay mechanism that adapts to human guidance in the\nreinforcement learning process is proposed to boost the efficiency and\nperformance of the reinforcement learning algorithm. To relieve the heavy\nworkload on human participants, a behavior model is established based on an\nincremental online learning method to mimic human actions. We design two\nchallenging autonomous driving tasks for evaluating the proposed algorithm.\nExperiments are conducted to access the training and testing performance and\nlearning mechanism of the proposed algorithm. Comparative results against the\nstate-of-the-art methods suggest the advantages of our algorithm in terms of\nlearning efficiency, performance, and robustness.", + "authors": "Jingda Wu, Zhiyu Huang, Wenhui Huang, Chen Lv", + "published": "2021-09-26", + "updated": "2022-11-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1810.03198v1", + "title": "Reinforcement Evolutionary Learning Method for self-learning", + "abstract": "In statistical modelling the biggest threat is concept drift which makes the\nmodel gradually showing deteriorating performance over time. 
There are state of\nthe art methodologies to detect the impact of concept drift, however general\nstrategy considered to overcome the issue in performance is to rebuild or\nre-calibrate the model periodically as the variable patterns for the model\nchanges significantly due to market change or consumer behavior change etc.\nQuantitative research is the most widely spread application of data science in\nMarketing or financial domain where applicability of state of the art\nreinforcement learning for auto-learning is less explored paradigm.\nReinforcement learning is heavily dependent on having a simulated environment\nwhich is majorly available for gaming or online systems, to learn from the live\nfeedback. However, there are some research happened on the area of online\nadvertisement, pricing etc where due to the nature of the online learning\nenvironment scope of reinforcement learning is explored. Our proposed solution\nis a reinforcement learning based, true self-learning algorithm which can adapt\nto the data change or concept drift and auto learn and self-calibrate for the\nnew patterns of the data solving the problem of concept drift.\n Keywords - Reinforcement learning, Genetic Algorithm, Q-learning,\nClassification modelling, CMA-ES, NES, Multi objective optimization, Concept\ndrift, Population stability index, Incremental learning, F1-measure, Predictive\nModelling, Self-learning, MCTS, AlphaGo, AlphaZero", + "authors": "Kumarjit Pathak, Jitin Kapila", + "published": "2018-10-07", + "updated": "2018-10-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NE", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1810.01112v1", + "title": "The Dreaming Variational Autoencoder for Reinforcement Learning Environments", + "abstract": "Reinforcement learning has shown great potential in generalizing over raw\nsensory data using only a single neural network for value optimization. There\nare several challenges in the current state-of-the-art reinforcement learning\nalgorithms that prevent them from converging towards the global optima. It is\nlikely that the solution to these problems lies in short- and long-term\nplanning, exploration and memory management for reinforcement learning\nalgorithms. Games are often used to benchmark reinforcement learning algorithms\nas they provide a flexible, reproducible, and easy to control environment.\nRegardless, few games feature a state-space where results in exploration,\nmemory, and planning are easily perceived. This paper presents The Dreaming\nVariational Autoencoder (DVAE), a neural network based generative modeling\narchitecture for exploration in environments with sparse feedback. We further\npresent Deep Maze, a novel and flexible maze engine that challenges DVAE in\npartial and fully-observable state-spaces, long-horizon tasks, and\ndeterministic and stochastic problems. 
We show initial findings and encourage\nfurther work in reinforcement learning driven by generative exploration.", + "authors": "Per-Arne Andersen, Morten Goodwin, Ole-Christoffer Granmo", + "published": "2018-10-02", + "updated": "2018-10-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1611.00862v1", + "title": "Quantile Reinforcement Learning", + "abstract": "In reinforcement learning, the standard criterion to evaluate policies in a\nstate is the expectation of (discounted) sum of rewards. However, this\ncriterion may not always be suitable, we consider an alternative criterion\nbased on the notion of quantiles. In the case of episodic reinforcement\nlearning problems, we propose an algorithm based on stochastic approximation\nwith two timescales. We evaluate our proposition on a simple model of the TV\nshow, Who wants to be a millionaire.", + "authors": "Hugo Gilbert, Paul Weng", + "published": "2016-11-03", + "updated": "2016-11-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2301.11520v3", + "title": "SNeRL: Semantic-aware Neural Radiance Fields for Reinforcement Learning", + "abstract": "As previous representations for reinforcement learning cannot effectively\nincorporate a human-intuitive understanding of the 3D environment, they usually\nsuffer from sub-optimal performances. In this paper, we present Semantic-aware\nNeural Radiance Fields for Reinforcement Learning (SNeRL), which jointly\noptimizes semantic-aware neural radiance fields (NeRF) with a convolutional\nencoder to learn 3D-aware neural implicit representation from multi-view\nimages. We introduce 3D semantic and distilled feature fields in parallel to\nthe RGB radiance fields in NeRF to learn semantic and object-centric\nrepresentation for reinforcement learning. SNeRL outperforms not only previous\npixel-based representations but also recent 3D-aware representations both in\nmodel-free and model-based reinforcement learning.", + "authors": "Dongseok Shim, Seungjae Lee, H. Jin Kim", + "published": "2023-01-27", + "updated": "2023-05-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2102.03022v1", + "title": "Deceptive Reinforcement Learning for Privacy-Preserving Planning", + "abstract": "In this paper, we study the problem of deceptive reinforcement learning to\npreserve the privacy of a reward function. Reinforcement learning is the\nproblem of finding a behaviour policy based on rewards received from\nexploratory behaviour. A key ingredient in reinforcement learning is a reward\nfunction, which determines how much reward (negative or positive) is given and\nwhen. However, in some situations, we may want to keep a reward function\nprivate; that is, to make it difficult for an observer to determine the reward\nfunction used. We define the problem of privacy-preserving reinforcement\nlearning, and present two models for solving it. These models are based on\ndissimulation -- a form of deception that `hides the truth'. We evaluate our\nmodels both computationally and via human behavioural experiments. 
Results show\nthat the resulting policies are indeed deceptive, and that participants can\ndetermine the true reward function less reliably than that of an honest agent.", + "authors": "Zhengshang Liu, Yue Yang, Tim Miller, Peta Masters", + "published": "2021-02-05", + "updated": "2021-02-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.MA" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.01474v1", + "title": "Offline Reinforcement Learning with Causal Structured World Models", + "abstract": "Model-based methods have recently shown promising for offline reinforcement\nlearning (RL), aiming to learn good policies from historical data without\ninteracting with the environment. Previous model-based offline RL methods learn\nfully connected nets as world-models that map the states and actions to the\nnext-step states. However, it is sensible that a world-model should adhere to\nthe underlying causal effect such that it will support learning an effective\npolicy generalizing well in unseen states. In this paper, We first provide\ntheoretical results that causal world-models can outperform plain world-models\nfor offline RL by incorporating the causal structure into the generalization\nerror bound. We then propose a practical algorithm, oFfline mOdel-based\nreinforcement learning with CaUsal Structure (FOCUS), to illustrate the\nfeasibility of learning and leveraging causal structure in offline RL.\nExperimental results on two benchmarks show that FOCUS reconstructs the\nunderlying causal structure accurately and robustly. Consequently, it performs\nbetter than the plain model-based offline RL algorithms and other causal\nmodel-based RL algorithms.", + "authors": "Zheng-Mao Zhu, Xiong-Hui Chen, Hong-Long Tian, Kun Zhang, Yang Yu", + "published": "2022-06-03", + "updated": "2022-06-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2010.13529v2", + "title": "Lyapunov-Based Reinforcement Learning State Estimator", + "abstract": "In this paper, we consider the state estimation problem for nonlinear\nstochastic discrete-time systems. We combine Lyapunov's method in control\ntheory and deep reinforcement learning to design the state estimator. We\ntheoretically prove the convergence of the bounded estimate error solely using\nthe data simulated from the model. An actor-critic reinforcement learning\nalgorithm is proposed to learn the state estimator approximated by a deep\nneural network. The convergence of the algorithm is analysed. The proposed\nLyapunov-based reinforcement learning state estimator is compared with a number\nof existing nonlinear filtering methods through Monte Carlo simulations,\nshowing its advantage in terms of estimate convergence even under some system\nuncertainties such as covariance shift in system noise and randomly missing\nmeasurements. 
To the best of our knowledge, this is the first reinforcement\nlearning based nonlinear state estimator with bounded estimate error\nperformance guarantee.", + "authors": "Liang Hu, Chengwei Wu, Wei Pan", + "published": "2020-10-26", + "updated": "2021-01-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO", + "cs.SY", + "eess.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2010.11738v1", + "title": "Optimising Stochastic Routing for Taxi Fleets with Model Enhanced Reinforcement Learning", + "abstract": "The future of mobility-as-a-Service (Maas)should embrace an integrated system\nof ride-hailing, street-hailing and ride-sharing with optimised intelligent\nvehicle routing in response to a real-time, stochastic demand pattern. We aim\nto optimise routing policies for a large fleet of vehicles for street-hailing\nservices, given a stochastic demand pattern in small to medium-sized road\nnetworks. A model-based dispatch algorithm, a high performance model-free\nreinforcement learning based algorithm and a novel hybrid algorithm combining\nthe benefits of both the top-down approach and the model-free reinforcement\nlearning have been proposed to route the \\emph{vacant} vehicles. We design our\nreinforcement learning based routing algorithm using proximal policy\noptimisation and combined intrinsic and extrinsic rewards to strike a balance\nbetween exploration and exploitation. Using a large-scale agent-based\nmicroscopic simulation platform to evaluate our proposed algorithms, our\nmodel-free reinforcement learning and hybrid algorithm show excellent\nperformance on both artificial road network and community-based Singapore road\nnetwork with empirical demands, and our hybrid algorithm can significantly\naccelerate the model-free learner in the process of learning.", + "authors": "Shen Ren, Qianxiao Li, Liye Zhang, Zheng Qin, Bo Yang", + "published": "2020-10-22", + "updated": "2020-10-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "nlin.AO", + "physics.soc-ph" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1709.09346v2", + "title": "Cold-Start Reinforcement Learning with Softmax Policy Gradient", + "abstract": "Policy-gradient approaches to reinforcement learning have two common and\nundesirable overhead procedures, namely warm-start training and sample variance\nreduction. In this paper, we describe a reinforcement learning method based on\na softmax value function that requires neither of these procedures. Our method\ncombines the advantages of policy-gradient methods with the efficiency and\nsimplicity of maximum-likelihood approaches. We apply this new cold-start\nreinforcement learning method in training sequence generation models for\nstructured output prediction problems. Empirical evidence validates this method\non automatic summarization and image captioning tasks.", + "authors": "Nan Ding, Radu Soricut", + "published": "2017-09-27", + "updated": "2017-10-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1806.01265v2", + "title": "Equivalence Between Wasserstein and Value-Aware Loss for Model-based Reinforcement Learning", + "abstract": "Learning a generative model is a key component of model-based reinforcement\nlearning. 
Though learning a good model in the tabular setting is a simple task,\nlearning a useful model in the approximate setting is challenging. In this\ncontext, an important question is the loss function used for model learning as\nvarying the loss function can have a remarkable impact on effectiveness of\nplanning. Recently Farahmand et al. (2017) proposed a value-aware model\nlearning (VAML) objective that captures the structure of value function during\nmodel learning. Using tools from Asadi et al. (2018), we show that minimizing\nthe VAML objective is in fact equivalent to minimizing the Wasserstein metric.\nThis equivalence improves our understanding of value-aware models, and also\ncreates a theoretical foundation for applications of Wasserstein in model-based\nreinforcement~learning.", + "authors": "Kavosh Asadi, Evan Cater, Dipendra Misra, Michael L. Littman", + "published": "2018-06-01", + "updated": "2018-07-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.13044v1", + "title": "Reinforcement Learning with Feedback-modulated TD-STDP", + "abstract": "Spiking neuron networks have been used successfully to solve simple\nreinforcement learning tasks with continuous action set applying learning rules\nbased on spike-timing-dependent plasticity (STDP). However, most of these\nmodels cannot be applied to reinforcement learning tasks with discrete action\nset since they assume that the selected action is a deterministic function of\nfiring rate of neurons, which is continuous. In this paper, we propose a new\nSTDP-based learning rule for spiking neuron networks which contains feedback\nmodulation. We show that the STDP-based learning rule can be used to solve\nreinforcement learning tasks with discrete action set at a speed similar to\nstandard reinforcement learning algorithms when applied to the CartPole and\nLunarLander tasks. Moreover, we demonstrate that the agent is unable to solve\nthese tasks if feedback modulation is omitted from the learning rule. We\nconclude that feedback modulation allows better credit assignment when only the\nunits contributing to the executed action and TD error participate in learning.", + "authors": "Stephen Chung, Robert Kozma", + "published": "2020-08-29", + "updated": "2020-08-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML", + "I.2.8" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1906.12189v1", + "title": "Learning-based Model Predictive Control for Safe Exploration and Reinforcement Learning", + "abstract": "Reinforcement learning has been successfully used to solve difficult tasks in\ncomplex unknown environments. However, these methods typically do not provide\nany safety guarantees during the learning process. This is particularly\nproblematic, since reinforcement learning agent actively explore their\nenvironment. This prevents their use in safety-critical, real-world\napplications. In this paper, we present a learning-based model predictive\ncontrol scheme that provides high-probability safety guarantees throughout the\nlearning process. Based on a reliable statistical model, we construct provably\naccurate confidence intervals on predicted trajectories. Unlike previous\napproaches, we allow for input-dependent uncertainties. 
Based on these reliable\npredictions, we guarantee that trajectories satisfy safety constraints.\nMoreover, we use a terminal set constraint to recursively guarantee the\nexistence of safe control actions at every iteration. We evaluate the resulting\nalgorithm to safely explore the dynamics of an inverted pendulum and to solve a\nreinforcement learning task on a cart-pole system with safety constraints.", + "authors": "Torsten Koller, Felix Berkenkamp, Matteo Turchetta, Joschka Boedecker, Andreas Krause", + "published": "2019-06-27", + "updated": "2019-06-27", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.AI", + "cs.LG", + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1804.07193v3", + "title": "Lipschitz Continuity in Model-based Reinforcement Learning", + "abstract": "We examine the impact of learning Lipschitz continuous models in the context\nof model-based reinforcement learning. We provide a novel bound on multi-step\nprediction error of Lipschitz models where we quantify the error using the\nWasserstein metric. We go on to prove an error bound for the value-function\nestimate arising from Lipschitz models and show that the estimated value\nfunction is itself Lipschitz. We conclude with empirical results that show the\nbenefits of controlling the Lipschitz constant of neural-network models.", + "authors": "Kavosh Asadi, Dipendra Misra, Michael L. Littman", + "published": "2018-04-19", + "updated": "2018-07-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2010.12142v1", + "title": "Bridging Imagination and Reality for Model-Based Deep Reinforcement Learning", + "abstract": "Sample efficiency has been one of the major challenges for deep reinforcement\nlearning. Recently, model-based reinforcement learning has been proposed to\naddress this challenge by performing planning on imaginary trajectories with a\nlearned world model. However, world model learning may suffer from overfitting\nto training trajectories, and thus model-based value estimation and policy\nsearch will be pone to be sucked in an inferior local policy. In this paper, we\npropose a novel model-based reinforcement learning algorithm, called BrIdging\nReality and Dream (BIRD). It maximizes the mutual information between imaginary\nand real trajectories so that the policy improvement learned from imaginary\ntrajectories can be easily generalized to real trajectories. We demonstrate\nthat our approach improves sample efficiency of model-based planning, and\nachieves state-of-the-art performance on challenging visual control benchmarks.", + "authors": "Guangxiang Zhu, Minghao Zhang, Honglak Lee, Chongjie Zhang", + "published": "2020-10-23", + "updated": "2020-10-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1802.10592v2", + "title": "Model-Ensemble Trust-Region Policy Optimization", + "abstract": "Model-free reinforcement learning (RL) methods are succeeding in a growing\nnumber of tasks, aided by recent advances in deep learning. However, they tend\nto suffer from high sample complexity, which hinders their use in real-world\ndomains. 
Alternatively, model-based reinforcement learning promises to reduce\nsample complexity, but tends to require careful tuning and to date have\nsucceeded mainly in restrictive domains where simple models are sufficient for\nlearning. In this paper, we analyze the behavior of vanilla model-based\nreinforcement learning methods when deep neural networks are used to learn both\nthe model and the policy, and show that the learned policy tends to exploit\nregions where insufficient data is available for the model to be learned,\ncausing instability in training. To overcome this issue, we propose to use an\nensemble of models to maintain the model uncertainty and regularize the\nlearning process. We further show that the use of likelihood ratio derivatives\nyields much more stable learning than backpropagation through time. Altogether,\nour approach Model-Ensemble Trust-Region Policy Optimization (ME-TRPO)\nsignificantly reduces the sample complexity compared to model-free deep RL\nmethods on challenging continuous control benchmark tasks.", + "authors": "Thanard Kurutach, Ignasi Clavera, Yan Duan, Aviv Tamar, Pieter Abbeel", + "published": "2018-02-28", + "updated": "2018-10-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1705.03562v1", + "title": "Deep Episodic Value Iteration for Model-based Meta-Reinforcement Learning", + "abstract": "We present a new deep meta reinforcement learner, which we call Deep Episodic\nValue Iteration (DEVI). DEVI uses a deep neural network to learn a similarity\nmetric for a non-parametric model-based reinforcement learning algorithm. Our\nmodel is trained end-to-end via back-propagation. Despite being trained using\nthe model-free Q-learning objective, we show that DEVI's model-based internal\nstructure provides `one-shot' transfer to changes in reward and transition\nstructure, even for tasks with very high-dimensional state spaces.", + "authors": "Steven Stenberg Hansen", + "published": "2017-05-09", + "updated": "2017-05-09", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.AI", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1901.07905v2", + "title": "Reinforcement Learning Ship Autopilot: Sample efficient and Model Predictive Control-based Approach", + "abstract": "In this research we focus on developing a reinforcement learning system for a\nchallenging task: autonomous control of a real-sized boat, with difficulties\narising from large uncertainties in the challenging ocean environment and the\nextremely high cost of exploring and sampling with a real boat. To this end, we\nexplore a novel Gaussian processes (GP) based reinforcement learning approach\nthat combines sample-efficient model-based reinforcement learning and model\npredictive control (MPC). Our approach, sample-efficient probabilistic model\npredictive control (SPMPC), iteratively learns a Gaussian process dynamics\nmodel and uses it to efficiently update control signals within the MPC closed\ncontrol loop. A system using SPMPC is built to efficiently learn an autopilot\ntask. 
After investigating its performance in a simulation modeled upon real\nboat driving data, the proposed system successfully learns to drive a\nreal-sized boat equipped with a single engine and sensors measuring GPS, speed,\ndirection, and wind in an autopilot task without human demonstration.", + "authors": "Yunduan Cui, Shigeki Osaki, Takamitsu Matsubara", + "published": "2019-01-23", + "updated": "2019-07-23", + "primary_cat": "cs.SY", + "cats": [ + "cs.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2212.08232v1", + "title": "Offline Robot Reinforcement Learning with Uncertainty-Guided Human Expert Sampling", + "abstract": "Recent advances in batch (offline) reinforcement learning have shown\npromising results in learning from available offline data and proved offline\nreinforcement learning to be an essential toolkit in learning control policies\nin a model-free setting. An offline reinforcement learning algorithm applied to\na dataset collected by a suboptimal non-learning-based algorithm can result in\na policy that outperforms the behavior agent used to collect the data. Such a\nscenario is frequent in robotics, where existing automation is collecting\noperational data. Although offline learning techniques can learn from data\ngenerated by a sub-optimal behavior agent, there is still an opportunity to\nimprove the sample complexity of existing offline reinforcement learning\nalgorithms by strategically introducing human demonstration data into the\ntraining process. To this end, we propose a novel approach that uses\nuncertainty estimation to trigger the injection of human demonstration data and\nguide policy training towards optimal behavior while reducing overall sample\ncomplexity. Our experiments show that this approach is more sample efficient\nwhen compared to a naive way of combining expert data with data collected from\na sub-optimal agent. We augmented an existing offline reinforcement learning\nalgorithm Conservative Q-Learning with our approach and performed experiments\non data collected from MuJoCo and OffWorld Gym learning environments.", + "authors": "Ashish Kumar, Ilya Kuzovkin", + "published": "2022-12-16", + "updated": "2022-12-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.RO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2209.05530v1", + "title": "Model-based Reinforcement Learning with Multi-step Plan Value Estimation", + "abstract": "A promising way to improve the sample efficiency of reinforcement learning is\nmodel-based methods, in which many explorations and evaluations can happen in\nthe learned models to save real-world samples. However, when the learned model\nhas a non-negligible model error, sequential steps in the model are hard to be\naccurately evaluated, limiting the model's utilization. This paper proposes to\nalleviate this issue by introducing multi-step plans to replace multi-step\nactions for model-based RL. We employ the multi-step plan value estimation,\nwhich evaluates the expected discounted return after executing a sequence of\naction plans at a given state, and updates the policy by directly computing the\nmulti-step policy gradient via plan value estimation. 
The new model-based\nreinforcement learning algorithm MPPVE (Model-based Planning Policy Learning\nwith Multi-step Plan Value Estimation) shows a better utilization of the\nlearned model and achieves a better sample efficiency than state-of-the-art\nmodel-based RL approaches.", + "authors": "Haoxin Lin, Yihao Sun, Jiaji Zhang, Yang Yu", + "published": "2022-09-12", + "updated": "2022-09-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1406.1853v2", + "title": "Model-based Reinforcement Learning and the Eluder Dimension", + "abstract": "We consider the problem of learning to optimize an unknown Markov decision\nprocess (MDP). We show that, if the MDP can be parameterized within some known\nfunction class, we can obtain regret bounds that scale with the dimensionality,\nrather than cardinality, of the system. We characterize this dependence\nexplicitly as $\\tilde{O}(\\sqrt{d_K d_E T})$ where $T$ is time elapsed, $d_K$ is\nthe Kolmogorov dimension and $d_E$ is the \\emph{eluder dimension}. These\nrepresent the first unified regret bounds for model-based reinforcement\nlearning and provide state of the art guarantees in several important settings.\nMoreover, we present a simple and computationally efficient algorithm\n\\emph{posterior sampling for reinforcement learning} (PSRL) that satisfies\nthese bounds.", + "authors": "Ian Osband, Benjamin Van Roy", + "published": "2014-06-07", + "updated": "2014-10-31", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1903.08543v6", + "title": "Optimizing thermodynamic trajectories using evolutionary and gradient-based reinforcement learning", + "abstract": "Using a model heat engine, we show that neural network-based reinforcement\nlearning can identify thermodynamic trajectories of maximal efficiency. We\nconsider both gradient and gradient-free reinforcement learning. We use an\nevolutionary learning algorithm to evolve a population of neural networks,\nsubject to a directive to maximize the efficiency of a trajectory composed of a\nset of elementary thermodynamic processes; the resulting networks learn to\ncarry out the maximally-efficient Carnot, Stirling, or Otto cycles. When given\nan additional irreversible process, this evolutionary scheme learns a\npreviously unknown thermodynamic cycle. Gradient-based reinforcement learning\nis able to learn the Stirling cycle, whereas an evolutionary approach achieves\nthe optimal Carnot cycle. Our results show how the reinforcement learning\nstrategies developed for game playing can be applied to solve physical problems\nconditioned upon path-extensive order parameters.", + "authors": "Chris Beeler, Uladzimir Yahorau, Rory Coles, Kyle Mills, Stephen Whitelam, Isaac Tamblyn", + "published": "2019-03-20", + "updated": "2021-11-22", + "primary_cat": "cs.NE", + "cats": [ + "cs.NE", + "cond-mat.stat-mech", + "cs.LG", + "physics.comp-ph" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2301.03933v1", + "title": "Hint assisted reinforcement learning: an application in radio astronomy", + "abstract": "Model based reinforcement learning has proven to be more sample efficient\nthan model free methods. On the other hand, the construction of a dynamics\nmodel in model based reinforcement learning has increased complexity. 
Data\nprocessing tasks in radio astronomy are such situations where the original\nproblem which is being solved by reinforcement learning itself is the creation\nof a model. Fortunately, many methods based on heuristics or signal processing\ndo exist to perform the same tasks and we can leverage them to propose the best\naction to take, or in other words, to provide a `hint'. We propose to use\n`hints' generated by the environment as an aid to the reinforcement learning\nprocess mitigating the complexity of model construction. We modify the soft\nactor critic algorithm to use hints and use the alternating direction method of\nmultipliers algorithm with inequality constraints to train the agent. Results\nin several environments show that we get the increased sample efficiency by\nusing hints as compared to model free methods.", + "authors": "Sarod Yatawatta", + "published": "2023-01-10", + "updated": "2023-01-10", + "primary_cat": "astro-ph.IM", + "cats": [ + "astro-ph.IM", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2106.03688v1", + "title": "A Computational Model of Representation Learning in the Brain Cortex, Integrating Unsupervised and Reinforcement Learning", + "abstract": "A common view on the brain learning processes proposes that the three classic\nlearning paradigms -- unsupervised, reinforcement, and supervised -- take place\nin respectively the cortex, the basal-ganglia, and the cerebellum. However,\ndopamine outbursts, usually assumed to encode reward, are not limited to the\nbasal ganglia but also reach prefrontal, motor, and higher sensory cortices. We\npropose that in the cortex the same reward-based trial-and-error processes\nmight support not only the acquisition of motor representations but also of\nsensory representations. In particular, reward signals might guide\ntrial-and-error processes that mix with associative learning processes to\nsupport the acquisition of representations better serving downstream action\nselection. We tested the soundness of this hypothesis with a computational\nmodel that integrates unsupervised learning (Contrastive Divergence) and\nreinforcement learning (REINFORCE). The model was tested with a task requiring\ndifferent responses to different visual images grouped in categories involving\neither colour, shape, or size. Results show that a balanced mix of unsupervised\nand reinforcement learning processes leads to the best performance. Indeed,\nexcessive unsupervised learning tends to under-represent task-relevant features\nwhile excessive reinforcement learning tends to initially learn slowly and then\nto incur in local minima. These results stimulate future empirical studies on\ncategory learning directed to investigate similar effects in the extrastriate\nvisual cortices. 
Moreover, they prompt further computational investigations\ndirected to study the possible advantages of integrating unsupervised and\nreinforcement learning processes.", + "authors": "Giovanni Granato, Emilio Cartoni, Federico Da Rold, Andrea Mattera, Gianluca Baldassarre", + "published": "2021-06-07", + "updated": "2021-06-07", + "primary_cat": "q-bio.NC", + "cats": [ + "q-bio.NC", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2007.12666v5", + "title": "Safe Model-Based Reinforcement Learning for Systems with Parametric Uncertainties", + "abstract": "Reinforcement learning has been established over the past decade as an\neffective tool to find optimal control policies for dynamical systems, with\nrecent focus on approaches that guarantee safety during the learning and/or\nexecution phases. In general, safety guarantees are critical in reinforcement\nlearning when the system is safety-critical and/or task restarts are not\npractically feasible. In optimal control theory, safety requirements are often\nexpressed in terms of state and/or control constraints. In recent years,\nreinforcement learning approaches that rely on persistent excitation have been\ncombined with a barrier transformation to learn the optimal control policies\nunder state constraints. To soften the excitation requirements, model-based\nreinforcement learning methods that rely on exact model knowledge have also\nbeen integrated with the barrier transformation framework. The objective of\nthis paper is to develop safe reinforcement learning method for deterministic\nnonlinear systems, with parametric uncertainties in the model, to learn\napproximate constrained optimal policies without relying on stringent\nexcitation conditions. To that end, a model-based reinforcement learning\ntechnique that utilizes a novel filtered concurrent learning method, along with\na barrier transformation, is developed in this paper to realize simultaneous\nlearning of unknown model parameters and approximate optimal state-constrained\ncontrol policies for safety-critical systems.", + "authors": "S M Nahid Mahmud, Scott A Nivison, Zachary I. Bell, Rushikesh Kamalapurkar", + "published": "2020-07-24", + "updated": "2021-10-05", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.SY", + "math.OC" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2206.02025v1", + "title": "Between Rate-Distortion Theory & Value Equivalence in Model-Based Reinforcement Learning", + "abstract": "The quintessential model-based reinforcement-learning agent iteratively\nrefines its estimates or prior beliefs about the true underlying model of the\nenvironment. Recent empirical successes in model-based reinforcement learning\nwith function approximation, however, eschew the true model in favor of a\nsurrogate that, while ignoring various facets of the environment, still\nfacilitates effective planning over behaviors. Recently formalized as the value\nequivalence principle, this algorithmic technique is perhaps unavoidable as\nreal-world reinforcement learning demands consideration of a simple,\ncomputationally-bounded agent interacting with an overwhelmingly complex\nenvironment. In this work, we entertain an extreme scenario wherein some\ncombination of immense environment complexity and limited agent capacity\nentirely precludes identifying an exactly value-equivalent model. 
In light of\nthis, we embrace a notion of approximate value equivalence and introduce an\nalgorithm for incrementally synthesizing simple and useful approximations of\nthe environment from which an agent might still recover near-optimal behavior.\nCrucially, we recognize the information-theoretic nature of this lossy\nenvironment compression problem and use the appropriate tools of\nrate-distortion theory to make mathematically precise how value equivalence can\nlend tractability to otherwise intractable sequential decision-making problems.", + "authors": "Dilip Arumugam, Benjamin Van Roy", + "published": "2022-06-04", + "updated": "2022-06-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.IT", + "math.IT" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1906.08312v1", + "title": "Calibrated Model-Based Deep Reinforcement Learning", + "abstract": "Estimates of predictive uncertainty are important for accurate model-based\nplanning and reinforcement learning. However, predictive\nuncertainties---especially ones derived from modern deep learning systems---can\nbe inaccurate and impose a bottleneck on performance. This paper explores which\nuncertainties are needed for model-based reinforcement learning and argues that\ngood uncertainties must be calibrated, i.e. their probabilities should match\nempirical frequencies of predicted events. We describe a simple way to augment\nany model-based reinforcement learning agent with a calibrated model and show\nthat doing so consistently improves planning, sample complexity, and\nexploration. On the \\textsc{HalfCheetah} MuJoCo task, our system achieves\nstate-of-the-art performance using 50\\% fewer samples than the current leading\napproach. Our findings suggest that calibration can improve the performance of\nmodel-based reinforcement learning with minimal computational and\nimplementation overhead.", + "authors": "Ali Malik, Volodymyr Kuleshov, Jiaming Song, Danny Nemer, Harlan Seymour, Stefano Ermon", + "published": "2019-06-19", + "updated": "2019-06-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2303.09013v1", + "title": "Self-Inspection Method of Unmanned Aerial Vehicles in Power Plants Using Deep Q-Network Reinforcement Learning", + "abstract": "For the purpose of inspecting power plants, autonomous robots can be built\nusing reinforcement learning techniques. The method replicates the environment\nand employs a simple reinforcement learning (RL) algorithm. This strategy might\nbe applied in several sectors, including the electricity generation sector. A\npre-trained model with perception, planning, and action is suggested by the\nresearch. To address optimization problems, such as the Unmanned Aerial Vehicle\n(UAV) navigation problem, Deep Q-network (DQN), a reinforcement learning-based\nframework that Deepmind launched in 2015, incorporates both deep learning and\nQ-learning. To overcome problems with current procedures, the research proposes\na power plant inspection system incorporating UAV autonomous navigation and DQN\nreinforcement learning. These training processes set reward functions with\nreference to states and consider both internal and external effect factors,\nwhich distinguishes them from other reinforcement learning training techniques\nnow in use. 
The key components of the reinforcement learning segment of the\ntechnique, for instance, introduce states such as the simulation of a wind\nfield, the battery charge level of an unmanned aerial vehicle, the height the\nUAV reached, etc. The trained model makes it more likely that the inspection\nstrategy will be applied in practice by enabling the UAV to move around on its\nown in difficult environments. The average score of the model converges to\n9,000. The trained model allowed the UAV to make the fewest number of rotations\nnecessary to go to the target point.", + "authors": "Haoran Guan", + "published": "2023-03-16", + "updated": "2023-03-16", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2301.10119v2", + "title": "Minimal Value-Equivalent Partial Models for Scalable and Robust Planning in Lifelong Reinforcement Learning", + "abstract": "Learning models of the environment from pure interaction is often considered\nan essential component of building lifelong reinforcement learning agents.\nHowever, the common practice in model-based reinforcement learning is to learn\nmodels that model every aspect of the agent's environment, regardless of\nwhether they are important in coming up with optimal decisions or not. In this\npaper, we argue that such models are not particularly well-suited for\nperforming scalable and robust planning in lifelong reinforcement learning\nscenarios and we propose new kinds of models that only model the relevant\naspects of the environment, which we call \"minimal value-equivalent partial\nmodels\". After providing a formal definition for these models, we provide\ntheoretical results demonstrating the scalability advantages of performing\nplanning with such models and then perform experiments to empirically\nillustrate our theoretical results. Then, we provide some useful heuristics on\nhow to learn these kinds of models with deep learning architectures and\nempirically demonstrate that models learned in such a way can allow for\nperforming planning that is robust to distribution shifts and compounding model\nerrors. Overall, both our theoretical and empirical results suggest that\nminimal value-equivalent partial models can provide significant benefits to\nperforming scalable and robust planning in lifelong reinforcement learning\nscenarios.", + "authors": "Safa Alver, Doina Precup", + "published": "2023-01-24", + "updated": "2023-06-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2308.14897v1", + "title": "Statistically Efficient Variance Reduction with Double Policy Estimation for Off-Policy Evaluation in Sequence-Modeled Reinforcement Learning", + "abstract": "Offline reinforcement learning aims to utilize datasets of previously\ngathered environment-action interaction records to learn a policy without\naccess to the real environment. 
Recent work has shown that offline\nreinforcement learning can be formulated as a sequence modeling problem and\nsolved via supervised learning with approaches such as decision transformer.\nWhile these sequence-based methods achieve competitive results over\nreturn-to-go methods, especially on tasks that require longer episodes or with\nscarce rewards, importance sampling is not considered to correct the policy\nbias when dealing with off-policy data, mainly due to the absence of behavior\npolicy and the use of deterministic evaluation policies. To this end, we\npropose DPE: an RL algorithm that blends offline sequence modeling and offline\nreinforcement learning with Double Policy Estimation (DPE) in a unified\nframework with statistically proven properties on variance reduction. We\nvalidate our method in multiple tasks of OpenAI Gym with D4RL benchmarks. Our\nmethod brings a performance improvements on selected methods which outperforms\nSOTA baselines in several tasks, demonstrating the advantages of enabling\ndouble policy estimation for sequence-modeled reinforcement learning.", + "authors": "Hanhan Zhou, Tian Lan, Vaneet Aggarwal", + "published": "2023-08-28", + "updated": "2023-08-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.DC" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2008.00766v1", + "title": "Tracking the Race Between Deep Reinforcement Learning and Imitation Learning -- Extended Version", + "abstract": "Learning-based approaches for solving large sequential decision making\nproblems have become popular in recent years. The resulting agents perform\ndifferently and their characteristics depend on those of the underlying\nlearning approach. Here, we consider a benchmark planning problem from the\nreinforcement learning domain, the Racetrack, to investigate the properties of\nagents derived from different deep (reinforcement) learning approaches. We\ncompare the performance of deep supervised learning, in particular imitation\nlearning, to reinforcement learning for the Racetrack model. We find that\nimitation learning yields agents that follow more risky paths. In contrast, the\ndecisions of deep reinforcement learning are more foresighted, i.e., avoid\nstates in which fatal decisions are more likely. Our evaluations show that for\nthis sequential decision making problem, deep reinforcement learning performs\nbest in many aspects even though for imitation learning optimal decisions are\nconsidered.", + "authors": "Timo P. Gros, Daniel H\u00f6ller, J\u00f6rg Hoffmann, Verena Wolf", + "published": "2020-08-03", + "updated": "2020-08-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.05546v2", + "title": "Multi-Agent Quantum Reinforcement Learning using Evolutionary Optimization", + "abstract": "Multi-Agent Reinforcement Learning is becoming increasingly more important in\ntimes of autonomous driving and other smart industrial applications.\nSimultaneously a promising new approach to Reinforcement Learning arises using\nthe inherent properties of quantum mechanics, reducing the trainable parameters\nof a model significantly. However, gradient-based Multi-Agent Quantum\nReinforcement Learning methods often have to struggle with barren plateaus,\nholding them back from matching the performance of classical approaches. 
We\nbuild upon an existing approach for gradient free Quantum Reinforcement\nLearning and propose three genetic variations with Variational Quantum Circuits\nfor Multi-Agent Reinforcement Learning using evolutionary optimization. We\nevaluate our genetic variations in the Coin Game environment and also compare\nthem to classical approaches. We showed that our Variational Quantum Circuit\napproaches perform significantly better compared to a neural network with a\nsimilar amount of trainable parameters. Compared to the larger neural network,\nour approaches archive similar results using $97.88\\%$ less parameters.", + "authors": "Michael K\u00f6lle, Felix Topp, Thomy Phan, Philipp Altmann, Jonas N\u00fc\u00dflein, Claudia Linnhoff-Popien", + "published": "2023-11-09", + "updated": "2024-01-13", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph", + "cs.AI", + "cs.MA" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2306.07525v1", + "title": "Using Collision Momentum in Deep Reinforcement Learning Based Adversarial Pedestrian Modeling", + "abstract": "Recent research in pedestrian simulation often aims to develop realistic\nbehaviors in various situations, but it is challenging for existing algorithms\nto generate behaviors that identify weaknesses in automated vehicles'\nperformance in extreme and unlikely scenarios and edge cases. To address this,\nspecialized pedestrian behavior algorithms are needed. Current research focuses\non realistic trajectories using social force models and reinforcement learning\nbased models. However, we propose a reinforcement learning algorithm that\nspecifically targets collisions and better uncovers unique failure modes of\nautomated vehicle controllers. Our algorithm is efficient and generates more\nsevere collisions, allowing for the identification and correction of weaknesses\nin autonomous driving algorithms in complex and varied scenarios.", + "authors": "Dianwei Chen, Ekim Yurtsever, Keith Redmill, Umit Ozguner", + "published": "2023-06-13", + "updated": "2023-06-13", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO", + "cs.AI", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1901.08162v1", + "title": "Causal Reasoning from Meta-reinforcement Learning", + "abstract": "Discovering and exploiting the causal structure in the environment is a\ncrucial challenge for intelligent agents. Here we explore whether causal\nreasoning can emerge via meta-reinforcement learning. We train a recurrent\nnetwork with model-free reinforcement learning to solve a range of problems\nthat each contain causal structure. We find that the trained agent can perform\ncausal reasoning in novel situations in order to obtain rewards. The agent can\nselect informative interventions, draw causal inferences from observational\ndata, and make counterfactual predictions. Although established formal causal\nreasoning algorithms also exist, in this paper we show that such reasoning can\narise from model-free reinforcement learning, and suggest that causal reasoning\nin complex settings may benefit from the more end-to-end learning-based\napproaches presented here. 
This work also offers new strategies for structured\nexploration in reinforcement learning, by providing agents with the ability to\nperform -- and interpret -- experiments.", + "authors": "Ishita Dasgupta, Jane Wang, Silvia Chiappa, Jovana Mitrovic, Pedro Ortega, David Raposo, Edward Hughes, Peter Battaglia, Matthew Botvinick, Zeb Kurth-Nelson", + "published": "2019-01-23", + "updated": "2019-01-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1901.02219v1", + "title": "Uncertainty-Based Out-of-Distribution Detection in Deep Reinforcement Learning", + "abstract": "We consider the problem of detecting out-of-distribution (OOD) samples in\ndeep reinforcement learning. In a value based reinforcement learning setting,\nwe propose to use uncertainty estimation techniques directly on the agent's\nvalue estimating neural network to detect OOD samples. The focus of our work\nlies in analyzing the suitability of approximate Bayesian inference methods and\nrelated ensembling techniques that generate uncertainty estimates. Although\nprior work has shown that dropout-based variational inference techniques and\nbootstrap-based approaches can be used to model epistemic uncertainty, the\nsuitability for detecting OOD samples in deep reinforcement learning remains an\nopen question. Our results show that uncertainty estimation can be used to\ndifferentiate in- from out-of-distribution samples. Over the complete training\nprocess of the reinforcement learning agents, bootstrap-based approaches tend\nto produce more reliable epistemic uncertainty estimates, when compared to\ndropout-based approaches.", + "authors": "Andreas Sedlmeier, Thomas Gabor, Thomy Phan, Lenz Belzner, Claudia Linnhoff-Popien", + "published": "2019-01-08", + "updated": "2019-01-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2211.10688v2", + "title": "ReInform: Selecting paths with reinforcement learning for contextualized link prediction", + "abstract": "We propose to use reinforcement learning to inform transformer-based\ncontextualized link prediction models by providing paths that are most useful\nfor predicting the correct answer. This is in contrast to previous approaches,\nthat either used reinforcement learning (RL) to directly search for the answer,\nor based their prediction on limited or randomly selected context. Our\nexperiments on WN18RR and FB15k-237 show that contextualized link prediction\nmodels consistently outperform RL-based answer search, and that additional\nimprovements (of up to 13.5% MRR) can be gained by combining RL with a link\nprediction model. 
The PyTorch implementation of the RL agent is available at\nhttps://github.com/marina-sp/reinform", + "authors": "Marina Speranskaya, Sameh Methias, Benjamin Roth", + "published": "2022-11-19", + "updated": "2023-01-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2311.07315v1", + "title": "An introduction to reinforcement learning for neuroscience", + "abstract": "Reinforcement learning has a rich history in neuroscience, from early work on\ndopamine as a reward prediction error signal for temporal difference learning\n(Schultz et al., 1997) to recent work suggesting that dopamine could implement\na form of 'distributional reinforcement learning' popularized in deep learning\n(Dabney et al., 2020). Throughout this literature, there has been a tight link\nbetween theoretical advances in reinforcement learning and neuroscientific\nexperiments and findings. As a result, the theories describing our experimental\ndata have become increasingly complex and difficult to navigate. In this\nreview, we cover the basic theory underlying classical work in reinforcement\nlearning and build up to an introductory overview of methods used in modern\ndeep reinforcement learning that have found applications in systems\nneuroscience. We start with an overview of the reinforcement learning problem\nand classical temporal difference algorithms, followed by a discussion of\n'model-free' and 'model-based' reinforcement learning together with methods\nsuch as DYNA and successor representations that fall in between these two\ncategories. Throughout these sections, we highlight the close parallels between\nthe machine learning methods and related work in both experimental and\ntheoretical neuroscience. We then provide an introduction to deep reinforcement\nlearning with examples of how these methods have been used to model different\nlearning phenomena in the systems neuroscience literature, such as\nmeta-reinforcement learning (Wang et al., 2018) and distributional\nreinforcement learning (Dabney et al., 2020). Code that implements the methods\ndiscussed in this work and generates the figures is also provided.", + "authors": "Kristopher T. Jensen", + "published": "2023-11-13", + "updated": "2023-11-13", + "primary_cat": "q-bio.NC", + "cats": [ + "q-bio.NC", + "cs.LG" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1901.11437v3", + "title": "Successor Features Combine Elements of Model-Free and Model-based Reinforcement Learning", + "abstract": "A key question in reinforcement learning is how an intelligent agent can\ngeneralize knowledge across different inputs. By generalizing across different\ninputs, information learned for one input can be immediately reused for\nimproving predictions for another input. Reusing information allows an agent to\ncompute an optimal decision-making strategy using less data. State\nrepresentation is a key element of the generalization process, compressing a\nhigh-dimensional input space into a low-dimensional latent state space. 
This\narticle analyzes properties of different latent state spaces, leading to new\nconnections between model-based and model-free reinforcement learning.\nSuccessor features, which predict frequencies of future observations, form a\nlink between model-based and model-free learning: Learning to predict future\nexpected reward outcomes, a key characteristic of model-based agents, is\nequivalent to learning successor features. Learning successor features is a\nform of temporal difference learning and is equivalent to learning to predict a\nsingle policy's utility, which is a characteristic of model-free agents.\nDrawing on the connection between model-based reinforcement learning and\nsuccessor features, we demonstrate that representations that are predictive of\nfuture reward outcomes generalize across variations in both transitions and\nrewards. This result extends previous work on successor features, which is\nconstrained to fixed transitions and assumes re-learning of the transferred\nstate representation.", + "authors": "Lucas Lehnert, Michael L. Littman", + "published": "2019-01-31", + "updated": "2020-10-04", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2109.01659v1", + "title": "Reinforcement Learning for Battery Energy Storage Dispatch augmented with Model-based Optimizer", + "abstract": "Reinforcement learning has been found useful in solving optimal power flow\n(OPF) problems in electric power distribution systems. However, the use of\nlargely model-free reinforcement learning algorithms that completely ignore the\nphysics-based modeling of the power grid compromises the optimizer performance\nand poses scalability challenges. This paper proposes a novel approach to\nsynergistically combine the physics-based models with learning-based algorithms\nusing imitation learning to solve distribution-level OPF problems.\nSpecifically, we propose imitation learning based improvements in deep\nreinforcement learning (DRL) methods to solve the OPF problem for a specific\ncase of battery storage dispatch in the power distribution systems. The\nproposed imitation learning algorithm uses the approximate optimal solutions\nobtained from a linearized model-based OPF solver to provide a good initial\npolicy for the DRL algorithms while improving the training efficiency. The\neffectiveness of the proposed approach is demonstrated using IEEE 34-bus and\n123-bus distribution feeders with numerous distribution-level battery storage\nsystems.", + "authors": "Gayathri Krishnamoorthy, Anamika Dubey", + "published": "2021-09-02", + "updated": "2021-09-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SY", + "eess.SY" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2110.14524v1", + "title": "Model based Multi-agent Reinforcement Learning with Tensor Decompositions", + "abstract": "A challenge in multi-agent reinforcement learning is to be able to generalize\nover intractable state-action spaces. Inspired from Tesseract [Mahajan et al.,\n2021], this position paper investigates generalisation in state-action space\nover unexplored state-action pairs by modelling the transition and reward\nfunctions as tensors of low CP-rank. 
Initial experiments on synthetic MDPs show\nthat using tensor decompositions in a model-based reinforcement learning\nalgorithm can lead to much faster convergence if the true transition and reward\nfunctions are indeed of low rank.", + "authors": "Pascal Van Der Vaart, Anuj Mahajan, Shimon Whiteson", + "published": "2021-10-27", + "updated": "2021-10-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.MA" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2111.02104v2", + "title": "Model-Based Episodic Memory Induces Dynamic Hybrid Controls", + "abstract": "Episodic control enables sample efficiency in reinforcement learning by\nrecalling past experiences from an episodic memory. We propose a new\nmodel-based episodic memory of trajectories addressing current limitations of\nepisodic control. Our memory estimates trajectory values, guiding the agent\ntowards good policies. Built upon the memory, we construct a complementary\nlearning model via a dynamic hybrid control unifying model-based, episodic and\nhabitual learning into a single architecture. Experiments demonstrate that our\nmodel allows significantly faster and better learning than other strong\nreinforcement learning agents across a variety of environments including\nstochastic and non-Markovian settings.", + "authors": "Hung Le, Thommen Karimpanal George, Majid Abdolshah, Truyen Tran, Svetha Venkatesh", + "published": "2021-11-03", + "updated": "2021-11-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1705.07460v1", + "title": "Experience enrichment based task independent reward model", + "abstract": "For most reinforcement learning approaches, the learning is performed by\nmaximizing an accumulative reward that is expectedly and manually defined for\nspecific tasks. However, in real world, rewards are emergent phenomena from the\ncomplex interactions between agents and environments. In this paper, we propose\nan implicit generic reward model for reinforcement learning. Unlike those\nrewards that are manually defined for specific tasks, such implicit reward is\ntask independent. It only comes from the deviation from the agents' previous\nexperiences.", + "authors": "Min Xu", + "published": "2017-05-21", + "updated": "2017-05-21", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2003.13839v1", + "title": "Model-Reference Reinforcement Learning Control of Autonomous Surface Vehicles with Uncertainties", + "abstract": "This paper presents a novel model-reference reinforcement learning control\nmethod for uncertain autonomous surface vehicles. The proposed control combines\na conventional control method with deep reinforcement learning. With the\nconventional control, we can ensure the learning-based control law provides\nclosed-loop stability for the overall system, and potentially increase the\nsample efficiency of the deep reinforcement learning. With the reinforcement\nlearning, we can directly learn a control law to compensate for modeling\nuncertainties. In the proposed control, a nominal system is employed for the\ndesign of a baseline control law using a conventional control approach. The\nnominal system also defines the desired performance for uncertain autonomous\nvehicles to follow. 
In comparison with traditional deep reinforcement learning\nmethods, our proposed learning-based control can provide stability guarantees\nand better sample efficiency. We demonstrate the performance of the new\nalgorithm via extensive simulation results.", + "authors": "Qingrui Zhang, Wei Pan, Vasso Reppa", + "published": "2020-03-30", + "updated": "2020-03-30", + "primary_cat": "eess.SY", + "cats": [ + "eess.SY", + "cs.AI", + "cs.LG", + "cs.RO", + "cs.SY", + "math.OC" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/2308.11336v1", + "title": "On the Opportunities and Challenges of Offline Reinforcement Learning for Recommender Systems", + "abstract": "Reinforcement learning serves as a potent tool for modeling dynamic user\ninterests within recommender systems, garnering increasing research attention\nof late. However, a significant drawback persists: its poor data efficiency,\nstemming from its interactive nature. The training of reinforcement\nlearning-based recommender systems demands expensive online interactions to\namass adequate trajectories, essential for agents to learn user preferences.\nThis inefficiency renders reinforcement learning-based recommender systems a\nformidable undertaking, necessitating the exploration of potential solutions.\nRecent strides in offline reinforcement learning present a new perspective.\nOffline reinforcement learning empowers agents to glean insights from offline\ndatasets and deploy learned policies in online settings. Given that recommender\nsystems possess extensive offline datasets, the framework of offline\nreinforcement learning aligns seamlessly. Despite being a burgeoning field,\nworks centered on recommender systems utilizing offline reinforcement learning\nremain limited. This survey aims to introduce and delve into offline\nreinforcement learning within recommender systems, offering an inclusive review\nof existing literature in this domain. Furthermore, we strive to underscore\nprevalent challenges, opportunities, and future pathways, poised to propel\nresearch in this evolving field.", + "authors": "Xiaocong Chen, Siyu Wang, Julian McAuley, Dietmar Jannach, Lina Yao", + "published": "2023-08-22", + "updated": "2023-08-22", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + }, + { + "url": "http://arxiv.org/abs/1012.1552v1", + "title": "Bridging the Gap between Reinforcement Learning and Knowledge Representation: A Logical Off- and On-Policy Framework", + "abstract": "Knowledge Representation is important issue in reinforcement learning. In\nthis paper, we bridge the gap between reinforcement learning and knowledge\nrepresentation, by providing a rich knowledge representation framework, based\non normal logic programs with answer set semantics, that is capable of solving\nmodel-free reinforcement learning problems for more complex do-mains and\nexploits the domain-specific knowledge. We prove the correctness of our\napproach. We show that the complexity of finding an offline and online policy\nfor a model-free reinforcement learning problem in our approach is NP-complete.\nMoreover, we show that any model-free reinforcement learning problem in MDP\nenvironment can be encoded as a SAT problem. 
The importance of that is\nmodel-free reinforcement", + "authors": "Emad Saad", + "published": "2010-12-07", + "updated": "2010-12-07", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG", + "cs.LO" + ], + "category": "Model AND Based AND Reinforcement AND Learning" + } +] \ No newline at end of file