| [ | |
| { | |
| "url": "http://arxiv.org/abs/2404.16300v1", | |
| "title": "Reinforcement Learning with Generative Models for Compact Support Sets", | |
| "abstract": "Foundation models contain a wealth of information from their vast number of\ntraining samples. However, most prior arts fail to extract this information in\na precise and efficient way for small sample sizes. In this work, we propose a\nframework utilizing reinforcement learning as a control for foundation models,\nallowing for the granular generation of small, focused synthetic support sets\nto augment the performance of neural network models on real data classification\ntasks. We first allow a reinforcement learning agent access to a novel context\nbased dictionary; the agent then uses this dictionary with a novel prompt\nstructure to form and optimize prompts as inputs to generative models,\nreceiving feedback based on a reward function combining the change in\nvalidation accuracy and entropy. A support set is formed this way over several\nexploration steps. Our framework produced excellent results, increasing\nclassification accuracy by significant margins for no additional labelling or\ndata cost.", | |
| "authors": "Nico Schiavone, Xingyu Li", | |
| "published": "2024-04-25", | |
| "updated": "2024-04-25", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.CV" | |
| ], | |
| "label": "Original Paper", | |
| "paper_cat": "Model AND Based AND Reinforcement AND Learning", | |
| "gt": "2.1. Reinforcement Learning Reinforcement learning [14] defines an agent and an environment with rules on how they can interact. The agent receives rewards based on how their actions affect the environment, with one of several reward schemes. The rewards inform the optimal behaviour of the agent, and thus the desirable properties of the end model. Popular reward schemes include exploration-based, which incentivizes exploring the action space, and goal-based, which explores to achieve set goals. Past works have attempted to use reinforcement learning directly in classification algorithms, but this generally yields lacklustre results for the amount of effort and training time required [4]. This is due to the long convergence time of conventional reinforcement learning algorithms, and the relative ease of using simple deep learning models when a well-labelled dataset is available, rather than optimizing the loss with an agent. In our framework, we circumvent this issue by using a deep learning model for classification and optimizing it by altering the training set, rather than directly making the predictions using the agent. 2.2. Generative Models Generative models have shown unprecedented success in many tasks in natural language processing and computer vision [1, 13]. Such models are often trained on datasets with in excess of one billion images, which stores a large wealth of knowledge that can be accessed through their generation capabilities [1]. These generative models have been widely used in contemporary research for image synthesis, such as augmentation of existing samples to artificially simulate a larger dataset [19, 20]. Replacing the dataset entirely with synthetic images is also a topic of interest, with excellent preliminary results despite no real data [22]. Finally, the generation of large support sets to supplement real data has Figure 1. Overall framework also been explored, but this mainly utilizes synthesis over a large scale to shore up the weaknesses of a dataset [11]. Contemporary generative models usually require text prompts to guide their behaviour. General prompting is successful in simple tasks, such as guided image synthesis, but complex and specific prompts often lead to unexpected results. This leads to an area of research known as prompt engineering, which is the focus of much of the recent literature in the topic of large models [2]. Common approaches generally utilize a fixed set of prompts that have been carefully engineered to produce certain results; in our framework, we allow the prompts to evolve naturally from a general structure to their optimal state using reinforcement learning to choose the subjects and the model performance as feedback.", | |
| "pre_questions": [], | |
| "main_content": "Introduction Deep learning [10] is one of the most popular and successful methods for any task where a large dataset can be procured, including fundamental computer vision tasks like classification. However, large, well-balanced, well-labelled datasets are often difficult and prohibitively expensive to acquire. Consequently, much of contemporary image classification utilizes a high quality source dataset and support sets with highly relevant data to the target task. The generation of such support sets has been a focus of contemporary research, and recently utilizes the output of the unprecedented success of large pretrained generative models like Stable Diffusion [13]. The advancements in generative models have led to the rise of synthetic datasets, where images are generated in large scale according to the target task and used in place of a real training dataset, yielding excellent results [6, 11, 22]. Despite these advancements, the body of research relating to synthetic datasets remains primarily focused on largebatch image synthesis. In this way, any issues caused by the unpredictable behaviour of modern generative models can easily be smoothed out. However, this results in the majority of successful applications requiring tens of thousands of images generated for a single task [6, 11], which is inefficient in time and cost. The goal of creating specific, highly focused support sets composed of several hundred images rather than several thousand is currently an open problem at the forefront of generative computer vision research. Consequently, it raises the question of if synthetic data can supplement real data, making up a very small portion of the overall dataset to shore up specific weaknesses, or whether synthetic data must make up a significant amount of the dataset if it is to be used at all. Reinforcement learning [14] is a popular control scheme that has an agent learn the optimal behaviour given an environment and a reward for desirable interactions. Recent studies have found reinforcement learning effective at writing and re-writing prompts [3, 7], but the use of reinforcement learning to guide the evolution of prompts has yet to be explored. Reinforcement learning is an excellent framework for imposing specific learned behaviours upon the resulting agent, and we posit that combining reinforcement learning with pretrained generative models will impart that much-needed specificity on the synthesized images, resulting in significant performance gains for a relatively small number of synthetic images. In this work, we introduce a framework utilizing reinforcement learning as a control for large generative models to synthesize precise support sets, intended to bolster the lacking aspects of real datasets without overwriting them for increased model performance at no extra data or labelling costs. To accomplish this, we utilize a dictionary based on the features of the original training dataset, and allow a reinforcement learning agent to learn the optimal structures and word choice to generate high quality, specific prompts for Stable Diffusion. The controlled output of Stable Diffusion is then used to supplement the existing training data for a neural network model, and the performance of this model on a validation set is given as feedback to the agent. 
In this way, the framework allows Stable Diffusion to act as an extension of the reinforcement learning agent, acting directly to improve the performance of the model by tweaking the prompts that make up the support set. We evaluate this framework on several datasets, including CIFAR-10 [8] and Tiny ImageNet [9], showing improvements of ~1% on neural networks, at no additional cost, for fewer than 500 total images in the support set. The main contributions of this work are:
\u2022 A novel framework combining reinforcement learning and large pretrained generative models for the construction of small, focused, and effective synthetic support sets.
\u2022 A new reward scheme that facilitates a better interaction between reinforcement learning and classification.
3.1. Problem Formulation. Initially, there is a well-labelled dataset D, consisting of N training samples, and a synthetic support set S, consisting of k*m samples, where k is the current step number and m is the number of samples generated per step. In this work, we impose an extra limit Nsyn on the number of samples in S. There is also a validation set V and a test set T. Our goal in this study is to train a reinforcement learning agent A to optimally control a pretrained generative model, such as Stable Diffusion, so as to populate S with at most Nsyn synthetic images, where Nsyn << N. As shown in Fig. 1, in each step the agent forms a prompt and feeds it to Stable Diffusion, and the resulting images are added to S. The resulting dataset D + S is used to train a model M, and its performance on V is passed back to A as feedback. This continues until a total of Nsyn images are contained within S, at which point the exploration thread terminates. When all exploration threads within the preset exploration budget have been explored, the resulting framework is tested on the test set T, yielding the final performance.
(Figure 2. Images generated using our framework with CIFAR-10 [8] labels.)
3.2. Image Synthesis. For image synthesis, we use Stable Diffusion [13], a successful text-to-image model trained on billions of text-image pairs. Stable Diffusion has already been used to great effect in contemporary works when the aim is to replace a real dataset [18, 22] or to augment existing samples [19, 20], but comparatively few works focus on consistently generating small, effective support sets.
3.3. Controlling the Synthesis with RL. Reinforcement learning (RL) defines an agent and an environment, and gives a set of actions that the agent can take to affect the environment. In our framework, we take the classification model and its training dataset as the Environment. The reinforcement learning agent adaptively selects text prompts for the generative model for image synthesis, which supplements the training set to improve classification performance. The agent then receives feedback based on the change in the model's performance, which is taken as the State in our reinforcement framework. In this study, we adopt a policy-based method for agent optimization, building a policy $\pi : s \to a$ that maps states to optimal actions [14]. The specific objective function is
$L(\theta) = \hat{\mathbb{E}}_t[\min(r_t(\theta)\hat{A}_t, \mathrm{clip}(r_t(\theta), 1-\epsilon, 1+\epsilon)\hat{A}_t)]$, (1)
where $r_t(\theta) = \pi_\theta(a_t|s_t) / \pi_{\theta_{\mathrm{old}}}(a_t|s_t)$ is the probability ratio, $\hat{A}_t$ is the estimator of the advantage function at step t, and $\epsilon$ is a small clipping value.
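As a concrete reference for the clipped surrogate objective in Eq. (1), the sketch below shows one way it could be written in PyTorch; the tensor names and the surrounding training code are assumptions for illustration, not the authors' implementation.
```python
import torch

def ppo_clip_loss(log_probs, old_log_probs, advantages, eps=0.2):
    # Eq. (1), negated so that a gradient-descent optimizer maximizes the objective.
    # log_probs / old_log_probs: log pi(a_t|s_t) under the current and old policies;
    # advantages: estimates of A_hat_t. All are 1-D tensors of the same length.
    ratio = torch.exp(log_probs - old_log_probs)              # r_t(theta)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()
```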
Action space: Our framework allows the reinforcement learning agent to interact with Stable Diffusion by forming prompts. Prompts of unlimited length are subject to unmanageable time complexity, so we utilize a set dictionary based on the dataset. We formulate the interaction with a basic sentence structure with enough expression to accurately place the image, and pose the following format: 'A {domain} of a {class}, {class}, and {class}'. Domains include photographs, digital artwork, paintings, mosaics, and other clear representations of the class. Next, three class names are chosen from the list of classes in the dataset. We notice that Stable Diffusion usually puts more attention on the first 'class' term and generates the corresponding theme in the resulting image. Thus, our prompt design allows the agent to position the generated images at the boundaries between classes, which is where new images are most effective for improving classification performance [12]. This is in contrast to traditional prompting methods, where the prompt describes the primary subject of interest with qualifiers for other subjects. We instead follow contemporary diversity research, prioritizing brevity and maximal control [15]. A benefit of our approach is that single-class representative samples can be easily generated, e.g. 'A {domain} of a car, car, and car', where the repetition has the added benefit of including more representative features from the chosen class. Multi-class samples can be generated just as easily by including two or three different class names, and the significance of each class can be altered by changing the order in which the classes appear. In this way, our method allows the agent a previously unseen degree of control over the output of Stable Diffusion, resulting in significantly improved precision.
Reward function: The agent's desired behaviour is to increase the accuracy of the classification model as much as possible with limited image synthesis. In our framework, we use a combined reward function, utilizing the validation set accuracy and the entropy to bias our model towards high, balanced accuracy. Under the assumption of a well-labelled training dataset, the former (i.e. classification accuracy on the validation set) offers the most unfiltered access to the state changes in the model's performance. It is noteworthy that, unlike previous works utilizing reinforcement learning for classification, where accuracy alone is used, the addition of entropy in our reward allows the framework to simultaneously reward the improvement of weak classes, which improves the overall model performance on underrepresented classes. The formulation of our reward function is shown in Eq. 2, where the entropy under a state s can be calculated following Eq. 3:
$r(s, s') = \Delta \mathrm{Acc}(s \to s') - \Delta \sigma_{\mathrm{entropy}}(s \to s')$, (2)
$\sigma_{\mathrm{entropy}}(x, M) = -\sum_{i=1}^{k} p_M(y_i|x) \log p_M(y_i|x)$, (3)
where s' is the state after performing action a, s is the state before performing action a, and $p_M(y_i|x)$ represents the class probability of sample x under model M.
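A small sketch of how the reward in Eqs. (2) and (3) could be computed from validation predictions is given below; the array shapes, the aggregation of the per-sample entropy as a mean over the validation set, and the function names are assumptions for illustration.
```python
import numpy as np

def entropy_term(probs):
    # probs: array of shape (num_samples, num_classes) holding p_M(y_i|x);
    # per-sample entropy as in Eq. (3), aggregated here as a mean over the validation set.
    eps = 1e-12                                        # avoid log(0)
    per_sample = -np.sum(probs * np.log(probs + eps), axis=1)
    return per_sample.mean()

def reward(acc_new, acc_old, probs_new, probs_old):
    # Eq. (2): change in validation accuracy minus change in prediction entropy.
    return (acc_new - acc_old) - (entropy_term(probs_new) - entropy_term(probs_old))
```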
Table 1. Classification accuracy (%) on CIFAR-10 [8].
Model | Pretrained | Rand Syn | Ours
ResNet-18 | 92.0 | 92.3 | 92.7
ResNet-50 | 93.9 | 94.2 | 94.5
VGG-16 | 93.9 | 94.1 | 94.9
ShuffleNetV2 | 93.3 | 93.6 | 94.1
EfficientNetV2S | 94.1 | 94.3 | 95.2
Table 2. Classification accuracy (%) on Tiny ImageNet [9].
Model | Pretrained | Rand Syn | Ours
ResNet-18 | 54.3 | 54.4 | 54.7
ResNet-50 | 71.1 | 71.1 | 71.5
VGG-16 | 63.2 | 63.4 | 63.9
ShuffleNetV2 | 48.6 | 48.6 | 48.8
EfficientNetV2S | 69.9 | 70.0 | 70.4
3.4. Full Algorithm. One training step for the agent A consists of the following processes, in order:
1. A chooses a domain and three classes for the prompt to represent the generated images.
2. m images are generated following the prompt and added to S.
3. M is trained on D + S and tested again on V, reporting the accuracy and entropy of the predictions.
4. The reward r(s, s') is given back to the agent. If k = 1, the pretrained statistics are used in place of the data from the previous state s.
This sequence is optimized using Proximal Policy Optimization [14] to find the optimal set of Nsyn synthetic samples contained in S. After the training process is completed, the algorithm has found the optimal prompts to generate the optimal support set, and runs a final time without feedback to form S, the desired support set.
4. Results & Discussion
4.1. Datasets. We evaluate our framework on two popular natural image datasets, CIFAR-10 [8] and Tiny ImageNet [9]. We chose these datasets for computational reasons: the action space complexity scales as n^3, where n is the number of classes in the dataset. Tiny ImageNet is a 200-class balanced dataset of 100,000 64x64 colour images, and CIFAR-10 is a 10-class balanced dataset of 60,000 32x32 colour images. In each case, we split the datasets using an 80:10:10 ratio of train:validation:test.
4.2. Experimental Protocol. We follow the setup laid out in Section 3. For both datasets, we use a domain dictionary of {'photograph', 'painting', 'still-life', 'image', 'digital image'} and a class dictionary composed of each class name once. In experiments, we generate m = 10 images per step, and our algorithm runs until a maximum of Nsyn = 400 images is reached. Various models, including ResNet-18, ResNet-50 [5], ShuffleNetV2 [17], VGG-16 [16], and EfficientNetV2 [21], are evaluated in our experiments. We compare the results of our framework against vanilla trained models and models trained with an equal number of random synthetic images. The 'Random Synthesis' setting adds to the training set 400 images synthesized by selecting random classes to fill the blanks in the prompt, while our method uses the full reinforcement learning framework.
4.3. Main Results and Discussion. The results of applying our framework are reported in Tables 1 and 2. In addition, example images generated from the CIFAR-10 dataset are shown in Fig. 2. From these results, we can see that our framework is superior to random synthesis for small-batch support set synthesis, increasing the accuracy by as much as 0.9% over the random synthesis method and 1.1% over the baseline model. Notably, for two backbones on Tiny ImageNet, random synthesis fails to improve the performance of the model by more than 0.1%, while our framework increases the accuracy by ~0.2%. In addition, our method adds only 0.33% extra images for CIFAR-10 and 0.2% for Tiny ImageNet. Our experimental results show that the proposed framework has a high performance gain relative to the number of samples synthesized, a characteristic not seen in prior art.
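The gains above ultimately come from the prompt-level control described in Section 3.3; to make that action space concrete under the protocol in Section 4.2, the sketch below enumerates prompts of the form 'A {domain} of a {class}, {class}, and {class}' from the domain and class dictionaries. The function names are illustrative assumptions, not the authors' code.
```python
import itertools

DOMAINS = ["photograph", "painting", "still-life", "image", "digital image"]

def build_prompt(domain, c1, c2, c3):
    # Prompt template from Section 3.3; repeating a class name emphasizes that class.
    return f"A {domain} of a {c1}, {c2}, and {c3}"

def action_space(classes, domains=DOMAINS):
    # Every possible action (prompt): |domains| * n^3 combinations for n classes,
    # which is why the action space complexity scales as n^3.
    for domain, (c1, c2, c3) in itertools.product(
            domains, itertools.product(classes, repeat=3)):
        yield build_prompt(domain, c1, c2, c3)

# Example: a single-class representative prompt, as in the paper's 'car' example.
print(build_prompt("photograph", "car", "car", "car"))
# -> A photograph of a car, car, and car
```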
We attribute this gain to the fine control that our designed reinforcement learning agent gives over the output of the large pretrained model, and to the effectiveness of the feedback given back to the agent. Our framework currently requires some information about the target dataset in order to work: the class names and a rough domain. This could be bypassed by forming the dictionary using an image-to-text encoder on representative samples after clustering with an unsupervised learning algorithm, but we leave the pursuit of this direction for future work.
5. Conclusions. In this work, we proposed a framework allowing for the granular generation of small, focused synthetic support sets to augment the performance of general backbone networks on real data classification tasks. Our framework exploits the wealth of information present in large pretrained models by controlling their output with reinforcement learning agents, so that optimal, explainable prompts can be generated over many training steps. Our framework produced excellent results on a variety of backbones, increasing classification accuracy by significant margins at no additional labelling or data cost." | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2007.01298v3", | |
| "title": "Image Classification by Reinforcement Learning with Two-State Q-Learning", | |
| "abstract": "In this paper, a simple and efficient Hybrid Classifier is presented which is\nbased on deep learning and reinforcement learning. Here, Q-Learning has been\nused with two states and 'two or three' actions. Other techniques found in the\nliterature use feature map extracted from Convolutional Neural Networks and use\nthese in the Q-states along with past history. This leads to technical\ndifficulties in these approaches because the number of states is high due to\nlarge dimensions of the feature map. Because the proposed technique uses only\ntwo Q-states it is straightforward and consequently has much lesser number of\noptimization parameters, and thus also has a simple reward function. Also, the\nproposed technique uses novel actions for processing images as compared to\nother techniques found in literature. The performance of the proposed technique\nis compared with other recent algorithms like ResNet50, InceptionV3, etc. on\npopular databases including ImageNet, Cats and Dogs Dataset, and Caltech-101\nDataset. The proposed approach outperforms others techniques on all the\ndatasets used.", | |
| "authors": "Abdul Mueed Hafiz", | |
| "published": "2020-06-28", | |
| "updated": "2020-10-31", | |
| "primary_cat": "cs.CV", | |
| "cats": [ | |
| "cs.CV", | |
| "cs.LG", | |
| "eess.IV" | |
| ], | |
| "label": "Related Work" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1707.06347v2", | |
| "title": "Proximal Policy Optimization Algorithms", | |
| "abstract": "We propose a new family of policy gradient methods for reinforcement\nlearning, which alternate between sampling data through interaction with the\nenvironment, and optimizing a \"surrogate\" objective function using stochastic\ngradient ascent. Whereas standard policy gradient methods perform one gradient\nupdate per data sample, we propose a novel objective function that enables\nmultiple epochs of minibatch updates. The new methods, which we call proximal\npolicy optimization (PPO), have some of the benefits of trust region policy\noptimization (TRPO), but they are much simpler to implement, more general, and\nhave better sample complexity (empirically). Our experiments test PPO on a\ncollection of benchmark tasks, including simulated robotic locomotion and Atari\ngame playing, and we show that PPO outperforms other online policy gradient\nmethods, and overall strikes a favorable balance between sample complexity,\nsimplicity, and wall-time.", | |
| "authors": "John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov", | |
| "published": "2017-07-20", | |
| "updated": "2017-08-28", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG" | |
| ], | |
| "label": "Related Work" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2307.14710v2", | |
| "title": "Pre-training Vision Transformers with Very Limited Synthesized Images", | |
| "abstract": "Formula-driven supervised learning (FDSL) is a pre-training method that\nrelies on synthetic images generated from mathematical formulae such as\nfractals. Prior work on FDSL has shown that pre-training vision transformers on\nsuch synthetic datasets can yield competitive accuracy on a wide range of\ndownstream tasks. These synthetic images are categorized according to the\nparameters in the mathematical formula that generate them. In the present work,\nwe hypothesize that the process for generating different instances for the same\ncategory in FDSL, can be viewed as a form of data augmentation. We validate\nthis hypothesis by replacing the instances with data augmentation, which means\nwe only need a single image per category. Our experiments shows that this\none-instance fractal database (OFDB) performs better than the original dataset\nwhere instances were explicitly generated. We further scale up OFDB to 21,000\ncategories and show that it matches, or even surpasses, the model pre-trained\non ImageNet-21k in ImageNet-1k fine-tuning. The number of images in OFDB is\n21k, whereas ImageNet-21k has 14M. This opens new possibilities for\npre-training vision transformers with much smaller datasets.", | |
| "authors": "Ryo Nakamura, Hirokatsu Kataoka, Sora Takashima, Edgar Josafat Martinez Noriega, Rio Yokota, Nakamasa Inoue", | |
| "published": "2023-07-27", | |
| "updated": "2023-07-31", | |
| "primary_cat": "cs.CV", | |
| "cats": [ | |
| "cs.CV" | |
| ], | |
| "label": "Related Work" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2310.14735v2", | |
| "title": "Unleashing the potential of prompt engineering in Large Language Models: a comprehensive review", | |
| "abstract": "This paper delves into the pivotal role of prompt engineering in unleashing\nthe capabilities of Large Language Models (LLMs). Prompt engineering is the\nprocess of structuring input text for LLMs and is a technique integral to\noptimizing the efficacy of LLMs. This survey elucidates foundational principles\nof prompt engineering, such as role-prompting, one-shot, and few-shot\nprompting, as well as more advanced methodologies such as the chain-of-thought\nand tree-of-thoughts prompting. The paper sheds light on how external\nassistance in the form of plugins can assist in this task, and reduce machine\nhallucination by retrieving external knowledge. We subsequently delineate\nprospective directions in prompt engineering research, emphasizing the need for\na deeper understanding of structures and the role of agents in Artificial\nIntelligence-Generated Content (AIGC) tools. We discuss how to assess the\nefficacy of prompt methods from different perspectives and using different\nmethods. Finally, we gather information about the application of prompt\nengineering in such fields as education and programming, showing its\ntransformative potential. This comprehensive survey aims to serve as a friendly\nguide for anyone venturing through the big world of LLMs and prompt\nengineering.", | |
| "authors": "Banghao Chen, Zhaofeng Zhang, Nicolas Langren\u00e9, Shengxin Zhu", | |
| "published": "2023-10-23", | |
| "updated": "2023-10-27", | |
| "primary_cat": "cs.CL", | |
| "cats": [ | |
| "cs.CL", | |
| "cs.AI", | |
| "I.2.7" | |
| ], | |
| "label": "Related Work" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2302.07944v2", | |
| "title": "Effective Data Augmentation With Diffusion Models", | |
| "abstract": "Data augmentation is one of the most prevalent tools in deep learning,\nunderpinning many recent advances, including those from classification,\ngenerative models, and representation learning. The standard approach to data\naugmentation combines simple transformations like rotations and flips to\ngenerate new images from existing ones. However, these new images lack\ndiversity along key semantic axes present in the data. Current augmentations\ncannot alter the high-level semantic attributes, such as animal species present\nin a scene, to enhance the diversity of data. We address the lack of diversity\nin data augmentation with image-to-image transformations parameterized by\npre-trained text-to-image diffusion models. Our method edits images to change\ntheir semantics using an off-the-shelf diffusion model, and generalizes to\nnovel visual concepts from a few labelled examples. We evaluate our approach on\nfew-shot image classification tasks, and on a real-world weed recognition task,\nand observe an improvement in accuracy in tested domains.", | |
| "authors": "Brandon Trabucco, Kyle Doherty, Max Gurinas, Ruslan Salakhutdinov", | |
| "published": "2023-02-07", | |
| "updated": "2023-05-25", | |
| "primary_cat": "cs.CV", | |
| "cats": [ | |
| "cs.CV", | |
| "cs.AI" | |
| ], | |
| "label": "Related Work" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2305.15316v1", | |
| "title": "Training on Thin Air: Improve Image Classification with Generated Data", | |
| "abstract": "Acquiring high-quality data for training discriminative models is a crucial\nyet challenging aspect of building effective predictive systems. In this paper,\nwe present Diffusion Inversion, a simple yet effective method that leverages\nthe pre-trained generative model, Stable Diffusion, to generate diverse,\nhigh-quality training data for image classification. Our approach captures the\noriginal data distribution and ensures data coverage by inverting images to the\nlatent space of Stable Diffusion, and generates diverse novel training images\nby conditioning the generative model on noisy versions of these vectors. We\nidentify three key components that allow our generated images to successfully\nsupplant the original dataset, leading to a 2-3x enhancement in sample\ncomplexity and a 6.5x decrease in sampling time. Moreover, our approach\nconsistently outperforms generic prompt-based steering methods and KNN\nretrieval baseline across a wide range of datasets. Additionally, we\ndemonstrate the compatibility of our approach with widely-used data\naugmentation techniques, as well as the reliability of the generated data in\nsupporting various neural architectures and enhancing few-shot learning.", | |
| "authors": "Yongchao Zhou, Hshmat Sahak, Jimmy Ba", | |
| "published": "2023-05-24", | |
| "updated": "2023-05-24", | |
| "primary_cat": "cs.CV", | |
| "cats": [ | |
| "cs.CV", | |
| "cs.LG" | |
| ], | |
| "label": "Related Work" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2302.07944v2", | |
| "title": "Effective Data Augmentation With Diffusion Models", | |
| "abstract": "Data augmentation is one of the most prevalent tools in deep learning,\nunderpinning many recent advances, including those from classification,\ngenerative models, and representation learning. The standard approach to data\naugmentation combines simple transformations like rotations and flips to\ngenerate new images from existing ones. However, these new images lack\ndiversity along key semantic axes present in the data. Current augmentations\ncannot alter the high-level semantic attributes, such as animal species present\nin a scene, to enhance the diversity of data. We address the lack of diversity\nin data augmentation with image-to-image transformations parameterized by\npre-trained text-to-image diffusion models. Our method edits images to change\ntheir semantics using an off-the-shelf diffusion model, and generalizes to\nnovel visual concepts from a few labelled examples. We evaluate our approach on\nfew-shot image classification tasks, and on a real-world weed recognition task,\nand observe an improvement in accuracy in tested domains.", | |
| "authors": "Brandon Trabucco, Kyle Doherty, Max Gurinas, Ruslan Salakhutdinov", | |
| "published": "2023-02-07", | |
| "updated": "2023-05-25", | |
| "primary_cat": "cs.CV", | |
| "cats": [ | |
| "cs.CV", | |
| "cs.AI" | |
| ], | |
| "label": "Related Work" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2112.10752v2", | |
| "title": "High-Resolution Image Synthesis with Latent Diffusion Models", | |
| "abstract": "By decomposing the image formation process into a sequential application of\ndenoising autoencoders, diffusion models (DMs) achieve state-of-the-art\nsynthesis results on image data and beyond. Additionally, their formulation\nallows for a guiding mechanism to control the image generation process without\nretraining. However, since these models typically operate directly in pixel\nspace, optimization of powerful DMs often consumes hundreds of GPU days and\ninference is expensive due to sequential evaluations. To enable DM training on\nlimited computational resources while retaining their quality and flexibility,\nwe apply them in the latent space of powerful pretrained autoencoders. In\ncontrast to previous work, training diffusion models on such a representation\nallows for the first time to reach a near-optimal point between complexity\nreduction and detail preservation, greatly boosting visual fidelity. By\nintroducing cross-attention layers into the model architecture, we turn\ndiffusion models into powerful and flexible generators for general conditioning\ninputs such as text or bounding boxes and high-resolution synthesis becomes\npossible in a convolutional manner. Our latent diffusion models (LDMs) achieve\na new state of the art for image inpainting and highly competitive performance\non various tasks, including unconditional image generation, semantic scene\nsynthesis, and super-resolution, while significantly reducing computational\nrequirements compared to pixel-based DMs. Code is available at\nhttps://github.com/CompVis/latent-diffusion .", | |
| "authors": "Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Bj\u00f6rn Ommer", | |
| "published": "2021-12-20", | |
| "updated": "2022-04-13", | |
| "primary_cat": "cs.CV", | |
| "cats": [ | |
| "cs.CV" | |
| ], | |
| "label": "Related Work" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2404.02429v1", | |
| "title": "AD4RL: Autonomous Driving Benchmarks for Offline Reinforcement Learning with Value-based Dataset", | |
| "abstract": "Offline reinforcement learning has emerged as a promising technology by\nenhancing its practicality through the use of pre-collected large datasets.\nDespite its practical benefits, most algorithm development research in offline\nreinforcement learning still relies on game tasks with synthetic datasets. To\naddress such limitations, this paper provides autonomous driving datasets and\nbenchmarks for offline reinforcement learning research. We provide 19 datasets,\nincluding real-world human driver's datasets, and seven popular offline\nreinforcement learning algorithms in three realistic driving scenarios. We also\nprovide a unified decision-making process model that can operate effectively\nacross different scenarios, serving as a reference framework in algorithm\ndesign. Our research lays the groundwork for further collaborations in the\ncommunity to explore practical aspects of existing reinforcement learning\nmethods. Dataset and codes can be found in https://sites.google.com/view/ad4rl.", | |
| "authors": "Dongsu Lee, Chanin Eom, Minhae Kwon", | |
| "published": "2024-04-03", | |
| "updated": "2024-04-03", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2307.02900v2", | |
| "title": "Meta Federated Reinforcement Learning for Distributed Resource Allocation", | |
| "abstract": "In cellular networks, resource allocation is usually performed in a\ncentralized way, which brings huge computation complexity to the base station\n(BS) and high transmission overhead. This paper explores a distributed resource\nallocation method that aims to maximize energy efficiency (EE) while ensuring\nthe quality of service (QoS) for users. Specifically, in order to address\nwireless channel conditions, we propose a robust meta federated reinforcement\nlearning (\\textit{MFRL}) framework that allows local users to optimize transmit\npower and assign channels using locally trained neural network models, so as to\noffload computational burden from the cloud server to the local users, reducing\ntransmission overhead associated with local channel state information. The BS\nperforms the meta learning procedure to initialize a general global model,\nenabling rapid adaptation to different environments with improved EE\nperformance. The federated learning technique, based on decentralized\nreinforcement learning, promotes collaboration and mutual benefits among users.\nAnalysis and numerical results demonstrate that the proposed \\textit{MFRL}\nframework accelerates the reinforcement learning process, decreases\ntransmission overhead, and offloads computation, while outperforming the\nconventional decentralized reinforcement learning algorithm in terms of\nconvergence speed and EE performance across various scenarios.", | |
| "authors": "Zelin Ji, Zhijin Qin, Xiaoming Tao", | |
| "published": "2023-07-06", | |
| "updated": "2023-07-09", | |
| "primary_cat": "eess.SP", | |
| "cats": [ | |
| "eess.SP", | |
| "cs.SY", | |
| "eess.SY" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2012.09737v2", | |
| "title": "Model-free and Bayesian Ensembling Model-based Deep Reinforcement Learning for Particle Accelerator Control Demonstrated on the FERMI FEL", | |
| "abstract": "Reinforcement learning holds tremendous promise in accelerator controls. The\nprimary goal of this paper is to show how this approach can be utilised on an\noperational level on accelerator physics problems. Despite the success of\nmodel-free reinforcement learning in several domains, sample-efficiency still\nis a bottle-neck, which might be encompassed by model-based methods. We compare\nwell-suited purely model-based to model-free reinforcement learning applied to\nthe intensity optimisation on the FERMI FEL system. We find that the\nmodel-based approach demonstrates higher representational power and\nsample-efficiency, while the asymptotic performance of the model-free method is\nslightly superior. The model-based algorithm is implemented in a DYNA-style\nusing an uncertainty aware model, and the model-free algorithm is based on\ntailored deep Q-learning. In both cases, the algorithms were implemented in a\nway, which presents increased noise robustness as omnipresent in accelerator\ncontrol problems. Code is released in\nhttps://github.com/MathPhysSim/FERMI_RL_Paper.", | |
| "authors": "Simon Hirlaender, Niky Bruchon", | |
| "published": "2020-12-17", | |
| "updated": "2022-01-26", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI", | |
| "cs.SY", | |
| "eess.SY", | |
| "physics.acc-ph", | |
| "I.2; J.2" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2206.02380v2", | |
| "title": "Adaptive Rollout Length for Model-Based RL Using Model-Free Deep RL", | |
| "abstract": "Model-based reinforcement learning promises to learn an optimal policy from\nfewer interactions with the environment compared to model-free reinforcement\nlearning by learning an intermediate model of the environment in order to\npredict future interactions. When predicting a sequence of interactions, the\nrollout length, which limits the prediction horizon, is a critical\nhyperparameter as accuracy of the predictions diminishes in the regions that\nare further away from real experience. As a result, with a longer rollout\nlength, an overall worse policy is learned in the long run. Thus, the\nhyperparameter provides a trade-off between quality and efficiency. In this\nwork, we frame the problem of tuning the rollout length as a meta-level\nsequential decision-making problem that optimizes the final policy learned by\nmodel-based reinforcement learning given a fixed budget of environment\ninteractions by adapting the hyperparameter dynamically based on feedback from\nthe learning process, such as accuracy of the model and the remaining budget of\ninteractions. We use model-free deep reinforcement learning to solve the\nmeta-level decision problem and demonstrate that our approach outperforms\ncommon heuristic baselines on two well-known reinforcement learning\nenvironments.", | |
| "authors": "Abhinav Bhatia, Philip S. Thomas, Shlomo Zilberstein", | |
| "published": "2022-06-06", | |
| "updated": "2022-06-07", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1609.03348v4", | |
| "title": "A Threshold-based Scheme for Reinforcement Learning in Neural Networks", | |
| "abstract": "A generic and scalable Reinforcement Learning scheme for Artificial Neural\nNetworks is presented, providing a general purpose learning machine. By\nreference to a node threshold three features are described 1) A mechanism for\nPrimary Reinforcement, capable of solving linearly inseparable problems 2) The\nlearning scheme is extended to include a mechanism for Conditioned\nReinforcement, capable of forming long term strategy 3) The learning scheme is\nmodified to use a threshold-based deep learning algorithm, providing a robust\nand biologically inspired alternative to backpropagation. The model may be used\nfor supervised as well as unsupervised training regimes.", | |
| "authors": "Thomas H. Ward", | |
| "published": "2016-09-12", | |
| "updated": "2017-01-14", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.NE" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2206.07915v2", | |
| "title": "Barrier Certified Safety Learning Control: When Sum-of-Square Programming Meets Reinforcement Learning", | |
| "abstract": "Safety guarantee is essential in many engineering implementations.\nReinforcement learning provides a useful way to strengthen safety. However,\nreinforcement learning algorithms cannot completely guarantee safety over\nrealistic operations. To address this issue, this work adopts control barrier\nfunctions over reinforcement learning, and proposes a compensated algorithm to\ncompletely maintain safety. Specifically, a sum-of-squares programming has been\nexploited to search for the optimal controller, and tune the learning\nhyperparameters simultaneously. Thus, the control actions are pledged to be\nalways within the safe region. The effectiveness of proposed method is\ndemonstrated via an inverted pendulum model. Compared to quadratic programming\nbased reinforcement learning methods, our sum-of-squares programming based\nreinforcement learning has shown its superiority.", | |
| "authors": "Hejun Huang, Zhenglong Li, Dongkun Han", | |
| "published": "2022-06-16", | |
| "updated": "2022-06-29", | |
| "primary_cat": "eess.SY", | |
| "cats": [ | |
| "eess.SY", | |
| "cs.LG", | |
| "cs.SY" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1901.01977v1", | |
| "title": "Accelerating Goal-Directed Reinforcement Learning by Model Characterization", | |
| "abstract": "We propose a hybrid approach aimed at improving the sample efficiency in\ngoal-directed reinforcement learning. We do this via a two-step mechanism where\nfirstly, we approximate a model from Model-Free reinforcement learning. Then,\nwe leverage this approximate model along with a notion of reachability using\nMean First Passage Times to perform Model-Based reinforcement learning. Built\non such a novel observation, we design two new algorithms - Mean First Passage\nTime based Q-Learning (MFPT-Q) and Mean First Passage Time based DYNA\n(MFPT-DYNA), that have been fundamentally modified from the state-of-the-art\nreinforcement learning techniques. Preliminary results have shown that our\nhybrid approaches converge with much fewer iterations than their corresponding\nstate-of-the-art counterparts and therefore requiring much fewer samples and\nmuch fewer training trials to converge.", | |
| "authors": "Shoubhik Debnath, Gaurav Sukhatme, Lantao Liu", | |
| "published": "2019-01-04", | |
| "updated": "2019-01-04", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI", | |
| "stat.ML" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2011.10714v1", | |
| "title": "Double Meta-Learning for Data Efficient Policy Optimization in Non-Stationary Environments", | |
| "abstract": "We are interested in learning models of non-stationary environments, which\ncan be framed as a multi-task learning problem. Model-free reinforcement\nlearning algorithms can achieve good asymptotic performance in multi-task\nlearning at a cost of extensive sampling, due to their approach, which requires\nlearning from scratch. While model-based approaches are among the most data\nefficient learning algorithms, they still struggle with complex tasks and model\nuncertainties. Meta-reinforcement learning addresses the efficiency and\ngeneralization challenges on multi task learning by quickly leveraging the\nmeta-prior policy for a new task. In this paper, we propose a\nmeta-reinforcement learning approach to learn the dynamic model of a\nnon-stationary environment to be used for meta-policy optimization later. Due\nto the sample efficiency of model-based learning methods, we are able to\nsimultaneously train both the meta-model of the non-stationary environment and\nthe meta-policy until dynamic model convergence. Then, the meta-learned dynamic\nmodel of the environment will generate simulated data for meta-policy\noptimization. Our experiment demonstrates that our proposed method can\nmeta-learn the policy in a non-stationary environment with the data efficiency\nof model-based learning approaches while achieving the high asymptotic\nperformance of model-free meta-reinforcement learning.", | |
| "authors": "Elahe Aghapour, Nora Ayanian", | |
| "published": "2020-11-21", | |
| "updated": "2020-11-21", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2006.07178v2", | |
| "title": "Meta-Reinforcement Learning Robust to Distributional Shift via Model Identification and Experience Relabeling", | |
| "abstract": "Reinforcement learning algorithms can acquire policies for complex tasks\nautonomously. However, the number of samples required to learn a diverse set of\nskills can be prohibitively large. While meta-reinforcement learning methods\nhave enabled agents to leverage prior experience to adapt quickly to new tasks,\ntheir performance depends crucially on how close the new task is to the\npreviously experienced tasks. Current approaches are either not able to\nextrapolate well, or can do so at the expense of requiring extremely large\namounts of data for on-policy meta-training. In this work, we present model\nidentification and experience relabeling (MIER), a meta-reinforcement learning\nalgorithm that is both efficient and extrapolates well when faced with\nout-of-distribution tasks at test time. Our method is based on a simple\ninsight: we recognize that dynamics models can be adapted efficiently and\nconsistently with off-policy data, more easily than policies and value\nfunctions. These dynamics models can then be used to continue training policies\nand value functions for out-of-distribution tasks without using\nmeta-reinforcement learning at all, by generating synthetic experience for the\nnew task.", | |
| "authors": "Russell Mendonca, Xinyang Geng, Chelsea Finn, Sergey Levine", | |
| "published": "2020-06-12", | |
| "updated": "2020-06-15", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "stat.ML" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2109.03188v3", | |
| "title": "Optimizing Quantum Variational Circuits with Deep Reinforcement Learning", | |
| "abstract": "Quantum Machine Learning (QML) is considered to be one of the most promising\napplications of near term quantum devices. However, the optimization of quantum\nmachine learning models presents numerous challenges arising from the\nimperfections of hardware and the fundamental obstacles in navigating an\nexponentially scaling Hilbert space. In this work, we evaluate the potential of\ncontemporary methods in deep reinforcement learning to augment gradient based\noptimization routines in quantum variational circuits. We find that\nreinforcement learning augmented optimizers consistently outperform gradient\ndescent in noisy environments. All code and pretrained weights are available to\nreplicate the results or deploy the models at:\nhttps://github.com/lockwo/rl_qvc_opt.", | |
| "authors": "Owen Lockwood", | |
| "published": "2021-09-07", | |
| "updated": "2022-05-14", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "quant-ph" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2312.15385v1", | |
| "title": "Discrete-Time Mean-Variance Strategy Based on Reinforcement Learning", | |
| "abstract": "This paper studies a discrete-time mean-variance model based on reinforcement\nlearning. Compared with its continuous-time counterpart in \\cite{zhou2020mv},\nthe discrete-time model makes more general assumptions about the asset's return\ndistribution. Using entropy to measure the cost of exploration, we derive the\noptimal investment strategy, whose density function is also Gaussian type.\nAdditionally, we design the corresponding reinforcement learning algorithm.\nBoth simulation experiments and empirical analysis indicate that our\ndiscrete-time model exhibits better applicability when analyzing real-world\ndata than the continuous-time model.", | |
| "authors": "Xiangyu Cui, Xun Li, Yun Shi, Si Zhao", | |
| "published": "2023-12-24", | |
| "updated": "2023-12-24", | |
| "primary_cat": "q-fin.MF", | |
| "cats": [ | |
| "q-fin.MF", | |
| "cs.LG", | |
| "q-fin.PM" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2012.00743v1", | |
| "title": "Adaptive Neural Architectures for Recommender Systems", | |
| "abstract": "Deep learning has proved an effective means to capture the non-linear\nassociations of user preferences. However, the main drawback of existing deep\nlearning architectures is that they follow a fixed recommendation strategy,\nignoring users' real time-feedback. Recent advances of deep reinforcement\nstrategies showed that recommendation policies can be continuously updated\nwhile users interact with the system. In doing so, we can learn the optimal\npolicy that fits to users' preferences over the recommendation sessions. The\nmain drawback of deep reinforcement strategies is that are based on predefined\nand fixed neural architectures. To shed light on how to handle this issue, in\nthis study we first present deep reinforcement learning strategies for\nrecommendation and discuss the main limitations due to the fixed neural\narchitectures. Then, we detail how recent advances on progressive neural\narchitectures are used for consecutive tasks in other research domains.\nFinally, we present the key challenges to fill the gap between deep\nreinforcement learning and adaptive neural architectures. We provide guidelines\nfor searching for the best neural architecture based on each user feedback via\nreinforcement learning, while considering the prediction performance on\nreal-time recommendations and the model complexity.", | |
| "authors": "Dimitrios Rafailidis, Stefanos Antaris", | |
| "published": "2020-11-11", | |
| "updated": "2020-11-11", | |
| "primary_cat": "cs.IR", | |
| "cats": [ | |
| "cs.IR", | |
| "cs.AI", | |
| "cs.LG" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2009.09781v1", | |
| "title": "Rethinking Supervised Learning and Reinforcement Learning in Task-Oriented Dialogue Systems", | |
| "abstract": "Dialogue policy learning for task-oriented dialogue systems has enjoyed great\nprogress recently mostly through employing reinforcement learning methods.\nHowever, these approaches have become very sophisticated. It is time to\nre-evaluate it. Are we really making progress developing dialogue agents only\nbased on reinforcement learning? We demonstrate how (1)~traditional supervised\nlearning together with (2)~a simulator-free adversarial learning method can be\nused to achieve performance comparable to state-of-the-art RL-based methods.\nFirst, we introduce a simple dialogue action decoder to predict the appropriate\nactions. Then, the traditional multi-label classification solution for dialogue\npolicy learning is extended by adding dense layers to improve the dialogue\nagent performance. Finally, we employ the Gumbel-Softmax estimator to\nalternatively train the dialogue agent and the dialogue reward model without\nusing reinforcement learning. Based on our extensive experimentation, we can\nconclude the proposed methods can achieve more stable and higher performance\nwith fewer efforts, such as the domain knowledge required to design a user\nsimulator and the intractable parameter tuning in reinforcement learning. Our\nmain goal is not to beat reinforcement learning with supervised learning, but\nto demonstrate the value of rethinking the role of reinforcement learning and\nsupervised learning in optimizing task-oriented dialogue systems.", | |
| "authors": "Ziming Li, Julia Kiseleva, Maarten de Rijke", | |
| "published": "2020-09-21", | |
| "updated": "2020-09-21", | |
| "primary_cat": "cs.CL", | |
| "cats": [ | |
| "cs.CL", | |
| "cs.LG" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2102.05612v1", | |
| "title": "Personalization for Web-based Services using Offline Reinforcement Learning", | |
| "abstract": "Large-scale Web-based services present opportunities for improving UI\npolicies based on observed user interactions. We address challenges of learning\nsuch policies through model-free offline Reinforcement Learning (RL) with\noff-policy training. Deployed in a production system for user authentication in\na major social network, it significantly improves long-term objectives. We\narticulate practical challenges, compare several ML techniques, provide\ninsights on training and evaluation of RL models, and discuss generalizations.", | |
| "authors": "Pavlos Athanasios Apostolopoulos, Zehui Wang, Hanson Wang, Chad Zhou, Kittipat Virochsiri, Norm Zhou, Igor L. Markov", | |
| "published": "2021-02-10", | |
| "updated": "2021-02-10", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.HC", | |
| "cs.SE" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2111.03967v1", | |
| "title": "A Deep Reinforcement Learning Approach for Composing Moving IoT Services", | |
| "abstract": "We develop a novel framework for efficiently and effectively discovering\ncrowdsourced services that move in close proximity to a user over a period of\ntime. We introduce a moving crowdsourced service model which is modelled as a\nmoving region. We propose a deep reinforcement learning-based composition\napproach to select and compose moving IoT services considering quality\nparameters. Additionally, we develop a parallel flock-based service discovery\nalgorithm as a ground-truth to measure the accuracy of the proposed approach.\nThe experiments on two real-world datasets verify the effectiveness and\nefficiency of the deep reinforcement learning-based approach.", | |
| "authors": "Azadeh Ghari Neiat, Athman Bouguettaya, Mohammed Bahutair", | |
| "published": "2021-11-06", | |
| "updated": "2021-11-06", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1811.00128v1", | |
| "title": "Towards a Simple Approach to Multi-step Model-based Reinforcement Learning", | |
| "abstract": "When environmental interaction is expensive, model-based reinforcement\nlearning offers a solution by planning ahead and avoiding costly mistakes.\nModel-based agents typically learn a single-step transition model. In this\npaper, we propose a multi-step model that predicts the outcome of an action\nsequence with variable length. We show that this model is easy to learn, and\nthat the model can make policy-conditional predictions. We report preliminary\nresults that show a clear advantage for the multi-step model compared to its\none-step counterpart.", | |
| "authors": "Kavosh Asadi, Evan Cater, Dipendra Misra, Michael L. Littman", | |
| "published": "2018-10-31", | |
| "updated": "2018-10-31", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI", | |
| "stat.ML" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1812.09968v1", | |
| "title": "VMAV-C: A Deep Attention-based Reinforcement Learning Algorithm for Model-based Control", | |
| "abstract": "Recent breakthroughs in Go play and strategic games have witnessed the great\npotential of reinforcement learning in intelligently scheduling in uncertain\nenvironment, but some bottlenecks are also encountered when we generalize this\nparadigm to universal complex tasks. Among them, the low efficiency of data\nutilization in model-free reinforcement algorithms is of great concern. In\ncontrast, the model-based reinforcement learning algorithms can reveal\nunderlying dynamics in learning environments and seldom suffer the data\nutilization problem. To address the problem, a model-based reinforcement\nlearning algorithm with attention mechanism embedded is proposed as an\nextension of World Models in this paper. We learn the environment model through\nMixture Density Network Recurrent Network(MDN-RNN) for agents to interact, with\ncombinations of variational auto-encoder(VAE) and attention incorporated in\nstate value estimates during the process of learning policy. In this way, agent\ncan learn optimal policies through less interactions with actual environment,\nand final experiments demonstrate the effectiveness of our model in control\nproblem.", | |
| "authors": "Xingxing Liang, Qi Wang, Yanghe Feng, Zhong Liu, Jincai Huang", | |
| "published": "2018-12-24", | |
| "updated": "2018-12-24", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI", | |
| "cs.NE" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1910.03016v4", | |
| "title": "Is a Good Representation Sufficient for Sample Efficient Reinforcement Learning?", | |
| "abstract": "Modern deep learning methods provide effective means to learn good\nrepresentations. However, is a good representation itself sufficient for sample\nefficient reinforcement learning? This question has largely been studied only\nwith respect to (worst-case) approximation error, in the more classical\napproximate dynamic programming literature. With regards to the statistical\nviewpoint, this question is largely unexplored, and the extant body of\nliterature mainly focuses on conditions which permit sample efficient\nreinforcement learning with little understanding of what are necessary\nconditions for efficient reinforcement learning.\n This work shows that, from the statistical viewpoint, the situation is far\nsubtler than suggested by the more traditional approximation viewpoint, where\nthe requirements on the representation that suffice for sample efficient RL are\neven more stringent. Our main results provide sharp thresholds for\nreinforcement learning methods, showing that there are hard limitations on what\nconstitutes good function approximation (in terms of the dimensionality of the\nrepresentation), where we focus on natural representational conditions relevant\nto value-based, model-based, and policy-based learning. These lower bounds\nhighlight that having a good (value-based, model-based, or policy-based)\nrepresentation in and of itself is insufficient for efficient reinforcement\nlearning, unless the quality of this approximation passes certain hard\nthresholds. Furthermore, our lower bounds also imply exponential separations on\nthe sample complexity between 1) value-based learning with perfect\nrepresentation and value-based learning with a good-but-not-perfect\nrepresentation, 2) value-based learning and policy-based learning, 3)\npolicy-based learning and supervised learning and 4) reinforcement learning and\nimitation learning.", | |
| "authors": "Simon S. Du, Sham M. Kakade, Ruosong Wang, Lin F. Yang", | |
| "published": "2019-10-07", | |
| "updated": "2020-02-28", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI", | |
| "math.OC", | |
| "stat.ML" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2311.14766v1", | |
| "title": "Reinforcement Learning from Statistical Feedback: the Journey from AB Testing to ANT Testing", | |
| "abstract": "Reinforcement Learning from Human Feedback (RLHF) has played a crucial role\nin the success of large models such as ChatGPT. RLHF is a reinforcement\nlearning framework which combines human feedback to improve learning\neffectiveness and performance. However, obtaining preferences feedback manually\nis quite expensive in commercial applications. Some statistical commercial\nindicators are usually more valuable and always ignored in RLHF. There exists a\ngap between commercial target and model training. In our research, we will\nattempt to fill this gap with statistical business feedback instead of human\nfeedback, using AB testing which is a well-established statistical method.\nReinforcement Learning from Statistical Feedback (RLSF) based on AB testing is\nproposed. Statistical inference methods are used to obtain preferences for\ntraining the reward network, which fine-tunes the pre-trained model in\nreinforcement learning framework, achieving greater business value.\nFurthermore, we extend AB testing with double selections at a single time-point\nto ANT testing with multiple selections at different feedback time points.\nMoreover, we design numerical experiences to validate the effectiveness of our\nalgorithm framework.", | |
| "authors": "Feiyang Han, Yimin Wei, Zhaofeng Liu, Yanxing Qi", | |
| "published": "2023-11-24", | |
| "updated": "2023-11-24", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "math.ST", | |
| "stat.ME", | |
| "stat.TH" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2204.01409v1", | |
| "title": "Safe Controller for Output Feedback Linear Systems using Model-Based Reinforcement Learning", | |
| "abstract": "The objective of this research is to enable safety-critical systems to\nsimultaneously learn and execute optimal control policies in a safe manner to\nachieve complex autonomy. Learning optimal policies via trial and error, i.e.,\ntraditional reinforcement learning, is difficult to implement in\nsafety-critical systems, particularly when task restarts are unavailable. Safe\nmodel-based reinforcement learning techniques based on a barrier transformation\nhave recently been developed to address this problem. However, these methods\nrely on full state feedback, limiting their usability in a real-world\nenvironment. In this work, an output-feedback safe model-based reinforcement\nlearning technique based on a novel barrier-aware dynamic state estimator has\nbeen designed to address this issue. The developed approach facilitates\nsimultaneous learning and execution of safe control policies for\nsafety-critical linear systems. Simulation results indicate that barrier\ntransformation is an effective approach to achieve online reinforcement\nlearning in safety-critical systems using output feedback.", | |
| "authors": "S M Nahid Mahmud, Moad Abudia, Scott A Nivison, Zachary I. Bell, Rushikesh Kamalapurkar", | |
| "published": "2022-04-04", | |
| "updated": "2022-04-04", | |
| "primary_cat": "eess.SY", | |
| "cats": [ | |
| "eess.SY", | |
| "cs.SY" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2202.07789v1", | |
| "title": "Safe Reinforcement Learning by Imagining the Near Future", | |
| "abstract": "Safe reinforcement learning is a promising path toward applying reinforcement\nlearning algorithms to real-world problems, where suboptimal behaviors may lead\nto actual negative consequences. In this work, we focus on the setting where\nunsafe states can be avoided by planning ahead a short time into the future. In\nthis setting, a model-based agent with a sufficiently accurate model can avoid\nunsafe states. We devise a model-based algorithm that heavily penalizes unsafe\ntrajectories, and derive guarantees that our algorithm can avoid unsafe states\nunder certain assumptions. Experiments demonstrate that our algorithm can\nachieve competitive rewards with fewer safety violations in several continuous\ncontrol tasks.", | |
| "authors": "Garrett Thomas, Yuping Luo, Tengyu Ma", | |
| "published": "2022-02-15", | |
| "updated": "2022-02-15", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2005.11142v1", | |
| "title": "Two-stage Deep Reinforcement Learning for Inverter-based Volt-VAR Control in Active Distribution Networks", | |
| "abstract": "Model-based Vol/VAR optimization method is widely used to eliminate voltage\nviolations and reduce network losses. However, the parameters of active\ndistribution networks(ADNs) are not onsite identified, so significant errors\nmay be involved in the model and make the model-based method infeasible. To\ncope with this critical issue, we propose a novel two-stage deep reinforcement\nlearning (DRL) method to improve the voltage profile by regulating\ninverter-based energy resources, which consists of offline stage and online\nstage. In the offline stage, a highly efficient adversarial reinforcement\nlearning algorithm is developed to train an offline agent robust to the model\nmismatch. In the sequential online stage, we transfer the offline agent safely\nas the online agent to perform continuous learning and controlling online with\nsignificantly improved safety and efficiency. Numerical simulations on IEEE\ntest cases not only demonstrate that the proposed adversarial reinforcement\nlearning algorithm outperforms the state-of-art algorithm, but also show that\nour proposed two-stage method achieves much better performance than the\nexisting DRL based methods in the online application.", | |
| "authors": "Haotian Liu, Wenchuan Wu", | |
| "published": "2020-05-20", | |
| "updated": "2020-05-20", | |
| "primary_cat": "eess.SY", | |
| "cats": [ | |
| "eess.SY", | |
| "cs.LG", | |
| "cs.SY", | |
| "J.7; C.3" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2006.09234v1", | |
| "title": "Model Embedding Model-Based Reinforcement Learning", | |
| "abstract": "Model-based reinforcement learning (MBRL) has shown its advantages in\nsample-efficiency over model-free reinforcement learning (MFRL). Despite the\nimpressive results it achieves, it still faces a trade-off between the ease of\ndata generation and model bias. In this paper, we propose a simple and elegant\nmodel-embedding model-based reinforcement learning (MEMB) algorithm in the\nframework of the probabilistic reinforcement learning. To balance the\nsample-efficiency and model bias, we exploit both real and imaginary data in\nthe training. In particular, we embed the model in the policy update and learn\n$Q$ and $V$ functions from the real data set. We provide the theoretical\nanalysis of MEMB with the Lipschitz continuity assumption on the model and\npolicy. At last, we evaluate MEMB on several benchmarks and demonstrate our\nalgorithm can achieve state-of-the-art performance.", | |
| "authors": "Xiaoyu Tan, Chao Qu, Junwu Xiong, James Zhang", | |
| "published": "2020-06-16", | |
| "updated": "2020-06-16", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1305.1809v2", | |
| "title": "Cover Tree Bayesian Reinforcement Learning", | |
| "abstract": "This paper proposes an online tree-based Bayesian approach for reinforcement\nlearning. For inference, we employ a generalised context tree model. This\ndefines a distribution on multivariate Gaussian piecewise-linear models, which\ncan be updated in closed form. The tree structure itself is constructed using\nthe cover tree method, which remains efficient in high dimensional spaces. We\ncombine the model with Thompson sampling and approximate dynamic programming to\nobtain effective exploration policies in unknown environments. The flexibility\nand computational simplicity of the model render it suitable for many\nreinforcement learning problems in continuous state spaces. We demonstrate this\nin an experimental comparison with least squares policy iteration.", | |
| "authors": "Nikolaos Tziortziotis, Christos Dimitrakakis, Konstantinos Blekas", | |
| "published": "2013-05-08", | |
| "updated": "2014-05-02", | |
| "primary_cat": "stat.ML", | |
| "cats": [ | |
| "stat.ML", | |
| "cs.LG" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1703.04489v1", | |
| "title": "Reinforcement Learning for Transition-Based Mention Detection", | |
| "abstract": "This paper describes an application of reinforcement learning to the mention\ndetection task. We define a novel action-based formulation for the mention\ndetection task, in which a model can flexibly revise past labeling decisions by\ngrouping together tokens and assigning partial mention labels. We devise a\nmethod to create mention-level episodes and we train a model by rewarding\ncorrectly labeled complete mentions, irrespective of the inner structure\ncreated. The model yields results which are on par with a competitive\nsupervised counterpart while being more flexible in terms of achieving targeted\nbehavior through reward modeling and generating internal mention structure,\nespecially on longer mentions.", | |
| "authors": "Georgiana Dinu, Wael Hamza, Radu Florian", | |
| "published": "2017-03-13", | |
| "updated": "2017-03-13", | |
| "primary_cat": "cs.CL", | |
| "cats": [ | |
| "cs.CL", | |
| "cs.AI" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1712.04170v2", | |
| "title": "Interpretable Policies for Reinforcement Learning by Genetic Programming", | |
| "abstract": "The search for interpretable reinforcement learning policies is of high\nacademic and industrial interest. Especially for industrial systems, domain\nexperts are more likely to deploy autonomously learned controllers if they are\nunderstandable and convenient to evaluate. Basic algebraic equations are\nsupposed to meet these requirements, as long as they are restricted to an\nadequate complexity. Here we introduce the genetic programming for\nreinforcement learning (GPRL) approach based on model-based batch reinforcement\nlearning and genetic programming, which autonomously learns policy equations\nfrom pre-existing default state-action trajectory samples. GPRL is compared to\na straight-forward method which utilizes genetic programming for symbolic\nregression, yielding policies imitating an existing well-performing, but\nnon-interpretable policy. Experiments on three reinforcement learning\nbenchmarks, i.e., mountain car, cart-pole balancing, and industrial benchmark,\ndemonstrate the superiority of our GPRL approach compared to the symbolic\nregression method. GPRL is capable of producing well-performing interpretable\nreinforcement learning policies from pre-existing default trajectory data.", | |
| "authors": "Daniel Hein, Steffen Udluft, Thomas A. Runkler", | |
| "published": "2017-12-12", | |
| "updated": "2018-04-04", | |
| "primary_cat": "cs.AI", | |
| "cats": [ | |
| "cs.AI", | |
| "cs.NE", | |
| "cs.SY" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1910.11914v3", | |
| "title": "On the convergence of projective-simulation-based reinforcement learning in Markov decision processes", | |
| "abstract": "In recent years, the interest in leveraging quantum effects for enhancing\nmachine learning tasks has significantly increased. Many algorithms speeding up\nsupervised and unsupervised learning were established. The first framework in\nwhich ways to exploit quantum resources specifically for the broader context of\nreinforcement learning were found is projective simulation. Projective\nsimulation presents an agent-based reinforcement learning approach designed in\na manner which may support quantum walk-based speed-ups. Although classical\nvariants of projective simulation have been benchmarked against common\nreinforcement learning algorithms, very few formal theoretical analyses have\nbeen provided for its performance in standard learning scenarios. In this\npaper, we provide a detailed formal discussion of the properties of this model.\nSpecifically, we prove that one version of the projective simulation model,\nunderstood as a reinforcement learning approach, converges to optimal behavior\nin a large class of Markov decision processes. This proof shows that a\nphysically-inspired approach to reinforcement learning can guarantee to\nconverge.", | |
| "authors": "Walter L. Boyajian, Jens Clausen, Lea M. Trenkwalder, Vedran Dunjko, Hans J. Briegel", | |
| "published": "2019-10-25", | |
| "updated": "2020-11-12", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI", | |
| "quant-ph", | |
| "stat.ML" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1708.07738v1", | |
| "title": "A Function Approximation Method for Model-based High-Dimensional Inverse Reinforcement Learning", | |
| "abstract": "This works handles the inverse reinforcement learning problem in\nhigh-dimensional state spaces, which relies on an efficient solution of\nmodel-based high-dimensional reinforcement learning problems. To solve the\ncomputationally expensive reinforcement learning problems, we propose a\nfunction approximation method to ensure that the Bellman Optimality Equation\nalways holds, and then estimate a function based on the observed human actions\nfor inverse reinforcement learning problems. The time complexity of the\nproposed method is linearly proportional to the cardinality of the action set,\nthus it can handle high-dimensional even continuous state spaces efficiently.\nWe test the proposed method in a simulated environment to show its accuracy,\nand three clinical tasks to show how it can be used to evaluate a doctor's\nproficiency.", | |
| "authors": "Kun Li, Joel W. Burdick", | |
| "published": "2017-08-23", | |
| "updated": "2017-08-23", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.RO" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2008.12095v1", | |
| "title": "Document-editing Assistants and Model-based Reinforcement Learning as a Path to Conversational AI", | |
| "abstract": "Intelligent assistants that follow commands or answer simple questions, such\nas Siri and Google search, are among the most economically important\napplications of AI. Future conversational AI assistants promise even greater\ncapabilities and a better user experience through a deeper understanding of the\ndomain, the user, or the user's purposes. But what domain and what methods are\nbest suited to researching and realizing this promise? In this article we argue\nfor the domain of voice document editing and for the methods of model-based\nreinforcement learning. The primary advantages of voice document editing are\nthat the domain is tightly scoped and that it provides something for the\nconversation to be about (the document) that is delimited and fully accessible\nto the intelligent assistant. The advantages of reinforcement learning in\ngeneral are that its methods are designed to learn from interaction without\nexplicit instruction and that it formalizes the purposes of the assistant.\nModel-based reinforcement learning is needed in order to genuinely understand\nthe domain of discourse and thereby work efficiently with the user to achieve\ntheir goals. Together, voice document editing and model-based reinforcement\nlearning comprise a promising research direction for achieving conversational\nAI.", | |
| "authors": "Katya Kudashkina, Patrick M. Pilarski, Richard S. Sutton", | |
| "published": "2020-08-27", | |
| "updated": "2020-08-27", | |
| "primary_cat": "cs.AI", | |
| "cats": [ | |
| "cs.AI", | |
| "cs.HC", | |
| "cs.LG" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2305.03360v1", | |
| "title": "A Survey on Offline Model-Based Reinforcement Learning", | |
| "abstract": "Model-based approaches are becoming increasingly popular in the field of\noffline reinforcement learning, with high potential in real-world applications\ndue to the model's capability of thoroughly utilizing the large historical\ndatasets available with supervised learning techniques. This paper presents a\nliterature review of recent work in offline model-based reinforcement learning,\na field that utilizes model-based approaches in offline reinforcement learning.\nThe survey provides a brief overview of the concepts and recent developments in\nboth offline reinforcement learning and model-based reinforcement learning, and\ndiscuss the intersection of the two fields. We then presents key relevant\npapers in the field of offline model-based reinforcement learning and discuss\ntheir methods, particularly their approaches in solving the issue of\ndistributional shift, the main problem faced by all current offline model-based\nreinforcement learning methods. We further discuss key challenges faced by the\nfield, and suggest possible directions for future work.", | |
| "authors": "Haoyang He", | |
| "published": "2023-05-05", | |
| "updated": "2023-05-05", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI", | |
| "cs.SY", | |
| "eess.SY", | |
| "I.2.6; I.2.8" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2105.00822v2", | |
| "title": "Generative Adversarial Reward Learning for Generalized Behavior Tendency Inference", | |
| "abstract": "Recent advances in reinforcement learning have inspired increasing interest\nin learning user modeling adaptively through dynamic interactions, e.g., in\nreinforcement learning based recommender systems. Reward function is crucial\nfor most of reinforcement learning applications as it can provide the guideline\nabout the optimization. However, current reinforcement-learning-based methods\nrely on manually-defined reward functions, which cannot adapt to dynamic and\nnoisy environments. Besides, they generally use task-specific reward functions\nthat sacrifice generalization ability. We propose a generative inverse\nreinforcement learning for user behavioral preference modelling, to address the\nabove issues. Instead of using predefined reward functions, our model can\nautomatically learn the rewards from user's actions based on discriminative\nactor-critic network and Wasserstein GAN. Our model provides a general way of\ncharacterizing and explaining underlying behavioral tendencies, and our\nexperiments show our method outperforms state-of-the-art methods in a variety\nof scenarios, namely traffic signal control, online recommender systems, and\nscanpath prediction.", | |
| "authors": "Xiaocong Chen, Lina Yao, Xianzhi Wang, Aixin Sun, Wenjie Zhang, Quan Z. Sheng", | |
| "published": "2021-05-03", | |
| "updated": "2021-05-05", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI", | |
| "cs.IR" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2404.01794v1", | |
| "title": "Imitation Game: A Model-based and Imitation Learning Deep Reinforcement Learning Hybrid", | |
| "abstract": "Autonomous and learning systems based on Deep Reinforcement Learning have\nfirmly established themselves as a foundation for approaches to creating\nresilient and efficient Cyber-Physical Energy Systems. However, most current\napproaches suffer from two distinct problems: Modern model-free algorithms such\nas Soft Actor Critic need a high number of samples to learn a meaningful\npolicy, as well as a fallback to ward against concept drifts (e. g.,\ncatastrophic forgetting). In this paper, we present the work in progress\ntowards a hybrid agent architecture that combines model-based Deep\nReinforcement Learning with imitation learning to overcome both problems.", | |
| "authors": "Eric MSP Veith, Torben Logemann, Aleksandr Berezin, Arlena Well\u00dfow, Stephan Balduin", | |
| "published": "2024-04-02", | |
| "updated": "2024-04-02", | |
| "primary_cat": "cs.AI", | |
| "cats": [ | |
| "cs.AI" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2311.07260v1", | |
| "title": "TIAGo RL: Simulated Reinforcement Learning Environments with Tactile Data for Mobile Robots", | |
| "abstract": "Tactile information is important for robust performance in robotic tasks that\ninvolve physical interaction, such as object manipulation. However, with more\ndata included in the reasoning and control process, modeling behavior becomes\nincreasingly difficult. Deep Reinforcement Learning (DRL) produced promising\nresults for learning complex behavior in various domains, including\ntactile-based manipulation in robotics. In this work, we present our\nopen-source reinforcement learning environments for the TIAGo service robot.\nThey produce tactile sensor measurements that resemble those of a real\nsensorised gripper for TIAGo, encouraging research in transfer learning of DRL\npolicies. Lastly, we show preliminary training results of a learned force\ncontrol policy and compare it to a classical PI controller.", | |
| "authors": "Luca Lach, Francesco Ferro, Robert Haschke", | |
| "published": "2023-11-13", | |
| "updated": "2023-11-13", | |
| "primary_cat": "cs.RO", | |
| "cats": [ | |
| "cs.RO", | |
| "cs.AI" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2008.07240v1", | |
| "title": "Model-Reference Reinforcement Learning for Collision-Free Tracking Control of Autonomous Surface Vehicles", | |
| "abstract": "This paper presents a novel model-reference reinforcement learning algorithm\nfor the intelligent tracking control of uncertain autonomous surface vehicles\nwith collision avoidance. The proposed control algorithm combines a\nconventional control method with reinforcement learning to enhance control\naccuracy and intelligence. In the proposed control design, a nominal system is\nconsidered for the design of a baseline tracking controller using a\nconventional control approach. The nominal system also defines the desired\nbehaviour of uncertain autonomous surface vehicles in an obstacle-free\nenvironment. Thanks to reinforcement learning, the overall tracking controller\nis capable of compensating for model uncertainties and achieving collision\navoidance at the same time in environments with obstacles. In comparison to\ntraditional deep reinforcement learning methods, our proposed learning-based\ncontrol can provide stability guarantees and better sample efficiency. We\ndemonstrate the performance of the new algorithm using an example of autonomous\nsurface vehicles.", | |
| "authors": "Qingrui Zhang, Wei Pan, Vasso Reppa", | |
| "published": "2020-08-17", | |
| "updated": "2020-08-17", | |
| "primary_cat": "eess.SY", | |
| "cats": [ | |
| "eess.SY", | |
| "cs.LG", | |
| "cs.RO", | |
| "cs.SY" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2011.01734v1", | |
| "title": "Differentiable Physics Models for Real-world Offline Model-based Reinforcement Learning", | |
| "abstract": "A limitation of model-based reinforcement learning (MBRL) is the exploitation\nof errors in the learned models. Black-box models can fit complex dynamics with\nhigh fidelity, but their behavior is undefined outside of the data\ndistribution.Physics-based models are better at extrapolating, due to the\ngeneral validity of their informed structure, but underfit in the real world\ndue to the presence of unmodeled phenomena. In this work, we demonstrate\nexperimentally that for the offline model-based reinforcement learning setting,\nphysics-based models can be beneficial compared to high-capacity function\napproximators if the mechanical structure is known. Physics-based models can\nlearn to perform the ball in a cup (BiC) task on a physical manipulator using\nonly 4 minutes of sampled data using offline MBRL. We find that black-box\nmodels consistently produce unviable policies for BiC as all predicted\ntrajectories diverge to physically impossible state, despite having access to\nmore data than the physics-based model. In addition, we generalize the approach\nof physics parameter identification from modeling holonomic multi-body systems\nto systems with nonholonomic dynamics using end-to-end automatic\ndifferentiation.\n Videos: https://sites.google.com/view/ball-in-a-cup-in-4-minutes/", | |
| "authors": "Michael Lutter, Johannes Silberbauer, Joe Watson, Jan Peters", | |
| "published": "2020-11-03", | |
| "updated": "2020-11-03", | |
| "primary_cat": "cs.RO", | |
| "cats": [ | |
| "cs.RO", | |
| "cs.LG" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2402.16543v2", | |
| "title": "Model-based deep reinforcement learning for accelerated learning from flow simulations", | |
| "abstract": "In recent years, deep reinforcement learning has emerged as a technique to\nsolve closed-loop flow control problems. Employing simulation-based\nenvironments in reinforcement learning enables a priori end-to-end optimization\nof the control system, provides a virtual testbed for safety-critical control\napplications, and allows to gain a deep understanding of the control\nmechanisms. While reinforcement learning has been applied successfully in a\nnumber of rather simple flow control benchmarks, a major bottleneck toward\nreal-world applications is the high computational cost and turnaround time of\nflow simulations. In this contribution, we demonstrate the benefits of\nmodel-based reinforcement learning for flow control applications. Specifically,\nwe optimize the policy by alternating between trajectories sampled from flow\nsimulations and trajectories sampled from an ensemble of environment models.\nThe model-based learning reduces the overall training time by up to $85\\%$ for\nthe fluidic pinball test case. Even larger savings are expected for more\ndemanding flow simulations.", | |
| "authors": "Andre Weiner, Janis Geise", | |
| "published": "2024-02-26", | |
| "updated": "2024-04-10", | |
| "primary_cat": "physics.flu-dyn", | |
| "cats": [ | |
| "physics.flu-dyn", | |
| "cs.CE", | |
| "cs.LG" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2304.00006v1", | |
| "title": "Bi-directional personalization reinforcement learning-based architecture with active learning using a multi-model data service for the travel nursing industry", | |
| "abstract": "The challenges of using inadequate online recruitment systems can be\naddressed with machine learning and software engineering techniques.\nBi-directional personalization reinforcement learning-based architecture with\nactive learning can get recruiters to recommend qualified applicants and also\nenable applicants to receive personalized job recommendations. This paper\nfocuses on how machine learning techniques can enhance the recruitment process\nin the travel nursing industry by helping speed up data acquisition using a\nmulti-model data service and then providing personalized recommendations using\nbi-directional reinforcement learning with active learning. This need was\nespecially evident when trying to respond to the overwhelming needs of\nhealthcare facilities during the COVID-19 pandemic. The need for traveling\nnurses and other healthcare professionals was more evident during the lockdown\nperiod. A data service was architected for job feed processing using an\norchestration of natural language processing (NLP) models that synthesize\njob-related data into a database efficiently and accurately. The multi-model\ndata service provided the data necessary to develop a bi-directional\npersonalization system using reinforcement learning with active learning that\ncould recommend travel nurses and healthcare professionals to recruiters and\nprovide job recommendations to applicants using an internally developed smart\nmatch score as a basis. The bi-directional personalization reinforcement\nlearning-based architecture with active learning combines two personalization\nsystems - one that runs forward to recommend qualified candidates for jobs and\nanother that runs backward and recommends jobs for applicants.", | |
| "authors": "Ezana N. Beyenne", | |
| "published": "2023-03-14", | |
| "updated": "2023-03-14", | |
| "primary_cat": "cs.IR", | |
| "cats": [ | |
| "cs.IR", | |
| "cs.AI", | |
| "cs.LG", | |
| "I.2" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1709.05067v1", | |
| "title": "Deep Reinforcement Learning for Conversational AI", | |
| "abstract": "Deep reinforcement learning is revolutionizing the artificial intelligence\nfield. Currently, it serves as a good starting point for constructing\nintelligent autonomous systems which offer a better knowledge of the visual\nworld. It is possible to scale deep reinforcement learning with the use of deep\nlearning and do amazing tasks such as use of pixels in playing video games. In\nthis paper, key concepts of deep reinforcement learning including reward\nfunction, differences between reinforcement learning and supervised learning\nand models for implementation of reinforcement are discussed. Key challenges\nrelated to the implementation of reinforcement learning in conversational AI\ndomain are identified as well as discussed in detail. Various conversational\nmodels which are based on deep reinforcement learning (as well as deep\nlearning) are also discussed. In summary, this paper discusses key aspects of\ndeep reinforcement learning which are crucial for designing an efficient\nconversational AI.", | |
| "authors": "Mahipal Jadeja, Neelanshi Varia, Agam Shah", | |
| "published": "2017-09-15", | |
| "updated": "2017-09-15", | |
| "primary_cat": "cs.AI", | |
| "cats": [ | |
| "cs.AI" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2202.09064v2", | |
| "title": "Can Interpretable Reinforcement Learning Manage Prosperity Your Way?", | |
| "abstract": "Personalisation of products and services is fast becoming the driver of\nsuccess in banking and commerce. Machine learning holds the promise of gaining\na deeper understanding of and tailoring to customers' needs and preferences.\nWhereas traditional solutions to financial decision problems frequently rely on\nmodel assumptions, reinforcement learning is able to exploit large amounts of\ndata to improve customer modelling and decision-making in complex financial\nenvironments with fewer assumptions. Model explainability and interpretability\npresent challenges from a regulatory perspective which demands transparency for\nacceptance; they also offer the opportunity for improved insight into and\nunderstanding of customers. Post-hoc approaches are typically used for\nexplaining pretrained reinforcement learning models. Based on our previous\nmodeling of customer spending behaviour, we adapt our recent reinforcement\nlearning algorithm that intrinsically characterizes desirable behaviours and we\ntransition to the problem of asset management. We train inherently\ninterpretable reinforcement learning agents to give investment advice that is\naligned with prototype financial personality traits which are combined to make\na final recommendation. We observe that the trained agents' advice adheres to\ntheir intended characteristics, they learn the value of compound growth, and,\nwithout any explicit reference, the notion of risk as well as improved policy\nconvergence.", | |
| "authors": "Charl Maree, Christian Omlin", | |
| "published": "2022-02-18", | |
| "updated": "2022-06-29", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1506.00685v1", | |
| "title": "Model-based reinforcement learning for infinite-horizon approximate optimal tracking", | |
| "abstract": "This paper provides an approximate online adaptive solution to the\ninfinite-horizon optimal tracking problem for control-affine continuous-time\nnonlinear systems with unknown drift dynamics. Model-based reinforcement\nlearning is used to relax the persistence of excitation condition. Model-based\nreinforcement learning is implemented using a concurrent learning-based system\nidentifier to simulate experience by evaluating the Bellman error over\nunexplored areas of the state space. Tracking of the desired trajectory and\nconvergence of the developed policy to a neighborhood of the optimal policy are\nestablished via Lyapunov-based stability analysis. Simulation results\ndemonstrate the effectiveness of the developed technique.", | |
| "authors": "Rushikesh Kamalapurkar, Lindsey Andrews, Patrick Walters, Warren E. Dixon", | |
| "published": "2015-06-01", | |
| "updated": "2015-06-01", | |
| "primary_cat": "cs.SY", | |
| "cats": [ | |
| "cs.SY", | |
| "math.OC" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1912.03918v1", | |
| "title": "Transformer Based Reinforcement Learning For Games", | |
| "abstract": "Recent times have witnessed sharp improvements in reinforcement learning\ntasks using deep reinforcement learning techniques like Deep Q Networks, Policy\nGradients, Actor Critic methods which are based on deep learning based models\nand back-propagation of gradients to train such models. An active area of\nresearch in reinforcement learning is about training agents to play complex\nvideo games, which so far has been something accomplished only by human\nintelligence. Some state of the art performances in video game playing using\ndeep reinforcement learning are obtained by processing the sequence of frames\nfrom video games, passing them through a convolutional network to obtain\nfeatures and then using recurrent neural networks to figure out the action\nleading to optimal rewards. The recurrent neural network will learn to extract\nthe meaningful signal out of the sequence of such features. In this work, we\npropose a method utilizing a transformer network which have recently replaced\nRNNs in Natural Language Processing (NLP), and perform experiments to compare\nwith existing methods.", | |
| "authors": "Uddeshya Upadhyay, Nikunj Shah, Sucheta Ravikanti, Mayanka Medhe", | |
| "published": "2019-12-09", | |
| "updated": "2019-12-09", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.NE" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1206.3281v1", | |
| "title": "Model-Based Bayesian Reinforcement Learning in Large Structured Domains", | |
| "abstract": "Model-based Bayesian reinforcement learning has generated significant\ninterest in the AI community as it provides an elegant solution to the optimal\nexploration-exploitation tradeoff in classical reinforcement learning.\nUnfortunately, the applicability of this type of approach has been limited to\nsmall domains due to the high complexity of reasoning about the joint posterior\nover model parameters. In this paper, we consider the use of factored\nrepresentations combined with online planning techniques, to improve\nscalability of these methods. The main contribution of this paper is a Bayesian\nframework for learning the structure and parameters of a dynamical system,\nwhile also simultaneously planning a (near-)optimal sequence of actions.", | |
| "authors": "Stephane Ross, Joelle Pineau", | |
| "published": "2012-06-13", | |
| "updated": "2012-06-13", | |
| "primary_cat": "cs.AI", | |
| "cats": [ | |
| "cs.AI" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2005.05440v1", | |
| "title": "Delay-Aware Model-Based Reinforcement Learning for Continuous Control", | |
| "abstract": "Action delays degrade the performance of reinforcement learning in many\nreal-world systems. This paper proposes a formal definition of delay-aware\nMarkov Decision Process and proves it can be transformed into standard MDP with\naugmented states using the Markov reward process. We develop a delay-aware\nmodel-based reinforcement learning framework that can incorporate the\nmulti-step delay into the learned system models without learning effort.\nExperiments with the Gym and MuJoCo platforms show that the proposed\ndelay-aware model-based algorithm is more efficient in training and\ntransferable between systems with various durations of delay compared with\noff-policy model-free reinforcement learning methods. Codes available at:\nhttps://github.com/baimingc/dambrl.", | |
| "authors": "Baiming Chen, Mengdi Xu, Liang Li, Ding Zhao", | |
| "published": "2020-05-11", | |
| "updated": "2020-05-11", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI", | |
| "stat.ML" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2307.16348v2", | |
| "title": "Rating-based Reinforcement Learning", | |
| "abstract": "This paper develops a novel rating-based reinforcement learning approach that\nuses human ratings to obtain human guidance in reinforcement learning.\nDifferent from the existing preference-based and ranking-based reinforcement\nlearning paradigms, based on human relative preferences over sample pairs, the\nproposed rating-based reinforcement learning approach is based on human\nevaluation of individual trajectories without relative comparisons between\nsample pairs. The rating-based reinforcement learning approach builds on a new\nprediction model for human ratings and a novel multi-class loss function. We\nconduct several experimental studies based on synthetic ratings and real human\nratings to evaluate the effectiveness and benefits of the new rating-based\nreinforcement learning approach.", | |
| "authors": "Devin White, Mingkang Wu, Ellen Novoseller, Vernon J. Lawhern, Nicholas Waytowich, Yongcan Cao", | |
| "published": "2023-07-30", | |
| "updated": "2024-01-29", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI", | |
| "cs.RO" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2009.14365v1", | |
| "title": "Toolpath design for additive manufacturing using deep reinforcement learning", | |
| "abstract": "Toolpath optimization of metal-based additive manufacturing processes is\ncurrently hampered by the high-dimensionality of its design space. In this\nwork, a reinforcement learning platform is proposed that dynamically learns\ntoolpath strategies to build an arbitrary part. To this end, three prominent\nmodel-free reinforcement learning formulations are investigated to design\nadditive manufacturing toolpaths and demonstrated for two cases of dense and\nsparse reward structures. The results indicate that this learning-based\ntoolpath design approach achieves high scores, especially when a dense reward\nstructure is present.", | |
| "authors": "Mojtaba Mozaffar, Ablodghani Ebrahimi, Jian Cao", | |
| "published": "2020-09-30", | |
| "updated": "2020-09-30", | |
| "primary_cat": "cs.AI", | |
| "cats": [ | |
| "cs.AI" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2008.09450v1", | |
| "title": "Adversarial Imitation Learning via Random Search", | |
| "abstract": "Developing agents that can perform challenging complex tasks is the goal of\nreinforcement learning. The model-free reinforcement learning has been\nconsidered as a feasible solution. However, the state of the art research has\nbeen to develop increasingly complicated techniques. This increasing complexity\nmakes the reconstruction difficult. Furthermore, the problem of reward\ndependency is still exists. As a result, research on imitation learning, which\nlearns policy from a demonstration of experts, has begun to attract attention.\nImitation learning directly learns policy based on data on the behavior of the\nexperts without the explicit reward signal provided by the environment.\nHowever, imitation learning tries to optimize policies based on deep\nreinforcement learning such as trust region policy optimization. As a result,\ndeep reinforcement learning based imitation learning also poses a crisis of\nreproducibility. The issue of complex model-free model has received\nconsiderable critical attention. A derivative-free optimization based\nreinforcement learning and the simplification on policies obtain competitive\nperformance on the dynamic complex tasks. The simplified policies and\nderivative free methods make algorithm be simple. The reconfiguration of\nresearch demo becomes easy. In this paper, we propose an imitation learning\nmethod that takes advantage of the derivative-free optimization with simple\nlinear policies. The proposed method performs simple random search in the\nparameter space of policies and shows computational efficiency. Experiments in\nthis paper show that the proposed model, without a direct reward signal from\nthe environment, obtains competitive performance on the MuJoCo locomotion\ntasks.", | |
| "authors": "MyungJae Shin, Joongheon Kim", | |
| "published": "2020-08-21", | |
| "updated": "2020-08-21", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "stat.ML" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1507.06923v1", | |
| "title": "A Reinforcement Learning Approach to Online Learning of Decision Trees", | |
| "abstract": "Online decision tree learning algorithms typically examine all features of a\nnew data point to update model parameters. We propose a novel alternative,\nReinforcement Learning- based Decision Trees (RLDT), that uses Reinforcement\nLearning (RL) to actively examine a minimal number of features of a data point\nto classify it with high accuracy. Furthermore, RLDT optimizes a long term\nreturn, providing a better alternative to the traditional myopic greedy\napproach to growing decision trees. We demonstrate that this approach performs\nas well as batch learning algorithms and other online decision tree learning\nalgorithms, while making significantly fewer queries about the features of the\ndata points. We also show that RLDT can effectively handle concept drift.", | |
| "authors": "Abhinav Garlapati, Aditi Raghunathan, Vaishnavh Nagarajan, Balaraman Ravindran", | |
| "published": "2015-07-24", | |
| "updated": "2015-07-24", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2109.12516v2", | |
| "title": "Prioritized Experience-based Reinforcement Learning with Human Guidance for Autonomous Driving", | |
| "abstract": "Reinforcement learning (RL) requires skillful definition and remarkable\ncomputational efforts to solve optimization and control problems, which could\nimpair its prospect. Introducing human guidance into reinforcement learning is\na promising way to improve learning performance. In this paper, a comprehensive\nhuman guidance-based reinforcement learning framework is established. A novel\nprioritized experience replay mechanism that adapts to human guidance in the\nreinforcement learning process is proposed to boost the efficiency and\nperformance of the reinforcement learning algorithm. To relieve the heavy\nworkload on human participants, a behavior model is established based on an\nincremental online learning method to mimic human actions. We design two\nchallenging autonomous driving tasks for evaluating the proposed algorithm.\nExperiments are conducted to access the training and testing performance and\nlearning mechanism of the proposed algorithm. Comparative results against the\nstate-of-the-art methods suggest the advantages of our algorithm in terms of\nlearning efficiency, performance, and robustness.", | |
| "authors": "Jingda Wu, Zhiyu Huang, Wenhui Huang, Chen Lv", | |
| "published": "2021-09-26", | |
| "updated": "2022-11-29", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.RO" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1810.03198v1", | |
| "title": "Reinforcement Evolutionary Learning Method for self-learning", | |
| "abstract": "In statistical modelling the biggest threat is concept drift which makes the\nmodel gradually showing deteriorating performance over time. There are state of\nthe art methodologies to detect the impact of concept drift, however general\nstrategy considered to overcome the issue in performance is to rebuild or\nre-calibrate the model periodically as the variable patterns for the model\nchanges significantly due to market change or consumer behavior change etc.\nQuantitative research is the most widely spread application of data science in\nMarketing or financial domain where applicability of state of the art\nreinforcement learning for auto-learning is less explored paradigm.\nReinforcement learning is heavily dependent on having a simulated environment\nwhich is majorly available for gaming or online systems, to learn from the live\nfeedback. However, there are some research happened on the area of online\nadvertisement, pricing etc where due to the nature of the online learning\nenvironment scope of reinforcement learning is explored. Our proposed solution\nis a reinforcement learning based, true self-learning algorithm which can adapt\nto the data change or concept drift and auto learn and self-calibrate for the\nnew patterns of the data solving the problem of concept drift.\n Keywords - Reinforcement learning, Genetic Algorithm, Q-learning,\nClassification modelling, CMA-ES, NES, Multi objective optimization, Concept\ndrift, Population stability index, Incremental learning, F1-measure, Predictive\nModelling, Self-learning, MCTS, AlphaGo, AlphaZero", | |
| "authors": "Kumarjit Pathak, Jitin Kapila", | |
| "published": "2018-10-07", | |
| "updated": "2018-10-07", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.NE", | |
| "stat.ML" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1810.01112v1", | |
| "title": "The Dreaming Variational Autoencoder for Reinforcement Learning Environments", | |
| "abstract": "Reinforcement learning has shown great potential in generalizing over raw\nsensory data using only a single neural network for value optimization. There\nare several challenges in the current state-of-the-art reinforcement learning\nalgorithms that prevent them from converging towards the global optima. It is\nlikely that the solution to these problems lies in short- and long-term\nplanning, exploration and memory management for reinforcement learning\nalgorithms. Games are often used to benchmark reinforcement learning algorithms\nas they provide a flexible, reproducible, and easy to control environment.\nRegardless, few games feature a state-space where results in exploration,\nmemory, and planning are easily perceived. This paper presents The Dreaming\nVariational Autoencoder (DVAE), a neural network based generative modeling\narchitecture for exploration in environments with sparse feedback. We further\npresent Deep Maze, a novel and flexible maze engine that challenges DVAE in\npartial and fully-observable state-spaces, long-horizon tasks, and\ndeterministic and stochastic problems. We show initial findings and encourage\nfurther work in reinforcement learning driven by generative exploration.", | |
| "authors": "Per-Arne Andersen, Morten Goodwin, Ole-Christoffer Granmo", | |
| "published": "2018-10-02", | |
| "updated": "2018-10-02", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1611.00862v1", | |
| "title": "Quantile Reinforcement Learning", | |
| "abstract": "In reinforcement learning, the standard criterion to evaluate policies in a\nstate is the expectation of (discounted) sum of rewards. However, this\ncriterion may not always be suitable, we consider an alternative criterion\nbased on the notion of quantiles. In the case of episodic reinforcement\nlearning problems, we propose an algorithm based on stochastic approximation\nwith two timescales. We evaluate our proposition on a simple model of the TV\nshow, Who wants to be a millionaire.", | |
| "authors": "Hugo Gilbert, Paul Weng", | |
| "published": "2016-11-03", | |
| "updated": "2016-11-03", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2301.11520v3", | |
| "title": "SNeRL: Semantic-aware Neural Radiance Fields for Reinforcement Learning", | |
| "abstract": "As previous representations for reinforcement learning cannot effectively\nincorporate a human-intuitive understanding of the 3D environment, they usually\nsuffer from sub-optimal performances. In this paper, we present Semantic-aware\nNeural Radiance Fields for Reinforcement Learning (SNeRL), which jointly\noptimizes semantic-aware neural radiance fields (NeRF) with a convolutional\nencoder to learn 3D-aware neural implicit representation from multi-view\nimages. We introduce 3D semantic and distilled feature fields in parallel to\nthe RGB radiance fields in NeRF to learn semantic and object-centric\nrepresentation for reinforcement learning. SNeRL outperforms not only previous\npixel-based representations but also recent 3D-aware representations both in\nmodel-free and model-based reinforcement learning.", | |
| "authors": "Dongseok Shim, Seungjae Lee, H. Jin Kim", | |
| "published": "2023-01-27", | |
| "updated": "2023-05-31", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI", | |
| "cs.CV", | |
| "cs.RO" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2102.03022v1", | |
| "title": "Deceptive Reinforcement Learning for Privacy-Preserving Planning", | |
| "abstract": "In this paper, we study the problem of deceptive reinforcement learning to\npreserve the privacy of a reward function. Reinforcement learning is the\nproblem of finding a behaviour policy based on rewards received from\nexploratory behaviour. A key ingredient in reinforcement learning is a reward\nfunction, which determines how much reward (negative or positive) is given and\nwhen. However, in some situations, we may want to keep a reward function\nprivate; that is, to make it difficult for an observer to determine the reward\nfunction used. We define the problem of privacy-preserving reinforcement\nlearning, and present two models for solving it. These models are based on\ndissimulation -- a form of deception that `hides the truth'. We evaluate our\nmodels both computationally and via human behavioural experiments. Results show\nthat the resulting policies are indeed deceptive, and that participants can\ndetermine the true reward function less reliably than that of an honest agent.", | |
| "authors": "Zhengshang Liu, Yue Yang, Tim Miller, Peta Masters", | |
| "published": "2021-02-05", | |
| "updated": "2021-02-05", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI", | |
| "cs.MA" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2206.01474v1", | |
| "title": "Offline Reinforcement Learning with Causal Structured World Models", | |
| "abstract": "Model-based methods have recently shown promising for offline reinforcement\nlearning (RL), aiming to learn good policies from historical data without\ninteracting with the environment. Previous model-based offline RL methods learn\nfully connected nets as world-models that map the states and actions to the\nnext-step states. However, it is sensible that a world-model should adhere to\nthe underlying causal effect such that it will support learning an effective\npolicy generalizing well in unseen states. In this paper, We first provide\ntheoretical results that causal world-models can outperform plain world-models\nfor offline RL by incorporating the causal structure into the generalization\nerror bound. We then propose a practical algorithm, oFfline mOdel-based\nreinforcement learning with CaUsal Structure (FOCUS), to illustrate the\nfeasibility of learning and leveraging causal structure in offline RL.\nExperimental results on two benchmarks show that FOCUS reconstructs the\nunderlying causal structure accurately and robustly. Consequently, it performs\nbetter than the plain model-based offline RL algorithms and other causal\nmodel-based RL algorithms.", | |
| "authors": "Zheng-Mao Zhu, Xiong-Hui Chen, Hong-Long Tian, Kun Zhang, Yang Yu", | |
| "published": "2022-06-03", | |
| "updated": "2022-06-03", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "stat.ML" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2010.13529v2", | |
| "title": "Lyapunov-Based Reinforcement Learning State Estimator", | |
| "abstract": "In this paper, we consider the state estimation problem for nonlinear\nstochastic discrete-time systems. We combine Lyapunov's method in control\ntheory and deep reinforcement learning to design the state estimator. We\ntheoretically prove the convergence of the bounded estimate error solely using\nthe data simulated from the model. An actor-critic reinforcement learning\nalgorithm is proposed to learn the state estimator approximated by a deep\nneural network. The convergence of the algorithm is analysed. The proposed\nLyapunov-based reinforcement learning state estimator is compared with a number\nof existing nonlinear filtering methods through Monte Carlo simulations,\nshowing its advantage in terms of estimate convergence even under some system\nuncertainties such as covariance shift in system noise and randomly missing\nmeasurements. To the best of our knowledge, this is the first reinforcement\nlearning based nonlinear state estimator with bounded estimate error\nperformance guarantee.", | |
| "authors": "Liang Hu, Chengwei Wu, Wei Pan", | |
| "published": "2020-10-26", | |
| "updated": "2021-01-07", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.RO", | |
| "cs.SY", | |
| "eess.SY" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2010.11738v1", | |
| "title": "Optimising Stochastic Routing for Taxi Fleets with Model Enhanced Reinforcement Learning", | |
| "abstract": "The future of mobility-as-a-Service (Maas)should embrace an integrated system\nof ride-hailing, street-hailing and ride-sharing with optimised intelligent\nvehicle routing in response to a real-time, stochastic demand pattern. We aim\nto optimise routing policies for a large fleet of vehicles for street-hailing\nservices, given a stochastic demand pattern in small to medium-sized road\nnetworks. A model-based dispatch algorithm, a high performance model-free\nreinforcement learning based algorithm and a novel hybrid algorithm combining\nthe benefits of both the top-down approach and the model-free reinforcement\nlearning have been proposed to route the \\emph{vacant} vehicles. We design our\nreinforcement learning based routing algorithm using proximal policy\noptimisation and combined intrinsic and extrinsic rewards to strike a balance\nbetween exploration and exploitation. Using a large-scale agent-based\nmicroscopic simulation platform to evaluate our proposed algorithms, our\nmodel-free reinforcement learning and hybrid algorithm show excellent\nperformance on both artificial road network and community-based Singapore road\nnetwork with empirical demands, and our hybrid algorithm can significantly\naccelerate the model-free learner in the process of learning.", | |
| "authors": "Shen Ren, Qianxiao Li, Liye Zhang, Zheng Qin, Bo Yang", | |
| "published": "2020-10-22", | |
| "updated": "2020-10-22", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "nlin.AO", | |
| "physics.soc-ph" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1709.09346v2", | |
| "title": "Cold-Start Reinforcement Learning with Softmax Policy Gradient", | |
| "abstract": "Policy-gradient approaches to reinforcement learning have two common and\nundesirable overhead procedures, namely warm-start training and sample variance\nreduction. In this paper, we describe a reinforcement learning method based on\na softmax value function that requires neither of these procedures. Our method\ncombines the advantages of policy-gradient methods with the efficiency and\nsimplicity of maximum-likelihood approaches. We apply this new cold-start\nreinforcement learning method in training sequence generation models for\nstructured output prediction problems. Empirical evidence validates this method\non automatic summarization and image captioning tasks.", | |
| "authors": "Nan Ding, Radu Soricut", | |
| "published": "2017-09-27", | |
| "updated": "2017-10-13", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1806.01265v2", | |
| "title": "Equivalence Between Wasserstein and Value-Aware Loss for Model-based Reinforcement Learning", | |
| "abstract": "Learning a generative model is a key component of model-based reinforcement\nlearning. Though learning a good model in the tabular setting is a simple task,\nlearning a useful model in the approximate setting is challenging. In this\ncontext, an important question is the loss function used for model learning as\nvarying the loss function can have a remarkable impact on effectiveness of\nplanning. Recently Farahmand et al. (2017) proposed a value-aware model\nlearning (VAML) objective that captures the structure of value function during\nmodel learning. Using tools from Asadi et al. (2018), we show that minimizing\nthe VAML objective is in fact equivalent to minimizing the Wasserstein metric.\nThis equivalence improves our understanding of value-aware models, and also\ncreates a theoretical foundation for applications of Wasserstein in model-based\nreinforcement~learning.", | |
| "authors": "Kavosh Asadi, Evan Cater, Dipendra Misra, Michael L. Littman", | |
| "published": "2018-06-01", | |
| "updated": "2018-07-08", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI", | |
| "stat.ML" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2008.13044v1", | |
| "title": "Reinforcement Learning with Feedback-modulated TD-STDP", | |
| "abstract": "Spiking neuron networks have been used successfully to solve simple\nreinforcement learning tasks with continuous action set applying learning rules\nbased on spike-timing-dependent plasticity (STDP). However, most of these\nmodels cannot be applied to reinforcement learning tasks with discrete action\nset since they assume that the selected action is a deterministic function of\nfiring rate of neurons, which is continuous. In this paper, we propose a new\nSTDP-based learning rule for spiking neuron networks which contains feedback\nmodulation. We show that the STDP-based learning rule can be used to solve\nreinforcement learning tasks with discrete action set at a speed similar to\nstandard reinforcement learning algorithms when applied to the CartPole and\nLunarLander tasks. Moreover, we demonstrate that the agent is unable to solve\nthese tasks if feedback modulation is omitted from the learning rule. We\nconclude that feedback modulation allows better credit assignment when only the\nunits contributing to the executed action and TD error participate in learning.", | |
| "authors": "Stephen Chung, Robert Kozma", | |
| "published": "2020-08-29", | |
| "updated": "2020-08-29", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI", | |
| "stat.ML", | |
| "I.2.8" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1906.12189v1", | |
| "title": "Learning-based Model Predictive Control for Safe Exploration and Reinforcement Learning", | |
| "abstract": "Reinforcement learning has been successfully used to solve difficult tasks in\ncomplex unknown environments. However, these methods typically do not provide\nany safety guarantees during the learning process. This is particularly\nproblematic, since reinforcement learning agent actively explore their\nenvironment. This prevents their use in safety-critical, real-world\napplications. In this paper, we present a learning-based model predictive\ncontrol scheme that provides high-probability safety guarantees throughout the\nlearning process. Based on a reliable statistical model, we construct provably\naccurate confidence intervals on predicted trajectories. Unlike previous\napproaches, we allow for input-dependent uncertainties. Based on these reliable\npredictions, we guarantee that trajectories satisfy safety constraints.\nMoreover, we use a terminal set constraint to recursively guarantee the\nexistence of safe control actions at every iteration. We evaluate the resulting\nalgorithm to safely explore the dynamics of an inverted pendulum and to solve a\nreinforcement learning task on a cart-pole system with safety constraints.", | |
| "authors": "Torsten Koller, Felix Berkenkamp, Matteo Turchetta, Joschka Boedecker, Andreas Krause", | |
| "published": "2019-06-27", | |
| "updated": "2019-06-27", | |
| "primary_cat": "eess.SY", | |
| "cats": [ | |
| "eess.SY", | |
| "cs.AI", | |
| "cs.LG", | |
| "cs.SY" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1804.07193v3", | |
| "title": "Lipschitz Continuity in Model-based Reinforcement Learning", | |
| "abstract": "We examine the impact of learning Lipschitz continuous models in the context\nof model-based reinforcement learning. We provide a novel bound on multi-step\nprediction error of Lipschitz models where we quantify the error using the\nWasserstein metric. We go on to prove an error bound for the value-function\nestimate arising from Lipschitz models and show that the estimated value\nfunction is itself Lipschitz. We conclude with empirical results that show the\nbenefits of controlling the Lipschitz constant of neural-network models.", | |
| "authors": "Kavosh Asadi, Dipendra Misra, Michael L. Littman", | |
| "published": "2018-04-19", | |
| "updated": "2018-07-27", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI", | |
| "stat.ML" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2010.12142v1", | |
| "title": "Bridging Imagination and Reality for Model-Based Deep Reinforcement Learning", | |
| "abstract": "Sample efficiency has been one of the major challenges for deep reinforcement\nlearning. Recently, model-based reinforcement learning has been proposed to\naddress this challenge by performing planning on imaginary trajectories with a\nlearned world model. However, world model learning may suffer from overfitting\nto training trajectories, and thus model-based value estimation and policy\nsearch will be pone to be sucked in an inferior local policy. In this paper, we\npropose a novel model-based reinforcement learning algorithm, called BrIdging\nReality and Dream (BIRD). It maximizes the mutual information between imaginary\nand real trajectories so that the policy improvement learned from imaginary\ntrajectories can be easily generalized to real trajectories. We demonstrate\nthat our approach improves sample efficiency of model-based planning, and\nachieves state-of-the-art performance on challenging visual control benchmarks.", | |
| "authors": "Guangxiang Zhu, Minghao Zhang, Honglak Lee, Chongjie Zhang", | |
| "published": "2020-10-23", | |
| "updated": "2020-10-23", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1802.10592v2", | |
| "title": "Model-Ensemble Trust-Region Policy Optimization", | |
| "abstract": "Model-free reinforcement learning (RL) methods are succeeding in a growing\nnumber of tasks, aided by recent advances in deep learning. However, they tend\nto suffer from high sample complexity, which hinders their use in real-world\ndomains. Alternatively, model-based reinforcement learning promises to reduce\nsample complexity, but tends to require careful tuning and to date have\nsucceeded mainly in restrictive domains where simple models are sufficient for\nlearning. In this paper, we analyze the behavior of vanilla model-based\nreinforcement learning methods when deep neural networks are used to learn both\nthe model and the policy, and show that the learned policy tends to exploit\nregions where insufficient data is available for the model to be learned,\ncausing instability in training. To overcome this issue, we propose to use an\nensemble of models to maintain the model uncertainty and regularize the\nlearning process. We further show that the use of likelihood ratio derivatives\nyields much more stable learning than backpropagation through time. Altogether,\nour approach Model-Ensemble Trust-Region Policy Optimization (ME-TRPO)\nsignificantly reduces the sample complexity compared to model-free deep RL\nmethods on challenging continuous control benchmark tasks.", | |
| "authors": "Thanard Kurutach, Ignasi Clavera, Yan Duan, Aviv Tamar, Pieter Abbeel", | |
| "published": "2018-02-28", | |
| "updated": "2018-10-05", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI", | |
| "cs.RO" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1705.03562v1", | |
| "title": "Deep Episodic Value Iteration for Model-based Meta-Reinforcement Learning", | |
| "abstract": "We present a new deep meta reinforcement learner, which we call Deep Episodic\nValue Iteration (DEVI). DEVI uses a deep neural network to learn a similarity\nmetric for a non-parametric model-based reinforcement learning algorithm. Our\nmodel is trained end-to-end via back-propagation. Despite being trained using\nthe model-free Q-learning objective, we show that DEVI's model-based internal\nstructure provides `one-shot' transfer to changes in reward and transition\nstructure, even for tasks with very high-dimensional state spaces.", | |
| "authors": "Steven Stenberg Hansen", | |
| "published": "2017-05-09", | |
| "updated": "2017-05-09", | |
| "primary_cat": "stat.ML", | |
| "cats": [ | |
| "stat.ML", | |
| "cs.AI", | |
| "cs.LG" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1901.07905v2", | |
| "title": "Reinforcement Learning Ship Autopilot: Sample efficient and Model Predictive Control-based Approach", | |
| "abstract": "In this research we focus on developing a reinforcement learning system for a\nchallenging task: autonomous control of a real-sized boat, with difficulties\narising from large uncertainties in the challenging ocean environment and the\nextremely high cost of exploring and sampling with a real boat. To this end, we\nexplore a novel Gaussian processes (GP) based reinforcement learning approach\nthat combines sample-efficient model-based reinforcement learning and model\npredictive control (MPC). Our approach, sample-efficient probabilistic model\npredictive control (SPMPC), iteratively learns a Gaussian process dynamics\nmodel and uses it to efficiently update control signals within the MPC closed\ncontrol loop. A system using SPMPC is built to efficiently learn an autopilot\ntask. After investigating its performance in a simulation modeled upon real\nboat driving data, the proposed system successfully learns to drive a\nreal-sized boat equipped with a single engine and sensors measuring GPS, speed,\ndirection, and wind in an autopilot task without human demonstration.", | |
| "authors": "Yunduan Cui, Shigeki Osaki, Takamitsu Matsubara", | |
| "published": "2019-01-23", | |
| "updated": "2019-07-23", | |
| "primary_cat": "cs.SY", | |
| "cats": [ | |
| "cs.SY" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2212.08232v1", | |
| "title": "Offline Robot Reinforcement Learning with Uncertainty-Guided Human Expert Sampling", | |
| "abstract": "Recent advances in batch (offline) reinforcement learning have shown\npromising results in learning from available offline data and proved offline\nreinforcement learning to be an essential toolkit in learning control policies\nin a model-free setting. An offline reinforcement learning algorithm applied to\na dataset collected by a suboptimal non-learning-based algorithm can result in\na policy that outperforms the behavior agent used to collect the data. Such a\nscenario is frequent in robotics, where existing automation is collecting\noperational data. Although offline learning techniques can learn from data\ngenerated by a sub-optimal behavior agent, there is still an opportunity to\nimprove the sample complexity of existing offline reinforcement learning\nalgorithms by strategically introducing human demonstration data into the\ntraining process. To this end, we propose a novel approach that uses\nuncertainty estimation to trigger the injection of human demonstration data and\nguide policy training towards optimal behavior while reducing overall sample\ncomplexity. Our experiments show that this approach is more sample efficient\nwhen compared to a naive way of combining expert data with data collected from\na sub-optimal agent. We augmented an existing offline reinforcement learning\nalgorithm Conservative Q-Learning with our approach and performed experiments\non data collected from MuJoCo and OffWorld Gym learning environments.", | |
| "authors": "Ashish Kumar, Ilya Kuzovkin", | |
| "published": "2022-12-16", | |
| "updated": "2022-12-16", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.RO" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2209.05530v1", | |
| "title": "Model-based Reinforcement Learning with Multi-step Plan Value Estimation", | |
| "abstract": "A promising way to improve the sample efficiency of reinforcement learning is\nmodel-based methods, in which many explorations and evaluations can happen in\nthe learned models to save real-world samples. However, when the learned model\nhas a non-negligible model error, sequential steps in the model are hard to be\naccurately evaluated, limiting the model's utilization. This paper proposes to\nalleviate this issue by introducing multi-step plans to replace multi-step\nactions for model-based RL. We employ the multi-step plan value estimation,\nwhich evaluates the expected discounted return after executing a sequence of\naction plans at a given state, and updates the policy by directly computing the\nmulti-step policy gradient via plan value estimation. The new model-based\nreinforcement learning algorithm MPPVE (Model-based Planning Policy Learning\nwith Multi-step Plan Value Estimation) shows a better utilization of the\nlearned model and achieves a better sample efficiency than state-of-the-art\nmodel-based RL approaches.", | |
| "authors": "Haoxin Lin, Yihao Sun, Jiaji Zhang, Yang Yu", | |
| "published": "2022-09-12", | |
| "updated": "2022-09-12", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1406.1853v2", | |
| "title": "Model-based Reinforcement Learning and the Eluder Dimension", | |
| "abstract": "We consider the problem of learning to optimize an unknown Markov decision\nprocess (MDP). We show that, if the MDP can be parameterized within some known\nfunction class, we can obtain regret bounds that scale with the dimensionality,\nrather than cardinality, of the system. We characterize this dependence\nexplicitly as $\\tilde{O}(\\sqrt{d_K d_E T})$ where $T$ is time elapsed, $d_K$ is\nthe Kolmogorov dimension and $d_E$ is the \\emph{eluder dimension}. These\nrepresent the first unified regret bounds for model-based reinforcement\nlearning and provide state of the art guarantees in several important settings.\nMoreover, we present a simple and computationally efficient algorithm\n\\emph{posterior sampling for reinforcement learning} (PSRL) that satisfies\nthese bounds.", | |
| "authors": "Ian Osband, Benjamin Van Roy", | |
| "published": "2014-06-07", | |
| "updated": "2014-10-31", | |
| "primary_cat": "stat.ML", | |
| "cats": [ | |
| "stat.ML", | |
| "cs.LG" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1903.08543v6", | |
| "title": "Optimizing thermodynamic trajectories using evolutionary and gradient-based reinforcement learning", | |
| "abstract": "Using a model heat engine, we show that neural network-based reinforcement\nlearning can identify thermodynamic trajectories of maximal efficiency. We\nconsider both gradient and gradient-free reinforcement learning. We use an\nevolutionary learning algorithm to evolve a population of neural networks,\nsubject to a directive to maximize the efficiency of a trajectory composed of a\nset of elementary thermodynamic processes; the resulting networks learn to\ncarry out the maximally-efficient Carnot, Stirling, or Otto cycles. When given\nan additional irreversible process, this evolutionary scheme learns a\npreviously unknown thermodynamic cycle. Gradient-based reinforcement learning\nis able to learn the Stirling cycle, whereas an evolutionary approach achieves\nthe optimal Carnot cycle. Our results show how the reinforcement learning\nstrategies developed for game playing can be applied to solve physical problems\nconditioned upon path-extensive order parameters.", | |
| "authors": "Chris Beeler, Uladzimir Yahorau, Rory Coles, Kyle Mills, Stephen Whitelam, Isaac Tamblyn", | |
| "published": "2019-03-20", | |
| "updated": "2021-11-22", | |
| "primary_cat": "cs.NE", | |
| "cats": [ | |
| "cs.NE", | |
| "cond-mat.stat-mech", | |
| "cs.LG", | |
| "physics.comp-ph" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2301.03933v1", | |
| "title": "Hint assisted reinforcement learning: an application in radio astronomy", | |
| "abstract": "Model based reinforcement learning has proven to be more sample efficient\nthan model free methods. On the other hand, the construction of a dynamics\nmodel in model based reinforcement learning has increased complexity. Data\nprocessing tasks in radio astronomy are such situations where the original\nproblem which is being solved by reinforcement learning itself is the creation\nof a model. Fortunately, many methods based on heuristics or signal processing\ndo exist to perform the same tasks and we can leverage them to propose the best\naction to take, or in other words, to provide a `hint'. We propose to use\n`hints' generated by the environment as an aid to the reinforcement learning\nprocess mitigating the complexity of model construction. We modify the soft\nactor critic algorithm to use hints and use the alternating direction method of\nmultipliers algorithm with inequality constraints to train the agent. Results\nin several environments show that we get the increased sample efficiency by\nusing hints as compared to model free methods.", | |
| "authors": "Sarod Yatawatta", | |
| "published": "2023-01-10", | |
| "updated": "2023-01-10", | |
| "primary_cat": "astro-ph.IM", | |
| "cats": [ | |
| "astro-ph.IM", | |
| "cs.LG" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2106.03688v1", | |
| "title": "A Computational Model of Representation Learning in the Brain Cortex, Integrating Unsupervised and Reinforcement Learning", | |
| "abstract": "A common view on the brain learning processes proposes that the three classic\nlearning paradigms -- unsupervised, reinforcement, and supervised -- take place\nin respectively the cortex, the basal-ganglia, and the cerebellum. However,\ndopamine outbursts, usually assumed to encode reward, are not limited to the\nbasal ganglia but also reach prefrontal, motor, and higher sensory cortices. We\npropose that in the cortex the same reward-based trial-and-error processes\nmight support not only the acquisition of motor representations but also of\nsensory representations. In particular, reward signals might guide\ntrial-and-error processes that mix with associative learning processes to\nsupport the acquisition of representations better serving downstream action\nselection. We tested the soundness of this hypothesis with a computational\nmodel that integrates unsupervised learning (Contrastive Divergence) and\nreinforcement learning (REINFORCE). The model was tested with a task requiring\ndifferent responses to different visual images grouped in categories involving\neither colour, shape, or size. Results show that a balanced mix of unsupervised\nand reinforcement learning processes leads to the best performance. Indeed,\nexcessive unsupervised learning tends to under-represent task-relevant features\nwhile excessive reinforcement learning tends to initially learn slowly and then\nto incur in local minima. These results stimulate future empirical studies on\ncategory learning directed to investigate similar effects in the extrastriate\nvisual cortices. Moreover, they prompt further computational investigations\ndirected to study the possible advantages of integrating unsupervised and\nreinforcement learning processes.", | |
| "authors": "Giovanni Granato, Emilio Cartoni, Federico Da Rold, Andrea Mattera, Gianluca Baldassarre", | |
| "published": "2021-06-07", | |
| "updated": "2021-06-07", | |
| "primary_cat": "q-bio.NC", | |
| "cats": [ | |
| "q-bio.NC", | |
| "cs.LG" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2007.12666v5", | |
| "title": "Safe Model-Based Reinforcement Learning for Systems with Parametric Uncertainties", | |
| "abstract": "Reinforcement learning has been established over the past decade as an\neffective tool to find optimal control policies for dynamical systems, with\nrecent focus on approaches that guarantee safety during the learning and/or\nexecution phases. In general, safety guarantees are critical in reinforcement\nlearning when the system is safety-critical and/or task restarts are not\npractically feasible. In optimal control theory, safety requirements are often\nexpressed in terms of state and/or control constraints. In recent years,\nreinforcement learning approaches that rely on persistent excitation have been\ncombined with a barrier transformation to learn the optimal control policies\nunder state constraints. To soften the excitation requirements, model-based\nreinforcement learning methods that rely on exact model knowledge have also\nbeen integrated with the barrier transformation framework. The objective of\nthis paper is to develop safe reinforcement learning method for deterministic\nnonlinear systems, with parametric uncertainties in the model, to learn\napproximate constrained optimal policies without relying on stringent\nexcitation conditions. To that end, a model-based reinforcement learning\ntechnique that utilizes a novel filtered concurrent learning method, along with\na barrier transformation, is developed in this paper to realize simultaneous\nlearning of unknown model parameters and approximate optimal state-constrained\ncontrol policies for safety-critical systems.", | |
| "authors": "S M Nahid Mahmud, Scott A Nivison, Zachary I. Bell, Rushikesh Kamalapurkar", | |
| "published": "2020-07-24", | |
| "updated": "2021-10-05", | |
| "primary_cat": "eess.SY", | |
| "cats": [ | |
| "eess.SY", | |
| "cs.SY", | |
| "math.OC" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2206.02025v1", | |
| "title": "Between Rate-Distortion Theory & Value Equivalence in Model-Based Reinforcement Learning", | |
| "abstract": "The quintessential model-based reinforcement-learning agent iteratively\nrefines its estimates or prior beliefs about the true underlying model of the\nenvironment. Recent empirical successes in model-based reinforcement learning\nwith function approximation, however, eschew the true model in favor of a\nsurrogate that, while ignoring various facets of the environment, still\nfacilitates effective planning over behaviors. Recently formalized as the value\nequivalence principle, this algorithmic technique is perhaps unavoidable as\nreal-world reinforcement learning demands consideration of a simple,\ncomputationally-bounded agent interacting with an overwhelmingly complex\nenvironment. In this work, we entertain an extreme scenario wherein some\ncombination of immense environment complexity and limited agent capacity\nentirely precludes identifying an exactly value-equivalent model. In light of\nthis, we embrace a notion of approximate value equivalence and introduce an\nalgorithm for incrementally synthesizing simple and useful approximations of\nthe environment from which an agent might still recover near-optimal behavior.\nCrucially, we recognize the information-theoretic nature of this lossy\nenvironment compression problem and use the appropriate tools of\nrate-distortion theory to make mathematically precise how value equivalence can\nlend tractability to otherwise intractable sequential decision-making problems.", | |
| "authors": "Dilip Arumugam, Benjamin Van Roy", | |
| "published": "2022-06-04", | |
| "updated": "2022-06-04", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.IT", | |
| "math.IT" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1906.08312v1", | |
| "title": "Calibrated Model-Based Deep Reinforcement Learning", | |
| "abstract": "Estimates of predictive uncertainty are important for accurate model-based\nplanning and reinforcement learning. However, predictive\nuncertainties---especially ones derived from modern deep learning systems---can\nbe inaccurate and impose a bottleneck on performance. This paper explores which\nuncertainties are needed for model-based reinforcement learning and argues that\ngood uncertainties must be calibrated, i.e. their probabilities should match\nempirical frequencies of predicted events. We describe a simple way to augment\nany model-based reinforcement learning agent with a calibrated model and show\nthat doing so consistently improves planning, sample complexity, and\nexploration. On the \\textsc{HalfCheetah} MuJoCo task, our system achieves\nstate-of-the-art performance using 50\\% fewer samples than the current leading\napproach. Our findings suggest that calibration can improve the performance of\nmodel-based reinforcement learning with minimal computational and\nimplementation overhead.", | |
| "authors": "Ali Malik, Volodymyr Kuleshov, Jiaming Song, Danny Nemer, Harlan Seymour, Stefano Ermon", | |
| "published": "2019-06-19", | |
| "updated": "2019-06-19", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "stat.ML" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2303.09013v1", | |
| "title": "Self-Inspection Method of Unmanned Aerial Vehicles in Power Plants Using Deep Q-Network Reinforcement Learning", | |
| "abstract": "For the purpose of inspecting power plants, autonomous robots can be built\nusing reinforcement learning techniques. The method replicates the environment\nand employs a simple reinforcement learning (RL) algorithm. This strategy might\nbe applied in several sectors, including the electricity generation sector. A\npre-trained model with perception, planning, and action is suggested by the\nresearch. To address optimization problems, such as the Unmanned Aerial Vehicle\n(UAV) navigation problem, Deep Q-network (DQN), a reinforcement learning-based\nframework that Deepmind launched in 2015, incorporates both deep learning and\nQ-learning. To overcome problems with current procedures, the research proposes\na power plant inspection system incorporating UAV autonomous navigation and DQN\nreinforcement learning. These training processes set reward functions with\nreference to states and consider both internal and external effect factors,\nwhich distinguishes them from other reinforcement learning training techniques\nnow in use. The key components of the reinforcement learning segment of the\ntechnique, for instance, introduce states such as the simulation of a wind\nfield, the battery charge level of an unmanned aerial vehicle, the height the\nUAV reached, etc. The trained model makes it more likely that the inspection\nstrategy will be applied in practice by enabling the UAV to move around on its\nown in difficult environments. The average score of the model converges to\n9,000. The trained model allowed the UAV to make the fewest number of rotations\nnecessary to go to the target point.", | |
| "authors": "Haoran Guan", | |
| "published": "2023-03-16", | |
| "updated": "2023-03-16", | |
| "primary_cat": "cs.RO", | |
| "cats": [ | |
| "cs.RO", | |
| "cs.LG" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2301.10119v2", | |
| "title": "Minimal Value-Equivalent Partial Models for Scalable and Robust Planning in Lifelong Reinforcement Learning", | |
| "abstract": "Learning models of the environment from pure interaction is often considered\nan essential component of building lifelong reinforcement learning agents.\nHowever, the common practice in model-based reinforcement learning is to learn\nmodels that model every aspect of the agent's environment, regardless of\nwhether they are important in coming up with optimal decisions or not. In this\npaper, we argue that such models are not particularly well-suited for\nperforming scalable and robust planning in lifelong reinforcement learning\nscenarios and we propose new kinds of models that only model the relevant\naspects of the environment, which we call \"minimal value-equivalent partial\nmodels\". After providing a formal definition for these models, we provide\ntheoretical results demonstrating the scalability advantages of performing\nplanning with such models and then perform experiments to empirically\nillustrate our theoretical results. Then, we provide some useful heuristics on\nhow to learn these kinds of models with deep learning architectures and\nempirically demonstrate that models learned in such a way can allow for\nperforming planning that is robust to distribution shifts and compounding model\nerrors. Overall, both our theoretical and empirical results suggest that\nminimal value-equivalent partial models can provide significant benefits to\nperforming scalable and robust planning in lifelong reinforcement learning\nscenarios.", | |
| "authors": "Safa Alver, Doina Precup", | |
| "published": "2023-01-24", | |
| "updated": "2023-06-11", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2308.14897v1", | |
| "title": "Statistically Efficient Variance Reduction with Double Policy Estimation for Off-Policy Evaluation in Sequence-Modeled Reinforcement Learning", | |
| "abstract": "Offline reinforcement learning aims to utilize datasets of previously\ngathered environment-action interaction records to learn a policy without\naccess to the real environment. Recent work has shown that offline\nreinforcement learning can be formulated as a sequence modeling problem and\nsolved via supervised learning with approaches such as decision transformer.\nWhile these sequence-based methods achieve competitive results over\nreturn-to-go methods, especially on tasks that require longer episodes or with\nscarce rewards, importance sampling is not considered to correct the policy\nbias when dealing with off-policy data, mainly due to the absence of behavior\npolicy and the use of deterministic evaluation policies. To this end, we\npropose DPE: an RL algorithm that blends offline sequence modeling and offline\nreinforcement learning with Double Policy Estimation (DPE) in a unified\nframework with statistically proven properties on variance reduction. We\nvalidate our method in multiple tasks of OpenAI Gym with D4RL benchmarks. Our\nmethod brings a performance improvements on selected methods which outperforms\nSOTA baselines in several tasks, demonstrating the advantages of enabling\ndouble policy estimation for sequence-modeled reinforcement learning.", | |
| "authors": "Hanhan Zhou, Tian Lan, Vaneet Aggarwal", | |
| "published": "2023-08-28", | |
| "updated": "2023-08-28", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI", | |
| "cs.DC" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2008.00766v1", | |
| "title": "Tracking the Race Between Deep Reinforcement Learning and Imitation Learning -- Extended Version", | |
| "abstract": "Learning-based approaches for solving large sequential decision making\nproblems have become popular in recent years. The resulting agents perform\ndifferently and their characteristics depend on those of the underlying\nlearning approach. Here, we consider a benchmark planning problem from the\nreinforcement learning domain, the Racetrack, to investigate the properties of\nagents derived from different deep (reinforcement) learning approaches. We\ncompare the performance of deep supervised learning, in particular imitation\nlearning, to reinforcement learning for the Racetrack model. We find that\nimitation learning yields agents that follow more risky paths. In contrast, the\ndecisions of deep reinforcement learning are more foresighted, i.e., avoid\nstates in which fatal decisions are more likely. Our evaluations show that for\nthis sequential decision making problem, deep reinforcement learning performs\nbest in many aspects even though for imitation learning optimal decisions are\nconsidered.", | |
| "authors": "Timo P. Gros, Daniel H\u00f6ller, J\u00f6rg Hoffmann, Verena Wolf", | |
| "published": "2020-08-03", | |
| "updated": "2020-08-03", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2311.05546v2", | |
| "title": "Multi-Agent Quantum Reinforcement Learning using Evolutionary Optimization", | |
| "abstract": "Multi-Agent Reinforcement Learning is becoming increasingly more important in\ntimes of autonomous driving and other smart industrial applications.\nSimultaneously a promising new approach to Reinforcement Learning arises using\nthe inherent properties of quantum mechanics, reducing the trainable parameters\nof a model significantly. However, gradient-based Multi-Agent Quantum\nReinforcement Learning methods often have to struggle with barren plateaus,\nholding them back from matching the performance of classical approaches. We\nbuild upon an existing approach for gradient free Quantum Reinforcement\nLearning and propose three genetic variations with Variational Quantum Circuits\nfor Multi-Agent Reinforcement Learning using evolutionary optimization. We\nevaluate our genetic variations in the Coin Game environment and also compare\nthem to classical approaches. We showed that our Variational Quantum Circuit\napproaches perform significantly better compared to a neural network with a\nsimilar amount of trainable parameters. Compared to the larger neural network,\nour approaches archive similar results using $97.88\\%$ less parameters.", | |
| "authors": "Michael K\u00f6lle, Felix Topp, Thomy Phan, Philipp Altmann, Jonas N\u00fc\u00dflein, Claudia Linnhoff-Popien", | |
| "published": "2023-11-09", | |
| "updated": "2024-01-13", | |
| "primary_cat": "quant-ph", | |
| "cats": [ | |
| "quant-ph", | |
| "cs.AI", | |
| "cs.MA" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2306.07525v1", | |
| "title": "Using Collision Momentum in Deep Reinforcement Learning Based Adversarial Pedestrian Modeling", | |
| "abstract": "Recent research in pedestrian simulation often aims to develop realistic\nbehaviors in various situations, but it is challenging for existing algorithms\nto generate behaviors that identify weaknesses in automated vehicles'\nperformance in extreme and unlikely scenarios and edge cases. To address this,\nspecialized pedestrian behavior algorithms are needed. Current research focuses\non realistic trajectories using social force models and reinforcement learning\nbased models. However, we propose a reinforcement learning algorithm that\nspecifically targets collisions and better uncovers unique failure modes of\nautomated vehicle controllers. Our algorithm is efficient and generates more\nsevere collisions, allowing for the identification and correction of weaknesses\nin autonomous driving algorithms in complex and varied scenarios.", | |
| "authors": "Dianwei Chen, Ekim Yurtsever, Keith Redmill, Umit Ozguner", | |
| "published": "2023-06-13", | |
| "updated": "2023-06-13", | |
| "primary_cat": "cs.RO", | |
| "cats": [ | |
| "cs.RO", | |
| "cs.AI", | |
| "cs.LG" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1901.08162v1", | |
| "title": "Causal Reasoning from Meta-reinforcement Learning", | |
| "abstract": "Discovering and exploiting the causal structure in the environment is a\ncrucial challenge for intelligent agents. Here we explore whether causal\nreasoning can emerge via meta-reinforcement learning. We train a recurrent\nnetwork with model-free reinforcement learning to solve a range of problems\nthat each contain causal structure. We find that the trained agent can perform\ncausal reasoning in novel situations in order to obtain rewards. The agent can\nselect informative interventions, draw causal inferences from observational\ndata, and make counterfactual predictions. Although established formal causal\nreasoning algorithms also exist, in this paper we show that such reasoning can\narise from model-free reinforcement learning, and suggest that causal reasoning\nin complex settings may benefit from the more end-to-end learning-based\napproaches presented here. This work also offers new strategies for structured\nexploration in reinforcement learning, by providing agents with the ability to\nperform -- and interpret -- experiments.", | |
| "authors": "Ishita Dasgupta, Jane Wang, Silvia Chiappa, Jovana Mitrovic, Pedro Ortega, David Raposo, Edward Hughes, Peter Battaglia, Matthew Botvinick, Zeb Kurth-Nelson", | |
| "published": "2019-01-23", | |
| "updated": "2019-01-23", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI", | |
| "stat.ML" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1901.02219v1", | |
| "title": "Uncertainty-Based Out-of-Distribution Detection in Deep Reinforcement Learning", | |
| "abstract": "We consider the problem of detecting out-of-distribution (OOD) samples in\ndeep reinforcement learning. In a value based reinforcement learning setting,\nwe propose to use uncertainty estimation techniques directly on the agent's\nvalue estimating neural network to detect OOD samples. The focus of our work\nlies in analyzing the suitability of approximate Bayesian inference methods and\nrelated ensembling techniques that generate uncertainty estimates. Although\nprior work has shown that dropout-based variational inference techniques and\nbootstrap-based approaches can be used to model epistemic uncertainty, the\nsuitability for detecting OOD samples in deep reinforcement learning remains an\nopen question. Our results show that uncertainty estimation can be used to\ndifferentiate in- from out-of-distribution samples. Over the complete training\nprocess of the reinforcement learning agents, bootstrap-based approaches tend\nto produce more reliable epistemic uncertainty estimates, when compared to\ndropout-based approaches.", | |
| "authors": "Andreas Sedlmeier, Thomas Gabor, Thomy Phan, Lenz Belzner, Claudia Linnhoff-Popien", | |
| "published": "2019-01-08", | |
| "updated": "2019-01-08", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI", | |
| "stat.ML" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2211.10688v2", | |
| "title": "ReInform: Selecting paths with reinforcement learning for contextualized link prediction", | |
| "abstract": "We propose to use reinforcement learning to inform transformer-based\ncontextualized link prediction models by providing paths that are most useful\nfor predicting the correct answer. This is in contrast to previous approaches,\nthat either used reinforcement learning (RL) to directly search for the answer,\nor based their prediction on limited or randomly selected context. Our\nexperiments on WN18RR and FB15k-237 show that contextualized link prediction\nmodels consistently outperform RL-based answer search, and that additional\nimprovements (of up to 13.5% MRR) can be gained by combining RL with a link\nprediction model. The PyTorch implementation of the RL agent is available at\nhttps://github.com/marina-sp/reinform", | |
| "authors": "Marina Speranskaya, Sameh Methias, Benjamin Roth", | |
| "published": "2022-11-19", | |
| "updated": "2023-01-23", | |
| "primary_cat": "cs.CL", | |
| "cats": [ | |
| "cs.CL", | |
| "cs.AI" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2311.07315v1", | |
| "title": "An introduction to reinforcement learning for neuroscience", | |
| "abstract": "Reinforcement learning has a rich history in neuroscience, from early work on\ndopamine as a reward prediction error signal for temporal difference learning\n(Schultz et al., 1997) to recent work suggesting that dopamine could implement\na form of 'distributional reinforcement learning' popularized in deep learning\n(Dabney et al., 2020). Throughout this literature, there has been a tight link\nbetween theoretical advances in reinforcement learning and neuroscientific\nexperiments and findings. As a result, the theories describing our experimental\ndata have become increasingly complex and difficult to navigate. In this\nreview, we cover the basic theory underlying classical work in reinforcement\nlearning and build up to an introductory overview of methods used in modern\ndeep reinforcement learning that have found applications in systems\nneuroscience. We start with an overview of the reinforcement learning problem\nand classical temporal difference algorithms, followed by a discussion of\n'model-free' and 'model-based' reinforcement learning together with methods\nsuch as DYNA and successor representations that fall in between these two\ncategories. Throughout these sections, we highlight the close parallels between\nthe machine learning methods and related work in both experimental and\ntheoretical neuroscience. We then provide an introduction to deep reinforcement\nlearning with examples of how these methods have been used to model different\nlearning phenomena in the systems neuroscience literature, such as\nmeta-reinforcement learning (Wang et al., 2018) and distributional\nreinforcement learning (Dabney et al., 2020). Code that implements the methods\ndiscussed in this work and generates the figures is also provided.", | |
| "authors": "Kristopher T. Jensen", | |
| "published": "2023-11-13", | |
| "updated": "2023-11-13", | |
| "primary_cat": "q-bio.NC", | |
| "cats": [ | |
| "q-bio.NC", | |
| "cs.LG" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1901.11437v3", | |
| "title": "Successor Features Combine Elements of Model-Free and Model-based Reinforcement Learning", | |
| "abstract": "A key question in reinforcement learning is how an intelligent agent can\ngeneralize knowledge across different inputs. By generalizing across different\ninputs, information learned for one input can be immediately reused for\nimproving predictions for another input. Reusing information allows an agent to\ncompute an optimal decision-making strategy using less data. State\nrepresentation is a key element of the generalization process, compressing a\nhigh-dimensional input space into a low-dimensional latent state space. This\narticle analyzes properties of different latent state spaces, leading to new\nconnections between model-based and model-free reinforcement learning.\nSuccessor features, which predict frequencies of future observations, form a\nlink between model-based and model-free learning: Learning to predict future\nexpected reward outcomes, a key characteristic of model-based agents, is\nequivalent to learning successor features. Learning successor features is a\nform of temporal difference learning and is equivalent to learning to predict a\nsingle policy's utility, which is a characteristic of model-free agents.\nDrawing on the connection between model-based reinforcement learning and\nsuccessor features, we demonstrate that representations that are predictive of\nfuture reward outcomes generalize across variations in both transitions and\nrewards. This result extends previous work on successor features, which is\nconstrained to fixed transitions and assumes re-learning of the transferred\nstate representation.", | |
| "authors": "Lucas Lehnert, Michael L. Littman", | |
| "published": "2019-01-31", | |
| "updated": "2020-10-04", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2109.01659v1", | |
| "title": "Reinforcement Learning for Battery Energy Storage Dispatch augmented with Model-based Optimizer", | |
| "abstract": "Reinforcement learning has been found useful in solving optimal power flow\n(OPF) problems in electric power distribution systems. However, the use of\nlargely model-free reinforcement learning algorithms that completely ignore the\nphysics-based modeling of the power grid compromises the optimizer performance\nand poses scalability challenges. This paper proposes a novel approach to\nsynergistically combine the physics-based models with learning-based algorithms\nusing imitation learning to solve distribution-level OPF problems.\nSpecifically, we propose imitation learning based improvements in deep\nreinforcement learning (DRL) methods to solve the OPF problem for a specific\ncase of battery storage dispatch in the power distribution systems. The\nproposed imitation learning algorithm uses the approximate optimal solutions\nobtained from a linearized model-based OPF solver to provide a good initial\npolicy for the DRL algorithms while improving the training efficiency. The\neffectiveness of the proposed approach is demonstrated using IEEE 34-bus and\n123-bus distribution feeders with numerous distribution-level battery storage\nsystems.", | |
| "authors": "Gayathri Krishnamoorthy, Anamika Dubey", | |
| "published": "2021-09-02", | |
| "updated": "2021-09-02", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI", | |
| "cs.SY", | |
| "eess.SY" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2110.14524v1", | |
| "title": "Model based Multi-agent Reinforcement Learning with Tensor Decompositions", | |
| "abstract": "A challenge in multi-agent reinforcement learning is to be able to generalize\nover intractable state-action spaces. Inspired from Tesseract [Mahajan et al.,\n2021], this position paper investigates generalisation in state-action space\nover unexplored state-action pairs by modelling the transition and reward\nfunctions as tensors of low CP-rank. Initial experiments on synthetic MDPs show\nthat using tensor decompositions in a model-based reinforcement learning\nalgorithm can lead to much faster convergence if the true transition and reward\nfunctions are indeed of low rank.", | |
| "authors": "Pascal Van Der Vaart, Anuj Mahajan, Shimon Whiteson", | |
| "published": "2021-10-27", | |
| "updated": "2021-10-27", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.MA" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2111.02104v2", | |
| "title": "Model-Based Episodic Memory Induces Dynamic Hybrid Controls", | |
| "abstract": "Episodic control enables sample efficiency in reinforcement learning by\nrecalling past experiences from an episodic memory. We propose a new\nmodel-based episodic memory of trajectories addressing current limitations of\nepisodic control. Our memory estimates trajectory values, guiding the agent\ntowards good policies. Built upon the memory, we construct a complementary\nlearning model via a dynamic hybrid control unifying model-based, episodic and\nhabitual learning into a single architecture. Experiments demonstrate that our\nmodel allows significantly faster and better learning than other strong\nreinforcement learning agents across a variety of environments including\nstochastic and non-Markovian settings.", | |
| "authors": "Hung Le, Thommen Karimpanal George, Majid Abdolshah, Truyen Tran, Svetha Venkatesh", | |
| "published": "2021-11-03", | |
| "updated": "2021-11-06", | |
| "primary_cat": "cs.LG", | |
| "cats": [ | |
| "cs.LG", | |
| "cs.AI" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1705.07460v1", | |
| "title": "Experience enrichment based task independent reward model", | |
| "abstract": "For most reinforcement learning approaches, the learning is performed by\nmaximizing an accumulative reward that is expectedly and manually defined for\nspecific tasks. However, in real world, rewards are emergent phenomena from the\ncomplex interactions between agents and environments. In this paper, we propose\nan implicit generic reward model for reinforcement learning. Unlike those\nrewards that are manually defined for specific tasks, such implicit reward is\ntask independent. It only comes from the deviation from the agents' previous\nexperiences.", | |
| "authors": "Min Xu", | |
| "published": "2017-05-21", | |
| "updated": "2017-05-21", | |
| "primary_cat": "cs.AI", | |
| "cats": [ | |
| "cs.AI" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2003.13839v1", | |
| "title": "Model-Reference Reinforcement Learning Control of Autonomous Surface Vehicles with Uncertainties", | |
| "abstract": "This paper presents a novel model-reference reinforcement learning control\nmethod for uncertain autonomous surface vehicles. The proposed control combines\na conventional control method with deep reinforcement learning. With the\nconventional control, we can ensure the learning-based control law provides\nclosed-loop stability for the overall system, and potentially increase the\nsample efficiency of the deep reinforcement learning. With the reinforcement\nlearning, we can directly learn a control law to compensate for modeling\nuncertainties. In the proposed control, a nominal system is employed for the\ndesign of a baseline control law using a conventional control approach. The\nnominal system also defines the desired performance for uncertain autonomous\nvehicles to follow. In comparison with traditional deep reinforcement learning\nmethods, our proposed learning-based control can provide stability guarantees\nand better sample efficiency. We demonstrate the performance of the new\nalgorithm via extensive simulation results.", | |
| "authors": "Qingrui Zhang, Wei Pan, Vasso Reppa", | |
| "published": "2020-03-30", | |
| "updated": "2020-03-30", | |
| "primary_cat": "eess.SY", | |
| "cats": [ | |
| "eess.SY", | |
| "cs.AI", | |
| "cs.LG", | |
| "cs.RO", | |
| "cs.SY", | |
| "math.OC" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/2308.11336v1", | |
| "title": "On the Opportunities and Challenges of Offline Reinforcement Learning for Recommender Systems", | |
| "abstract": "Reinforcement learning serves as a potent tool for modeling dynamic user\ninterests within recommender systems, garnering increasing research attention\nof late. However, a significant drawback persists: its poor data efficiency,\nstemming from its interactive nature. The training of reinforcement\nlearning-based recommender systems demands expensive online interactions to\namass adequate trajectories, essential for agents to learn user preferences.\nThis inefficiency renders reinforcement learning-based recommender systems a\nformidable undertaking, necessitating the exploration of potential solutions.\nRecent strides in offline reinforcement learning present a new perspective.\nOffline reinforcement learning empowers agents to glean insights from offline\ndatasets and deploy learned policies in online settings. Given that recommender\nsystems possess extensive offline datasets, the framework of offline\nreinforcement learning aligns seamlessly. Despite being a burgeoning field,\nworks centered on recommender systems utilizing offline reinforcement learning\nremain limited. This survey aims to introduce and delve into offline\nreinforcement learning within recommender systems, offering an inclusive review\nof existing literature in this domain. Furthermore, we strive to underscore\nprevalent challenges, opportunities, and future pathways, poised to propel\nresearch in this evolving field.", | |
| "authors": "Xiaocong Chen, Siyu Wang, Julian McAuley, Dietmar Jannach, Lina Yao", | |
| "published": "2023-08-22", | |
| "updated": "2023-08-22", | |
| "primary_cat": "cs.IR", | |
| "cats": [ | |
| "cs.IR", | |
| "cs.AI" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| }, | |
| { | |
| "url": "http://arxiv.org/abs/1012.1552v1", | |
| "title": "Bridging the Gap between Reinforcement Learning and Knowledge Representation: A Logical Off- and On-Policy Framework", | |
| "abstract": "Knowledge Representation is important issue in reinforcement learning. In\nthis paper, we bridge the gap between reinforcement learning and knowledge\nrepresentation, by providing a rich knowledge representation framework, based\non normal logic programs with answer set semantics, that is capable of solving\nmodel-free reinforcement learning problems for more complex do-mains and\nexploits the domain-specific knowledge. We prove the correctness of our\napproach. We show that the complexity of finding an offline and online policy\nfor a model-free reinforcement learning problem in our approach is NP-complete.\nMoreover, we show that any model-free reinforcement learning problem in MDP\nenvironment can be encoded as a SAT problem. The importance of that is\nmodel-free reinforcement", | |
| "authors": "Emad Saad", | |
| "published": "2010-12-07", | |
| "updated": "2010-12-07", | |
| "primary_cat": "cs.AI", | |
| "cats": [ | |
| "cs.AI", | |
| "cs.LG", | |
| "cs.LO" | |
| ], | |
| "category": "Model AND Based AND Reinforcement AND Learning" | |
| } | |
| ] |
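All records above share one schema (url, title, abstract, authors, published, updated, primary_cat, cats, category). As a minimal sketch of how the listing can be consumed, assuming the array has been saved to a file named `papers.json` (a hypothetical name, not part of the original material), the entries can be loaded and filtered by arXiv category as follows:

```python
import json

# Load the list of paper records shown above; the file name is an assumption.
with open("papers.json", "r", encoding="utf-8") as f:
    papers = json.load(f)

# Keep only records whose arXiv category list includes cs.LG.
cs_lg_papers = [p for p in papers if "cs.LG" in p.get("cats", [])]

# Print one summary line per record: publication date, primary category, title.
for p in sorted(cs_lg_papers, key=lambda p: p["published"]):
    print(f'{p["published"]}  [{p["primary_cat"]}]  {p["title"]}')
```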