Time: May 6-9, 2019

Location: New Orleans, USA

Overview

  • Policy Transfer with Strategy Optimization
  • Diversity Is All You Need: Learning Skills without a Reward Function
  • Learning to Adapt in Dynamic, Real-World Environments Through Meta-Reinforcement Learning
  • Deep Online Learning via Meta-Learning
  • Composing Complex Skills by Learning Transition Policies
  • Large-Scale Study of Curiosity-Driven Learning
  • Learning Exploration Policies for Navigation
  • BabyAI: A Platform to Study the Sample Efficiency of Grounded Language Learning
  • Variance Reduction for Reinforcement Learning in Input-Driven Environments
  • Recall Traces: Backtracking Models for Efficient RL
  • Hindsight Policy Gradients
  • Self-Monitoring Navigation Agent via Auxiliary Progress Estimation
  • Learning Finite State Representations of Recurrent Policy Networks
  • Unsupervised Learning via Meta-Learning
  • DHER: Hindsight Experience Replay for Dynamic Goals

Policy Transfer with Strategy Optimization

  • Wenhao Yu, C. Karen Liu, Greg Turk
  • Georgia Institute of Technology
  • [Paper] [Poster] [Github]
  • Abstract
    • Computer simulation provides an automatic and safe way for training robotic control policies to achieve complex tasks such as locomotion. However, a policy trained in simulation usually does not transfer directly to the real hardware due to the differences between the two environments. Transfer learning using domain randomization is a promising approach, but it usually assumes that the target environment is close to the distribution of the training environments, thus relying heavily on accurate system identification. In this paper, we present a different approach that leverages domain randomization for transferring control policies to unknown environments. The key idea is that, instead of learning a single policy in the simulation, we simultaneously learn a family of policies that exhibit different behaviors. When tested in the target environment, we directly search for the best policy in the family based on the task performance, without the need to identify the dynamic parameters. We evaluate our method on five simulated robotic control problems with different discrepancies between the training and testing environments and demonstrate that our method can overcome larger modeling errors compared to training a robust policy or an adaptive policy.
  • Notes
    • Another paper by Wenhao Yu: Learning Symmetric and Low-Energy Locomotion (SIGGRAPH ’18)
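  • Sketch
    • The test-time search described in the abstract can be illustrated with a short script. This is not the authors’ code: the gym-style env, the frozen policy(obs, mu) callable, and the latent dimension are placeholders, and a simple cross-entropy search stands in for the CMA-ES optimizer used in the paper.

      import numpy as np

      def episode_return(env, policy, mu, horizon=1000):
          """Roll out the frozen policy conditioned on strategy vector mu."""
          obs, total = env.reset(), 0.0
          for _ in range(horizon):
              action = policy(obs, mu)            # policy weights stay fixed
              obs, reward, done, _ = env.step(action)
              total += reward
              if done:
                  break
          return total

      def optimize_strategy(env, policy, dim, iters=20, pop=16, elite=4):
          """Search the latent strategy space directly on the target
          environment, scoring candidates by episode return only
          (no identification of dynamics parameters)."""
          mean, std = np.zeros(dim), np.ones(dim)
          for _ in range(iters):
              candidates = mean + std * np.random.randn(pop, dim)
              returns = np.array([episode_return(env, policy, mu) for mu in candidates])
              elites = candidates[np.argsort(returns)[-elite:]]
              mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-3
          return mean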

Diversity Is All You Need: Learning Skills without a Reward Function

  • Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, Sergey Levine
  • Google Brain, UC Berkeley
  • [Paper] [Poster] [Github] [Website]
  • Abstract
    • Intelligent creatures can explore their environments and learn useful skills without supervision. In this paper, we propose DIAYN (“Diversity is All You Need”), a method for learning useful skills without a reward function. Our proposed method learns skills by maximizing an information theoretic objective using a maximum entropy policy. On a variety of simulated robotic tasks, we show that this simple objective results in the unsupervised emergence of diverse skills, such as walking and jumping. In a number of reinforcement learning benchmark environments, our method is able to learn a skill that solves the benchmark task despite never receiving the true task reward. We show how pretrained skills can provide a good parameter initialization for downstream tasks, and can be composed hierarchically to solve complex, sparse reward tasks. Our results suggest that unsupervised discovery of skills can serve as an effective pretraining mechanism for overcoming challenges of exploration and data efficiency in reinforcement learning.
  • Notes
    • Another paper by Benjamin Eysenbach: Leave No Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning (ICLR ’18)
    • Talk by Benjamin Eysenbach (Sep. 18, 2018): Towards Autonomous RL - Learning to Act with Less Human Supervision
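  • Sketch
    • The information-theoretic objective in the abstract boils down to a skill discriminator and a pseudo-reward. The snippet below is a minimal sketch, not the authors’ implementation: the uniform skill prior, the MLP sizes, and the dimensions are assumptions, and the policy itself would be trained with a maximum-entropy RL algorithm (e.g. SAC) on this reward instead of any task reward.

      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      NUM_SKILLS, OBS_DIM = 8, 10
      log_p_z = torch.log(torch.tensor(1.0 / NUM_SKILLS))   # uniform prior over skills

      # Discriminator q_phi(z | s): guesses which skill produced a state.
      discriminator = nn.Sequential(
          nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, NUM_SKILLS))
      disc_opt = torch.optim.Adam(discriminator.parameters(), lr=3e-4)

      def diayn_reward(state, skill):
          """Pseudo-reward log q(z|s) - log p(z); no task reward is used."""
          with torch.no_grad():
              log_q = F.log_softmax(discriminator(state), dim=-1)[skill]
          return (log_q - log_p_z).item()

      def update_discriminator(states, skills):
          """Train q_phi(z|s) to classify which skill generated each state."""
          loss = F.cross_entropy(discriminator(states), skills)
          disc_opt.zero_grad()
          loss.backward()
          disc_opt.step()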

Large-Scale Study of Curiosity-Driven Learning

  • Yuri Burda, Harri Edwards, Deepak Pathak, Amos Storkey, Trevor Darrell, Alexei A. Efros
  • OpenAI, UC Berkeley, Univ. of Edinburgh
  • [Paper] [Poster] [Github] [Website]
  • Abstract
    • Reinforcement learning algorithms rely on carefully engineering environment rewards that are extrinsic to the agent. However, annotating each environment with hand-designed, dense rewards is not scalable, motivating the need for developing reward functions that are intrinsic to the agent. Curiosity is a type of intrinsic reward function which uses prediction error as reward signal. In this paper: (a) We perform the first large-scale study of purely curiosity-driven learning, i.e. without any extrinsic rewards, across 54 standard benchmark environments, including the Atari game suite. Our results show surprisingly good performance, and a high degree of alignment between the intrinsic curiosity objective and the hand-designed extrinsic rewards of many game environments. (b) We investigate the effect of using different feature spaces for computing prediction error and show that random features are sufficient for many popular RL game benchmarks, but learned features appear to generalize better (e.g. to novel game levels in Super Mario Bros.). (c) We demonstrate limitations of the prediction-based rewards in stochastic setups. Game-play videos and code are at https://pathak22.github.io/large-scale-curiosity/.
  • Notes
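  • Sketch
    • The prediction-error reward and the “random features” variant from the abstract can be illustrated as follows. This is a minimal sketch under assumed shapes and network sizes, not the released code: a frozen random encoder maps observations to features, a learned forward model predicts the next features, and its error is the only reward the agent receives.

      import torch
      import torch.nn as nn

      OBS_DIM, ACT_DIM, FEAT_DIM = 16, 4, 32

      # Fixed random feature encoder (the "random features" variant).
      encoder = nn.Sequential(
          nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, FEAT_DIM))
      for p in encoder.parameters():
          p.requires_grad_(False)

      # Learned forward model: predicts next-state features from (features, action).
      forward_model = nn.Sequential(
          nn.Linear(FEAT_DIM + ACT_DIM, 64), nn.ReLU(), nn.Linear(64, FEAT_DIM))
      fwd_opt = torch.optim.Adam(forward_model.parameters(), lr=1e-4)

      def curiosity_step(obs, action, next_obs):
          """Return the intrinsic reward (per sample) and update the forward model."""
          phi, phi_next = encoder(obs), encoder(next_obs)
          pred = forward_model(torch.cat([phi, action], dim=-1))
          error = (pred - phi_next.detach()).pow(2).mean(dim=-1)
          fwd_opt.zero_grad()
          error.mean().backward()
          fwd_opt.step()
          return error.detach()      # used as the (only) reward for the policy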