Homepage: https://sites.google.com/view/mbrl-icml2019

Overview

In the recent explosion of interest in deep RL, “model-free” approaches based on Q-learning and actor-critic architectures have received the most attention due to their flexibility and ease of use. However, this generality often comes at the expense of efficiency (statistical as well as computational) and robustness. The large number of required samples and safety concerns often limit the direct use of model-free RL in real-world settings.

Model-based methods are expected to be more efficient. Given accurate models, trajectory optimization and Monte-Carlo planning methods can efficiently compute near-optimal actions in varied contexts. Advances in generative modeling, unsupervised learning, and self-supervised learning provide methods for learning models and representations that support subsequent planning and reasoning. Against this backdrop, our workshop aims to bring together researchers in generative modeling and model-based control to discuss research questions at their intersection, and to advance the state of the art in model-based RL for robotics and AI. In particular, this workshop aims to make progress on questions related to:

  1. How can we learn generative models efficiently? The role of data, structure, priors, and uncertainty.
  2. How can we use generative models efficiently for planning and reasoning? The role of derivatives, sampling, hierarchies, uncertainty, counterfactual reasoning, etc.
  3. How can we harmoniously integrate model learning and model-based decision making?
  4. How can we learn compositional structure and environmental constraints, and can these be leveraged for better generalization and reasoning?

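To make the interplay between model learning and planning (questions 1–3 above) concrete, below is a minimal, purely illustrative sketch, not material from the workshop: it fits a simple least-squares dynamics model to observed transitions and plans with a random-shooting (Monte-Carlo) model-predictive controller. The model class, planner, function names, and hyperparameters are all assumptions chosen for illustration.

```python
# Illustrative sketch (not from the workshop) of a model-based RL loop:
# fit a dynamics model to data, then plan actions by rolling the model forward.
import numpy as np

class LinearDynamicsModel:
    """Least-squares model s' ~ W @ [s, a], fit to observed transitions (assumed for illustration)."""
    def __init__(self, state_dim, action_dim):
        self.W = np.zeros((state_dim, state_dim + action_dim))

    def fit(self, states, actions, next_states):
        # states: (N, s), actions: (N, a), next_states: (N, s)
        X = np.hstack([states, actions])
        self.W = np.linalg.lstsq(X, next_states, rcond=None)[0].T

    def predict(self, state, action):
        return self.W @ np.concatenate([state, action])

def plan_random_shooting(model, state, reward_fn, action_dim,
                         horizon=10, n_candidates=256, rng=None):
    """Monte-Carlo (random-shooting) planner: sample action sequences, roll them
    out through the learned model, and return the first action of the best sequence."""
    rng = rng or np.random.default_rng()
    best_return, best_first_action = -np.inf, np.zeros(action_dim)
    for _ in range(n_candidates):
        s, total = state.copy(), 0.0
        actions = rng.uniform(-1.0, 1.0, size=(horizon, action_dim))
        for a in actions:
            s = model.predict(s, a)      # imagined next state under the model
            total += reward_fn(s, a)     # user-supplied reward function
        if total > best_return:
            best_return, best_first_action = total, actions[0]
    return best_first_action

# The full loop then alternates: act with plan_random_shooting, store the observed
# transitions, periodically refit the model with LinearDynamicsModel.fit, and repeat.
```
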
Schedule

  • Yann LeCun: Self-Supervised Learning
  • Jessica Hamrick: Mental Simulation, Imagination, and Model-Based Deep RL
  • Spotlights: Session 1
  • Stefan Schaal: What Should Be Learned?
  • Contributed Talks 1: When to Trust Your Model: Model-Based Policy Optimization
  • Contributed Talks 2: Model-Based Planning with Energy-Based Models
  • Contributed Talks 3: A Perspective on Objects and Systematic Generalization in Model-Based RL
  • David Silver: Value-Focused Models
  • Spotlights: Session 2
  • Byron Boots: Online Learning for Adaptive Robotic Systems
  • Contributed Talks 4: An Inference Perspective on Model-Based Reinforcement Learning
  • Contributed Talks 5: SVRE: A New Method for Training GANs
  • Chelsea Finn: Complexity without Losing Generality: The Role of Supervision and Composition
  • Abhinav Gupta: Self-supervised Learning for Exploration & Representation