Published on June 12, 2018 by Microsoft Research

Continual learning is the problem faced by intelligent agents of the sort that people and other animals are: learning increasingly complex skills and knowledge from experience, and becoming increasingly competent over time regardless of the environment. In this talk I will break down the research needed to accomplish continual learning into multiple components and present some results from my research group on two of these components. In the main part of the talk I will describe the optimal rewards framework and algorithms for learning reward functions that help planning and learning agents, followed by some theoretical results on the repeated inverse reinforcement learning problem. Time permitting, I will briefly describe a DeepRL architecture that can learn predictions that allow planning, without the ability or the need to make the usual observation predictions of traditional planning models.
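To give a flavor of the optimal rewards idea, here is a minimal sketch: an outer loop searches over candidate internal reward functions, and each candidate is scored by how much *extrinsic* reward the agent that learns with it ultimately earns. The toy chain MDP, the count-based novelty bonus, and all names below are my own illustrative assumptions, not details from the talk.

```python
import random

N = 8          # chain states 0..N-1; extrinsic reward only at the right end
EPISODES = 200
HORIZON = 20

def run_agent(bonus_weight, seed=0):
    """Train a Q-learner whose internal reward adds a count-based novelty
    bonus, then return the extrinsic return of its greedy policy."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N)]   # actions: 0 = left, 1 = right
    visits = [0] * N
    for _ in range(EPISODES):
        s = 0
        for _ in range(HORIZON):
            # epsilon-greedy action selection
            if rng.random() < 0.2:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda x: Q[s][x])
            s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
            extrinsic = 1.0 if s2 == N - 1 else 0.0
            visits[s2] += 1
            # internal reward = extrinsic reward + novelty bonus
            internal = extrinsic + bonus_weight / (visits[s2] ** 0.5)
            Q[s][a] += 0.1 * (internal + 0.95 * max(Q[s2]) - Q[s][a])
            s = s2
    # evaluate the learned greedy policy on extrinsic reward only
    s, ret = 0, 0.0
    for _ in range(HORIZON):
        a = max((0, 1), key=lambda x: Q[s][x])
        s = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
        ret += 1.0 if s == N - 1 else 0.0
    return ret

# Outer loop: pick the internal-reward parameter whose trained agent
# earns the most extrinsic reward.
best_w = max([0.0, 0.5, 2.0], key=run_agent)
```

The key design point is the separation of concerns: the inner agent optimizes its internal reward, while the outer search judges candidate internal rewards purely by the designer's extrinsic objective.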

1 Comment on "On Intrinsic Rewards and Continual Learning"

Nick King

This is great! Separating out reward functions seems so beautiful and powerful.