VASE: Variational Assorted Surprise Exploration for Reinforcement Learning.

Journal: IEEE Transactions on Neural Networks and Learning Systems
Abstract

Exploration in environments with continuous control and sparse rewards remains a key challenge in reinforcement learning (RL). One approach to encouraging more systematic and efficient exploration uses surprise as an intrinsic reward for the agent. We introduce a new definition of surprise and its RL implementation, named variational assorted surprise exploration (VASE). VASE uses a Bayesian neural network as a model of the environment dynamics and is trained using variational inference, alternately updating the agent's model of the environment and its policy. Our experiments show that in continuous control sparse reward environments, VASE outperforms other surprise-based exploration techniques.
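The core idea of surprise-driven exploration can be sketched as follows: a learned dynamics model assigns a likelihood to each observed transition, and transitions the model finds unlikely yield a larger intrinsic bonus. This is a minimal illustrative sketch, not the paper's method; the Gaussian model, the function names (`gaussian_nll`, `shaped_reward`), and the bonus weight `beta` are all assumptions for illustration.

```python
import math

def gaussian_nll(next_state, mu, sigma):
    """Negative log-likelihood of next_state under a diagonal Gaussian
    dynamics model p(s' | s, a) = N(mu, sigma^2), summed over dimensions.
    (Illustrative stand-in for a learned Bayesian dynamics model.)"""
    return sum(
        0.5 * (math.log(2 * math.pi * s * s) + ((x - m) / s) ** 2)
        for x, m, s in zip(next_state, mu, sigma)
    )

def shaped_reward(extrinsic, next_state, mu, sigma, beta=0.1):
    """Extrinsic reward plus a surprise bonus: transitions the model
    predicts poorly (high NLL) receive a larger intrinsic reward.
    beta is an assumed hyperparameter weighting the bonus."""
    surprise = gaussian_nll(next_state, mu, sigma)
    return extrinsic + beta * surprise
```

In a sparse-reward setting the extrinsic term is almost always zero, so the surprise bonus dominates and pushes the policy toward regions where the dynamics model is still inaccurate.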

Authors
Haitao Xu, Lech Szymanski, Brendan McCane