Co-adaptation of algorithmic and implementational innovations in inference-based deep reinforcement learning

Abstract

Recently, many algorithms have been devised for reinforcement learning (RL) with function approximation. While they have clear algorithmic distinctions, they also differ in many implementation details that are algorithm-independent and often under-emphasized. Such mixing of algorithmic novelty and implementation craftsmanship makes it difficult to rigorously analyze the sources of performance improvements across algorithms. In this work, we focus on a series of off-policy inference-based actor-critic algorithms – MPO, AWR, and SAC – and decouple their algorithmic innovations from their implementation decisions. We present unified derivations from a single control-as-inference objective, under which each algorithm can be categorized as based on either Expectation-Maximization (EM) or direct Kullback-Leibler (KL) divergence minimization, with the remaining specifications treated as implementation details. Through extensive ablation studies, we identify substantial performance drops whenever implementation details are mismatched with algorithmic choices. These results show which implementation details are co-adapted and co-evolved with algorithms, and which are transferable across algorithms: for example, we find that the tanh-Gaussian policy and network sizes are highly adapted to algorithmic types, while layer normalization and ELU activations are critical for MPO's performance yet also transfer to noticeable gains in SAC. We hope our work inspires future efforts to further demystify the sources of performance improvements across multiple algorithms and allows researchers to build on one another's algorithmic and implementational innovations.
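To make the shared starting point concrete, below is a sketch of one common way to write the control-as-inference objective that unified derivations of this kind build on (the notation here is illustrative, not necessarily the paper's): introducing optimality variables $\mathcal{O}_t$ with $p(\mathcal{O}_t = 1 \mid s_t, a_t) \propto \exp(r(s_t, a_t))$, a variational policy $q$ maximizes the evidence lower bound

$$
\log p(\mathcal{O}_{1:T} = 1) \;\ge\; \mathbb{E}_{\tau \sim q}\Big[ \sum_{t=1}^{T} r(s_t, a_t) \Big] - D_{\mathrm{KL}}\big( q(\tau) \,\|\, p(\tau) \big).
$$

EM-based methods (MPO, AWR) alternate an E-step that computes an improved non-parametric policy and an M-step that projects it back onto the parametric family, while direct-KL methods (SAC) minimize the KL divergence between the parametric policy and the target distribution induced by the critic.

As an illustration of the implementation details discussed above, the sketch below shows a tanh-Gaussian policy head with a LayerNorm + ELU trunk in PyTorch. This is a minimal, hypothetical example assuming standard shapes and default hyperparameters, not the paper's actual code.

```python
# Minimal sketch (hypothetical, not the paper's code): a tanh-Gaussian
# policy with a LayerNorm + ELU trunk, two of the implementation details
# the abstract highlights.
import torch
import torch.nn as nn


class TanhGaussianPolicy(nn.Module):
    """Gaussian policy squashed through tanh so actions lie in [-1, 1]."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        # LayerNorm + ELU trunk: the MPO-style detail that also helps SAC.
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.LayerNorm(hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.LayerNorm(hidden), nn.ELU(),
        )
        self.mean = nn.Linear(hidden, act_dim)
        self.log_std = nn.Linear(hidden, act_dim)

    def forward(self, obs: torch.Tensor):
        h = self.trunk(obs)
        mean = self.mean(h)
        log_std = self.log_std(h).clamp(-20.0, 2.0)  # keep std in a sane range
        dist = torch.distributions.Normal(mean, log_std.exp())
        pre_tanh = dist.rsample()          # reparameterized sample
        action = torch.tanh(pre_tanh)      # squash into [-1, 1]
        # Change-of-variables correction for the tanh squashing:
        # log pi(a|s) = log N(u|mean, std) - log(1 - tanh(u)^2).
        log_prob = dist.log_prob(pre_tanh) - torch.log1p(-action.pow(2) + 1e-6)
        return action, log_prob.sum(dim=-1)


# Example usage with random inputs (batch of 4, 17-dim obs, 6-dim actions).
policy = TanhGaussianPolicy(obs_dim=17, act_dim=6)
a, logp = policy(torch.randn(4, 17))
print(a.shape, logp.shape)  # torch.Size([4, 6]) torch.Size([4])
```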

Publication
Advances in Neural Information Processing Systems
Hiroki Furuta
Ph.D. student
Tatsuya Matsushima
Project Researcher

My research interests include robot learning, robot systems, and XR.

Yutaka Matsuo
Professor
Shixiang Shane Gu
Visiting Associate Professor

Shixiang Shane Gu is a Senior Research Scientist at Google Brain, and a Visiting Associate Professor at the University of Tokyo, with research interests around (1) algorithmic problems in deep learning, reinforcement learning, robotics, and probabilistic machine learning, and (2) mastering a universal physics prior for continuous control.