Robotics: Science and Systems XIV

Reinforcement and Imitation Learning for Diverse Visuomotor Skills

Yuke Zhu, Ziyu Wang, Josh Merel, Andrei Rusu, Tom Erez, Serkan Cabi, Saran Tunyasuvunakool, János Kramár, Raia Hadsell, Nando de Freitas, Nicolas Heess

Abstract:

We propose a general model-free deep reinforcement learning method and apply it to robotic manipulation tasks. Our approach leverages a small amount of demonstration data to assist a reinforcement learning agent. We train end-to-end visuomotor policies that map directly from RGB camera inputs to joint velocities. We demonstrate that the same agent, trained with the same algorithm, can solve a wide variety of visuomotor tasks for which engineering a scripted controller would be laborious. In experiments, our reinforcement and imitation agent achieves significantly better performance than agents trained with reinforcement learning or imitation learning alone. We also illustrate that these policies, trained with large visual and dynamics variations, achieve preliminary success in zero-shot sim2real transfer.
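
To make the combined objective concrete, the following is a minimal illustrative sketch in PyTorch, not the authors' implementation: a small convolutional policy maps an RGB observation to a Gaussian over joint-velocity commands, and its training loss mixes a policy-gradient term computed on rollouts with a behavioral-cloning term computed on a small demonstration batch. The 64x64 input resolution, network widths, 7-DoF action dimension, and the mixing weight bc_weight are illustrative assumptions.

# Illustrative sketch only; sizes, action dimension, and bc_weight are assumptions.
import torch
import torch.nn as nn

class VisuomotorPolicy(nn.Module):
    """Maps an RGB image to a Gaussian over joint-velocity commands."""
    def __init__(self, action_dim: int = 7):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        # With a 64x64x3 input, the conv stack above yields a 64 x 4 x 4 feature map.
        self.mean_head = nn.Sequential(
            nn.Linear(64 * 4 * 4, 256), nn.ReLU(),
            nn.Linear(256, action_dim),
        )
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def forward(self, rgb: torch.Tensor) -> torch.distributions.Normal:
        mean = self.mean_head(self.encoder(rgb))
        return torch.distributions.Normal(mean, self.log_std.exp())

def hybrid_loss(policy, rollout_obs, rollout_act, advantages,
                demo_obs, demo_act, bc_weight: float = 0.1):
    """Policy-gradient surrogate on rollouts plus a behavioral-cloning term
    that keeps the policy close to the demonstration data."""
    rl_dist = policy(rollout_obs)
    rl_loss = -(rl_dist.log_prob(rollout_act).sum(-1) * advantages).mean()
    bc_dist = policy(demo_obs)
    bc_loss = -bc_dist.log_prob(demo_act).sum(-1).mean()
    return rl_loss + bc_weight * bc_loss

# Example usage with random tensors standing in for rollout and demo batches:
# policy = VisuomotorPolicy()
# obs = torch.rand(8, 3, 64, 64); act = torch.randn(8, 7); adv = torch.randn(8)
# loss = hybrid_loss(policy, obs, act, adv, obs, act)
# loss.backward()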

Bibtex:

  
@INPROCEEDINGS{Zhu-RSS-18, 
    AUTHOR    = {Yuke Zhu AND Ziyu Wang AND Josh Merel AND Andrei Rusu AND Tom Erez AND Serkan Cabi AND Saran Tunyasuvunakool AND János Kramár AND Raia Hadsell AND Nando de Freitas AND Nicolas Heess},
    TITLE     = {Reinforcement and Imitation Learning for Diverse Visuomotor Skills}, 
    BOOKTITLE = {Proceedings of Robotics: Science and Systems}, 
    YEAR      = {2018}, 
    ADDRESS   = {Pittsburgh, Pennsylvania}, 
    MONTH     = {June}, 
    DOI       = {10.15607/RSS.2018.XIV.009} 
}