Robotics: Science and Systems XX
Imitation Bootstrapped Reinforcement Learning
Hengyuan Hu, Suvir Mirchandani, Dorsa Sadigh

Abstract:
Despite the considerable potential of reinforcement learning (RL), robotics control tasks predominantly rely on imitation learning (IL) due to its better sample efficiency. However, it is costly to collect comprehensive expert demonstrations that enable IL to generalize to all possible scenarios, and any distribution shift would require recollecting data for finetuning. Therefore, RL is appealing if it can build upon IL as an efficient autonomous self-improvement procedure. We propose _imitation bootstrapped reinforcement learning_ (IBRL), a novel framework for sample-efficient RL with demonstrations that first trains an IL policy on the provided demonstrations and then uses it to propose alternative actions for both online exploration and bootstrapping target values. Compared to prior works that oversample the demonstrations or regularize RL with an additional imitation loss, IBRL is able to utilize high-quality actions from the IL policy from the beginning of training, which greatly improves exploration and training efficiency. We evaluate IBRL on 6 simulation and 3 real-world tasks spanning various difficulty levels. IBRL significantly outperforms prior methods, and the improvement is particularly prominent in harder tasks.
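The abstract's core mechanism, letting a pretrained IL policy propose alternative actions both for online exploration and for bootstrapping target values, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the toy `il_policy`, `rl_policy`, and `q_value` functions below are hypothetical stand-ins for the trained IL policy, the current RL actor, and the learned Q-function, and the Q-function is assumed to arbitrate between the two candidate actions.

```python
GAMMA = 0.99  # discount factor (assumed value)

# Hypothetical stand-ins; in IBRL these would be learned networks.
def il_policy(state):
    return state + 1.0   # toy IL action

def rl_policy(state):
    return state - 2.0   # toy RL action

def q_value(state, action):
    return -(action - state) ** 2  # toy Q: prefers actions near the state

def select_action(state):
    """Exploration: let the Q-function pick between the IL-proposed
    action and the RL-proposed action."""
    candidates = [il_policy(state), rl_policy(state)]
    return max(candidates, key=lambda a: q_value(state, a))

def bootstrap_target(reward, next_state):
    """Target bootstrapping: take the higher Q-value over the actions
    proposed by both policies at the next state."""
    candidates = [il_policy(next_state), rl_policy(next_state)]
    return reward + GAMMA * max(q_value(next_state, a) for a in candidates)
```

Under these toy definitions, the IL action is closer to the state, so the Q-function selects it for exploration and uses it to form the bootstrapped target, capturing how high-quality IL actions can guide RL from the start of training.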
Bibtex:
@INPROCEEDINGS{Hu-RSS-24,
  AUTHOR    = {Hengyuan Hu and Suvir Mirchandani and Dorsa Sadigh},
  TITLE     = {{Imitation Bootstrapped Reinforcement Learning}},
  BOOKTITLE = {Proceedings of Robotics: Science and Systems},
  YEAR      = {2024},
  ADDRESS   = {Delft, Netherlands},
  MONTH     = {July},
  DOI       = {10.15607/RSS.2024.XX.056}
}