Robotics: Science and Systems XVII

Hierarchical Neural Dynamic Policies

Shikhar Bahl, Abhinav Gupta, Deepak Pathak


We tackle the problem of generalization to unseen configurations for dynamic tasks in the real world while learning from high-dimensional image input. The family of nonlinear dynamical system-based methods has successfully demonstrated dynamic robot behaviors, but these methods have difficulty generalizing to unseen configurations and learning from image inputs. Recent works approach this issue with deep network policies that reparameterize actions to embed the structure of dynamical systems, but they still struggle in domains with diverse configurations and image goals, and hence find it difficult to generalize. In this paper, we address this dichotomy by embedding the structure of dynamical systems in a hierarchical deep policy learning framework, called Hierarchical Neural Dynamic Policies (H-NDPs). Instead of fitting deep dynamical systems to diverse data directly, H-NDPs form a curriculum by learning local dynamical system-based policies on small regions of state space and then distilling them into a global dynamical system-based policy that operates only from high-dimensional images. H-NDPs additionally produce smooth trajectories, a strong safety benefit in the real world. We perform extensive experiments on dynamic tasks both in the real world (digit writing, scooping, and pouring) and in simulation (catching, throwing, picking). We show that H-NDPs integrate easily with both imitation and reinforcement learning setups and achieve state-of-the-art results. Video results at
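The local-to-global curriculum described in the abstract can be illustrated with a minimal sketch: fit a simple policy per small region of state space, then distill the local experts into one global policy via supervised regression. All function names (`fit_local_policy`, `distill`) and the least-squares stand-ins are illustrative assumptions, not the authors' actual dynamical-system parameterization.

```python
# Minimal sketch of the local-fit-then-distill idea, assuming simplified
# stand-ins for the local dynamical-system policies and the distillation step.
import numpy as np

rng = np.random.default_rng(0)

def fit_local_policy(demos):
    # Stand-in for fitting a dynamical-system policy (e.g. a DMP) to
    # demonstrations from one small region: average the trajectories.
    return demos.mean(axis=0)

def distill(local_policies, contexts):
    # Stand-in for distilling local experts into a single global policy:
    # a least-squares map from task context to trajectory.
    X = np.asarray(contexts)          # (n_regions, context_dim)
    Y = np.stack(local_policies)      # (n_regions, horizon)
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return lambda c: np.asarray(c) @ W

# Three regions, each with a few noisy 5-step demonstrations toward a goal.
contexts = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
demos_per_region = [
    np.outer(np.ones(4), np.linspace(0.0, goal, 5))
    + 0.01 * rng.standard_normal((4, 5))
    for goal in (1.0, 2.0, 3.0)
]

local_experts = [fit_local_policy(d) for d in demos_per_region]
global_policy = distill(local_experts, contexts)
print(global_policy([1.0, 1.0]))  # trajectory predicted for a new context
```

In the paper's actual setting the global policy is a deep network consuming images rather than a linear map, but the curriculum structure, local fitting followed by distillation, is the same.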



@INPROCEEDINGS{Bahl-RSS-21,
    AUTHOR    = {Shikhar Bahl AND Abhinav Gupta AND Deepak Pathak},
    TITLE     = {{Hierarchical Neural Dynamic Policies}},
    BOOKTITLE = {Proceedings of Robotics: Science and Systems},
    YEAR      = {2021},
    ADDRESS   = {Virtual},
    MONTH     = {July},
    DOI       = {10.15607/RSS.2021.XVII.023}
}