Robotics: Science and Systems XX
Diffusion Meets DAgger: Supercharging Eye-in-hand Imitation Learning
Xiaoyu Zhang, Matthew Chang, Pranav Kumar, Saurabh Gupta

Abstract:
A common failure mode for policies trained with imitation learning is compounding execution errors at test time. When the learned policy encounters states that are not present in the expert demonstrations, it fails, leading to degenerate behavior. The Dataset Aggregation (DAgger) approach to this problem simply collects more data to cover these failure states; in practice, however, this is often prohibitively expensive. In this work, we propose Diffusion Meets DAgger (DMD), a method that reaps the benefits of DAgger without its cost, for eye-in-hand imitation learning problems. Instead of *collecting* new samples to cover out-of-distribution states, DMD uses recent advances in diffusion models to *synthesize* these samples. This leads to robust performance from few demonstrations. We compare DMD against a behavior cloning (BC) baseline across four tasks: pushing, stacking, pouring, and hanging a shirt. In pushing, DMD achieves an 80% success rate with as few as 8 expert demonstrations, where naive behavior cloning reaches only 20%. In stacking, DMD succeeds on average 92% of the time across 5 cups, versus 40% for BC. When pouring coffee beans, DMD transfers to another cup successfully 80% of the time. Finally, DMD attains a 90% success rate for hanging a shirt on a clothing rack.
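To make the core idea concrete, below is a minimal Python sketch of DAgger-style augmentation without new data collection, as the abstract describes it. The synthesize_view stand-in and the simple offset-canceling labeling rule are illustrative assumptions, not the authors' actual model or API.

    import numpy as np

    def synthesize_view(image, offset):
        """Stand-in for a diffusion model that re-renders the eye-in-hand
        view as if the camera were displaced by `offset`. A real system
        would condition an image-to-image diffusion model on the offset;
        here we simply return the input frame. (Hypothetical helper, not
        from the paper.)"""
        return image

    def augment_demo(frames, actions, num_perturbations=4, scale=0.02, seed=0):
        """For each expert frame, sample small end-effector offsets,
        synthesize the corresponding out-of-distribution view, and label
        it with a corrective action that steers back toward the expert
        trajectory. Assumes actions are 3-D end-effector deltas."""
        rng = np.random.default_rng(seed)
        aug_frames, aug_actions = [], []
        for frame, action in zip(frames, actions):
            for _ in range(num_perturbations):
                offset = rng.normal(scale=scale, size=3)  # xyz perturbation
                aug_frames.append(synthesize_view(frame, offset))
                aug_actions.append(action - offset)  # cancel the offset
        return aug_frames, aug_actions

Under these assumptions, the synthesized frame-action pairs would simply be appended to the behavior-cloning training set, giving DAgger-like coverage of off-distribution states without additional expert rollouts.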
Bibtex:
@INPROCEEDINGS{Zhang-RSS-24,
  AUTHOR    = {Xiaoyu Zhang AND Matthew Chang AND Pranav Kumar AND Saurabh Gupta},
  TITLE     = {{Diffusion Meets DAgger: Supercharging Eye-in-hand Imitation Learning}},
  BOOKTITLE = {Proceedings of Robotics: Science and Systems},
  YEAR      = {2024},
  ADDRESS   = {Delft, Netherlands},
  MONTH     = {July},
  DOI       = {10.15607/RSS.2024.XX.048}
}