Robotics: Science and Systems XXI

RAPID: Robust and Agile Planner Using Inverse Reinforcement Learning for Vision-Based Drone Navigation

Minwoo Kim, Geunsik Bae, Jinwoo Lee, Woojae Shin, Changseung Kim, Myongyol Choi, Heejung Shin, Hyondong Oh

Abstract:

This paper introduces a learning-based visual planner for agile drone flight in cluttered environments. The proposed planner generates collision-free waypoints in milliseconds, enabling drones to perform agile maneuvers in complex environments without separate perception, mapping, and planning modules. Learning-based methods such as behavior cloning (BC) and reinforcement learning (RL) demonstrate promising performance in visual navigation but still face inherent limitations: BC is susceptible to compounding errors due to limited expert imitation, while RL struggles with reward function design and sample inefficiency. To address these limitations, this paper proposes an inverse reinforcement learning (IRL)-based framework for high-speed visual navigation. Leveraging IRL reduces the number of interactions with simulation environments and improves the capability to handle high-dimensional spaces (i.e., visual information) while preserving the robustness of RL policies. A motion primitive-based path planning algorithm collects an expert dataset with privileged map data from diverse environments (e.g., narrow gaps, cubes, spheres, trees), ensuring comprehensive scenario coverage. By leveraging both the acquired expert dataset and the learner dataset gathered from the agent's interactions with the simulation environments, a robust reward function and policy are learned across diverse states. Although the proposed method is trained only in simulation, it can be directly applied to real-world scenarios without additional training or tuning. The performance of the proposed method is validated in both simulation and real-world environments, including forests and various structures. The trained policy achieves an average speed of 7 m/s and a maximum speed of 8.8 m/s in real flight experiments. To the best of our knowledge, this is the first work to successfully apply an IRL framework to high-speed visual navigation of drones.
The experimental videos can be found at https://youtu.be/ZfV6ij0qZMI.
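The core idea of learning a reward from both expert and learner datasets can be illustrated with a minimal, hypothetical sketch. This is not the paper's implementation: the feature vectors, dimensions, and update rule below are illustrative stand-ins (a linear reward trained discriminatively, in the spirit of adversarial IRL), where expert samples are pushed toward high reward and learner samples toward low reward.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: feature vectors stand in for visual-state embeddings.
# Expert trajectories (from the privileged planner) and learner trajectories
# (from the agent's own rollouts) are drawn from different distributions.
expert_feats = rng.normal(loc=1.0, size=(256, 8))
learner_feats = rng.normal(loc=-1.0, size=(256, 8))

w = np.zeros(8)  # parameters of a linear reward function

def reward(feats, w):
    return feats @ w

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Discriminator-style update: treat the reward as the logit separating
# expert (label 1) from learner (label 0) samples, and ascend the
# log-likelihood so expert states earn higher reward.
lr = 0.1
for _ in range(200):
    grad = (expert_feats.T @ (1.0 - sigmoid(reward(expert_feats, w)))
            - learner_feats.T @ sigmoid(reward(learner_feats, w))) / 256.0
    w += lr * grad

# After training, expert states should receive higher average reward.
print(reward(expert_feats, w).mean() > reward(learner_feats, w).mean())
```

In a full IRL pipeline, this reward update would alternate with policy improvement on the learned reward; the sketch shows only the reward-learning half on fixed datasets.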

Bibtex:

@INPROCEEDINGS{KimM2-RSS-25, 
    AUTHOR    = {Minwoo Kim AND Geunsik Bae AND Jinwoo Lee AND Woojae Shin AND Changseung Kim AND Myongyol Choi AND Heejung Shin AND Hyondong Oh}, 
    TITLE     = {{RAPID: Robust and Agile Planner Using Inverse Reinforcement Learning for Vision-Based Drone Navigation}}, 
    BOOKTITLE = {Proceedings of Robotics: Science and Systems}, 
    YEAR      = {2025}, 
    ADDRESS   = {Los Angeles, CA, USA}, 
    MONTH     = {June}, 
    DOI       = {10.15607/RSS.2025.XXI.142} 
}