Robotics: Science and Systems III

Active Policy Learning for Robot Planning and Exploration under Uncertainty

Ruben Martinez-Cantin, Nando de Freitas, Arnaud Doucet, and Jose Castellanos

Abstract: This paper proposes a simulation-based active policy learning algorithm for finite-horizon, partially observed sequential decision processes. The algorithm is tested in the domain of robot navigation and exploration under uncertainty. In such a setting, the expected cost, which must be minimized, is a function of the belief state (filtering distribution). This filtering distribution is in turn nonlinear and depends on an observation model with discontinuities. These discontinuities arise because the robot has a finite field of view and the environment may contain occluding obstacles. As a result, the expected cost is non-differentiable and very expensive to simulate. The new algorithm overcomes the first difficulty and reduces the number of required simulations as follows. First, it assumes that we have carried out previous simulations which returned values of the expected cost for the corresponding policy parameters. Second, it fits a Gaussian process (GP) regression model to these values, so as to approximate the expected cost as a function of the policy parameters. Third, it uses the GP predicted mean and variance to construct a statistical measure that determines which policy parameters should be used in the next simulation. The process is then repeated using the new parameters and the newly gathered expected cost observation. Since the objective is to find the policy parameters that minimize the expected cost, this iterative active learning approach effectively trades off exploration (in regions where the GP variance is large) against exploitation (where the GP mean is low). In our experiments, a robot uses the proposed algorithm to plan an optimal path for accomplishing a series of tasks, while maximizing the information about its pose and map estimates. These estimates are obtained with a standard filter for simultaneous localization and mapping. Upon gathering new observations, the robot updates the state estimates and is able to replan a new path in the spirit of open-loop feedback control.
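Since the abstract walks through the GP-based optimization loop step by step, a compact sketch may help make it concrete. The NumPy-only Python listing below is not the authors' implementation: the quadratic expected_cost stand-in for the simulator, the rbf_kernel and gp_posterior helpers, the length_scale and noise settings, the candidate-grid minimization, and the lower-confidence-bound acquisition are all illustrative assumptions. The paper itself only states that the GP predicted mean and variance are combined into a statistical measure that selects the next policy parameters to simulate.

    import numpy as np

    rng = np.random.default_rng(0)

    def expected_cost(theta):
        # Hypothetical noisy stand-in for one expensive simulation of the expected cost.
        return float(np.sum((theta - 0.3) ** 2) + 0.05 * rng.standard_normal())

    def rbf_kernel(A, B, length_scale=0.2):
        # Squared-exponential covariance between the row vectors of A and B.
        d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
        return np.exp(-0.5 * d2 / length_scale**2)

    def gp_posterior(X, y, Xstar, noise=1e-3):
        # GP predictive mean and variance at Xstar, given observations (X, y).
        K = rbf_kernel(X, X) + noise * np.eye(len(X))
        Ks = rbf_kernel(X, Xstar)
        L = np.linalg.cholesky(K)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        v = np.linalg.solve(L, Ks)
        mean = Ks.T @ alpha
        var = np.clip(1.0 - np.sum(v**2, axis=0), 1e-12, None)  # prior variance is 1 for this kernel
        return mean, var

    dim = 2                                               # dimension of the policy parameter vector
    X = rng.uniform(0.0, 1.0, size=(5, dim))              # step 1: previously simulated policies
    y = np.array([expected_cost(x) for x in X])           # ...and their observed expected costs
    candidates = rng.uniform(0.0, 1.0, size=(2000, dim))  # where the acquisition is evaluated

    for _ in range(20):
        mu0 = y.mean()                                     # center costs for the zero-mean GP prior
        mean, var = gp_posterior(X, y - mu0, candidates)   # step 2: GP fit of cost vs. parameters
        lcb = (mean + mu0) - 2.0 * np.sqrt(var)            # step 3: low mean (exploit) vs. high variance (explore)
        theta_next = candidates[np.argmin(lcb)]            # parameters for the next simulation
        X = np.vstack([X, theta_next])                     # run it and repeat with the new observation
        y = np.append(y, expected_cost(theta_next))

    best = X[np.argmin(y)]
    print("best policy parameters:", best, "with estimated cost:", float(y.min()))

The weight 2.0 on the predictive standard deviation is an arbitrary illustrative choice: a larger weight biases the loop toward exploring regions where the GP variance is large, while a smaller one biases it toward exploiting the current GP minimum, which is exactly the trade-off the abstract describes.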


Bibtex:

@INPROCEEDINGS{ Martinez-Cantin-RSS-07,
    AUTHOR    = {R. Martinez-Cantin and N. de Freitas and A. Doucet and J. Castellanos},
    TITLE     = {Active Policy Learning for Robot Planning and Exploration under Uncertainty},
    BOOKTITLE = {Proceedings of Robotics: Science and Systems},
    YEAR      = {2007},
    ADDRESS   = {Atlanta, GA, USA},
    MONTH     = {June},
    DOI       = {10.15607/RSS.2007.III.041} 
}