Robotics: Science and Systems XIII
Safe Visual Navigation via Deep Learning and Novelty Detection
Charles Richter, Nicholas Roy

Abstract:
Robots that use learned perceptual models in the real world must be able to safely handle cases where they are forced to make decisions in scenarios that are unlike any of their training examples. However, state-of-the-art deep learning methods are known to produce erratic or unsafe predictions when faced with novel inputs. Furthermore, recent ensemble, bootstrap and dropout methods for quantifying neural network uncertainty may not efficiently provide accurate uncertainty estimates when queried with inputs that are very different from their training data. Rather than unconditionally trusting the predictions of a neural network for unpredictable real-world data, we use an autoencoder to recognize when a query is novel, and revert to a safe prior behavior. With this capability, we can deploy an autonomous deep learning system in arbitrary environments, without concern for whether it has received the appropriate training. We demonstrate our method with a vision-guided robot that can leverage its deep neural network to navigate 50% faster than a safe baseline policy in familiar types of environments, while reverting to the prior behavior in novel environments so that it can safely collect additional training data and continually improve. A video illustrating our approach is available at: http://groups.csail.mit.edu/rrg/videos/safe_visual_navigation.
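The abstract describes gating a learned policy with an autoencoder novelty detector: if an input cannot be reconstructed well, it is treated as novel and the robot falls back to a safe prior behavior. The sketch below illustrates that decision pattern under stated assumptions; the architecture, the use of mean squared reconstruction error as the novelty score, the threshold, and the function names (`select_action`, `learned_policy`, `safe_prior_policy`) are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch (assumptions noted above): autoencoder reconstruction error
# as a novelty score, with a fallback to a safe prior behavior on novel inputs.
import torch
import torch.nn as nn


class ConvAutoencoder(nn.Module):
    """Small convolutional autoencoder over single-channel camera images."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 8, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def novelty_score(autoencoder, image):
    """Mean squared reconstruction error; high error suggests a novel input."""
    with torch.no_grad():
        reconstruction = autoencoder(image)
    return torch.mean((reconstruction - image) ** 2).item()


def select_action(autoencoder, learned_policy, safe_prior_policy, image, threshold):
    """Trust the learned policy only when the input resembles the training data."""
    if novelty_score(autoencoder, image) > threshold:
        # Novel input: do not trust the network's prediction; revert to the prior.
        return safe_prior_policy(image)
    return learned_policy(image)
```

In this pattern the threshold would typically be chosen from reconstruction errors observed on held-out training data, so that familiar inputs pass through to the learned policy while out-of-distribution inputs trigger the safe fallback.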
Bibtex:
@INPROCEEDINGS{Richter-RSS-17,
  AUTHOR    = {Charles Richter AND Nicholas Roy},
  TITLE     = {Safe Visual Navigation via Deep Learning and Novelty Detection},
  BOOKTITLE = {Proceedings of Robotics: Science and Systems},
  YEAR      = {2017},
  ADDRESS   = {Cambridge, Massachusetts},
  MONTH     = {July},
  DOI       = {10.15607/RSS.2017.XIII.064}
}