Robotics: Science and Systems XVII

Learned Visual Navigation for Under-Canopy Agricultural Robots

Arun Narenthiran Sivakumar, Sahil Modi, Mateus Valverde Gasparino, Che Ellis, Andres Eduardo Baquero Velasquez, Girish Chowdhary, Saurabh Gupta


This paper describes a system for visually guided autonomous navigation of under-canopy farm robots. Low-cost under-canopy robots can drive between crop rows under the plant canopy and accomplish tasks that are infeasible for over-the-canopy drones or larger agricultural equipment. However, autonomously navigating them under the canopy presents a number of challenges: unreliable GPS and LiDAR, high cost of sensing, challenging farm terrain, clutter due to leaves and weeds, and large variability in appearance over the season and across crop types. We address these challenges by building a modular system that leverages machine learning for robust and generalizable perception from monocular RGB images from low-cost cameras, and model predictive control for accurate control in challenging terrain. Our system, CropFollow, is able to autonomously drive 485 meters per intervention on average, outperforming a state-of-the-art LiDAR-based system (286 meters per intervention) in extensive field testing spanning over 25 km.
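The modular split described in the abstract, a learned perception module that estimates the robot's pose relative to the crop row from a monocular RGB image, feeding a short-horizon model predictive controller, can be sketched as below. All names, the stubbed perception function, the unicycle model, and the cost weights are illustrative assumptions for exposition, not the authors' implementation.

```python
import math

def perceive(image):
    """Placeholder for the learned perception module: in a system like
    CropFollow a CNN would predict the robot's heading error and lateral
    offset relative to the crop-row centerline from an RGB image. Here the
    'image' is stubbed as that state directly (an assumption for the sketch)."""
    heading_error, lateral_offset = image
    return heading_error, lateral_offset

def mpc_steer(heading, offset, v=0.5, dt=0.1, horizon=10,
              candidates=(-0.6, -0.3, 0.0, 0.3, 0.6)):
    """Pick the constant angular velocity (rad/s) minimizing a quadratic
    tracking cost over a short horizon, using unicycle dynamics. This is a
    generic receding-horizon sketch, not the paper's exact formulation."""
    best_w, best_cost = 0.0, float("inf")
    for w in candidates:
        h, y, cost = heading, offset, 0.0
        for _ in range(horizon):
            y += v * math.sin(h) * dt          # lateral-offset dynamics
            h += w * dt                        # heading dynamics
            cost += y * y + 0.1 * h * h + 0.01 * w * w
        if cost < best_cost:
            best_w, best_cost = w, cost
    return best_w

# One control step: perception output drives the controller.
# Robot angled and offset to the right of the row center -> steer left (w < 0).
w = mpc_steer(*perceive((0.2, 0.1)))
```

In the real system the controller would run in a loop at the camera frame rate, re-solving from the freshly perceived state each step; this receding-horizon structure is what lets the modular design tolerate noisy per-frame perception.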



@INPROCEEDINGS{Sivakumar-RSS-21,
    AUTHOR    = {Arun Narenthiran Sivakumar AND Sahil Modi AND Mateus Valverde Gasparino AND Che Ellis AND Andres Eduardo {Baquero Velasquez} AND Girish Chowdhary AND Saurabh Gupta},
    TITLE     = {{Learned Visual Navigation for Under-Canopy Agricultural Robots}},
    BOOKTITLE = {Proceedings of Robotics: Science and Systems},
    YEAR      = {2021},
    ADDRESS   = {Virtual},
    MONTH     = {July},
    DOI       = {10.15607/RSS.2021.XVII.019}
}