Robotics: Science and Systems XXI
NaVILA: Legged Robot Vision-Language-Action Model for Navigation
An-Chieh Cheng, Yandong Ji, Zhaojing Yang, Zaitian Gongye, Xueyan Zou, Jan Kautz, Erdem Biyik, Hongxu Yin, Sifei Liu, Xiaolong Wang

Abstract:
This paper proposes to solve the problem of Vision-and-Language Navigation with legged robots, which not only provides a flexible way for humans to command the robot but also allows it to navigate through more challenging and cluttered scenes. However, it is non-trivial to translate human language instructions all the way to low-level leg joint actions. We propose NaVILA, a 2-level framework that unifies a Vision-Language-Action model (VLA) with locomotion skills. Instead of directly predicting low-level actions from the VLA, NaVILA first generates mid-level actions with spatial information in the form of language (e.g., “moving forward 75cm”), which serve as input to a visual locomotion RL policy for execution. NaVILA substantially improves over previous approaches on existing benchmarks. The same advantages hold on our newly developed IsaacLab benchmarks, which feature more realistic scenes and low-level controls, as well as in real-world robot experiments.
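To make the two-level interface concrete, below is a minimal sketch (not from the paper) of how a mid-level language action emitted by the VLA could be parsed into a command for the low-level locomotion policy. The function name, command vocabulary, and units are illustrative assumptions, not NaVILA's actual implementation.

import re

# Hypothetical parser for mid-level language actions. The VLA emits text
# such as "moving forward 75cm"; the visual locomotion RL policy would
# then execute the resulting command at a much higher control frequency.
def parse_mid_level_action(action_text: str):
    """Map a language action to a (command, magnitude) pair."""
    text = action_text.lower()
    if m := re.search(r"forward\s+(\d+)\s*cm", text):
        return "walk_forward", int(m.group(1)) / 100.0   # meters
    if m := re.search(r"turn\s+(left|right)\s+(\d+)\s*degrees?", text):
        sign = 1.0 if m.group(1) == "left" else -1.0
        return "turn", sign * float(m.group(2))          # degrees
    if "stop" in text:
        return "stop", 0.0
    raise ValueError(f"unrecognized mid-level action: {action_text!r}")

print(parse_mid_level_action("moving forward 75cm"))  # ('walk_forward', 0.75)

Expressing the intermediate actions in language (rather than raw joint targets) is what lets the framework pair one VLA with different low-level policies.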
Bibtex:
@INPROCEEDINGS{ChengA-RSS-25,
  AUTHOR    = {An-Chieh Cheng AND Yandong Ji AND Zhaojing Yang AND Zaitian Gongye AND Xueyan Zou AND Jan Kautz AND Erdem Biyik AND Hongxu Yin AND Sifei Liu AND Xiaolong Wang},
  TITLE     = {{NaVILA: Legged Robot Vision-Language-Action Model for Navigation}},
  BOOKTITLE = {Proceedings of Robotics: Science and Systems},
  YEAR      = {2025},
  ADDRESS   = {Los Angeles, CA, USA},
  MONTH     = {June},
  DOI       = {10.15607/RSS.2025.XXI.018}
}