Embodiment-Agnostic Navigation Policy Trained with Visual Demonstrations

Tel-Aviv University
Project Teaser

Abstract

Learning to navigate in unstructured environments is a challenging task for robots. While reinforcement learning can be effective, it often requires extensive data collection and can pose safety risks. Learning from expert demonstrations, on the other hand, offers a more efficient approach. However, many existing methods depend on specific robot embodiments and pre-specified target images, and require large datasets. We propose Visual Demonstration-based Embodiment-agnostic Navigation (ViDEN), a novel framework that leverages visual demonstrations to train embodiment-agnostic navigation policies. ViDEN utilizes depth images to reduce input dimensionality and relies on relative target positions, making it more adaptable to diverse environments. By training a diffusion-based policy on task-centric, embodiment-agnostic demonstrations, ViDEN generates collision-free, adaptive trajectories in real time. Our experiments on human reaching and tracking demonstrate that ViDEN outperforms existing methods, requiring only a small amount of data while achieving superior performance across various indoor and outdoor navigation scenarios.
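At inference time, a diffusion policy of this kind turns a noisy waypoint sequence into a trajectory by iteratively denoising it, conditioned on the current observation (here, depth features and a relative target position). The sketch below illustrates that reverse-diffusion loop only; the horizon, feature sizes, and the stand-in noise predictor are illustrative assumptions, not ViDEN's actual trained network or hyperparameters.

```python
import numpy as np

# Hypothetical dimensions for illustration (not from the paper):
HORIZON = 8   # future waypoints per trajectory
ACT_DIM = 2   # planar (x, y) displacement per waypoint
STEPS = 50    # denoising iterations

def noise_predictor(traj, t, depth_feat, target_rel):
    # Stand-in for the learned network eps_theta(traj, t | observation).
    # It nudges samples toward the relative target so the loop runs;
    # the real model is trained on visual demonstrations.
    direction = np.tile(target_rel / HORIZON, (HORIZON, 1))
    return traj - direction

def sample_trajectory(depth_feat, target_rel, rng):
    """DDPM-style reverse process producing a waypoint sequence."""
    betas = np.linspace(1e-4, 0.02, STEPS)
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)

    traj = rng.standard_normal((HORIZON, ACT_DIM))  # start from pure noise
    for t in reversed(range(STEPS)):
        eps = noise_predictor(traj, t, depth_feat, target_rel)
        # DDPM posterior-mean update; no noise is injected at t == 0
        traj = (traj - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            traj += np.sqrt(betas[t]) * rng.standard_normal(traj.shape)
    return traj

# Example: condition on dummy depth features and a target 2 m ahead.
rng = np.random.default_rng(0)
trajectory = sample_trajectory(np.zeros(16), np.array([2.0, 0.0]), rng)
```

Running the loop repeatedly with fresh observations is what allows the policy to adapt the trajectory in real time as the target or obstacles move.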

Method

Deploy - Outdoor

Deploy - Indoor

Handling of disturbances

Low light

Dynamic obstacle

Physical perturbations

Examples

BibTeX

@misc{curtis2024embodimentagnosticnavigationpolicytrained,
  title={Embodiment-Agnostic Navigation Policy Trained with Visual Demonstrations},
  author={Nimrod Curtis and Osher Azulay and Avishai Sintov},
  year={2024},
  eprint={2412.20226},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2412.20226},
}