DR-MPC: Deep Residual Model Predictive Control for Real-World Social Navigation
James Han†, Nicholas Rhinehart†, Hugues Thomas, Jian Zhang, Timothy D. Barfoot†
How can a robot safely navigate around people with complex motion patterns? Deep Reinforcement Learning (DRL) in simulation holds some promise, but much prior work relies on simulators that fail to capture the nuances of real human motion. Thus, we propose Deep Residual Model Predictive Control (DR-MPC) to enable robots to quickly and safely perform DRL from real-world crowd navigation data. By blending MPC with model-free DRL, DR-MPC overcomes the DRL challenges of large data requirements and unsafe initial behavior. DR-MPC is initialized with MPC-based path tracking, and gradually learns to interact more effectively with humans. To further accelerate learning, a safety component estimates out-of-distribution states to guide the robot away from likely collisions. In simulation, we show that DR-MPC substantially outperforms prior work, including traditional DRL and residual DRL models. Hardware experiments show our approach successfully enables a robot to navigate a variety of crowded situations with few errors using less than 4 hours of training data.
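The abstract describes two mechanisms: blending an MPC path-tracking action with a learned residual, and a safety component that overrides the policy when the state appears out-of-distribution. The snippet below is a minimal illustrative sketch of that idea, not the paper's actual architecture; the function name, blending rule, and all parameters (`alpha`, `ood_score`, `ood_threshold`, `a_safe`) are hypothetical.

```python
import numpy as np

def blend_action(a_mpc, a_residual, alpha, ood_score, ood_threshold, a_safe):
    """Illustrative DR-MPC-style action blending (hypothetical sketch).

    a_mpc:         action from the MPC path tracker
    a_residual:    action proposed by the learned (model-free DRL) component
    alpha:         blending weight in [0, 1]; 0 recovers pure MPC, so the
                   policy can start from safe MPC behavior and shift toward
                   the learned policy as training progresses
    ood_score:     estimated out-of-distribution score for the current state
    ood_threshold: score above which the safety component takes over
    a_safe:        conservative fallback action (e.g., slow down / stop)
    """
    if ood_score > ood_threshold:
        # Safety component: steer away from states likely to cause collisions
        return np.asarray(a_safe, dtype=float)
    # One plausible blending rule: convex combination of MPC and learned actions
    return (1.0 - alpha) * np.asarray(a_mpc, dtype=float) \
        + alpha * np.asarray(a_residual, dtype=float)
```

For example, with `alpha = 0.25` and an in-distribution state, the output stays close to the MPC action, matching the paper's premise of safe initial behavior that gradually incorporates learning.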
† University of Toronto