Navigation tasks are often subject to several constraints that are related to the sensors (e.g., visibility) or imposed by the environment (e.g., obstacles). In this paper, we propose a framework for autonomous omnidirectional wheeled robots that accounts for both collision and occlusion risks during sensor-based navigation. The task consists of driving the robot towards a visual target in the presence of static and moving obstacles. The target is acquired by fixed on-board cameras with a limited field of view, while the surrounding obstacles are detected by lidar scanners. To perform the task, the robot must not only keep the target in view while avoiding the obstacles, but also predict the target's location whenever it is occluded. The effectiveness of our approach is validated through several experiments.
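The abstract does not specify how the target's location is predicted during occlusion; as an illustrative sketch only, the snippet below shows one standard approach: a constant-velocity Kalman filter that fuses camera measurements while the target is visible and propagates the estimate open-loop while it is hidden. All names and parameters here (`TargetPredictor`, `dt`, `q`, `r`) are hypothetical and not taken from the paper.

```python
# Hypothetical sketch: the paper's actual estimator is not described in the
# abstract. A constant-velocity Kalman filter is one common way to keep
# predicting a target's position while it is occluded.
import numpy as np

class TargetPredictor:
    """Tracks target state [x, y, vx, vy] and predicts through occlusions."""
    def __init__(self, dt=0.05, q=0.1, r=0.05):
        self.x = np.zeros(4)                   # state estimate
        self.P = np.eye(4)                     # state covariance
        self.F = np.eye(4)                     # constant-velocity motion model
        self.F[0, 2] = dt
        self.F[1, 3] = dt
        self.H = np.eye(2, 4)                  # the camera measures position only
        self.Q = q * np.eye(4)                 # process noise
        self.R = r * np.eye(2)                 # measurement noise

    def predict(self):
        # Propagate the estimate one step; during occlusion this is the only
        # information available about the target's location.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        # Fuse a camera measurement z = (x, y) when the target is visible.
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

# Usage: call predict() every control cycle; call update() only when the
# target is inside the camera's field of view (None simulates occlusion).
tracker = TargetPredictor()
for z in [np.array([1.00, 0.00]), np.array([1.05, 0.02]), None, None]:
    pos = tracker.predict()
    if z is not None:
        tracker.update(z)
print("predicted target position during occlusion:", pos)
```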