Published online by Cambridge University Press: 10 April 2014
We propose a distributed algorithm for estimating the 3D pose (position and orientation) of multiple robots with respect to a common frame of reference when the Global Positioning System (GPS) is unavailable. The algorithm does not rely on maps or on the ability to recognize landmarks in the environment. Instead, we assume that noisy relative measurements between pairs of robots are intermittently available, which can be any one, or a combination, of the following: relative pose, relative orientation, relative position, relative bearing, and relative distance. The additional information about each robot's pose provided by these measurements is used to improve upon self-localization estimates. The proposed method is similar in spirit to pose-graph optimization: pose estimates are obtained by solving an optimization problem on the underlying Riemannian manifold $(SO(3)\times\mathbb{R}^3)^{n(k)}$. The proposed algorithm is directly applicable to 3D pose estimation, can fuse heterogeneous measurement types, and can handle arbitrary time variation in the neighbor relationships among robots. Simulations show that the errors in the pose estimates obtained using this algorithm are significantly lower than those achieved when robots estimate their poses without cooperation. Results from experiments with a pair of ground robots equipped with vision-based sensors reinforce these findings. Further, simulations comparing the proposed algorithm with two state-of-the-art collaborative localization algorithms identify the circumstances under which the proposed algorithm outperforms the existing methods. In addition, we consider the trade-off between the cost of obtaining a particular type of relative measurement and the benefit it provides in localization accuracy.
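To make the pose-graph-style formulation concrete, the following is a minimal sketch (not the authors' implementation) of collaborative localization on $(SO(3)\times\mathbb{R}^3)^n$: each robot keeps a dead-reckoned pose prior, and noisy relative-pose measurements between robot pairs are fused by a first-order update on the manifold using the exponential and logarithm maps of $SO(3)$. The function names, weights, and the simple gradient-style iteration are illustrative assumptions, not the specific algorithm of the paper.

```python
import numpy as np

def hat(w):
    """Map a 3-vector to the corresponding skew-symmetric matrix."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def exp_so3(w):
    """Exponential map so(3) -> SO(3) (Rodrigues' formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K

def log_so3(R):
    """Logarithm map SO(3) -> so(3), returned as a 3-vector."""
    cos_theta = np.clip((np.trace(R) - 1) / 2, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-12:
        return np.zeros(3)
    w_hat = (R - R.T) * theta / (2 * np.sin(theta))
    return np.array([w_hat[2, 1], w_hat[0, 2], w_hat[1, 0]])

def refine_poses(priors, rel_meas, iters=200, step=0.1):
    """Refine pose estimates {(R_i, p_i)} from dead-reckoning priors and
    noisy relative-pose measurements (i, j, R_ij, p_ij), modeled as
    R_ij ~ R_i^T R_j and p_ij ~ R_i^T (p_j - p_i)."""
    poses = [(R.copy(), p.copy()) for R, p in priors]
    for _ in range(iters):
        upd_R = [np.zeros(3) for _ in poses]   # rotation updates in so(3)
        upd_p = [np.zeros(3) for _ in poses]   # translation updates in R^3
        # Pull each estimate toward its prior (self-localization term).
        for i, (R, p) in enumerate(poses):
            R0, p0 = priors[i]
            upd_R[i] += log_so3(R.T @ R0)
            upd_p[i] += (p0 - p)
        # Pull pairs of estimates toward consistency with each measurement.
        for i, j, R_ij, p_ij in rel_meas:
            Ri, pi = poses[i]
            Rj, pj = poses[j]
            # Rotation residual: mismatch between predicted and measured R_ij.
            r = log_so3((Ri.T @ Rj).T @ R_ij)
            upd_R[j] += r
            upd_R[i] -= r
            # Translation residual expressed in the common frame.
            e = Ri @ p_ij - (pj - pi)
            upd_p[j] += e
            upd_p[i] -= e
        # Retract: rotations move via the exponential map, translations in R^3.
        poses = [(R @ exp_so3(step * gR), p + step * gp)
                 for (R, p), gR, gp in zip(poses, upd_R, upd_p)]
    return poses

# Toy usage: two robots, noisy priors, one relative-pose measurement.
rng = np.random.default_rng(0)
true_poses = [(np.eye(3), np.zeros(3)),
              (exp_so3(np.array([0.0, 0.0, 0.5])), np.array([1.0, 0.0, 0.0]))]
priors = [(R @ exp_so3(0.05 * rng.standard_normal(3)),
           p + 0.05 * rng.standard_normal(3)) for R, p in true_poses]
Ri, pi = true_poses[0]; Rj, pj = true_poses[1]
rel_meas = [(0, 1, Ri.T @ Rj, Ri.T @ (pj - pi))]
refined = refine_poses(priors, rel_meas)
```

In this toy setup each robot's rotation is updated by retraction along the exponential map while its position is updated directly in $\mathbb{R}^3$, mirroring the product-manifold structure; other measurement types (bearing, distance, orientation-only) would contribute analogous residual terms.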