Robotic sensing is a relatively young field compared with the design and control of robot mechanisms. In both areas, geometry plays a natural and necessary role in the development of devices and in their control and use in challenging environments. In the early days, odometry and tactile and touch sensors dominated robot sensing. More recently, as the price of laser devices has fallen, laser range sensors have become increasingly attractive to the community. In parallel, progress in photogrammetry, particularly during the nineties as n-view geometry within projective geometry matured, bootstrapped the use of computer vision as a further powerful sensing technique for robot guidance. Cameras were used in monocular or stereoscopic configurations; catadioptric systems for omnidirectional vision, fish-eye cameras, and camera networks made the use of computer vision even more diverse. Researchers then began to combine sensors for 2D and 3D sensing by fusing sensor data in a projective framework.

Thanks to continuous progress in mechatronics, the low price of fast computers, and the increasing accuracy of sensor systems, one can now build a robot that perceives its surroundings, reconstructs them, plans, and ultimately acts intelligently. In such perception-action systems there is, of course, an urgent need for a geometric stochastic framework that deals with uncertainty in sensing, planning, and action in a robust manner. Here geometry can play a central role in representation and computation in higher dimensions, using projective geometry and differential geometry on Lie group manifolds equipped with a pseudo-Euclidean metric. Let us briefly review the developments toward modern geometry that have often been overlooked by robotics researchers and practitioners.