
A vision system for mobile robot navigation

Published online by Cambridge University Press:  09 March 2009

M. Elarbi Boudihir
Affiliation:
CRAN-INPL, CNRS URA 821, 2, Avenue de la Forêt de Haye, Vandœuvre-lès-Nancy 54516 Cedex, France
M. Dufaut
Affiliation:
CRAN-INPL, CNRS URA 821, 2, Avenue de la Forêt de Haye, Vandœuvre-lès-Nancy 54516 Cedex, France
R. Husson
Affiliation:
CRAN-INPL, CNRS URA 821, 2, Avenue de la Forêt de Haye, Vandœuvre-lès-Nancy 54516 Cedex, France

Extract

A new vision system architecture has been developed to support the visual navigation of an autonomous mobile robot. The robot is primarily intended for urban park inspection, so it must be able to move through a complex, unstructured environment. The system consists of several modules, each responsible for a specific task in autonomous navigation. Coordination is handled by a central module, the supervisor, which triggers each module at the moment appropriate to the robot's current situation. Most of the processing time is spent in the scene exploration module, which uses the Hough transform to extract the dominant straight features. This module operates in two modes: an initial mode, applied to the first image acquired in order to initiate navigation, and a continuous mode, which processes subsequent images taken at the end of each blind distance. To reduce reliance on visual data alone, a detailed map of the environment has been established, and an algorithm predicts the scene from the robot position supplied by the localization system. The predicted scene is used by the knowledge base to validate the detected objects; the knowledge base then combines the acquired and predicted data to construct a scene model, the main element of the vision system.
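The straight-feature extraction mentioned above rests on the standard (ρ, θ) parameterization of the Hough transform: each edge point votes for every line ρ = x·cos θ + y·sin θ passing through it, and peaks in the accumulator correspond to dominant lines such as road edges. The following is a minimal NumPy sketch of that voting scheme, not the paper's implementation; the function name, bin counts, and the synthetic edge points are illustrative assumptions.

```python
import numpy as np

def hough_dominant_line(points, img_size, n_theta=180, n_rho=200):
    """Accumulate edge points into a (rho, theta) Hough space and
    return the (rho, theta) of the strongest straight feature."""
    diag = float(np.hypot(*img_size))               # max possible |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_edges = np.linspace(-diag, diag, n_rho + 1)  # accumulator bin edges
    acc = np.zeros((n_rho, n_theta), dtype=int)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in points:
        rhos = x * cos_t + y * sin_t                 # rho for every theta
        idx = np.searchsorted(rho_edges, rhos) - 1   # bin index per theta
        acc[idx, np.arange(n_theta)] += 1            # cast one vote per theta
    r, t = np.unravel_index(acc.argmax(), acc.shape) # strongest peak
    rho = 0.5 * (rho_edges[r] + rho_edges[r + 1])    # bin-center rho
    return rho, thetas[t]

# Synthetic edge map: points along the vertical line x = 40,
# standing in for one detected road edge.
pts = [(40, y) for y in range(100)]
rho, theta = hough_dominant_line(pts, (100, 100))
# theta comes back near 0 and rho near 40 for this line
```

Collinear points concentrate their votes in a single (ρ, θ) bin, so one accumulator maximum per road edge survives even when the edge map is noisy, which is what makes the transform suited to unstructured outdoor scenes.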

Type
Research Article
Copyright
Copyright © Cambridge University Press 1994

