
Robust ground plane region detection using multiple visual cues for obstacle avoidance of a mobile robot

Published online by Cambridge University Press:  04 March 2014

Chia-How Lin
Affiliation:
Institute of Electrical Control Engineering, National Chiao Tung University, Taiwan, Republic of China
Kai-Tai Song*
Affiliation:
Institute of Electrical Control Engineering, National Chiao Tung University, Taiwan, Republic of China
*Corresponding author. E-mail: [email protected]

Summary

This paper presents a vision-based obstacle avoidance design that uses a single monocular camera onboard a mobile robot. A novel image processing procedure estimates the distance between the robot and obstacles based on an inverse perspective transformation (IPT) of the image plane. A robust image processing solution detects and segments the drivable ground area within the camera view. The proposed method integrates robust feature matching with adaptive color segmentation for plane estimation and tracking, in order to cope with variations in illumination and camera view. After IPT and ground region segmentation, the system obtains distance measurements similar to those of a laser range finder for mobile robot obstacle avoidance and navigation. The merit of this algorithm is that the mobile robot gains path-finding and obstacle-avoidance capability from a single monocular camera. Practical experiments on a wheeled mobile robot show that the proposed imaging system successfully obtains distances to surrounding objects for reactive navigation in an indoor environment.
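The core idea of IPT-based ranging can be illustrated with a minimal sketch: under a flat-ground assumption, a pixel row in the image maps to a unique ground distance once the camera height, tilt, and intrinsics are known. The function and parameter names below (camera height, downward tilt, focal length `fy`, principal point `cy`) are illustrative assumptions for a pinhole model, not the paper's actual implementation.

```python
import math

def ground_distance(v, cam_height, tilt_rad, fy, cy):
    """Estimate the ground distance (in meters) to image row v,
    assuming a pinhole camera at height cam_height (m), pitched
    tilt_rad radians below the horizon, with vertical focal length
    fy and principal-point row cy (both in pixels)."""
    # Angle below the optical axis subtended by this pixel row.
    alpha = math.atan2(v - cy, fy)
    # Total angle below the horizon.
    beta = tilt_rad + alpha
    if beta <= 0:
        # Row lies at or above the horizon: no ground intersection.
        return float('inf')
    return cam_height / math.tan(beta)
```

For example, with a camera 0.5 m above the floor tilted 0.3 rad downward, the row through the principal point maps to a distance of 0.5 / tan(0.3) ≈ 1.62 m; rows lower in the image map to nearer ground points, which is how a segmented ground region can be converted into laser-scanner-like range readings.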

Type
Articles
Copyright
Copyright © Cambridge University Press 2014 

