
Robust stereo visual odometry: A comparison of random sample consensus algorithms based on three major hypothesis generators

Published online by Cambridge University Press:  12 May 2022

Guangzhi Guo
Affiliation:
Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai, China University of Chinese Academy of Sciences, Beijing, China
Zuoxiao Dai*
Affiliation:
Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai, China
Yuanfeng Dai
Affiliation:
Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai, China University of Chinese Academy of Sciences, Beijing, China
*Corresponding author. E-mail: [email protected]

Abstract

Almost all robust stereo visual odometry work uses the random sample consensus (RANSAC) algorithm for model estimation in the presence of noise and outliers. To date, there have been few comparative studies evaluating the performance of RANSAC algorithms built on different hypothesis generators. In this work, we analyse and compare three popular and efficient RANSAC schemes. They differ mainly in whether they use the two-dimensional (2-D) image points measured directly or the three-dimensional (3-D) points inferred through triangulation. We present several quantitative experiments that compare the accuracy, robustness and efficiency of each scheme under varying noise levels and outlier percentages. The results suggest that in the presence of noise and outliers, the perspective-three-point RANSAC provides more accurate and robust pose estimates. However, in the absence of noise, the iterative closest point RANSAC obtains better results regardless of the percentage of outliers. In terms of efficiency, measured by the number of RANSAC iterations, the perspective-three-point RANSAC is relatively faster under low noise levels and low outlier percentages; otherwise, the iterative closest point RANSAC may be computationally more efficient.
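The abstract measures efficiency by the number of RANSAC iterations. That count follows the standard bound from Fischler and Bolles (1981): the number of hypotheses needed so that, with a chosen confidence, at least one random minimal sample is outlier-free. A minimal sketch of this relation, with illustrative function and parameter names (not from the paper):

```python
import math

def ransac_iterations(outlier_ratio: float, sample_size: int,
                      confidence: float = 0.99) -> int:
    """Iterations needed so that, with the given confidence, at least
    one sampled minimal set contains no outliers."""
    inlier_ratio = 1.0 - outlier_ratio
    # Probability that one random sample of `sample_size`
    # correspondences consists only of inliers.
    p_good_sample = inlier_ratio ** sample_size
    return math.ceil(math.log(1.0 - confidence) /
                     math.log(1.0 - p_good_sample))

# A perspective-three-point hypothesis needs 3 correspondences,
# so fewer iterations suffice than for larger minimal sets.
print(ransac_iterations(outlier_ratio=0.5, sample_size=3))  # -> 35
```

Because the required iteration count grows exponentially with the minimal sample size, generators needing fewer points (such as the perspective-three-point solver) tend to be cheaper at a given outlier ratio, which is consistent with the efficiency trend the abstract reports at low noise levels.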

Type
Research Article
Copyright
Copyright © The Author(s), 2022. Published by Cambridge University Press on behalf of The Royal Institute of Navigation

