
A Study of Sensor-Fusion Mechanism for Mobile Robot Global Localization

Published online by Cambridge University Press:  22 April 2019

Yonggang Chen
Affiliation:
School of Electro-mechanical Engineering, Guangdong University of Technology, Guangzhou, China. E-mails: [email protected], [email protected]
Department of Electro-mechanical Engineering, Dongguan Polytechnic, Dongguan, China. E-mail: [email protected]
Weinan Chen*
Affiliation:
School of Electro-mechanical Engineering, Guangdong University of Technology, Guangzhou, China. E-mails: [email protected], [email protected]
Lei Zhu
Affiliation:
School of Electro-mechanical Engineering, Guangdong University of Technology, Guangzhou, China. E-mails: [email protected], [email protected]
Zerong Su
Affiliation:
Guangdong Key Laboratory of Modern Control Technology, Guangdong Institute of Intelligent Manufacturing, Guangzhou, China. E-mails: [email protected], [email protected]
Xuefeng Zhou
Affiliation:
Guangdong Key Laboratory of Modern Control Technology, Guangdong Institute of Intelligent Manufacturing, Guangzhou, China. E-mails: [email protected], [email protected]
Yisheng Guan
Affiliation:
School of Electro-mechanical Engineering, Guangdong University of Technology, Guangzhou, China. E-mails: [email protected], [email protected]
Guanfeng Liu
Affiliation:
Guangdong Polytechnic Normal University, Guangzhou, China. E-mail: [email protected]
*Corresponding author. E-mail: [email protected]

Summary

Estimating the robot state within a known map is an essential problem for a mobile robot; it is also referred to as "localization". Although LiDAR-based localization is practical in many applications, it is difficult to achieve global localization with LiDAR alone because of its low-dimensional feedback, especially in environments with repetitive geometric features. A sensor-fusion-based localization system is introduced in this paper, which is capable of addressing the global localization problem. Both LiDAR and vision sensors are integrated, making use of the rich information provided by the vision sensor and the robustness of LiDAR. A hybrid grid map is built for global localization, and a visual global descriptor is applied to speed up localization convergence, combined with a pose-refining pipeline for improving localization accuracy. In addition, a trigger mechanism is introduced to solve the kidnapped-robot problem and to verify the relocalization result. Experiments under different conditions are designed to evaluate the performance of the proposed approach, together with a comparison against existing localization systems. According to the experimental results, our system is able to solve the global localization problem, and the sensor-fusion mechanism in our system improves localization performance.
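The full method is given in the paper itself; as a rough illustration of the mechanism summarized above, the sketch below shows how a visual global descriptor can seed, and re-seed, a LiDAR-weighted Monte Carlo localization loop, with a weight-collapse trigger standing in for the kidnapped-robot detection. This is not the authors' implementation: all function names, data layouts, and thresholds (descriptor_distance, propose_from_keyframes, kidnap_threshold, and so on) are assumptions made for this example only.

import numpy as np

rng = np.random.default_rng(0)

def descriptor_distance(d1, d2):
    # Euclidean distance between two global image descriptors (e.g. a GIST-like vector).
    return np.linalg.norm(d1 - d2)

def propose_from_keyframes(query_desc, keyframes, n_particles, sigma=(0.3, 0.3, 0.1)):
    # Seed particles around the map keyframes whose descriptors best match the query image.
    dists = np.array([descriptor_distance(query_desc, kf["desc"]) for kf in keyframes])
    best = np.argsort(dists)[:3]                       # top-3 visually similar keyframes
    poses = np.array([keyframes[i]["pose"] for i in best])
    picks = rng.integers(0, len(poses), n_particles)   # pick one candidate keyframe per particle
    noise = rng.normal(0.0, sigma, (n_particles, 3))   # spread hypotheses in (x, y, yaw)
    return poses[picks] + noise

def lidar_likelihood(particles, scan, grid_map):
    # Placeholder observation model; a real system would ray-cast each pose into the
    # grid map and score its agreement with the measured scan.
    return np.ones(len(particles))

def mcl_step(particles, weights, scan, image_desc, keyframes, grid_map, kidnap_threshold=1e-3):
    # 1. Weight every pose hypothesis with the LiDAR observation model.
    weights = weights * lidar_likelihood(particles, scan, grid_map)
    total = weights.sum()

    # 2. Trigger: if all hypotheses explain the scan poorly, assume the robot was
    #    kidnapped and re-seed the filter from the visual global descriptor.
    if total < kidnap_threshold:
        particles = propose_from_keyframes(image_desc, keyframes, len(particles))
        weights = np.ones(len(particles)) / len(particles)
        return particles, weights

    # 3. Normalize and resample so the particle set concentrates around consistent poses.
    weights = weights / total
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.ones(len(particles)) / len(particles)

In the paper's system, the roles played here by the placeholder likelihood and the keyframe proposals are filled by the hybrid grid map, the visual global descriptor, and the pose-refining pipeline described in the summary.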

Type
Articles
Copyright
© Cambridge University Press 2019 

Footnotes

The first two authors contributed equally to this work.


Chen et al. supplementary material

Video, 69.9 MB