
A semantic knowledge database-based localization method for UAV inspection in perceptual-degraded underground mine

Published online by Cambridge University Press:  30 October 2024

Qinghua Liang
Affiliation:
Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
Minghui Zhao
Affiliation:
School of Electronics and Information Engineering, Tongji University, Shanghai, China; China Coal Technology & Engineering Group Shanghai Co. Ltd., Shanghai, China
Shigang Wang
Affiliation:
Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
Min Chen*
Affiliation:
Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
*
Corresponding author: Min Chen; Email: [email protected]

Abstract

In recent years, unmanned aerial vehicles (UAVs) have been applied to underground mine inspection and similar tasks owing to their versatility and mobility. However, accurate UAV localization in perceptually degraded mines remains challenging because of harsh lighting conditions and structurally similar roadways. Exploiting the distinctive characteristics of underground mines, this paper proposes a semantic knowledge database-based localization method for UAVs. By minimizing the spatial point-to-edge and point-to-plane distances, a relative pose constraint factor between keyframes is designed for continuous UAV pose estimation. To reduce the localization error accumulated during long-distance flight in a perceptually degraded mine, a semantic knowledge database is established by segmenting the intersection point clouds from a prior map of the mine. The topological feature of the current keyframe is detected in real time during flight, and an intersection position constraint factor is constructed by comparing this feature against the intersections stored in the semantic knowledge database. Combining the relative pose constraint factor of LiDAR keyframes with the intersection position constraint factor, a factor graph optimization model of the UAV pose is established to estimate the flight pose and eliminate the cumulative error. Two localization experiments, conducted in the simulated large-scale Edgar Mine and in a mine-like indoor corridor, indicate that the proposed method achieves accurate localization during long-distance flight in degraded mines.
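
The pose estimation described in the abstract can be framed as a factor graph in which scan-matching results supply relative pose factors between consecutive keyframes and recognized intersections supply absolute position factors. Below is a minimal, illustrative sketch of that combination using GTSAM's Python bindings. GTSAM itself, the noise sigmas, the per-keyframe motion, and the intersection coordinates are assumptions for illustration, not the authors' implementation; the position-only intersection constraint is approximated with a pose prior whose rotation sigmas are deliberately loose.

# Minimal pose-graph sketch (not the paper's implementation): relative pose
# constraints between LiDAR keyframes plus a position-only constraint at a
# recognized intersection, optimized jointly with GTSAM.
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

X = gtsam.symbol_shorthand.X  # keyframe pose keys

# Noise models (illustrative sigmas); GTSAM orders Pose3 sigmas as [rotation; translation].
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.02, 0.02, 0.02, 0.1, 0.1, 0.1]))
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([1e-3] * 6))
graph.add(gtsam.PriorFactorPose3(X(0), gtsam.Pose3(), prior_noise))
initial.insert(X(0), gtsam.Pose3())

# Relative pose constraint factors from scan matching between consecutive keyframes
# (placeholder increments: 1 m forward per keyframe along x, with a slightly drifted
# initial guess to give the optimizer something to correct).
for i in range(1, 5):
    delta = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.0, 0.0, 0.0))
    graph.add(gtsam.BetweenFactorPose3(X(i - 1), X(i), delta, odom_noise))
    initial.insert(X(i), gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(float(i), 0.05 * i, 0.0)))

# Intersection position constraint: suppose keyframe 4 matched an intersection stored
# in the semantic knowledge database at (4, 0, 0). Loose rotation sigmas make this
# prior behave as a position-only anchor that suppresses accumulated drift.
intersection_noise = gtsam.noiseModel.Diagonal.Sigmas(
    np.array([1e2, 1e2, 1e2, 0.05, 0.05, 0.05]))
graph.add(gtsam.PriorFactorPose3(
    X(4), gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(4.0, 0.0, 0.0)), intersection_noise))

# Jointly optimize all keyframe poses and report the corrected position of keyframe 4.
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(X(4)).translation())

In this sketch the intersection factor plays the role the paper assigns to the semantic knowledge database match: it anchors one keyframe to a globally known position, so the optimizer redistributes the accumulated odometry error along the whole trajectory rather than letting it grow with distance flown.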

Type
Research Article
Copyright
© The Author(s), 2024. Published by Cambridge University Press

