
Mobile robot tracking control based on lightweight network

Published online by Cambridge University Press: 19 March 2025

Yiming Hua
Affiliation:
School of Artificial Intelligence, Anhui University, Hefei, 230601, China; Engineering Research Center of Autonomous Unmanned System Technology, Ministry of Education, Hefei, 230601, China; Anhui Provincial Key Laboratory of Security Artificial Intelligence, Hefei, 230601, China
Xueyou Huang
Affiliation:
School of Artificial Intelligence, Anhui University, Hefei, 230601, China; Engineering Research Center of Autonomous Unmanned System Technology, Ministry of Education, Hefei, 230601, China; Anhui Provincial Key Laboratory of Security Artificial Intelligence, Hefei, 230601, China
Haoxiang Li
Affiliation:
School of Artificial Intelligence, Anhui University, Hefei, 230601, China; Engineering Research Center of Autonomous Unmanned System Technology, Ministry of Education, Hefei, 230601, China; Anhui Provincial Key Laboratory of Security Artificial Intelligence, Hefei, 230601, China
Xiang Cao*
Affiliation:
School of Artificial Intelligence, Anhui University, Hefei, 230601, China; Engineering Research Center of Autonomous Unmanned System Technology, Ministry of Education, Hefei, 230601, China; Anhui Provincial Key Laboratory of Security Artificial Intelligence, Hefei, 230601, China
*Corresponding author: Xiang Cao; Email: [email protected]

Abstract

Target tracking technology is a key research area in the field of mobile robots, with wide applications in logistics, security, autonomous driving, and more. It generally involves two main components: target recognition and target following. However, the limited computational power of a mobile robot's onboard controller makes high-precision, fast target recognition and tracking difficult to achieve. To address this limitation, this paper proposes a target-tracking control algorithm based on a lightweight neural network. First, a backbone built from depthwise separable convolutions is introduced for feature extraction. Then, an efficient channel attention module is incorporated into the target recognition algorithm to suppress redundant features and emphasize important channels, thereby reducing model complexity and improving network efficiency. Finally, using data collected from visual and ultrasonic sensors, a model predictive control strategy is applied to achieve target tracking. The proposed algorithm is validated on a mobile robot equipped with a Raspberry Pi 4B. Experimental results demonstrate that it achieves rapid target tracking.
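To make the pipeline described above concrete, the sketch below (a minimal PyTorch example; the layer widths, kernel sizes, and block arrangement are illustrative assumptions, not the paper's exact architecture) shows how a depthwise separable convolution block can be paired with an Efficient Channel Attention (ECA) module so that channel re-weighting follows each lightweight convolution.

```python
# Illustrative sketch (not the authors' exact architecture): a depthwise
# separable convolution block followed by an Efficient Channel Attention
# (ECA) module. All layer sizes below are hypothetical.
import math
import torch
import torch.nn as nn


class ECA(nn.Module):
    """Efficient Channel Attention: a 1-D convolution over pooled channel descriptors."""

    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        # Kernel size adapted to the channel count, as in the ECA-Net paper.
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.pool(x)                                  # (N, C, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(-1, -2))    # (N, 1, C)
        y = self.sigmoid(y.transpose(-1, -2).unsqueeze(-1))
        return x * y                                      # re-weight channels


class DSConvECA(nn.Module):
    """Depthwise separable convolution block with ECA re-weighting."""

    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, 1,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)
        self.eca = ECA(out_ch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.act(self.bn(self.pointwise(self.depthwise(x))))
        return self.eca(x)


if __name__ == "__main__":
    block = DSConvECA(32, 64, stride=2)
    print(block(torch.randn(1, 32, 112, 112)).shape)  # torch.Size([1, 64, 56, 56])
```

The tracking stage can be illustrated in the same spirit. The following sketch implements a small finite-horizon model predictive controller for a differential-drive (unicycle-model) robot that maintains a desired gap to a target position estimate, such as one fused from camera and ultrasonic readings; the horizon, sample time, velocity limits, cost weights, and desired gap are assumed values for illustration rather than the paper's parameters.

```python
# Minimal MPC sketch for target following (hypothetical parameters, not the
# paper's formulation): optimize a short control sequence for a unicycle
# model and apply only the first command, receding-horizon style.
import numpy as np
from scipy.optimize import minimize

DT, HORIZON = 0.1, 10          # sample time [s] and prediction horizon (assumed)
V_MAX, W_MAX = 0.5, 1.5        # velocity limits [m/s, rad/s] (assumed)


def rollout(state, controls):
    """Propagate unicycle kinematics (x, y, theta) over the horizon."""
    x, y, th = state
    traj = []
    for v, w in controls.reshape(-1, 2):
        x += v * np.cos(th) * DT
        y += v * np.sin(th) * DT
        th += w * DT
        traj.append((x, y))
    return np.array(traj)


def mpc_step(state, target_xy, desired_gap=0.8):
    """Return the first (v, w) of the optimized control sequence."""

    def cost(u):
        traj = rollout(state, u)
        gap_err = np.linalg.norm(traj - target_xy, axis=1) - desired_gap
        return np.sum(gap_err ** 2) + 0.01 * np.sum(u ** 2)   # tracking + effort

    u0 = np.zeros(2 * HORIZON)
    bounds = [(0.0, V_MAX), (-W_MAX, W_MAX)] * HORIZON
    sol = minimize(cost, u0, bounds=bounds, method="L-BFGS-B")
    return sol.x[:2]


if __name__ == "__main__":
    v, w = mpc_step(np.array([0.0, 0.0, 0.0]), np.array([2.0, 1.0]))
    print(f"commanded v={v:.2f} m/s, w={w:.2f} rad/s")
```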

Type: Research Article
Copyright: © The Author(s), 2025. Published by Cambridge University Press

