
Priority-based intelligent resolution method of multi-aircraft flight conflicts

Published online by Cambridge University Press: 16 October 2024

D. Sui
Affiliation:
Nanjing University of Aeronautics and Astronautics, College of Civil Aviation, Nanjing, China
Z. Zhou*
Affiliation:
Nanjing University of Aeronautics and Astronautics, College of Civil Aviation, Nanjing, China
X. Cui
Affiliation:
Nanjing University of Aeronautics and Astronautics, College of Civil Aviation, Nanjing, China
* Corresponding author: Z. Zhou; Email: [email protected]

Abstract

The rising demand for air traffic will inevitably result in a surge in both the number and complexity of flight conflicts, necessitating intelligent strategies for conflict resolution. This study addresses the critical challenges of scalability and real-time performance in multi-aircraft flight conflict resolution by proposing a comprehensive method that integrates a priority ranking mechanism with a conflict resolution model based on the Markov decision process (MDP). Within this framework, the proximity between aircraft in a multi-aircraft conflict set is dynamically assessed to establish a conflict resolution ranking mechanism. The problem of multi-aircraft conflict resolution is formalised as an MDP, encompassing the design of the state space, discrete action space and reward function, with the transition function implemented via simulation-based prediction using model-free methods. To address the positional uncertainty of aircraft in real-time scenarios, the conflict detection mechanism incorporates the aircraft's positional error. A deep reinforcement learning (DRL) environment is constructed that incorporates actual airspace structures and traffic densities, and the Actor Critic using Kronecker-Factored Trust Region (ACKTR) algorithm is used to determine resolution actions. The experimental results indicate that, with 20–30 aircraft in the airspace, the success rate reaches 94% on the training set and 85% on the test set. Furthermore, this study analyses the impact of varying aircraft numbers on the success rate within a specific airspace scenario. The outcomes of this research provide valuable insights for the automation of flight conflict resolution.
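
The sketch below illustrates, in schematic form, the two building blocks named in the abstract: a proximity-based priority ranking over a multi-aircraft conflict set, and an MDP-style environment with a discrete action space whose transition is obtained by stepping a simulation forward. It is a minimal, hypothetical example for orientation only; all names, thresholds, toy dynamics and the reward shaping are assumptions, not the authors' implementation or the ACKTR training setup described in the paper.

```python
# Illustrative sketch only: assumed names, thresholds and toy straight-line dynamics.
import math
from dataclasses import dataclass
from typing import List, Tuple

SEPARATION_NM = 5.0  # assumed horizontal separation minimum (nautical miles)

@dataclass
class Aircraft:
    x: float   # position components
    y: float
    vx: float  # velocity components per simulation step
    vy: float

def time_to_cpa(a: Aircraft, b: Aircraft) -> float:
    """Time to closest point of approach for two aircraft under straight-line motion."""
    dx, dy = b.x - a.x, b.y - a.y
    dvx, dvy = b.vx - a.vx, b.vy - a.vy
    v2 = dvx * dvx + dvy * dvy
    if v2 == 0.0:
        return 0.0
    return max(0.0, -(dx * dvx + dy * dvy) / v2)

def priority_ranking(fleet: List[Aircraft]) -> List[int]:
    """Rank aircraft so that those closest (in time) to their nearest conflict act first."""
    urgency = []
    for i, a in enumerate(fleet):
        t_min = min((time_to_cpa(a, b) for j, b in enumerate(fleet) if j != i),
                    default=math.inf)
        urgency.append((t_min, i))
    return [i for _, i in sorted(urgency)]

class ConflictResolutionEnv:
    """Toy MDP: state = flattened positions/velocities, discrete heading-change actions."""
    ACTIONS_DEG = [-30, -15, 0, 15, 30]  # assumed discrete heading changes

    def __init__(self, fleet: List[Aircraft]):
        self.initial = [Aircraft(a.x, a.y, a.vx, a.vy) for a in fleet]
        self.fleet = [Aircraft(a.x, a.y, a.vx, a.vy) for a in fleet]

    def reset(self) -> List[float]:
        self.fleet = [Aircraft(a.x, a.y, a.vx, a.vy) for a in self.initial]
        return self._state()

    def step(self, agent_idx: int, action: int) -> Tuple[List[float], float, bool]:
        # Apply the chosen heading change to the acting aircraft.
        a = self.fleet[agent_idx]
        theta = math.radians(self.ACTIONS_DEG[action])
        a.vx, a.vy = (a.vx * math.cos(theta) - a.vy * math.sin(theta),
                      a.vx * math.sin(theta) + a.vy * math.cos(theta))
        # Model-free transition: advance the simulation one step for all aircraft.
        for ac in self.fleet:
            ac.x += ac.vx
            ac.y += ac.vy
        # Conflict detection: any pair closer than the separation minimum.
        conflict = any(math.hypot(p.x - q.x, p.y - q.y) < SEPARATION_NM
                       for i, p in enumerate(self.fleet)
                       for q in self.fleet[i + 1:])
        reward = -1.0 if conflict else 0.1   # assumed shaping: penalise loss of separation
        return self._state(), reward, conflict

    def _state(self) -> List[float]:
        return [v for a in self.fleet for v in (a.x, a.y, a.vx, a.vy)]
```

In such a setup, the ranking function would decide which aircraft the learned policy acts on next, while the environment's step function plays the role of the simulation-based transition; a DRL algorithm such as ACKTR would then be trained against this interface.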

Type
Research Article
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of the Royal Aeronautical Society
