
A MULTI-AGENT REINFORCEMENT LEARNING FRAMEWORK FOR INTELLIGENT MANUFACTURING WITH AUTONOMOUS MOBILE ROBOTS

Published online by Cambridge University Press:  27 July 2021

Akash Agrawal
Affiliation: The Pennsylvania State University
Sung Jun Won
Affiliation: The Pennsylvania State University
Tushar Sharma
Affiliation: Siemens Technology
Mayuri Deshpande
Affiliation: Siemens Technology
Christopher McComb*
Affiliation: The Pennsylvania State University
*Corresponding author: McComb, Christopher Carson, The Pennsylvania State University, School of Engineering Design, Technology, and Professional Programs, United States of America, [email protected]

Abstract


Intelligent manufacturing (IM) embraces Industry 4.0 design principles to advance autonomy and increase manufacturing efficiency. However, many IM systems are created ad hoc, which limits the potential for generalizable design principles and operational guidelines. This work offers a standardizing framework for integrated job scheduling and navigation control on a shop floor driven by autonomous mobile robots, an increasingly common IM paradigm. Specifically, we propose a multi-agent framework involving mobile robots, machines, and humans. Like any cyber-physical system, the performance of IM systems is influenced by the construction of the underlying software platforms and the choice of the constituent algorithms. In this work, we demonstrate the use of reinforcement learning on a sub-system of the proposed framework and test its effectiveness in a dynamic scenario. The case study demonstrates collaboration amongst robots to maximize throughput and safety on the shop floor. Moreover, we observe nuanced behavior, including the ability to autonomously compensate for processing delays and for machine and robot failures in real time.
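To make the reinforcement-learning idea concrete, the following is a minimal, purely illustrative sketch of tabular Q-learning applied to a toy job-dispatching problem: a single agent repeatedly chooses which of three machines receives the next job, observing queue lengths as its state. The environment dynamics, reward, machine count, and all names here are hypothetical assumptions for illustration; they are not the paper's actual framework, which uses multiple cooperating agents in a richer simulated shop floor.

```python
# Illustrative only: tabular Q-learning for a hypothetical dispatching
# agent that routes each incoming job to one of three machines.
# State: tuple of (capped) queue lengths; reward penalizes long queues.
import random

N_MACHINES = 3
MAX_QUEUE = 4                      # cap queues to keep the state space finite
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2  # learning rate, discount, exploration

Q = {}                             # maps (state, action) -> estimated value

def q(state, action):
    return Q.get((state, action), 0.0)

def choose(state):
    """Epsilon-greedy action selection."""
    if random.random() < EPS:
        return random.randrange(N_MACHINES)
    return max(range(N_MACHINES), key=lambda a: q(state, a))

def step(queues, action):
    """Assign a job to machine `action`; each busy machine then finishes
    one job with probability 0.5. Reward is minus the chosen queue length."""
    queues = list(queues)
    queues[action] = min(queues[action] + 1, MAX_QUEUE)
    for i in range(N_MACHINES):
        if queues[i] > 0 and random.random() < 0.5:
            queues[i] -= 1
    return tuple(queues), -queues[action]

random.seed(0)
state = (0, 0, 0)
for _ in range(5000):
    action = choose(state)
    nxt, reward = step(state, action)
    best_next = max(q(nxt, a) for a in range(N_MACHINES))
    # Standard Q-learning update (Watkins and Dayan, 1992).
    Q[(state, action)] = q(state, action) + ALPHA * (
        reward + GAMMA * best_next - q(state, action))
    state = nxt

# Greedy policy for a hypothetical state (queues of 3, 0, and 2 jobs);
# a well-trained policy tends to favor the shortest queue.
policy_action = max(range(N_MACHINES), key=lambda a: q((3, 0, 2), a))
```

The same update rule generalizes to the multi-agent setting, where each robot or machine maintains its own policy and the agents' actions jointly determine throughput; deep variants replace the table `Q` with a neural network.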

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Copyright
The Author(s), 2021. Published by Cambridge University Press
