
A HIERARCHICAL MACHINE LEARNING WORKFLOW FOR OBJECT DETECTION OF ENGINEERING COMPONENTS

Published online by Cambridge University Press: 19 June 2023

Lee Kent, Chris Snider*, James Gopsill, Mark Goudswaard, Aman Kukreja and Ben Hicks
Affiliation: University of Bristol, United Kingdom

*Corresponding author: Chris Snider, University of Bristol, United Kingdom, [email protected]

Abstract


Machine Learning (ML) techniques are showing increasing use and value in the engineering sector. Object Detection methods, by which an ML system identifies and locates objects within an image presented to it, have demonstrated promise for search and retrieval and for synchronised physical/digital version control, amongst many other applications.

However, detection accuracy often decreases as the number of objects the system must consider increases, which, combined with very long training times and high computational overhead, makes widespread use infeasible.

This work presents a hierarchical ML workflow that leverages the pre-existing taxonomic structures of engineering components and abundant digital (CAD) models to streamline training and increase accuracy. With a two-layer structure, the approach demonstrates the potential to increase accuracy to above 90% and to reduce training time by 75%, with greatly increased flexibility and expandability.
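To make the two-layer structure concrete, the following minimal Python sketch (using PyTorch) routes an image first through a coarse component-family classifier and then through a family-specific model, so each layer only ever discriminates between a small number of classes. The taxonomy, class names, and tiny CNN backbones are illustrative assumptions for this sketch, not the paper's implementation, which additionally localises objects and trains on images rendered from CAD models.

# Minimal sketch of a two-layer hierarchical classifier. A coarse
# "family" label routes the image to a family-specific fine model.
# All class names and model sizes are hypothetical, for illustration.
import torch
import torch.nn as nn

def small_cnn(n_classes: int) -> nn.Module:
    # Deliberately tiny backbone so the sketch runs without pretrained weights.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, n_classes),
    )

TAXONOMY = {  # hypothetical two-level component taxonomy
    "fastener": ["hex_bolt", "wing_nut", "washer"],
    "bearing": ["ball_bearing", "roller_bearing"],
}

families = list(TAXONOMY)
family_model = small_cnn(len(families))           # layer 1: component family
fine_models = {f: small_cnn(len(parts))           # layer 2: one model per family
               for f, parts in TAXONOMY.items()}

@torch.no_grad()
def classify(image: torch.Tensor) -> str:
    # Layer 1: pick the most likely family for the (1, 3, H, W) image.
    family = families[family_model(image).argmax(dim=1).item()]
    # Layer 2: only the matching family model is evaluated, keeping the
    # number of classes considered at each layer small.
    part_idx = fine_models[family](image).argmax(dim=1).item()
    return f"{family}/{TAXONOMY[family][part_idx]}"

print(classify(torch.randn(1, 3, 128, 128)))  # e.g. "bearing/ball_bearing"

Because only the matching second-layer model is trained or evaluated for a given family, adding a new component type requires retraining only one small model rather than the whole system, which is the source of the flexibility and expandability noted above.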

While further refinement is required to increase the robustness of detection and to investigate scalability, the approach shows significant promise for increasing the feasibility of Object Detection techniques in engineering.

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Copyright
The Author(s), 2023. Published by Cambridge University Press
