
Towards digital representations for brownfield factories using synthetic data generation and 3D object detection

Published online by Cambridge University Press: 16 May 2024

Javier Villena Toro* (Linköping University, Sweden)
Lars Bolin (Linköping University, Sweden)
Jacob Eriksson (Linköping University, Sweden)
Anton Wiberg (Linköping University, Sweden)

Abstract


This study emphasizes the importance of automatic synthetic data generation in data-driven applications, especially in the development of a 3D computer vision system for engineering contexts such as brownfield factory projects, where no training data is readily available. Key contributions: (1) the integration of a synthetic data generator with the S3DIS dataset, which significantly improves detection of the existing object classes and enables recognition of new ones; (2) a proposal for a CAD-based configurator for efficient and customizable scene reconstruction from LiDAR scanner point clouds.
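To make the first contribution more concrete, the following minimal sketch (Python, using only NumPy) illustrates the general idea of synthetic point-cloud generation in an S3DIS-style layout: surface points are sampled from simple box primitives standing in for CAD models of factory objects, and every point receives a semantic label. All names here (sample_points_on_box, make_scene, the object classes and file names) are illustrative assumptions for this sketch, not the authors' actual generator.

    # Minimal sketch (not the authors' generator): build a synthetic point-cloud
    # "scene" from box primitives standing in for CAD models, then store it in an
    # S3DIS-style layout (x, y, z, r, g, b per point plus a per-point semantic label).
    import numpy as np

    def sample_points_on_box(center, size, n_points, rng):
        """Uniformly sample points on the surface of an axis-aligned box."""
        faces = rng.integers(0, 6, n_points)           # which face each point lies on
        u, v = rng.random(n_points), rng.random(n_points)
        pts = np.empty((n_points, 3))
        half = np.asarray(size) / 2.0
        for i, (f, uu, vv) in enumerate(zip(faces, u, v)):
            axis = f // 2                              # 0 = x, 1 = y, 2 = z face pair
            sign = 1.0 if f % 2 else -1.0
            p = (np.array([uu, vv, 0.0]) - 0.5) * 2.0  # point in the face plane, in [-1, 1]
            p = np.roll(p, axis + 1)                   # move the fixed coordinate into place
            p[axis] = sign                             # pin the point to the chosen face
            pts[i] = p * half
        return pts + np.asarray(center)

    def make_scene(rng):
        """Compose a toy 'factory cell': a floor, a table, and a robot-sized box."""
        parts = [
            ("floor", (0, 0, 0.0), (6.0, 6.0, 0.02), 4000),
            ("table", (1, 0, 0.4), (1.2, 0.8, 0.80), 1500),
            ("robot", (1, 0, 1.1), (0.4, 0.4, 0.60), 1000),  # stand-in for e.g. a robot arm
        ]
        label_ids = {name: i for i, (name, *_rest) in enumerate(parts)}
        points, labels = [], []
        for name, center, size, n in parts:
            pts = sample_points_on_box(center, size, n, rng)
            points.append(pts)
            labels.append(np.full(len(pts), label_ids[name]))
        xyz = np.concatenate(points)
        rgb = rng.integers(0, 256, (len(xyz), 3))      # placeholder colours
        return np.hstack([xyz, rgb]), np.concatenate(labels)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        cloud, sem = make_scene(rng)
        # One N x 6 point array plus per-point semantic labels, mirroring how
        # S3DIS-style rooms are typically prepared for 3D object detectors.
        np.save("synthetic_scene_points.npy", cloud.astype(np.float32))
        np.save("synthetic_scene_sem_labels.npy", sem.astype(np.int64))

In a fuller pipeline, the box primitives would presumably be replaced by CAD meshes and a simulated scanner (in the spirit of the virtual laser scanning and ray-casting tools cited below), but the output format, points plus semantic labels, is what allows the synthetic scenes to be mixed with S3DIS for training.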

Type: Artificial Intelligence and Data-Driven Design
Creative Commons: CC BY-NC-ND 4.0
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Copyright: The Author(s), 2024.

References

ABB (2015), “Dual-arm YuMi - IRB 14000”, ABB.
Anderson, J.W., Kennedy, K.E., Ngo, L.B., Luckow, A. and Apon, A.W. (2014), “Synthetic data generation for the internet of things”, 2014 IEEE Big Data, pp. 171–176, https://dx.doi.org/10.1109/BigData.2014.7004228.
Armeni, I., Sax, A., Zamir, A.R. and Savarese, S. (2017), “Joint 2D-3D-Semantic Data for Indoor Scene Understanding”, 2017 IEEE CVPR, https://dx.doi.org/10.48550/arXiv.1702.01105.
Chang, A.X., Funkhouser, T., Guibas, L., Hanrahan, P., Huang, Q., Li, Z., Savarese, S., et al. (2015), “ShapeNet: An Information-Rich 3D Model Repository”, arXiv, https://dx.doi.org/10.48550/arXiv.1512.03012.
Chen, B., Wan, J., Shu, L., Li, P., Mukherjee, M. and Yin, B. (2018), “Smart Factory of Industry 4.0: Key Technologies, Application Case, and Challenges”, IEEE Access, Vol. 6, pp. 6505–6519, https://dx.doi.org/10.1109/ACCESS.2017.2783682.
Chen, W., Li, Y., Tian, Z. and Zhang, F. (2023), “2D and 3D object detection algorithms from images: A Survey”, Array, Vol. 19, p. 100305, https://doi.org/10.1016/j.array.2023.100305.
MMDetection3D Contributors (2020), “MMDetection3D: OpenMMLab next-generation platform for general 3D object detection”.
Dai, A., Chang, A.X., Savva, M., Halber, M., Funkhouser, T. and Nießner, M. (2017), “ScanNet: Richly-annotated 3D Reconstructions of Indoor Scenes”, 2017 IEEE CVPR, https://dx.doi.org/10.48550/arXiv.1702.04405.
Esteves, C., Allen-Blanchette, C., Makadia, A. and Daniilidis, K. (2017), “Learning SO(3) Equivariant Representations with Spherical CNNs”, 2017 IEEE CVPR, https://dx.doi.org/10.48550/ARXIV.1711.06721.
Fang, Y., Xie, J., Dai, G., Wang, M., Zhu, F., Xu, T. and Wong, E. (2015), “3D deep shape descriptor”, 2015 IEEE CVPR, pp. 2319–2328, https://dx.doi.org/10.1109/CVPR.2015.7298845.
Blender Foundation (n.d.), “Raycast Node”, Blender 3.6 Manual.
Martínez, G.S., Karhela, T.A., Ruusu, R.J., Sierla, S.A. and Vyatkin, V. (2018), “An Integrated Implementation Methodology of a Lifecycle-Wide Tracking Simulation Architecture”, IEEE Access, Vol. 6, pp. 15391–15407, https://dx.doi.org/10.1109/ACCESS.2018.2811845.
Maturana, D. and Scherer, S. (2015), “VoxNet: A 3D Convolutional Neural Network for real-time object recognition”, 2015 IEEE/RSJ IROS, pp. 922–928, https://dx.doi.org/10.1109/IROS.2015.7353481.
Nguyen, H.G., Habiboglu, R. and Franke, J. (2022), “Enabling deep learning using synthetic data: A case study for the automotive wiring harness manufacturing”, Procedia CIRP, Vol. 107, pp. 1263–1268, https://doi.org/10.1016/j.procir.2022.05.142.
Piascik, R., et al. (2010), “Technology Area 12: Materials, Structures, Mechanical Systems, and Manufacturing Road Map”, NASA Office of Chief Technologist.
Qi, C.R., Litany, O., He, K. and Guibas, L.J. (2019), “Deep Hough Voting for 3D Object Detection in Point Clouds”, 2019 ICCV, https://dx.doi.org/10.48550/arXiv.1904.09664.
Qi, C.R., Su, H., Mo, K. and Guibas, L.J. (2017), “PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation”, 2017 IEEE CVPR, https://dx.doi.org/10.48550/arXiv.1612.00593.
Qi, C.R., Yi, L., Su, H. and Guibas, L.J. (2017), “PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space”, arXiv preprint, https://dx.doi.org/10.48550/arXiv.1706.02413.
Regenwetter, L., Curry, B. and Ahmed, F. (2021), “BIKED: A Dataset for Computational Bicycle Design With Machine Learning Benchmarks”, Journal of Mechanical Design, Vol. 144 No. 3, p. 031706, https://dx.doi.org/10.1115/1.4052585.
Rukhovich, D., Vorontsova, A. and Konushin, A. (2023), “TR3D: Towards Real-Time Indoor 3D Object Detection”, arXiv preprint, https://dx.doi.org/10.48550/arXiv.2302.02858.
Schluse, M., Priggemeyer, M., Atorf, L. and Rossmann, J. (2018), “Experimentable Digital Twins—Streamlining Simulation-Based Systems Engineering for Industry 4.0”, IEEE TII, Vol. 14 No. 4, pp. 1722–1731, https://dx.doi.org/10.1109/TII.2018.2804917.
Shellshear, E., Berlin, R. and Carlson, J.S. (2015), “Maximizing Smart Factory Systems by Incrementally Updating Point Clouds”, IEEE CGA, Vol. 35 No. 2, pp. 62–69, https://dx.doi.org/10.1109/MCG.2015.38.
Sierla, S., Azangoo, M., Fay, A., Vyatkin, V. and Papakonstantinou, N. (2020), “Integrating 2D and 3D Digital Plant Information Towards Automatic Generation of Digital Twins”, 2020 IEEE ISIE, pp. 460–467, https://dx.doi.org/10.1109/ISIE45063.2020.9152371.
Sierla, S., Sorsamäki, L., Azangoo, M., Villberg, A., Hytönen, E. and Vyatkin, V. (2020), “Towards Semi-Automatic Generation of a Steady State Digital Twin of a Brownfield Process Plant”, Applied Sciences, Vol. 10 No. 19, https://dx.doi.org/10.3390/app10196959.
Song, S., Lichtenberg, S.P. and Xiao, J. (2015), “SUN RGB-D: A RGB-D scene understanding benchmark suite”, 2015 IEEE CVPR, pp. 567–576, https://dx.doi.org/10.1109/CVPR.2015.7298655.
Su, H., Maji, S., Kalogerakis, E. and Learned-Miller, E. (2015), “Multi-view Convolutional Neural Networks for 3D Shape Recognition”, arXiv preprint, https://dx.doi.org/10.48550/ARXIV.1505.00880.
Wang, X., Pan, H., Guo, K., Yang, X. and Luo, S. (2020), “The evolution of LiDAR and its application in high precision measurement”, IOP Conference Series: Earth and Environmental Science, Vol. 502 No. 1, p. 012008, https://dx.doi.org/10.1088/1755-1315/502/1/012008.
Winiwarter, L., Pena, A.M.E., Weiser, H., Anders, K., Sanchez, J.M., Searle, M. and Höfle, B. (2021), “Virtual laser scanning with HELIOS++: A novel take on ray tracing-based simulation of topographic 3D laser scanning”, arXiv preprint, https://dx.doi.org/10.48550/arXiv.2101.09154.
Wu, Z., Song, S., Khosla, A., Yu, F., Zhang, L., Tang, X. and Xiao, J. (2015), “3D ShapeNets: A deep representation for volumetric shapes”, 2015 IEEE CVPR, pp. 1912–1920, https://dx.doi.org/10.1109/CVPR.2015.7298801.