
Stable Robotic Grasping of Multiple Objects using Deep Neural Networks

Published online by Cambridge University Press:  20 July 2020

Dongeon Kim ([email protected]), Ailing Li ([email protected]) and Jangmyung Lee* ([email protected])
Affiliation: Department of Electrical and Electronic Engineering, Pusan National University, Busan 46241, South Korea
*Corresponding author. E-mail: [email protected]

Summary

Optimal grasping points for a robotic gripper were derived from object and hand geometry using deep neural networks (DNNs). An overall grasping cost function was obtained by modelling each local cost function with a normal-distribution probability density function. Using the DNN, the optimum height and width of the robot hand were set for grasping an object, and the object's geometric and mass centre points were used to determine the optimum grasping positions for the robot fingers on the object. The proposed algorithm was tested on 10 differently shaped objects and showed improved grasping performance compared with conventional methods.
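The idea of combining normal-distribution local cost functions into one overall grasp score can be sketched as follows. This is a minimal illustration, not the paper's actual formulation: the candidate points, the choice of distance to the geometric and mass centre points as local cost terms, and the sigma value are all assumptions made for the example.

```python
import numpy as np

def local_cost(x, mu, sigma):
    """Gaussian (normal-distribution) probability density used as a local
    cost term: candidates far from the preferred value mu score low."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def grasp_score(candidate, geom_center, mass_center, sigma=1.0):
    """Combine local cost terms (here: distances to the geometric centre
    and the mass centre) into one overall grasp score; higher is better."""
    d_geom = np.linalg.norm(candidate - geom_center)
    d_mass = np.linalg.norm(candidate - mass_center)
    return local_cost(d_geom, 0.0, sigma) * local_cost(d_mass, 0.0, sigma)

# Pick the best of a few candidate grasp points on a toy 2-D object.
geom = np.array([0.0, 0.0])
mass = np.array([0.2, 0.0])
candidates = np.array([[0.1, 0.0], [1.0, 1.0], [0.0, 0.5]])
scores = [grasp_score(c, geom, mass) for c in candidates]
best = candidates[int(np.argmax(scores))]  # the point nearest both centres wins
```

In the paper the optimal grasp additionally depends on hand geometry (height and width set by the DNN); here only the centre-point terms are shown to keep the sketch short.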

Type: Articles
Copyright: © The Author(s), 2020. Published by Cambridge University Press

