
Distance-based Global Descriptors for Multi-view Object Recognition

Published online by Cambridge University Press:  26 April 2019

Prasanna Kannappan
Affiliation:
Department of Mechanical Engineering, University of Delaware, Newark, DE, 19716, USA E-mail: [email protected]
Herbert G. Tanner*
Affiliation:
Department of Mechanical Engineering, University of Delaware, Newark, DE, 19716, USA E-mail: [email protected]
*Corresponding author. E-mail: [email protected]

Summary

The paper reports on a new multi-view algorithm that combines information from multiple images of a single target object, captured at different distances, to determine that object's identity. Because the method uses global feature descriptors, it does not require image segmentation. The algorithm's performance has been evaluated on a binary classification problem over a data set of underwater images.
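The abstract's core idea, fusing segmentation-free global descriptors computed from views of the same object at different distances, can be sketched roughly as follows. This is a hypothetical illustration, not the paper's actual method: the stand-in descriptor (a whole-image intensity histogram), the averaging fusion rule, and the nearest-centroid decision are all assumptions made for the sake of the example.

```python
import numpy as np


def global_descriptor(image, bins=8):
    """Whole-image intensity histogram: a segmentation-free global feature.

    A stand-in for the paper's descriptor; any fixed-length global
    descriptor could be substituted here.
    """
    hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)


def fuse_views(views):
    """Combine descriptors from views taken at different distances
    by averaging them into one fused descriptor."""
    return np.mean([global_descriptor(v) for v in views], axis=0)


def classify(views, centroid_pos, centroid_neg):
    """Binary decision: assign the fused multi-view descriptor to the
    nearer of two class centroids (1 = positive class, 0 = negative)."""
    d = fuse_views(views)
    return int(np.linalg.norm(d - centroid_pos) < np.linalg.norm(d - centroid_neg))
```

In this toy setup, the class centroids would be fused descriptors of labeled training examples; the key point the abstract makes is that each view contributes a descriptor of the whole image, so no per-image segmentation step is needed.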

Type
Articles
Copyright
© Cambridge University Press 2019 
