
Real-time multiview data fusion for object tracking with RGBD sensors

Published online by Cambridge University Press: 01 December 2014

Abdenour Amamra*
Affiliation: Centre for Electronic Warfare, Cranfield University, Defence Academy of the United Kingdom, Shrivenham, SN6 8LA.
Nabil Aouf
Affiliation: Centre for Electronic Warfare, Cranfield University, Defence Academy of the United Kingdom, Shrivenham, SN6 8LA.
*Corresponding author. E-mail: [email protected]

Summary

This paper presents a new approach to accurately tracking a moving vehicle with a multiview setup of red–green–blue depth (RGBD) cameras. We first propose a correction method that eliminates the depth shift which appears in RGBD sensors as they wear, an error that cannot be removed by the ordinary calibration procedure. Next, we present a sensor-wise filtering system that estimates the trajectory of a vehicle whose motion is unknown. A data fusion algorithm then optimally merges the sensor-wise trajectory estimates. We implement most parts of our solution on the graphics processor, so the whole system operates at up to 25 frames per second with a configuration of five cameras. Test results show the accuracy achieved and the robustness of our solution to uncertainties in both the measurements and the modelling.
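To make the fusion stage concrete, below is a minimal covariance-intersection sketch in Python. The summary above does not specify the exact fusion rule, so the inverse-trace weights (a common noniterative "fast" covariance-intersection choice) and every function and variable name here are illustrative assumptions, not the authors' implementation.

import numpy as np

def fuse_covariance_intersection(estimates):
    """Merge per-sensor (mean, covariance) estimates of the same state.

    Uses covariance intersection with noniterative inverse-trace
    weights; the fused estimate stays consistent even when the
    cross-sensor correlations are unknown. Hypothetical sketch.
    """
    # Weight each sensor by the inverse trace of its covariance,
    # normalised so the weights sum to one.
    w = np.array([1.0 / np.trace(P) for _, P in estimates])
    w /= w.sum()
    # Information-form fusion:
    #   P_fused^-1          = sum_i w_i * P_i^-1
    #   P_fused^-1 x_fused  = sum_i w_i * P_i^-1 x_i
    info = sum(wi * np.linalg.inv(P) for wi, (_, P) in zip(w, estimates))
    info_mean = sum(wi * np.linalg.inv(P) @ x
                    for wi, (x, P) in zip(w, estimates))
    P_fused = np.linalg.inv(info)
    return P_fused @ info_mean, P_fused

# Example: three sensor-wise 2D position estimates of one vehicle.
estimates = [(np.array([1.0, 2.0]), np.diag([0.2, 0.3])),
             (np.array([1.1, 1.9]), np.diag([0.5, 0.4])),
             (np.array([0.9, 2.1]), np.diag([0.3, 0.3]))]
x_fused, P_fused = fuse_covariance_intersection(estimates)
print(x_fused, np.trace(P_fused))

In a multiview tracker, each (mean, covariance) pair would come from one camera's own filter at the same time step; fusing in information form keeps the update cheap enough for frame-rate operation.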

Type: Articles
Copyright: © Cambridge University Press 2014

