
Catadioptric panoramic stereovision for humanoid robots

Published online by Cambridge University Press:  03 October 2011

C. Salinas
Affiliation:
Centre for Automation and Robotics – CAR (CSIC-UPM), Robotics Locomotion & Interaction Group, Ctra. de Campo Real. Km 0.200, La Poveda, Arganda del Rey, 28500, Madrid, Spain
H. Montes*
Affiliation:
Centre for Automation and Robotics – CAR (CSIC-UPM), Robotics Locomotion & Interaction Group, Ctra. de Campo Real. Km 0.200, La Poveda, Arganda del Rey, 28500, Madrid, Spain Facultad de Ingenieria Electrica, Universidad Tecnológica de Panamá, Republic of Panama
G. Fernandez
Affiliation:
Departamento de Electronica y Circuitos, Simon Bolivar University, Republic of Venezuela
P. Gonzalez de Santos
Affiliation:
Centre for Automation and Robotics – CAR (CSIC-UPM), Robotics Locomotion & Interaction Group, Ctra. de Campo Real. Km 0.200, La Poveda, Arganda del Rey, 28500, Madrid, Spain
M. Armada
Affiliation:
Centre for Automation and Robotics – CAR (CSIC-UPM), Robotics Locomotion & Interaction Group, Ctra. de Campo Real. Km 0.200, La Poveda, Arganda del Rey, 28500, Madrid, Spain
*Corresponding author. E-mail: [email protected]

Summary

This paper proposes a novel design for a reconfigurable humanoid robot head based on the biological likeness of the human being, so that the humanoid robot can interact agreeably with people in various everyday tasks. The proposed humanoid head has a modular, adaptive structural design and comprises three main modules: the frame, the neck motion system and the omnidirectional stereovision system. The omnidirectional stereovision module is a motivating contribution with respect to the computer vision systems implemented in earlier humanoids, and it opens new research possibilities for achieving human-like behaviour. A real-time catadioptric stereovision system is presented, including the stereo geometry for rectifying the system configuration and estimating depth. The methodology for an initial approach to visual servoing tasks is divided into two phases: the first addresses the robust detection of moving objects and the estimation of their depth and position; the second develops attention-based control strategies. The perception capabilities provided allow the extraction of 3D information over a wide field of view in uncontrolled dynamic environments, and the results are illustrated through a number of experiments.
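To illustrate the kind of depth estimation the abstract refers to: in a co-axial catadioptric stereo rig, the two omnidirectional viewpoints share a vertical axis, and the range to a scene point follows from the two elevation angles at which it is observed. The sketch below is not the authors' implementation; the function name and rig geometry (two viewpoints separated by a known vertical baseline) are illustrative assumptions.

```python
import math

def depth_from_elevation(theta_top, theta_bottom, baseline):
    """Triangulate the horizontal range (m) to a scene point seen from
    two vertically aligned omnidirectional viewpoints.

    theta_top, theta_bottom: elevation angles (rad) of the same point,
    measured from the horizontal plane at the top and bottom viewpoints.
    baseline: vertical separation (m) between the two viewpoints.

    For a point at range r and height h above the bottom viewpoint:
        tan(theta_bottom) = h / r
        tan(theta_top)    = (h - baseline) / r
    Subtracting gives r = baseline / (tan(theta_bottom) - tan(theta_top)).
    """
    denom = math.tan(theta_bottom) - math.tan(theta_top)
    if abs(denom) < 1e-9:
        # No vertical disparity: the point is effectively at infinity.
        raise ValueError("zero vertical disparity: point at infinity")
    return baseline / denom
```

As in conventional stereo, range accuracy degrades as the angular disparity shrinks, which is why the baseline and mirror geometry are central design parameters of such a head.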

Type
Articles
Copyright
Copyright © Cambridge University Press 2011


Footnotes

This paper was originally submitted under the auspices of the CLAWAR Association. It is an extension of work presented at CLAWAR 2009: The 12th International Conference on Climbing and Walking Robots and the Support Technologies for Mobile Machines, Istanbul, Turkey.
