
Binocular vision planning with anthropomorphic features for grasping parts by robots

Published online by Cambridge University Press:  09 March 2009

Jae-Moon Chung
Affiliation:
Department of Computer Science and Communication Engineering, Kyushu University, Fukuoka 812-81 (Japan); e-mail: [email protected]
Tadashi Nagata
Affiliation:
Department of Computer Science and Communication Engineering, Kyushu University, Fukuoka 812-81 (Japan); e-mail: [email protected]

Summary

Planning of an active vision system with anthropomorphic features, such as binocularity, foveas and gaze control, is proposed. The aim of the vision system is to provide robots with the pose information of a suitable object to be grasped. To this end, the paper describes a viewer-oriented fixation point frame and its calibration, active motion and gaze control of the vision system, disparity filtering, zoom control, and estimation of the pose of a specific portion of a selected object. Because contour information and stereo vision play a central role in how humans recognize objects, the occluding contour pairs of objects are used as inputs to demonstrate the proposed visual planning.
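The summary mentions a viewer-oriented fixation point frame driven by binocular gaze control. The sketch below is a minimal illustration of that idea, not the authors' actual formulation: it assumes a hypothetical viewer-centred frame with the origin midway between the two camera centres and triangulates the fixation point from the inward pan (vergence) angles of the two cameras. The function name `fixation_point` and the planar geometry are assumptions made for the example.

```python
import numpy as np

def fixation_point(baseline, theta_l, theta_r):
    """Triangulate the binocular fixation point in a viewer-centred frame.

    Hypothetical frame: origin midway between the camera centres,
    x along the baseline (right positive), z straight ahead.
    theta_l, theta_r: inward pan (vergence) angles of the left and
    right cameras in radians, measured from the forward direction.
    Returns the (x, z) coordinates where the two optical axes intersect.
    """
    # Left ray:  (-b/2, 0) + t * ( sin(theta_l), cos(theta_l))
    # Right ray: ( b/2, 0) + s * (-sin(theta_r), cos(theta_r))
    # Solve for the ray parameters t and s at the intersection.
    A = np.array([[np.sin(theta_l), np.sin(theta_r)],
                  [np.cos(theta_l), -np.cos(theta_r)]])
    rhs = np.array([baseline, 0.0])
    t, _ = np.linalg.solve(A, rhs)
    x = -baseline / 2.0 + t * np.sin(theta_l)
    z = t * np.cos(theta_l)
    return x, z

if __name__ == "__main__":
    # Example: 20 cm baseline, both cameras verged inward by 5 degrees.
    # The fixation point lies on the mid-line, roughly 1.14 m ahead.
    x, z = fixation_point(0.20, np.radians(5.0), np.radians(5.0))
    print(f"fixation point: x = {x:.3f} m, z = {z:.3f} m")
```

In such a scheme, gaze control amounts to choosing the two vergence angles so that the triangulated point lands on the object portion of interest, after which disparity filtering and zoom control can be applied around that point.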

Type
Article
Copyright
Copyright © Cambridge University Press 1996

