
Active visual sensing of the 3-D pose of a flexible object

Published online by Cambridge University Press:  09 March 2009

Jong-Eun Byun
Affiliation:
Department of Electrical Engineering, Kyushu University, 6–10–1 Hakozaki, Higashiku, Fukuoka, 812–81 (Japan).
Tadashi Nagata
Affiliation:
Department of Computer Science and Communication Engineering, Kyushu University, 6–10–1 Hakozaki, Higashiku, Fukuoka, 812–81 (Japan).

Summary

This paper presents an active visual method for determining the 3-D pose of a flexible object with a hand-eye system. Because of its flexibility, the pose of such an object changes easily and is almost impossible to predict. Simple and effective on-line algorithms are developed to overcome various exceptional situations during chaincoding and to determine the object pose more precisely. When the flexible object extends beyond the image window, a new sensing pose is computed from the image coordinates of the points where the object crosses the window border and the current pose of the hand-eye system. In the case of exceptional overlapping, a new sensing pose is computed from the image coordinates of four extreme image points and the current pose of the hand-eye system. By chaincoding the skeletonized images, the stereo matching of the two images is reduced to matching the curvature representations of the two skeletons. The 3-D pose of the flexible object is then computed from the results of this matching together with the camera and hand-eye parameters calibrated beforehand. Finally, the initial sensing results are used to compute a new sensing pose that determines the object pose more precisely.

Type
Article
Copyright
Copyright © Cambridge University Press 1996

