
Complementary data fusion in vision-guided control of robotic tracking

Published online by Cambridge University Press: 17 January 2001

Y.M. Chen
Affiliation:
Department of Electrical Engineering, Lee Ming Institute of Technology, Taipei 243, Taiwan (R.O.C.) [email protected]
C.S. Hsueh
Affiliation:
Chung Shan Institute of Science & Technology (Taiwan) [email protected]

Abstract

We present a data fusion control scheme for the hand-held camera of the SCORBOT-ER VII robot arm for learning visual tracking and interception. The control scheme consists of two modules: the first generates candidate actions to drive the end-effector as accurately as possible directly above a moving target, so that the second can readily take over to intercept it. The desired camera-to-joint coordinate mappings are generalized by Elman neural networks in the tracking module. The intercept module then determines a suitable intercept trajectory for the robot under the required conditions. Simulation results support the claim that the scheme can successfully track and intercept a moving target.
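The camera-to-joint mapping described in the abstract can be illustrated with a minimal sketch of an Elman recurrent network, shown below. The layer sizes, the tanh activation, the five-joint output (the SCORBOT-ER VII has five rotary axes), and the output-layer-only training rule are illustrative assumptions for this sketch, not the authors' implementation.

    import numpy as np

    class ElmanNet:
        """Minimal Elman network: image-plane target coordinates in,
        joint-angle commands out. Sizes and learning rate are assumed."""

        def __init__(self, n_in=2, n_hidden=8, n_out=5, lr=0.05, seed=0):
            rng = np.random.default_rng(seed)
            self.W_in = rng.normal(0, 0.1, (n_hidden, n_in))      # input -> hidden
            self.W_ctx = rng.normal(0, 0.1, (n_hidden, n_hidden))  # context -> hidden
            self.W_out = rng.normal(0, 0.1, (n_out, n_hidden))     # hidden -> output
            self.context = np.zeros(n_hidden)  # copy of previous hidden state
            self.lr = lr

        def forward(self, x):
            # Hidden activation combines the current camera input with the
            # context units, giving the network memory of recent target motion.
            self.h = np.tanh(self.W_in @ x + self.W_ctx @ self.context)
            self.context = self.h.copy()
            return self.W_out @ self.h  # linear joint-angle output

        def train_step(self, x, target_q):
            # One gradient step on the output layer only (a simplification;
            # full Elman training would also backpropagate into W_in and W_ctx).
            y = self.forward(x)
            err = y - target_q
            self.W_out -= self.lr * np.outer(err, self.h)
            return float(np.mean(err ** 2))

    # Usage: fit the mapping from recorded (image coordinates, joint angles)
    # pairs along a target trajectory; the sample pair here is a placeholder.
    net = ElmanNet()
    for u, q in [(np.array([0.1, 0.2]), np.zeros(5))]:
        mse = net.train_step(u, q)

The context units, which hold a copy of the previous hidden state, are what allow an Elman network to exploit the target's recent motion when generating the next joint command, which is why such networks suit tracking problems better than purely feedforward mappings.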

Type
Research Article
Copyright
© 2001 Cambridge University Press
