
Recognition of features of parts subjected to motion using ARTMAP incorporated in a flexible vibratory bowl feeder system

Published online by Cambridge University Press:  10 February 2006

S.K. SIM
Affiliation:
School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore
PATRICK S.K. CHUA
Affiliation:
School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore
M.L. TAY
Affiliation:
School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore
YUN GAO
Affiliation:
Sybase (Singapore) Pte. Ltd., Singapore

Abstract

The recognition and identification of parts are important processes in modern manufacturing systems. Although machine vision systems have played an important role in these tasks, challenges remain when parts are in motion and subjected to noise. Using a flexible vibratory bowl feeder system as a test bed to simulate the motion of parts subjected to noise, scanned signatures of part features were acquired using fiber optic sensors and a data acquisition system. Because neural networks have been shown to exhibit good pattern recognition capability, ARTMAP, a neural network that learns patterns under supervision, was incorporated into the feeder system. The pattern recognition capability of the feeder system depends on the set of parameters that characterize ARTMAP, the sampling rate of the data acquisition system, and the mean speed of the vibrating parts. The parameters that characterize ARTMAP are the size of the input vector, the vigilance, the threshold of the nonlinear noise suppression function, and the learning rate. Through extensive training and testing of ARTMAP within the feeder system, it was shown that high success rates in recognizing features of parts in motion under noisy conditions can be obtained, provided these ARTMAP parameters are appropriately selected.
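The roles of the vigilance and learning-rate parameters mentioned above can be illustrated with a minimal ART-style clustering sketch. This is a simplified fuzzy-ART variant, not the paper's exact ARTMAP formulation (which adds a supervised map field and complement coding); the function and parameter names here are illustrative assumptions.

```python
# Minimal sketch of an ART-style category choice and vigilance test.
# This is a simplified fuzzy-ART variant for illustration only; the
# paper's ARTMAP adds supervision and differs in detail.

def fuzzy_min(a, b):
    """Component-wise minimum (the fuzzy AND used in fuzzy ART)."""
    return [min(x, y) for x, y in zip(a, b)]

def norm1(v):
    """L1 norm of a vector."""
    return sum(abs(x) for x in v)

def train_fuzzy_art(inputs, rho=0.75, beta=1.0, alpha=0.001):
    """Cluster unit-range input vectors into ART categories.

    rho   -- vigilance: higher values demand closer matches,
             producing more (finer) categories
    beta  -- learning rate: 1.0 gives fast, one-shot learning
    alpha -- small choice parameter favoring specific categories
    """
    categories = []  # prototype weight vectors, one per category
    labels = []
    for x in inputs:
        # Rank existing categories by the choice function
        # T_j = |x ^ w_j| / (alpha + |w_j|)
        order = sorted(range(len(categories)),
                       key=lambda j: -norm1(fuzzy_min(x, categories[j]))
                                     / (alpha + norm1(categories[j])))
        chosen = None
        for j in order:
            # Vigilance test: accept category j if |x ^ w_j| / |x| >= rho
            if norm1(fuzzy_min(x, categories[j])) / norm1(x) >= rho:
                chosen = j
                break
        if chosen is None:
            # No category passes vigilance: commit a new one
            categories.append(list(x))
            chosen = len(categories) - 1
        else:
            # Move the winning prototype toward the matched input
            w = categories[chosen]
            categories[chosen] = [beta * m + (1 - beta) * wi
                                  for m, wi in zip(fuzzy_min(x, w), w)]
        labels.append(chosen)
    return categories, labels
```

With a high vigilance, two dissimilar signatures land in separate categories while a noisy repeat of the first signature is recognized as the same category; lowering the vigilance coarsens the grouping, which is why the abstract singles these parameters out as critical to the recognition rate.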

Type
Research Article
Copyright
2006 Cambridge University Press

