RANDOM NEURAL NETWORK LEARNING HEURISTICS
Published online by Cambridge University Press: 22 May 2017
Abstract
The random neural network (RNN) is a probabilistic, queueing theory-based model for artificial neural networks, and it requires the use of optimization algorithms for training. Commonly used gradient descent learning algorithms may become trapped in local minima; evolutionary algorithms can be used to avoid this. Other techniques such as artificial bee colony (ABC), particle swarm optimization (PSO), and differential evolution algorithms also perform well in finding the global minimum, but they converge slowly. The sequential quadratic programming (SQP) optimization algorithm can find the optimum neural network weights, but it can also get stuck in local minima. We propose to overcome the shortcomings of these various approaches by hybridizing ABC/PSO with SQP. The resulting algorithm is shown to compare favorably with other known techniques for training the RNN. The results show that hybrid ABC learning with SQP outperforms other training algorithms in terms of mean-squared error and normalized root-mean-squared error.
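The abstract describes a two-stage idea: a population-based global search (ABC or PSO) to escape local minima, followed by SQP to refine the best candidate. The following is a minimal illustrative sketch of that pattern, not the paper's algorithm: it assumes a toy quadratic objective in place of the RNN training loss, a basic PSO with hypothetical hyperparameters, and SciPy's SLSQP routine as the SQP-style local refiner.

```python
import numpy as np
from scipy.optimize import minimize

def loss(x):
    # Toy stand-in for the RNN training error (assumption, not the paper's loss).
    return float(np.sum(x ** 2))

def pso(obj, dim=2, n_particles=20, iters=100, seed=0):
    """Basic particle swarm optimization: global exploration stage."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.0, 5.0, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()                                  # per-particle best positions
    pbest_val = np.array([obj(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()          # swarm-wide best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia + cognitive + social terms (coefficients are illustrative).
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([obj(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest

# Stage 1: global search avoids poor local minima.
coarse = pso(loss)
# Stage 2: SQP-style local refinement of the swarm's best candidate.
result = minimize(loss, coarse, method="SLSQP")
print(result.fun)
```

The same structure applies when `loss` is replaced by the RNN's mean-squared training error over its weights, with the global stage supplying a good starting point for the gradient-based refiner.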
Keywords
- Type
- Research Article
- Information
- Probability in the Engineering and Informational Sciences, Volume 31, Special Issue 4: G-Networks and their Applications, October 2017, pp. 436-456
- Copyright © Cambridge University Press 2017