
RANDOM NEURAL NETWORK LEARNING HEURISTICS

Published online by Cambridge University Press:  22 May 2017

Abbas Javed, School of Electrical and Built Environment, Glasgow Caledonian University, Glasgow, UK. E-mail: [email protected]
Hadi Larijani, School of Electrical and Built Environment, Glasgow Caledonian University, Glasgow, UK. E-mail: [email protected]
Ali Ahmadinia, School of Electrical and Built Environment, Glasgow Caledonian University, Glasgow, UK. E-mail: [email protected]
Rohinton Emmanuel, School of Electrical and Built Environment, Glasgow Caledonian University, Glasgow, UK. E-mail: [email protected]

Abstract


The random neural network (RNN) is a probabilistic, queueing-theory-based model for artificial neural networks, and it requires the use of optimization algorithms for training. Commonly used gradient descent learning algorithms may become trapped in local minima; evolutionary algorithms can be used to avoid them. Other techniques such as the artificial bee colony (ABC), particle swarm optimization (PSO), and differential evolution algorithms also perform well at finding the global minimum, but they converge slowly. The sequential quadratic programming (SQP) optimization algorithm can find optimal neural network weights, but can also become stuck in local minima. We propose to overcome the shortcomings of these approaches by hybridizing ABC and PSO with SQP. The resulting algorithm is shown to compare favorably with other known techniques for training the RNN. The results show that hybrid ABC learning with SQP outperforms the other training algorithms in terms of mean-squared error and normalized root-mean-squared error.
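The hybridization described above couples a global, population-based search with a local gradient-based refinement. The following is a minimal sketch of that scheme, not the authors' implementation: a simplified ABC loop (onlooker phase omitted) supplies a good starting point, which SciPy's SLSQP routine then refines. The `sphere` objective is a hypothetical stand-in for the RNN training loss, and all parameter names (`n_food`, `limit`, etc.) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def hybrid_abc_sqp(loss, dim, bounds=(-2.0, 2.0), n_food=20, iters=100, seed=0):
    """Simplified ABC global search followed by SQP (SLSQP) local refinement."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    foods = rng.uniform(lo, hi, size=(n_food, dim))  # candidate weight vectors
    fitness = np.array([loss(f) for f in foods])
    trials = np.zeros(n_food, dtype=int)
    limit = 10  # abandonment limit before a source is replaced by a scout

    for _ in range(iters):
        # Employed-bee phase: perturb each food source toward a random partner.
        for i in range(n_food):
            k = rng.integers(n_food - 1)
            k = k + (k >= i)  # pick a partner index different from i
            j = rng.integers(dim)
            cand = foods[i].copy()
            cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
            f = loss(cand)
            if f < fitness[i]:  # greedy selection
                foods[i], fitness[i], trials[i] = cand, f, 0
            else:
                trials[i] += 1
        # Scout phase: abandon stagnant sources and re-sample them at random.
        for i in np.where(trials > limit)[0]:
            foods[i] = rng.uniform(lo, hi, dim)
            fitness[i] = loss(foods[i])
            trials[i] = 0

    # SQP refinement of the best ABC solution.
    best = foods[np.argmin(fitness)]
    res = minimize(loss, best, method="SLSQP")
    return res.x, res.fun

# Hypothetical stand-in for the RNN training loss (minimum at w = 0.5).
def sphere(w):
    return float(np.sum((w - 0.5) ** 2))

w_opt, f_opt = hybrid_abc_sqp(sphere, dim=4)
```

The division of labor mirrors the paper's rationale: ABC explores broadly and is unlikely to stall in a poor basin, while SQP converges quickly once started near the global minimum, so neither shortcoming (slow ABC convergence, SQP's sensitivity to the starting point) dominates.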

Type: Research Article
Copyright © Cambridge University Press 2017
