
A new genetic algorithm specifically based on mutation and selection

Published online by Cambridge University Press:  01 July 2016

L. Rigal* (Université de Provence) and L. Truffet** (Ecole des Mines de Nantes)

* Postal address: Laboratoire d'Analyse, Topologie, Probabilités (LATP/UMR 6632), Université de Provence, 39 rue F. Joliot-Curie, F-13453 Marseille cedex 13, France.
** Postal address: Ecole des Mines de Nantes & IRCCYN, 4 rue Alfred Kastler, BP 20722, 44307 Nantes cedex 3, France. Email address: [email protected]

Abstract


In this paper we propose a new genetic algorithm based specifically on mutation and selection in order to maximize a fitness function. This mutation-selection algorithm behaves as a gradient algorithm and converges to local maxima. To obtain convergence to global maxima, we propose a new algorithm built by randomly perturbing the selection operator of the gradient-like algorithm. The perturbation is controlled by a single parameter, which governs the selection pressure. We use the Markov model of the perturbed algorithm to prove its convergence to global maxima. The proofs rely on Freidlin and Wentzell's (1984) theory and on large deviation techniques also used in simulated annealing. Our main results are that (i) when the population size exceeds a critical value, control of the selection pressure ensures convergence to the global maxima of the fitness function, and (ii) convergence also occurs for the smallest possible population size, i.e. 1.
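The general shape of a mutation-selection scheme with a single selection-pressure parameter can be illustrated with a minimal sketch. This is not the authors' algorithm: the bit-string encoding, the per-bit mutation rate, the Boltzmann-style selection weights, and the linear schedule for the pressure parameter `beta` are all illustrative assumptions, chosen only to show how increasing the selection pressure drives the population toward a global maximum.

```python
import math
import random

def mutate(individual, p_mut=0.05):
    # Flip each bit independently with probability p_mut.
    return tuple(1 - b if random.random() < p_mut else b for b in individual)

def select(population, fitness, beta):
    # Boltzmann-style selection: larger beta means stronger selection
    # pressure toward the fittest individuals of the current population.
    fvals = [fitness(x) for x in population]
    fmax = max(fvals)
    # Subtract fmax before exponentiating for numerical stability.
    weights = [math.exp(beta * (f - fmax)) for f in fvals]
    return random.choices(population, weights=weights, k=len(population))

def mutation_selection_ga(fitness, n_bits=10, pop_size=20, generations=200):
    population = [tuple(random.randint(0, 1) for _ in range(n_bits))
                  for _ in range(pop_size)]
    for gen in range(generations):
        # Illustrative schedule: the selection pressure grows over time,
        # loosely analogous to the cooling schedule in simulated annealing.
        beta = 0.1 * (gen + 1)
        population = [mutate(x) for x in population]
        population = select(population, fitness, beta)
    return max(population, key=fitness)

# One-max toy problem: the global maximum is the all-ones string.
random.seed(0)
best = mutation_selection_ga(lambda x: sum(x))
```

At small `beta` the selection step is close to uniform resampling and the mutation operator lets the population explore; as `beta` grows, selection concentrates on the fittest individuals, mirroring the paper's idea of governing convergence through a single pressure parameter.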

Type
General Applied Probability
Copyright
Copyright © Applied Probability Trust 2007 

References

Catoni, O. (1992). Large deviations for annealing. Université Paris XI.
Catoni, O. (1992). Rough large deviations estimates for simulated annealing. Application to exponential schedules. Ann. Prob. 20, 1109–1146.
Cerf, R. (1994). Une théorie asymptotique des algorithmes génétiques. Université Montpellier II.
Cerf, R. (1996). A new genetic algorithm. Ann. Appl. Prob. 6, 778–817.
Cerf, R. (1996). The dynamics of mutation-selection algorithms with large population sizes. Ann. Inst. H. Poincaré 32, 455–508.
Cerf, R. (1998). Asymptotic convergence of genetic algorithms. Adv. Appl. Prob. 30, 521–550.
Davis, T. E. and Principe, J. C. (1991). A simulated annealing like convergence theory for the simple genetic algorithm. In Proc. Fourth Internat. Conf. Genetic Algorithms, Morgan Kaufmann, San Mateo, CA, pp. 174–181.
Del Moral, P. and Miclo, L. (1999). On the convergence and the applications of the generalized simulated annealing. SIAM J. Control Optimization 37, 1222–1250.
Fogel, L. J., Owens, A. J. and Walsh, M. J. (1966). Artificial Intelligence Through Simulated Evolution. John Wiley, New York.
François, O. (2002). Global optimization with exploration/selection algorithms and simulated annealing. Ann. Appl. Prob. 12, 248–271.
Freidlin, M. I. and Wentzell, A. D. (1984). Random Perturbations of Dynamical Systems. Springer, New York.
Goldberg, D. (1989). Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading, MA.
Holland, J. H. (1975). Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor, MI.
Kuo, W., Prasad, V. R. and Tillman, F. A. (2000). Optimal Reliability Design. Cambridge University Press.
Rechenberg, I. (1973). Evolutionsstrategie: Optimierung Technischer Systeme nach Prinzipien der Biologischen Evolution. Frommann-Holzboog, Stuttgart.
Rudolph, G. (1994). Convergence analysis of canonical genetic algorithms. IEEE Trans. Neural Networks 5, 96–101.
Trouvé, A. (1993). Parallélisation massive du recuit simulé. Université Paris XI.
Trouvé, A. (1996). Cycle decompositions and simulated annealing. SIAM J. Control Optimization 34, 966–986.
Wolpert, D. H. and Macready, W. G. (1997). No free lunch theorems for optimization. IEEE Trans. Evolutionary Comput. 1, 67–82.