
Hill-Climbing Algorithm with a Stick for Unconstrained Optimization Problems

Published online by Cambridge University Press: 09 January 2017

Yunqing Huang*
Affiliation:
Hunan Key Laboratory for Computation and Simulation in Science and Engineering, Key Laboratory of Intelligent Computing & Information Processing of Ministry of Education, School of Mathematics and Computational Science, Xiangtan University, Xiangtan, Hunan 411105, China
Kai Jiang*
Affiliation:
Hunan Key Laboratory for Computation and Simulation in Science and Engineering, Key Laboratory of Intelligent Computing & Information Processing of Ministry of Education, School of Mathematics and Computational Science, Xiangtan University, Xiangtan, Hunan 411105, China
*Corresponding author. Email: [email protected] (Y. Q. Huang), [email protected] (K. Jiang)

Abstract

Inspired by the way a blind person climbs a hill, sweeping a stick in a circle to detect higher ground, we propose a heuristic direct search method for unconstrained optimization problems. Instead of searching a neighbourhood of the current point, as in traditional hill-climbing, or along specified search directions, as in standard direct search methods, the new algorithm searches on a spherical surface whose radius mimics the reach of the stick. A significant feature of the proposed algorithm is that it has only one parameter, the search radius, which makes it convenient to implement in practice. By adjusting the search radius, the method can shrink the search space to a closed ball or home in on the final optimum. Furthermore, the algorithm possesses a multi-resolution feature: different search radii allow it to distinguish local from global optima. It can therefore be used on its own or flexibly integrated with other optimization methods as a mathematical optimization technique. A series of numerical tests, including high-dimensional problems, demonstrates its performance.
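The abstract describes the method only at a high level, so the following is a minimal sketch of how a one-parameter, sphere-based direct search of this kind might look. It assumes random probes on the sphere and a fixed shrink factor; the names stick_search, n_probes and shrink, and all parameter values, are illustrative assumptions rather than the authors' actual algorithm.

```python
import numpy as np

def stick_search(f, x0, radius=1.0, shrink=0.5, n_probes=20,
                 tol=1e-8, max_iter=10_000, rng=None):
    """Sphere-based direct search with a single radius parameter (a sketch).

    At every step, probe points on the sphere of the current radius
    around the iterate (the "stick" sweeping a circle). Move to the best
    probe if it improves the objective; otherwise shrink the radius.
    Stop once the radius drops below `tol`.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        if radius < tol:
            break
        # Random directions, normalised to lie on the unit sphere.
        d = rng.normal(size=(n_probes, x.size))
        d /= np.linalg.norm(d, axis=1, keepdims=True)
        probes = x + radius * d
        vals = np.array([f(p) for p in probes])
        k = vals.argmin()
        if vals[k] < fx:      # improvement found on this sphere: move there
            x, fx = probes[k], vals[k]
        else:                 # no improvement: refine with a smaller radius
            radius *= shrink
    return x, fx

# Example: minimise the 2-D Rosenbrock function from a standard start point.
rosen = lambda z: (1.0 - z[0])**2 + 100.0 * (z[1] - z[0]**2)**2
x_best, f_best = stick_search(rosen, [-1.2, 1.0], radius=0.5, rng=0)
print(x_best, f_best)
```

In this sketch, a large initial radius plays the coarse, global role the abstract attributes to the multi-resolution feature, while the shrinking radius provides the fine, local refinement; the random-probe sampling is one plausible choice, not necessarily the one used in the paper.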

Type
Research Article
Copyright
Copyright © Global-Science Press 2017 

