Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgements
- 1 Introduction
- 2 Search Spaces
- 3 Blind Search
- 4 Heuristic Search
- 5 Stochastic Local Search
- 6 Algorithm A* and Variations
- 7 Problem Decomposition
- 8 Chess and Other Games
- 9 Automated Planning
- 10 Deduction as Search
- 11 Search in Machine Learning
- 12 Constraint Satisfaction
- Appendix: Algorithm and Pseudocode Conventions
- References
- Index
5 - Stochastic Local Search
Published online by Cambridge University Press: 30 April 2024
Summary
Search spaces can be huge. The number of choices faced by a search algorithm can grow exponentially. We have named this combinatorial explosion, the principal adversary of search, CombEx. In Chapter 4 we looked at one strategy to battle CombEx, the use of knowledge in the form of heuristic functions – knowledge that would point towards the goal node. Yet, for many problems, such heuristics are hard to acquire and often inadequate, and algorithms continue to demand exponential time.
In this chapter we introduce stochastic moves to add an element of randomness to search. Exploiting the gradient deterministically has its drawbacks when the heuristic functions are imperfect, as they often are. The steepest gradient can lead to the nearest optimum and end there. We add a tendency towards exploration, which can pull the search away from the path to a local optimum.
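The mix of greedy moves and random ones can be sketched as stochastic hill climbing on a toy bit-string objective. This is an illustrative sketch only, not the book's algorithm; the objective (OneMax), the single-bit-flip neighbourhood, and the exploration rate are all assumptions made here for illustration.

```python
import random

# Toy objective (the "OneMax" problem): count the 1-bits.
def objective(state):
    return sum(state)

# Neighbour: flip one randomly chosen bit.
def random_neighbour(state):
    i = random.randrange(len(state))
    nbr = list(state)
    nbr[i] = 1 - nbr[i]
    return nbr

def stochastic_hill_climb(n_bits=20, steps=2000, explore=0.1):
    state = [random.randint(0, 1) for _ in range(n_bits)]
    best = state
    for _ in range(steps):
        nbr = random_neighbour(state)
        # Greedy move on improvement; with probability `explore`,
        # accept a non-improving neighbour anyway -- the stochastic
        # element that lets search escape a local optimum.
        if objective(nbr) > objective(state) or random.random() < explore:
            state = nbr
        if objective(state) > objective(best):
            best = state
    return best

print(objective(stochastic_hill_climb()))
```

With `explore=0` this degenerates to plain hill climbing, which stops improving at the first local optimum; the random acceptances are what keep the search moving.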
We also look at the power of many for problem solving, as opposed to a sole crusader. Population-based methods have given a new dimension to solving optimization problems.
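As a minimal sketch of a population-based method, consider a simple genetic algorithm on the same toy OneMax objective. The population size, tournament selection, one-point crossover, and mutation rate below are illustrative assumptions, not the book's specific algorithm.

```python
import random

def fitness(ind):
    # Toy objective (OneMax): count the 1-bits.
    return sum(ind)

def tournament(pop, k=3):
    # Select the fittest of k randomly sampled individuals.
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # One-point crossover of two parents.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.05):
    # Flip each bit independently with the given probability.
    return [1 - bit if random.random() < rate else bit for bit in ind]

def genetic_algorithm(n_bits=20, pop_size=30, generations=50):
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        # Breed a whole new generation from selected parents.
        pop = [mutate(crossover(tournament(pop), tournament(pop)))
               for _ in range(pop_size)]
        best = max(pop + [best], key=fitness)
    return best

print(fitness(genetic_algorithm()))
```

The point of the population is that many candidate solutions explore the space in parallel, and crossover lets good partial solutions found by different individuals combine.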
Douglas Hofstadter says that humans are not known to have a head for numbers (Hofstadter, 1996). For most of us, the numbers 3.2 billion and 5.3 million seem vaguely similar and big. A very popular book (Gamow, 1947) was titled One, Two, Three … Infinity. The author, George Gamow, talks about the Hottentot tribes who had only the numbers one, two, and three in their vocabulary, and beyond that used the word many. Bill Gates is famously reputed to have said, ‘Most people overestimate what they can do in one year and underestimate what they can do in ten years.’
So, how big is big? Why are computer scientists wary of combinatorial growth? In Table 2.1 we looked at the exponential function 2^N and the factorial N!, which are respectively the sizes of the search spaces for SAT and TSP with N variables or cities. How long will it take to inspect all the states when N = 50?
For a SAT problem with 50 variables, 2^50 = 1,125,899,906,842,624. How big is that? Let us say we can inspect a million, or 10^6, nodes a second. We would then need 1,125,899,906.8 seconds, which is about 35.7 years! There are 50! = 3.041409320 × 10^64 non-distinct tours (each distinct tour has 2N representations) of 50 cities.
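The arithmetic above is easy to check with a few lines of Python; the one-million-inspections-per-second rate is the assumption stated in the text.

```python
import math

# Size of the SAT search space for N = 50 variables.
states = 2 ** 50
print(states)  # 1125899906842624

# Time to inspect every state at 10^6 nodes per second.
seconds = states / 10**6
years = seconds / (3600 * 24 * 365)
print(round(years, 1))  # 35.7

# Number of (non-distinct) tours of 50 cities.
tours = math.factorial(50)
print(f"{tours:.3e}")  # 3.041e+64
```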
Search Methods in Artificial Intelligence, pp. 115–146. Publisher: Cambridge University Press. Print publication year: 2024.