Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgements
- 1 Introduction
- 2 Search Spaces
- 3 Blind Search
- 4 Heuristic Search
- 5 Stochastic Local Search
- 6 Algorithm A* and Variations
- 7 Problem Decomposition
- 8 Chess and Other Games
- 9 Automated Planning
- 10 Deduction as Search
- 11 Search in Machine Learning
- 12 Constraint Satisfaction
- Appendix: Algorithm and Pseudocode Conventions
- References
- Index
3 - Blind Search
Published online by Cambridge University Press: 30 April 2024
Summary
In this chapter we introduce the basic machinery needed for search. We devise algorithms for navigating the implicit search space and look at their properties. One distinctive feature of the algorithms in this chapter is that they are all blind or uninformed. This means that the way the algorithms search the space is always the same irrespective of the problem instance being solved.
We look at a few variations and analyse them on the four parameters we defined in the last chapter: completeness, quality of solution, time complexity, and space complexity. We observe that complexity becomes a stumbling block, as our principal foe CombEx inevitably rears its head. We end by making a case for different approaches to fight CombEx in the chapters that follow.
In the last chapter we looked at the notion of search spaces. Search spaces, as shown in Figure 2.2, are trees corresponding to the different traversals possible in the state space or the solution space. In this chapter we construct the machinery, viz. algorithms, for navigating this space. We begin our study with the tiny state space shown in Figure 3.1.
The tiny search problem has seven nodes, including the start node S, the goal node G, and five other nodes named A, B, C, D, and E. Without any loss of generality, let us assume that the nodes are states in a state space. The algorithms apply to the solution space as well. The left side of the figure describes the MoveGen function with the notation Node → (list of neighbours). On the right side is the corresponding graph which, remember, is implicit and not given upfront. The algorithm itself works with the MoveGen function and also the GoalTest function. The latter, for this example, simply knows that state G is the goal node. For configuration problems like the N-queens, it will need to inspect the node given as the argument.
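The MoveGen and GoalTest functions for such a tiny problem can be sketched as below. Note that the adjacency table here is illustrative only: the actual neighbours of each node are those shown in Figure 3.1, which is not reproduced in this excerpt.

```python
# Illustrative MoveGen and GoalTest for a seven-node problem with
# start node S, goal node G, and five other nodes A..E.
# The edges below are assumed for demonstration, not taken from Figure 3.1.
NEIGHBOURS = {
    "S": ["A", "B"],
    "A": ["S", "C", "D"],
    "B": ["S", "D"],
    "C": ["A", "E"],
    "D": ["A", "B", "E"],
    "E": ["C", "D", "G"],
    "G": ["E"],
}

def move_gen(node):
    """Node -> (list of neighbours), as in the notation on the left of the figure."""
    return NEIGHBOURS[node]

def goal_test(node):
    """For this example the goal test simply recognises state G."""
    return node == "G"
```

For a configuration problem like N-queens, `goal_test` would instead inspect the contents of the node passed to it, for example checking that no two queens attack each other.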
The search space that an algorithm explores is implicit. It is generated on the fly by the MoveGen function, as described in Algorithm 2.1. The candidates generated are added to what is traditionally called OPEN, from where they are picked one by one for inspection. In this chapter we represent OPEN as a list data structure.
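The generate-and-inspect loop just described can be sketched as follows. This is a minimal illustration, not the book's Algorithm 2.1 verbatim: candidates produced by MoveGen are appended to OPEN (here a plain list), and nodes are picked from it one at a time for inspection. The tiny demonstration graph at the end is assumed, not taken from Figure 3.1.

```python
def simple_search(start, move_gen, goal_test):
    """Generic search skeleton: generate candidates with move_gen,
    keep them on OPEN, and inspect them one by one with goal_test."""
    open_list = [start]          # OPEN: candidates awaiting inspection
    closed = set()               # nodes already inspected, to avoid revisits
    while open_list:
        node = open_list.pop(0)  # pick the next candidate for inspection
        if goal_test(node):
            return node          # goal found
        if node in closed:
            continue
        closed.add(node)
        # add newly generated neighbours to OPEN
        open_list.extend(n for n in move_gen(node) if n not in closed)
    return None                  # OPEN exhausted: no goal reachable

# Hypothetical tiny graph for demonstration (edges assumed):
graph = {"S": ["A", "B"], "A": ["G"], "B": [], "G": []}
found = simple_search("S", lambda n: graph[n], lambda n: n == "G")
```

Which end of OPEN candidates are picked from, and in what order they are added, is exactly what distinguishes the blind search variants studied in this chapter; picking from the front as above gives breadth-first behaviour.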
Search Methods in Artificial Intelligence, pp. 47-74. Publisher: Cambridge University Press. Print publication year: 2024.