11 - Monte Carlo Optimization
Published online by Cambridge University Press: 12 December 2009
Summary
Monte Carlo Methods for Optimization
Consider the problem of searching for the extremal values of an objective function f defined on a domain Ω and, equally important, for the points x ∈ Ω where these values occur. An extremal value is called an optimum (maximum or minimum), while a point where an optimum occurs is called an optimizer (maximizer or minimizer).
If the domain is a subset of Euclidean space, we will assume f is differentiable. In this case, gradient descent (or ascent) methods can be used to locate local minima (or maxima). Whether a global extremum is found depends on the starting point of the search: each local minimum (or maximum) has its own basin of attraction, so reaching the global optimum becomes a matter of starting in the right basin. Thus there is an element of chance involved when globally extreme values are desired.
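A minimal sketch of this idea is a multi-start strategy: run plain gradient descent from several randomly chosen starting points and keep the best local minimum found. The step size, restart count, and the sample objective below are illustrative assumptions, not a prescription from the chapter.

```python
import numpy as np

def gradient_descent(f, grad, x0, step=0.01, tol=1e-8, max_iter=10_000):
    """Plain gradient descent from a single starting point x0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:      # stationary point reached
            break
        x = x - step * g
    return x, f(x)

def multistart_minimize(f, grad, lo, hi, n_starts=20, rng=None):
    """Restart gradient descent from random points in the box [lo, hi]
    and keep the best local minimum found; the random starts supply
    the element of chance needed to land in the right basin."""
    rng = np.random.default_rng(rng)
    best_x, best_val = None, np.inf
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)
        x, val = gradient_descent(f, grad, x0)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Example: a one-dimensional objective with two basins of attraction.
f = lambda x: (x**2 - 4)**2 + 0.3 * x
grad = lambda x: 4 * x * (x**2 - 4) + 0.3
x_star, f_star = multistart_minimize(f, grad, lo=np.array([-3.0]), hi=np.array([3.0]))
```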
On the other hand, we allow the possibility that Ω is a discrete, and possibly large, finite set. In this case, downhill/uphill directional information is unavailable and the search must make do with objective values alone. As the search proceeds from one point to the next, selecting the next point to try is often best left to chance.
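In the simplest such scheme, successive trial points are drawn at random from the finite set and only the best objective value seen so far is retained. The sketch below assumes the candidate points can be enumerated in a list; the trial count is illustrative.

```python
import random

def pure_random_search(objective, candidates, n_trials=1000, rng=None):
    """Pure random search over a finite set: sample points uniformly at
    random and keep the best objective value seen.  Only objective values
    are used; no gradient or directional information is required."""
    rng = rng or random.Random()
    best_x = rng.choice(candidates)
    best_val = objective(best_x)
    for _ in range(n_trials):
        x = rng.choice(candidates)
        val = objective(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val
```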
A search process in which the next point (or next starting point) is chosen at random, possibly depending on the current location, is, mathematically, a finite Markov chain. Although the full resources of that theory can be brought to bear on the problem, only general assertions are possible without knowing the nature of the specific objective function.
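One familiar instance of such a Markov-chain search is a Metropolis-style rule: propose a neighbor of the current point and accept it always if it improves the objective, and with a small probability otherwise. The neighbor function, fixed temperature, and step budget below are illustrative assumptions rather than the chapter's specific algorithm.

```python
import math
import random

def metropolis_search(objective, x0, neighbors, n_steps=10_000, temp=1.0, rng=None):
    """Markov-chain search over a finite set: the next candidate is drawn
    from the neighbors of the current point, so the sequence of visited
    points is a finite Markov chain whose transition probabilities depend
    only on the current location."""
    rng = rng or random.Random()
    x, fx = x0, objective(x0)
    best_x, best_val = x, fx
    for _ in range(n_steps):
        y = rng.choice(neighbors(x))           # propose a neighboring point
        fy = objective(y)
        # Always accept improvements; accept uphill moves with probability
        # exp(-(fy - fx) / temp) so the chain can escape local minima.
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / temp):
            x, fx = y, fy
            if fx < best_val:
                best_x, best_val = x, fx
    return best_x, best_val
```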
An Introduction to Parallel and Vector Scientific Computation, pp. 244-264. Cambridge University Press, 2006.