Book contents
- Frontmatter
- Dedication
- Contents
- Figures
- Tables
- Preface
- Acronyms
- General Notations
- 1 Probability Theory and Random Variables
- 2 Random Variables: Conditioning, Convergence and Simulation
- 3 An Introduction to Stochastic Processes
- 4 Stochastic Calculus and Diffusion Processes
- 5 Numerical Solutions to Stochastic Differential Equations
- 6 Non-linear Stochastic Filtering and Recursive Monte Carlo Estimation
- 7 Non-linear Filters with Gain-type Additive Updates
- 8 Improved Numerical Solutions to SDEs by Change of Measures
- 9 Evolutionary Global Optimization via Change of Measures: A Martingale Route
- 10 COMBEO–A New Global Optimization Scheme By Change of Measures
- Appendix A (Chapter 1)
- Appendix B (Chapter 2)
- Appendix C (Chapter 3)
- Appendix D (Chapter 4)
- Appendix E (Chapter 5)
- Appendix F (Chapter 6)
- Appendix G (Chapter 7)
- Appendix H (Chapter 8)
- Appendix I (Chapter 9)
- References
- Bibliography
- Index
9 - Evolutionary Global Optimization via Change of Measures: A Martingale Route
Published online by Cambridge University Press: 08 February 2018
Summary
Introduction
The efficacy of the concept of change of measures was demonstrated in the last few chapters in the context of non-linear stochastic filtering, a tool of considerable scientific utility in developing numerical schemes for system identification problems. This chapter applies the same notion to arrive at a paradigm [Sarkar et al. 2014] for global optimization problems, wherein candidate solutions are guided mainly by derivative-free directional information computable from the sample statistical moments of the design (state) variables within a Monte Carlo (MC) setup. Before presenting the ideas behind this approach in some detail, it is worth first reviewing some of the available methodologies and strategies for solving such optimization problems.
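To fix ideas, the kind of derivative-free, moment-guided update described above can be caricatured in a few lines of code. The sketch below is not the scheme of [Sarkar et al. 2014]; it is a deliberately simplified stand-in in which each sample is pulled toward a cost-weighted sample mean of the population, so that the "direction" of search is computed purely from sample statistics rather than gradients. The function names and all parameter values (population size, step length, noise level) are illustrative assumptions.

```python
import math
import random

def sample_moment_step(population, cost, step=0.5, noise=0.1):
    """One derivative-free update: pull each sample toward a
    cost-weighted sample mean (a crude stand-in for the moment-based
    directional information discussed in the text)."""
    weights = [math.exp(-cost(x)) for x in population]
    total = sum(weights)
    weighted_mean = sum(w * x for w, x in zip(weights, population)) / total
    # Move toward the weighted mean; Gaussian noise keeps exploring.
    return [x + step * (weighted_mean - x) + random.gauss(0.0, noise)
            for x in population]

def optimize(cost, n=50, iters=200, seed=0):
    """Run the moment-guided search from a uniform initial scatter."""
    random.seed(seed)
    pop = [random.uniform(-5.0, 5.0) for _ in range(n)]
    for _ in range(iters):
        pop = sample_moment_step(pop, cost)
    return min(pop, key=cost)

# Simple convex test cost with minimum at x = 2 (illustrative only).
best = optimize(lambda x: (x - 2.0) ** 2)
```

Note that no derivative of the cost appears anywhere: the search direction is extracted entirely from the weighted first moment of the current sample cloud, which is what makes such schemes applicable to non-smooth objectives.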
In most cases of practical interest, the cost or objective functional, whose extremization solves the optimization problem, may be non-convex, non-separable and even non-smooth. Here separability means that the cost function can be additively split into component functions, so that the optimization problem decomposes into a set of sub-problems. An optimization problem is convex if it involves minimization of a convex function (or maximization of a concave function) over a convex set of admissible state variables. For a convex problem, a fundamental result is that any locally optimal solution is also globally optimal. The classical methods [Fletcher and Reeves 1964, Fox 1971, Rao 2009], which mostly use directional derivatives, are particularly useful in solving convex problems (Fig. 9.1). Non-convex problems, on the other hand, may have many local optima, and locating the best of them (i.e., the global extremum) can be an extremely hard task. In global optimization, we seek, in the design (state or parameter) space, the extremal locations of non-convex functions subject to possibly non-convex constraints. Here the objective functional may be multivariate, multimodal and even non-differentiable, properties that together preclude the application of a gradient-based Newton step in solving the optimization problem.
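The failure mode of purely gradient-based descent on a multimodal cost is easy to demonstrate. The snippet below uses a hypothetical test function, a tilted double well with a local minimum near x ≈ +0.96 and the global minimum near x ≈ -1.04; started on the wrong side, gradient descent settles into the local basin and never sees the global extremum.

```python
def f(x):
    """Tilted double well (illustrative test function): local minimum
    near x ~ +0.96, global minimum near x ~ -1.04."""
    return (x * x - 1.0) ** 2 + 0.3 * x

def grad_f(x):
    """Analytical derivative of f."""
    return 4.0 * x * (x * x - 1.0) + 0.3

def gradient_descent(x0, lr=0.01, iters=1000):
    """Plain gradient descent: converges to a stationary point of
    whichever basin contains the starting guess."""
    x = x0
    for _ in range(iters):
        x -= lr * grad_f(x)
    return x

# Starting at x = 1.5, descent is trapped in the shallower basin
# near x ~ +0.96, even though f is lower near x ~ -1.04.
x_local = gradient_descent(1.5)
```

This is precisely the situation in which stochastic, population-based schemes of the kind developed in this chapter are intended to help: by maintaining an ensemble of candidate solutions and exploring via random perturbations, they retain a chance of escaping such basins.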
- Stochastic Dynamics, Filtering and Optimization, pp. 507–555. Publisher: Cambridge University Press. Print publication year: 2017