Book contents
- Frontmatter
- Contents
- 1 Introduction
- 2 Discrete-time Markov chains
- 3 Continuous-time Markov chains
- 4 State aggregation
- 5 Sojourn times in subsets of states
- 6 Occupation times of subsets of states – interval availability
- 7 Linear combination of occupation times – performability
- 8 Stationarity detection
- 9 Simulation techniques
- 10 Bounding techniques
- References
- Index
1 - Introduction
Preliminary words
From the theoretical point of view, Markov chains are a fundamental class of stochastic processes; in practice, they are among the most widely used tools for solving problems in a large number of domains. They allow all kinds of systems to be modeled, and their analysis allows many aspects of those systems to be quantified. We find them in many subareas of operations research, engineering, computer science, networking, physics, chemistry, biology, economics, finance, and the social sciences. The success of Markov chains is essentially due to the simplicity of their use, to the large set of associated theoretical results available, that is, to the high degree of understanding of the dynamics of these stochastic processes, and to the power of the available algorithms for the numerical evaluation of a large number of associated metrics.
In simple terms, the Markov property means that, given the present state of the process, its past and future are independent. In other words, once the present state of the stochastic process is known, no further information about its past is relevant for predicting its future. This means that the number of parameters needed to represent the evolution of a system modeled by such a process can be reduced considerably. In fact, many random systems can be represented by a Markov chain, and certainly most of the ones used in practice. The price to pay for imposing the Markov property on a random system is that the present of the system, or equivalently its state space, must be defined cleverly.
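To fix ideas, one standard formulation of this property (the notation here is ours, not necessarily that adopted later in the book) is the following: for a discrete-time chain X = (X_n)_{n >= 0} on a countable state space S, and for all n >= 0 and all states i_0, ..., i_{n-1}, i, j in S for which the conditioning event has positive probability,

  P(X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, ..., X_0 = i_0) = P(X_{n+1} = j | X_n = i).

When the right-hand side does not depend on n, the chain is said to be time homogeneous, and its evolution is entirely described by the transition probabilities p_{i,j} = P(X_{n+1} = j | X_n = i) together with the distribution of X_0.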
Markov Chains and Dependability Theory, pp. 1–25. Cambridge University Press, 2014.