Book contents
- Frontmatter
- Contents
- Preface
- 1 Observed Markov Chains
- 2 Estimation of an Observed Markov Chain
- 3 Hidden Markov Models
- 4 Filters and Smoothers
- 5 The Viterbi Algorithm
- 6 The EM Algorithm
- 7 A New Markov Chain Model
- 8 Semi-Markov Models
- 9 Hidden Semi-Markov Models
- 10 Filters for Hidden Semi-Markov Models
- Appendix A Higher-Order Chains
- Appendix B An Example of a Second-Order Chain
- Appendix C A Conditional Bayes Theorem
- Appendix D On Conditional Expectations
- Appendix E Some Molecular Biology
- Appendix F Earlier Applications of Hidden Markov Chain Models
- References
- Index
Appendix D - On Conditional Expectations
Published online by Cambridge University Press: 01 February 2018
Summary
Let X and Y be random variables defined on some probability space $(\Omega, \mathcal{F}, P)$. In fact, let us suppose that $E[|X|] < \infty$.
By the Doob–Dynkin Lemma (see, for example, Oksendal, 2010, Lemma 2.1.2),
$$E[X \mid \sigma(Y)] = g(Y)$$
for some Borel function $g : \mathbb{R} \to \mathbb{R}$ if Y takes values in $\mathbb{R}$.
If Y takes countably many distinct values $y_1, y_2, \dots$, then g can be described in more detail. The atoms of $\sigma(Y)$, the σ-algebra generated by Y, are the disjoint sets
$$A_k = \{\omega \in \Omega : Y(\omega) = y_k\}, \qquad k = 1, 2, \dots.$$
This means that any element of $\sigma(Y)$ is either ∅ or a countable union of atoms. The function g is characterized as follows: g is constant on each atom, and
$$\int_A X \, dP = \int_A g(Y) \, dP \tag{D.1}$$
for all $A \in \sigma(Y)$. In fact (D.1) is equivalent to the result holding for the atoms $A_k$. This leads to
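As a concrete sanity check, the characterization (D.1) can be verified numerically on a small finite sample space. The space, the probabilities, and the values of X and Y below are illustrative assumptions, not taken from the text; exact rational arithmetic avoids rounding issues.

```python
from fractions import Fraction

# A finite sample space with explicit probabilities (illustrative choice).
omega = ["a", "b", "c", "d"]
P = {w: Fraction(1, 4) for w in omega}

# Two random variables on this space (arbitrary illustrative values).
X = {"a": 1, "b": 3, "c": 5, "d": 7}
Y = {"a": 0, "b": 0, "c": 1, "d": 1}

# Atoms of sigma(Y): A_k = {omega : Y(omega) = y_k}.
atoms = {}
for w in omega:
    atoms.setdefault(Y[w], []).append(w)

# g_k = (1 / P(A_k)) * integral of X over A_k, for atoms with P(A_k) > 0.
g = {}
for yk, Ak in atoms.items():
    pAk = sum(P[w] for w in Ak)
    g[yk] = sum(X[w] * P[w] for w in Ak) / pAk

# Verify (D.1) atom by atom: integral of X over A equals integral of g(Y) over A.
for yk, Ak in atoms.items():
    lhs = sum(X[w] * P[w] for w in Ak)
    rhs = sum(g[Y[w]] * P[w] for w in Ak)
    assert lhs == rhs

print(g)  # {0: Fraction(2, 1), 1: Fraction(6, 1)}
```

Since (D.1) holds on each atom and every set in $\sigma(Y)$ is a countable union of atoms, checking the atoms suffices, exactly as the text observes.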
$$\int_{A_k} X \, dP = g_k \, P(A_k), \tag{D.2}$$
where $g_k$ is the constant value of g on the atom $A_k$. If $P(A_k) > 0$, then
$$g_k = \frac{1}{P(A_k)} \int_{A_k} X \, dP = E[X \mid Y = y_k].$$
But if $P(A_k) = 0$, then (D.2) becomes
$$0 = g_k \cdot 0,$$
as $\int_{A_k} X \, dP = 0$ whenever $P(A_k) = 0$. This means that $g_k$ can be given any finite value. If we set $g_k = 0$ whenever $P(A_k) = 0$, then g is still a version of the conditional expectation. So we can say that when $P(A_k) = 0$, then $E[X \mid Y = y_k] = 0$.
We could also note that
$$P\Big(\bigcup\{A_k : P(A_k) = 0\}\Big) = 0,$$
and we can set $g = 0$ on the sets $A_k$ with $P(A_k) = 0$ and so obtain a random variable equal to the conditional expected value with probability 1. As many authors use this convention, we shall also adopt it.
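The convention can be illustrated numerically: two versions of g that differ only in the value assigned on a null atom induce random variables that agree with probability 1. The sample space below, including an outcome of probability zero, is an assumed illustration.

```python
from fractions import Fraction

# Sample space in which the outcome "e" has probability zero (illustrative).
P = {"a": Fraction(1, 2), "b": Fraction(1, 2), "e": Fraction(0)}
X = {"a": 2, "b": 4, "e": 99}
Y = {"a": 0, "b": 0, "e": 1}   # the atom {Y = 1} = {"e"} has probability 0

def version_of_g(null_value):
    """Build g_k per (D.2), assigning null_value on atoms with P(A_k) = 0."""
    g = {}
    for yk in set(Y.values()):
        Ak = [w for w in Y if Y[w] == yk]
        pAk = sum(P[w] for w in Ak)
        if pAk > 0:
            g[yk] = sum(X[w] * P[w] for w in Ak) / pAk
        else:
            g[yk] = null_value   # arbitrary finite value; the text's convention is 0
    return g

g0 = version_of_g(0)        # the convention adopted in the text
g_other = version_of_g(17)  # any other finite value on the null atom

# The two versions of g(Y) differ only on a set of probability zero.
diff_prob = sum(P[w] for w in Y if g0[Y[w]] != g_other[Y[w]])
assert diff_prob == 0
```

Any finite choice on the null atom gives a valid version of $E[X \mid \sigma(Y)]$; the choice 0 is just the convenient one the text adopts.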
Now let W be another random variable which takes countably many distinct values. If $\sigma(W) \subseteq \sigma(Y)$, then
$$E\big[E[X \mid Y] \mid W\big] = E[X \mid W]$$
with probability 1. This is because, for every $B \in \sigma(W)$,
$$\int_B E[X \mid Y] \, dP = \int_B X \, dP,$$
since $B \in \sigma(Y)$ and so (D.1) applies.
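This repeated-conditioning identity can also be checked numerically when W is a function of Y, so that $\sigma(W) \subseteq \sigma(Y)$. The finite space and variables below are assumed for illustration.

```python
from fractions import Fraction

# Finite sample space with equal weights (illustrative).
omega = ["a", "b", "c", "d"]
P = {w: Fraction(1, 4) for w in omega}

X = {"a": 1, "b": 2, "c": 3, "d": 10}
Y = {"a": 0, "b": 1, "c": 2, "d": 3}   # sigma(Y) separates every outcome
W = {w: Y[w] % 2 for w in omega}       # W is a function of Y, so sigma(W) ⊆ sigma(Y)

def cond_exp(Z, V):
    """E[Z | V] as a function of the outcome, per (D.2); 0 on null atoms."""
    out = {}
    for w in Z:
        Ak = [u for u in V if V[u] == V[w]]
        pAk = sum(P[u] for u in Ak)
        out[w] = (sum(Z[u] * P[u] for u in Ak) / pAk) if pAk > 0 else Fraction(0)
    return out

inner = cond_exp(X, Y)     # E[X | Y]; here equal to X, since Y separates points
lhs = cond_exp(inner, W)   # E[ E[X | Y] | W ]
rhs = cond_exp(X, W)       # E[X | W]
assert lhs == rhs
```

Conditioning on the finer information Y first and then coarsening to W gives the same result as conditioning on W directly, as the identity asserts.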
We could apply these ideas with the pair (Y, W) in place of Y; since $\sigma(W) \subseteq \sigma(Y, W)$ always holds, we obtain
$$E\big[E[X \mid Y, W] \mid W\big] = E[X \mid W]$$
with probability 1. In the application above, conditioning first on the finer information (Y, W) and then on W alone therefore gives the same result as conditioning on W directly.
We could also discuss what is meant by $P(A \mid B)$ when $P(B) = 0$. It should be the value of
$$E[\mathbf{1}_A \mid \mathbf{1}_B]$$
on the set B. Here $\mathbf{1}_A$ and $\mathbf{1}_B$ are the indicator functions of A and B, and B is an atom of $\sigma(\mathbf{1}_B)$ with $P(B) = 0$. With our convention, we would set $P(A \mid B) = 0$.
- Type
- Chapter
- Information
- Introduction to Hidden Semi-Markov Models, pp. 158–160. Publisher: Cambridge University Press. Print publication year: 2018