Book contents
- Frontmatter
- Contents
- Preface
- 1 Introduction
- 2 Propositional Logic
- 3 Probability Calculus
- 4 Bayesian Networks
- 5 Building Bayesian Networks
- 6 Inference by Variable Elimination
- 7 Inference by Factor Elimination
- 8 Inference by Conditioning
- 9 Models for Graph Decomposition
- 10 Most Likely Instantiations
- 11 The Complexity of Probabilistic Inference
- 12 Compiling Bayesian Networks
- 13 Inference with Local Structure
- 14 Approximate Inference by Belief Propagation
- 15 Approximate Inference by Stochastic Sampling
- 16 Sensitivity Analysis
- 17 Learning: The Maximum Likelihood Approach
- 18 Learning: The Bayesian Approach
- A Notation
- B Concepts from Information Theory
- C Fixed Point Iterative Methods
- D Constrained Optimization
- Bibliography
- Index
4 - Bayesian Networks
Published online by Cambridge University Press: 23 February 2011
Summary
We introduce Bayesian networks in this chapter as a modeling tool for compactly specifying joint probability distributions.
Introduction
We have seen in Chapter 3 that joint probability distributions can be used to model uncertain beliefs and change them in the face of hard and soft evidence. We have also seen that the size of a joint probability distribution is exponential in the number of variables of interest, which introduces both modeling and computational difficulties. Even if these difficulties are addressed, one still needs to ensure that the synthesized distribution matches the beliefs held about a given situation. For example, if we are building a distribution that captures the beliefs of a medical expert, we may need to ensure some correspondence between the independencies held by the distribution and those believed by the expert. This may not be easy to enforce if the distribution is constructed by listing all possible worlds and assessing the belief in each world directly.
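The exponential blow-up mentioned above is easy to see concretely. The following is a minimal illustrative sketch (not from the book): a full joint distribution over n binary variables needs one entry per possible world, i.e., 2^n entries.

```python
def joint_table_size(n):
    """Number of entries in an explicit joint distribution over n binary variables."""
    return 2 ** n

# The table size doubles with every added variable.
for n in (10, 20, 30):
    print(n, joint_table_size(n))
```

Already at thirty binary variables, listing all possible worlds and assessing a belief for each one directly is out of the question.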
The Bayesian network is a graphical modeling tool for specifying probability distributions that, in principle, can address all of these difficulties. The Bayesian network relies on the basic insight that independence forms a significant aspect of beliefs and that it can be elicited relatively easily using the language of graphs. We start our discussion in Section 4.2 by exploring this key insight, and use our developments in Section 4.3 to provide a formal definition of the syntax and semantics of Bayesian networks.
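To make the compact-specification idea concrete before the formal treatment in Sections 4.2 and 4.3, here is a minimal sketch of the smallest possible example: a two-variable network A → B, where the joint distribution factorizes by the chain rule as P(a, b) = P(a) P(b | a). The variable names and CPT numbers are made up for illustration and are not taken from the book.

```python
from itertools import product

# Hypothetical CPTs for a network A -> B (numbers chosen arbitrarily).
p_a = {True: 0.3, False: 0.7}
# Conditional table keyed by (b, a): P(B = b | A = a).
p_b_given_a = {
    (True, True): 0.9, (False, True): 0.1,
    (True, False): 0.2, (False, False): 0.8,
}

def joint(a, b):
    """P(A = a, B = b) via the chain-rule factorization P(a) * P(b | a)."""
    return p_a[a] * p_b_given_a[(b, a)]

# Sanity check: the factorized joint sums to 1 over all possible worlds.
total = sum(joint(a, b) for a, b in product([True, False], repeat=2))
print(round(total, 10))
```

The point of the factorization is that we only assess small local tables (here, P(A) and P(B | A)) rather than the full joint table, and independence information read off the graph tells us which local tables suffice.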
- Type: Chapter
- Information: Modeling and Reasoning with Bayesian Networks, pp. 53-75
- Publisher: Cambridge University Press
- Print publication year: 2009