1 - The basics of Bayesian analysis
Published online by Cambridge University Press: 05 June 2014
Summary
The general principles of Bayesian analysis are easy to understand. First, uncertainty or “degree of belief” is quantified by probability. Second, the observed data are used to update the prior information or beliefs to become posterior information or beliefs. That's it!
To see how this works in practice, consider the following example. Assume you are given a test that consists of 10 factual questions of equal difficulty. What we want to estimate is your ability, which we define as the rate θ with which you answer questions correctly. We cannot directly observe your ability θ. All that we can observe is your score on the test.
Before we do anything else (for example, before we start to look at your data) we need to specify our prior uncertainty with respect to your ability θ. This uncertainty needs to be expressed as a probability distribution, called the prior distribution. In this case, keep in mind that θ can range from 0 to 1, and that we do not know anything about your familiarity with the topic or about the difficulty level of the questions. Then, a reasonable prior distribution, denoted p(θ), is one that assigns equal probability to every value of θ. This uniform distribution is shown by the dotted horizontal line in Figure 1.1.
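This uniform prior can be written as a Beta(1, 1) distribution, whose density equals 1 for every value of θ in [0, 1]. A minimal sketch (Python with scipy is my choice for illustration; the chapter itself names no language):

```python
import numpy as np
from scipy.stats import beta

# A uniform prior on the rate theta is a Beta(1, 1) distribution:
# its density is constant (equal to 1) across the whole [0, 1] range.
prior = beta(1, 1)

thetas = np.array([0.1, 0.5, 0.9])
print(prior.pdf(thetas))  # density is 1.0 at every value of theta
```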
Now we consider your performance, and find that you answered 9 out of 10 questions correctly. After having seen these data, the updated knowledge about θ is described by the posterior distribution, denoted p(θ | D), where D indicates the observed data.
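Because the binomial likelihood is conjugate to the Beta prior, the posterior in this example is Beta(1 + 9, 1 + 1) = Beta(10, 2). A short grid approximation makes the prior-times-likelihood arithmetic explicit and checks it against that exact result (a sketch; Python and the numpy/scipy calls are my choices, not the chapter's):

```python
import numpy as np
from scipy.stats import binom, beta

n, k = 10, 9                          # 9 of 10 questions answered correctly
grid = np.linspace(0, 1, 1001)        # candidate values for the rate theta
dx = grid[1] - grid[0]

prior = np.ones_like(grid)                # uniform prior p(theta)
likelihood = binom.pmf(k, n, grid)        # p(D | theta) at each candidate theta
unnorm = prior * likelihood               # proportional to the posterior
posterior = unnorm / (unnorm.sum() * dx)  # normalize so it integrates to 1

# Conjugacy gives the exact answer: Beta(1 + k, 1 + n - k) = Beta(10, 2).
exact = beta(1 + k, 1 + n - k).pdf(grid)
print(np.max(np.abs(posterior - exact)))  # approximation error is tiny
```

The posterior is concentrated near θ = 0.9, reflecting both the uniform prior and the fact that 9 of 10 answers were correct.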
- Bayesian Cognitive Modeling: A Practical Course, pp. 3-15. Publisher: Cambridge University Press. Print publication year: 2014.