Book contents
- Frontmatter
- Contents
- List of contributors
- Preface
- 1 Introduction: Bayesian decision theory – foundations and problems
- Part I Foundations of Bayesian decision theory
- Part II Conceptualization of probability and utility
- 6 Bets and beliefs
- 7 Slightly more realistic personal probability
- 8 Risk aversion as a problem of conjoint measurement
- Part III Questionable rules of rationality
- Part IV Unreliable probabilities
- Part V Causal decision theory
- References
- Name index
- Subject index
7 - Slightly more realistic personal probability
Published online by Cambridge University Press: 05 June 2012
Summary
A person required to risk money on a remote digit of π would, in order to comply fully with the theory [of personal probability] have to compute that digit, though this would really be wasteful if the cost of computation were more than the prize involved. For the postulates of the theory imply that you should behave in accordance with the logical implications of all that you know. Is it possible to improve the theory in this respect, making allowance within it for the cost of thinking, or would that entail paradox?
Like each of Professor Savage's difficulties in the theory of personal probability, his problem about the remote digit of π is entirely general. It concerns logical consequence as much as logical truth: his theory implies that if e entails h you should be as confident of h as of e. His own example is one of three distinct cases which militate against this part of his theory. In his example there is a known algorithm for working out the relevant logical implications, but it is too costly for sensible use. A second case arises when there is no known algorithm for finding out whether the hypothesis h follows from the evidence e. Perhaps there are two subcases: in the first, the algorithm is not known to anyone; in the second, it is not accessible to the person who is making decisions.
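Savage's first case can be made concrete. There are known algorithms for a remote decimal digit of π, but their cost grows with the position of the digit, so the bettor's computation can easily cost more than the prize. A minimal sketch (not from the text; the function names are illustrative) using Machin's formula with Python's arbitrary-precision integers:

```python
def arccot(x, unity):
    """arccot(x) scaled by `unity`, via the alternating Taylor series
    arccot(x) = 1/x - 1/(3x^3) + 1/(5x^5) - ..., in integer arithmetic."""
    total = xpower = unity // x
    xsq = x * x
    n = 3
    sign = -1
    while True:
        xpower //= xsq          # next power of 1/x, truncated
        term = xpower // n
        if term == 0:           # series has converged at this precision
            break
        total += sign * term
        sign = -sign
        n += 2
    return total


def pi_digits(digits, guard=10):
    """Integer 314159... carrying `digits` decimals of pi, by Machin's
    formula pi/4 = 4*arccot(5) - arccot(239).
    `guard` extra digits absorb truncation error from integer division."""
    unity = 10 ** (digits + guard)
    pi = 4 * (4 * arccot(5, unity) - arccot(239, unity))
    return pi // 10 ** guard
```

Here `pi_digits(20)` returns the integer 314159265358979323846, i.e. 3 followed by the first twenty decimals. The point of the sketch is the cost profile: to read off one digit far out, every term of the series must be carried at that full precision, so the work scales with the digit's position, which is exactly what makes compliance with the theory "wasteful" when the prize is small.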
- Type: Chapter
- Information: Decision, Probability and Utility: Selected Readings, pp. 118–135. Publisher: Cambridge University Press. Print publication year: 1988.