Published online by Cambridge University Press: 25 April 2022
Formal and social epistemologists have devoted significant attention to the question of how to aggregate the credences of a group of agents who disagree about the probabilities of events. Moss (2011) and Pettigrew (2019) argue that group credences can be a linear mean of the credences of each individual in the group. By contrast, I argue that if the epistemic value of a credence function is determined solely by its accuracy, then we should, where possible, aggregate the underlying statistical models that individuals use to generate their credence functions, using “stacking” techniques from statistics and machine learning first developed by Wolpert (1992).
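To make the contrast concrete, here is a minimal sketch of the two aggregation strategies the abstract mentions. All numbers are invented for illustration: `cred_a` and `cred_b` stand in for two agents' credence functions over five events, and the stacking step is reduced to learning a single convex pooling weight by held-out accuracy (Brier score), a much simpler setup than the model-level stacking of Wolpert (1992) discussed in the paper.

```python
import numpy as np

# Hypothetical credences of two agents over five events, plus observed
# outcomes (1 = event occurred, 0 = it did not). All values are invented.
cred_a = np.array([0.9, 0.8, 0.3, 0.7, 0.2])
cred_b = np.array([0.6, 0.9, 0.1, 0.4, 0.3])
outcomes = np.array([1, 1, 0, 1, 0])

# Linear pooling: the group credence is a fixed convex combination of the
# individual credences (here, equal weights).
linear_pool = 0.5 * cred_a + 0.5 * cred_b

def brier(p, y):
    """Brier score: mean squared error of probabilistic forecasts."""
    return np.mean((p - y) ** 2)

# Stacking-style sketch: instead of fixing the weights in advance, choose
# the convex weight that minimizes held-out Brier score via grid search.
weights = np.linspace(0.0, 1.0, 101)
scores = [brier(w * cred_a + (1 - w) * cred_b, outcomes) for w in weights]
best_w = weights[int(np.argmin(scores))]
stacked = best_w * cred_a + (1 - best_w) * cred_b
```

Because the equal-weight pool is itself a candidate in the grid search, the stacked combination can never score worse than the fixed linear pool on the data used to fit the weight; the question the paper pursues is when this accuracy-driven approach is epistemically preferable.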
Many thanks to David Wolpert for first introducing me to the literature on stacking. For their helpful comments on various drafts, I am also grateful to Hein Duijf, Remco Heesen, James Nguyen, Richard Pettigrew, Joe Roussos, Jeremy Strasser, David Watson, Kevin Zollman, and two anonymous reviewers for this journal, as well as audiences at the London School of Economics and Political Science (LSE) Choice Group Seminar, the 2020 Formal Epistemology Workshop, the 2020 Conference on Bayesian Epistemology: Perspectives and Challenges at the Munich Center for Mathematical Philosophy, and the 2020 Workshop on the Wisdom and Madness of Crowds at the Institute for Logic, Language, and Computation at the University of Amsterdam.