
Criteria for Accrediting Expert Wine Judges*

Published online by Cambridge University Press:  30 September 2013

Robert Hodgson
Affiliation:
Professor Emeritus, Humboldt State University, 1 Harpst Street, Arcata, CA 95521, e-mail: [email protected]
Jing Cao
Affiliation:
Associate Professor, Southern Methodist University, 6425 Boaz Street, Dallas, TX 75275, e-mail: [email protected]

Abstract

A test for evaluating wine judge performance is developed. The test is based on the premise that an expert wine judge will award similar scores to an identical wine. The definition of “similar” is parameterized to include varying numbers of adjacent awards on an ordinal scale, from No Award to Gold. For each index of similarity, a probability distribution is developed to determine the likelihood that a judge might pass the test by chance alone. When the test is applied to the results from a major wine competition, few judges pass the test. Of greater interest is that many judges who fail the test have vast professional experience in the wine industry. This leads us to question the basic premise that experts are able to provide consistent evaluations in wine competitions and, hence, to doubt that wine competitions provide reliable recommendations of wine quality. (JEL Classifications: C02, C12, D81)
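The chance-alone baseline described above can be illustrated with a small sketch. This is not the paper's actual calculation; it assumes a simplified four-level award scale, three blind replicates of the same wine, and uniform random awards, and it computes the exact probability that all of a judge's replicate scores fall within a window of `max_spread` adjacent awards (the "index of similarity").

```python
from itertools import product
from fractions import Fraction

# Hypothetical simplified scale; the competition's actual scale is finer.
SCALE = ["No Award", "Bronze", "Silver", "Gold"]

def pass_by_chance(n_replicates: int, max_spread: int,
                   n_levels: int = len(SCALE)) -> Fraction:
    """Exact probability that n uniform random awards span at most
    `max_spread` adjacent levels, i.e. that a judge with no ability
    at all would 'pass' the similarity test by luck alone."""
    outcomes = list(product(range(n_levels), repeat=n_replicates))
    passes = sum(1 for o in outcomes if max(o) - min(o) <= max_spread)
    return Fraction(passes, len(outcomes))

# Looser definitions of "similar" are easier to pass by chance:
for spread in range(len(SCALE)):
    print(spread, pass_by_chance(3, spread))
```

Under these toy assumptions, requiring identical awards (spread 0) is passed by chance only 1/16 of the time, while allowing any spread is passed trivially; a judge is credited as consistent only if her observed agreement beats the relevant baseline.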

Type
Articles
Copyright
Copyright © American Association of Wine Economists 2013 


Footnotes

*

The Advisory Board that oversees the conduct of the California State Fair Commercial Wine Competition deserves acknowledgment for sustaining this study to evaluate judge performance over the past decade. It is the only study of its kind, and the board should be commended for allowing the results to be offered in the public domain. In particular, G.M. “Pooch” Pucilowski, chief judge, should be commended for initiating and supporting this work. Analysis of the data would not have been possible without the help of Aaron Kidder, chief programmer for the competition, for implementing replicate sampling into the flights and supplying a concise format for data analysis. Finally, the authors thank the anonymous reviewers for helpful suggestions that improved this paper.
