
Simulating power of economic experiments: the powerBBK package

Published online by Cambridge University Press:  01 January 2025

Charles Bellemare*
Affiliation:
Department of Economics, Laval University, Pavillon J.A.DeSève, Québec, Québec G1V 0A6, Canada
Luc Bissonnette*
Affiliation:
Department of Economics, Laval University, Pavillon J.A.DeSève, Québec, Québec G1V 0A6, Canada
Sabine Kröger*
Affiliation:
Department of Economics, Laval University, Pavillon J.A.DeSève, Québec, Québec G1V 0A6, Canada

Abstract

In this article, we show how simulation methods can be used to analyze the power of economic experiments. We provide the powerBBK package, programmed for experimental economists, which performs these simulations in Stata. Power can be simulated with a single command line for various statistical tests (nonparametric and parametric), estimation methods (linear, binary, and censored regression models), treatment variables (binary, continuous, time-invariant, or time-varying), sample sizes, numbers of experimental periods, and other design features (within- or between-subjects designs). The package can be used to predict the minimum sample size required to reach a user-specified level of power, to maximize the power of a design under a researcher-supplied budget constraint, or to compute the power to detect a user-specified treatment order effect in within-subjects designs. It can also compute the probability of sign errors: the probability of rejecting the null hypothesis in the wrong direction, as well as the share of rejections pointing in the wrong direction. The powerBBK package is provided as an .ado file along with a help file, both of which can be downloaded at http://www.bbktools.org.
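The core idea behind simulation-based power analysis is simple: repeatedly draw artificial data under an assumed effect size, run the planned test on each draw, and report the share of draws in which the null is rejected. The sketch below illustrates this logic for a two-sample z-test with known variance; it is a minimal illustration of the general approach, not powerBBK syntax, and the function name and parameters are hypothetical.

```python
import random
import statistics

def simulate_power(effect, sd, n_per_arm, n_sims=2000, z_crit=1.96, seed=1):
    """Monte Carlo power of a two-sample z-test for a binary treatment.

    Illustrative only: draws simulated experiments under the assumed
    effect size, tests each one, and returns the rejection rate.
    """
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_sims):
        control = [rng.gauss(0.0, sd) for _ in range(n_per_arm)]
        treated = [rng.gauss(effect, sd) for _ in range(n_per_arm)]
        diff = statistics.fmean(treated) - statistics.fmean(control)
        se = (2.0 * sd**2 / n_per_arm) ** 0.5  # known-variance standard error
        if abs(diff / se) > z_crit:
            rejections += 1
    return rejections / n_sims

# Power rises with the sample size for a fixed effect of 0.5 SD:
for n in (20, 60, 120):
    print(n, round(simulate_power(effect=0.5, sd=1.0, n_per_arm=n), 3))
```

Inverting this function over a grid of sample sizes gives the minimum-sample-size calculations mentioned above; replacing the z-test with richer estimators and designs is what the package automates.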

Type
Experimental Tools
Copyright
Copyright © Economic Science Association 2016


Footnotes

Part of the paper was written at the Institute of Finance at the School of Business and Economics at Humboldt Universität zu Berlin and at the Department of Economics at Zurich University. We thank both institutions for their hospitality. We thank Nicolas Couët for his valuable research assistance. We are grateful to participants at the ASFEE conference in Montpellier (2012), ESA meeting in New York (2012), the IMEBE in Madrid (2013), and seminar participants at the Department of Economics at Zurich University (2013) and at Technische Universität Berlin (2013). Finally, we thank reviewers and co-editors for extremely helpful comments and suggestions.
