
Averaging Dependent Effect Sizes in Meta-Analysis: A Cautionary Note about Procedures

Published online by Cambridge University Press: 10 April 2014

Fulgencio Marín-Martínez* (University of Murcia)
Julio Sánchez-Meca (University of Murcia)

*Correspondence concerning this article should be addressed to Dr. Fulgencio Marín-Martínez, Departamento de Psicología Básica y Metodología, Facultad de Psicología, Universidad de Murcia, Campus de Espinardo, Apdo. 4021, 30080 Murcia (Spain). E-mail: [email protected]

Abstract

When a primary study includes several indicators of the same construct, the usual strategy for integrating the multiple effect sizes meta-analytically is to average them within the study. In this paper, the numerical and conceptual differences among three procedures for averaging dependent effect sizes are shown. The procedures are the simple arithmetic mean, the Hedges and Olkin (1985) procedure, and the Rosenthal and Rubin (1986) procedure. Whereas the simple arithmetic mean ignores the dependence among effect sizes, the Hedges and Olkin and the Rosenthal and Rubin procedures both take the correlational structure of the effect sizes into account, although in different ways. Rosenthal and Rubin's procedure provides the effect size for a single composite variable formed from the multiple indicators, whereas Hedges and Olkin's procedure provides an effect size estimate for the standard variable. The three procedures were applied to 54 conditions in which the magnitude and homogeneity of both the effect sizes and the correlation matrix among the effect sizes were manipulated. Rosenthal and Rubin's procedure yielded the highest estimates, followed by the simple mean, with the Hedges and Olkin procedure yielding the lowest. These differences are not trivial in a meta-analysis, and the aims of the meta-analysis should guide the selection of a procedure.
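To make the conceptual contrast concrete, the three averages can be sketched numerically. The following Python sketch is ours rather than the authors' code, and it rests on stated assumptions: each indicator is standardized to unit variance, and the Hedges and Olkin (1985) procedure is approximated by a generalized least-squares average in which the covariance matrix of the effect size estimates is taken to be proportional to the correlation matrix R among the indicators (Gleser & Olkin, 1994, give the exact covariances). The Rosenthal and Rubin (1986) composite divides the sum of the effect sizes by the standard deviation of the summed (composite) variable.

import numpy as np

def simple_mean(d):
    # Unweighted arithmetic mean; ignores the dependence
    # among the effect sizes entirely.
    return float(np.mean(d))

def hedges_olkin_gls(d, R):
    # Generalized least-squares average of the dependent effect sizes,
    # under the simplifying assumption Cov(d_j, d_k) proportional to R[j, k].
    # Interpretable as the effect on a single "standard" indicator.
    d = np.asarray(d, dtype=float)
    ones = np.ones(len(d))
    w = np.linalg.solve(R, ones)          # w = R^{-1} 1
    return float(w @ d) / float(w @ ones)

def rosenthal_rubin_composite(d, R):
    # Effect size of the composite (sum) of the indicators: the summed
    # mean differences divided by the SD of the sum, which is the square
    # root of the sum of all entries of R when each indicator has unit
    # variance (Rosenthal & Rubin, 1986).
    d = np.asarray(d, dtype=float)
    return float(d.sum()) / float(np.sqrt(np.asarray(R).sum()))

# Hypothetical study: three indicators of one construct, r = .60 throughout.
d = [0.40, 0.50, 0.60]
R = np.array([[1.0, 0.6, 0.6],
              [0.6, 1.0, 0.6],
              [0.6, 0.6, 1.0]])

print(simple_mean(d))                   # 0.50
print(hedges_olkin_gls(d, R))           # 0.50 (coincides with the mean here)
print(rosenthal_rubin_composite(d, R))  # ~0.58

In this equicorrelated toy case the GLS average coincides with the simple mean, whereas across the paper's 54 manipulated conditions the Hedges and Olkin procedure produced the lowest estimates. For positive effects, the composite effect size is always at least as large as the simple mean, because the composite's standard deviation grows more slowly than the sum of the mean differences unless the indicators correlate perfectly.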


Type: Articles
Copyright: © Cambridge University Press 1999


References

Abrami, P.C., Cohen, P.A., & d'Apollonia, S. (1988). Implementation problems in meta-analysis. Review of Educational Research, 58, 151–179.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.
Cooper, H.M. (1989). Integrating research: A guide for literature reviews (2nd ed.). Beverly Hills, CA: Sage.
GAUSS (1992). The GAUSS System (Vers. 3.0). Washington: Aptech Systems, Inc.
Glass, G.V., McGaw, B., & Smith, M.L. (1981). Meta-analysis in social research. Beverly Hills, CA: Sage.
Gleser, L.J., & Olkin, I. (1994). Stochastically dependent effect sizes. In Cooper, H.M. & Hedges, L.V. (Eds.), The handbook of research synthesis (pp. 339–355). New York: Sage.
Hedges, L.V., & Olkin, I. (1985). Statistical methods for meta-analysis. Orlando, FL: Academic Press.
Hunter, J.E., & Schmidt, F.L. (1990). Methods of meta-analysis: Correcting error and bias in research findings. Beverly Hills, CA: Sage.
Johnson, B.T., & Eagly, A.H. (in press). Quantitative synthesis of social psychological research. In Reis, H.T. & Judd, C.M. (Eds.), The handbook of research methods in social psychology. London: Cambridge University Press.
Kalaian, H.A., & Raudenbush, S.W. (1996). A multivariate mixed linear model for meta-analysis. Psychological Methods, 1, 225–235.
Marascuilo, L.A., Busk, P.L., & Serlin, R.C. (1988). Large sample multivariate procedures for comparing and combining effect sizes within a single study. Journal of Experimental Education, 58, 69–85.
Marín-Martínez, F., & Sánchez-Meca, J. (1998). Testing for dichotomous moderators in meta-analysis. Journal of Experimental Education, 67, 69–81.
Matt, G.E. (1989). Decision rules for selecting effect sizes in meta-analysis: A review and reanalysis of psychotherapy outcome studies. Psychological Bulletin, 105, 106–115.
Matt, G.E., & Cook, T.D. (1994). Threats to the validity of research syntheses. In Cooper, H.M. & Hedges, L.V. (Eds.), The handbook of research synthesis (pp. 503–520). New York: Sage.
Raudenbush, S.W., Becker, B.J., & Kalaian, H. (1988). Modeling multivariate effect sizes. Psychological Bulletin, 103, 111–120.
Rosenthal, R. (1991). Meta-analytic procedures for social research (rev. ed.). Newbury Park, CA: Sage.
Rosenthal, R. (1994). Parametric measures of effect size. In Cooper, H.M. & Hedges, L.V. (Eds.), The handbook of research synthesis (pp. 231–244). New York: Sage.
Rosenthal, R. (1995). Writing meta-analytic reviews. Psychological Bulletin, 118, 183–192.
Rosenthal, R., & Rubin, D.B. (1986). Meta-analytic procedures for combining studies with multiple effect sizes. Psychological Bulletin, 99, 400–406.
Sánchez-Meca, J., & Ato, M. (1989). Meta-análisis: Una alternativa metodológica a las revisiones tradicionales de la investigación. In Arnau, J. & Carpintero, H. (Eds.), Tratado de psicología general. I: Historia, teoría y método (pp. 617–669). Madrid: Alhambra.
Sánchez-Meca, J., & Marín-Martínez, F. (1997). Homogeneity tests in meta-analysis: A Monte Carlo comparison of statistical power and Type I error. Quality and Quantity, 31, 385–399.
Sánchez-Meca, J., & Marín-Martínez, F. (1998a). Weighting by inverse-variance or by sample size in meta-analysis: A simulation study. Educational and Psychological Measurement, 58, 211–220.
Sánchez-Meca, J., & Marín-Martínez, F. (1998b). Testing continuous moderators in meta-analysis: A comparison of procedures. British Journal of Mathematical and Statistical Psychology, 51, 311–326.