
A Pragmatic Guideline for Evaluation of Social Intervention

Published online by Cambridge University Press:  10 April 2014

Isabel María Herrera Sánchez*, José María León Rubio, and Silvia Medina Anzano
University of Seville

*Correspondence should be addressed to: Isabel María Herrera Sánchez, Departamento de Psicología Social, Universidad de Sevilla, Camilo José Cela s/n, 41018 Sevilla (Spain). Phone: 95455988. E-mail: [email protected]

Abstract

Considering the many theoretical and methodological viewpoints in the field of evaluation, a guideline was established to facilitate the evaluation of social intervention programs. For this purpose, the goals of the evaluation were taken into account: (a) planning interventions, (b) learning and continuous improvement of interventions, (c) programming policies, and (d) transformation of society. These goals determine both the perspective of analysis selected (focusing on the evaluand or on its context) and the strategies for change employed (focusing on processes or on results). The elements that, according to Shadish, Cook, and Leviton (1991), constitute the theory of evaluation (evaluand, value, building of knowledge, and uses) were also considered. The analysis of all these components led to a guideline that orients the practice of evaluation from a pragmatic perspective, in accordance with the demands and needs of a given social context.

Type: Articles

Copyright © Cambridge University Press 2005


References

Álvaro, J.L. (1995). Psicología social: perspectivas teóricas y metodológicas. Madrid: Siglo XXI.
Alvira, F. (1983). Perspectiva cuantitativa-cualitativa en la metodología sociológica. Revista Española de Investigaciones Sociológicas, 22, 53-75.
Bickman, L. (2000). Summing up program theory. New Directions for Evaluation, 87, 103-111.
Birckmayer, J.D., & Weiss, C.H. (2000). Theory-based evaluation in practice. Evaluation Review, 24, 407-431.
Campbell, D.T. (1988). Methodology and epistemology for social science: Selected papers. Chicago: University of Chicago Press.
Chelimsky, E., & Shadish, W.R. (Eds.) (1997). Evaluation for the 21st century: A handbook. Thousand Oaks, CA: Sage.
Chen, H.T. (1990). Theory-driven evaluations. Newbury Park, CA: Sage.
Cook, T.D. (1997). Lessons learned in evaluation over the past 25 years. In Chelimsky, E., & Shadish, W. (Eds.), Evaluation for the 21st century: A handbook (pp. 30-52). Thousand Oaks, CA: Sage.
Cook, T.D. (2000). The false choice between theory-based evaluation and experimentation. New Directions for Evaluation, 87, 27-34.
Cronbach, L.J. (1982). Designing evaluations of educational and social programs. San Francisco: Jossey-Bass.
Donaldson, S.I. (2003). Theory-driven program evaluation in the new millennium. In Donaldson, S.I., & Scriven, E.M. (Eds.), Evaluating social programs and problems: Visions for the new millennium (pp. 109-141). Mahwah, NJ: Erlbaum.
Fetterman, D.M. (2001). Foundations of empowerment evaluation. Thousand Oaks, CA: Sage.
Guba, E.G., & Lincoln, Y.S. (1989). Fourth generation evaluation. Newbury Park, CA: Sage.
House, E.R. (1980). Evaluating with validity. Beverly Hills, CA: Sage.
House, E.R. (2001). Responsive evaluation (and its influence on deliberative democratic evaluation). New Directions for Evaluation, 92, 23-30.
House, E.R., & Howe, K.R. (2000). Deliberative democratic evaluation. New Directions for Evaluation, 85, 3-12.
Julnes, G., & Mark, M.M. (1998). Evaluation as sensemaking: Knowledge construction in a realist world. New Directions for Evaluation, 78, 33-52.
Kalafat, J., & Illback, R. (1998). A qualitative evaluation of school-based family resource and youth service centers. American Journal of Community Psychology, 26, 573-604.
Kirkhart, K.E. (2000). Reconceptualizing evaluation use: An integrated theory of influence. New Directions for Evaluation, 88, 5-23.
Leviton, L.C., & Hughes, E.F. (1981). Research on the utilization of evaluations: A review and synthesis. Evaluation Review, 5, 525-548.
Lipsey, M.W. (1997). What can you build with thousands of bricks? Musings on the accumulation of knowledge in program evaluation. New Directions for Evaluation, 76, 7-23.
Lipsey, M.W., & Cordray, D.S. (2000). Evaluation methods for social intervention. Annual Review of Psychology, 51, 345-375.
Lipsey, M.W., & Wilson, D.B. (1993). The efficacy of psychological, educational and behavioral treatment: Confirmation from meta-analysis. American Psychologist, 48, 1181-1209.
Mark, M.M. (2001). Evaluation's future: Furor, futile, or fertile? American Journal of Evaluation, 22, 457-479.
Mark, M.M., Feller, I., & Button, S. (1997). Integrating qualitative methods in a predominantly quantitative evaluation: A case study and some reflections. New Directions for Evaluation, 74, 47-59.
Mertens, D.M. (2003). The inclusive view of evaluation: Visions for the new millennium. In Donaldson, S.I., & Scriven, E.M. (Eds.), Evaluating social programs and problems: Visions for the new millennium (pp. 91-107). Mahwah, NJ: Erlbaum.
Patton, M.Q. (1994). Developmental evaluation. Evaluation Practice, 15, 311-319.
Patton, M.Q. (1997). Utilization-focused evaluation: The new century text (3rd ed.). Thousand Oaks, CA: Sage.
Patton, M.Q. (2001). Evaluation, knowledge management, best practices, and high quality lessons learned. American Journal of Evaluation, 22, 329-336.
Rebolloso, E., & Rebolloso, J.R. (1998). Significado y desarrollo actual de la evaluación de programas. In Rebolloso, E. (Ed.), Evaluación de programas: ámbitos de aplicación (pp. 9-29). Barcelona: Textos Universitarios “Saint Jordi.”
Reichardt, C.S., & Cook, T.D. (1979). Beyond qualitative versus quantitative methods. In Cook, T.D., & Reichardt, C.S. (Eds.), Qualitative and quantitative methods in evaluation research (pp. 7-32). Beverly Hills, CA: Sage.
Rossi, P.H., Freeman, H.E., & Lipsey, M.W. (1999). Evaluation: A systematic approach (6th ed.). Thousand Oaks, CA: Sage.
Sanders, J.R. (1997). Cluster evaluation. In Chelimsky, E., & Shadish, W. (Eds.), Evaluation for the 21st century: A handbook (pp. 396-404). Thousand Oaks, CA: Sage.
Schwandt, T.A. (1997). The landscape of values in evaluation: Charted terrain and unexplored territory. New Directions for Evaluation, 76, 25-37.
Scriven, M. (1980). The logic of evaluation. Inverness, CA: Edgepress.
Scriven, M. (1994). The final synthesis. Evaluation Practice, 15, 367-382.
Shadish, W.R. (1987). Program micro- and macrotheories: A guide for social change. New Directions for Program Evaluation, 33, 93-109.
Shadish, W.R., Cook, T.D., & Leviton, L.C. (1991). Foundations of program evaluation: Theories of practice. Newbury Park, CA: Sage.
Sielbeck-Bowen, K.A., Brisolara, S., Seigar, D., Tischler, C., & Whitmore, E. (2002). Exploring feminist evaluation: The ground from which we rise. New Directions for Evaluation, 96, 3-8.
Stake, R.E. (1983). Program evaluation, particularly responsive evaluation. In Madaus, G.F., Scriven, M.S., & Stufflebeam, D.L. (Eds.), Evaluation models: Viewpoints on educational and human services evaluation (pp. 287-310). Boston: Kluwer-Nijhoff.
Stake, R.E., & Migotsky, C. (1997). The evolving syntheses of program value. Evaluation Practice, 18, 89-103.
Stufflebeam, D.L. (2001). Evaluation models. New Directions for Evaluation, 89, 7-98.
Vedung, E. (1993). Modelos de evaluación. Revista de Servicios Sociales y Política Social, 30, 39-68.
Veney, J.E., & Kaluzny, A.D. (1991). Evaluation and decision making for health services programs (2nd ed.). Ann Arbor, MI: Health Administration Press.
Weiss, C.H. (1997). How can theory-based evaluation make greater headway? Evaluation Review, 21, 501-524.
Weiss, C.H. (1998). Evaluation (2nd ed.). Englewood Cliffs, NJ: Prentice Hall.
Weiss, C.H., & Bucuvalas, M.J. (1981). Truth tests and utility tests: Decision-makers' frames of reference for social science research. Evaluation Studies Review Annual, 6, 695-706.
Wholey, J.S. (1983). Evaluation and effective public management. Boston: Little, Brown.
Yeh, S.S. (2000). Building the knowledge base for improving educational and social programs through planned variation evaluations. American Journal of Evaluation, 21, 27-40.